Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2022/03/16 10:17:44 UTC
[dolphinscheduler-website] branch asf-site updated: Automated deployment: a3ea63e47b628deac012074d9af18e1ddbd14dca
This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2a69cff Automated deployment: a3ea63e47b628deac012074d9af18e1ddbd14dca
2a69cff is described below
commit 2a69cfff0b6f195ae63991db98a9b2aaf0da5480
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Wed Mar 16 10:17:38 2022 +0000
Automated deployment: a3ea63e47b628deac012074d9af18e1ddbd14dca
---
.../dev/user_doc/guide/installation/cluster.html | 20 +-
.../dev/user_doc/guide/installation/cluster.json | 2 +-
.../dev/user_doc/guide/installation/docker.html | 323 ++++++++++-----------
.../dev/user_doc/guide/installation/docker.json | 2 +-
.../dev/user_doc/guide/installation/hardware.html | 16 +-
.../dev/user_doc/guide/installation/hardware.json | 2 +-
.../user_doc/guide/installation/kubernetes.html | 170 +++++------
.../user_doc/guide/installation/kubernetes.json | 2 +-
.../guide/installation/pseudo-cluster.html | 50 ++--
.../guide/installation/pseudo-cluster.json | 2 +-
.../guide/installation/skywalking-agent.html | 10 +-
.../guide/installation/skywalking-agent.json | 2 +-
.../user_doc/guide/installation/standalone.html | 18 +-
.../user_doc/guide/installation/standalone.json | 2 +-
14 files changed, 310 insertions(+), 311 deletions(-)
diff --git a/en-us/docs/dev/user_doc/guide/installation/cluster.html b/en-us/docs/dev/user_doc/guide/installation/cluster.html
index 2ddd04b..f7ba7be 100644
--- a/en-us/docs/dev/user_doc/guide/installation/cluster.html
+++ b/en-us/docs/dev/user_doc/guide/installation/cluster.html
@@ -11,20 +11,20 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>Cluster deployment is to deploy the DolphinScheduler on multiple machines for running a large number of tasks in production.</p>
-<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href="standalone.md">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to using DolphinScheduler in production, we recommended you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">kubernetes</a></p>
-<h2>Deployment Step</h2>
-<p>Cluster deployment uses the same scripts and configuration files as we deploy in <a href="pseudo-cluster.md">pseudo-cluster deployment</a>, so the prepare and required are the same as pseudo-cluster deployment. The difference is that <a href="pseudo-cluster.md">pseudo-cluster deployment</a> is for one machine, while cluster deployment (Cluster) for multiple. and the steps of "Modify configuration" are quite different between pseudo-cluster deployment and cluster deployment.</p>
-<h3>Prepare and DolphinScheduler Startup Environment</h3>
-<p>Because of cluster deployment for multiple machine, so you have to run you "Prepare" and "startup" in every machine in <a href="pseudo-cluster.md">pseudo-cluster.md</a>, except section "Configure machine SSH password-free login", "Start ZooKeeper", "Initialize the database", which is only for deployment or just need an single server</p>
+<p>Cluster deployment is to deploy the DolphinScheduler on multiple machines for running massive tasks in production.</p>
+<p>If you are new to DolphinScheduler and want to experience its functions, we recommend you follow the <a href="standalone.md">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you follow the <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to deploy DolphinScheduler in production, we recommend you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">Kubernetes depl [...]
+<h2>Deployment Steps</h2>
+<p>Cluster deployment uses the same scripts and configuration files as <a href="pseudo-cluster.md">pseudo-cluster deployment</a>, so the preparation and deployment steps are the same as pseudo-cluster deployment. The difference is that <a href="pseudo-cluster.md">pseudo-cluster deployment</a> is for one machine, while cluster deployment (Cluster) is for multiple machines. And steps of "Modify Configuration" are quite different between pseudo-cluster deployment and cluster deplo [...]
+<h3>Prerequisites and DolphinScheduler Startup Environment Preparations</h3>
+<p>Configure every machine by referring to <a href="pseudo-cluster.md">pseudo-cluster deployment</a>, except for the sections <code>Prerequisites</code>, <code>Start ZooKeeper</code> and <code>Initialize the Database</code> of the <code>DolphinScheduler Startup Environment</code>, which only need to run on a single machine.</p>
<h3>Modify Configuration</h3>
-<p>This is a step that is quite different from <a href="pseudo-cluster.md">pseudo-cluster.md</a>, because the deployment script will transfer the resources required for installation machine to each deployment machine using <code>scp</code>. And we have to declare all machine we want to install DolphinScheduler and then run script <code>install.sh</code>. The configuration file is under the path <code>conf/config/install_config.conf</code>, here we only need to modify section <strong>INST [...]
+<p>This step differs significantly from <a href="pseudo-cluster.md">pseudo-cluster deployment</a>, because the deployment script transfers the resources required for installation to each deployment machine using <code>scp</code>. So we only need to modify the configuration on the machine that runs the <code>install.sh</code> script, and the configuration will be dispatched to the cluster by <code>scp</code>. The configuration file is under the path <code>conf/config/install_config.conf</code>, here we o [...]
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
<span class="hljs-meta">#</span><span class="bash"> INSTALL MACHINE</span>
<span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
-<span class="hljs-meta">#</span><span class="bash"> Using IP or machine hostname <span class="hljs-keyword">for</span> server going to deploy master, worker, API server, the IP of the server</span>
-<span class="hljs-meta">#</span><span class="bash"> If you using hostname, make sure machine could connect each others by hostname</span>
-<span class="hljs-meta">#</span><span class="bash"> As below, the hostname of the machine deploying DolphinScheduler is ds1, ds2, ds3, ds4, ds5, <span class="hljs-built_in">where</span> ds1, ds2 install master server, ds3, ds4, and ds5 installs worker server, the alert server is installed <span class="hljs-keyword">in</span> ds4, and the api server is installed <span class="hljs-keyword">in</span> ds5</span>
+<span class="hljs-meta">#</span><span class="bash"> Use the IP address or hostname of each server that will deploy the master, worker and API server</span>
+<span class="hljs-meta">#</span><span class="bash"> If you use hostnames, make sure the machines can reach each other by hostname</span>
+<span class="hljs-meta">#</span><span class="bash"> As below, the hostnames of the machines deploying DolphinScheduler are ds1, ds2, ds3, ds4 and ds5, where ds1 and ds2 install the master server, ds3, ds4 and ds5 install the worker server, the alert server is installed on ds4, and the API server is installed on ds5</span>
ips="ds1,ds2,ds3,ds4,ds5"
masters="ds1,ds2"
workers="ds3:default,ds4:default,ds5:default"
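# The example layout in the comment above also places the alert server on ds4
# and the API server on ds5; the corresponding entries (a sketch matching that
# example layout) would be:
alertServer="ds4"
apiServers="ds5"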
diff --git a/en-us/docs/dev/user_doc/guide/installation/cluster.json b/en-us/docs/dev/user_doc/guide/installation/cluster.json
index 3790c4c..0989ca0 100644
--- a/en-us/docs/dev/user_doc/guide/installation/cluster.json
+++ b/en-us/docs/dev/user_doc/guide/installation/cluster.json
@@ -1,6 +1,6 @@
{
"filename": "cluster.md",
- "__html": "<h1>Cluster Deployment</h1>\n<p>Cluster deployment is to deploy the DolphinScheduler on multiple machines for running a large number of tasks in production.</p>\n<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href=\"standalone.md\">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href=\"pseudo-cluster.md\">pseudo-cluster deployment</ [...]
+ "__html": "<h1>Cluster Deployment</h1>\n<p>Cluster deployment is to deploy the DolphinScheduler on multiple machines for running massive tasks in production.</p>\n<p>If you are a new hand and want to experience DolphinScheduler functions, we recommend you install follow <a href=\"standalone.md\">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you install follow <a href=\"pseudo-cluster.md\">pseudo-cluster deployment< [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/cluster.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/docker.html b/en-us/docs/dev/user_doc/guide/installation/docker.html
index 1cbf310..b4f2c41 100644
--- a/en-us/docs/dev/user_doc/guide/installation/docker.html
+++ b/en-us/docs/dev/user_doc/guide/installation/docker.html
@@ -13,31 +13,31 @@
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
<h2>Prerequisites</h2>
<ul>
-<li><a href="https://docs.docker.com/engine/install/">Docker</a> 1.13.1+</li>
-<li><a href="https://docs.docker.com/compose/">Docker Compose</a> 1.11.0+</li>
+<li><a href="https://docs.docker.com/engine/install/">Docker</a> version: 1.13.1+</li>
+<li><a href="https://docs.docker.com/compose/">Docker Compose</a> version: 1.11.0+</li>
</ul>
<h2>How to Use this Docker Image</h2>
-<p>Here're 3 ways to quickly install DolphinScheduler</p>
-<h3>The First Way: Start a DolphinScheduler by Docker Compose (Recommended)</h3>
-<p>In this way, you need to install <a href="https://docs.docker.com/compose/">docker-compose</a> as a prerequisite, please install it yourself according to the rich docker-compose installation guidance on the Internet</p>
-<p>For Windows 7-10, you can install <a href="https://github.com/docker/toolbox/releases">Docker Toolbox</a>. For Windows 10 64-bit, you can install <a href="https://docs.docker.com/docker-for-windows/install/">Docker Desktop</a>, and pay attention to the <a href="https://docs.docker.com/docker-for-windows/install/#system-requirements">system requirements</a></p>
+<p>Here are 3 ways to quickly install DolphinScheduler:</p>
+<h3>Start DolphinScheduler by Docker Compose (Recommended)</h3>
+<p>In this way, you need to install <a href="https://docs.docker.com/compose/">docker-compose</a> as a prerequisite; please install it by following the docker-compose installation guidance on the Internet.</p>
+<p>For Windows 7-10, you can install <a href="https://github.com/docker/toolbox/releases">Docker Toolbox</a>. For Windows 10 64-bit, you can install <a href="https://docs.docker.com/docker-for-windows/install/">Docker Desktop</a>, and meet the <a href="https://docs.docker.com/docker-for-windows/install/#system-requirements">system requirements</a>.</p>
<h4>Configure Memory not Less Than 4GB</h4>
-<p>For Mac user, click <code>Docker Desktop -> Preferences -> Resources -> Memory</code></p>
-<p>For Windows Docker Toolbox user, two items need to be configured:</p>
+<p>For Mac users, click <code>Docker Desktop -> Preferences -> Resources -> Memory</code>.</p>
+<p>For Windows Docker Toolbox users, configure the following two settings:</p>
<ul>
-<li><strong>Memory</strong>: Open Oracle VirtualBox Manager, if you double-click Docker Quickstart Terminal and successfully run Docker Toolbox, you will see a Virtual Machine named <code>default</code>. And click <code>Settings -> System -> Motherboard -> Base Memory</code></li>
-<li><strong>Port Forwarding</strong>: Click <code>Settings -> Network -> Advanced -> Port forwarding -> Add</code>. <code>Name</code>, <code>Host Port</code> and <code>Guest Port</code> all fill in <code>12345</code>, regardless of <code>Host IP</code> and <code>Guest IP</code></li>
+<li><strong>Memory</strong>: Open Oracle VirtualBox Manager. If you double-click <code>Docker Quickstart Terminal</code> and successfully run <code>Docker Toolbox</code>, you will see a Virtual Machine named <code>default</code>. Click <code>Settings -> System -> Motherboard -> Base Memory</code></li>
+<li><strong>Port Forwarding</strong>: Click <code>Settings -> Network -> Advanced -> Port Forwarding -> Add</code>. Fill the <code>Name</code>, <code>Host Port</code> and <code>Guest Port</code> fields with <code>12345</code>, regardless of <code>Host IP</code> and <code>Guest IP</code></li>
</ul>
<p>For Windows Docker Desktop users:</p>
<ul>
-<li><strong>Hyper-V mode</strong>: Click <code>Docker Desktop -> Settings -> Resources -> Memory</code></li>
-<li><strong>WSL 2 mode</strong>: Refer to <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig">WSL 2 utility VM</a></li>
+<li><strong>Hyper-V Mode</strong>: Click <code>Docker Desktop -> Settings -> Resources -> Memory</code></li>
+<li><strong>WSL 2 Mode</strong>: Refer to <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig">WSL 2 utility VM</a></li>
</ul>
<h4>Download the Source Code Package</h4>
-<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href="/en-us/download/download.html">download</a></p>
+<p>Please download the source code package <code>apache-dolphinscheduler-1.3.8-src.tar.gz</code> from the <a href="/en-us/download/download.html">download page</a>.</p>
<h4>Pull Image and Start the Service</h4>
<blockquote>
-<p>For Mac and Linux user, open <strong>Terminal</strong>
+<p>For Mac and Linux users, open <strong>Terminal</strong>
For Windows Docker Toolbox user, open <strong>Docker Quickstart Terminal</strong>
For Windows Docker Desktop user, open <strong>Windows PowerShell</strong></p>
</blockquote>
@@ -48,28 +48,28 @@ $ docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest
$ docker-compose up -d
</code></pre>
<blockquote>
-<p>PowerShell should use <code>cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm</code></p>
+<p>PowerShell should run <code>cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm</code></p>
</blockquote>
-<p>The <strong>PostgreSQL</strong> (with username <code>root</code>, password <code>root</code> and database <code>dolphinscheduler</code>) and <strong>ZooKeeper</strong> services will start by default</p>
+<p>The <strong>PostgreSQL</strong> (with username <code>root</code>, password <code>root</code> and database <code>dolphinscheduler</code>) and <strong>ZooKeeper</strong> services will start by default.</p>
<h4>Login</h4>
-<p>Visit the Web UI: <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a> (The local address is <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a>)</p>
-<p>The default username is <code>admin</code> and the default password is <code>dolphinscheduler123</code></p>
+<p>Visit the Web UI: <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a> (Modify the IP address if needed).</p>
+<p>The default username is <code>admin</code> and the default password is <code>dolphinscheduler123</code>.</p>
<p align="center">
<img src="/img/login_en.png" width="60%" />
</p>
-<p>Please refer to the <code>Quick Start</code> in the chapter <a href="../quick-start.md">Quick Start</a> to explore how to use DolphinScheduler</p>
-<h3>The Second Way: Start via Specifying the Existing PostgreSQL and ZooKeeper Service</h3>
-<p>In this way, you need to install <a href="https://docs.docker.com/engine/install/">docker</a> as a prerequisite, please install it yourself according to the rich docker installation guidance on the Internet</p>
+<p>Please refer to the <a href="../quick-start.md">Quick Start</a> to explore how to use DolphinScheduler.</p>
+<h3>Start via Existing PostgreSQL and ZooKeeper Service</h3>
+<p>In this way, you need to install <a href="https://docs.docker.com/engine/install/">docker</a> as a prerequisite; please install it by following the docker installation guidance on the Internet.</p>
<h4>Basic Required Software</h4>
<ul>
-<li><a href="https://www.postgresql.org/download/">PostgreSQL</a> (8.2.15+)</li>
-<li><a href="https://zookeeper.apache.org/releases.html">ZooKeeper</a> (3.4.6+)</li>
-<li><a href="https://docs.docker.com/engine/install/">Docker</a> (1.13.1+)</li>
+<li><a href="https://www.postgresql.org/download/">PostgreSQL</a> (version 8.2.15+)</li>
+<li><a href="https://zookeeper.apache.org/releases.html">ZooKeeper</a> (version 3.4.6+)</li>
+<li><a href="https://docs.docker.com/engine/install/">Docker</a> (version 1.13.1+)</li>
</ul>
-<h4>Please Login to the PostgreSQL Database and Create a Database Named <code>dolphinscheduler</code></h4>
+<h4>Log in to the PostgreSQL Database and Create a Database Named <code>dolphinscheduler</code></h4>
<h4>Initialize the Database, Import <code>sql/dolphinscheduler_postgre.sql</code> to Create Tables and Initial Data</h4>
<h4>Download the DolphinScheduler Image</h4>
-<p>We have already uploaded user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:</p>
+<p>We have already uploaded the user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:</p>
<pre><code>docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
</code></pre>
<h4>Run a DolphinScheduler Instance</h4>
@@ -80,18 +80,17 @@ $ docker-compose up -d
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 all
</code></pre>
-<p>Note: database username test and password test need to be replaced with your actual PostgreSQL username and password, 192.168.x.x need to be replaced with your relate PostgreSQL and ZooKeeper host IP</p>
+<p>Note: replace the database username <code>test</code> and password <code>test</code> with your actual PostgreSQL username and password, and replace <code>192.168.x.x</code> with the host IP of your PostgreSQL and ZooKeeper services.</p>
<h4>Login</h4>
<p>Same as above</p>
-<h3>The Third Way: Start a Standalone DolphinScheduler Server</h3>
-<p>The following services are automatically started when the container starts:</p>
+<h3>Start a Standalone DolphinScheduler Server</h3>
+<p>The following services automatically start when the container starts:</p>
<pre><code> MasterServer ----- master service
WorkerServer ----- worker service
ApiApplicationServer ----- api service
AlertServer ----- alert service
</code></pre>
-<p>If you just want to run part of the services in the DolphinScheduler</p>
-<p>You can start some services in DolphinScheduler by running the following commands.</p>
+<p>If you just want to run some of the DolphinScheduler services, you can start a single service by running the following commands.</p>
<ul>
<li>Start a <strong>master server</strong>, for example:</li>
</ul>
@@ -111,7 +110,7 @@ apache/dolphinscheduler:1.3.8 master-server
apache/dolphinscheduler:1.3.8 worker-server
</code></pre>
<ul>
-<li>Start a <strong>api server</strong>, For example:</li>
+<li>Start an <strong>api server</strong>, for example:</li>
</ul>
<pre><code>$ docker run -d --name dolphinscheduler-api \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
@@ -121,17 +120,17 @@ apache/dolphinscheduler:1.3.8 worker-server
apache/dolphinscheduler:1.3.8 api-server
</code></pre>
<ul>
-<li>Start a <strong>alert server</strong>, For example:</li>
+<li>Start an <strong>alert server</strong>, for example:</li>
</ul>
<pre><code>$ docker run -d --name dolphinscheduler-alert \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
apache/dolphinscheduler:1.3.8 alert-server
</code></pre>
-<p><strong>Note</strong>: You must be specify <code>DATABASE_HOST</code>, <code>DATABASE_PORT</code>, <code>DATABASE_DATABASE</code>, <code>DATABASE_USERNAME</code>, <code>DATABASE_PASSWORD</code>, <code>ZOOKEEPER_QUORUM</code> when start a standalone dolphinscheduler server.</p>
+<p><strong>Note</strong>: You must specify the environment variables <code>DATABASE_HOST</code>, <code>DATABASE_PORT</code>, <code>DATABASE_DATABASE</code>, <code>DATABASE_USERNAME</code>, <code>DATABASE_PASSWORD</code> and <code>ZOOKEEPER_QUORUM</code> when starting a standalone DolphinScheduler server.</p>
<h2>Environment Variables</h2>
-<p>The Docker container is configured through environment variables, and the <a href="#appendix-environment-variables">Appendix-Environment Variables</a> lists the configurable environment variables of the DolphinScheduler and their default values</p>
-<p>Especially, it can be configured through the environment variable configuration file <code>config.env.sh</code> in Docker Compose and Docker Swarm</p>
+<p>The Docker container is configured through environment variables, and the <a href="#appendix-environment-variables">Appendix-Environment Variables</a> lists the configurable environment variables of the DolphinScheduler and their default values.</p>
+<p>In particular, it can be configured through the environment variable configuration file <code>config.env.sh</code> in Docker Compose and Docker Swarm.</p>
<h2>Support Matrix</h2>
<table>
<thead>
@@ -309,7 +308,7 @@ docker-compose ps
<pre><code>docker-compose down -v
</code></pre>
<h3>How to View the Logs of a Container?</h3>
-<p>List all running containers:</p>
+<p>List all running containers:</p>
<pre><code>docker ps
docker ps --format "{{.Names}}" # only print names
</code></pre>
@@ -326,24 +325,24 @@ docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # show last 10 lines f
<pre><code>docker-compose up -d --scale dolphinscheduler-worker=3 dolphinscheduler-worker
</code></pre>
<h3>How to Deploy DolphinScheduler on Docker Swarm?</h3>
-<p>Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to <a href="https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/">create-swarm</a>)</p>
-<p>Start a stack named dolphinscheduler:</p>
+<p>Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to <a href="https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/">create-swarm</a>).</p>
+<p>Start a stack named <code>dolphinscheduler</code>:</p>
<pre><code>docker stack deploy -c docker-stack.yml dolphinscheduler
</code></pre>
-<p>List the services in the stack named dolphinscheduler:</p>
+<p>List the services in the stack named <code>dolphinscheduler</code>:</p>
<pre><code>docker stack services dolphinscheduler
</code></pre>
-<p>Stop and remove the stack named dolphinscheduler:</p>
+<p>Stop and remove the stack named <code>dolphinscheduler</code>:</p>
<pre><code>docker stack rm dolphinscheduler
</code></pre>
-<p>Remove the volumes of the stack named dolphinscheduler:</p>
+<p>Remove the volumes of the stack named <code>dolphinscheduler</code>:</p>
<pre><code>docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e "^dolphinscheduler")
</code></pre>
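The <code>--format</code>/<code>grep</code> pipeline above is plain text filtering, so its effect can be sketched without Docker (the volume names here are hypothetical stand-ins):

```shell
# Sketch: the grep pattern keeps only names that start with "dolphinscheduler",
# which is how the command above selects the stack's volumes for removal.
printf '%s\n' \
  dolphinscheduler_dolphinscheduler-postgresql \
  dolphinscheduler_dolphinscheduler-zookeeper \
  unrelated_volume \
| grep -e '^dolphinscheduler'
# dolphinscheduler_dolphinscheduler-postgresql
# dolphinscheduler_dolphinscheduler-zookeeper
```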
<h3>How to Scale Master and Worker on Docker Swarm?</h3>
-<p>Scale master of the stack named dolphinscheduler to 2 instances:</p>
+<p>Scale master of the stack named <code>dolphinscheduler</code> to 2 instances:</p>
<pre><code>docker service scale dolphinscheduler_dolphinscheduler-master=2
</code></pre>
-<p>Scale worker of the stack named dolphinscheduler to 3 instances:</p>
+<p>Scale worker of the stack named <code>dolphinscheduler</code> to 3 instances:</p>
<pre><code>docker service scale dolphinscheduler_dolphinscheduler-worker=3
</code></pre>
<h3>How to Build a Docker Image?</h3>
@@ -354,9 +353,9 @@ docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # show last 10 lines f
<p>In Windows, execute in cmd or PowerShell:</p>
<pre><code class="language-bat"><span class="hljs-function">C:\<span class="hljs-title">dolphinscheduler</span>-<span class="hljs-title">src</span>>.\<span class="hljs-title">docker</span>\<span class="hljs-title">build</span>\<span class="hljs-title">hooks</span>\<span class="hljs-title">build.bat</span>
</span></code></pre>
-<p>Please read <code>./docker/build/hooks/build</code> <code>./docker/build/hooks/build.bat</code> script files if you don't understand</p>
+<p>Please read the <code>./docker/build/hooks/build</code> and <code>./docker/build/hooks/build.bat</code> script files if anything is unclear.</p>
<h4>Build From the Binary Distribution (Not require Maven 3.3+ and JDK 1.8+)</h4>
-<p>Please download the binary distribution package apache-dolphinscheduler-1.3.8-bin.tar.gz, download address: <a href="/en-us/download/download.html">download</a>. And put apache-dolphinscheduler-1.3.8-bin.tar.gz into the <code>apache-dolphinscheduler-1.3.8-src/docker/build</code> directory, execute in Terminal or PowerShell:</p>
+<p>Please download the binary distribution package <code>apache-dolphinscheduler-1.3.8-bin.tar.gz</code> from the <a href="/en-us/download/download.html">download page</a>. Put <code>apache-dolphinscheduler-1.3.8-bin.tar.gz</code> into the <code>apache-dolphinscheduler-1.3.8-src/docker/build</code> directory and execute in Terminal or PowerShell:</p>
<pre><code>$ cd apache-dolphinscheduler-1.3.8-src/docker/build
$ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
</code></pre>
@@ -364,21 +363,21 @@ $ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
<p>PowerShell should use <code>cd apache-dolphinscheduler-1.3.8-src/docker/build</code></p>
</blockquote>
<h4>Build Multi-Platform Images</h4>
-<p>Currently support to build images including <code>linux/amd64</code> and <code>linux/arm64</code> platform architecture, requirements:</p>
+<p>Currently, building images for the <code>linux/amd64</code> and <code>linux/arm64</code> platform architectures is supported, with the following requirements:</p>
<ol>
<li>Support <a href="https://docs.docker.com/engine/reference/commandline/buildx/">docker buildx</a></li>
-<li>Own the push permission of <a href="https://hub.docker.com/r/apache/dolphinscheduler">https://hub.docker.com/r/apache/dolphinscheduler</a> (<strong>Be cautious</strong>: The build command will automatically push the multi-platform architecture images to the docker hub of apache/dolphinscheduler by default)</li>
+<li>Have push permission for <code>https://hub.docker.com/r/apache/dolphinscheduler</code> (<strong>Be cautious</strong>: The build command automatically pushes the multi-platform images to the <code>apache/dolphinscheduler</code> repository on Docker Hub by default)</li>
</ol>
<p>Execute:</p>
<pre><code class="language-bash">$ docker login <span class="hljs-comment"># login to push apache/dolphinscheduler</span>
-$ bash ./docker/build/hooks/build
+$ bash ./docker/build/hooks/build
</code></pre>
<h3>How to Add an Environment Variable for Docker?</h3>
-<p>If you would like to do additional initialization in an image derived from this one, add one or more environment variables under <code>/root/start-init-conf.sh</code>, and modify template files in <code>/opt/dolphinscheduler/conf/*.tpl</code>.</p>
-<p>For example, to add an environment variable <code>SECURITY_AUTHENTICATION_TYPE</code> in <code>/root/start-init-conf.sh</code>:</p>
+<p>If you would like to do additional initialization or add environment variables, you can add one or more environment variables in the script <code>/root/start-init-conf.sh</code>. If the change involves configuration, also modify the corresponding template files under <code>/opt/dolphinscheduler/conf/*.tpl</code>.</p>
+<p>For example, add an environment variable <code>SECURITY_AUTHENTICATION_TYPE</code> in <code>/root/start-init-conf.sh</code>:</p>
<pre><code>export SECURITY_AUTHENTICATION_TYPE=PASSWORD
</code></pre>
-<p>and to modify <code>application-api.properties.tpl</code> template file, add the <code>SECURITY_AUTHENTICATION_TYPE</code>:</p>
+<p>Add the <code>SECURITY_AUTHENTICATION_TYPE</code> to the template file <code>application-api.properties.tpl</code>:</p>
<pre><code>security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}
</code></pre>
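The substitution that <code>/root/start-init-conf.sh</code> performs can be sketched in plain shell. This assumes the script expands <code>${VAR}</code> placeholders through a shell heredoc; the file names below are hypothetical stand-ins, not the image's actual paths:

```shell
# Sketch (assumption): expand ${VAR} placeholders in a *.tpl file through a
# shell heredoc, the way a start-init-conf.sh style script generates configs.
export SECURITY_AUTHENTICATION_TYPE=PASSWORD

# A minimal stand-in for application-api.properties.tpl:
printf 'security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}\n' > demo.tpl

# Expand the template into the final properties file:
eval "cat <<EOF
$(cat demo.tpl)
EOF" > demo.properties

cat demo.properties
# security.authentication.type=PASSWORD
```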
<p><code>/root/start-init-conf.sh</code> will dynamically generate config file:</p>
@@ -397,7 +396,7 @@ EOF
</blockquote>
<ol>
<li>
-<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a></p>
+<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a>.</p>
</li>
<li>
<p>Create a new <code>Dockerfile</code> to add MySQL driver:</p>
@@ -412,20 +411,20 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
<pre><code>docker build -t apache/dolphinscheduler:mysql-driver .
</code></pre>
<ol start="4">
-<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
+<li>Modify all the <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<ol start="5">
<li>
-<p>Comment the <code>dolphinscheduler-postgresql</code> block in <code>docker-compose.yml</code></p>
+<p>Comment the <code>dolphinscheduler-postgresql</code> block in <code>docker-compose.yml</code>.</p>
</li>
<li>
-<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use an external MySQL database)</p>
+<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use an external MySQL database).</p>
</li>
<li>
-<p>Modify DATABASE environment variables in <code>config.env.sh</code></p>
+<p>Modify DATABASE environment variables in <code>config.env.sh</code>:</p>
</li>
</ol>
<pre><code>DATABASE_TYPE=mysql
@@ -441,7 +440,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
<p>If you have added the <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code>, just set <code>DATABASE_HOST</code> to <code>dolphinscheduler-mysql</code>.</p>
</blockquote>
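<p>Putting it together, the DATABASE block in <code>config.env.sh</code> might look like this (the host and credentials are placeholders; the driver class shown is the Connector/J 8 class, an assumption based on the 8.0.16 driver used above):</p>
<pre><code>DATABASE_TYPE=mysql
DATABASE_DRIVER=com.mysql.cj.jdbc.Driver
DATABASE_HOST=dolphinscheduler-mysql
DATABASE_PORT=3306
DATABASE_USERNAME=root
DATABASE_PASSWORD=root
DATABASE_DATABASE=dolphinscheduler
DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
</code></pre>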
<ol start="8">
-<li>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</li>
+<li>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</li>
</ol>
<h3>How to Support MySQL Datasource in <code>Datasource manage</code>?</h3>
<blockquote>
@@ -450,7 +449,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
</blockquote>
<ol>
<li>
-<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a></p>
+<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a>.</p>
</li>
<li>
<p>Create a new <code>Dockerfile</code> to add MySQL driver:</p>
@@ -465,17 +464,17 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
<pre><code>docker build -t apache/dolphinscheduler:mysql-driver .
</code></pre>
<ol start="4">
-<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
+<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<ol start="5">
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Add a MySQL datasource in <code>Datasource manage</code></p>
+<p>Add a MySQL datasource in <code>Datasource manage</code>.</p>
</li>
</ol>
<h3>How to Support Oracle Datasource in <code>Datasource manage</code>?</h3>
@@ -485,7 +484,7 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
</blockquote>
<ol>
<li>
-<p>Download the Oracle driver <a href="https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/">ojdbc8.jar</a> (such as <code>ojdbc8-19.9.0.0.jar</code>)</p>
+<p>Download the Oracle driver <a href="https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/">ojdbc8.jar</a> (such as <code>ojdbc8-19.9.0.0.jar</code>).</p>
</li>
<li>
<p>Create a new <code>Dockerfile</code> to add Oracle driver:</p>
@@ -500,17 +499,17 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
<pre><code>docker build -t apache/dolphinscheduler:oracle-driver .
</code></pre>
<ol start="4">
-<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:oracle-driver</code> in <code>docker-compose.yml</code></li>
+<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:oracle-driver</code> in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<ol start="5">
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Add an Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code>.</p>
</li>
</ol>
<h3>How to Support Python 2 pip and Custom requirements.txt?</h3>
@@ -524,7 +523,7 @@ RUN apt-get update && \
pip install --no-cache-dir -r /tmp/requirements.txt && \
rm -rf /var/lib/apt/lists/*
</code></pre>
-<p>The command will install the default <strong>pip 18.1</strong>. If you upgrade the pip, just add one line</p>
+<p>The command will install the default <strong>pip 18.1</strong>. If you need to upgrade pip, just add one more line:</p>
<pre><code> pip install --no-cache-dir -U pip && \
</code></pre>
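<p>Assembled from the fragments above, the full <code>Dockerfile</code> might look like this (the base image tag, the <code>COPY</code> of <code>requirements.txt</code>, and the <code>python-pip</code> package name are assumptions):</p>
<pre><code>FROM apache/dolphinscheduler:latest
COPY requirements.txt /tmp/requirements.txt
RUN apt-get update && \
    apt-get install -y --no-install-recommends python-pip && \
    pip install --no-cache-dir -U pip && \
    pip install --no-cache-dir -r /tmp/requirements.txt && \
    rm -rf /var/lib/apt/lists/*
</code></pre>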
<ol start="2">
@@ -533,17 +532,17 @@ RUN apt-get update && \
<pre><code>docker build -t apache/dolphinscheduler:pip .
</code></pre>
<ol start="3">
-<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:pip</code> in <code>docker-compose.yml</code></li>
+<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:pip</code> in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<ol start="4">
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Verify pip under a new Python task</p>
+<p>Verify pip under a new Python task.</p>
</li>
</ol>
<h3>How to Support Python 3?</h3>
@@ -555,7 +554,7 @@ RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*
</code></pre>
-<p>The command will install the default <strong>Python 3.7.3</strong>. If you also want to install <strong>pip3</strong>, just replace <code>python3</code> with <code>python3-pip</code> like</p>
+<p>The command will install the default <strong>Python 3.7.3</strong>. If you also want to install <strong>pip3</strong>, just replace <code>python3</code> with <code>python3-pip</code>:</p>
<pre><code> apt-get install -y --no-install-recommends python3-pip && \
</code></pre>
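<p>The resulting <code>Dockerfile</code> might look like this (the base image tag is an assumption):</p>
<pre><code>FROM apache/dolphinscheduler:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    rm -rf /var/lib/apt/lists/*
</code></pre>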
<ol start="2">
@@ -564,40 +563,40 @@ RUN apt-get update && \
<pre><code>docker build -t apache/dolphinscheduler:python3 .
</code></pre>
<ol start="3">
-<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:python3</code> in <code>docker-compose.yml</code></li>
+<li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:python3</code> in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<ol start="4">
<li>
-<p>Modify <code>PYTHON_HOME</code> to <code>/usr/bin/python3</code> in <code>config.env.sh</code></p>
+<p>Modify <code>PYTHON_HOME</code> to <code>/usr/bin/python3</code> in <code>config.env.sh</code>.</p>
</li>
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Verify Python 3 under a new Python task</p>
+<p>Verify Python 3 under a new Python task.</p>
</li>
</ol>
<h3>How to Support Hadoop, Spark, Flink, Hive or DataX?</h3>
<p>Take Spark 2.4.7 as an example:</p>
<ol>
<li>
-<p>Download the Spark 2.4.7 release binary <code>spark-2.4.7-bin-hadoop2.7.tgz</code></p>
+<p>Download the Spark 2.4.7 release binary <code>spark-2.4.7-bin-hadoop2.7.tgz</code>.</p>
</li>
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Copy the Spark 2.4.7 release binary into Docker container</p>
+<p>Copy the Spark 2.4.7 release binary into the Docker container.</p>
</li>
</ol>
<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
</code></pre>
-<p>Because the volume <code>dolphinscheduler-shared-local</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost</p>
+<p>Because the volume <code>dolphinscheduler-shared-local</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost.</p>
<ol start="4">
-<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
+<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists.</li>
</ol>
<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
<span class="hljs-built_in">cd</span> /opt/soft
@@ -606,15 +605,15 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
<span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
</code></pre>
-<p>The last command will print the Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well.</p>
<ol start="5">
-<li>Verify Spark under a Shell task</li>
+<li>Verify Spark under a Shell task.</li>
</ol>
<pre><code>$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
</code></pre>
-<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
+<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
<ol start="6">
-<li>Verify Spark under a Spark task</li>
+<li>Verify Spark under a Spark task.</li>
</ol>
<p>Upload the file <code>spark-examples_2.11-2.4.7.jar</code> to the resources first, and then create a Spark task with:</p>
<ul>
@@ -623,30 +622,30 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
<li>Main Package: <code>spark-examples_2.11-2.4.7.jar</code></li>
<li>Deploy Mode: <code>local</code></li>
</ul>
-<p>Similarly, check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
+<p>Similarly, check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
<ol start="7">
-<li>Verify Spark on YARN</li>
+<li>Verify Spark on YARN.</li>
</ol>
-<p>Spark on YARN (Deploy Mode is <code>cluster</code> or <code>client</code>) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps</p>
-<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> exists</p>
+<p>Spark on YARN (Deploy Mode is <code>cluster</code> or <code>client</code>) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps.</p>
+<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> exist.</p>
<h3>How to Support Spark 3?</h3>
-<p>In fact, the way to submit applications with <code>spark-submit</code> is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of <code>SPARK_HOME2</code> is the second <code>SPARK_HOME</code> instead of <code>SPARK2</code>'s <code>HOME</code>, so just set <code>SPARK_HOME2=/path/to/spark3</code></p>
+<p>In fact, the way to submit applications with <code>spark-submit</code> is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of <code>SPARK_HOME2</code> is the second <code>SPARK_HOME</code> instead of <code>SPARK2</code>'s <code>HOME</code>, so just set <code>SPARK_HOME2=/path/to/spark3</code>.</p>
<p>Take Spark 3.1.1 as an example:</p>
<ol>
<li>
-<p>Download the Spark 3.1.1 release binary <code>spark-3.1.1-bin-hadoop2.7.tgz</code></p>
+<p>Download the Spark 3.1.1 release binary <code>spark-3.1.1-bin-hadoop2.7.tgz</code>.</p>
</li>
<li>
-<p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
+<p>Run the DolphinScheduler (See <strong>How to use this docker image</strong>).</p>
</li>
<li>
-<p>Copy the Spark 3.1.1 release binary into Docker container</p>
+<p>Copy the Spark 3.1.1 release binary into the Docker container.</p>
</li>
</ol>
<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
</code></pre>
<ol start="4">
-<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
+<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists.</li>
</ol>
<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
<span class="hljs-built_in">cd</span> /opt/soft
@@ -655,23 +654,23 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
<span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
</code></pre>
-<p>The last command will print the Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well.</p>
<ol start="5">
-<li>Verify Spark under a Shell task</li>
+<li>Verify Spark under a Shell task.</li>
</ol>
<pre><code>$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
</code></pre>
-<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
-<h3>How to Support Shared Storage between Master, Worker and Api server?</h3>
+<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
+<h3>How to Support Shared Storage between Master, Worker and API server?</h3>
<blockquote>
-<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 1 and 2 can be skipped directly, and execute the command like <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> to put Hadoop into the shared directory <code>/opt/soft</code> in the container</p>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, steps 1 and 2 can be skipped; instead, execute a command like <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> to put Hadoop into the shared directory <code>/opt/soft</code> in the container.</p>
</blockquote>
-<p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
+<p>For example, Master, Worker and API servers may use Hadoop at the same time.</p>
<ol>
-<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support NFS in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support NFS in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
<span class="hljs-attr">dolphinscheduler-shared-local:</span>
@@ -682,15 +681,15 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
</code></pre>
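<p>A complete NFS-backed volume definition might look like this (the NFS server address and export path are placeholders):</p>
<pre><code class="language-yaml">volumes:
  dolphinscheduler-shared-local:
    driver_opts:
      type: "nfs"
      o: "addr=NFS_SERVER_IP,nolock,soft,rw"
      device: ":/path/to/shared/dir"
</code></pre>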
<ol start="2">
<li>
-<p>Put the Hadoop into the NFS</p>
+<p>Put the Hadoop into the NFS.</p>
</li>
<li>
-<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct</p>
+<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct.</p>
</li>
</ol>
<h3>How to Support Local File Resource Storage Instead of HDFS and S3?</h3>
<blockquote>
-<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 2 can be skipped directly</p>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 2 can be skipped directly.</p>
</blockquote>
<ol>
<li>Modify the following environment variables in <code>config.env.sh</code>:</li>
@@ -699,10 +698,10 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
FS_DEFAULT_FS=file:///
</code></pre>
<ol start="2">
-<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support NFS in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support NFS in <code>docker-compose.yml</code>.</li>
</ol>
<blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy DolphinScheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code>.</p>
</blockquote>
<pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
<span class="hljs-attr">dolphinscheduler-resource-local:</span>
@@ -712,7 +711,7 @@ FS_DEFAULT_FS=file:///
<span class="hljs-attr">device:</span> <span class="hljs-string">":/path/to/resource/dir"</span>
</code></pre>
<h3>How to Support S3 Resource Storage Like MinIO?</h3>
-<p>Take MinIO as an example: Modify the following environment variables in <code>config.env.sh</code></p>
+<p>Take MinIO as an example: modify the following environment variables in <code>config.env.sh</code>:</p>
<pre><code>RESOURCE_STORAGE_TYPE=S3
RESOURCE_UPLOAD_PATH=/dolphinscheduler
FS_DEFAULT_FS=s3a://BUCKET_NAME
@@ -720,9 +719,9 @@ FS_S3A_ENDPOINT=http://MINIO_IP:9000
FS_S3A_ACCESS_KEY=MINIO_ACCESS_KEY
FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
</code></pre>
-<p><code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> need to be modified to actual values</p>
+<p>Modify <code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> to actual values.</p>
<blockquote>
-<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access</p>
+<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access.</p>
</blockquote>
<h3>How to Configure SkyWalking?</h3>
<p>Modify SkyWalking environment variables in <code>config.env.sh</code>:</p>
@@ -734,68 +733,68 @@ SW_GRPC_LOG_SERVER_PORT=11800
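<p>Based on the SkyWalking variables listed in the appendix, a typical block might look like this (the collector address <code>127.0.0.1:11800</code> is a placeholder for your OAP server):</p>
<pre><code>SKYWALKING_ENABLE=true
SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
SW_GRPC_LOG_SERVER_HOST=127.0.0.1
SW_GRPC_LOG_SERVER_PORT=11800
</code></pre>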
<h2>Appendix-Environment Variables</h2>
<h3>Database</h3>
<p><strong><code>DATABASE_TYPE</code></strong></p>
-<p>This environment variable sets the type for the database. The default value is <code>postgresql</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>TYPE</code> for the <code>database</code>. The default value is <code>postgresql</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_DRIVER</code></strong></p>
-<p>This environment variable sets the type for the database. The default value is <code>org.postgresql.Driver</code>.</p>
-<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>DRIVER</code> for the <code>database</code>. The default value is <code>org.postgresql.Driver</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_HOST</code></strong></p>
-<p>This environment variable sets the host for the database. The default value is <code>127.0.0.1</code>.</p>
-<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>HOST</code> for the <code>database</code>. The default value is <code>127.0.0.1</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_PORT</code></strong></p>
-<p>This environment variable sets the port for the database. The default value is <code>5432</code>.</p>
-<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>PORT</code> for the <code>database</code>. The default value is <code>5432</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_USERNAME</code></strong></p>
-<p>This environment variable sets the username for the database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>USERNAME</code> for the <code>database</code>. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_PASSWORD</code></strong></p>
-<p>This environment variable sets the password for the database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>PASSWORD</code> for the <code>database</code>. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_DATABASE</code></strong></p>
-<p>This environment variable sets the database for the database. The default value is <code>dolphinscheduler</code>.</p>
-<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>DATABASE</code> for the <code>database</code>. The default value is <code>dolphinscheduler</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<p><strong><code>DATABASE_PARAMS</code></strong></p>
-<p>This environment variable sets the database for the database. The default value is <code>characterEncoding=utf8</code>.</p>
-<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the <code>PARAMS</code> for the <code>database</code>. The default value is <code>characterEncoding=utf8</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, or <code>alert-server</code>.</p>
<h3>ZooKeeper</h3>
<p><strong><code>ZOOKEEPER_QUORUM</code></strong></p>
<p>This environment variable sets ZooKeeper quorum. The default value is <code>127.0.0.1:2181</code>.</p>
-<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone DolphinScheduler server, such as <code>master-server</code>, <code>worker-server</code>, or <code>api-server</code>.</p>
<p><strong><code>ZOOKEEPER_ROOT</code></strong></p>
-<p>This environment variable sets ZooKeeper root directory for dolphinscheduler. The default value is <code>/dolphinscheduler</code>.</p>
+<p>This environment variable sets the ZooKeeper root directory for DolphinScheduler. The default value is <code>/dolphinscheduler</code>.</p>
<h3>Common</h3>
<p><strong><code>DOLPHINSCHEDULER_OPTS</code></strong></p>
-<p>This environment variable sets JVM options for dolphinscheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>. The default value is empty.</p>
+<p>This environment variable sets JVM options for DolphinScheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>DATA_BASEDIR_PATH</code></strong></p>
-<p>User data directory path, self configuration, please make sure the directory exists and have read-write permissions. The default value is <code>/tmp/dolphinscheduler</code></p>
+<p>This environment variable sets the user data directory. Please make sure the directory exists and has read-write permissions. The default value is <code>/tmp/dolphinscheduler</code>.</p>
<p><strong><code>RESOURCE_STORAGE_TYPE</code></strong></p>
-<p>This environment variable sets resource storage types for dolphinscheduler like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
+<p>This environment variable sets resource storage types for DolphinScheduler like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
<p><strong><code>RESOURCE_UPLOAD_PATH</code></strong></p>
-<p>This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is <code>/dolphinscheduler</code>.</p>
+<p>This environment variable sets resource store path on <code>HDFS/S3</code> for resource storage. The default value is <code>/dolphinscheduler</code>.</p>
<p><strong><code>FS_DEFAULT_FS</code></strong></p>
-<p>This environment variable sets fs.defaultFS for resource storage like <code>file:///</code>, <code>hdfs://mycluster:8020</code> or <code>s3a://dolphinscheduler</code>. The default value is <code>file:///</code>.</p>
+<p>This environment variable sets <code>fs.defaultFS</code> for resource storage like <code>file:///</code>, <code>hdfs://mycluster:8020</code> or <code>s3a://dolphinscheduler</code>. The default value is <code>file:///</code>.</p>
<p><strong><code>FS_S3A_ENDPOINT</code></strong></p>
-<p>This environment variable sets s3 endpoint for resource storage. The default value is <code>s3.xxx.amazonaws.com</code>.</p>
+<p>This environment variable sets <code>s3</code> endpoint for resource storage. The default value is <code>s3.xxx.amazonaws.com</code>.</p>
<p><strong><code>FS_S3A_ACCESS_KEY</code></strong></p>
-<p>This environment variable sets s3 access key for resource storage. The default value is <code>xxxxxxx</code>.</p>
+<p>This environment variable sets <code>s3</code> access key for resource storage. The default value is <code>xxxxxxx</code>.</p>
<p><strong><code>FS_S3A_SECRET_KEY</code></strong></p>
-<p>This environment variable sets s3 secret key for resource storage. The default value is <code>xxxxxxx</code>.</p>
+<p>This environment variable sets <code>s3</code> secret key for resource storage. The default value is <code>xxxxxxx</code>.</p>
<p><strong><code>HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE</code></strong></p>
<p>This environment variable sets whether to startup Kerberos. The default value is <code>false</code>.</p>
<p><strong><code>JAVA_SECURITY_KRB5_CONF_PATH</code></strong></p>
-<p>This environment variable sets java.security.krb5.conf path. The default value is <code>/opt/krb5.conf</code>.</p>
+<p>This environment variable sets <code>java.security.krb5.conf</code> path. The default value is <code>/opt/krb5.conf</code>.</p>
<p><strong><code>LOGIN_USER_KEYTAB_USERNAME</code></strong></p>
-<p>This environment variable sets login user from the keytab username. The default value is <code>hdfs@HADOOP.COM</code>.</p>
+<p>This environment variable sets the <code>keytab</code> username for the login user. The default value is <code>hdfs@HADOOP.COM</code>.</p>
<p><strong><code>LOGIN_USER_KEYTAB_PATH</code></strong></p>
-<p>This environment variable sets login user from the keytab path. The default value is <code>/opt/hdfs.keytab</code>.</p>
+<p>This environment variable sets the <code>keytab</code> path for the login user. The default value is <code>/opt/hdfs.keytab</code>.</p>
<p><strong><code>KERBEROS_EXPIRE_TIME</code></strong></p>
-<p>This environment variable sets Kerberos expire time, the unit is hour. The default value is <code>2</code>.</p>
+<p>This environment variable sets the Kerberos expiration time in hours. The default value is <code>2</code>.</p>
<p><strong><code>HDFS_ROOT_USER</code></strong></p>
-<p>This environment variable sets HDFS root user when resource.storage.type=HDFS. The default value is <code>hdfs</code>.</p>
+<p>This environment variable sets HDFS root user when <code>resource.storage.type=HDFS</code>. The default value is <code>hdfs</code>.</p>
<p><strong><code>RESOURCE_MANAGER_HTTPADDRESS_PORT</code></strong></p>
<p>This environment variable sets resource manager HTTP address port. The default value is <code>8088</code>.</p>
<p><strong><code>YARN_RESOURCEMANAGER_HA_RM_IDS</code></strong></p>
-<p>This environment variable sets yarn resourcemanager ha rm ids. The default value is empty.</p>
+<p>This environment variable sets the YARN <code>resourcemanager</code> HA RM IDs. The default value is empty.</p>
<p><strong><code>YARN_APPLICATION_STATUS_ADDRESS</code></strong></p>
<p>This environment variable sets the YARN application status address. The default value is <code>http://ds1:%s/ws/v1/cluster/apps/%s</code>.</p>
<p><strong><code>SKYWALKING_ENABLE</code></strong></p>
@@ -803,9 +802,9 @@ SW_GRPC_LOG_SERVER_PORT=11800
<p><strong><code>SW_AGENT_COLLECTOR_BACKEND_SERVICES</code></strong></p>
<p>This environment variable sets agent collector backend services for SkyWalking. The default value is <code>127.0.0.1:11800</code>.</p>
<p><strong><code>SW_GRPC_LOG_SERVER_HOST</code></strong></p>
-<p>This environment variable sets gRPC log server host for SkyWalking. The default value is <code>127.0.0.1</code>.</p>
+<p>This environment variable sets <code>gRPC</code> log server host for SkyWalking. The default value is <code>127.0.0.1</code>.</p>
<p><strong><code>SW_GRPC_LOG_SERVER_PORT</code></strong></p>
-<p>This environment variable sets gRPC log server port for SkyWalking. The default value is <code>11800</code>.</p>
+<p>This environment variable sets <code>gRPC</code> log server port for SkyWalking. The default value is <code>11800</code>.</p>
<p><strong><code>HADOOP_HOME</code></strong></p>
<p>This environment variable sets <code>HADOOP_HOME</code>. The default value is <code>/opt/soft/hadoop</code>.</p>
<p><strong><code>HADOOP_CONF_DIR</code></strong></p>
@@ -828,15 +827,15 @@ SW_GRPC_LOG_SERVER_PORT=11800
<p><strong><code>MASTER_SERVER_OPTS</code></strong></p>
<p>This environment variable sets JVM options for <code>master-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
<p><strong><code>MASTER_EXEC_THREADS</code></strong></p>
-<p>This environment variable sets exec thread number for <code>master-server</code>. The default value is <code>100</code>.</p>
+<p>This environment variable sets the number of execution threads for <code>master-server</code>. The default value is <code>100</code>.</p>
<p><strong><code>MASTER_EXEC_TASK_NUM</code></strong></p>
-<p>This environment variable sets exec task number for <code>master-server</code>. The default value is <code>20</code>.</p>
+<p>This environment variable sets the number of tasks executed in parallel for <code>master-server</code>. The default value is <code>20</code>.</p>
<p><strong><code>MASTER_DISPATCH_TASK_NUM</code></strong></p>
<p>This environment variable sets dispatch task number for <code>master-server</code>. The default value is <code>3</code>.</p>
<p><strong><code>MASTER_HOST_SELECTOR</code></strong></p>
<p>This environment variable sets host selector for <code>master-server</code>. Optional values include <code>Random</code>, <code>RoundRobin</code> and <code>LowerWeight</code>. The default value is <code>LowerWeight</code>.</p>
<p><strong><code>MASTER_HEARTBEAT_INTERVAL</code></strong></p>
-<p>This environment variable sets heartbeat interval for <code>master-server</code>. The default value is <code>10</code>.</p>
+<p>This environment variable sets heartbeat intervals for <code>master-server</code>. The default value is <code>10</code>.</p>
<p><strong><code>MASTER_TASK_COMMIT_RETRYTIMES</code></strong></p>
<p>This environment variable sets task commit retry times for <code>master-server</code>. The default value is <code>5</code>.</p>
<p><strong><code>MASTER_TASK_COMMIT_INTERVAL</code></strong></p>
@@ -849,7 +848,7 @@ SW_GRPC_LOG_SERVER_PORT=11800
<p><strong><code>WORKER_SERVER_OPTS</code></strong></p>
<p>This environment variable sets JVM options for <code>worker-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
<p><strong><code>WORKER_EXEC_THREADS</code></strong></p>
-<p>This environment variable sets exec thread number for <code>worker-server</code>. The default value is <code>100</code>.</p>
+<p>This environment variable sets the number of execution threads for <code>worker-server</code>. The default value is <code>100</code>.</p>
<p><strong><code>WORKER_HEARTBEAT_INTERVAL</code></strong></p>
<p>This environment variable sets heartbeat interval for <code>worker-server</code>. The default value is <code>10</code>.</p>
<p><strong><code>WORKER_MAX_CPULOAD_AVG</code></strong></p>
@@ -862,7 +861,7 @@ SW_GRPC_LOG_SERVER_PORT=11800
<p><strong><code>ALERT_SERVER_OPTS</code></strong></p>
<p>This environment variable sets JVM options for <code>alert-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
<p><strong><code>XLS_FILE_PATH</code></strong></p>
-<p>This environment variable sets xls file path for <code>alert-server</code>. The default value is <code>/tmp/xls</code>.</p>
+<p>This environment variable sets <code>xls</code> file path for <code>alert-server</code>. The default value is <code>/tmp/xls</code>.</p>
<p><strong><code>MAIL_SERVER_HOST</code></strong></p>
<p>This environment variable sets mail server host for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>MAIL_SERVER_PORT</code></strong></p>
@@ -874,22 +873,22 @@ SW_GRPC_LOG_SERVER_PORT=11800
<p><strong><code>MAIL_PASSWD</code></strong></p>
<p>This environment variable sets mail password for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>MAIL_SMTP_STARTTLS_ENABLE</code></strong></p>
-<p>This environment variable sets SMTP tls for <code>alert-server</code>. The default value is <code>true</code>.</p>
+<p>This environment variable enables SMTP STARTTLS for <code>alert-server</code>. The default value is <code>true</code>.</p>
<p><strong><code>MAIL_SMTP_SSL_ENABLE</code></strong></p>
-<p>This environment variable sets SMTP ssl for <code>alert-server</code>. The default value is <code>false</code>.</p>
+<p>This environment variable sets SMTP <code>ssl</code> for <code>alert-server</code>. The default value is <code>false</code>.</p>
<p><strong><code>MAIL_SMTP_SSL_TRUST</code></strong></p>
-<p>This environment variable sets SMTP ssl truest for <code>alert-server</code>. The default value is empty.</p>
+<p>This environment variable sets SMTP <code>ssl</code> trust for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>ENTERPRISE_WECHAT_ENABLE</code></strong></p>
-<p>This environment variable sets enterprise wechat enable for <code>alert-server</code>. The default value is <code>false</code>.</p>
+<p>This environment variable enables enterprise WeChat for <code>alert-server</code>. The default value is <code>false</code>.</p>
<p><strong><code>ENTERPRISE_WECHAT_CORP_ID</code></strong></p>
-<p>This environment variable sets enterprise wechat corp id for <code>alert-server</code>. The default value is empty.</p>
+<p>This environment variable sets enterprise WeChat corp id for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>ENTERPRISE_WECHAT_SECRET</code></strong></p>
-<p>This environment variable sets enterprise wechat secret for <code>alert-server</code>. The default value is empty.</p>
+<p>This environment variable sets enterprise WeChat secret for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>ENTERPRISE_WECHAT_AGENT_ID</code></strong></p>
-<p>This environment variable sets enterprise wechat agent id for <code>alert-server</code>. The default value is empty.</p>
+<p>This environment variable sets enterprise WeChat agent id for <code>alert-server</code>. The default value is empty.</p>
<p><strong><code>ENTERPRISE_WECHAT_USERS</code></strong></p>
-<p>This environment variable sets enterprise wechat users for <code>alert-server</code>. The default value is empty.</p>
-<h3>Api Server</h3>
+<p>This environment variable sets enterprise WeChat users for <code>alert-server</code>. The default value is empty.</p>
+<h3>API Server</h3>
<p><strong><code>API_SERVER_OPTS</code></strong></p>
<p>This environment variable sets JVM options for <code>api-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
</div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
diff --git a/en-us/docs/dev/user_doc/guide/installation/docker.json b/en-us/docs/dev/user_doc/guide/installation/docker.json
index 1c6957d..adb66d7 100644
--- a/en-us/docs/dev/user_doc/guide/installation/docker.json
+++ b/en-us/docs/dev/user_doc/guide/installation/docker.json
@@ -1,6 +1,6 @@
{
"filename": "docker.md",
- "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to Use this Docker Image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by Docker Compose (Recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
+ "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> version: 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> version: 1.11.0+</li>\n</ul>\n<h2>How to Use this Docker Image</h2>\n<p>Here are 3 ways to quickly install DolphinScheduler:</p>\n<h3>Start DolphinScheduler by Docker Compose (Recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/docker.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/hardware.html b/en-us/docs/dev/user_doc/guide/installation/hardware.html
index 9739e53..08624b9 100644
--- a/en-us/docs/dev/user_doc/guide/installation/hardware.html
+++ b/en-us/docs/dev/user_doc/guide/installation/hardware.html
@@ -11,7 +11,7 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.</p>
+<p>DolphinScheduler, as an open-source distributed workflow task scheduling system, can be deployed and run smoothly in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.</p>
<h2>Linux Operating System Version Requirements</h2>
<table>
<thead>
@@ -44,7 +44,7 @@
The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.</p>
</blockquote>
<h2>Recommended Server Configuration</h2>
-<p>DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:</p>
+<p>DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The recommended server requirements in a production environment are as follows:</p>
<h3>Production Environment</h3>
<table>
<thead>
@@ -69,8 +69,8 @@ The above Linux operating systems can run on physical servers and mainstream vir
<blockquote>
<p><strong>Attention:</strong></p>
<ul>
-<li>The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.</li>
-<li>The hard disk size configuration is recommended by more than 50GB. The system disk and data disk are separated.</li>
+<li>The recommended configuration above is the minimum for deploying DolphinScheduler. A higher configuration is strongly recommended for production environments.</li>
+<li>The recommended hard disk size is more than 50GB, with the system disk and data disk kept separate.</li>
</ul>
</blockquote>
<h2>Network Requirements</h2>
@@ -87,17 +87,17 @@ The above Linux operating systems can run on physical servers and mainstream vir
<tr>
<td>MasterServer</td>
<td>5678</td>
-<td>Not the communication port. Require the native ports do not conflict</td>
+<td>not the communication port; ensure the native ports do not conflict</td>
</tr>
<tr>
<td>WorkerServer</td>
<td>1234</td>
-<td>Not the communication port. Require the native ports do not conflict</td>
+<td>not the communication port; ensure the native ports do not conflict</td>
</tr>
<tr>
<td>ApiApplicationServer</td>
<td>12345</td>
-<td>Backend communication port</td>
+<td>backend communication port</td>
</tr>
</tbody>
</table>
@@ -109,7 +109,7 @@ The above Linux operating systems can run on physical servers and mainstream vir
</ul>
</blockquote>
<h2>Browser Requirements</h2>
-<p>DolphinScheduler recommends Chrome and the latest browsers which using Chrome Kernel to access the front-end visual operator page.</p>
+<p>DolphinScheduler recommends Chrome or the latest browsers based on the Chrome kernel to access the front-end UI page.</p>
</div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
<script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/en-us/docs/dev/user_doc/guide/installation/hardware.json b/en-us/docs/dev/user_doc/guide/installation/hardware.json
index ef7513f..84483ff 100644
--- a/en-us/docs/dev/user_doc/guide/installation/hardware.json
+++ b/en-us/docs/dev/user_doc/guide/installation/hardware.json
@@ -1,6 +1,6 @@
{
"filename": "hardware.md",
- "__html": "<h1>Hardware Environment</h1>\n<p>DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.</p>\n<h2>Linux Operating System Version Requirements</h2>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:left\">OS</th>\n<th style=\"text-align:center\">Version</th>\n</tr>\n</thead>\n<tb [...]
+ "__html": "<h1>Hardware Environment</h1>\n<p>DolphinScheduler, as an open-source distributed workflow task scheduling system, can deploy and run smoothly in Intel architecture server environments and mainstream virtualization environments and supports mainstream Linux operating system environments.</p>\n<h2>Linux Operating System Version Requirements</h2>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:left\">OS</th>\n<th style=\"text-align:center\">Version</th>\n</tr>\n</thead>\n<tbod [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/hardware.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/kubernetes.html b/en-us/docs/dev/user_doc/guide/installation/kubernetes.html
index 2a048ed..d298ad3 100644
--- a/en-us/docs/dev/user_doc/guide/installation/kubernetes.html
+++ b/en-us/docs/dev/user_doc/guide/installation/kubernetes.html
@@ -11,61 +11,61 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>Kubernetes deployment is deploy DolphinScheduler in a Kubernetes cluster, which can schedule a large number of tasks and can be used in production.</p>
-<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href="standalone.md">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to using DolphinScheduler in production, we recommended you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">kubernetes</a></p>
+<p>Kubernetes deployment means deploying DolphinScheduler in a Kubernetes cluster, which can schedule massive tasks and can be used in production.</p>
+<p>If you are new to DolphinScheduler and want to experience its functions, we recommend you follow the <a href="standalone.md">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you follow the <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to deploy DolphinScheduler in production, we recommend you follow the <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">Kubernetes depl [...]
<h2>Prerequisites</h2>
<ul>
-<li><a href="https://helm.sh/">Helm</a> 3.1.0+</li>
-<li><a href="https://kubernetes.io/">Kubernetes</a> 1.12+</li>
+<li><a href="https://helm.sh/">Helm</a> version 3.1.0+</li>
+<li><a href="https://kubernetes.io/">Kubernetes</a> version 1.12+</li>
<li>PV provisioner support in the underlying infrastructure</li>
</ul>
-<h2>Install the Chart</h2>
-<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href="/en-us/download/download.html">download</a></p>
-<p>To install the chart with the release name <code>dolphinscheduler</code>, please execute the following commands:</p>
+<h2>Install DolphinScheduler</h2>
+<p>Please download the source code package <code>apache-dolphinscheduler-1.3.8-src.tar.gz</code> from the <a href="/en-us/download/download.html">download page</a>.</p>
+<p>To deploy a release named <code>dolphinscheduler</code>, please execute the following commands:</p>
<pre><code>$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
$ cd apache-dolphinscheduler-1.3.8-src/docker/kubernetes/dolphinscheduler
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update .
$ helm install dolphinscheduler . --set image.tag=1.3.8
</code></pre>
-<p>To install the chart with a namespace named <code>test</code>:</p>
+<p>To deploy the release into the <code>test</code> namespace:</p>
<pre><code class="language-bash">$ helm install dolphinscheduler . -n <span class="hljs-built_in">test</span>
</code></pre>
<blockquote>
-<p><strong>Tip</strong>: If a namespace named <code>test</code> is used, the option <code>-n test</code> needs to be added to the <code>helm</code> and <code>kubectl</code> command</p>
+<p><strong>Tip</strong>: If a namespace named <code>test</code> is used, the option <code>-n test</code> needs to be added to the <code>helm</code> and <code>kubectl</code> commands.</p>
</blockquote>
-<p>These commands deploy DolphinScheduler on the Kubernetes cluster in the default configuration. The <a href="#appendix-configuration">Appendix-Configuration</a> section lists the parameters that can be configured during installation.</p>
+<p>These commands deploy DolphinScheduler on the Kubernetes cluster with the default configuration. The <a href="#appendix-configuration">Appendix-Configuration</a> section lists the parameters that can be configured during installation.</p>
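For example (a sketch; the keys shown are assumptions modeled on the chart's <code>values.yaml</code>, which remains the authoritative reference), the defaults can be overridden with a custom values file passed via <code>helm install dolphinscheduler . -f custom-values.yaml</code>:

```yaml
# custom-values.yaml -- hypothetical override file; check the chart's
# values.yaml for the authoritative key names and defaults.
image:
  tag: 1.3.8          # DolphinScheduler image tag to deploy
master:
  replicas: 3         # number of master-server pods
worker:
  replicas: 3         # number of worker-server pods
```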
<blockquote>
<p><strong>Tip</strong>: List all releases using <code>helm list</code></p>
</blockquote>
-<p>The <strong>PostgreSQL</strong> (with username <code>root</code>, password <code>root</code> and database <code>dolphinscheduler</code>) and <strong>ZooKeeper</strong> services will start by default</p>
+<p>The <strong>PostgreSQL</strong> (with username <code>root</code>, password <code>root</code> and database <code>dolphinscheduler</code>) and <strong>ZooKeeper</strong> services will start by default.</p>
<h2>Access DolphinScheduler UI</h2>
-<p>If <code>ingress.enabled</code> in <code>values.yaml</code> is set to <code>true</code>, you just access <code>http://${ingress.host}/dolphinscheduler</code> in browser.</p>
+<p>If <code>ingress.enabled</code> in <code>values.yaml</code> is set to <code>true</code>, you can access <code>http://${ingress.host}/dolphinscheduler</code> in a browser.</p>
<blockquote>
-<p><strong>Tip</strong>: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a></p>
+<p><strong>Tip</strong>: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a>.</p>
</blockquote>
-<p>Otherwise, when <code>api.service.type=ClusterIP</code> you need to execute port-forward command like:</p>
+<p>Otherwise, when <code>api.service.type=ClusterIP</code> you need to execute <code>port-forward</code> commands:</p>
<pre><code class="language-bash">$ kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
$ kubectl port-forward --address 0.0.0.0 -n <span class="hljs-built_in">test</span> svc/dolphinscheduler-api 12345:12345 <span class="hljs-comment"># with test namespace</span>
</code></pre>
<blockquote>
-<p><strong>Tip</strong>: If the error of <code>unable to do port forwarding: socat not found</code> appears, you need to install <code>socat</code> at first</p>
+<p><strong>Tip</strong>: If the error of <code>unable to do port forwarding: socat not found</code> appears, you need to install <code>socat</code> first.</p>
</blockquote>
-<p>And then access the web: <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a> (The local address is <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a>)</p>
+<p>Access the web: <code>http://localhost:12345/dolphinscheduler</code> (Modify the IP address if needed).</p>
<p>Or when <code>api.service.type=NodePort</code> you need to execute the command:</p>
<pre><code class="language-bash">NODE_IP=$(kubectl get no -n {{ .Release.Namespace }} -o jsonpath=<span class="hljs-string">"{.items[0].status.addresses[0].address}"</span>)
NODE_PORT=$(kubectl get svc {{ template <span class="hljs-string">"dolphinscheduler.fullname"</span> . }}-api -n {{ .Release.Namespace }} -o jsonpath=<span class="hljs-string">"{.spec.ports[0].nodePort}"</span>)
<span class="hljs-built_in">echo</span> http://<span class="hljs-variable">$NODE_IP</span>:<span class="hljs-variable">$NODE_PORT</span>/dolphinscheduler
</code></pre>
-<p>And then access the web: http://<span class="katex"><span class="katex-mathml"><math><semantics><mrow><mi>N</mi><mi>O</mi><mi>D</mi><msub><mi>E</mi><mi>I</mi></msub><mi>P</mi><mo>:</mo></mrow><annotation encoding="application/x-tex">NODE_IP:</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="strut" style="height:0.68333em;"></span><span class="strut bottom" style="height:0.83333em;vertical-align:-0.15em;"></span><span class="base textstyle u [...]
-<p>The default username is <code>admin</code> and the default password is <code>dolphinscheduler123</code></p>
-<p>Please refer to the <code>Quick Start</code> in the chapter <a href="../quick-start.md">Quick Start</a> to explore how to use DolphinScheduler</p>
+<p>Access the web: <code>http://$NODE_IP:$NODE_PORT/dolphinscheduler</code>.</p>
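As a quick sketch of how that URL is composed (the address and port below are hypothetical placeholders; on a real cluster they come from the <code>kubectl</code> commands above):

```shell
# Hypothetical values standing in for the kubectl output above.
NODE_IP=192.168.1.10
NODE_PORT=30045
# Compose the DolphinScheduler UI URL from the node address and NodePort.
echo "http://${NODE_IP}:${NODE_PORT}/dolphinscheduler"
```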
+<p>The default username is <code>admin</code> and the default password is <code>dolphinscheduler123</code>.</p>
+<p>Please refer to the <code>Quick Start</code> in the chapter <a href="../quick-start.md">Quick Start</a> to explore how to use DolphinScheduler.</p>
<h2>Uninstall the Chart</h2>
-<p>To uninstall/delete the <code>dolphinscheduler</code> deployment:</p>
+<p>To uninstall or delete the <code>dolphinscheduler</code> deployment:</p>
<pre><code class="language-bash">$ helm uninstall dolphinscheduler
</code></pre>
-<p>The command removes all the Kubernetes components but PVC's associated with the chart and deletes the release.</p>
-<p>To delete the PVC's associated with <code>dolphinscheduler</code>:</p>
+<p>The command removes all the Kubernetes components (except PVCs) associated with <code>dolphinscheduler</code> and deletes the release.</p>
+<p>Run the command below to delete the PVCs associated with <code>dolphinscheduler</code>:</p>
<pre><code class="language-bash">$ kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler
</code></pre>
<blockquote>
@@ -241,12 +241,12 @@ NODE_PORT=$(kubectl get svc {{ template <span class="hljs-string">"dolphins
<pre><code>kubectl get po
kubectl get po -n test # with test namespace
</code></pre>
-<p>View the logs of a pod container named dolphinscheduler-master-0:</p>
+<p>View the logs of a pod container named <code>dolphinscheduler-master-0</code>:</p>
<pre><code>kubectl logs dolphinscheduler-master-0
kubectl logs -f dolphinscheduler-master-0 # follow log output
kubectl logs --tail 10 dolphinscheduler-master-0 -n test # show last 10 lines from the end of the logs
</code></pre>
-<h3>How to Scale api, master and worker on Kubernetes?</h3>
+<h3>How to Scale API, Master and Worker on Kubernetes?</h3>
<p>List all deployments (aka <code>deploy</code>):</p>
<pre><code>kubectl get deploy
kubectl get deploy -n test # with test namespace
@@ -274,7 +274,7 @@ kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test names
</blockquote>
<ol>
<li>
-<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a></p>
+<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a>.</p>
</li>
<li>
<p>Create a new <code>Dockerfile</code> to add MySQL driver:</p>
@@ -290,13 +290,13 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
</code></pre>
<ol start="4">
<li>
-<p>Push the docker image <code>apache/dolphinscheduler:mysql-driver</code> to a docker registry</p>
+<p>Push the docker image <code>apache/dolphinscheduler:mysql-driver</code> to a docker registry.</p>
</li>
<li>
-<p>Modify image <code>repository</code> and update <code>tag</code> to <code>mysql-driver</code> in <code>values.yaml</code></p>
+<p>Modify image <code>repository</code> and update <code>tag</code> to <code>mysql-driver</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Modify postgresql <code>enabled</code> to <code>false</code> in <code>values.yaml</code></p>
+<p>Modify <code>postgresql.enabled</code> to <code>false</code> in <code>values.yaml</code>.</p>
</li>
<li>
<p>Modify externalDatabase (especially modify <code>host</code>, <code>username</code> and <code>password</code>) in <code>values.yaml</code>:</p>
@@ -313,7 +313,7 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
<span class="hljs-attr">params:</span> <span class="hljs-string">"useUnicode=true&characterEncoding=UTF-8"</span>
</code></pre>
<ol start="8">
-<li>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</li>
+<li>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</li>
</ol>
<h3>How to Support MySQL Datasource in <code>Datasource manage</code>?</h3>
<blockquote>
@@ -322,7 +322,7 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
</blockquote>
<ol>
<li>
-<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a></p>
+<p>Download the MySQL driver <a href="https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar">mysql-connector-java-8.0.16.jar</a>.</p>
</li>
<li>
<p>Create a new <code>Dockerfile</code> to add MySQL driver:</p>
@@ -338,16 +338,16 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
</code></pre>
<ol start="4">
<li>
-<p>Push the docker image <code>apache/dolphinscheduler:mysql-driver</code> to a docker registry</p>
+<p>Push the docker image <code>apache/dolphinscheduler:mysql-driver</code> to a docker registry.</p>
</li>
<li>
-<p>Modify image <code>repository</code> and update <code>tag</code> to <code>mysql-driver</code> in <code>values.yaml</code></p>
+<p>Modify image <code>repository</code> and update <code>tag</code> to <code>mysql-driver</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Add a MySQL datasource in <code>Datasource manage</code></p>
+<p>Add a MySQL datasource in <code>Datasource manage</code>.</p>
</li>
</ol>
<h3>How to Support Oracle Datasource in <code>Datasource manage</code>?</h3>
@@ -373,16 +373,16 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
</code></pre>
<ol start="4">
<li>
-<p>Push the docker image <code>apache/dolphinscheduler:oracle-driver</code> to a docker registry</p>
+<p>Push the docker image <code>apache/dolphinscheduler:oracle-driver</code> to a docker registry.</p>
</li>
<li>
-<p>Modify image <code>repository</code> and update <code>tag</code> to <code>oracle-driver</code> in <code>values.yaml</code></p>
+<p>Modify image <code>repository</code> and update <code>tag</code> to <code>oracle-driver</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Add an Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code>.</p>
</li>
</ol>
<h3>How to Support Python 2 pip and Custom requirements.txt?</h3>
@@ -396,7 +396,7 @@ RUN apt-get update && \
pip install --no-cache-dir -r /tmp/requirements.txt && \
rm -rf /var/lib/apt/lists/*
</code></pre>
-<p>The command will install the default <strong>pip 18.1</strong>. If you upgrade the pip, just add one line</p>
+<p>The command will install the default <strong>pip 18.1</strong>. If you want to upgrade pip, just add the following line:</p>
<pre><code> pip install --no-cache-dir -U pip && \
</code></pre>
<ol start="2">
@@ -406,16 +406,16 @@ RUN apt-get update && \
</code></pre>
<ol start="3">
<li>
-<p>Push the docker image <code>apache/dolphinscheduler:pip</code> to a docker registry</p>
+<p>Push the docker image <code>apache/dolphinscheduler:pip</code> to a docker registry.</p>
</li>
<li>
-<p>Modify image <code>repository</code> and update <code>tag</code> to <code>pip</code> in <code>values.yaml</code></p>
+<p>Modify image <code>repository</code> and update <code>tag</code> to <code>pip</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Verify pip under a new Python task</p>
+<p>Verify pip under a new Python task.</p>
</li>
</ol>
<h3>How to Support Python 3?</h3>
@@ -427,7 +427,7 @@ RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*
</code></pre>
-<p>The command will install the default <strong>Python 3.7.3</strong>. If you also want to install <strong>pip3</strong>, just replace <code>python3</code> with <code>python3-pip</code> like</p>
+<p>The command will install the default <strong>Python 3.7.3</strong>. If you also want to install <strong>pip3</strong>, just replace <code>python3</code> with <code>python3-pip</code> like:</p>
<pre><code> apt-get install -y --no-install-recommends python3-pip && \
</code></pre>
<ol start="2">
@@ -437,43 +437,43 @@ RUN apt-get update && \
</code></pre>
<ol start="3">
<li>
-<p>Push the docker image <code>apache/dolphinscheduler:python3</code> to a docker registry</p>
+<p>Push the docker image <code>apache/dolphinscheduler:python3</code> to a docker registry.</p>
</li>
<li>
-<p>Modify image <code>repository</code> and update <code>tag</code> to <code>python3</code> in <code>values.yaml</code></p>
+<p>Modify image <code>repository</code> and update <code>tag</code> to <code>python3</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Modify <code>PYTHON_HOME</code> to <code>/usr/bin/python3</code> in <code>values.yaml</code></p>
+<p>Modify <code>PYTHON_HOME</code> to <code>/usr/bin/python3</code> in <code>values.yaml</code>.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Verify Python 3 under a new Python task</p>
+<p>Verify Python 3 under a new Python task.</p>
</li>
</ol>
<h3>How to Support Hadoop, Spark, Flink, Hive or DataX?</h3>
<p>Take Spark 2.4.7 as an example:</p>
<ol>
<li>
-<p>Download the Spark 2.4.7 release binary <code>spark-2.4.7-bin-hadoop2.7.tgz</code></p>
+<p>Download the Spark 2.4.7 release binary <code>spark-2.4.7-bin-hadoop2.7.tgz</code>.</p>
</li>
<li>
-<p>Ensure that <code>common.sharedStoragePersistence.enabled</code> is turned on</p>
+<p>Ensure that <code>common.sharedStoragePersistence.enabled</code> is turned on.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Copy the Spark 2.4.7 release binary into the Docker container</p>
+<p>Copy the Spark 2.4.7 release binary into the Docker container.</p>
</li>
</ol>
<pre><code class="language-bash">kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
kubectl cp -n <span class="hljs-built_in">test</span> spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft <span class="hljs-comment"># with test namespace</span>
</code></pre>
-<p>Because the volume <code>sharedStoragePersistence</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost</p>
+<p>Because the volume <code>sharedStoragePersistence</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost.</p>
<ol start="5">
-<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
+<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists.</li>
</ol>
<pre><code class="language-bash">kubectl <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker-0 bash
kubectl <span class="hljs-built_in">exec</span> -n <span class="hljs-built_in">test</span> -it dolphinscheduler-worker-0 bash <span class="hljs-comment"># with test namespace</span>
@@ -483,15 +483,15 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
<span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
</code></pre>
-<p>The last command will print the Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well.</p>
<ol start="6">
-<li>Verify Spark under a Shell task</li>
+<li>Verify Spark under a Shell task.</li>
</ol>
<pre><code>$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
</code></pre>
-<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
+<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
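That check can be scripted; the sketch below uses a temporary file as a stand-in for the real task log, so the paths are illustrative rather than DolphinScheduler's actual log locations:

```shell
# Sketch: scan a task log for the SparkPi result line.
# The log file here is a temporary stand-in created only for the demo.
log=$(mktemp)
printf 'Pi is roughly 3.146015\n' > "$log"

if grep -q 'Pi is roughly' "$log"; then
  echo "SparkPi task succeeded"   # prints: SparkPi task succeeded
fi
```

On a real deployment you would point `grep` at the task's log file in the UI download or the worker's log directory instead of the stand-in file.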
<ol start="7">
-<li>Verify Spark under a Spark task</li>
+<li>Verify Spark under a Spark task.</li>
</ol>
<p>The file <code>spark-examples_2.11-2.4.7.jar</code> needs to be uploaded to the resources first, and then create a Spark task with:</p>
<ul>
@@ -500,34 +500,34 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
<li>Main Package: <code>spark-examples_2.11-2.4.7.jar</code></li>
<li>Deploy Mode: <code>local</code></li>
</ul>
-<p>Similarly, check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
+<p>Similarly, check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
<ol start="8">
-<li>Verify Spark on YARN</li>
+<li>Verify Spark on YARN.</li>
</ol>
-<p>Spark on YARN (Deploy Mode is <code>cluster</code> or <code>client</code>) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps</p>
-<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> exists</p>
+<p>Spark on YARN (Deploy Mode is <code>cluster</code> or <code>client</code>) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps.</p>
+<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> exists.</p>
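A small shell sketch of that check, using stand-in directories so it runs anywhere; in a real deployment you would run the function inside the worker container (for example via `kubectl exec`), where the variables come from the actual environment:

```shell
# Sketch: confirm both Hadoop variables are set and point at real directories.
check_hadoop_env() {
  [ -n "$HADOOP_HOME" ] && [ -d "$HADOOP_HOME" ] || { echo "HADOOP_HOME is missing"; return 1; }
  [ -n "$HADOOP_CONF_DIR" ] && [ -d "$HADOOP_CONF_DIR" ] || { echo "HADOOP_CONF_DIR is missing"; return 1; }
  echo "Hadoop environment looks good"
}

# Demo with temporary stand-in directories (not real Hadoop installs):
HADOOP_HOME=$(mktemp -d) HADOOP_CONF_DIR=$(mktemp -d) check_hadoop_env
# prints: Hadoop environment looks good
```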
<h3>How to Support Spark 3?</h3>
-<p>In fact, the way to submit applications with <code>spark-submit</code> is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of <code>SPARK_HOME2</code> is the second <code>SPARK_HOME</code> instead of <code>SPARK2</code>'s <code>HOME</code>, so just set <code>SPARK_HOME2=/path/to/spark3</code></p>
+<p>In fact, the way to submit applications with <code>spark-submit</code> is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of <code>SPARK_HOME2</code> is the second <code>SPARK_HOME</code> instead of <code>SPARK2</code>'s <code>HOME</code>, so just set <code>SPARK_HOME2=/path/to/spark3</code>.</p>
<p>Take Spark 3.1.1 as an example:</p>
<ol>
<li>
-<p>Download the Spark 3.1.1 release binary <code>spark-3.1.1-bin-hadoop2.7.tgz</code></p>
+<p>Download the Spark 3.1.1 release binary <code>spark-3.1.1-bin-hadoop2.7.tgz</code>.</p>
</li>
<li>
-<p>Ensure that <code>common.sharedStoragePersistence.enabled</code> is turned on</p>
+<p>Ensure that <code>common.sharedStoragePersistence.enabled</code> is turned on.</p>
</li>
<li>
-<p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
+<p>Run a DolphinScheduler release in Kubernetes (See <strong>Install DolphinScheduler</strong>).</p>
</li>
<li>
-<p>Copy the Spark 3.1.1 release binary into the Docker container</p>
+<p>Copy the Spark 3.1.1 release binary into the Docker container.</p>
</li>
</ol>
<pre><code class="language-bash">kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
kubectl cp -n <span class="hljs-built_in">test</span> spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft <span class="hljs-comment"># with test namespace</span>
</code></pre>
<ol start="5">
-<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
+<li>Attach the container and ensure that <code>SPARK_HOME2</code> exists.</li>
</ol>
<pre><code class="language-bash">kubectl <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker-0 bash
kubectl <span class="hljs-built_in">exec</span> -n <span class="hljs-built_in">test</span> -it dolphinscheduler-worker-0 bash <span class="hljs-comment"># with test namespace</span>
@@ -537,15 +537,15 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
<span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
</code></pre>
-<p>The last command will print the Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well.</p>
<ol start="6">
-<li>Verify Spark under a Shell task</li>
+<li>Verify Spark under a Shell task.</li>
</ol>
<pre><code>$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
</code></pre>
-<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
+<p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code>.</p>
<h3>How to Support Shared Storage Between Master, Worker and Api Server?</h3>
-<p>For example, Master, Worker and API server may use Hadoop at the same time</p>
+<p>For example, Master, Worker and API server may use Hadoop at the same time.</p>
<ol>
<li>Modify the following configurations in <code>values.yaml</code></li>
</ol>
@@ -558,20 +558,20 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
<span class="hljs-attr">storageClassName:</span> <span class="hljs-string">"-"</span>
<span class="hljs-attr">storage:</span> <span class="hljs-string">"20Gi"</span>
</code></pre>
-<p><code>storageClassName</code> and <code>storage</code> need to be modified to actual values</p>
+<p>Modify <code>storageClassName</code> and <code>storage</code> to actual environment values.</p>
<blockquote>
-<p><strong>Note</strong>: <code>storageClassName</code> must support the access mode: <code>ReadWriteMany</code></p>
+<p><strong>Note</strong>: <code>storageClassName</code> must support the access mode: <code>ReadWriteMany</code>.</p>
</blockquote>
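As a pre-flight sketch, the access-mode list reported by kubectl for your storage class or PVC can be checked for `ReadWriteMany` before installing; the `supports_rwx` helper and the sample mode string below are illustrative, not part of DolphinScheduler:

```shell
# Sketch: verify an access-mode list (e.g. captured from `kubectl describe pvc`)
# includes ReadWriteMany, which shared storage requires.
supports_rwx() {
  case "$1" in
    *ReadWriteMany*) return 0 ;;
    *) return 1 ;;
  esac
}

modes="ReadWriteOnce,ReadWriteMany"   # illustrative value
supports_rwx "$modes" && echo "storage class can be shared across pods"
```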
<ol start="2">
<li>
-<p>Copy the Hadoop into the directory <code>/opt/soft</code></p>
+<p>Copy the Hadoop into the directory <code>/opt/soft</code>.</p>
</li>
<li>
-<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct</p>
+<p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct.</p>
</li>
</ol>
<h3>How to Support Local File Resource Storage Instead of HDFS and S3?</h3>
-<p>Modify the following configurations in <code>values.yaml</code></p>
+<p>Modify the following configurations in <code>values.yaml</code>:</p>
<pre><code class="language-yaml"><span class="hljs-attr">common:</span>
<span class="hljs-attr">configmap:</span>
<span class="hljs-attr">RESOURCE_STORAGE_TYPE:</span> <span class="hljs-string">"HDFS"</span>
@@ -584,12 +584,12 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
<span class="hljs-attr">storageClassName:</span> <span class="hljs-string">"-"</span>
<span class="hljs-attr">storage:</span> <span class="hljs-string">"20Gi"</span>
</code></pre>
-<p><code>storageClassName</code> and <code>storage</code> need to be modified to actual values</p>
+<p>Modify <code>storageClassName</code> and <code>storage</code> to actual environment values.</p>
<blockquote>
-<p><strong>Note</strong>: <code>storageClassName</code> must support the access mode: <code>ReadWriteMany</code></p>
+<p><strong>Note</strong>: <code>storageClassName</code> must support the access mode: <code>ReadWriteMany</code>.</p>
</blockquote>
<h3>How to Support S3 Resource Storage Like MinIO?</h3>
-<p>Take MinIO as an example: Modify the following configurations in <code>values.yaml</code></p>
+<p>Take MinIO as an example: Modify the following configurations in <code>values.yaml</code>:</p>
<pre><code class="language-yaml"><span class="hljs-attr">common:</span>
<span class="hljs-attr">configmap:</span>
<span class="hljs-attr">RESOURCE_STORAGE_TYPE:</span> <span class="hljs-string">"S3"</span>
@@ -599,12 +599,12 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
<span class="hljs-attr">FS_S3A_ACCESS_KEY:</span> <span class="hljs-string">"MINIO_ACCESS_KEY"</span>
<span class="hljs-attr">FS_S3A_SECRET_KEY:</span> <span class="hljs-string">"MINIO_SECRET_KEY"</span>
</code></pre>
-<p><code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> need to be modified to actual values</p>
+<p>Modify <code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> to actual environment values.</p>
<blockquote>
-<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of domain name, because DolphinScheduler currently doesn't support S3 path style access</p>
+<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access.</p>
</blockquote>
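Since a hostname in `MINIO_IP` will not work, a quick pre-flight check like the following catches it early; the `is_ipv4` helper is illustrative and not part of DolphinScheduler:

```shell
# Sketch: reject a domain name where DolphinScheduler expects a literal IPv4 address.
is_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

is_ipv4 "192.168.1.10" && echo "ok: literal IP"
is_ipv4 "minio.example.com" || echo "rejected: use the MinIO server IP, not a hostname"
```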
<h3>How to Configure SkyWalking?</h3>
-<p>Modify SKYWALKING configurations in <code>values.yaml</code>:</p>
+<p>Modify SkyWalking configurations in <code>values.yaml</code>:</p>
<pre><code class="language-yaml"><span class="hljs-attr">common:</span>
<span class="hljs-attr">configmap:</span>
<span class="hljs-attr">SKYWALKING_ENABLE:</span> <span class="hljs-string">"true"</span>
@@ -644,7 +644,7 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
</tr>
<tr>
<td><code>image.pullPolicy</code></td>
-<td>Image pull policy. One of Always, Never, IfNotPresent</td>
+<td>Image pull policy. Options: Always, Never, IfNotPresent</td>
<td><code>IfNotPresent</code></td>
</tr>
<tr>
diff --git a/en-us/docs/dev/user_doc/guide/installation/kubernetes.json b/en-us/docs/dev/user_doc/guide/installation/kubernetes.json
index 611e91f..7c6d840 100644
--- a/en-us/docs/dev/user_doc/guide/installation/kubernetes.json
+++ b/en-us/docs/dev/user_doc/guide/installation/kubernetes.json
@@ -1,6 +1,6 @@
{
"filename": "kubernetes.md",
- "__html": "<h1>QuickStart in Kubernetes</h1>\n<p>Kubernetes deployment is deploy DolphinScheduler in a Kubernetes cluster, which can schedule a large number of tasks and can be used in production.</p>\n<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href=\"standalone.md\">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href=\"pseudo-cluster.md\ [...]
+ "__html": "<h1>QuickStart in Kubernetes</h1>\n<p>Kubernetes deployment is DolphinScheduler deployment in a Kubernetes cluster, which can schedule massive tasks and can be used in production.</p>\n<p>If you are a new hand and want to experience DolphinScheduler functions, we recommend you install follow <a href=\"standalone.md\">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you install follow <a href=\"pseudo-cluste [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/kubernetes.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html b/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html
index 1cef8ba..a2b5eeb 100644
--- a/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html
+++ b/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html
@@ -11,10 +11,10 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>The purpose of pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, api server, are all on the same machine.</p>
-<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href="standalone.md">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to using DolphinScheduler in production, we recommended you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">kubernetes</a></p>
-<h2>Prepare</h2>
-<p>Pseudo-cluster deployment of DolphinScheduler requires external software support</p>
+<p>The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, and API server all run on the same machine.</p>
+<p>If you are new to DolphinScheduler and want to experience its functions, we recommend you follow the <a href="standalone.md">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you follow the <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to deploy DolphinScheduler in production, we recommend you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">Kubernetes depl [...]
+<h2>Preparation</h2>
+<p>Pseudo-cluster deployment of DolphinScheduler requires external software support:</p>
<ul>
<li>JDK: Download <a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> (1.8+), and configure the <code>JAVA_HOME</code> and <code>PATH</code> variables. You can skip this step if it already exists in your environment.</li>
<li>Binary package: Download the DolphinScheduler binary package at the <a href="https://dolphinscheduler.apache.org/en-us/download/download.html">download page</a></li>
@@ -28,11 +28,11 @@
</li>
</ul>
<blockquote>
-<p><strong><em>Note:</em></strong> DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but if you need to run tasks that depend on them, you need to have the corresponding environment support</p>
+<p><strong><em>Note:</em></strong> DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but if you need to run tasks that depend on them, you need to have the corresponding environment support.</p>
</blockquote>
<h2>DolphinScheduler Startup Environment</h2>
<h3>Configure User Exemption and Permissions</h3>
-<p>Create a deployment user, and be sure to configure <code>sudo</code> without password. We here make a example for user dolphinscheduler.</p>
+<p>Create a deployment user, and make sure to configure passwordless <code>sudo</code> for it. The example below creates the user <code>dolphinscheduler</code>:</p>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> To create a user, login as root</span>
useradd dolphinscheduler
<span class="hljs-meta">
@@ -49,12 +49,12 @@ chown -R dolphinscheduler:dolphinscheduler apache-dolphinscheduler-*-bin
<blockquote>
<p><strong><em>NOTICE:</em></strong></p>
<ul>
-<li>Because DolphinScheduler's multi-tenant task switch user by command <code>sudo -u {linux-user}</code>, the deployment user needs to have sudo privileges and is password-free. If novice learners don’t understand, you can ignore this point for the time being.</li>
-<li>If you find the line "Defaults requirest" in the <code>/etc/sudoers</code> file, please comment it</li>
+<li>Because DolphinScheduler switches users for multi-tenant tasks with the command <code>sudo -u {linux-user}</code>, the deployment user needs password-free <code>sudo</code> privileges. If you don’t fully understand this yet, you can ignore it for now.</li>
+<li>If you find the line "Defaults requiretty" in the <code>/etc/sudoers</code> file, please comment it out.</li>
</ul>
</blockquote>
<h3>Configure Machine SSH Password-Free Login</h3>
-<p>Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The steps to configure password-free login are as follows</p>
+<p>Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The steps to configure password-free login are as follows:</p>
<pre><code class="language-shell">su dolphinscheduler
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
@@ -62,10 +62,10 @@ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
</code></pre>
<blockquote>
-<p><strong><em>Notice:</em></strong> After the configuration is complete, you can run the command <code>ssh localhost</code> to test if it work or not, if you can login with ssh without password.</p>
+<p><strong><em>Notice:</em></strong> After the configuration is complete, you can run the command <code>ssh localhost</code> to test whether it works. If you can log in via SSH without a password, the configuration is successful.</p>
</blockquote>
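sshd silently ignores `authorized_keys` files with loose permissions, which is why the `chmod 600` step above matters. A small sketch of that check follows; the helper name and the temporary paths are stand-ins, not defaults shipped with DolphinScheduler:

```shell
# Sketch: confirm authorized_keys carries the strict mode sshd requires.
check_authorized_keys() {
  perms=$(stat -c %a "$1" 2>/dev/null)
  if [ "$perms" = "600" ]; then
    echo "permissions ok"
  else
    echo "run: chmod 600 $1"
  fi
}

# Demo against a temporary directory instead of the real ~/.ssh:
demo_dir=$(mktemp -d)
touch "$demo_dir/authorized_keys"
chmod 600 "$demo_dir/authorized_keys"
check_authorized_keys "$demo_dir/authorized_keys"   # prints: permissions ok
```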
<h3>Start ZooKeeper</h3>
-<p>Go to the ZooKeeper installation directory, copy configure file <code>zoo_sample.cfg</code> to <code>conf/zoo.cfg</code>, and change value of dataDir in <code>conf/zoo.cfg</code> to <code>dataDir=./tmp/zookeeper</code></p>
+<p>Go to the ZooKeeper installation directory, copy the configuration file <code>zoo_sample.cfg</code> to <code>conf/zoo.cfg</code>, and change the value of <code>dataDir</code> in <code>conf/zoo.cfg</code> to <code>dataDir=./tmp/zookeeper</code>.</p>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> Start ZooKeeper</span>
./bin/zkServer.sh start
</code></pre>
@@ -80,43 +80,43 @@ spring.datasource.username=dolphinscheduler
spring.datasource.password=dolphinscheduler
```
-After modifying and saving, execute the following command to create database table and inti basic data.
+After modifying and saving, execute the following command to create the database tables and initialize the basic data.
```shell
sh script/create-dolphinscheduler.sh
```
-->
<h2>Modify Configuration</h2>
-<p>After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is in the path of <code>conf/config/install_config.conf</code>. Generally, you just needs to modify the <strong>INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server</strong> part to complete the deployment, the following describes the parameters that must be modified</p>
+<p>After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is in the path of <code>conf/config/install_config.conf</code>. Generally, you just need to modify the <strong>INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server</strong> part to complete the deployment, the following describes the parameters that must be modified:</p>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
<span class="hljs-meta">#</span><span class="bash"> INSTALL MACHINE</span>
<span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
-<span class="hljs-meta">#</span><span class="bash"> Because the master, worker, and API server are deployed on a single node, the IP of the server is the machine IP or localhost</span>
+<span class="hljs-meta">#</span><span class="bash"> Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost</span>
ips="localhost"
masters="localhost"
workers="localhost:default"
alertServer="localhost"
apiServers="localhost"
<span class="hljs-meta">
-#</span><span class="bash"> DolphinScheduler installation path, it will auto create <span class="hljs-keyword">if</span> not exists</span>
+#</span><span class="bash"> DolphinScheduler installation path, it will auto-create <span class="hljs-keyword">if</span> not exists</span>
installPath="~/dolphinscheduler"
<span class="hljs-meta">
-#</span><span class="bash"> Deploy user, use what you create <span class="hljs-keyword">in</span> section **Configure machine SSH password-free login**</span>
+#</span><span class="bash"> Deploy user, use the user you create <span class="hljs-keyword">in</span> section **Configure machine SSH password-free login**</span>
deployUser="dolphinscheduler"
<span class="hljs-meta">
#</span><span class="bash"> ---------------------------------------------------------</span>
<span class="hljs-meta">#</span><span class="bash"> DolphinScheduler ENV</span>
<span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
-<span class="hljs-meta">#</span><span class="bash"> The path of JAVA_HOME, <span class="hljs-built_in">which</span> JDK install path <span class="hljs-keyword">in</span> section **Prepare**</span>
+<span class="hljs-meta">#</span><span class="bash"> The path of JAVA_HOME, <span class="hljs-built_in">which</span> is the JDK install path <span class="hljs-keyword">in</span> section **Preparation**</span>
javaHome="/your/java/home/here"
<span class="hljs-meta">
#</span><span class="bash"> ---------------------------------------------------------</span>
<span class="hljs-meta">#</span><span class="bash"> Database</span>
<span class="hljs-meta">#</span><span class="bash"> ---------------------------------------------------------</span>
-<span class="hljs-meta">#</span><span class="bash"> Database <span class="hljs-built_in">type</span>, username, password, IP, port, metadata. For now dbtype supports `mysql` and `postgresql`</span>
+<span class="hljs-meta">#</span><span class="bash"> Database <span class="hljs-built_in">type</span>, username, password, IP, port, metadata. For now `dbtype` supports `mysql` and `postgresql`</span>
dbtype="mysql"
dbhost="localhost:3306"
-<span class="hljs-meta">#</span><span class="bash"> Have to modify <span class="hljs-keyword">if</span> you are not using dolphinscheduler/dolphinscheduler as your username and password</span>
+<span class="hljs-meta">#</span><span class="bash"> Need to modify <span class="hljs-keyword">if</span> you are not using `dolphinscheduler/dolphinscheduler` as your username and password</span>
username="dolphinscheduler"
password="dolphinscheduler"
dbname="dolphinscheduler"
@@ -128,7 +128,7 @@ dbname="dolphinscheduler"
registryServers="localhost:2181"
</code></pre>
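Since only `mysql` and `postgresql` are accepted for `dbtype`, a guard like the following can fail fast before running `install.sh`; the `validate_dbtype` helper is an illustrative sketch, not part of the install scripts:

```shell
# Sketch: check dbtype against the two documented values before deploying.
validate_dbtype() {
  case "$1" in
    mysql|postgresql) echo "dbtype '$1' is supported" ;;
    *) echo "dbtype '$1' is not supported; use mysql or postgresql" >&2; return 1 ;;
  esac
}

validate_dbtype "mysql"   # prints: dbtype 'mysql' is supported
```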
<h2>Initialize the Database</h2>
-<p>DolphinScheduler metadata is stored in relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download <a href="https://downloads.MySQL.com/archives/c-j/">mysql-connector-java driver</a> (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example for how to initialize the database</p>
+<p>DolphinScheduler metadata is stored in a relational database; currently PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the <a href="https://downloads.MySQL.com/archives/c-j/">mysql-connector-java driver</a> (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example of how to initialize the database:</p>
<pre><code class="language-shell">mysql -uroot -p
<span class="hljs-meta">
mysql></span><span class="bash"> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;</span>
@@ -139,19 +139,19 @@ mysql></span><span class="bash"> CREATE DATABASE dolphinscheduler DEFAULT CHA
<span class="hljs-meta">
mysql></span><span class="bash"> flush privileges;</span>
</code></pre>
-<p>After above steps done you would create a new database for DolphinScheduler, then run shortcut Shell scripts to init database</p>
+<p>After completing the above steps, you have created a new database for DolphinScheduler; now run the Shell script to initialize the database:</p>
<pre><code class="language-shell">sh script/create-dolphinscheduler.sh
</code></pre>
<h2>Start DolphinScheduler</h2>
-<p>Use <strong>deployment user</strong> you created above, running the following command to complete the deployment, and the server log will be stored in the logs folder</p>
+<p>Use the <strong>deployment user</strong> you created above and run the following command to complete the deployment; the server logs will be stored in the <code>logs</code> folder.</p>
<pre><code class="language-shell">sh install.sh
</code></pre>
<blockquote>
-<p><strong><em>Note:</em></strong> For the first time deployment, there maybe occur five times of <code>sh: bin/dolphinscheduler-daemon.sh: No such file or directory</code> in terminal
-, this is non-important information and you can ignore it.</p>
+<p><strong><em>Note:</em></strong> On the first deployment, the message <code>sh: bin/dolphinscheduler-daemon.sh: No such file or directory</code> may appear up to five times in the terminal;
+this is harmless and you can ignore it.</p>
</blockquote>
<h2>Login DolphinScheduler</h2>
-<p>The browser access address <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a> can login DolphinScheduler UI. The default username and password are <strong>admin/dolphinscheduler123</strong></p>
+<p>Access <code>http://localhost:12345/dolphinscheduler</code> in a browser and log in to the DolphinScheduler UI. The default username and password are <strong>admin/dolphinscheduler123</strong>.</p>
<h2>Start or Stop Server</h2>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> Stop all DolphinScheduler server</span>
sh ./bin/stop-all.sh
diff --git a/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.json b/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.json
index 362e964..8a9cca1 100644
--- a/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.json
+++ b/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.json
@@ -1,6 +1,6 @@
{
"filename": "pseudo-cluster.md",
- "__html": "<h1>Pseudo-Cluster Deployment</h1>\n<p>The purpose of pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, api server, are all on the same machine.</p>\n<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href=\"standalone.md\">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended [...]
+ "__html": "<h1>Pseudo-Cluster Deployment</h1>\n<p>The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, API server, are all on the same machine.</p>\n<p>If you are a new hand and want to experience DolphinScheduler functions, we recommend you install follow <a href=\"standalone.md\">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html b/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html
index aad1b1a..aed15b8 100644
--- a/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html
+++ b/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html
@@ -11,10 +11,10 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>The dolphinscheduler-skywalking module provides <a href="https://skywalking.apache.org/">SkyWalking</a> monitor agent for the Dolphinscheduler project.</p>
-<p>This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>
+<p>The <code>dolphinscheduler-skywalking</code> module provides <a href="https://skywalking.apache.org/">SkyWalking</a> monitor agent for the DolphinScheduler project.</p>
+<p>This document describes how to enable SkyWalking version 8.4+ support with this module (recommend using SkyWalking 8.5.0).</p>
<h2>Installation</h2>
-<p>The following configuration is used to enable SkyWalking agent.</p>
+<p>The following configuration is used to enable the SkyWalking agent.</p>
<h3>Through Environment Variable Configuration (for Docker Compose)</h3>
<p>Modify SkyWalking environment variables in <code>docker/docker-swarm/config.env.sh</code>:</p>
<pre><code>SKYWALKING_ENABLE=true
@@ -22,7 +22,7 @@ SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
SW_GRPC_LOG_SERVER_HOST=127.0.0.1
SW_GRPC_LOG_SERVER_PORT=11800
</code></pre>
-<p>And run</p>
+<p>And run:</p>
<pre><code class="language-shell"><span class="hljs-meta">$</span><span class="bash"> docker-compose up -d</span>
</code></pre>
<h3>Through Environment Variable Configuration (for Docker)</h3>
@@ -56,7 +56,7 @@ apache/dolphinscheduler:1.3.8 all</span>
<h4>Import DolphinScheduler Dashboard to SkyWalking Server</h4>
<p>Copy the <code>${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml</code> file into <code>${skywalking-oap-server.home}/config/ui-initialized-templates/</code> directory, and restart SkyWalking oap-server.</p>
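+<p>As a sketch of the step above (<code>${dolphinscheduler.home}</code> and <code>${skywalking-oap-server.home}</code> are placeholders for your actual installation directories, and <code>bin/oapService.sh</code> assumes a default SkyWalking distribution layout):</p>
+<pre><code class="language-shell"># Copy the DolphinScheduler dashboard template into the oap-server config
+cp ${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml \
+   ${skywalking-oap-server.home}/config/ui-initialized-templates/
+# Restart the SkyWalking oap-server so it picks up the new template
+bash ${skywalking-oap-server.home}/bin/oapService.sh
+</code></pre>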
<h4>View DolphinScheduler Dashboard</h4>
-<p>If you have opened SkyWalking dashboard with a browser before, you need to clear the browser cache.</p>
+<p>If you have opened the SkyWalking dashboard with a browser before, you need to clear the browser cache.</p>
<p><img src="/img/skywalking/import-dashboard-1.jpg" alt="img1"></p>
</div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
<script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
diff --git a/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.json b/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.json
index 9e6c173..ff50667 100644
--- a/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.json
+++ b/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.json
@@ -1,6 +1,6 @@
{
"filename": "skywalking-agent.md",
- "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The dolphinscheduler-skywalking module provides <a href=\"https://skywalking.apache.org/\">SkyWalking</a> monitor agent for the Dolphinscheduler project.</p>\n<p>This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>\n<h2>Installation</h2>\n<p>The following configuration is used to enable SkyWalking agent.</p>\n<h3>Through Environment Variable Configuration (for Docker [...]
+ "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The <code>dolphinscheduler-skywalking</code> module provides <a href=\"https://skywalking.apache.org/\">SkyWalking</a> monitor agent for the DolphinScheduler project.</p>\n<p>This document describes how to enable SkyWalking version 8.4+ support with this module (recommend using SkyWalking 8.5.0).</p>\n<h2>Installation</h2>\n<p>The following configuration is used to enable the SkyWalking agent.</p>\n<h3>Through Environment Variable Con [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html",
"meta": {}
}
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/installation/standalone.html b/en-us/docs/dev/user_doc/guide/installation/standalone.html
index 410fdc9..29644e9 100644
--- a/en-us/docs/dev/user_doc/guide/installation/standalone.html
+++ b/en-us/docs/dev/user_doc/guide/installation/standalone.html
@@ -11,28 +11,28 @@
</head>
<body>
<div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<p>Standalone only for quick look for DolphinScheduler.</p>
-<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href="standalone.md">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to using DolphinScheduler in production, we recommended you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">kubernetes</a></p>
+<p>Standalone is only for a quick experience of DolphinScheduler.</p>
+<p>If you are new to DolphinScheduler and want to experience its functions, we recommend you follow the <a href="standalone.md">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you follow the <a href="pseudo-cluster.md">pseudo-cluster deployment</a>. If you want to deploy DolphinScheduler in production, we recommend you follow <a href="cluster.md">cluster deployment</a> or <a href="kubernetes.md">Kubernetes depl [...]
<blockquote>
-<p><strong><em>Note:</em></strong> Standalone only recommends the use of less than 20 workflows, because it uses H2 Database, ZooKeeper Testing Server, too many tasks may cause instability</p>
+<p><strong><em>Note:</em></strong> Standalone is recommended only for fewer than 20 workflows, because it uses an H2 database and a ZooKeeper testing server; too many tasks may cause instability.</p>
</blockquote>
-<h2>Prepare</h2>
+<h2>Preparation</h2>
<ul>
-<li>JDK:Download <a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> (1.8+), and configure <code>JAVA_HOME</code> to and <code>PATH</code> variable. You can skip this step, if it already exists in your environment.</li>
-<li>Binary package: Download the DolphinScheduler binary package at <a href="https://dolphinscheduler.apache.org/en-us/download/download.html">download page</a></li>
+<li>JDK: download <a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> (1.8+), configure <code>JAVA_HOME</code>, and add it to your <code>PATH</code> variable. You can skip this step if it already exists in your environment.</li>
+<li>Binary package: download the DolphinScheduler binary package at <a href="https://dolphinscheduler.apache.org/en-us/download/download.html">download page</a>.</li>
</ul>
<h2>Start DolphinScheduler Standalone Server</h2>
<h3>Extract and Start DolphinScheduler</h3>
-<p>There is a standalone startup script in the binary compressed package, which can be quickly started after extract. Switch to a user with sudo permission and run the script</p>
+<p>There is a standalone startup script in the binary compressed package, which can be quickly started after extraction. Switch to a user with sudo permission and run the script:</p>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> Extract and start Standalone Server</span>
tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
cd apache-dolphinscheduler-*-bin
sh ./bin/dolphinscheduler-daemon.sh start standalone-server
</code></pre>
<h3>Login DolphinScheduler</h3>
-<p>The browser access address <a href="http://localhost:12345/dolphinscheduler">http://localhost:12345/dolphinscheduler</a> can login DolphinScheduler UI. The default username and password are <strong>admin/dolphinscheduler123</strong></p>
+<p>Access <code>http://localhost:12345/dolphinscheduler</code> in your browser to log in to the DolphinScheduler UI. The default username and password are <strong>admin/dolphinscheduler123</strong>.</p>
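+<p>To verify the server is reachable before opening a browser, you can probe the documented address with curl (a sketch; assumes the standalone server is running on the default port 12345):</p>
+<pre><code class="language-shell"># Fetch only the response headers from the DolphinScheduler UI
+curl -I http://localhost:12345/dolphinscheduler
+</code></pre>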
<h3>Start or Stop Server</h3>
-<p>The script <code>./bin/dolphinscheduler-daemon.sh</code> can not only quickly start standalone, but also stop the service operation. All the commands are as follows</p>
+<p>The script <code>./bin/dolphinscheduler-daemon.sh</code> can be used not only to quickly start the standalone server, but also to stop it. All the commands are as follows:</p>
<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> Start Standalone Server</span>
sh ./bin/dolphinscheduler-daemon.sh start standalone-server
<span class="hljs-meta">#</span><span class="bash"> Stop Standalone Server</span>
diff --git a/en-us/docs/dev/user_doc/guide/installation/standalone.json b/en-us/docs/dev/user_doc/guide/installation/standalone.json
index bdc2ce5..e8cddda 100644
--- a/en-us/docs/dev/user_doc/guide/installation/standalone.json
+++ b/en-us/docs/dev/user_doc/guide/installation/standalone.json
@@ -1,6 +1,6 @@
{
"filename": "standalone.md",
- "__html": "<h1>Standalone</h1>\n<p>Standalone only for quick look for DolphinScheduler.</p>\n<p>If you are a green hand and want to experience DolphinScheduler, we recommended you install follow <a href=\"standalone.md\">Standalone</a>. If you want to experience more complete functions or schedule large tasks number, we recommended you install follow <a href=\"pseudo-cluster.md\">pseudo-cluster deployment</a>. If you want to using DolphinScheduler in production, we recommended you foll [...]
+ "__html": "<h1>Standalone</h1>\n<p>Standalone only for quick experience for DolphinScheduler.</p>\n<p>If you are a new hand and want to experience DolphinScheduler functions, we recommend you install follow <a href=\"standalone.md\">Standalone deployment</a>. If you want to experience more complete functions and schedule massive tasks, we recommend you install follow <a href=\"pseudo-cluster.md\">pseudo-cluster deployment</a>. If you want to deploy DolphinScheduler in production, we re [...]
"link": "/dist/en-us/docs/dev/user_doc/guide/installation/standalone.html",
"meta": {}
}
\ No newline at end of file