Posted to notifications@dolphinscheduler.apache.org by gi...@apache.org on 2019/11/21 05:39:28 UTC

[incubator-dolphinscheduler-website] branch asf-site updated: Automated deployment: Thu Nov 21 05:39:17 UTC 2019 fc66f1a5d54da10e41a9cd2429079d86fe036d8b

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


View the commit online:
https://github.com/apache/incubator-dolphinscheduler-website/commit/d0c30a4982f4457d4c0d55b28d49ca683d2bbcaa

The following commit(s) were added to refs/heads/asf-site by this push:
     new d0c30a4  Automated deployment: Thu Nov 21 05:39:17 UTC 2019 fc66f1a5d54da10e41a9cd2429079d86fe036d8b
d0c30a4 is described below

commit d0c30a4982f4457d4c0d55b28d49ca683d2bbcaa
Author: dailidong <da...@users.noreply.github.com>
AuthorDate: Thu Nov 21 05:39:17 2019 +0000

    Automated deployment: Thu Nov 21 05:39:17 UTC 2019 fc66f1a5d54da10e41a9cd2429079d86fe036d8b
---
 en-us/blog/architecture-design.html    | 304 +++++++++++++++++++++++++++++++++
 en-us/blog/architecture-design.json    |   6 +
 en-us/blog/index.html                  |  27 +++
 en-us/blog/meetup_2019_10_26.html      |  40 +++++
 en-us/blog/meetup_2019_10_26.json      |   6 +
 en-us/index.html                       |   2 +-
 img/addtenant.png                      | Bin 0 -> 15441 bytes
 img/auth_project.png                   | Bin 0 -> 17445 bytes
 img/auth_user.png                      | Bin 0 -> 10254 bytes
 img/complement.png                     | Bin 0 -> 58442 bytes
 img/create-queue.png                   | Bin 0 -> 254254 bytes
 img/dag1.png                           | Bin 0 -> 500407 bytes
 img/dag2.png                           | Bin 0 -> 343883 bytes
 img/dag3.png                           | Bin 0 -> 329032 bytes
 img/dag4.png                           | Bin 0 -> 24671 bytes
 img/depend-node.png                    | Bin 0 -> 435609 bytes
 img/depend-node2.png                   | Bin 0 -> 412944 bytes
 img/depend-node3.png                   | Bin 0 -> 473875 bytes
 img/dependent_edit.png                 | Bin 0 -> 65166 bytes
 img/file-manage.png                    | Bin 0 -> 234978 bytes
 img/file_create.png                    | Bin 0 -> 48006 bytes
 img/file_detail.png                    | Bin 0 -> 169913 bytes
 img/file_rename.png                    | Bin 0 -> 9581 bytes
 img/file_upload.png                    | Bin 0 -> 14500 bytes
 img/gant-pic.png                       | Bin 0 -> 238924 bytes
 img/global_parameter.png               | Bin 0 -> 116141 bytes
 img/hive_edit.png                      | Bin 0 -> 46641 bytes
 img/hive_edit2.png                     | Bin 0 -> 48423 bytes
 img/hive_kerberos.png                  | Bin 0 -> 37052 bytes
 img/instance-detail.png                | Bin 0 -> 333599 bytes
 img/instance-list.png                  | Bin 0 -> 491317 bytes
 img/local_parameter.png                | Bin 0 -> 25661 bytes
 img/login.jpg                          | Bin 0 -> 49053 bytes
 img/mail_edit.png                      | Bin 0 -> 14438 bytes
 img/master-jk.png                      | Bin 0 -> 325435 bytes
 img/mr_edit.png                        | Bin 0 -> 136183 bytes
 img/mysql-jk.png                       | Bin 0 -> 208525 bytes
 img/mysql_edit.png                     | Bin 0 -> 48390 bytes
 img/postgresql_edit.png                | Bin 0 -> 32368 bytes
 img/procedure_edit.png                 | Bin 0 -> 89355 bytes
 img/project.png                        | Bin 0 -> 190310 bytes
 img/python_edit.png                    | Bin 0 -> 467741 bytes
 img/run-work.png                       | Bin 0 -> 43151 bytes
 img/shell_edit.png                     | Bin 0 -> 157618 bytes
 img/spark_datesource.png               | Bin 0 -> 29955 bytes
 img/spark_edit.png                     | Bin 0 -> 123946 bytes
 img/sparksql_kerberos.png              | Bin 0 -> 37390 bytes
 img/sql-node.png                       | Bin 0 -> 477610 bytes
 img/sql-node2.png                      | Bin 0 -> 505130 bytes
 img/subprocess_edit.png                | Bin 0 -> 76964 bytes
 img/task-list.png                      | Bin 0 -> 426472 bytes
 img/task-log.png                       | Bin 0 -> 302312 bytes
 img/task-log2.png                      | Bin 0 -> 334315 bytes
 img/task_history.png                   | Bin 0 -> 109027 bytes
 img/time-schedule.png                  | Bin 0 -> 55284 bytes
 img/time-schedule2.png                 | Bin 0 -> 44594 bytes
 img/udf_edit.png                       | Bin 0 -> 26472 bytes
 img/useredit2.png                      | Bin 0 -> 17285 bytes
 img/worker-jk.png                      | Bin 0 -> 309276 bytes
 img/worker1.png                        | Bin 0 -> 253566 bytes
 img/zk-jk.png                          | Bin 0 -> 194588 bytes
 zh-cn/docs/user_doc/quick-start.html   |  16 +-
 zh-cn/docs/user_doc/quick-start.json   |   2 +-
 zh-cn/docs/user_doc/system-manual.html | 112 ++++++------
 zh-cn/docs/user_doc/system-manual.json |   2 +-
 65 files changed, 450 insertions(+), 67 deletions(-)

diff --git a/en-us/blog/architecture-design.html b/en-us/blog/architecture-design.html
new file mode 100644
index 0000000..66eddfd
--- /dev/null
+++ b/en-us/blog/architecture-design.html
@@ -0,0 +1,304 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+	<meta charset="UTF-8">
+	<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
+	<meta name="keywords" content="architecture-design" />
+	<meta name="description" content="architecture-design" />
+	<!-- page tab title -->
+	<title>architecture-design</title>
+	<link rel="shortcut icon" href="/img/docsite.ico"/>
+	<link rel="stylesheet" href="/build/blogDetail.css" />
+</head>
+<body>
+	<div id="root"><div class="blog-detail-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_colorful.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class="language-switch language-switch-normal">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_gray.png"/><ul><li class="menu-item menu-item-normal [...]
+<p>Before explaining the architecture of the scheduling system, let us first define its common terms.</p>
+<h3>1. Terminology</h3>
+<p><strong>DAG:</strong> Directed Acyclic Graph, abbreviated as DAG. Tasks in a workflow are assembled as a directed acyclic graph, which is traversed topologically from the nodes with zero in-degree until no successor nodes remain. For example, see the following picture:</p>
+<p align="center">
+  <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag_examples_cn.jpg" alt="dag示例"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
+<p><strong>Process definition</strong>: A visual <strong>DAG</strong> formed by dragging task nodes onto the canvas and establishing associations between them</p>
+<p><strong>Process instance</strong>: An instantiation of a process definition, generated by manual start or scheduled trigger. Each run of a process definition generates a new process instance</p>
+<p><strong>Task instance</strong>: An instantiation of a task node created when a process instance runs; it records the execution status of that specific task</p>
+<p><strong>Task type</strong>: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT (dependency), with dynamic plug-in extension planned. Note: a <strong>SUB_PROCESS</strong> is itself a separate process definition that can be launched on its own</p>
+<p><strong>Schedule mode</strong>: The system supports timed scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement, timer, rerun, pause, stop, resume waiting thread. The two command types <strong>resume fault-tolerant workflow</strong> and <strong>resume waiting thread</strong> are used by the sc [...]
+<p><strong>Timed schedule</strong>: The system uses the <strong>quartz</strong> distributed scheduler and supports visual generation of cron expressions</p>
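+<p>As an illustration only (not the project's actual code), a minimal Quartz sketch that registers a daily 05:00 cron trigger; the job class, identities, and cron expression here are hypothetical:</p>
+<pre><code class="language-java">import org.quartz.*;
+import org.quartz.impl.StdSchedulerFactory;
+
+public class CronExample {
+    // hypothetical job; EasyScheduler's real quartz jobs differ
+    public static class DemoJob implements Job {
+        @Override
+        public void execute(JobExecutionContext context) {
+            System.out.println("fired at " + context.getFireTime());
+        }
+    }
+
+    public static void main(String[] args) throws SchedulerException {
+        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
+        JobDetail job = JobBuilder.newJob(DemoJob.class)
+                .withIdentity("demo-job", "demo-group").build();
+        // "0 0 5 * * ?" = every day at 05:00
+        Trigger trigger = TriggerBuilder.newTrigger()
+                .withIdentity("demo-trigger", "demo-group")
+                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 5 * * ?"))
+                .build();
+        scheduler.scheduleJob(job, trigger);
+        scheduler.start();
+    }
+}
+</code></pre>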
+<p><strong>Dependency</strong>: The system supports not only simple predecessor/successor dependencies within a <strong>DAG</strong>, but also <strong>task dependency</strong> nodes, allowing custom task dependencies across processes</p>
+<p><strong>Priority</strong>: Supports priorities for process instances and task instances. If no priority is set, the default is first in, first out.</p>
+<p><strong>Mail alert</strong>: Supports emailing <strong>SQL task</strong> query results, process instance run-result alerts, and fault-tolerance alert notifications</p>
+<p><strong>Failure policy</strong>: For tasks running in parallel, two policies are available when a task fails. <strong>Continue</strong> means the remaining parallel tasks keep running until the process finishes, after which the process is marked failed. <strong>End</strong> means that once a failed task is found, the running parallel tasks are killed and the process ends immediately.</p>
+<p><strong>Complement</strong>: Backfills historical data over a date interval; supports two complement modes, <strong>parallel</strong> and <strong>serial</strong></p>
+<h3>2.System architecture</h3>
+<h4>2.1 System Architecture Diagram</h4>
+<p align="center">
+  <img src="https://user-images.githubusercontent.com/48329107/62609545-8f973480-b934-11e9-9a58-d8133222f14d.png" alt="System Architecture Diagram"  />
+  <p align="center">
+        <em>System Architecture Diagram</em>
+  </p>
+</p>
+<h4>2.2 Architectural description</h4>
+<ul>
+<li>
+<p><strong>MasterServer</strong></p>
+<p>MasterServer adopts a distributed, decentralized design. It is mainly responsible for DAG task splitting, task submission monitoring, and monitoring the health of the other MasterServers and WorkerServers.
+When the MasterServer service starts, it registers a temporary node with Zookeeper and performs fault tolerance by listening for state changes of the Zookeeper temporary nodes.</p>
+<h5>The service mainly contains:</h5>
+<ul>
+<li>
+<p><strong>Distributed Quartz</strong> is the distributed scheduling component, mainly responsible for starting and stopping scheduled tasks. When quartz picks up a task, an internal thread pool in the Master handles the task's subsequent operations.</p>
+</li>
+<li>
+<p><strong>MasterSchedulerThread</strong> is a scan thread that periodically scans the <strong>command</strong> table in the database and performs different business operations based on the <strong>command type</strong></p>
+</li>
+<li>
+<p><strong>MasterExecThread</strong> is mainly responsible for DAG task splitting, task submission monitoring, and the logical processing of the various command types</p>
+</li>
+<li>
+<p><strong>MasterTaskExecThread</strong> is mainly responsible for task persistence</p>
+</li>
+</ul>
+</li>
+<li>
+<p><strong>WorkerServer</strong></p>
+<ul>
+<li>
+<p>WorkerServer also adopts a distributed, non-central design concept. WorkerServer is mainly responsible for task execution and providing log services. When the WorkerServer service starts, it registers the temporary node with Zookeeper and maintains the heartbeat.</p>
+<h5>This service contains:</h5>
+<ul>
+<li><strong>FetchTaskThread</strong> is mainly responsible for continuously receiving tasks from <strong>Task Queue</strong> and calling <strong>TaskScheduleThread</strong> corresponding executors according to different task types.</li>
+<li><strong>LoggerServer</strong> is an RPC service that provides functions such as log fragment viewing, refresh and download.</li>
+</ul>
+</li>
+<li>
+<p><strong>ZooKeeper</strong></p>
+<p>Both the MasterServer and WorkerServer nodes use the ZooKeeper service for cluster management and fault tolerance. The system also performs event monitoring and distributed locking based on ZooKeeper.
+We also implemented queues based on Redis, but since we want EasyScheduler to depend on as few components as possible, the Redis implementation was eventually removed.</p>
+</li>
+<li>
+<p><strong>Task Queue</strong></p>
+<p>Provides task queue operations. The queue is currently also implemented on Zookeeper. Since each queue entry stores little information, there is no need to worry about excessive data in the queue; in fact we have stress-tested the queue with millions of entries, with no effect on system stability or performance.</p>
+</li>
+<li>
+<p><strong>Alert</strong></p>
+<p>Provides alarm-related interfaces, mainly the storage, query, and notification functions for the two types of alarm data. The notification function has two channels: <strong>mail notification</strong> and <strong>SNMP (not yet implemented)</strong>.</p>
+</li>
+<li>
+<p><strong>API</strong></p>
+<p>The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service provides RESTful APIs to external callers.
+Interfaces cover workflow creation, definition, query, modification, release, offline, manual start, stop, pause, resume, starting execution from a given node, and more.</p>
+</li>
+<li>
+<p><strong>UI</strong></p>
+<p>The front-end pages of the system provide its various visual operation interfaces. For details, see the <strong>System User Manual</strong> section.</p>
+</li>
+</ul>
+</li>
+</ul>
+<h4>2.3 Architectural Design Ideas</h4>
+<h5>I. Decentralization vs. centralization</h5>
+<h6>Centralized design</h6>
+<p>The centralized design concept is relatively simple: the nodes of the distributed cluster are divided into two roles:</p>
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave role" width="50%" />
+ </p>
+<ul>
+<li>The Master is mainly responsible for distributing tasks and supervising the health of the Slaves, and can dynamically balance tasks across Slaves so that no Slave node is left &quot;busy&quot; while others sit &quot;idle&quot;.</li>
+<li>The Worker (Slave) is mainly responsible for executing tasks and maintaining a heartbeat with the Master, so that the Master can assign tasks to it.</li>
+</ul>
+<p>Problems with the centralized design:</p>
+<ul>
+<li>Once the Master fails, the cluster is leaderless and collapses as a whole. To solve this, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switchover; more and more new systems can automatically elect and switch Masters to improve availability.</li>
+<li>Another problem is that if the Scheduler runs on the Master, although it allows different tasks of one DAG to run on different machines, it can overload the Master. If the Scheduler runs on the Slave, all tasks of a DAG can only be submitted from one machine, and with many parallel tasks that Slave may come under heavy pressure.</li>
+</ul>
+<h6>Decentralization</h6>
+ <p align="center"
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="decentralized" width="50%" />
+ </p>
+<ul>
+<li>
+<p>In a decentralized design there is usually no Master/Slave concept: all roles are the same and their status is equal. The global Internet is a typical decentralized distributed system, where any networked node going down affects only a small range of features.</p>
+</li>
+<li>
+<p>The core of decentralized design is that there is no &quot;manager&quot; distinct from the other nodes in the distributed system, so there is no single point of failure. However, since there is no &quot;manager&quot; node, each node must communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the functions above.</p>
+</li>
+<li>
+<p>In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. In such an architecture, the managers in the cluster are elected dynamically rather than preset, and when the cluster fails, the cluster's nodes spontaneously hold a &quot;meeting&quot; and elect a new &quot;manager&quot; to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in Go.</p>
+</li>
+<li>
+<p>The decentralization of EasyScheduler consists of registering the Masters/Workers in ZooKeeper. The Master cluster and the Worker cluster have no center, and a Zookeeper distributed lock is used to elect one Master or Worker as the “manager” to perform a task.</p>
+</li>
+</ul>
+<h5>II. Distributed lock practice</h5>
+<p>EasyScheduler uses ZooKeeper distributed locks so that only one Master executes the Scheduler at any given time, and only one Worker performs task submission at a time.</p>
+<ol>
+<li>The core process algorithm for obtaining distributed locks is as follows</li>
+</ol>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Get Distributed Lock Process" width="50%" />
+ </p>
+<ol start="2">
+<li>Scheduler thread distributed lock implementation flow chart in EasyScheduler:</li>
+</ol>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock_procss.png" alt="Get Distributed Lock Process" width="50%" />
+ </p>
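+<p>To make the pattern concrete, here is a minimal sketch using Apache Curator's InterProcessMutex; the connection string and lock path are illustrative assumptions, not the project's actual values:</p>
+<pre><code class="language-java">import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class SchedulerLockExample {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+        // hypothetical lock path; only one node holds the lock at a time
+        InterProcessMutex lock = new InterProcessMutex(client, "/easyscheduler/lock/masters");
+        lock.acquire();       // blocks until this node becomes the "manager"
+        try {
+            // ... do the work only one Master may do, e.g. run the Scheduler
+        } finally {
+            lock.release();   // let another Master take over
+        }
+        client.close();
+    }
+}
+</code></pre>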
+<h5>III. The insufficient-thread loop-waiting problem</h5>
+<ul>
+<li>If a DAG has no sub-processes, then when the number of Commands exceeds the threshold set by the thread pool, processes simply wait or fail.</li>
+<li>If a large DAG nests many sub-processes, the &quot;deadlocked&quot; state shown in the following figure can result:</li>
+</ul>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Thread is not enough to wait for loop" width="50%" />
+ </p>
+<p>In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread from the thread pool; the whole DAG can therefore never finish, and its threads are never released. Parent and child processes end up waiting on each other in a loop. At this point the scheduling cluster is no longer usable unless a new Master is started to add threads to brea [...]
+<p>It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three options to reduce this risk:</p>
+<ol>
+<li>Calculate the sum of the threads of all Masters, then calculate the number of threads each DAG needs, i.e. pre-compute before the DAG process executes. Since this is a multi-Master thread pool, the total number of threads is unlikely to be obtainable in real time.</li>
+<li>Check the single Master's thread pool; if the pool is full, fail the thread directly.</li>
+<li>Add a Command type for insufficient resources: if the thread pool is insufficient, the main process is suspended; once the thread pool has a free thread again, the suspended process is woken up.</li>
+</ol>
+<p>Note: the Master Scheduler thread fetches Commands in FIFO order.</p>
+<p>So we chose the third way to solve the problem of insufficient threads.</p>
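+<p>A rough sketch of the idea behind this third option, with hypothetical names (the real Command handling in EasyScheduler is more involved):</p>
+<pre><code class="language-java">import java.util.concurrent.ThreadPoolExecutor;
+
+public class ThreadGuardExample {
+    // hypothetical guard applied before a process is handed to the pool
+    static void submitProcess(ThreadPoolExecutor pool, Runnable process) {
+        if (pool.getActiveCount() >= pool.getMaximumPoolSize()) {
+            // no free thread: record a "waiting thread" Command so the
+            // process is suspended and can be woken up later
+            persistWaitingThreadCommand(process);
+            return;
+        }
+        pool.execute(process);
+    }
+
+    static void persistWaitingThreadCommand(Runnable process) {
+        // placeholder: a real system would write a row to the command table
+    }
+}
+</code></pre>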
+<h5>IV. Fault Tolerant Design</h5>
+<p>Fault tolerance is divided into service fault tolerance and task retry. Service fault tolerance is divided into two types: Master Fault Tolerance and Worker Fault Tolerance.</p>
+<h6>1. Downtime fault tolerance</h6>
+<p>Service fault tolerance design relies on ZooKeeper's Watcher mechanism. The implementation principle is as follows:</p>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="EasyScheduler Fault Tolerant Design" width="40%" />
+ </p>
+<p>The Master monitors the directories of the other Masters and of the Workers. If a remove event is detected, process-instance or task-instance fault tolerance is performed according to the specific business logic.</p>
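+<p>A hedged sketch of this Watcher-based detection using Curator's PathChildrenCache; the registration path and handling here are illustrative, not the project's actual code:</p>
+<pre><code class="language-java">import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.cache.PathChildrenCache;
+import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class FailoverWatchExample {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+        // hypothetical path under which live Workers register ephemeral nodes
+        PathChildrenCache cache = new PathChildrenCache(client, "/easyscheduler/workers", true);
+        cache.getListenable().addListener((c, event) -> {
+            // an ephemeral node disappearing signals a dead or disconnected node
+            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
+                System.out.println("node removed: " + event.getData().getPath());
+                // ... trigger process-instance / task-instance fault tolerance here
+            }
+        });
+        cache.start();
+    }
+}
+</code></pre>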
+<ul>
+<li>Master fault tolerance flow chart:</li>
+</ul>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master Fault Tolerance Flowchart" width="40%" />
+ </p>
+<p>After Master fault tolerance is triggered through ZooKeeper, the Scheduler thread in EasyScheduler reschedules the affected process instance: it traverses the DAG to find the &quot;running&quot; and &quot;submitted successfully&quot; tasks. For a &quot;running&quot; task it monitors the status of its task instance; for a &quot;submitted successfully&quot; task it checks whether the task already exists in the Task Queue: if it does, it likewise monitors the task instance's status, and if not, it resubmits the task instance.</p>
+<ul>
+<li>Worker fault tolerance flow chart:</li>
+</ul>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker Fault Tolerance Flowchart" width="40%" />
+ </p>
+<p>Once the Master Scheduler thread finds a task instance marked &quot;needs fault tolerance&quot;, it takes over the task and resubmits it.</p>
+<p>Note: because &quot;network jitter&quot; may cause a node to briefly lose its ZooKeeper heartbeat, producing a remove event for the node, we take the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service on it is stopped directly.</p>
+<h6>2. Task failure retry</h6>
+<p>Here we must first distinguish between the concepts of task failure retry, process failure recovery, and process failure rerun:</p>
+<ul>
+<li>Task failure retry is at the task level and is performed automatically by the scheduling system. For example, if a shell task is configured with 3 retries, the shell task will be retried up to 3 times after a failed run.</li>
+<li>Process failure recovery is at the process level and is performed manually; recovery can start only <strong>from the failed node</strong> or <strong>from the current node</strong>.</li>
+<li>Process failure rerun is also at the process level and is performed manually; a rerun starts from the start node.</li>
+</ul>
+<p>Back to the main topic: we divide the task nodes in a workflow into two types.</p>
+<ul>
+<li>One is the business node, which corresponds to an actual script or processing statement, such as the Shell node, the MR node, the Spark node, or the dependent node.</li>
+<li>The other is the logical node, which performs no actual script or statement processing but handles the logic of the overall process flow, such as the sub-process node.</li>
+</ul>
+<p>Each <strong>business node</strong> can be configured with a number of failure retries. When a task node fails, it is retried automatically until it succeeds or the configured retry count is exceeded. A <strong>logical node</strong> does not support failure retries, but the tasks inside it do.</p>
+<p>If a task in the workflow fails and reaches its maximum retry count, the workflow fails and stops; the failed workflow can then be rerun manually or recovered from the point of failure.</p>
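+<p>A minimal sketch of the task-level retry just described; the names are illustrative, and in EasyScheduler this logic lives inside the scheduler rather than in user code:</p>
+<pre><code class="language-java">import java.util.concurrent.Callable;
+
+public class RetryExample {
+    // run a task once, plus up to maxRetryTimes extra attempts after failures
+    static boolean runWithRetry(Callable&lt;Boolean&gt; task, int maxRetryTimes) {
+        for (int attempt = 0; attempt &lt;= maxRetryTimes; attempt++) {
+            try {
+                if (task.call()) {
+                    return true;   // success: stop retrying
+                }
+            } catch (Exception e) {
+                // an exception counts as a failed attempt
+            }
+        }
+        return false;              // configured retry count exceeded
+    }
+}
+</code></pre>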
+<h5>V. Task priority design</h5>
+<p>The early scheduling design had neither a priority design nor a fair-scheduling design: a task submitted first might only finish together with a task submitted later, and the priority of a process or task could not be set. We have since redesigned this, currently as follows:</p>
+<ul>
+<li>
+<p>Tasks are processed in descending order of <strong>process instance priority</strong> first; within the same process instance priority, by <strong>task priority within the process</strong>; and within the same process, by submission order.</p>
+<ul>
+<li>
+<p>The concrete implementation is to parse the priority from the JSON of the task instance and then save the <strong>process instance priority_process instance id_task priority_task id</strong> information in the ZooKeeper task queue. When entries are fetched from the queue, a string comparison yields the task that should execute first (see the sketch after this list).</p>
+<ul>
+<li>
+<p>The priority of a process definition is for cases where some processes need to be handled before others. It can be configured when the process is started or scheduled to start. There are 5 levels, from high to low: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below</p>
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process Priority Configuration" width="40%" />
+ </p>
+</li>
+<li>
+<p>The priority of a task is likewise divided into 5 levels, from high to low: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below</p>
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="task priority configuration" width="35%" />
+ </p>
+</li>
+</ul>
+</li>
+</ul>
+</li>
+</ul>
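+<p>A small sketch of the key scheme described above; the exact key layout in EasyScheduler may differ, so treat the format string and the fixed-width simplification as assumptions:</p>
+<pre><code class="language-java">import java.util.SortedSet;
+import java.util.TreeSet;
+
+public class PriorityKeyExample {
+    // lower ordinal = higher priority, so "0..." (HIGHEST) sorts before "4..." (LOWEST)
+    enum Priority { HIGHEST, HIGH, MEDIUM, LOW, LOWEST }
+
+    static String queueKey(Priority processInstancePriority, int processInstanceId,
+                           Priority taskPriority, int taskId) {
+        // a real implementation would zero-pad the ids so that string order
+        // matches numeric order; omitted here for brevity
+        return String.format("%d_%d_%d_%d",
+                processInstancePriority.ordinal(), processInstanceId,
+                taskPriority.ordinal(), taskId);
+    }
+
+    public static void main(String[] args) {
+        SortedSet&lt;String&gt; queue = new TreeSet&lt;&gt;();
+        queue.add(queueKey(Priority.MEDIUM, 10, Priority.LOW, 7));
+        queue.add(queueKey(Priority.HIGHEST, 11, Priority.HIGH, 3));
+        // plain string comparison surfaces the highest-priority task first
+        System.out.println(queue.first());   // prints 0_11_1_3
+    }
+}
+</code></pre>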
+<h5>VI. Log access via Logback and gRPC</h5>
+<ul>
+<li>Since the Web (UI) and the Worker are not necessarily on the same machine, viewing logs is not the same as reading local files. There are two options:
+<ul>
+<li>Put the logs on the ES search engine</li>
+<li>Obtain remote log information through gRPC communication</li>
+</ul>
+</li>
+<li>To keep EasyScheduler as lightweight as possible, gRPC was chosen to implement remote access to log information.</li>
+</ul>
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access" width="50%" />
+ </p>
+<ul>
+<li>We use a custom Logback FileAppender and Filter function to generate a log file for each task instance.</li>
+<li>The main implementation of FileAppender is as follows:</li>
+</ul>
+<pre><code class="language-java"> <span class="hljs-comment">/**
+  * task log appender
+  */</span>
+ <span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TaskLogAppender</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">FileAppender</span>&lt;<span class="hljs-title">ILoggingEvent</span>&gt;</span> {
+ 
+     ...
+
+    <span class="hljs-meta">@Override</span>
+    <span class="hljs-function">Protected <span class="hljs-keyword">void</span> <span class="hljs-title">append</span><span class="hljs-params">(ILoggingEvent event)</span> </span>{
+
+        <span class="hljs-keyword">if</span> (currentlyActiveFile == <span class="hljs-keyword">null</span>){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        <span class="hljs-comment">// thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId</span>
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split(<span class="hljs-string">"-"</span>);
+        <span class="hljs-comment">// logId = processDefineId_processInstanceId_taskInstanceId</span>
+        String logId = threadNameArr[<span class="hljs-number">1</span>];
+        ...
+        <span class="hljs-keyword">super</span>.subAppend(event);
+    }
+}
+</code></pre>
+<p>A log file is generated per task instance, at a path of the form /process definition id/process instance id/task instance id.log</p>
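+<p>For example, assuming the thread-name convention from the comment above, the worker side would name the task thread so that the appender can derive the log id (illustrative only):</p>
+<pre><code class="language-java">public class ThreadNameExample {
+    public static void main(String[] args) {
+        int processDefineId = 1, processInstanceId = 100, taskInstanceId = 1000;
+        Thread t = new Thread(() -> { /* execute the task */ });
+        // "taskThreadName-processDefineId_processInstanceId_taskInstanceId"
+        t.setName("TaskLogInfo-" + processDefineId + "_" + processInstanceId + "_" + taskInstanceId);
+        t.start();
+        // TaskLogAppender splits the name on "-" and takes "1_100_1000" as the
+        // log id, yielding a path like /1/100/1000.log
+    }
+}
+</code></pre>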
+<ul>
+<li>Filter matches the thread name starting with TaskLogInfo:</li>
+<li>TaskLogFilter is implemented as follows:</li>
+</ul>
+<pre><code class="language-java"> <span class="hljs-comment">/**
+ * task log filter
+ */</span>
+<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TaskLogFilter</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">Filter</span>&lt;<span class="hljs-title">ILoggingEvent</span>&gt;</span> {
+
+    <span class="hljs-meta">@Override</span>
+    <span class="hljs-function">Public FilterReply <span class="hljs-title">decide</span><span class="hljs-params">(ILoggingEvent event)</span> </span>{
+        If (event.getThreadName().startsWith(<span class="hljs-string">"TaskLogInfo-"</span>)){
+            Return FilterReply.ACCEPT;
+        }
+        Return FilterReply.DENY;
+    }
+}
+</code></pre>
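+<p>To show how the two classes could be wired together, here is a hedged sketch of programmatic Logback configuration; in practice this is usually done in logback.xml, and the pattern and file path below are assumptions:</p>
+<pre><code class="language-java">import ch.qos.logback.classic.Logger;
+import ch.qos.logback.classic.LoggerContext;
+import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
+import org.slf4j.LoggerFactory;
+
+public class LogWiringExample {
+    public static void main(String[] args) {
+        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
+
+        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
+        encoder.setContext(context);
+        encoder.setPattern("%date %level %msg%n");
+        encoder.start();
+
+        TaskLogAppender appender = new TaskLogAppender();   // the appender shown above
+        appender.setContext(context);
+        appender.setFile("/tmp/task.log");                  // hypothetical initial file
+        appender.setEncoder(encoder);
+        appender.addFilter(new TaskLogFilter());            // the filter shown above
+        appender.start();
+
+        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
+        root.addAppender(appender);
+    }
+}
+</code></pre>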
+<h3>Summary</h3>
+<p>Starting from scheduling itself, this post has introduced the architectural principles and implementation ideas of EasyScheduler, a distributed workflow scheduling system for big data. To be continued.</p>
+</section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
+Incubation is required of all newly accepted projects until a further review indicates 
+that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
+While incubation status is not necessarily a reflection of the completeness or stability of the code, 
+it does indicate that the project has yet to be fully endorsed by the ASF.</p></div><div class="col col-6"><dl><dt>Documentation</dt><dd><a href="/en-us/docs/developer_guide/architecture-design.html" target="_self">Overview</a></dd><dd><a href="/en-us/docs/user_doc/quick-start.html" target="_self">Quick start</a></dd><dd><a href="/en-us/docs/development/developers.html" target="_self">Developer guide</a></dd></dl></div><div class="col col-6"><dl><dt>ASF</dt><dd><a href="http://www.apache [...]
+	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
+	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
+	<script>
+		window.rootPath = '';
+  </script>
+	<script src="/build/blogDetail.js"></script>
+</body>
+</html>
\ No newline at end of file
diff --git a/en-us/blog/architecture-design.json b/en-us/blog/architecture-design.json
new file mode 100644
index 0000000..3952afd
--- /dev/null
+++ b/en-us/blog/architecture-design.json
@@ -0,0 +1,6 @@
+{
+  "filename": "architecture-design.md",
+  "__html": "<h2>Architecture Design</h2>\n<p>Before explaining the architecture of the schedule system, let us first understand the common nouns of the schedule system.</p>\n<h3>1.Noun Interpretation</h3>\n<p><strong>DAG:</strong> Full name Directed Acyclic Graph,referred to as DAG。Tasks in the workflow are assembled in the form of directed acyclic graphs, which are topologically traversed from nodes with zero indegrees of ingress until there are no successor nodes. For example, the fol [...]
+  "link": "/en-us/blog/architecture-design.html",
+  "meta": {}
+}
\ No newline at end of file
diff --git a/en-us/blog/index.html b/en-us/blog/index.html
new file mode 100644
index 0000000..fc5d2a7
--- /dev/null
+++ b/en-us/blog/index.html
@@ -0,0 +1,27 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+	<meta charset="UTF-8">
+	<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
+	<meta name="keywords" content="blog,DolphinScheduler blog" />
+	<meta name="description" content="page description" />
+	<!-- page tab title -->
+	<title>Apache DolphinScheduler | BLOG</title>
+	<link rel="shortcut icon" href="/img/docsite.ico"/>
+	<link rel="stylesheet" href="/build/blog.css" />
+</head>
+<body>
+	<div id="root"><div class="blog-list-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_colorful.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class="language-switch language-switch-normal">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_gray.png"/><ul><li class="menu-item menu-item-normal"> [...]
+Incubation is required of all newly accepted projects until a further review indicates 
+that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
+While incubation status is not necessarily a reflection of the completeness or stability of the code, 
+it does indicate that the project has yet to be fully endorsed by the ASF.</p></div><div class="col col-6"><dl><dt>Documentation</dt><dd><a href="/en-us/docs/developer_guide/architecture-design.html" target="_self">Overview</a></dd><dd><a href="/en-us/docs/user_doc/quick-start.html" target="_self">Quick start</a></dd><dd><a href="/en-us/docs/development/developers.html" target="_self">Developer guide</a></dd></dl></div><div class="col col-6"><dl><dt>ASF</dt><dd><a href="http://www.apache [...]
+	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
+	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
+	<script>
+		window.rootPath = '';
+  </script>
+	<script src="/build/blog.js"></script>
+</body>
+</html>
\ No newline at end of file
diff --git a/en-us/blog/meetup_2019_10_26.html b/en-us/blog/meetup_2019_10_26.html
new file mode 100644
index 0000000..af7a35a
--- /dev/null
+++ b/en-us/blog/meetup_2019_10_26.html
@@ -0,0 +1,40 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+	<meta charset="UTF-8">
+	<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
+	<meta name="keywords" content="meetup_2019_10_26" />
+	<meta name="description" content="meetup_2019_10_26" />
+	<!-- page tab title -->
+	<title>meetup_2019_10_26</title>
+	<link rel="shortcut icon" href="/img/docsite.ico"/>
+	<link rel="stylesheet" href="/build/blogDetail.css" />
+</head>
+<body>
+	<div id="root"><div class="blog-detail-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_colorful.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class="language-switch language-switch-normal">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_gray.png"/><ul><li class="menu-item menu-item-normal [...]
+The Apache DolphinScheduler (Incubating) Meetup was successfully held in Shanghai on 2019.10.26.</p>
+<p>Address: Shanghai Changning Yuyuan Road 1107 Chuangyi space (Hongji) 3r20</p>
+<p>The meetup began at 2:00 pm and ended at about 5:00 pm.</p>
+<p>The agenda was as follows:</p>
+<ul>
+<li>Introduction/overview of DolphinScheduler (William-GuoWei 14:00-14:20). <a href="/download/2019-10-26/DolphinScheduler_guowei.pptx">PPT</a></li>
+<li>DolphinScheduler internals, fairly technical: how DolphinScheduler works and so on (Zhanwei Qiao 14:20-15:00) <a href="/download/2019-10-26/DolphinScheduler_qiaozhanwei.pptx">PPT</a></li>
+<li>From open source users to PPMC --- Talking about Me and DolphinScheduler(Baoqi Wu from guandata 15:10-15:50)<a href="/download/2019-10-26/Dolphinescheduler_baoqiwu.pptx">PPT</a></li>
+<li>Application and Practice of Dolphin Scheduler in cathay-ins (Zongyao Zhang from cathay-ins 15:50-16:30)<a href="/download/2019-10-26/DolphinScheduler_zhangzongyao.pptx">PPT</a></li>
+<li>Recently released features and Roadmap (Lidong Dai from analysys 16:30-17:00) <a href="/download/2019-10-26/DolphinScheduler_dailidong.pptx">PPT</a></li>
+<li>Free discussion</li>
+</ul>
+</section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
+Incubation is required of all newly accepted projects until a further review indicates 
+that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
+While incubation status is not necessarily a reflection of the completeness or stability of the code, 
+it does indicate that the project has yet to be fully endorsed by the ASF.</p></div><div class="col col-6"><dl><dt>Documentation</dt><dd><a href="/en-us/docs/developer_guide/architecture-design.html" target="_self">Overview</a></dd><dd><a href="/en-us/docs/user_doc/quick-start.html" target="_self">Quick start</a></dd><dd><a href="/en-us/docs/development/developers.html" target="_self">Developer guide</a></dd></dl></div><div class="col col-6"><dl><dt>ASF</dt><dd><a href="http://www.apache [...]
+	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
+	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
+	<script>
+		window.rootPath = '';
+  </script>
+	<script src="/build/blogDetail.js"></script>
+</body>
+</html>
\ No newline at end of file
diff --git a/en-us/blog/meetup_2019_10_26.json b/en-us/blog/meetup_2019_10_26.json
new file mode 100644
index 0000000..37dbf45
--- /dev/null
+++ b/en-us/blog/meetup_2019_10_26.json
@@ -0,0 +1,6 @@
+{
+  "filename": "meetup_2019_10_26.md",
+  "__html": "<p><img src=\"/img/2019-10-26-user.jpg\" alt=\"avatar\">\nApache Dolphin Scheduler(Incubating) Meetup has been held successfully in Shanghai 2019.10.26.</p>\n<p>Address: Shanghai Changning Yuyuan Road 1107 Chuangyi space (Hongji) 3r20</p>\n<p>The meetup was begin at 2:00 pm, and close at about 5:00 pm.</p>\n<p>The Agenda is as following:</p>\n<ul>\n<li>Introduction/overview of DolphinScheduler (William-GuoWei 14:00-14:20). <a href=\"/download/2019-10-26/DolphinScheduler_guow [...]
+  "link": "/en-us/blog/meetup_2019_10_26.html",
+  "meta": {}
+}
\ No newline at end of file
diff --git a/en-us/index.html b/en-us/index.html
index 069b410..4fb2dba 100644
--- a/en-us/index.html
+++ b/en-us/index.html
@@ -14,7 +14,7 @@
 <body>
 	<div id="root"><div class="home-page" data-reactroot=""><section class="top-section"><header class="header-container header-container-primary"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-primary"><span class="icon-search"></span></div><span class="language-switch language-switch-primary">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><ul><li class="men [...]
 Incubation is required of all newly accepted projects until a further review indicates 
-that the infrastructure, communications, decision making process have stabilized in a manner consistent with other successful ASF projects.
+that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
 While incubation status is not necessarily a reflection of the completeness or stability of the code, 
 it does indicate that the project has yet to be fully endorsed by the ASF.</p></div><div class="col col-6"><dl><dt>Documentation</dt><dd><a href="/en-us/docs/developer_guide/architecture-design.html" target="_self">Overview</a></dd><dd><a href="/en-us/docs/user_doc/quick-start.html" target="_self">Quick start</a></dd><dd><a href="/en-us/docs/development/developers.html" target="_self">Developer guide</a></dd></dl></div><div class="col col-6"><dl><dt>ASF</dt><dd><a href="http://www.apache [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
diff --git a/img/addtenant.png b/img/addtenant.png
new file mode 100755
index 0000000..c3909ec
Binary files /dev/null and b/img/addtenant.png differ
diff --git a/img/auth_project.png b/img/auth_project.png
new file mode 100755
index 0000000..96c05d8
Binary files /dev/null and b/img/auth_project.png differ
diff --git a/img/auth_user.png b/img/auth_user.png
new file mode 100755
index 0000000..e03f00c
Binary files /dev/null and b/img/auth_user.png differ
diff --git a/img/complement.png b/img/complement.png
new file mode 100755
index 0000000..058311f
Binary files /dev/null and b/img/complement.png differ
diff --git a/img/create-queue.png b/img/create-queue.png
new file mode 100644
index 0000000..537df26
Binary files /dev/null and b/img/create-queue.png differ
diff --git a/img/dag1.png b/img/dag1.png
new file mode 100644
index 0000000..67ff1bd
Binary files /dev/null and b/img/dag1.png differ
diff --git a/img/dag2.png b/img/dag2.png
new file mode 100644
index 0000000..6be79b3
Binary files /dev/null and b/img/dag2.png differ
diff --git a/img/dag3.png b/img/dag3.png
new file mode 100644
index 0000000..d3afd13
Binary files /dev/null and b/img/dag3.png differ
diff --git a/img/dag4.png b/img/dag4.png
new file mode 100755
index 0000000..b3ec5a8
Binary files /dev/null and b/img/dag4.png differ
diff --git a/img/depend-node.png b/img/depend-node.png
new file mode 100644
index 0000000..5f2b8bd
Binary files /dev/null and b/img/depend-node.png differ
diff --git a/img/depend-node2.png b/img/depend-node2.png
new file mode 100644
index 0000000..e8286e5
Binary files /dev/null and b/img/depend-node2.png differ
diff --git a/img/depend-node3.png b/img/depend-node3.png
new file mode 100644
index 0000000..6c4a624
Binary files /dev/null and b/img/depend-node3.png differ
diff --git a/img/dependent_edit.png b/img/dependent_edit.png
new file mode 100755
index 0000000..b007cac
Binary files /dev/null and b/img/dependent_edit.png differ
diff --git a/img/file-manage.png b/img/file-manage.png
new file mode 100644
index 0000000..e306f71
Binary files /dev/null and b/img/file-manage.png differ
diff --git a/img/file_create.png b/img/file_create.png
new file mode 100755
index 0000000..464b179
Binary files /dev/null and b/img/file_create.png differ
diff --git a/img/file_detail.png b/img/file_detail.png
new file mode 100755
index 0000000..726f9bf
Binary files /dev/null and b/img/file_detail.png differ
diff --git a/img/file_rename.png b/img/file_rename.png
new file mode 100755
index 0000000..bcbc6da
Binary files /dev/null and b/img/file_rename.png differ
diff --git a/img/file_upload.png b/img/file_upload.png
new file mode 100755
index 0000000..b2f36ea
Binary files /dev/null and b/img/file_upload.png differ
diff --git a/img/gant-pic.png b/img/gant-pic.png
new file mode 100644
index 0000000..d1befa9
Binary files /dev/null and b/img/gant-pic.png differ
diff --git a/img/global_parameter.png b/img/global_parameter.png
new file mode 100755
index 0000000..9fb415c
Binary files /dev/null and b/img/global_parameter.png differ
diff --git a/img/hive_edit.png b/img/hive_edit.png
new file mode 100755
index 0000000..50d0eed
Binary files /dev/null and b/img/hive_edit.png differ
diff --git a/img/hive_edit2.png b/img/hive_edit2.png
new file mode 100755
index 0000000..789d65f
Binary files /dev/null and b/img/hive_edit2.png differ
diff --git a/img/hive_kerberos.png b/img/hive_kerberos.png
new file mode 100755
index 0000000..1532934
Binary files /dev/null and b/img/hive_kerberos.png differ
diff --git a/img/instance-detail.png b/img/instance-detail.png
new file mode 100644
index 0000000..fd17e47
Binary files /dev/null and b/img/instance-detail.png differ
diff --git a/img/instance-list.png b/img/instance-list.png
new file mode 100644
index 0000000..992ce15
Binary files /dev/null and b/img/instance-list.png differ
diff --git a/img/local_parameter.png b/img/local_parameter.png
new file mode 100755
index 0000000..1eac919
Binary files /dev/null and b/img/local_parameter.png differ
diff --git a/img/login.jpg b/img/login.jpg
new file mode 100755
index 0000000..b6574e1
Binary files /dev/null and b/img/login.jpg differ
diff --git a/img/mail_edit.png b/img/mail_edit.png
new file mode 100755
index 0000000..a7ca3f7
Binary files /dev/null and b/img/mail_edit.png differ
diff --git a/img/master-jk.png b/img/master-jk.png
new file mode 100644
index 0000000..a965945
Binary files /dev/null and b/img/master-jk.png differ
diff --git a/img/mr_edit.png b/img/mr_edit.png
new file mode 100755
index 0000000..1fa8549
Binary files /dev/null and b/img/mr_edit.png differ
diff --git a/img/mysql-jk.png b/img/mysql-jk.png
new file mode 100644
index 0000000..888c1c1
Binary files /dev/null and b/img/mysql-jk.png differ
diff --git a/img/mysql_edit.png b/img/mysql_edit.png
new file mode 100755
index 0000000..1ae75cb
Binary files /dev/null and b/img/mysql_edit.png differ
diff --git a/img/postgresql_edit.png b/img/postgresql_edit.png
new file mode 100755
index 0000000..79c1eec
Binary files /dev/null and b/img/postgresql_edit.png differ
diff --git a/img/procedure_edit.png b/img/procedure_edit.png
new file mode 100755
index 0000000..e6d31ab
Binary files /dev/null and b/img/procedure_edit.png differ
diff --git a/img/project.png b/img/project.png
new file mode 100644
index 0000000..46bc4e4
Binary files /dev/null and b/img/project.png differ
diff --git a/img/python_edit.png b/img/python_edit.png
new file mode 100755
index 0000000..e2f6380
Binary files /dev/null and b/img/python_edit.png differ
diff --git a/img/run-work.png b/img/run-work.png
new file mode 100755
index 0000000..f069942
Binary files /dev/null and b/img/run-work.png differ
diff --git a/img/shell_edit.png b/img/shell_edit.png
new file mode 100755
index 0000000..1fe8870
Binary files /dev/null and b/img/shell_edit.png differ
diff --git a/img/spark_datesource.png b/img/spark_datesource.png
new file mode 100755
index 0000000..ac30d9f
Binary files /dev/null and b/img/spark_datesource.png differ
diff --git a/img/spark_edit.png b/img/spark_edit.png
new file mode 100755
index 0000000..b7c2321
Binary files /dev/null and b/img/spark_edit.png differ
diff --git a/img/sparksql_kerberos.png b/img/sparksql_kerberos.png
new file mode 100755
index 0000000..761279b
Binary files /dev/null and b/img/sparksql_kerberos.png differ
diff --git a/img/sql-node.png b/img/sql-node.png
new file mode 100644
index 0000000..97260ef
Binary files /dev/null and b/img/sql-node.png differ
diff --git a/img/sql-node2.png b/img/sql-node2.png
new file mode 100644
index 0000000..0163d5d
Binary files /dev/null and b/img/sql-node2.png differ
diff --git a/img/subprocess_edit.png b/img/subprocess_edit.png
new file mode 100755
index 0000000..6a2152a
Binary files /dev/null and b/img/subprocess_edit.png differ
diff --git a/img/task-list.png b/img/task-list.png
new file mode 100644
index 0000000..9072bc0
Binary files /dev/null and b/img/task-list.png differ
diff --git a/img/task-log.png b/img/task-log.png
new file mode 100644
index 0000000..b8aea54
Binary files /dev/null and b/img/task-log.png differ
diff --git a/img/task-log2.png b/img/task-log2.png
new file mode 100644
index 0000000..20ccb95
Binary files /dev/null and b/img/task-log2.png differ
diff --git a/img/task_history.png b/img/task_history.png
new file mode 100755
index 0000000..07f8ad6
Binary files /dev/null and b/img/task_history.png differ
diff --git a/img/time-schedule.png b/img/time-schedule.png
new file mode 100755
index 0000000..0a6d6ba
Binary files /dev/null and b/img/time-schedule.png differ
diff --git a/img/time-schedule2.png b/img/time-schedule2.png
new file mode 100755
index 0000000..fdfc5be
Binary files /dev/null and b/img/time-schedule2.png differ
diff --git a/img/udf_edit.png b/img/udf_edit.png
new file mode 100755
index 0000000..eb5df04
Binary files /dev/null and b/img/udf_edit.png differ
diff --git a/img/useredit2.png b/img/useredit2.png
new file mode 100755
index 0000000..0e9f5d7
Binary files /dev/null and b/img/useredit2.png differ
diff --git a/img/worker-jk.png b/img/worker-jk.png
new file mode 100644
index 0000000..e1cfbb6
Binary files /dev/null and b/img/worker-jk.png differ
diff --git a/img/worker1.png b/img/worker1.png
new file mode 100644
index 0000000..03d4e00
Binary files /dev/null and b/img/worker1.png differ
diff --git a/img/zk-jk.png b/img/zk-jk.png
new file mode 100644
index 0000000..ffdcd63
Binary files /dev/null and b/img/zk-jk.png differ
diff --git a/zh-cn/docs/user_doc/quick-start.html b/zh-cn/docs/user_doc/quick-start.html
index 57d0b2c..4a15f4d 100644
--- a/zh-cn/docs/user_doc/quick-start.html
+++ b/zh-cn/docs/user_doc/quick-start.html
@@ -21,31 +21,31 @@
 </li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/login.jpg" width="60%" />
+   <img src="/img/login.jpg" width="60%" />
  </p>
 <ul>
 <li>创建队列</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/create-queue.png" width="60%" />
+   <img src="/img/create-queue.png" width="60%" />
  </p>
 <ul>
 <li>创建租户</li>
 </ul>
    <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/addtenant.png" width="60%" />
+    <img src="/img/addtenant.png" width="60%" />
   </p>
 <ul>
 <li>创建普通用户</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/useredit2.png" width="60%" />
+   <img src="/img/useredit2.png" width="60%" />
  </p>
 <ul>
 <li>创建告警组</li>
 </ul>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mail_edit.png" width="60%" />
+    <img src="/img/mail_edit.png" width="60%" />
   </p>
 <ul>
 <li>使用普通用户登录</li>
@@ -57,19 +57,19 @@
 <li>项目管理-&gt;创建项目-&gt;点击项目名称</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/project.png" width="60%" />
+   <img src="/img/project.png" width="60%" />
  </p>
 <ul>
 <li>点击工作流定义-&gt;创建工作流定义-&gt;上线工作流定义</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag1.png" width="60%" />
+   <img src="/img/dag1.png" width="60%" />
  </p>
 <ul>
 <li>运行工作流定义-&gt;点击工作流实例-&gt;点击工作流实例名称-&gt;双击任务节点-&gt;查看任务执行日志</li>
 </ul>
  <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log.png" width="60%" />
+   <img src="/img/task-log.png" width="60%" />
 </p></div></section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
 Incubation is required of all newly accepted projects until a further review indicates 
 that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
diff --git a/zh-cn/docs/user_doc/quick-start.json b/zh-cn/docs/user_doc/quick-start.json
index f23bc83..f13003a 100644
--- a/zh-cn/docs/user_doc/quick-start.json
+++ b/zh-cn/docs/user_doc/quick-start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick-start.md",
-  "__html": "<h1>快速上手</h1>\n<ul>\n<li>管理员用户登录\n<blockquote>\n<p>地址:192.168.xx.xx:8888 用户名密码:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/login.jpg\" width=\"60%\" />\n </p>\n<ul>\n<li>创建队列</li>\n</ul>\n<p align=\"center\">\n   <img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/create-queue.png\" width=\"60%\" />\n </p>\n<ul>\n<li>创建租户</li>\n</ul>\n   <p align=\"cente [...]
+  "__html": "<h1>快速上手</h1>\n<ul>\n<li>管理员用户登录\n<blockquote>\n<p>地址:192.168.xx.xx:8888 用户名密码:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login.jpg\" width=\"60%\" />\n </p>\n<ul>\n<li>创建队列</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue.png\" width=\"60%\" />\n </p>\n<ul>\n<li>创建租户</li>\n</ul>\n   <p align=\"center\">\n    <img src=\"/img/addtenant.png\" width=\"60%\" />\n  </p>\n<ul>\n<li>创建普通用户</li>\n</ul>\n<p a [...]
   "link": "/zh-cn/docs/user_doc/quick-start.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/user_doc/system-manual.html b/zh-cn/docs/user_doc/system-manual.html
index 46b1534..2076081 100644
--- a/zh-cn/docs/user_doc/system-manual.html
+++ b/zh-cn/docs/user_doc/system-manual.html
@@ -24,7 +24,7 @@
 <li>点击项目名称,进入项目首页。</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/project.png" width="60%" />
+   <img src="/img/project.png" width="60%" />
  </p>
 <blockquote>
 <p>项目首页其中包含任务状态统计,流程状态统计、工作流定义统计</p>
@@ -45,25 +45,25 @@
 <li>填写&quot;自定义参数&quot;,参考<a href="#%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%8F%82%E6%95%B0">自定义参数</a></li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag1.png" width="60%" />
+   <img src="/img/dag1.png" width="60%" />
  </p>
 <ul>
 <li>增加节点之间执行的先后顺序: 点击”线条连接“;如图示,任务1和任务3并行执行,当任务1执行完,任务2、3会同时执行。</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag2.png" width="60%" />
+   <img src="/img/dag2.png" width="60%" />
  </p>
 <ul>
 <li>删除依赖关系: 点击箭头图标”拖动节点和选中项“,选中连接线,点击删除图标,删除节点间依赖关系。</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag3.png" width="60%" />
+   <img src="/img/dag3.png" width="60%" />
  </p>
 <ul>
 <li>点击”保存“,输入工作流定义名称,工作流定义描述,设置全局参数,参考<a href="#%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%8F%82%E6%95%B0">自定义参数</a>。</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag4.png" width="60%" />
+   <img src="/img/dag4.png" width="60%" />
  </p>
 <ul>
 <li>其他类型节点,请参考 <a href="#%E4%BB%BB%E5%8A%A1%E8%8A%82%E7%82%B9%E7%B1%BB%E5%9E%8B%E5%92%8C%E5%8F%82%E6%95%B0%E8%AE%BE%E7%BD%AE">任务节点类型和参数设置</a></li>
@@ -92,13 +92,13 @@
 </li>
 </ul>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/run-work.png" width="60%" />
+   <img src="/img/run-work.png" width="60%" />
  </p>
 <ul>
 <li>补数: 执行指定日期的工作流定义,可以选择补数时间范围(目前只支持针对连续的天进行补数),比如要补5月1号到5月10号的数据,如图示:</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/complement.png" width="60%" />
+   <img src="/img/complement.png" width="60%" />
  </p>
 <blockquote>
 <p>补数执行模式有<strong>串行执行、并行执行</strong>,串行模式下,补数会从5月1号到5月10号依次执行;并行模式下,会同时执行5月1号到5月10号的任务。</p>
@@ -109,13 +109,13 @@
 <li>选择起止时间,在起止时间范围内,定时正常工作,超过范围,就不会再继续产生定时工作流实例了。</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/time-schedule.png" width="60%" />
+   <img src="/img/time-schedule.png" width="60%" />
  </p>
 <ul>
 <li>添加一个每天凌晨5点执行一次的定时,如图示:</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/time-schedule2.png" width="60%" />
+   <img src="/img/time-schedule2.png" width="60%" />
  </p>
 <ul>
 <li>定时上线,<strong>新创建的定时是下线状态,需要点击“定时管理-&gt;上线”,定时才能正常工作</strong>。</li>
@@ -128,25 +128,25 @@
 <p>点击工作流名称,查看任务执行状态。</p>
 </blockquote>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/instance-detail.png" width="60%" />
+   <img src="/img/instance-detail.png" width="60%" />
  </p>
 <blockquote>
 <p>点击任务节点,点击“查看日志”,查看任务执行日志。</p>
 </blockquote>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log.png" width="60%" />
+   <img src="/img/task-log.png" width="60%" />
  </p>
 <blockquote>
 <p>点击任务实例节点,点击<strong>查看历史</strong>,可以查看该工作流实例运行的该任务实例列表</p>
 </blockquote>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_history.png" width="60%" />
+    <img src="/img/task_history.png" width="60%" />
   </p>
 <blockquote>
 <p>对工作流实例的操作:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/instance-list.png" width="60%" />
+   <img src="/img/instance-list.png" width="60%" />
 </p>
 <ul>
 <li>编辑:可以对已经终止的流程进行编辑,编辑后保存的时候,可以选择是否更新到工作流定义。</li>
@@ -159,20 +159,20 @@
 <li>甘特图:Gantt图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/gant-pic.png" width="60%" />
+   <img src="/img/gant-pic.png" width="60%" />
 </p>
 <h3>View Task Instances</h3>
 <blockquote>
 <p>Click “Task Instance” to enter the task list page and query task execution status</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-list.png" width="60%" />
+   <img src="/img/task-list.png" width="60%" />
 </p>
 <blockquote>
 <p>Click “View Log” in the operation column to view the log of the task execution.</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log2.png" width="60%" />
+   <img src="/img/task-log2.png" width="60%" />
 </p>
 <h3>Create a Data Source</h3>
 <blockquote>
@@ -212,7 +212,7 @@
 </li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mysql_edit.png" width="60%" />
+   <img src="/img/mysql_edit.png" width="60%" />
  </p>
 <blockquote>
 <p>Click “Test Connect” to check whether the data source can be connected successfully.</p>
@@ -230,12 +230,12 @@
 <li>JDBC connection parameters: parameter settings for the POSTGRESQL connection, entered in JSON form (a sample is sketched below the screenshot)</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/postgresql_edit.png" width="60%" />
+   <img src="/img/postgresql_edit.png" width="60%" />
  </p>
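 <blockquote>
 <p>A hedged sketch of the JSON form; the keys below are standard PostgreSQL JDBC driver properties chosen for illustration, not values taken from the screenshot:</p>
 </blockquote>
 <pre><code>{"ssl":"true","connectTimeout":"10"}
 </code></pre>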
 <h4>Create and Edit a HIVE Data Source</h4>
 <p>1. Connect using HiveServer2</p>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_edit.png" width="60%" />
+    <img src="/img/hive_edit.png" width="60%" />
   </p>
 <ul>
 <li>Data source: select HIVE</li>
@@ -250,15 +250,15 @@
 </ul>
 <p>2. Connect using HiveServer2 HA via Zookeeper</p>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_edit2.png" width="60%" />
+    <img src="/img/hive_edit2.png" width="60%" />
   </p>
 <p>Note: if <strong>kerberos</strong> is enabled, the <strong>Principal</strong> must be filled in</p>
 <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_kerberos.png" width="60%" />
+    <img src="/img/hive_kerberos.png" width="60%" />
   </p>
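 <blockquote>
 <p>For orientation only: a HiveServer2 principal typically follows the standard Kerberos service/host@REALM form, e.g. the hypothetical <code>hive/node1.example.com@EXAMPLE.COM</code>.</p>
 </blockquote>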
 <h4>Create and Edit a Spark Data Source</h4>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/spark_datesource.png" width="60%" />
+   <img src="/img/spark_datesource.png" width="60%" />
  </p>
 <ul>
 <li>Data source: select Spark</li>
@@ -273,7 +273,7 @@
 </ul>
 <p>Note: if <strong>kerberos</strong> is enabled, the <strong>Principal</strong> must be filled in</p>
 <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/sparksql_kerberos.png" width="60%" />
+    <img src="/img/sparksql_kerberos.png" width="60%" />
   </p>
 <h3>Upload Resources</h3>
 <ul>
@@ -291,7 +291,7 @@ conf/common/hadoop.properties
 <p>Management of resource files of all kinds: creating basic txt/log/sh/conf files, uploading jar packages and other file types, and editing, downloading, and deleting them.</p>
 </blockquote>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file-manage.png" width="60%" />
+   <img src="/img/file-manage.png" width="60%" />
  </p>
 <ul>
 <li>Create a file</li>
@@ -300,7 +300,7 @@ conf/common/hadoop.properties
 <p>Supported file formats: txt, log, sh, conf, cfg, py, java, sql, xml, hql</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file_create.png" width="60%" />
+   <img src="/img/file_create.png" width="60%" />
  </p>
 <ul>
 <li>Upload a file</li>
@@ -309,7 +309,7 @@ conf/common/hadoop.properties
 <p>Upload a file: click the upload button or drag the file into the upload area; the file name is auto-completed from the name of the uploaded file</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file_upload.png" width="60%" />
+   <img src="/img/file_upload.png" width="60%" />
  </p>
 <ul>
 <li>View a file</li>
@@ -318,7 +318,7 @@ conf/common/hadoop.properties
 <p>For viewable file types, click the file name to view the file details</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file_detail.png" width="60%" />
+   <img src="/img/file_detail.png" width="60%" />
  </p>
 <ul>
 <li>Download a file</li>
@@ -330,7 +330,7 @@ conf/common/hadoop.properties
 <li>Rename a file</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file_rename.png" width="60%" />
+   <img src="/img/file_rename.png" width="60%" />
  </p>
 <h4>Delete</h4>
 <blockquote>
@@ -364,7 +364,7 @@ conf/common/hadoop.properties
 <li>UDF resource: set the resource file corresponding to the created UDF</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/udf_edit.png" width="60%" />
+   <img src="/img/udf_edit.png" width="60%" />
  </p>
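 <blockquote>
 <p>For context, a hedged sketch of the equivalent registration in Hive itself (jar path and class name are hypothetical; the platform performs this binding for you based on the UDF resource):</p>
 </blockquote>
 <pre><code>ADD JAR hdfs:///udfs/my-udf.jar;
 CREATE TEMPORARY FUNCTION str_clean AS 'com.example.udf.StrClean';
 </code></pre>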
 <h2>Security Center (Permission System)</h2>
 <ul>
@@ -377,7 +377,7 @@ conf/common/hadoop.properties
 <li>“Security Center”-&gt;“Queue Management”-&gt;“Create Queue”</li>
 </ul>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/create-queue.png" width="60%" />
+    <img src="/img/create-queue.png" width="60%" />
   </p>
 <h3>Add a Tenant</h3>
 <ul>
@@ -385,7 +385,7 @@ conf/common/hadoop.properties
 <li>Tenant code: <strong>the tenant code is a user on Linux; it is unique and cannot be repeated</strong></li>
 </ul>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/addtenant.png" width="60%" />
+    <img src="/img/addtenant.png" width="60%" />
   </p>
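 <blockquote>
 <p>A minimal sketch, assuming the worker hosts need a matching Linux account for the tenant code (the account name is hypothetical):</p>
 </blockquote>
 <pre><code>sudo useradd -m etl_tenant
 </code></pre>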
 <h3>Create an Ordinary User</h3>
 <ul>
@@ -396,7 +396,7 @@ conf/common/hadoop.properties
 * Note: **If the user switches tenants, all resources under the user's previous tenant will be copied to the new tenant**
 </code></pre>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/useredit2.png" width="60%" />
+   <img src="/img/useredit2.png" width="60%" />
  </p>
 <h3>Create an Alert Group</h3>
 <ul>
@@ -406,7 +406,7 @@ conf/common/hadoop.properties
 <li>Create and edit alert groups</li>
 </ul>
   <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mail_edit.png" width="60%" />
+    <img src="/img/mail_edit.png" width="60%" />
   </p>
 <h3>Create a Worker Group</h3>
 <ul>
@@ -414,7 +414,7 @@ conf/common/hadoop.properties
 <li>Multiple IP addresses within a worker group (<strong>aliases are not allowed</strong>), separated by <strong>commas</strong>; a hypothetical example follows the screenshot</li>
 </ul>
   <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/worker1.png" width="60%" />
+    <img src="/img/worker1.png" width="60%" />
   </p>
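 <blockquote>
 <p>For example, with hypothetical addresses, a two-node group would be entered as:</p>
 </blockquote>
 <pre><code>192.168.1.10,192.168.1.11
 </code></pre>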
 <h3>Token Management</h3>
 <ul>
@@ -468,13 +468,13 @@ conf/common/hadoop.properties
 <li>1. Click the authorization button of the specified user, as shown below:</li>
 </ul>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/auth_user.png" width="60%" />
+   <img src="/img/auth_user.png" width="60%" />
  </p>
 <ul>
 <li>2. Select the project button to grant project authorization</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/auth_project.png" width="60%" />
+   <img src="/img/auth_project.png" width="60%" />
  </p>
 <h2>Monitoring Center</h2>
 <h3>Service Management</h3>
@@ -486,28 +486,28 @@ conf/common/hadoop.properties
 <li>Mainly information about the master.</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master-jk.png" width="60%" />
+   <img src="/img/master-jk.png" width="60%" />
  </p>
 <h4>Worker Monitoring</h4>
 <ul>
 <li>Mainly information about the worker.</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/worker-jk.png" width="60%" />
+   <img src="/img/worker-jk.png" width="60%" />
  </p>
 <h4>Zookeeper Monitoring</h4>
 <ul>
 <li>Mainly the configuration information of each worker and master in Zookeeper.</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/zk-jk.png" width="60%" />
+   <img src="/img/zk-jk.png" width="60%" />
  </p>
-<h4>MySQL Monitoring</h4>
+<h4>DB Monitoring</h4>
 <ul>
-<li>Mainly the health status of MySQL</li>
+<li>Mainly the health status of the DB</li>
 </ul>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mysql-jk.png" width="60%" />
+   <img src="/img/mysql-jk.png" width="60%" />
  </p>
 <h2>Task Node Types and Parameter Settings</h2>
 <h3>Shell Node</h3>
@@ -518,7 +518,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SHELL.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/shell_edit.png" width="60%" />
+   <img src="/img/shell_edit.png" width="60%" />
  </p>
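 <blockquote>
 <p>Before the field reference below, a minimal sketch of a shell node script (assuming, as described in the parameters section, that ${...} references are substituted before execution; the partition name is hypothetical):</p>
 </blockquote>
 <pre><code>#!/bin/bash
 # global_bizdate is replaced by the scheduler before the script runs
 echo "processing partition dt=${global_bizdate}"
 </code></pre>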
 <ul>
 <li>Node name: node names must be unique within a workflow definition</li>
@@ -538,7 +538,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/subprocess_edit.png" width="60%" />
+   <img src="/img/subprocess_edit.png" width="60%" />
  </p>
 <ul>
 <li>Node name: node names must be unique within a workflow definition</li>
@@ -554,25 +554,25 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dependent_edit.png" width="60%" />
+   <img src="/img/dependent_edit.png" width="60%" />
  </p>
 <blockquote>
 <p>The dependent node provides logical checks, for example checking whether yesterday's workflow B succeeded, or whether workflow C ran successfully.</p>
 </blockquote>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node.png" width="80%" />
+   <img src="/img/depend-node.png" width="80%" />
  </p>
 <blockquote>
 <p>For example, workflow A is a weekly report task and workflows B and C are daily tasks. Task A requires tasks B and C to have succeeded on every day of the previous week, as shown:</p>
 </blockquote>
  <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node2.png" width="80%" />
+   <img src="/img/depend-node2.png" width="80%" />
  </p>
 <blockquote>
 <p>Suppose the weekly report A also requires its own run on last Tuesday to have succeeded:</p>
 </blockquote>
  <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node3.png" width="80%" />
+   <img src="/img/depend-node3.png" width="80%" />
  </p>
 <h3>Stored Procedure Node</h3>
 <ul>
@@ -582,7 +582,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/procedure_edit.png" width="60%" />
+   <img src="/img/procedure_edit.png" width="60%" />
  </p>
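 <blockquote>
 <p>Before the field reference below, a hedged sketch of a procedure such a node might call (a hypothetical MySQL definition; CLI DELIMITER handling omitted):</p>
 </blockquote>
 <pre><code>-- hypothetical procedure with one IN and one OUT parameter
 CREATE PROCEDURE add_one(IN x INT, OUT y INT)
 BEGIN
   SET y = x + 1;
 END
 </code></pre>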
 <ul>
 <li>Data source: the stored procedure supports MySQL and POSTGRESQL data source types; select the corresponding data source</li>
@@ -594,7 +594,7 @@ conf/common/hadoop.properties
 <li>Executes non-query SQL</li>
 </ul>
   <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/sql-node.png" width="60%" />
+   <img src="/img/sql-node.png" width="60%" />
  </p>
 <ul>
 <li>Executes query SQL; the results can be sent by email to specified recipients as a table or attachment.</li>
@@ -603,7 +603,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/sql-node2.png" width="60%" />
+   <img src="/img/sql-node2.png" width="60%" />
  </p>
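 <blockquote>
 <p>Before the field reference below, a hedged illustration of a query statement (table and column names are hypothetical; ${global_bizdate} is the global parameter described later) whose result set could be mailed as a table or attachment:</p>
 </blockquote>
 <pre><code>SELECT order_id, amount
 FROM ods_orders
 WHERE dt = '${global_bizdate}'
 </code></pre>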
 <ul>
 <li>Data source: select the corresponding data source</li>
@@ -621,7 +621,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/spark_edit.png" width="60%" />
+   <img src="/img/spark_edit.png" width="60%" />
  </p>
 <ul>
 <li>Program type: supports JAVA, Scala, and Python</li>
@@ -662,7 +662,7 @@ conf/common/hadoop.properties
 <li>Python program</li>
 </ol>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mr_edit.png" width="60%" />
+   <img src="/img/mr_edit.png" width="60%" />
  </p>
 <ul>
 <li>Program type: select Python</li>
@@ -681,7 +681,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG"> task node from the toolbar onto the canvas and double-click it, as shown below:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/python_edit.png" width="60%" />
+   <img src="/img/python_edit.png" width="60%" />
  </p>
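 <blockquote>
 <p>Before the field reference below, a minimal sketch of a script (assuming, as in the parameters section, that ${...} references are substituted into the script text before execution; the parameter name is taken from the example there):</p>
 </blockquote>
 <pre><code>#!/usr/bin/env python
 # local_param_bizdate is injected by the scheduler before the script runs
 print("bizdate = ${local_param_bizdate}")
 </code></pre>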
 <ul>
 <li>Script: the Python program developed by the user</li>
@@ -736,13 +736,13 @@ conf/common/hadoop.properties
 <p>For example:</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/local_parameter.png" width="60%" />
+   <img src="/img/local_parameter.png" width="60%" />
  </p>
 <blockquote>
 <p>global_bizdate is a global parameter that references a system parameter.</p>
 </blockquote>
 <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/global_parameter.png" width="60%" />
+   <img src="/img/global_parameter.png" width="60%" />
  </p>
 <blockquote>
 <p>In the task, local_param_bizdate references the global parameter via ${global_bizdate}; a script can reference the value of local_param_bizdate via ${local_param_bizdate}, or set the value of local_param_bizdate directly through JDBC</p>
diff --git a/zh-cn/docs/user_doc/system-manual.json b/zh-cn/docs/user_doc/system-manual.json
index 83e2f99..34301c8 100644
--- a/zh-cn/docs/user_doc/system-manual.json
+++ b/zh-cn/docs/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>创建项目</h3>\n<ul>\n<li>点击“项目管理-&gt;创建项目”,输入项目名称,项目描述,点击“提交”,创建新的项目。</li>\n<li>点击项目名称,进入项目首页。</li>\n</ul>\n<p align=\"center\">\n   <img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/project.png\" width=\"60%\" />\n </p>\n<blockquote>\n<p>项目首页其中包含任务状态统计,流程状态统计、工作流定义统计</p>\n</blockquote>\n<ul>\n<li>任务状态统计:是指在指定时间范围内,统计任务实例中的待运行、失败、运行中、完 [...]
+  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>创建项目</h3>\n<ul>\n<li>点击“项目管理-&gt;创建项目”,输入项目名称,项目描述,点击“提交”,创建新的项目。</li>\n<li>点击项目名称,进入项目首页。</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/project.png\" width=\"60%\" />\n </p>\n<blockquote>\n<p>项目首页其中包含任务状态统计,流程状态统计、工作流定义统计</p>\n</blockquote>\n<ul>\n<li>任务状态统计:是指在指定时间范围内,统计任务实例中的待运行、失败、运行中、完成、成功的个数</li>\n<li>流程状态统计:是指在指定时间范围内,统计工作流实例中的待运行、失败 [...]
   "link": "/zh-cn/docs/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file