Posted to commits@dolphinscheduler.apache.org by ki...@apache.org on 2021/02/14 11:05:00 UTC

[incubator-dolphinscheduler-website] branch master updated: release 1.3.5 (#295)

This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new f707e6b  release 1.3.5 (#295)
f707e6b is described below

commit f707e6b27950252a6adbd1008dd6beaa6bedc08f
Author: Kirs <ac...@163.com>
AuthorDate: Sun Feb 14 19:04:51 2021 +0800

    release 1.3.5 (#295)
    
    * release 1.3.5
---
 docs/en-us/1.3.5/user_doc/build-docker-image.md    |  247 +++++
 docs/en-us/1.3.5/user_doc/cluster-deployment.md    |  405 +++++++
 docs/en-us/1.3.5/user_doc/configuration-file.md    |  407 +++++++
 docs/en-us/1.3.5/user_doc/docker-deployment.md     |  137 +++
 docs/en-us/1.3.5/user_doc/hardware-environment.md  |   47 +
 docs/en-us/1.3.5/user_doc/metadata-1.3.md          |  173 +++
 docs/en-us/1.3.5/user_doc/quick-start.md           |   65 ++
 docs/en-us/1.3.5/user_doc/standalone-deployment.md |  340 ++++++
 docs/en-us/1.3.5/user_doc/system-manual.md         |  888 +++++++++++++++
 docs/en-us/1.3.5/user_doc/task-structure.md        | 1131 +++++++++++++++++++
 docs/en-us/1.3.5/user_doc/upgrade.md               |   80 ++
 docs/zh-cn/1.3.5/user_doc/architecture-design.md   |  331 ++++++
 docs/zh-cn/1.3.5/user_doc/build-docker-image.md    |  247 +++++
 docs/zh-cn/1.3.5/user_doc/cluster-deployment.md    |  475 ++++++++
 docs/zh-cn/1.3.5/user_doc/configuration-file.md    |  405 +++++++
 docs/zh-cn/1.3.5/user_doc/docker-deployment.md     |  143 +++
 docs/zh-cn/1.3.5/user_doc/expansion-reduction.md   |  257 +++++
 docs/zh-cn/1.3.5/user_doc/hardware-environment.md  |   48 +
 docs/zh-cn/1.3.5/user_doc/load-balance.md          |   62 ++
 docs/zh-cn/1.3.5/user_doc/metadata-1.3.md          |  185 ++++
 docs/zh-cn/1.3.5/user_doc/quick-start.md           |   58 +
 docs/zh-cn/1.3.5/user_doc/standalone-deployment.md |  336 ++++++
 docs/zh-cn/1.3.5/user_doc/system-manual.md         |  865 +++++++++++++++
 docs/zh-cn/1.3.5/user_doc/task-structure.md        | 1134 ++++++++++++++++++++
 docs/zh-cn/1.3.5/user_doc/upgrade.md               |   82 ++
 download/en-us/download.md                         |    2 +
 download/zh-cn/download.md                         |    2 +
 site_config/docs1-3-5.js                           |  154 +++
 site_config/home.jsx                               |    4 +-
 site_config/site.js                                |   42 +-
 30 files changed, 8734 insertions(+), 18 deletions(-)

diff --git a/docs/en-us/1.3.5/user_doc/build-docker-image.md b/docs/en-us/1.3.5/user_doc/build-docker-image.md
new file mode 100644
index 0000000..6238aac
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/build-docker-image.md
@@ -0,0 +1,247 @@
+## How to build a docker image
+
+You can build a Docker image on a Unix-like operating system, and you can also build it on Windows.
+
+On a Unix-like system, for example:
+
+```bash
+$ cd path/incubator-dolphinscheduler
+$ sh ./docker/build/hooks/build
+```
+
+On Windows, for example:
+
+```bat
+c:\incubator-dolphinscheduler>.\docker\build\hooks\build.bat
+```
+
+Please read the `./docker/build/hooks/build` and `./docker/build/hooks/build.bat` script files if anything is unclear.
+
+## Environment Variables
+
+The DolphinScheduler image uses several environment variables that are easy to miss. While none of them are required, they can significantly aid you in using the image (a combined example follows at the end of this section).
+
+**`DATABASE_TYPE`**
+
+This environment variable sets the database type. The default value is `postgresql`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_DRIVER`**
+
+This environment variable sets the JDBC driver for the database. The default value is `org.postgresql.Driver`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_HOST`**
+
+This environment variable sets the database host. The default value is `127.0.0.1`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_PORT`**
+
+This environment variable sets the database port. The default value is `5432`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_USERNAME`**
+
+This environment variable sets the database username. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_PASSWORD`**
+
+This environment variable sets the database password. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_DATABASE`**
+
+This environment variable sets the database name. The default value is `dolphinscheduler`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DATABASE_PARAMS`**
+
+This environment variable sets the database connection parameters. The default value is `characterEncoding=utf8`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
+
+**`DOLPHINSCHEDULER_ENV_PATH`**
+
+This environment variable sets the runtime environment file for tasks. The default value is `/opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
+
+**`DOLPHINSCHEDULER_DATA_BASEDIR_PATH`**
+
+This environment variable sets the user data directory path. Please make sure the directory exists and has read/write permissions. The default value is `/tmp/dolphinscheduler`.
+
+**`ZOOKEEPER_QUORUM`**
+
+This environment variable sets the ZooKeeper quorum for `master-server` and `worker-server`. The default value is `127.0.0.1:2181`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server` or `worker-server`.
+
+**`MASTER_EXEC_THREADS`**
+
+This environment variable sets exec thread num for `master-server`. The default value is `100`.
+
+**`MASTER_EXEC_TASK_NUM`**
+
+This environment variable sets exec task num for `master-server`. The default value is `20`.
+
+**`MASTER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets heartbeat interval for `master-server`. The default value is `10`.
+
+**`MASTER_TASK_COMMIT_RETRYTIMES`**
+
+This environment variable sets task commit retry times for `master-server`. The default value is `5`.
+
+**`MASTER_TASK_COMMIT_INTERVAL`**
+
+This environment variable sets task commit interval for `master-server`. The default value is `1000`.
+
+**`MASTER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max cpu load avg for `master-server`. The default value is `100`.
+
+**`MASTER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `master-server`. The default value is `0.1`.
+
+**`MASTER_LISTEN_PORT`**
+
+This environment variable sets port for `master-server`. The default value is `5678`.
+
+**`WORKER_EXEC_THREADS`**
+
+This environment variable sets exec thread num for `worker-server`. The default value is `100`.
+
+**`WORKER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets heartbeat interval for `worker-server`. The default value is `10`.
+
+**`WORKER_FETCH_TASK_NUM`**
+
+This environment variable sets fetch task num for `worker-server`. The default value is `3`.
+
+**`WORKER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max cpu load avg for `worker-server`. The default value is `100`.
+
+**`WORKER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `worker-server`. The default value is `0.1`.
+
+**`WORKER_WEIGHT`**
+
+This environment variable sets the weight for `worker-server`. The default value is `100`.
+
+**`WORKER_LISTEN_PORT`**
+
+This environment variable sets port for `worker-server`. The default value is `1234`.
+
+**`WORKER_GROUP`**
+
+This environment variable sets group for `worker-server`. The default value is `default`.
+
+**`XLS_FILE_PATH`**
+
+This environment variable sets xls file path for `alert-server`. The default value is `/tmp/xls`.
+
+**`MAIL_SERVER_HOST`**
+
+This environment variable sets mail server host for `alert-server`. The default value is empty.
+
+**`MAIL_SERVER_PORT`**
+
+This environment variable sets mail server port for `alert-server`. The default value is empty.
+
+**`MAIL_SENDER`**
+
+This environment variable sets mail sender for `alert-server`. The default value is empty.
+
+**`MAIL_USER`**
+
+This environment variable sets mail user for `alert-server`. The default value is empty.
+
+**`MAIL_PASSWD`**
+
+This environment variable sets mail password for `alert-server`. The default value is empty.
+
+**`MAIL_SMTP_STARTTLS_ENABLE`**
+
+This environment variable sets SMTP tls for `alert-server`. The default value is `true`.
+
+**`MAIL_SMTP_SSL_ENABLE`**
+
+This environment variable sets SMTP ssl for `alert-server`. The default value is `false`.
+
+**`MAIL_SMTP_SSL_TRUST`**
+
+This environment variable sets the SMTP SSL trust for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_ENABLE`**
+
+This environment variable sets enterprise wechat enable for `alert-server`. The default value is `false`.
+
+**`ENTERPRISE_WECHAT_CORP_ID`**
+
+This environment variable sets enterprise wechat corp id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_SECRET`**
+
+This environment variable sets enterprise wechat secret for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_AGENT_ID`**
+
+This environment variable sets enterprise wechat agent id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_USERS`**
+
+This environment variable sets enterprise wechat users for `alert-server`. The default value is empty.
+
+**`FRONTEND_API_SERVER_HOST`**
+
+This environment variable sets api server host for `frontend`. The default value is `127.0.0.1`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `api-server`.
+
+**`FRONTEND_API_SERVER_PORT`**
+
+This environment variable sets the api server port for `frontend`. The default value is `12345`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `api-server`.
+
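+For example, a minimal sketch that overrides a few of these variables when starting a full container (host addresses and credentials are placeholders; the image name, port mapping, and the `all` argument follow the Docker deployment guide in this documentation):
+
+```bash
+docker run -dit --name dolphinscheduler \
+-e DATABASE_TYPE="postgresql" -e DATABASE_DRIVER="org.postgresql.Driver" \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="{user}" -e DATABASE_PASSWORD="{password}" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-p 8888:8888 \
+dolphinscheduler all
+```
+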
+## Initialization scripts
+
+If you would like to do additional initialization in an image derived from this one, add one or more environment variables in `/root/start-init-conf.sh`, and modify the template files in `/opt/dolphinscheduler/conf/*.tpl`.
+
+For example, to add an environment variable `API_SERVER_PORT` in `/root/start-init-conf.sh`:
+
+```
+export API_SERVER_PORT=5555
+``` 
+
+and modify the `/opt/dolphinscheduler/conf/application-api.properties.tpl` template file to add the server port:
+```
+server.port=${API_SERVER_PORT}
+```
+
+`/root/start-init-conf.sh` will dynamically generate the config files:
+
+```sh
+echo "generate app config"
+ls ${DOLPHINSCHEDULER_HOME}/conf/ | grep ".tpl" | while read line; do
+eval "cat << EOF
+$(cat ${DOLPHINSCHEDULER_HOME}/conf/${line})
+EOF
+" > ${DOLPHINSCHEDULER_HOME}/conf/${line%.*}
+done
+
+echo "generate nginx config"
+sed -i "s/FRONTEND_API_SERVER_HOST/${FRONTEND_API_SERVER_HOST}/g" /etc/nginx/conf.d/dolphinscheduler.conf
+sed -i "s/FRONTEND_API_SERVER_PORT/${FRONTEND_API_SERVER_PORT}/g" /etc/nginx/conf.d/dolphinscheduler.conf
+```
diff --git a/docs/en-us/1.3.5/user_doc/cluster-deployment.md b/docs/en-us/1.3.5/user_doc/cluster-deployment.md
new file mode 100644
index 0000000..bceb3ad
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/cluster-deployment.md
@@ -0,0 +1,405 @@
+# Cluster Deployment
+
+# 1、Before you begin (please install the required basic software yourself)
+
+ * PostgreSQL (8.2.15+) or MySQL (5.7): choose one; JDBC Driver 5.1.47+ is required if MySQL is used
+ * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): required. Double-check that the JAVA_HOME and PATH environment variables are configured in /etc/profile
+ * ZooKeeper (3.4.6+): required
+ * Hadoop (2.6+) or MinIO: optional. If you need the resource upload function, you can choose a local file directory as the upload folder for a single machine (this does not require deploying Hadoop). Of course, you can also choose to upload to Hadoop or MinIO.
+
+```markdown
+ Tips: DolphinScheduler itself does not rely on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
+```
+
+# 2、Download the binary package.
+
+- Please download the latest version of the default installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [Download](/en-us/download/download.html). Download the package, move it to the installation and deployment directory, and then unzip it.
+
+```shell
+# Create the deployment directory. Do not choose a high-privilege directory such as /root or /home as the deployment directory.
+mkdir -p /opt/dolphinscheduler;
+cd /opt/dolphinscheduler;
+# unzip
+tar -zxvf apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
+
+mv apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin  dolphinscheduler-bin
+```
+
+# 3、Create deployment user and hosts mapping
+
+- Create a deployment user on **all** deployment machines, and be sure to configure passwordless sudo. If we plan to deploy DolphinScheduler on 4 machines: ds1, ds2, ds3, and ds4, we first need to create a deployment user on each machine.
+
+```shell
+# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
+useradd dolphinscheduler;
+
+# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# Configure passwordless sudo
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```
+ Notes:
+ - Because the task execution service switches between different Linux users via 'sudo -u {linux-user}' to implement multi-tenant job running, the deployment user needs passwordless sudo permission. First-time learners can ignore this if they don't understand it.
+ - If you find the "Defaults requiretty" line in the "/etc/sudoers" file, comment it out as well.
+ - If you need to use resource upload, you need to grant the user permission to operate on the local file system, HDFS, or MinIO.
+```
+
+# 4、Configure hosts mapping and ssh access and modify directory permissions.
+
+- Use the first machine (hostname is ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and login as root on ds1.
+
+  ```shell
+  vi /etc/hosts
+
+  #add ip hostname
+  192.168.xxx.xxx ds1
+  192.168.xxx.xxx ds2
+  192.168.xxx.xxx ds3
+  192.168.xxx.xxx ds4
+  ```
+
+  *Note: Please delete or comment out the 127.0.0.1 line*
+
+- Sync /etc/hosts on ds1 to all deployment machines
+
+  ```shell
+  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of machines you want to deploy
+  do
+      sudo scp -r /etc/hosts  $ip:/etc/          # Need to enter root password during operation
+  done
+  ```
+
+  *Note: you can use `sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/` to avoid typing the password.*
+
+  > Install sshpass on CentOS:
+  >
+  > 1. Install epel
+  >
+  >    yum install -y epel-release
+  >
+  >    yum repolist
+  >
+  > 2. After installing epel, you can install sshpass
+  >
+  >    yum install -y sshpass
+  >
+  >
+
+- On ds1, switch to the deployment user and configure ssh passwordless login
+
+  ```shell
+   su dolphinscheduler;
+
+  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+  chmod 600 ~/.ssh/authorized_keys
+  ```
+  Note: *If the configuration succeeds, the dolphinscheduler user does not need to enter a password when executing the command `ssh localhost`*
+
+
+
+- On ds1, configure the deployment user dolphinscheduler ssh to connect to other machines to be deployed.
+
+  ```shell
+  su dolphinscheduler;
+  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of the machine you want to deploy.
+  do
+      ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during the operation.
+  done
+  # can use `sshpass -p xxx ssh-copy-id $ip` to avoid type password.
+  ```
+
+- On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory.
+
+  ```shell
+  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
+  ```
+
+# 5、Database initialization
+
+- Log in to the database. The default database is PostgreSQL. If you select MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
+```
+mysql -h192.168.xx.xx -P3306 -uroot -p
+```
+
+- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password**
+
+ ``` mysql
+    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+    mysql> flush privileges;
+ ```
+
+- Create tables and import basic data
+
+    - Modify the following configuration in datasource.properties under the conf directory
+
+    ```shell
+      vi conf/datasource.properties
+    ```
+
+    - If you choose MySQL, please comment out the PostgreSQL configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) package to the lib directory (a copy sketch follows the config example below), and then configure the database connection information correctly.
+
+    ```properties
+      #postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # Replace the correct IP address
+      spring.datasource.username=xxx						# replace the correct {user} value
+      spring.datasource.password=xxx						# replace the correct {password} value
+    ```
+
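+    A sketch of copying the downloaded driver into `lib` (the jar file name is a placeholder; use the version you downloaded):
+
+    ```shell
+    cp mysql-connector-java-5.1.47.jar /opt/dolphinscheduler/dolphinscheduler-bin/lib/
+    ```
+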
+    - After modifying and saving, execute the create table and import data script in the script directory.
+
+    ```shell
+    sh script/create-dolphinscheduler.sh
+    ```
+
+   *Note: If you execute the above script and it reports a "/bin/java: No such file or directory" error, please configure the JAVA_HOME and PATH variables in /etc/profile*
+
+# 6、Modify runtime parameters.
+
+- Modify the environment variables in the `dolphinscheduler_env.sh` file in the `conf/env` directory (taking the relevant software installed under `/opt/soft` as an example)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        #export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    ```
+
+     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Those that are not used can be ignored or commented out.`
+
+
+
+- Create a soft link for jdk to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - Modify the parameters in the one-click deployment config file `conf/config/install_config.conf`, paying special attention to the configuration of the following parameters.
+
+    ```shell
+    # choose mysql or postgresql
+    dbtype="mysql"
+
+    # Database connection address and port
+    dbhost="192.168.xx.xx:3306"
+
+    # database name
+    dbname="dolphinscheduler"
+
+    # database username
+    username="xxx"
+
+    # database password
+    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
+    password="xxx"
+
+    #Zookeeper cluster
+    zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+    # Note: the target installation path for dolphinscheduler; please do not set it to the same as the current path (pwd)
+    installPath="/opt/soft/dolphinscheduler"
+
+    # deployment user
+    # Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself
+    deployUser="dolphinscheduler"
+
+    # alert config,take QQ email for example
+    # mail protocol
+    mailProtocol="SMTP"
+
+    # mail server host
+    mailServerHost="smtp.qq.com"
+
+    # mail server port
+    # note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
+    mailServerPort="25"
+
+    # mail sender
+    mailSender="xxx@qq.com"
+
+    # mail user
+    mailUser="xxx@qq.com"
+
+    # mail sender password
+    # note: The mail.passwd is email service authorization code, not the email login password.
+    mailPassword="xxx"
+
+    # Whether the TLS mail protocol is supported, true means supported and false means not supported
+    starttlsEnable="true"
+
+    # Whether the SSL mail protocol is supported, true means supported and false means not supported.
+    # note: only one of TLS and SSL can be in the true state.
+    sslEnable="false"
+
+    # note: sslTrust is the same as mailServerHost
+    sslTrust="smtp.qq.com"
+
+
+    # resource storage type:HDFS,S3,NONE
+    resourceStorageType="HDFS"
+
+    # If resourceStorageType = HDFS, and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
+    # if S3 is used, write the S3 address, for example: s3a://dolphinscheduler
+    # Note: for S3, be sure to create the root directory /dolphinscheduler
+    defaultFS="hdfs://mycluster:8020"
+
+
+    # if you do not use the hadoop resourcemanager, keep the default value; if resourcemanager HA is enabled, enter the HA IPs; if resourcemanager is single, leave this value empty
+    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+    # if resourcemanager HA is enabled or resourcemanager is not used, skip this value; if resourcemanager is single, replace yarnIp1 with the actual resourcemanager hostname.
+    singleYarnIp="yarnIp1"
+
+    # resource storage path on HDFS/S3; resource files will be stored under this path. Please make sure the directory exists on HDFS and has read/write permissions. /dolphinscheduler is recommended
+    resourceUploadPath="/dolphinscheduler"
+
+    # who have permissions to create directory under HDFS/S3 root path
+    # Note: if kerberos is enabled, please config hdfsRootUser=
+    hdfsRootUser="hdfs"
+
+
+
+    # install hosts
+    # Note: install the scheduled hostname list. If it is pseudo-distributed, just write a pseudo-distributed hostname
+    ips="ds1,ds2,ds3,ds4"
+
+    # ssh port, default 22
+    # Note: if ssh port is not default, modify here
+    sshPort="22"
+
+    # run master machine
+    # Note: list of hosts hostname for deploying master
+    masters="ds1,ds2"
+
+    # run worker machine
+    # note: need to write the worker group name of each worker, the default value is "default"
+    workers="ds3:default,ds4:default"
+
+    # run alert machine
+    # note: list of machine hostnames for deploying alert server
+    alertServer="ds2"
+
+    # run api machine
+    # note: list of machine hostnames for deploying api server
+    apiServers="ds1"
+
+    ```
+
+    *Attention:*
+
+    - If you need to upload resources to the Hadoop cluster and the NameNode of the Hadoop cluster is configured with HA, you need to enable HDFS resource upload and copy the core-site.xml and hdfs-site.xml from the Hadoop cluster to /opt/dolphinscheduler/conf. If the NameNode is not HA, skip this step.
+
+# 7、Automated Deployment
+
+- Switch to the deployment user and execute the one-click deployment script
+
+    `sh install.sh`
+
+   ```
+   Note:
+   For the first deployment, the following message appears in step `3, stop server` during the run. This message can be ignored.
+   sh: bin/dolphinscheduler-daemon.sh: No such file or directory
+   ```
+
+- After the script is completed, the following 5 services will be started. Use the `jps` command to check whether the services are started (`jps` comes with the Java JDK)
+
+```
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    LoggerServer         ----- logger service
+    ApiApplicationServer ----- api service
+    AlertServer          ----- alert service
+```
+If the above services are started normally, the automatic deployment is successful.
+
+
+After the deployment is successful, you can view the logs. The logs are stored in the logs folder.
+
+```
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+    └── dolphinscheduler-logger-server.log
+```
+
+
+
+# 8、login
+
+- Access the front-end page address, replacing the IP with your own:
+http://192.168.xx.xx:12345/dolphinscheduler
+
+   <p align="center">
+     <img src="/img/login_en.png" width="60%" />
+   </p>
+
+
+
+# 9、Start and stop service
+
+* Stop all services
+
+  ` sh ./bin/stop-all.sh`
+
+* Start all services
+
+  ` sh ./bin/start-all.sh`
+
+* Start and stop master service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+```
+
+* Start and stop worker Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+```
+
+* Start and stop api Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+```
+
+* Start and stop logger Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start logger-server
+sh ./bin/dolphinscheduler-daemon.sh stop logger-server
+```
+
+* Start and stop alert service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+```
+
+`Note: Please refer to the "Architecture Design" section for service usage`
+
diff --git a/docs/en-us/1.3.5/user_doc/configuration-file.md b/docs/en-us/1.3.5/user_doc/configuration-file.md
new file mode 100644
index 0000000..24b8f7d
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/configuration-file.md
@@ -0,0 +1,407 @@
+
+
+# Preface
+This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.
+
+# Directory Structure
+Currently, all the configuration files are under the [conf] directory. Please check the following simplified DolphinScheduler installation directory to get a direct view of where the [conf] directory sits and which configuration files it contains. This document only describes DolphinScheduler configurations; other modules are not covered.
+
+[Note: DolphinScheduler is hereinafter called 'DS'.]
+```
+
+├─bin                               DS application commands directory
+│  ├─dolphinscheduler-daemon.sh         startup/shutdown DS application 
+│  ├─start-all.sh                       startup all DS services with configurations
+│  ├─stop-all.sh                        shutdown all DS services with configurations
+├─conf                              configurations directory
+│  ├─application-api.properties         API-service config properties
+│  ├─datasource.properties              datasource config properties
+│  ├─zookeeper.properties               zookeeper config properties
+│  ├─master.properties                  master config properties
+│  ├─worker.properties                  worker config properties
+│  ├─quartz.properties                  quartz config properties
+│  ├─common.properties                  common-service[storage] config properties
+│  ├─alert.properties                   alert-service config properties
+│  ├─config                             environment variables config directory
+│      ├─install_config.conf                DS environment variables configuration script[install/start DS]
+│  ├─env                                load environment variables configs script directory
+│      ├─dolphinscheduler_env.sh            load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper files directory
+│  ├─i18n                               i18n configs directory
+│  ├─logback-api.xml                    API-service log config
+│  ├─logback-master.xml                 master-service log config
+│  ├─logback-worker.xml                 worker-service log config
+│  ├─logback-alert.xml                  alert-service log config
+├─sql                                   DS metadata to create/upgrade .sql directory
+│  ├─create                             create SQL scripts directory
+│  ├─upgrade                            upgrade SQL scripts directory
+│  ├─dolphinscheduler-postgre.sql       postgre database init script
+│  ├─dolphinscheduler_mysql.sql         mysql database init script
+│  ├─soft_version                       current DS version-id file
+├─script                            DS services deployment, database create/upgrade scripts directory
+│  ├─create-dolphinscheduler.sh         DS database init script
+│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script
+│  ├─monitor-server.sh                  DS monitor-server start script       
+│  ├─scp-hosts.sh                       transfer installation files script                                     
+│  ├─remove-zk-node.sh                  cleanup zookeeper caches script       
+├─ui                                front-end web resources directory
+├─lib                               DS .jar dependencies directory
+├─install.sh                        auto-setup DS services script
+
+
+```
+
+
+# Configurations in Details
+
+serial number| service classification| config file|
+|--|--|--|
+1|startup/shutdown DS application|dolphinscheduler-daemon.sh
+2|datasource config properties| datasource.properties
+3|zookeeper config properties|zookeeper.properties
+4|common-service[storage] config properties|common.properties
+5|API-service config properties|application-api.properties
+6|master config properties|master.properties
+7|worker config properties|worker.properties
+8|alert-service config properties|alert.properties
+9|quartz config properties|quartz.properties
+10|DS environment variables configuration script[install/start DS]|install_config.conf
+11|load environment variables configs <br /> [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|services log config files|API-service log config : logback-api.xml  <br /> master-service log config  : logback-master.xml    <br /> worker-service log config : logback-worker.xml  <br /> alert-service log config : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [startup/shutdown DS application]
+dolphinscheduler-daemon.sh is responsible for DS startup & shutdown. 
+Essentially, start-all.sh/stop-all.sh startup/shutdown the cluster via dolphinscheduler-daemon.sh.
+Currently, DS only provides a basic config; please configure further JVM options based on your actual resource situation.
+
+Default simplified parameters are:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> "-XX:DisableExplicitGC" is not recommended due to may lead to memory link (DS dependent on Netty to communicate). 
+
+## 2.datasource.properties [datasource config properties]
+DS uses Druid to manage database connections; the default simplified configs are listed below, followed by a short example file.
+|Parameters | Default value| Description|
+|--|--|--|
+spring.datasource.driver-class-name||datasource driver
+spring.datasource.url||datasource connection url
+spring.datasource.username||datasource username
+spring.datasource.password||datasource password
+spring.datasource.initialSize|5| initial connection pool size
+spring.datasource.minIdle|5| minimum connection pool size
+spring.datasource.maxActive|5| maximum connection pool size
+spring.datasource.maxWait|60000| max wait time in milliseconds
+spring.datasource.timeBetweenEvictionRunsMillis|60000| idle connection check interval
+spring.datasource.timeBetweenConnectErrorMillis|60000| retry interval
+spring.datasource.minEvictableIdleTimeMillis|300000| connections idle longer than minEvictableIdleTimeMillis will be collected during the idle check
+spring.datasource.validationQuery|SELECT 1| validate connection by running the SQL
+spring.datasource.validationQueryTimeout|3| validate connection timeout[seconds]
+spring.datasource.testWhileIdle|true| set whether the pool validates the allocated connection when a new connection request comes
+spring.datasource.testOnBorrow|true| validity check when the program requests a new connection
+spring.datasource.testOnReturn|false| validity check when the program recalls a connection
+spring.datasource.defaultAutoCommit|true| whether auto commit
+spring.datasource.keepAlive|true| runs validationQuery SQL to avoid the connection closed by pool when the connection idles over minEvictableIdleTimeMillis
+spring.datasource.poolPreparedStatements|true| Open PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the size of PSCache on each connection
+
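+A minimal MySQL sketch of `datasource.properties` using the keys above (host, username and password are placeholders):
+
+```properties
+spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+spring.datasource.username=xx
+spring.datasource.password=xx
+```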
+
+## 3.zookeeper.properties [zookeeper config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+zookeeper.quorum|localhost:2181| zookeeper cluster connection info
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under zookeeper root directory
+zookeeper.session.timeout|60000|  session timeout
+zookeeper.connection.timeout|30000| connection timeout
+zookeeper.retry.base.sleep|100| time to wait between subsequent retries
+zookeeper.retry.max.sleep|30000| maximum time to wait between subsequent retries
+zookeeper.retry.maxtime|10| maximum retry times
+
+
+## 4.common.properties [hadoop, s3, yarn config properties]
+Currently, common.properties mainly configures hadoop/s3a related settings; an example snippet follows the table. 
+|Parameters | Default value| Description|
+|--|--|--|
+resource.storage.type|NONE| type of resource files: HDFS, S3, NONE
+resource.upload.path|/dolphinscheduler| storage path of resource files
+data.basedir.path|/tmp/dolphinscheduler| local directory used to store temp files
+hadoop.security.authentication.startup.state|false| whether kerberos authentication is enabled for hadoop
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos config directory
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos username
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos user keytab
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties| file types supported by resource center
+hdfs.root.user|hdfs| configure users with corresponding permissions if storage type is HDFS
+fs.defaultFS|hdfs://mycluster:8020|If resource.storage.type=S3, then the request url would be similar to 's3a://dolphinscheduler'. Otherwise if resource.storage.type=HDFS and hadoop supports HA, please copy core-site.xml and hdfs-site.xml into 'conf' directory. 
+fs.s3a.endpoint||s3 endpoint url
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key|     |s3 secret key
+yarn.resourcemanager.ha.rm.ids|     | specify the yarn resourcemanager url. if resourcemanager supports HA, input HA IP addresses (separated by comma), or input null for standalone
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s| keep default if resourcemanager supports HA or not use resourcemanager. Or replace ds1 with corresponding hostname if resourcemanager in standalone mode.
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh| load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false| specify whether in development state
+kerberos.expire.time|7|kerberos expire time [hour]
+
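+A minimal sketch of `common.properties` for HDFS storage, assuming an HA cluster named `mycluster` (keep `resource.storage.type=NONE` if resource upload is not needed; addresses are placeholders):
+
+```properties
+resource.storage.type=HDFS
+resource.upload.path=/dolphinscheduler
+fs.defaultFS=hdfs://mycluster:8020
+yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+yarn.application.status.address=http://ds1:8088/ws/v1/cluster/apps/%s
+```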
+
+## 5.application-api.properties [API-service config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+server.port|12345|api service communication port
+server.servlet.session.timeout|7200|session timeout
+server.servlet.context-path|/dolphinscheduler | request path
+spring.servlet.multipart.max-file-size|1024MB| maximum file size
+spring.servlet.multipart.max-request-size|1024MB| maximum request size
+server.jetty.max-http-post-size|5000000| jetty maximum post size
+spring.messages.encoding|UTF-8| message encoding
+spring.jackson.time-zone|GMT+8| time zone
+spring.messages.basename|i18n/messages| i18n config
+security.authentication.type|PASSWORD| authentication type
+
+
+## 6.master.properties [master config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+master.listen.port|5678|master communication port
+master.exec.threads|100|work threads count
+master.exec.task.num|20|parallel task count
+master.dispatch.task.num | 3|dispatch task count
+master.heartbeat.interval|10|heartbeat interval
+master.task.commit.retryTimes|5|task commit retry times
+master.task.commit.interval|1000|task commit interval|
+master.max.cpuload.avg|-1|master service operates when cpu load less than this number. (default -1: cpu cores * 2)
+master.reserved.memory|0.3|specify memory threshold value, master service operates when available memory greater than the threshold
+
+
+## 7.worker.properties [worker config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+worker.listen.port|1234|worker communication port
+worker.exec.threads|100|work threads count
+worker.heartbeat.interval|10|heartbeat interval
+worker.max.cpuload.avg|-1|worker service operates when CPU load less than this number. (default -1: CPU cores * 2)
+worker.reserved.memory|0.3|specify memory threshold value, worker service operates when available memory greater than threshold
+worker.group|default|workgroup grouping config. <br> worker will join corresponding group according to this config when startup
+
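+A minimal sketch of `worker.properties` that keeps the default port and joins a custom worker group (the group name `etl` is a placeholder):
+
+```properties
+worker.listen.port=1234
+worker.exec.threads=100
+worker.group=etl
+```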
+
+## 8.alert.properties [alert-service config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+alert.type|EMAIL|alert type|
+mail.protocol|SMTP|mail server protocol
+mail.server.host|xxx.xxx.com|mail server host
+mail.server.port|25|mail server port
+mail.sender|xxx@xxx.com|mail sender email
+mail.user|xxx@xxx.com|mail sender email name
+mail.passwd|111111|mail sender email password
+mail.smtp.starttls.enable|true|specify mail whether open tls
+mail.smtp.ssl.enable|false|specify mail whether open ssl
+mail.smtp.ssl.trust|xxx.xxx.com|specify mail ssl trust list
+xls.file.path|/tmp/xls|mail attachment temp storage directory
+||the following settings configure WeCom [optional]|
+enterprise.wechat.enable|false|specify whether enable WeCom
+enterprise.wechat.corp.id|xxxxxxx|WeCom corp id
+enterprise.wechat.secret|xxxxxxx|WeCom secret
+enterprise.wechat.agent.id|xxxxxxx|WeCom agent id
+enterprise.wechat.users|xxxxxxx|WeCom users
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|WeCom token url
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|WeCom push url
+enterprise.wechat.user.send.msg||send message format
+enterprise.wechat.team.send.msg||group message format
+plugin.dir|/Users/xx/your/path/to/plugin/dir|plugin directory
+
+
+## 9.quartz.properties [quartz config properties]
+This part describes quartz configs and please configure them based on your practical situation and resources.
+|Parameters | Default value| Description|
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS environment variables configuration script[install/start DS]]
+install_config.conf is a bit complicated and is mainly used in the following two places.
+* 1.DS cluster auto installation
+
+> System will load configs in the install_config.conf and auto-configure files below, based on the file content when executing 'install.sh'.
+> Files such as dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
+
+
+* 2.Startup/shutdown DS cluster
+> The system will load masters, workers, alertServer, apiServers and other parameters inside the file to startup/shutdown DS cluster.
+
+File content as follows:
+```bash
+
+# Note:  please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&`.
+#   eg: `[` escape to `\[`
+
+# Database type (DS currently only supports postgresql and mysql)
+dbtype="mysql"
+
+# Database url & port
+dbhost="192.168.xx.xx:3306"
+
+# Database name
+dbname="dolphinscheduler"
+
+
+# Database username
+username="xx"
+
+# Database password
+password="xx"
+
+# Zookeeper url
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# DS installation path, such as '/data1_1T/dolphinscheduler'
+installPath="/data1_1T/dolphinscheduler"
+
+# Deployment user
+# Note: Deployment user needs 'sudo' privilege and has rights to operate HDFS
+#     Root directory must be created by the same user if using HDFS, otherwise permission related issues will be raised.
+deployUser="dolphinscheduler"
+
+
+# Followings are alert-service configs
+# Mail server host
+mailServerHost="smtp.exmail.qq.com"
+
+# Mail server port
+mailServerPort="25"
+
+# Mail sender
+mailSender="xxxxxxxxxx"
+
+# Mail user
+mailUser="xxxxxxxxxx"
+
+# Mail password
+mailPassword="xxxxxxxxxx"
+
+# Set true if the mail server supports TLS, otherwise set false
+starttlsEnable="true"
+
+# Set true if the mail server supports SSL, otherwise set false. Note: starttlsEnable and sslEnable cannot both be set to true
+sslEnable="false"
+
+# Mail server host, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+# Specify the storage type used by the resource upload function for resources such as sql files. Supported options are HDFS, S3 and NONE; NONE means the function is not used.
+resourceStorageType="NONE"
+
+# If S3 is used, write the S3 address, for example: s3a://dolphinscheduler
+# Note: for S3, make sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# If parameter 'resourceStorageType' is S3, following configs are needed:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# If ResourceManager supports HA, then input the master and standby node IPs or hostnames, e.g. '192.168.xx.xx,192.168.xx.xx'. If ResourceManager runs in standalone mode, or yarn is not used, set yarnHaIps="".
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+
+# If ResourceManager runs in standalone mode, set the ResourceManager node IP or hostname; otherwise keep the default.
+singleYarnIp="yarnIp1"
+
+# Storage path when using HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3 root user
+hdfsRootUser="hdfs"
+
+# Followings are kerberos configs
+
+# Specify whether kerberos is enabled
+kerberosStartUp="false"
+
+# Kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+
+# Keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+
+# Username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# API-service port
+apiServerPort="12345"
+
+
+# All hosts deploy DS
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# Ssh port, default 22
+sshPort="22"
+
+# Master service hosts
+masters="ds1,ds2"
+
+# All hosts deploy worker service
+# Note: Each worker needs to set a worker group name and default name is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  Host deploy alert-service
+alertServer="ds3"
+
+# Host deploy API-service
+apiServers="ds1"
+```
+
+## 11.dolphinscheduler_env.sh [load environment variables configs]
+When using shell to submit tasks, DS will load the environment variables inside dolphinscheduler_env.sh on the host.
+The task types involved are: Shell task, Python task, Spark task, Flink task, Datax task, etc.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+## 12. Services logback configs
+|Service name| Logback config file|
+|--|--|
+|API-service logback config|logback-api.xml|
+|master-service logback config|logback-master.xml|
+|worker-service logback config|logback-worker.xml|
+|alert-service logback config|logback-alert.xml|
diff --git a/docs/en-us/1.3.5/user_doc/docker-deployment.md b/docs/en-us/1.3.5/user_doc/docker-deployment.md
new file mode 100644
index 0000000..70feeed
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/docker-deployment.md
@@ -0,0 +1,137 @@
+## QuickStart in Docker
+
+Here are 2 ways to quickly install DolphinScheduler:
+
+### The First Way: Start With docker-compose (Recommended)
+In this way, you need to install docker-compose as a prerequisite; please install it yourself following the abundant docker-compose installation guides on the Internet.
+
+##### 1、 Download the Source Code Zip Package
+
+- Please download the latest version of the source code package and unzip it
+```shell
+mkdir -p /opt/soft/dolphinscheduler;
+cd /opt/soft/dolphinscheduler;
+
+# download source code package
+wget https://mirrors.tuna.tsinghua.edu.cn/apache/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip
+
+# unzip
+unzip apache-dolphinscheduler-incubating-1.3.5-src.zip
+ 
+mv apache-dolphinscheduler-incubating-1.3.5-src-release  dolphinscheduler-src
+```
+##### 2、 Install and Start the Service
+```
+cd dolphinscheduler-src
+docker-compose -f ./docker/docker-swarm/docker-compose.yml up -d
+```
+
+##### 3、 Login
+Visit the front-end UI: http://192.168.xx.xx:8888
+  <p align="center">
+    <img src="/img/login_en.png" width="60%" />
+  </p>
+Please refer to the `Quick Start` in the chapter 'User Manual' to explore how to use DolphinScheduler
+
+### The Second way: Start in the Docker Mode
+
+##### 1. Basic Required Software (please install by yourself)
+  * PostgreSQL (8.2.15+)
+  * ZooKeeper (3.4.6+)
+  * Docker
+ 
+##### 2. Please login to the PostgreSQL database and create a database named `dolphinscheduler`
+
+##### 3. Initialize the database, import `sql/dolphinscheduler-postgre.sql` to create tables and initial data 
+
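+A sketch of steps 2 and 3 using the `psql` client (host and user are placeholders; run it from the unpacked DolphinScheduler directory so that the `sql/` path resolves):
+
+```shell
+psql -h 192.168.xx.xx -U postgres -c "CREATE DATABASE dolphinscheduler;"
+psql -h 192.168.xx.xx -U postgres -d dolphinscheduler -f sql/dolphinscheduler-postgre.sql
+```
+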
+##### 4. Download the DolphinScheduler Image
+We have already uploaded the user-oriented DolphinScheduler image to the Docker repository, so you can pull it from there instead of building the image yourself:
+```
+docker pull apache/dolphinscheduler:latest
+```
+
+##### 5. Run a DolphinScheduler Instance
+Run the following, for example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="l92.168.x.x:2181"
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="{user}" -e DATABASE_PASSWORD="{password}" \
+-p 8888:8888 \
+dolphinscheduler all
+```
+Note: {user} and {password} need to be replaced with your database user name and password
+
+##### 6. Login
+Visit the front-end UI: http://192.168.xx.xx:8888
+  <p align="center">
+    <img src="/img/login_en.png" width="60%" />
+  </p>
+Please refer to the `Quick Start` in the chapter 'User Manual' to explore how to use DolphinScheduler
+
+## Appendix
+
+### The following services are automatically started when the container starts:
+
+```
+     MasterServer ----- master service
+     WorkerServer ----- worker service
+     LoggerServer ----- logger service
+     ApiApplicationServer ----- API service
+     AlertServer ----- alert service
+```
+### If you just want to run part of the services in the DolphinScheduler
+
+You can start selected services in DolphinScheduler by running the following commands.
+
+* Start a **master server**, For example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="l92.168.x.x:2181"
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+dolphinscheduler master-server
+```
+
+* Start a **worker server**, For example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="l92.168.x.x:2181"
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+dolphinscheduler worker-server
+```
+
+* Start an **api server**, for example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-p 12345:12345 \
+dolphinscheduler api-server
+```
+
+* Start an **alert server**, for example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+dolphinscheduler alert-server
+```
+
+* Start a **frontend**, For example:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e FRONTEND_API_SERVER_HOST="192.168.x.x" -e FRONTEND_API_SERVER_PORT="12345" \
+-p 8888:8888 \
+dolphinscheduler frontend
+```
+
+**Note**: You must specify the following environment variables: `DATABASE_HOST` `DATABASE_PORT` `DATABASE_DATABASE` `DATABASE_USERNAME` `DATABASE_PASSWORD` `ZOOKEEPER_QUORUM` when starting part of the DolphinScheduler services.
+
diff --git a/docs/en-us/1.3.5/user_doc/hardware-environment.md b/docs/en-us/1.3.5/user_doc/hardware-environment.md
new file mode 100644
index 0000000..cc122c9
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/hardware-environment.md
@@ -0,0 +1,47 @@
+# Hardware Environment
+
+DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
+
+## 1. Linux operating system version requirements
+
+| OS       | Version         |
+| :----------------------- | :----------: |
+| Red Hat Enterprise Linux | 7.0 and above   |
+| CentOS                   | 7.0 and above   |
+| Oracle Enterprise Linux  | 7.0 and above   |
+| Ubuntu LTS               | 16.04 and above |
+
+> **Attention:**
+>The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
+
+## 2. Recommended server configuration
+DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
+### Production Environment
+
+| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
+| --- | --- | --- | --- | --- |
+| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
+
+> **Attention:**
+> - The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.
+> - A hard disk size of more than 50GB is recommended, with the system disk and data disk separated.
+
+
+## 3. Network requirements
+
+DolphinScheduler provides the following network port configurations for normal operation:
+
+| Server | Port | Desc |
+|  --- | --- | --- |
+| MasterServer |  5678  | Not the communication port. Only requires that the local ports do not conflict |
+| WorkerServer | 1234  | Not the communication port. Only requires that the local ports do not conflict |
+| ApiApplicationServer |  12345 | Backend communication port |
+
+> **Attention:**
+> - MasterServer and WorkerServer do not need to enable network communication between each other; it is enough that the local ports do not conflict.
+> - Administrators can adjust relevant ports on the network side and host-side according to the deployment plan of DolphinScheduler components in the actual environment.
+
+## 4. Browser requirements
+
+DolphinScheduler recommends Chrome and the latest browsers using the Chrome kernel to access the front-end visual operation page.
+
diff --git a/docs/en-us/1.3.5/user_doc/metadata-1.3.md b/docs/en-us/1.3.5/user_doc/metadata-1.3.md
new file mode 100644
index 0000000..223867a
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/metadata-1.3.md
@@ -0,0 +1,173 @@
+# Dolphin Scheduler 1.3 MetaData
+
+### Dolphin Scheduler 1.3 DB Table Overview
+| Table Name | Comment |
+| :---: | :---: |
+| t_ds_access_token | token for access ds backend |
+| t_ds_alert | alert detail |
+| t_ds_alertgroup | alert group |
+| t_ds_command | command detail |
+| t_ds_datasource | data source |
+| t_ds_error_command | error command detail |
+| t_ds_process_definition | process definition |
+| t_ds_process_instance | process instance |
+| t_ds_project | project |
+| t_ds_queue | queue |
+| t_ds_relation_datasource_user | datasource related to user |
+| t_ds_relation_process_instance | sub process |
+| t_ds_relation_project_user | project related to user |
+| t_ds_relation_resources_user | resource related to user |
+| t_ds_relation_udfs_user | UDF related to user |
+| t_ds_relation_user_alertgroup | alert group related to user |
+| t_ds_resources | resource center file |
+| t_ds_schedules | process definition schedule |
+| t_ds_session | user login session |
+| t_ds_task_instance | task instance |
+| t_ds_tenant | tenant |
+| t_ds_udfs | UDF resource |
+| t_ds_user | user detail |
+| t_ds_version | ds version |
+
+
+---
+
+### E-R Diagram
+#### User Queue DataSource
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- Multiple users can belong to one tenant
+- The queue field in the t_ds_user table stores the queue_name from the t_ds_queue table, while t_ds_tenant stores queue information using queue_id. During the execution of a process definition, the user queue has the highest priority; if the user queue is empty, the tenant queue is used (see the sketch after this list).
+- The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
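+
+As an illustration of this priority rule, a query along the following lines resolves the queue a user's tasks would actually use. This is only a sketch: `user_name` and `tenant_id` in t_ds_user are assumptions not shown in the schema excerpts above, so verify the field names against your database.
+
+```shell
+# sketch: resolve the effective queue per user -- the user queue first, the tenant queue as fallback
+mysql -u{user} -p dolphinscheduler -e "
+SELECT u.user_name,
+       COALESCE(NULLIF(u.queue, ''), q.queue_name) AS effective_queue
+FROM t_ds_user u
+JOIN t_ds_tenant t ON u.tenant_id = t.id
+JOIN t_ds_queue  q ON t.queue_id  = q.id;"
+```
+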
+<a name="7euSN"></a>
+#### Project Resource Alert
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- User can have multiple projects, User project authorization completes the relationship binding using project_id and user_id in t_ds_relation_project_user table
+- The user_id in the t_ds_project table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents users who have permission to the project
+- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents the user who has permissions to the resource
+- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents a user who has permission to the UDF
+<a name="JEw4v"></a>
+#### Command Process Task
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
+
+- A project has multiple process definitions, a process definition can generate multiple process instances, and a process instance can generate multiple task instances
+- The t_ds_schedules table stores the timing schedule information of process definitions
+- The t_ds_relation_process_instance table records the relationship when a process definition contains sub-processes: parent_process_instance_id is the id of the parent process instance that contains the sub-process, process_instance_id is the id of the sub-process instance, and parent_task_instance_id is the id of the sub-process task node in the parent instance (see the sketch after this list)
+- The process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively.
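+
+As a concrete illustration of the sub-process relation described above, the mapping can be inspected directly (a sketch assuming a MySQL metadata database; the field names are taken from the description above):
+
+```shell
+# sketch: list sub-process instances together with their parent process instance and the sub-process task node
+mysql -u{user} -p dolphinscheduler -e "
+SELECT parent_process_instance_id,
+       parent_task_instance_id,
+       process_instance_id AS sub_process_instance_id
+FROM t_ds_relation_process_instance;"
+```
+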
+
+---
+
+<a name="yd79T"></a>
+### Core Table Schema
+<a name="6bVhH"></a>
+#### t_ds_process_definition
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process definition name |
+| version | int | process definition version |
+| release_state | tinyint | process definition release state:0:offline,1:online |
+| project_id | int | project id |
+| user_id | int | process definition creator id |
+| process_definition_json | longtext | process definition json content |
+| description | text | process definition description |
+| global_params | text | global parameters |
+| flag | tinyint | process is available: 0 not available, 1 available |
+| locations | text | Node location information |
+| connects | text | Node connection information |
+| receivers | text | receivers |
+| receivers_cc | text | carbon copy list |
+| create_time | datetime | create time |
+| timeout | int | timeout |
+| tenant_id | int | tenant id |
+| update_time | datetime | update time |
+
+<a name="t5uxM"></a>
+#### t_ds_process_instance
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process instance name |
+| process_definition_id | int | process definition id |
+| state | tinyint | process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| recovery | tinyint | process instance failover flag:0:normal,1:failover instance |
+| start_time | datetime | process instance start time |
+| end_time | datetime | process instance end time |
+| run_times | int | process instance run times |
+| host | varchar | process instance host |
+| command_type | tinyint | command type: 0 start workflow, 1 start from the current node, 2 resume a fault-tolerant process, 3 resume a paused process, 4 start from the failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread |
+| command_param | text | json command parameters |
+| task_depend_type | tinyint | task depend type. 0: only current node,1:before the node,2:later nodes |
+| max_try_times | tinyint | max try times |
+| failure_strategy | tinyint | failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed |
+| warning_type | tinyint | warning type. 0: no warning, 1: warning on process success, 2: warning on process failure, 3: warning on both success and failure |
+| warning_group_id | int | warning group id |
+| schedule_time | datetime | schedule time |
+| command_start_time | datetime | command start time |
+| global_params | text | global parameters |
+| process_instance_json | longtext | process instance json (a copy of the process definition json) |
+| flag | tinyint | process instance is available: 0 not available, 1 available |
+| update_time | timestamp | update time |
+| is_sub_process | int | whether the process is sub process:  1 sub-process,0 not sub-process |
+| executor_id | int | executor id |
+| locations | text | Node location information |
+| connects | text | Node connection information |
+| history_cmd | text | history commands of process instance operation |
+| dependence_schedule_times | text | depend schedule fire time |
+| process_instance_priority | int | process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+| timeout | int | time out |
+| tenant_id | int | tenant id |
+
+<a name="tHZsY"></a>
+#### t_ds_task_instance
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | task name |
+| task_type | varchar | task type |
+| process_definition_id | int | process definition id |
+| process_instance_id | int | process instance id |
+| task_json | longtext | task content json |
+| state | tinyint | Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| submit_time | datetime | task submit time |
+| start_time | datetime | task start time |
+| end_time | datetime | task end time |
+| host | varchar | host of task running on |
+| execute_path | varchar | task execute path in the host |
+| log_path | varchar | task log path |
+| alert_flag | tinyint | whether alert |
+| retry_times | int | task retry times |
+| pid | int | pid of task |
+| app_link | varchar | yarn app id |
+| flag | tinyint | taskinstance is available: 0 not available, 1 available |
+| retry_interval | int | retry interval when task failed  |
+| max_retry_times | int | max retry times |
+| task_instance_priority | int | task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+
+<a name="gLGtm"></a>
+#### t_ds_command
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| command_type | tinyint | Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread |
+| process_definition_id | int | process definition id |
+| command_param | text | json command parameters |
+| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward, 2 backward |
+| failure_strategy | tinyint | Failed policy: 0 end, 1 continue |
+| warning_type | tinyint | Alarm type: 0 do not send, 1 send on process success, 2 send on process failure, 3 send on both success and failure |
+| warning_group_id | int | warning group |
+| schedule_time | datetime | schedule time |
+| start_time | datetime | start time |
+| executor_id | int | executor id |
+| dependence | varchar | dependence |
+| update_time | datetime | update time |
+| process_instance_priority | int | process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+
+
+
diff --git a/docs/en-us/1.3.5/user_doc/quick-start.md b/docs/en-us/1.3.5/user_doc/quick-start.md
new file mode 100644
index 0000000..bf01b04
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/quick-start.md
@@ -0,0 +1,65 @@
+# Quick Start
+
+* Administrator user login
+
+  > Address:http://192.168.xx.xx:12345/dolphinscheduler  Username and password:admin/dolphinscheduler123
+
+<p align="center">
+   <img src="/img/login_en.png" width="60%" />
+ </p>
+
+* Create queue
+
+<p align="center">
+   <img src="/img/create-queue-en.png" width="60%" />
+ </p>
+
+  * Create tenant
+      <p align="center">
+    <img src="/img/create-tenant-en.png" width="60%" />
+  </p>
+
+  * Create an ordinary user
+<p align="center">
+      <img src="/img/create-user-en.png" width="60%" />
+ </p>
+
+  * Create an alarm group
+
+ <p align="center">
+    <img src="/img/alarm-group-en.png" width="60%" />
+  </p>
+
+  
+  * Create a worker group
+  
+   <p align="center">
+      <img src="/img/worker-group-en.png" width="60%" />
+    </p>
+    
+ * Create a token
+  
+   <p align="center">
+      <img src="/img/token-en.png" width="60%" />
+    </p>
+     
+  
+  * Log in with regular users
+  > Click the user name in the upper right corner, choose "Log out", and then log in again as the ordinary user.
+
+  * Project Management -> Create Project -> Click on Project Name
+<p align="center">
+      <img src="/img/create_project_en.png" width="60%" />
+ </p>
+
+  * Click Workflow Definition -> Create Workflow Definition -> Online Process Definition
+
+<p align="center">
+   <img src="/img/process_definition_en.png" width="60%" />
+ </p>
+
+  * Running Process Definition -> Click Workflow Instance -> Click Process Instance Name -> Double-click Task Node -> View Task Execution Log
+
+ <p align="center">
+   <img src="/img/log_en.png" width="60%" />
+</p>
diff --git a/docs/en-us/1.3.5/user_doc/standalone-deployment.md b/docs/en-us/1.3.5/user_doc/standalone-deployment.md
new file mode 100644
index 0000000..9cd61a8
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/standalone-deployment.md
@@ -0,0 +1,340 @@
+# Standalone Deployment
+
+# 1、Install basic software (please install the required software yourself)
+
+ * PostgreSQL (8.2.15+) or MySQL (5.7): choose one; a JDBC Driver 5.1.47+ is required if MySQL is used
+ * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): Required. Make sure the JAVA_HOME and PATH environment variables are configured in /etc/profile
+ * ZooKeeper (3.4.6+): Required
+ * Hadoop (2.6+) or MinIO: Optional. If you need the resource center, for standalone deployment you can use a local directory as the upload destination (no Hadoop deployment required); of course, you can also choose to upload to Hadoop or MinIO.
+
+```markdown
+ Tips: DolphinScheduler itself does not rely on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
+```
+
+# 2、Download the binary tar.gz package.
+
+- Please download the latest installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [Download](/en-us/download/download.html). Download the package, move it to the deployment directory, and unzip it.
+
+```shell
+# Create the deployment directory. Please do not choose a high-privilege directory such as /root or /home.
+mkdir -p /opt/dolphinscheduler;
+cd /opt/dolphinscheduler;
+
+# unzip
+tar -zxvf apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
+
+# rename
+mv apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin  dolphinscheduler-bin
+```
+
+# 3、Create deployment user and assign directory operation permissions
+
+- Create a deployment user, and be sure to configure passwordless sudo for it. Here we take creating a dolphinscheduler user as an example.
+
+```shell
+# To create a user, you need to log in as root and set the deployment user name.
+useradd dolphinscheduler;
+
+# Set the user password, please modify it yourself.
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# Configure passwordless sudo
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+# Modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory
+chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
+```
+
+```
+ Notes:
+ - Because task execution uses 'sudo -u {linux-user}' to switch among different Linux users for multi-tenant job running, the deployment user must have passwordless sudo permission. Beginners can ignore this point for now.
+ - Please comment out the line "Defaults    requiretty" if it is present in the "/etc/sudoers" file.
+ - If you need to use resource upload, you need to grant the user permission to operate the local file system, HDFS, or MinIO.
+```
+
+# 4、Passwordless SSH configuration
+
+- Switch to the deployment user and configure passwordless SSH login to localhost
+
+  ```shell
+  su dolphinscheduler;
+
+  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+  chmod 600 ~/.ssh/authorized_keys
+  ```
+  
+  Note: *If the configuration succeeds, the dolphinscheduler user does not need to enter a password when executing `ssh localhost`.*
+
+# 5、Database initialization
+
+- Log in to the database. The default database type is PostgreSQL; if you choose MySQL, you need to add the mysql-connector-java driver jar to the lib directory of DolphinScheduler.
+```
+mysql -uroot -p
+```
+
+- After logging in to the database command line, execute the database initialization commands and set the user and password.
+
+**Note: {user} and {password} need to be replaced with a specific database username and password.**
+
+ ``` mysql
+    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+    mysql> flush privileges;
+ ```
+
+- Create tables and import basic data
+
+    - Modify the following configuration in datasource.properties under the conf directory.
+
+    ```shell
+      vi conf/datasource.properties
+    ```
+
+    - If you choose MySQL, please comment out the PostgreSQL configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) to the lib directory (see the example after the configuration block), and then configure the database connection information correctly.
+
+    ```properties
+      #postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      # replace xxx with the correct IP address
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+      # replace xxx with the correct {username} value
+      spring.datasource.username=xxx
+      # replace xxx with the correct {password} value
+      spring.datasource.password=xxx
+    ```
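+
+    For example, after downloading the driver, copying it into the lib directory might look like the following sketch (the jar version shown is illustrative; any 5.1.47+ build works, and the path assumes the unpack location used in step 2):
+
+    ```shell
+    # assuming the driver jar was downloaded to the current directory and
+    # the package was unpacked to /opt/dolphinscheduler/dolphinscheduler-bin
+    cp mysql-connector-java-5.1.47.jar /opt/dolphinscheduler/dolphinscheduler-bin/lib/
+    ```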
+
+    - After modifying and saving, execute **create-dolphinscheduler.sh** in the script directory.
+
+    ```shell
+    sh script/create-dolphinscheduler.sh
+    ```
+
+       *Note: If executing the above script reports a "/bin/java: No such file or directory" error, please configure the JAVA_HOME and PATH variables in /etc/profile.*
+
+# 6、Modify runtime parameters.
+
+- Modify the environment variables in the `dolphinscheduler_env.sh` file under the 'conf/env' directory (taking software installed under '/opt/soft' as an example)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        #export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+    ```
+
+     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured; components that are not used can be ignored or commented out. If you cannot find dolphinscheduler_env.sh, please run ls -a.`
+
+- Create JDK soft link to /usr/bin/java (still JAVA_HOME=/opt/soft/java as an example)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - Modify the parameters in the one-click deployment config file `conf/config/install_config.conf`, paying special attention to the following parameters.
+
+    ```shell
+    # choose mysql or postgresql
+    dbtype="mysql"
+
+    # Database connection address and port
+    dbhost="localhost:3306"
+
+    # database name
+    dbname="dolphinscheduler"
+
+    # database username
+    username="xxx"
+
+    # database password
+    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
+    password="xxx"
+
+    # Zookeeper address, localhost:2181, remember port 2181
+    zkQuorum="localhost:2181"
+
+    # Note: the target installation path for dolphinscheduler, please do not use current path (pwd)
+    installPath="/opt/soft/dolphinscheduler"
+
+    # deployment user
+    # Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself
+    deployUser="dolphinscheduler"
+
+    # alert config,take QQ email for example
+    # mail protocol
+    mailProtocol="SMTP"
+
+    # mail server host
+    mailServerHost="smtp.qq.com"
+
+    # mail server port
+    # note: Different protocols and encryption methods use different ports; when SSL/TLS is enabled the port may differ, so make sure it is correct.
+    mailServerPort="25"
+
+    # mail sender
+    mailSender="xxx@qq.com"
+
+    # mail user
+    mailUser="xxx@qq.com"
+
+    # mail sender password
+    # note: The mail.passwd is email service authorization code, not the email login password.
+    mailPassword="xxx"
+
+    # Whether the TLS mail protocol is supported: true means supported, false means not supported
+    starttlsEnable="true"
+
+    # Whether the SSL mail protocol is supported: true means supported, false means not supported
+    # note: only one of TLS and SSL can be set to true.
+    sslEnable="false"
+
+    # note: sslTrust is the same as mailServerHost
+    sslTrust="smtp.qq.com"
+
+    # resource storage type:HDFS,S3,NONE
+    resourceStorageType="HDFS"
+
+    # here is an example of saving to a local file system
+    # Note: If you want to upload resource files (jar files and so on) to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml of the Hadoop cluster into the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, set it to a specific IP or host name.
+    defaultFS="file:///data/dolphinscheduler"
+
+    # if you do not use the Hadoop resourcemanager, keep the default value; if resourcemanager HA is enabled, enter the HA IPs; if resourcemanager is single, leave this value empty
+    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+    # if resourcemanager HA is enabled or you do not use resourcemanager, skip this setting; if resourcemanager is single, just replace yarnIp1 with the actual resourcemanager hostname.
+    singleYarnIp="yarnIp1"
+
+    # resource storage path on HDFS/S3. Resource files will be stored under this path; make sure the directory exists on HDFS and has read/write permissions. /dolphinscheduler is recommended
+    resourceUploadPath="/data/dolphinscheduler"
+
+    # specify the user who have permissions to create directory under HDFS/S3 root path
+    hdfsRootUser="hdfs"
+
+    # On which machines to deploy the DS service, choose localhost for this machine
+    ips="localhost"
+
+    # ssh port, default 22
+    # Note: if ssh port is not default, modify here
+    sshPort="22"
+
+    # run master machine
+    masters="localhost"
+
+    # run worker machine
+    workers="localhost"
+
+    # run alert machine
+    alertServer="localhost"
+
+    # run api machine
+    apiServers="localhost"
+
+    ```
+
+    *Attention:* if you need the resource upload function, please execute the commands below:
+
+    ```
+    
+    sudo mkdir /data/dolphinscheduler
+    sudo chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler 
+    
+    ```
+
+# 7、Automated Deployment
+
+- Switch to the deployment user and execute the one-click deployment script
+
+    `sh install.sh`
+
+   ```
+   Note:
+   On the first deployment, the following message may appear during the `3, stop server` step. It can be ignored.
+   sh: bin/dolphinscheduler-daemon.sh: No such file or directory
+   ```
+
+- After the script completes, the following 5 services will be started. Use the `jps` command to check whether the services have started (`jps` comes with the Java JDK)
+
+```aidl
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    LoggerServer         ----- logger service
+    ApiApplicationServer ----- api service
+    AlertServer          ----- alert service
+```
+If the above services started normally, the automatic deployment is successful.
+
+After the deployment succeeds, you can view the logs, which are stored in the logs folder.
+
+```
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+    └── dolphinscheduler-logger-server.log
+```
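+
+A quick way to confirm that the processes are up and to follow a log while troubleshooting (run from the installation directory):
+
+```shell
+# list the five DolphinScheduler JVM processes
+jps | grep -E 'MasterServer|WorkerServer|LoggerServer|ApiApplicationServer|AlertServer'
+
+# follow the master log, for example
+tail -f logs/dolphinscheduler-master-server.log
+```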
+
+# 8、login
+
+- Access the front-end page address, replacing the IP with your own:
+http://192.168.xx.xx:12345/dolphinscheduler
+
+   <p align="center">
+     <img src="/img/login.png" width="60%" />
+   </p>
+
+# 9、Start and stop service
+
+* Stop all services
+
+  ` sh ./bin/stop-all.sh`
+
+* Start all services
+
+  ` sh ./bin/start-all.sh`
+
+* Start and stop master service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+```
+
+* Start and stop worker Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+```
+
+* Start and stop api Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+```
+
+* Start and stop logger Service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start logger-server
+sh ./bin/dolphinscheduler-daemon.sh stop logger-server
+```
+
+* Start and stop alert service
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+```
+
+``Note: Please refer to the "Architecture Design" section for service usage``
diff --git a/docs/en-us/1.3.5/user_doc/system-manual.md b/docs/en-us/1.3.5/user_doc/system-manual.md
new file mode 100644
index 0000000..6a388db
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/system-manual.md
@@ -0,0 +1,888 @@
+# System User Manual
+
+## Get started quickly
+
+> Please refer to [Quick Start](quick-start.html)
+
+## Operation guide
+
+### 1. Home
+
+The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.
+
+<p align="center">
+<img src="/img/home_en.png" width="80%" />
+</p>
+
+### 2. Project management
+
+#### 2.1 Create project
+
+- Click "Project Management" to enter the project management page, click the "Create Project" button, enter the project name, project description, and click "Submit" to create a new project.
+
+  <p align="center">
+      <img src="/img/create_project_en1.png" width="80%" />
+  </p>
+
+#### 2.2 Project home
+
+- Click the project name link on the project management page to enter the project home page, as shown in the figure below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of the project.
+  <p align="center">
+     <img src="/img/project_home_en.png" width="80%" />
+  </p>
+
+- Task status statistics: within the specified time range, count the number of task instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault tolerance required, killed, and waiting for thread
+- Process status statistics: within the specified time range, count the number of workflow instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault tolerance required, killed, and waiting for thread
+- Workflow definition statistics: Count the workflow definitions created by this user and the workflow definitions granted to this user by the administrator
+
+#### 2.3 Workflow definition
+
+#### <span id=creatDag>2.3.1 Create workflow definition</span>
+
+- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, and click the "Create Workflow" button to enter the **workflow DAG edit** page, as shown in the following figure:
+  <p align="center">
+      <img src="/img/dag5.png" width="80%" />
+  </p>
+- Drag in the toolbar <img src="/img/shell.png" width="35"/> Add a Shell task to the drawing board, as shown in the figure below:
+  <p align="center">
+      <img src="/img/shell-en.png" width="80%" />
+  </p>
+- **Add parameter settings for this shell task:**
+
+1. Fill in the "Node Name", "Description", and "Script" fields;
+2. Check “Normal” for “Run Flag”. If “Prohibit Execution” is checked, the task will not be executed when the workflow runs;
+3. Select "Task Priority": When the number of worker threads is insufficient, high-level tasks will be executed first in the execution queue, and tasks with the same priority will be executed in the order of first in, first out;
+4. Timeout alarm (optional): Check the timeout alarm, timeout failure, and fill in the "timeout period". When the task execution time exceeds **timeout period**, an alert email will be sent and the task timeout fails;
+5. Resources (optional). Resource files are files created or uploaded on the Resource Center -> File Management page. For example, the file name is `test.sh`, and the command to call the resource in the script is `sh test.sh`;
+6. Custom parameters (optional), refer to [Custom Parameters](#UserDefinedParameters);
+7. Click the "Confirm Add" button to save the task settings.
+
+- **Set the task execution order:** Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect the tasks; as shown in the figure below, tasks 2 and 3 are executed in parallel: when task 1 finishes executing, tasks 2 and 3 are executed simultaneously.
+
+  <p align="center">
+     <img src="/img/dag6.png" width="80%" />
+  </p>
+
+- **Delete dependencies:** Click the "arrow" icon in the upper right corner <img src="/img/arrow.png" width="35"/>, select the connection line, and click the "Delete" icon in the upper right corner <img src="/img/delete.png" width="35"/> to delete the dependency between tasks.
+  <p align="center">
+     <img src="/img/dag7.png" width="80%" />
+  </p>
+
+- **Save workflow definition:** Click the "Save" button, and the "Set DAG chart name" dialog pops up, as shown in the figure below. Enter the workflow definition name and description, set global parameters (optional, refer to [Custom Parameters](#UserDefinedParameters)), and click the "Add" button; the workflow definition is created successfully.
+  <p align="center">
+     <img src="/img/dag8.png" width="80%" />
+   </p>
+> For other types of tasks, please refer to [Task Node Type and Parameter Settings](#TaskParamers).
+
+#### 2.3.2 Workflow definition operation function
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown below:
+
+<p align="center">
+<img src="/img/work_list_en.png" width="80%" />
+</p>
+The operation functions of the workflow definition list are as follows:
+
+- **Edit:** Only "offline" workflow definitions can be edited. Workflow DAG editing is the same as [Create Workflow Definition](#creatDag).
+- **Online:** When the workflow status is "Offline", this puts the workflow online. Only workflows in the "Online" state can be run, but they cannot be edited.
+- **Offline:** When the workflow status is "Online", this takes the workflow offline. Only workflows in the "Offline" state can be edited, but they cannot be run.
+- **Run:** Only workflow in the online state can run. See [2.3.3 Run Workflow](#runWorkflow) for the operation steps
+- **Timing:** Timing can only be set in online workflows, and the system automatically schedules the workflow to run on a regular basis. The status after creating a timing is "offline", and the timing must be online on the timing management page to take effect. See [2.3.4 Workflow Timing](#creatTiming) for timing operation steps.
+- **Timing Management:** The timing management page can be edited, online/offline, and deleted.
+- **Delete:** Delete the workflow definition.
+- **Download:** Download workflow definition to local.
+- **Tree Diagram:** Display the task node type and task status in a tree structure, as shown in the figure below:
+  <p align="center">
+      <img src="/img/tree_en.png" width="80%" />
+  </p>
+
+#### <span id=runWorkflow>2.3.3 Run the workflow</span>
+
+- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the "Go Online" button <img src="/img/online.png" width="35"/> to put the workflow online.
+  <p align="center">
+      <img src="/img/work_list_en.png" width="80%" />
+  </p>
+
+- Click the "Run" button to pop up the startup parameter setting pop-up box, as shown in the figure below, set the startup parameters, click the "Run" button in the pop-up box, the workflow starts running, and the workflow instance page generates a workflow instance.
+     <p align="center">
+       <img src="/img/run_work_en.png" width="80%" />
+     </p>  
+  <span id=runParamers>Description of workflow operating parameters:</span> 
+       
+      * Failure strategy: the strategy for other parallel task nodes when a task node fails. "Continue" means that after a task fails, the other task nodes keep running normally; "End" means that all running tasks are terminated and the entire process is terminated.
+      * Notification strategy: when the process ends, a process execution notification email is sent according to the process status: do not send, send on success, send on failure, or send on success or failure.
+      * Process priority: the priority of process execution, divided into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in, first-out order.
+      * Worker group: the process can only be executed in the specified worker machine group. The default is Default, which can run on any worker.
+      * Notification group: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or emails are sent to all members of the notification group.
+      * Recipient: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or alarm emails are sent to the recipient list.
+      * Cc: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or alarm emails are copied to the CC list.
+      * Startup parameter: set or override global parameter values when starting a new process instance.
+      * Complement: two modes, serial complement and parallel complement. Serial complement: within the specified time range, the complement runs sequentially from the start date to the end date, generating only one process instance; parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances.
+    * For example, suppose you need to complement the data from May 1 to May 10.
+
+    <p align="center">
+        <img src="/img/complement_en1.png" width="80%" />
+    </p>
+
+  > Serial mode: The complement is executed sequentially from May 1 to May 10, and a process instance is generated on the process instance page;
+
+  > Parallel mode: The tasks from May 1 to May 10 are executed simultaneously, and 10 process instances are generated on the process instance page.
+
+#### <span id=creatTiming>2.3.4 Workflow timing</span>
+
+- Create timing: Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, put the workflow online, and click the "Timing" button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog pops up, as shown in the figure below:
+  <p align="center">
+      <img src="/img/time_schedule_en.png" width="80%" />
+  </p>
+- Choose the start and end time. Within this time range, the workflow is run on the schedule; outside this range, no more scheduled workflow instances are generated.
+- Add a timing that is executed once every day at 5 AM, as shown in the following figure:
+  <p align="center">
+      <img src="/img/timer-en.png" width="80%" />
+  </p>
+- Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as [workflow running parameters](#runParamers).
+- Click the "Create" button to create the timing successfully. At this time, the timing status is "**Offline**" and the timing needs to be **Online** to take effect.
+- Timing online: Click the "timing management" button <img src="/img/timeManagement.png" width="35"/>, enter the timing management page, click the "online" button, the timing status will change to "online", as shown in the below figure, the workflow takes effect regularly.
+  <p align="center">
+      <img src="/img/time-manage-list-en.png" width="80%" />
+  </p>
+
+#### 2.3.5 Import workflow
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, click the "Import Workflow" button to import the local workflow file, the workflow definition list displays the imported workflow, and the status is offline.
+
+#### 2.4 Workflow instance
+
+#### 2.4.1 View workflow instance
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:
+     <p align="center">
+        <img src="/img/instance-list-en.png" width="80%" />
+     </p>
+- Click the workflow name to enter the DAG view page to view the task execution status, as shown in the figure below.
+  <p align="center">
+    <img src="/img/instance-runs-en.png" width="80%" />
+  </p>
+
+#### 2.4.2 View task log
+
+- Enter the workflow instance page, click the workflow name, enter the DAG view page, double-click the task node, as shown in the following figure:
+   <p align="center">
+     <img src="/img/instanceViewLog-en.png" width="80%" />
+   </p>
+- Click "View Log", a log pop-up box will pop up, as shown in the figure below, the task log can also be viewed on the task instance page, refer to [Task View Log](#taskLog)。
+   <p align="center">
+     <img src="/img/task-log-en.png" width="80%" />
+   </p>
+
+#### 2.4.3 View task history
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Double-click the task node, as shown in the figure below, click "View History" to jump to the task instance page, and display a list of task instances running by the workflow instance
+   <p align="center">
+     <img src="/img/task_history_en.png" width="80%" />
+   </p>
+
+#### 2.4.4 View operating parameters
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Click the icon in the upper left corner <img src="/img/run_params_button.png" width="35"/> to view the startup parameters of the workflow instance; click the icon <img src="/img/global_param.png" width="35"/> to view the global and local parameters of the workflow instance, as shown in the following figure:
+   <p align="center">
+     <img src="/img/run_params_en.png" width="80%" />
+   </p>
+
+#### 2.4.5 Workflow instance operation function
+
+Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:
+
+  <p align="center">
+    <img src="/img/instance-list-en.png" width="80%" />
+  </p>
+
+- **Edit:** Only terminated processes can be edited. Click the "Edit" button or the workflow instance name to enter the DAG edit page. After editing, click the "Save" button and the Save DAG dialog pops up, as shown in the figure below. If "Whether to update to workflow definition" is checked before saving, the workflow definition will be updated; if it is not checked, the workflow definition will not be updated.
+     <p align="center">
+       <img src="/img/editDag-en.png" width="80%" />
+     </p>
+- **Rerun:** Re-execute the terminated process.
+- **Recovery failed:** For failed processes, you can perform recovery operations, starting from the failed node.
+- **Stop:** **Stop** the running process: the backend first sends `kill` to the worker process, and then executes `kill -9`.
+- **Pause:** **Pause** the running process: the system status changes to **waiting for execution**, the currently running task is allowed to finish, and the next task to be executed is paused.
+- **Resume pause:** To resume the paused process, start running directly from the **paused node**
+- **Delete:** Delete the workflow instance and the task instance under the workflow instance
+- **Gantt chart:** The vertical axis of the Gantt chart is the topological sorting of task instances under a certain workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:
+     <p align="center">
+         <img src="/img/gantt-en.png" width="80%" />
+     </p>
+
+#### 2.5 Task instance
+
+- Click Project Management -> Workflow -> Task Instance to enter the task instance page, as shown in the figure below, click the name of the workflow instance, you can jump to the workflow instance DAG chart to view the task status.
+     <p align="center">
+        <img src="/img/task-list-en.png" width="80%" />
+     </p>
+
+- <span id=taskLog>View log:</span>Click the "view log" button in the operation column to view the log of task execution.
+     <p align="center">
+        <img src="/img/task-log2-en.png" width="80%" />
+     </p>
+
+### 3. Resource Center
+
+#### 3.1 hdfs resource configuration
+
+- To upload resource files and UDF functions, all uploaded files and resources will be stored on HDFS, so the following configuration items are required:
+
+```
+
+conf/common/common.properties
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # base data directory: resource files will be stored under this HDFS path; make sure the directory exists on HDFS and has read/write permissions. "/dolphinscheduler" is recommended
+    data.store2hdfs.basepath=/dolphinscheduler
+    # resource upload startup type : HDFS,S3,NONE
+    res.upload.startup.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+
+conf/common/hadoop.properties
+    # HA or single namenode. If namenode HA is used, core-site.xml and hdfs-site.xml need to be copied
+    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020
+    # for resourcemanager HA this needs the IPs; leave it empty for a single resourcemanager
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
+```
+
+- Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured; leave the other one empty.
+- You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the DolphinScheduler project and restart the api-server service (see the sketch below).
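+
+A minimal sketch of that step, assuming the Hadoop client configuration lives under /opt/soft/hadoop/etc/hadoop and DolphinScheduler is installed under /opt/soft/dolphinscheduler (the paths used in the deployment examples; adjust them to your environment):
+
+```shell
+# copy the HDFS client configuration into the DolphinScheduler conf directory
+cp /opt/soft/hadoop/etc/hadoop/core-site.xml \
+   /opt/soft/hadoop/etc/hadoop/hdfs-site.xml \
+   /opt/soft/dolphinscheduler/conf/
+
+# restart the api-server so that the new configuration is picked up
+sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh stop api-server
+sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh start api-server
+```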
+
+#### 3.2 File management
+
+> File management covers various resource files, including creating basic txt/log/sh/conf/py/java files, uploading jar packages and other file types, and supports edit, rename, download, delete and other operations.
+
+  <p align="center">
+   <img src="/img/file-manage-en.png" width="80%" />
+ </p>
+
+- Create a file
+  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties
+
+<p align="center">
+   <img src="/img/file_create_en.png" width="80%" />
+ </p>
+
+- upload files
+
+> Upload file: Click the "Upload File" button or drag the file to the upload area; the file name field is automatically filled with the name of the uploaded file
+
+<p align="center">
+   <img src="/img/file-upload-en.png" width="80%" />
+ </p>
+
+- File View
+
+> For the file types that can be viewed, click the file name to view the file details
+
+<p align="center">
+   <img src="/img/file_detail_en.png" width="80%" />
+ </p>
+
+- download file
+
+> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file
+
+- File rename
+
+<p align="center">
+   <img src="/img/file_rename_en.png" width="80%" />
+ </p>
+
+- delete
+  > File list -> Click the "Delete" button to delete the specified file
+
+#### 3.3 UDF management
+
+#### 3.3.1 Resource management
+
+> Resource management is similar to file management; the difference is that resource management holds uploaded UDF functions, while file management holds user programs, scripts, and configuration files.
+> Operation function: rename, download, delete.
+
+- Upload udf resources
+  > Same as uploading files.
+
+#### 3.3.2 Function management
+
+- Create UDF function
+  > Click "Create UDF Function", enter the udf function parameters, select the udf resource, and click "Submit" to create the udf function.
+
+> Currently only temporary HIVE UDF functions are supported
+
+- UDF function name: the name when the UDF function is entered
+- Package name Class name: Enter the full path of the UDF function
+- UDF resource: Set the resource file corresponding to the created UDF
+
+<p align="center">
+   <img src="/img/udf_edit_en.png" width="80%" />
+ </p>
+
+### 4. Create data source
+
+> Data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, ORACLE, SQLSERVER and other data sources
+
+#### 4.1 Create/Edit MySQL data source
+
+- Click "Data Source Center -> Create Data Source" to create different types of data sources according to requirements.
+
+- Data source: select MYSQL
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP hostname: enter the IP to connect to MySQL
+- Port: Enter the port to connect to MySQL
+- Username: Set the username for connecting to MySQL
+- Password: Set the password for connecting to MySQL
+- Database name: Enter the name of the database connected to MySQL
+- Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON
+
+<p align="center">
+   <img src="/img/mysql-en.png" width="80%" />
+ </p>
+
+> Click "Test Connection" to test whether the data source can be successfully connected.
+
+#### 4.2 Create/Edit POSTGRESQL data source
+
+- Data source: select POSTGRESQL
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Host Name: Enter the IP to connect to POSTGRESQL
+- Port: Enter the port to connect to POSTGRESQL
+- Username: Set the username for connecting to POSTGRESQL
+- Password: Set the password for connecting to POSTGRESQL
+- Database name: Enter the name of the database connected to POSTGRESQL
+- Jdbc connection parameters: parameter settings for the POSTGRESQL connection, filled in as JSON
+
+<p align="center">
+   <img src="/img/postgresql-en.png" width="80%" />
+ </p>
+
+#### 4.3 Create/Edit HIVE data source
+
+1. Use HiveServer2 to connect
+
+ <p align="center">
+    <img src="/img/hive-en.png" width="80%" />
+  </p>
+
+- Data source: select HIVE
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Host Name: Enter the IP connected to HIVE
+- Port: Enter the port connected to HIVE
+- Username: Set the username for connecting to HIVE
+- Password: Set the password for connecting to HIVE
+- Database name: Enter the name of the database connected to HIVE
+- Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON
+
+  2. Use HiveServer2 HA (ZooKeeper) to connect
+
+ <p align="center">
+    <img src="/img/hive1-en.png" width="80%" />
+  </p>
+
+Note: If you enable **kerberos**, you need to fill in **Principal**
+
+<p align="center">
+    <img src="/img/hive-en.png" width="80%" />
+  </p>
+
+#### 4.4 Create/Edit Spark data source
+
+<p align="center">
+   <img src="/img/spark-en.png" width="80%" />
+ </p>
+
+- Data source: select Spark
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Hostname: Enter the IP connected to Spark
+- Port: Enter the port connected to Spark
+- Username: Set the username for connecting to Spark
+- Password: Set the password for connecting to Spark
+- Database name: Enter the name of the database connected to Spark
+- Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON
+
+### 5. Security Center (Permission System)
+
+     * Only the administrator account in the security center has the authority to operate. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management, and token management. In the user management module, resources, data sources, projects, etc. can be authorized.
+     * Administrator login, default user name and password: admin/dolphinscheduler123
+
+#### 5.1 Create queue
+
+- Queues are used when executing programs such as Spark and MapReduce that require a "queue" parameter.
+- The administrator enters the Security Center->Queue Management page and clicks the "Create Queue" button to create a queue.
+<p align="center">
+   <img src="/img/create-queue-en.png" width="80%" />
+ </p>
+
+#### 5.2 Add tenant
+
+- The tenant corresponds to the Linux user, which is used by the worker to submit the job. If Linux does not have this user, the worker will create this user when executing the script.
+- Tenant Code: **the tenant code is a unique Linux user and cannot be repeated**
+- The administrator enters the Security Center->Tenant Management page and clicks the "Create Tenant" button to create a tenant.
+
+ <p align="center">
+    <img src="/img/addtenant-en.png" width="80%" />
+  </p>
+
+#### 5.3 Create normal user
+
+- Users are divided into **administrator users** and **normal users**
+
+  - The administrator has authorization and user management authority, but does not have the authority to create project and workflow definition operations.
+  - Ordinary users can create projects and create, edit, and execute workflow definitions.
+  - Note: If a user switches tenants, all resources under the user's current tenant will be copied to the new tenant.
+
+- The administrator enters the Security Center -> User Management page and clicks the "Create User" button to create a user.
+<p align="center">
+   <img src="/img/user-en.png" width="80%" />
+ </p>
+
+> **Edit user information**
+
+- The administrator enters the Security Center->User Management page and clicks the "Edit" button to edit user information.
+- After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the "Edit" button to edit the user information.
+
+> **Modify user password**
+
+- The administrator enters the Security Center->User Management page and clicks the "Edit" button. When editing user information, enter the new password to modify the user password.
+- After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the "Edit" button, then the password modification is successful.
+
+#### 5.4 Create alarm group
+
+- The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group in the form of email.
+
+* The administrator enters the Security Center -> Alarm Group Management page and clicks the "Create Alarm Group" button to create an alarm group.
+
+  <p align="center">
+    <img src="/img/mail-en.png" width="80%" />
+
+#### 5.5 Token management
+
+> Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.
+
+- The administrator enters the Security Center -> Token Management page, clicks the "Create Token" button, selects the expiration time and user, clicks the "Generate Token" button, and clicks the "Submit" button, then the selected user's token is created successfully.
+
+  <p align="center">
+      <img src="/img/create-token-en.png" width="80%" />
+   </p>
+
+  - After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the "generate token" button, and click the "submit" button, then the user creates a token successfully.
+  - Call example:
+
+```java
+// Token call example (uses Apache HttpClient 4.x)
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.http.NameValuePair;
+import org.apache.http.client.entity.UrlEncodedFormEntity;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.http.message.BasicNameValuePair;
+import org.apache.http.util.EntityUtils;
+
+    /**
+     * test token
+     */
+    public void doPOSTParam() throws Exception {
+        // create HttpClient
+        CloseableHttpClient httpclient = HttpClients.createDefault();
+
+        // create http post request
+        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/escheduler/projects/create");
+        httpPost.setHeader("token", "123");
+        // set parameters
+        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
+        parameters.add(new BasicNameValuePair("projectName", "qzw"));
+        parameters.add(new BasicNameValuePair("desc", "qzw"));
+        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
+        httpPost.setEntity(formEntity);
+        CloseableHttpResponse response = null;
+        try {
+            // execute
+            response = httpclient.execute(httpPost);
+            // response status code 200
+            if (response.getStatusLine().getStatusCode() == 200) {
+                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
+                System.out.println(content);
+            }
+        } finally {
+            if (response != null) {
+                response.close();
+            }
+            httpclient.close();
+        }
+    }
+```
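+
+The same call can be made from the command line; a minimal curl equivalent of the Java snippet above (endpoint, header name, and parameters are taken from that example, so adjust them to your own host and token) is:
+
+```shell
+curl -X POST "http://127.0.0.1:12345/escheduler/projects/create" \
+     -H "token: 123" \
+     -d "projectName=qzw" \
+     -d "desc=qzw"
+```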
+
+#### 5.6 Granted permission
+
+    * Granted permissions include project permissions, resource permissions, data source permissions, UDF function permissions.
+    * The administrator can authorize the projects, resources, data sources and UDF functions not created by ordinary users. Because the authorization methods for projects, resources, data sources and UDF functions are the same, we take project authorization as an example.
+    * Note: For projects created by the user themselves, the user already has all permissions, so these projects are not shown in the project list or the selected project list.
+
+- The administrator enters the Security Center -> User Management page and clicks the "Authorize" button of the user who needs to be authorized, as shown in the figure below:
+ <p align="center">
+  <img src="/img/auth-en.png" width="80%" />
+</p>
+
+- Select the project to authorize the project.
+
+<p align="center">
+   <img src="/img/authproject-en.png" width="80%" />
+ </p>
+
+- Resources, data sources, and UDF function authorization are the same as project authorization.
+
+### 6. Monitoring Center
+
+#### 6.1 Service management
+
+- Service management is mainly to monitor and display the health status and basic information of each service in the system
+
+#### 6.1.1 master monitoring
+
+- Mainly related to master information.
+<p align="center">
+   <img src="/img/master-jk-en.png" width="80%" />
+ </p>
+
+#### 6.1.2 worker monitoring
+
+- Mainly related to worker information.
+
+<p align="center">
+   <img src="/img/worker-jk-en.png" width="80%" />
+ </p>
+
+#### 6.1.3 Zookeeper monitoring
+
+- Mainly the related configuration information of each worker and master in ZooKeeper.
+
+<p align="center">
+   <img src="/img/zookeeper-monitor-en.png" width="80%" />
+ </p>
+
+#### 6.1.4 DB monitoring
+
+- Mainly the health of the DB
+
+<p align="center">
+   <img src="/img/mysql-jk-en.png" width="80%" />
+ </p>
+
+#### 6.2 Statistics management
+
+<p align="center">
+   <img src="/img/statistics-en.png" width="80%" />
+ </p>
+
+- Number of commands to be executed: statistics on the t_ds_command table
+- The number of failed commands: statistics on the t_ds_error_command table
+- Number of tasks to run: Count the data of task_queue in Zookeeper
+- Number of tasks to be killed: Count the data of task_kill in Zookeeper
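+
+The first two counters above can be cross-checked directly against the metadata tables, for example (a sketch assuming a MySQL metadata database named dolphinscheduler):
+
+```shell
+mysql -u{user} -p dolphinscheduler -e "SELECT COUNT(*) AS commands_to_run FROM t_ds_command;"
+mysql -u{user} -p dolphinscheduler -e "SELECT COUNT(*) AS failed_commands FROM t_ds_error_command;"
+```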
+
+### 7. <span id=TaskParamers>Task node type and parameter settings</span>
+
+#### 7.1 Shell node
+
+> Shell node: when the worker executes it, a temporary shell script is generated and executed by the Linux user with the same name as the tenant.
+
+- Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag <img src="/img/shell.png" width="35"/> from the toolbar to the drawing board, as shown in the figure below:
+
+  <p align="center">
+      <img src="/img/shell-en.png" width="80%" />
+  </p>
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retries: the number of times a failed task is resubmitted. It can be selected from the drop-down or filled in manually.
+- Failed retry interval: the interval before resubmitting a failed task. It can be selected from the drop-down or filled in manually.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Script: SHELL program developed by users.
+- Resource: Refers to the list of resource files that need to be called in the script; these are files uploaded or created in Resource Center -> File Management.
+- User-defined parameters: user-defined parameters local to the SHELL task; occurrences of \${variable} in the script are replaced with their values.
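+
+For illustration only, a minimal sketch of a shell task script that uses a user-defined parameter (the parameter name `bizdate` and the resource file `run.sh` are assumptions, not taken from the docs):
+
+```bash
+#!/bin/bash
+# ${bizdate} is replaced by DolphinScheduler before the script runs,
+# because bizdate was declared as a user-defined parameter on this node.
+echo "running batch for ${bizdate}"
+# run.sh would be a file selected in the Resource list of this node
+sh run.sh "${bizdate}"
+```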
+
+#### 7.2 Sub-process node
+
+- The sub-process node is to execute a certain external workflow definition as a task node.
+  > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/sub-process-en.png" width="80%" />
+ </p>
+
+- Node name: The node name in a workflow definition is unique
+- Run flag: identify whether this node can be scheduled normally
+- Descriptive information: describe the function of the node
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Sub-node: It is the workflow definition of the selected sub-process. Enter the sub-node in the upper right corner to jump to the workflow definition of the selected sub-process
+
+#### 7.3 DEPENDENT node
+
+- Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node will check whether process B has a successful execution yesterday.
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node in the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/dependent-nodes-en.png" width="80%" />
+ </p>
+
+> The dependent node provides a logical judgment function, such as checking whether the B process was successful yesterday, or whether the C process was executed successfully.
+
+  <p align="center">
+   <img src="/img/depend-node-en.png" width="80%" />
+ </p>
+
+> For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:
+
+ <p align="center">
+   <img src="/img/depend-node1-en.png" width="80%" />
+ </p>
+
+> If the weekly report A also needs to be executed successfully last Tuesday:
+
+ <p align="center">
+   <img src="/img/depend-node3-en.png" width="80%" />
+ </p>
+
+#### 7.4 Stored procedure node
+
+- According to the selected data source, execute the stored procedure.
+  > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/procedure-en.png" width="80%" />
+ </p>
+
+- Data source: the data source type of the stored procedure supports MySQL and PostgreSQL; select the corresponding data source
+- Method: the method name of the stored procedure
+- Custom parameters: The custom parameter types of the stored procedure support IN and OUT, and the data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN
+
+#### 7.5 SQL node
+
+- Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png) task node from the toolbar to the drawing board
+- Non-query SQL function: edit non-query SQL task information, select non-query for sql type, as shown in the figure below:
+ <p align="center">
+  <img src="/img/sql-en.png" width="80%" />
+</p>
+
+- Query SQL function: edit query SQL task information, select "query" as the sql type, and choose form or attachment to send the results by mail to the specified recipients, as shown in the figure below.
+
+<p align="center">
+   <img src="/img/sql-node-en.png" width="80%" />
+ </p>
+
+- Data source: select the corresponding data source
+- sql type: supports query and non-query. A query is a SELECT-type statement that returns a result set; you can choose one of three email notification templates: form, attachment, or form + attachment. Non-query statements return no result set and cover the update, delete, and insert operations.
+- sql parameter: the input parameter format is key1=value1;key2=value2...
+- sql statement: SQL statement
+- UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.
+- Custom parameters: as with the stored procedure task type, custom parameters are an ordered list of values for the statement, and the custom parameter types and data types are the same. The difference is that the custom parameters of the SQL task type replace the \${variable} placeholders in the SQL statement.
+- Pre-sql: Pre-sql is executed before the sql statement.
+- Post-sql: Post-sql is executed after the sql statement.
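+
+As a hedged illustration of the fields above, this is roughly what a query-type SQL task ends up running (database, table and parameter values are made up, and the real execution goes through the configured datasource rather than the mysql CLI):
+
+```bash
+mysql -u demo -p demo_db -e "insert into emp (id, name) values (1, 'Li');"   # Pre-sql
+mysql -u demo -p demo_db -e "select id, name, age from emp where id = 1;"    # sql statement, with ${id} already replaced by 1
+mysql -u demo -p demo_db -e "delete from emp_tmp where id = 1;"              # Post-sql
+```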
+
+#### 7.6 SPARK node
+
+- Through the SPARK node, you can directly execute a Spark program. For the spark node, the worker submits the task using `spark-submit`
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/spark-submit-en.png" width="80%" />
+ </p>
+
+- Program type: supports the JAVA, Scala and Python languages
+- The class of the main function: the full path of the Main Class, the entry point of the Spark program
+- Main jar package: the Spark jar package
+- Deployment mode: supports the three modes yarn-cluster, yarn-client and local
+- Driver cores and memory: set the number of driver cores and the driver memory size
+- Executors: set the number of executors, the executor memory size, and the number of executor cores
+- Command line parameters: set the input parameters of the Spark program; the substitution of custom parameter variables is supported.
+- Other parameters: support the --jars, --files, --archives and --conf formats
+- Resource: if a resource file is referenced in the other parameters, it needs to be selected and specified here
+- User-defined parameter: user-defined parameters local to the SPARK task, which replace the \${variable} placeholders in the script
+
+Note: JAVA and Scala are only used for identification and make no difference. If the Spark application is developed in Python, there is no main function class, and the other fields are the same.
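+
+As a rough, non-authoritative sketch, the form fields above map onto a `spark-submit` invocation along these lines (the class name, jar name and argument are illustrative):
+
+```bash
+spark-submit \
+  --master yarn \
+  --deploy-mode cluster \
+  --class org.apache.spark.examples.SparkPi \
+  --driver-cores 1 --driver-memory 512M \
+  --num-executors 2 --executor-memory 2G --executor-cores 2 \
+  spark-examples.jar 10
+```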
+
+#### 7.7 MapReduce(MR) node
+
+- Using the MR node, you can directly execute an MR program. For the mr node, the worker submits the task using `hadoop jar`
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png) task node in the toolbar to the drawing board, as shown in the following figure:
+
+1.  JAVA program
+
+ <p align="center">
+   <img src="/img/mr_java_en.png" width="80%" />
+ </p>
+
+- The class of the main function: is the full path of the Main Class, the entry point of the MR program
+- Program type: select JAVA language
+- Main jar package: is the MR jar package
+- Command line parameters: set the input parameters of the MR program and support the substitution of custom parameter variables
+- Other parameters: support -D, -files, -libjars, -archives format
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with \${variable} in the script
+
+2. Python program
+
+<p align="center">
+   <img src="/img/mr_edit_en.png" width="80%" />
+ </p>
+
+- Program type: select Python language
+- Main jar package: is the Python jar package for running MR
+- Other parameters: support the -D, -mapper, -reducer, -input and -output formats; user-defined parameter input can be set here, for example:
+- -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/\${currentTimeMillis}
+- The "mapper.py 1" after -mapper contains two parameters: the first parameter is mapper.py, and the second parameter is 1
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with \${variable} in the script
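+
+For reference, a hedged sketch of the kind of `hadoop jar` streaming command the Python parameters above correspond to (the streaming jar name/version and the concrete output path are assumptions):
+
+```bash
+hadoop jar hadoop-streaming-2.7.3.jar \
+  -input /journey/words.txt \
+  -output /journey/out/mr/20210214 \
+  -mapper "mapper.py 1" -file mapper.py \
+  -reducer reducer.py -file reducer.py
+```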
+
+#### 7.8 Python Node
+
+- Using the python node, you can directly execute python scripts. For the python node, the worker runs the script with the `python` command.
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/python-en.png" width="80%" />
+ </p>
+
+- Script: Python program developed by the user
+- Resources: refers to the list of resource files that need to be called in the script
+- User-defined parameter: It is a local user-defined parameter of Python, which will replace the content with \${variable} in the script
+- Note: If you import a python file from the resource directory tree, the directory needs to contain an `__init__.py` file
+
+#### 7.9 Flink Node
+
+- Drag the <img src="/img/flink.png" width="35"/> task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+  <img src="/img/flink-en.png" width="80%" />
+</p>
+
+- Program type: supports the JAVA, Scala and Python languages
+- The class of the main function: the full path of the Main Class, the entry point of the Flink program
+- Main jar package: the Flink jar package
+- Deployment mode: supports the two modes cluster and local
+- Number of slots: set the number of slots
+- Number of taskManagers: set the number of taskManagers
+- JobManager memory: set the jobManager memory size
+- TaskManager memory: set the taskManager memory size
+- Command line parameters: set the input parameters of the Flink program; the substitution of custom parameter variables is supported.
+- Other parameters: support --jars, --files, --archives, --conf format
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- Custom parameter: It is a local user-defined parameter of Flink, which will replace the content with \${variable} in the script
+
+Note: JAVA and Scala are only used for identification and make no difference. If the Flink application is developed in Python, there is no main function class, and the other fields are the same.
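+
+As a rough sketch only, these fields correspond to a `flink run` command of roughly the following shape (the class and jar names are illustrative, and the exact YARN flags differ between Flink versions):
+
+```bash
+flink run -m yarn-cluster \
+  -ys 1 -yjm 1024 -ytm 2048 \
+  -c com.flink.demo flink-demo.jar 100
+```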
+
+#### 7.10 http Node
+
+- Drag the <img src="/img/http.png" width="35"/> task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/http-en.png" width="80%" />
+ </p>
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retry attempts: The number of times a failed task will be resubmitted. It supports drop-down and hand-filling.
+- Failed retry interval: The time interval before a failed task is resubmitted. It supports drop-down and hand-filling.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Request address: http request URL.
+- Request type: supports GET, POST, HEAD, PUT, DELETE.
+- Request parameters: supports Parameter, Body, Headers.
+- Verification conditions: supports default response code, custom response code, content included, content not included.
+- Verification content: required when the verification condition is "custom response code", "content included" or "content not included".
+- Custom parameter: It is a user-defined parameter of http part, which will replace the content with \${variable} in the script.
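+
+Purely for illustration, the behaviour of an http task is comparable to a request like the following (the URL, header and parameters are made up); the node then evaluates the verification condition, for example the default response code 200:
+
+```bash
+curl -X POST "http://service.example.com/api/trigger" \
+  -H "Content-Type: application/x-www-form-urlencoded" \
+  -d "bizdate=20210214&flag=1"
+```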
+
+#### 7.11 DATAX Node
+
+- Drag the <img src="/img/datax.png" width="35"/> task node from the toolbar to the drawing board
+
+  <p align="center">
+   <img src="/img/datax-en.png" width="80%" />
+  </p>
+
+- Custom template: When you turn on the custom template switch, you can customize the content of the json configuration file of the datax node (applicable when the control configuration does not meet the requirements)
+- Data source: select the data source to extract the data
+- sql statement: the sql statement used to extract data from the target database, the sql query column name is automatically parsed when the node is executed, and mapped to the target table synchronization column name. When the source table and target table column names are inconsistent, they can be converted by column alias (as)
+- Target library: select the target library for data synchronization
+- Target table: the name of the target table for data synchronization
+- Pre-sql: Pre-sql is executed before the sql statement (executed by the target library).
+- Post-sql: Post-sql is executed after the sql statement (executed by the target library).
+- json: json configuration file for datax synchronization
+- Custom parameters: as with the stored procedure task type, custom parameters are an ordered list of values for the statement, and the custom parameter types and data types are the same. The difference is that they replace the \${variable} placeholders in the SQL statement.
+
+### 8. Parameters
+
+#### 8.1 System parameters
+
+<table>
+    <tr><th>variable</th><th>meaning</th></tr>
+    <tr>
+        <td>${system.biz.date}</td>
+        <td>The day before the scheduled time of the daily scheduling instance, in yyyyMMdd format; when backfilling data, the date is +1</td>
+    </tr>
+    <tr>
+        <td>${system.biz.curdate}</td>
+        <td>The scheduled time of the daily scheduling instance, in yyyyMMdd format; when backfilling data, the date is +1</td>
+    </tr>
+    <tr>
+        <td>${system.datetime}</td>
+        <td>The scheduled time of the daily scheduling instance, in yyyyMMddHHmmss format; when backfilling data, the date is +1</td>
+    </tr>
+</table>
+
+#### 8.2 Time custom parameters
+
+- Custom variable names are supported in the code; the declaration format is \${variable name}. A variable can refer to a "system parameter" or specify a "constant".
+
+- We define the benchmark variable in the $[...] format; $[yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as $[yyyyMMdd], $[HHmmss], \$[yyyy-MM-dd], etc.
+
+- The following format can also be used:
+
+      * Next N years: $[add_months(yyyyMMdd,12*N)]
+      * N years before: $[add_months(yyyyMMdd,-12*N)]
+      * Next N months: $[add_months(yyyyMMdd,N)]
+      * N months before: $[add_months(yyyyMMdd,-N)]
+      * Next N weeks: $[yyyyMMdd+7*N]
+      * N weeks before: $[yyyyMMdd-7*N]
+      * Next N days: $[yyyyMMdd+N]
+      * N days before: $[yyyyMMdd-N]
+      * Next N hours: $[HHmmss+N/24]
+      * N hours before: $[HHmmss-N/24]
+      * Next N minutes: $[HHmmss+N/24/60]
+      * N minutes before: $[HHmmss-N/24/60]
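+
+An illustrative resolution of a few of these expressions, assuming a scheduled time of 2021-02-14 11:05:00 and a user-defined parameter `dt` whose value is set to one of them (the parameter name is an assumption):
+
+```bash
+#   $[yyyyMMdd]                -> 20210214
+#   $[yyyy-MM-dd]              -> 2021-02-14
+#   $[add_months(yyyyMMdd,-1)] -> 20210114
+#   $[yyyyMMdd-7]              -> 20210207
+# The task script then references the parameter as usual:
+echo "partition date: ${dt}"
+```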
+
+#### 8.3 <span id=UserDefinedParameters>User-defined parameters</span>
+
+- User-defined parameters are divided into global parameters and local parameters. Global parameters are passed in when saving the workflow definition or starting a workflow instance, and can be referenced in the local parameters of any task node in the entire process.
+  For example:
+
+<p align="center">
+   <img src="/img/local_parameter_en.png" width="80%" />
+ </p>
+
+- global_bizdate is a global parameter, which refers to a system parameter.
+
+<p align="center">
+   <img src="/img/global_parameter_en.png" width="80%" />
+ </p>
+
+- In the task, local_param_bizdate uses \${global_bizdate} to refer to global parameters. For scripts, you can use \${local_param_bizdate} to refer to the value of global variable global_bizdate, or directly set the value of local_param_bizdate through JDBC.
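+
+A hedged summary of the reference chain shown in the screenshots above (the parameter names follow the example; the echo line is illustrative):
+
+```bash
+#   global parameter : global_bizdate      = ${system.biz.date}
+#   local parameter  : local_param_bizdate = ${global_bizdate}
+# Inside the task script the local parameter is then used directly:
+echo "business date is ${local_param_bizdate}"
+```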
diff --git a/docs/en-us/1.3.5/user_doc/task-structure.md b/docs/en-us/1.3.5/user_doc/task-structure.md
new file mode 100644
index 0000000..378f14c
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/task-structure.md
@@ -0,0 +1,1131 @@
+
+# Overall Tasks Storage Structure
+All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.
+
+The following shows the 't_ds_process_definition' table structure:
+
+
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|id|int(11)|primary key
+2|name|varchar(255)|process definition name
+3|version|int(11)|process definition version
+4|release_state|tinyint(4)|release status of process definition: 0 not online, 1 online
+5|project_id|int(11)|project id
+6|user_id|int(11)|user id of the process definition
+7|process_definition_json|longtext|process definition JSON
+8|description|text|process definition description
+9|global_params|text|global parameters
+10|flag|tinyint(4)|specify whether the process is available: 0 is not available, 1 is available
+11|locations|text|node location information
+12|connects|text|node connectivity info
+13|receivers|text|receivers
+14|receivers_cc|text|CC receivers
+15|create_time|datetime|create time
+16|timeout|int(11) |timeout
+17|tenant_id|int(11) |tenant id
+18|update_time|datetime|update time
+19|modify_by|varchar(36)|specifics of the user that made the modification
+20|resource_ids|varchar(255)|resource ids
+
+The 'process_definition_json' field is the core field, which defines the task information in the DAG diagram, and it is stored in JSON format.
+
+The following table describes the common data structure.
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|globalParams|Array|global parameters
+2|tasks|Array|task collections in the process [for the structure of each type, please refer to the following sections]
+3|tenantId|int|tenant ID
+4|timeout|int|timeout
+
+Data example:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"golbal_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
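+
+To look at the stored JSON of a concrete workflow, a query of the following shape can be used (the database name, credentials and workflow name are placeholders, assuming a MySQL metadata store):
+
+```bash
+mysql -u dolphinscheduler -p dolphinscheduler \
+  -e "SELECT id, name, version, process_definition_json FROM t_ds_process_definition WHERE name = 'demo_workflow'\G"
+```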
+
+# The Detailed Explanation of The Storage Structure of Each Task Type
+
+## Shell Nodes
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |SHELL
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch | |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
+
+
+## SQL Node
+Perform data query and update operations on the specified datasource through SQL.
+
+**The node data structure is as follows:**
+No.|parameter name||type|description |note
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task id|
+2|type ||String |task type |SQL
+3| name| |String|task name|
+4| params| |Object|customized parameters|Json format
+5| |type |String |database type
+6| |datasource |Int |datasource id
+7| |sql |String |query SQL statement
+8| |udfs | String| udf functions|specify UDF function ids, separate by comma
+9| |sqlType | String| SQL node type |0 for query and 1 for none-query SQL
+10| |title |String | mail title
+11| |receivers |String |receivers
+12| |receiversCc |String |CC receivers
+13| |showType | String|display type of mail|optionals: TABLE or ATTACHMENT
+14| |connParams | String|connect parameters
+15| |preStatements | Array|preposition SQL statements
+16| | postStatements| Array|postposition SQL statements||
+17| | localParams| Array|customized parameters||
+18|description | |String|description | |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch  | |
+21| | successNode| Array|jump to node if success| |
+22| | failedNode|Array|jump to node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times | |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority | |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE [stored procedures] Node
+**The node data structure is as follows:**
+**Node data example:**
+
+## SPARK Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |SPARK
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String | main class
+6| |mainArgs | String| execution arguments
+7| |others | String| other arguments
+8| |mainJar |Object | application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |driverCores | String| driver cores
+11| |driverMemory | String| driver memory
+12| |numExecutors |String | executor count
+13| |executorMemory |String | executor memory
+14| |executorCores |String | executor cores
+15| |programType | String| program type|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
+17| | localParams| Array|customized local parameters
+18| | resourceList| Array|resource files
+19|description | |String|description | |
+20|runFlag | |String |execution flag| |
+21|conditionResult | |Object|condition branch| |
+22| | successNode| Array|jump to node if success| |
+23| | failedNode|Array|jump to node if failure| 
+24| dependence| |Object |task dependency |mutual exclusion with params
+25|maxRetryTimes | |String|max retry times | |
+26|retryInterval | |String |retry interval| |
+27|timeout | |Object|timeout | |
+28| taskInstancePriority| |String|task priority | |
+29|workerGroup | |String |Worker group| |
+30|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## MapReduce(MR) Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |MR
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String | main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object | application jar package
+9| |programType | String|program type|JAVA,PYTHON
+10| | localParams| Array|customized local parameters
+11| | resourceList| Array|resource files
+12|description | |String|description | |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump to node if success| |
+16| | failedNode|Array|jump to node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times | |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|  task Id|
+2|type ||String |task type|PYTHON
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String| Python script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure | 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+
+## Flink Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|FLINK
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String |main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object |application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |slot | String| slot count
+11| |taskManager |String | taskManager count
+12| |taskManagerMemory |String |taskManager memory size
+13| |jobManagerMemory |String | jobManager memory size
+14| |programType | String| program type|JAVA,SCALA,PYTHON
+15| | localParams| Array|local parameters
+16| | resourceList| Array|resource files
+17|description | |String|description | |
+18|runFlag | |String |execution flag| |
+19|conditionResult | |Object|condition branch| |
+20| | successNode| Array|jump node if success| |
+21| | failedNode|Array|jump node if failure| 
+22| dependence| |Object |task dependency |mutual exclusion with params
+23|maxRetryTimes | |String|max retry times| |
+24|retryInterval | |String |retry interval| |
+25|timeout | |Object|timeout | |
+26| taskInstancePriority| |String|task priority| |
+27|workerGroup | |String |Worker group| |
+28|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|HTTP
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |url |String |request url
+6| |httpMethod | String|http method|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|http parameters
+8| |httpCheckCondition | String|validation of HTTP code status|default code 200
+9| |condition |String |validation conditions
+10| | localParams| Array|customized local parameters
+11|description | |String|description| |
+12|runFlag | |String |execution flag| |
+13|conditionResult | |Object|condition branch| |
+14| | successNode| Array|jump node if success| |
+15| | failedNode|Array|jump node if failure| 
+16| dependence| |Object |task dependency |mutual exclusion with params
+17|maxRetryTimes | |String|max retry times | |
+18|retryInterval | |String |retry interval| |
+19|timeout | |Object|timeout | |
+20| taskInstancePriority| |String|task priority| |
+21|workerGroup | |String |Worker group| |
+22|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## DataX Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type|DATAX
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |customConfig |Int |specify whether use customized config| 0 none customized, 1 customized
+6| |dsType |String | datasource type
+7| |dataSource |Int | datasource ID
+8| |dtType | String|target database type
+9| |dataTarget | Int|target database ID 
+10| |sql |String | SQL statements
+11| |targetTable |String |target table
+12| |jobSpeedByte |Int |job speed limiting(bytes)
+13| |jobSpeedRecord | Int|job speed limiting(records)
+14| |preStatements | Array|preposition SQL
+15| | postStatements| Array|postposition SQL
+16| | json| String|customized configs|valid if customConfig=1
+17| | localParams| Array|customized parameters|valid if customConfig=1
+18|description | |String|description| |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch| |
+21| | successNode| Array|jump node if success| |
+22| | failedNode|Array|jump node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times| |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority| |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks| |
+
+
+
+**Node data example:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## Sqoop Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task ID|
+2|type ||String |task type|SQOOP
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| | concurrency| Int|concurrency rate
+6| | modelType|String |flow direction|import,export
+7| |sourceType|String |datasource type|
+8| |sourceParams |String|datasource parameters| JSON format
+9| | targetType|String |target datasource type
+10| |targetParams | String|target datasource parameters|JSON format
+11| |localParams |Array |customized local parameters
+12|description | |String|description| |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump node if success| |
+16| | failedNode|Array|jump node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times| |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
+                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+## Condition Branch Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type |CONDITIONS
+3| name| |String|task name |
+4| params| |Object|customized parameters | null
+5|description | |String|description| |
+6|runFlag | |String |execution flag| |
+7|conditionResult | |Object|condition branch | |
+8| | successNode| Array|jump to node if success| |
+9| | failedNode|Array|jump to node if failure| 
+10| dependence| |Object |task dependency |mutual exclusion with params
+11|maxRetryTimes | |String|max retry times | |
+12|retryInterval | |String |retry interval| |
+13|timeout | |Object|timeout | |
+14| taskInstancePriority| |String|task priority | |
+15|workerGroup | |String |Worker group| |
+16|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
+
+
+## Subprocess Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|SUB_PROCESS
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |processDefinitionId |Int| process definition ID
+6|description | |String|description | |
+7|runFlag | |String |execution flag| |
+8|conditionResult | |Object|condition branch | |
+9| | successNode| Array|jump to node if success| |
+10| | failedNode|Array|jump to node if failure| 
+11| dependence| |Object |task dependency |mutual exclusion with params
+12|maxRetryTimes | |String|max retry times| |
+13|retryInterval | |String |retry interval| |
+14|timeout | |Object|timeout| |
+15| taskInstancePriority| |String|task priority| |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## DEPENDENT Node
+**The node data structure is as follows:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|DEPENDENT
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String|Shell script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description| |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14| | relation|String |relation|AND,OR
+15| | dependTaskList|Array |dependent task list|
+16|maxRetryTimes | |String|max retry times| |
+17|retryInterval | |String |retry interval| |
+18|timeout | |Object|timeout| |
+19| taskInstancePriority| |String|task priority| |
+20|workerGroup | |String |Worker group| |
+21|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
diff --git a/docs/en-us/1.3.5/user_doc/upgrade.md b/docs/en-us/1.3.5/user_doc/upgrade.md
new file mode 100644
index 0000000..2ea0764
--- /dev/null
+++ b/docs/en-us/1.3.5/user_doc/upgrade.md
@@ -0,0 +1,80 @@
+
+# DolphinScheduler upgrade documentation
+
+## 1. Back up previous version's files and database.
+
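+For example, the backup could be done along these lines (the installation path, database name and credentials are placeholders):
+
+```bash
+# archive the previous installation directory
+tar -czf dolphinscheduler-backup.tar.gz /opt/dolphinscheduler
+# dump the metadata database (MySQL shown; use pg_dump for PostgreSQL)
+mysqldump -u root -p dolphinscheduler > dolphinscheduler-backup.sql
+```
+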
+## 2. Stop all services of DolphinScheduler.
+
+ `sh ./script/stop-all.sh`
+
+## 3. Download the new version's installation package.
+
+- [Download](/en-us/download/download.html) the latest version of the installation packages.
+- The following upgrade operations need to be performed in the new version's directory.
+
+## 4. Database upgrade
+- Modify the following properties in conf/datasource.properties.
+
+- If you use MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL related configurations, add the mysql connector jar into the lib dir (here we download mysql-connector-java-5.1.47.jar), and then correctly configure the database connection information. You can download the mysql connector jar [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use PostgreSQL as the database, you just need to comment out the MySQL related configurations and correctly configure the database connection information.
+
+    ```properties
+      # postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+      spring.datasource.username=xxx
+      spring.datasource.password=xxx
+    ```
+
+- Execute database upgrade script
+
+    `sh ./script/upgrade-dolphinscheduler.sh`
+
+## 5. Backend service upgrade.
+
+### 5.1 Modify the content in `conf/config/install_config.conf` file.
+- For standalone deployment, please refer to [6, Modify running arguments] in [Standalone-Deployment](/en-us/docs/1.3.5/user_doc/standalone-deployment.html).
+- For cluster deployment, please refer to [6, Modify running arguments] in [Cluster-Deployment](/en-us/docs/1.3.5/user_doc/cluster-deployment.html).
+
+#### Points that need attention
+Worker group creation is designed differently from version 1.3.1 onwards:
+
+- Before version 1.3.1, worker groups could be created through the UI.
+- Since version 1.3.1, worker groups are created by modifying the worker configuration.
+
+#### When upgrading from a version earlier than 1.3.1 to 1.3.2, the following operations are needed to keep the worker group configuration consistent with the previous version
+
+1. Go to the backup database and check the records in the t_ds_worker_group table, focusing on the id, name and ip_list columns.
+
+| id | name | ip_list    |
+| :---         |     :---:      |          ---: |
+| 1   | service1     | 192.168.xx.10    |
+| 2   | service2     | 192.168.xx.11,192.168.xx.12      |
+
+2. Modify the workers config item in the conf/config/install_config.conf file.
+
+Assume the worker service is to be deployed on the machines below:
+
+| hostname | ip |
+| :---  | :---:  |
+| ds1   | 192.168.xx.10     |
+| ds2   | 192.168.xx.11     |
+| ds3   | 192.168.xx.12     |
+
+To keep the worker group configuration consistent with the previous version, we need to modify the workers config item as below:
+
+```shell
+# Specifies on which machines the worker service is deployed, and also which worker group each worker belongs to.
+workers="ds1:service1,ds2:service2,ds3:service2"
+```
+
+#### The worker group has been enhanced in version 1.3.2.
+In 1.3.1, a worker cannot belong to more than one worker group; since 1.3.2 this is supported. So `workers="ds1:service1,ds1:service2"` is not supported in 1.3.1 but is supported in 1.3.2.
+  
+### 5.2 Execute deploy script.
+```shell
+sh install.sh
+```
+
+
diff --git a/docs/zh-cn/1.3.5/user_doc/architecture-design.md b/docs/zh-cn/1.3.5/user_doc/architecture-design.md
new file mode 100644
index 0000000..446cfc8
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/architecture-design.md
@@ -0,0 +1,331 @@
+## 系统架构设计
+在对调度系统架构说明之前,我们先来认识一下调度系统常用的名词
+
+### 1.名词解释
+**DAG:** 全称Directed Acyclic Graph,简称DAG。工作流中的Task任务以有向无环图的形式组装起来,从入度为零的节点进行拓扑遍历,直到无后继节点为止。举例如下图:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag示例"  width="60%" />
+  <p align="center">
+        <em>dag示例</em>
+  </p>
+</p>
+
+**流程定义**:通过拖拽任务节点并建立任务节点的关联所形成的可视化**DAG**
+
+**流程实例**:流程实例是流程定义的实例化,可以通过手动启动或定时调度生成,流程定义每运行一次,产生一个流程实例
+
+**任务实例**:任务实例是流程定义中任务节点的实例化,标识着具体的任务执行状态
+
+**任务类型**: 目前支持有SHELL、SQL、SUB_PROCESS(子流程)、PROCEDURE、MR、SPARK、PYTHON、DEPENDENT(依赖),同时计划支持动态插件扩展,注意:其中 **SUB_PROCESS** 也是一个单独的流程定义,是可以单独启动执行的
+
+**调度方式:** 系统支持基于cron表达式的定时调度和手动调度。命令类型支持:启动工作流、从当前节点开始执行、恢复被容错的工作流、恢复暂停流程、从失败节点开始执行、补数、定时、重跑、暂停、停止、恢复等待线程。其中 **恢复被容错的工作流** 和 **恢复等待线程** 两种命令类型是由调度内部控制使用,外部无法调用
+
+**定时调度**:系统采用 **quartz** 分布式调度器,并同时支持cron表达式可视化的生成
+
+**依赖**:系统不单单支持 **DAG** 简单的前驱和后继节点之间的依赖,同时还提供**任务依赖**节点,支持**流程间的自定义任务依赖**
+
+**优先级** :支持流程实例和任务实例的优先级,如果流程实例和任务实例的优先级不设置,则默认是先进先出
+
+**邮件告警**:支持 **SQL任务** 查询结果邮件发送,流程实例运行结果邮件告警及容错告警通知
+
+**失败策略**:对于并行运行的任务,如果有任务失败,提供两种失败策略处理方式,**继续**是指不管并行运行任务的状态,直到流程失败结束。**结束**是指一旦发现失败任务,则同时Kill掉正在运行的并行任务,流程失败结束
+
+**补数**:补历史数据,支持**区间并行和串行**两种补数方式
+
+### 2.系统架构
+
+#### 2.1 系统架构图
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="系统架构图"  width="70%" />
+  <p align="center">
+        <em>系统架构图</em>
+  </p>
+</p>
+
+#### 2.2 启动流程活动图
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="启动流程活动图"  width="70%" />
+  <p align="center">
+        <em>启动流程活动图</em>
+  </p>
+</p>
+
+#### 2.3 架构说明
+
+* **MasterServer** 
+
+    MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
+    MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
+    MasterServer基于netty提供监听服务。
+
+    ##### 该服务内主要包含:
+
+    - **Distributed Quartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作
+
+    - **MasterSchedulerThread**是一个扫描线程,定时扫描数据库中的 **command** 表,根据不同的**命令类型**进行不同的业务操作
+
+    - **MasterExecThread**主要是负责DAG任务切分、任务提交监控、各种不同命令类型的逻辑处理
+
+    - **MasterTaskExecThread**主要负责任务的持久化
+
+* **WorkerServer** 
+
+     WorkerServer也采用分布式无中心设计理念,WorkerServer主要负责任务的执行和提供日志服务。
+     WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
+     WorkerServer基于netty提供监听服务。
+     ##### 该服务包含:
+     - **FetchTaskThread**主要负责不断从**Task Queue**中领取任务,并根据不同任务类型调用**TaskScheduleThread**对应执行器。
+
+     - **LoggerServer**是一个RPC服务,提供日志分片查看、刷新和下载等功能
+
+* **ZooKeeper** 
+
+    ZooKeeper服务,系统中的MasterServer和WorkerServer节点都通过ZooKeeper来进行集群管理和容错。另外系统还基于ZooKeeper进行事件监听和分布式锁。
+    我们也曾经基于Redis实现过队列,不过我们希望DolphinScheduler依赖到的组件尽量地少,所以最后还是去掉了Redis实现。
+
+* **Task Queue** 
+
+    提供任务队列的操作,目前队列也是基于Zookeeper来实现。由于队列中存的信息较少,不必担心队列里数据过多的情况,实际上我们压测过百万级数据存队列,对系统稳定性和性能没影响。
+
+* **Alert** 
+
+    提供告警相关接口,接口主要包括**告警组维护**和**告警**两种类型的告警数据的存储、查询和通知功能。其中通知功能又有**邮件通知**和**SNMP(暂未实现)**两种。
+
+* **API** 
+
+    API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
+    接口包括工作流的创建、定义、查询、修改、发布、下线、手工启动、停止、暂停、恢复、从该节点开始执行等等。
+
+* **UI** 
+
+    系统的前端页面,提供系统的各种可视化操作界面,详见<a href="/zh-cn/docs/user_doc/system-manual.html" target="_self">系统使用手册</a>部分。
+
+#### 2.4 架构设计思想
+
+##### 一、去中心化vs中心化 
+
+###### 中心化思想
+
+中心化的设计理念比较简单,分布式集群中的节点按照角色分工,大体上分为两种角色:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave角色"  width="50%" />
+ </p>
+
+- Master的角色主要负责任务分发并监督Slave的健康状态,可以动态的将任务均衡到Slave上,以致Slave节点不至于“忙死”或”闲死”的状态。
+- Worker的角色主要负责任务的执行工作并维护和Master的心跳,以便Master可以分配任务给Slave。
+
+
+
+中心化思想设计存在的问题:
+
+- 一旦Master出现了问题,则群龙无首,整个集群就会崩溃。为了解决这个问题,大多数Master/Slave架构模式都采用了主备Master的设计方案,可以是热备或者冷备,也可以是自动切换或手动切换,而且越来越多的新系统都开始具备自动选举切换Master的能力,以提升系统的可用性。
+- 另外一个问题是如果Scheduler在Master上,虽然可以支持一个DAG中不同的任务运行在不同的机器上,但是会产生Master的过负载。如果Scheduler在Slave上,则一个DAG中所有的任务都只能在某一台机器上进行作业提交,则并行任务比较多的时候,Slave的压力可能会比较大。
+
+
+
+###### 去中心化
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="去中心化"  width="50%" />
+ </p>
+
+- 在去中心化设计里,通常没有Master/Slave的概念,所有的角色都是一样的,地位是平等的,全球互联网就是一个典型的去中心化的分布式系统,联网的任意节点设备down机,都只会影响很小范围的功能。
+- 去中心化设计的核心在于整个分布式系统中不存在一个区别于其他节点的“管理者”,因此不存在单点故障问题。但由于不存在“管理者”节点,所以每个节点都需要跟其他节点通信才能得到必要的机器信息,而分布式系统通信的不可靠性,大大增加了上述功能的实现难度。
+- 实际上,真正去中心化的分布式系统并不多见。反而动态中心化分布式系统正在不断涌出。在这种架构下,集群中的管理者是被动态选择出来的,而不是预置的,并且集群在发生故障的时候,集群的节点会自发的举行"会议"来选举新的"管理者"去主持工作。最典型的案例就是ZooKeeper及Go语言实现的Etcd。
+
+
+
+- DolphinScheduler的去中心化是Master/Worker注册到Zookeeper中,实现Master集群和Worker集群无中心,并使用Zookeeper分布式锁来选举其中的一台Master或Worker为“管理者”来执行任务。
+
+#####  二、分布式锁实践
+
+DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Master执行Scheduler,或者只有一台Worker执行任务的提交。
+1. 获取分布式锁的核心流程算法如下
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="获取分布式锁流程"  width="50%" />
+ </p>
+
+2. DolphinScheduler中Scheduler线程分布式锁实现流程图:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="获取分布式锁流程"  width="50%" />
+ </p>
+
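+下面给出一个基于 Curator 获取 ZooKeeper 分布式锁的最小示意,仅用于说明“抢到锁的节点才执行调度”这一思路,其中的 ZooKeeper 地址、锁路径和超时时间均为假设值,并非 DolphinScheduler 源码中的真实实现:
+
+```java
+// 仅为示意:基于 Curator 的 InterProcessMutex 获取分布式锁
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+import java.util.concurrent.TimeUnit;
+
+public class MasterLockDemo {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "192.168.xx.xx:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // 锁路径为假设值
+        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        // 只有成功获取锁的 Master 才执行 Scheduler 逻辑,保证同一时刻只有一台 Master 在调度
+        if (lock.acquire(10, TimeUnit.SECONDS)) {
+            try {
+                // do schedule ...
+            } finally {
+                lock.release();
+            }
+        }
+        client.close();
+    }
+}
+```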
+
+##### 三、线程不足循环等待问题
+
+-  如果一个DAG中没有子流程,当Command中的数据条数大于线程池设置的阈值时,流程会直接等待或失败。
+-  如果一个大的DAG中嵌套了很多子流程,如下图则会产生“死等”状态:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="线程不足循环等待问题"  width="50%" />
+ </p>
+上图中MainFlowThread等待SubFlowThread1结束,SubFlowThread1等待SubFlowThread2结束, SubFlowThread2等待SubFlowThread3结束,而SubFlowThread3等待线程池有新线程,则整个DAG流程不能结束,从而其中的线程也不能释放。这样就形成了子父流程循环等待的状态。此时除非启动新的Master来增加线程以打破这样的“僵局”,否则调度集群将不能再使用。
+
+通过启动新Master来打破僵局的做法并不理想,于是我们提出了以下三种方案来降低这种风险:
+
+1. 计算所有Master的线程总和,然后对每一个DAG需要计算其需要的线程数,也就是在DAG流程执行之前做预计算。因为是多Master线程池,所以总线程数不太可能实时获取。 
+2. 对单Master线程池进行判断,如果线程池已经满了,则让线程直接失败。
+3. 增加一种资源不足的Command类型,如果线程池不足,则将主流程挂起。这样线程池就有了新的线程,可以让资源不足挂起的流程重新唤醒执行。
+
+注意:Master Scheduler线程在获取Command的时候是FIFO的方式执行的。
+
+于是我们选择了第三种方式来解决线程不足的问题。
+
+
+##### 四、容错设计
+容错分为服务宕机容错和任务重试,服务宕机容错又分为Master容错和Worker容错两种情况
+
+###### 1. 宕机容错
+
+服务容错设计依赖于ZooKeeper的Watcher机制,实现原理如图:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler容错设计"  width="40%" />
+ </p>
+其中Master监控其他Master和Worker的目录,如果监听到remove事件,则会根据具体的业务逻辑进行流程实例容错或者任务实例容错。
+
+
+
+- Master容错流程图:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master容错流程图"  width="40%" />
+ </p>
+ZooKeeper Master容错完成之后则重新由DolphinScheduler中Scheduler线程调度,遍历 DAG 找到”正在运行”和“提交成功”的任务,对”正在运行”的任务监控其任务实例的状态,对”提交成功”的任务需要判断Task Queue中是否已经存在,如果存在则同样监控任务实例的状态,如果不存在则重新提交任务实例。
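+
+下面用一个简化的 Java 片段示意这段容错处理逻辑(其中的枚举值与方法名均为假设,仅为还原上文的判断流程,并非源码实现):
+
+```java
+// 仅为示意:Master容错后对遗留任务实例的处理判断
+public class MasterFailoverSketch {
+
+    enum TaskState { RUNNING, SUBMITTED_SUCCESS, OTHER }
+
+    void failoverTask(TaskState state, boolean existsInTaskQueue) {
+        if (state == TaskState.RUNNING) {
+            // “正在运行”的任务:继续监控其任务实例状态
+            monitorTaskInstance();
+        } else if (state == TaskState.SUBMITTED_SUCCESS) {
+            if (existsInTaskQueue) {
+                // 已存在于Task Queue中:同样监控任务实例状态
+                monitorTaskInstance();
+            } else {
+                // 不在队列中:重新提交任务实例
+                resubmitTaskInstance();
+            }
+        }
+    }
+
+    private void monitorTaskInstance() { /* ... */ }
+
+    private void resubmitTaskInstance() { /* ... */ }
+}
+```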
+
+
+
+- Worker容错流程图:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker容错流程图"  width="40%" />
+ </p>
+
+Master Scheduler线程一旦发现任务实例为“需要容错”状态,则接管任务并进行重新提交。
+
+ 注意:由于“网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
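+
+下面用一个简化的 Curator 连接状态监听器示意这种“超时即停服”的处理方式(其中 stopServer() 为假设的方法名,并非源码实现):
+
+```java
+// 仅为示意:监听与 ZooKeeper 的连接状态,会话挂起/丢失时直接停止本服务
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.state.ConnectionState;
+import org.apache.curator.framework.state.ConnectionStateListener;
+
+public class StopOnTimeoutListener implements ConnectionStateListener {
+
+    @Override
+    public void stateChanged(CuratorFramework client, ConnectionState newState) {
+        // SUSPENDED/LOST 表示心跳超时或会话丢失,按上文策略直接停掉 Master/Worker 服务
+        if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
+            stopServer();
+        }
+    }
+
+    private void stopServer() {
+        // 假设的实现:退出进程,由其他节点的容错逻辑接管任务
+        System.exit(-1);
+    }
+}
+```
+
+实际使用时,可以通过 `client.getConnectionStateListenable().addListener(new StopOnTimeoutListener())` 注册该监听器。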
+
+###### 2.任务失败重试
+
+这里首先要区分任务失败重试、流程失败恢复、流程失败重跑的概念:
+
+- 任务失败重试是任务级别的,是调度系统自动进行的,比如一个Shell任务设置重试次数为3次,那么在Shell任务运行失败后会自动再最多尝试运行3次
+- 流程失败恢复是流程级别的,是手动进行的,恢复只能**从失败的节点开始执行**或**从当前节点开始执行**
+- 流程失败重跑也是流程级别的,是手动进行的,重跑是从开始节点进行
+
+
+
+接下来说正题,我们将工作流中的任务节点分了两种类型。
+
+- 一种是业务节点,这种节点都对应一个实际的脚本或者处理语句,比如Shell节点,MR节点、Spark节点、依赖节点等。
+
+- 还有一种是逻辑节点,这种节点不做实际的脚本或语句处理,只是整个流程流转的逻辑处理,比如子流程节点等。
+
+每一个**业务节点**都可以配置失败重试的次数,当该任务节点失败,会自动重试,直到成功或者超过配置的重试次数。**逻辑节点**不支持失败重试。但是逻辑节点里的任务支持重试。
+
+如果工作流中有任务失败达到最大重试次数,工作流就会失败停止,失败的工作流可以手动进行重跑操作或者流程恢复操作
+
+
+
+##### 五、任务优先级设计
+在早期调度设计中,如果没有优先级设计,采用公平调度设计的话,会遇到先行提交的任务可能会和后继提交的任务同时完成的情况,而不能做到设置流程或者任务的优先级,因此我们对此进行了重新设计,目前我们设计如下:
+
+-  按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
+    - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务(示意代码见本节末尾)
+
+        - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
+             </p>
+
+        - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
+             </p>
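+
+针对上面提到的“流程实例优先级_流程实例id_任务优先级_任务id”键值,下面给出一个按该键排序的简化示意(此处假设 HIGHEST~LOWEST 依次对应数值 0~4、数值越小优先级越高,比较器按下划线分段做数值比较,仅为演示思路,并非 DolphinScheduler 源码实现):
+
+```java
+import java.util.Comparator;
+import java.util.PriorityQueue;
+
+// 仅为示意:按 "流程实例优先级_流程实例id_任务优先级_任务id" 组成的键排序
+public class TaskPriorityKeyDemo {
+
+    static final Comparator<String> TASK_KEY_COMPARATOR = (a, b) -> {
+        String[] x = a.split("_");
+        String[] y = b.split("_");
+        for (int i = 0; i < Math.min(x.length, y.length); i++) {
+            int cmp = Long.compare(Long.parseLong(x[i]), Long.parseLong(y[i]));
+            if (cmp != 0) {
+                return cmp;
+            }
+        }
+        return Integer.compare(x.length, y.length);
+    };
+
+    public static void main(String[] args) {
+        PriorityQueue<String> queue = new PriorityQueue<>(TASK_KEY_COMPARATOR);
+        queue.offer("2_100_3_1001");   // MEDIUM 流程中的 LOW 任务
+        queue.offer("0_101_2_1002");   // HIGHEST 流程中的 MEDIUM 任务
+        queue.offer("0_101_0_1003");   // HIGHEST 流程中的 HIGHEST 任务
+        // 依次输出:0_101_0_1003, 0_101_2_1002, 2_100_3_1001
+        while (!queue.isEmpty()) {
+            System.out.println(queue.poll());
+        }
+    }
+}
+```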
+
+
+##### 六、Logback和netty实现日志访问
+
+-  由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
+  -  将日志放到ES搜索引擎上
+  -  通过netty通信获取远程日志信息
+
+-  考虑到要尽可能保持DolphinScheduler的轻量级,所以选择了通过netty实现远程日志访问。
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc远程访问"  width="50%" />
+ </p>
+
+
+- 我们使用自定义Logback的FileAppender和Filter功能,实现每个任务实例生成一个日志文件。
+- FileAppender主要实现如下:
+
+ ```java
+ import ch.qos.logback.classic.spi.ILoggingEvent;
+ import ch.qos.logback.core.FileAppender;
+
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+
+以/流程定义id/流程实例id/任务实例id.log的形式生成日志
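+
+按照这一规则,日志路径的拼装方式可以用下面的小例子示意(日志根目录与各 id 取值均为假设,并非源码中的真实配置):
+
+```java
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+// 仅为示意:按 "流程定义id/流程实例id/任务实例id.log" 拼装日志路径
+public class TaskLogPathDemo {
+    public static void main(String[] args) {
+        String baseLogDir = "/opt/dolphinscheduler/logs";   // 假设的日志根目录
+        int processDefineId = 1;
+        int processInstanceId = 100;
+        int taskInstanceId = 1001;
+
+        Path logPath = Paths.get(baseLogDir,
+                String.valueOf(processDefineId),
+                String.valueOf(processInstanceId),
+                taskInstanceId + ".log");
+        // 输出: /opt/dolphinscheduler/logs/1/100/1001.log
+        System.out.println(logPath);
+    }
+}
+```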
+
+- 过滤匹配以TaskLogInfo开始的线程名称:
+
+- TaskLogFilter实现如下:
+
+ ```java
+ import ch.qos.logback.classic.spi.ILoggingEvent;
+ import ch.qos.logback.core.filter.Filter;
+ import ch.qos.logback.core.spi.FilterReply;
+
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+### 3.模块介绍
+- dolphinscheduler-alert 告警模块,提供AlertServer服务。
+
+- dolphinscheduler-api   web应用模块,提供ApiServer服务。
+
+- dolphinscheduler-common 通用的常量枚举、工具类、数据结构或者基类
+
+- dolphinscheduler-dao 提供数据库访问等操作。
+
+- dolphinscheduler-remote 基于netty的客户端、服务端
+
+- dolphinscheduler-server MasterServer和WorkerServer服务
+
+- dolphinscheduler-service service模块,包含Quartz、Zookeeper、日志客户端访问服务,便于server模块和api模块调用
+
+- dolphinscheduler-ui 前端模块
+### 总结
+本文从调度出发,初步介绍了大数据分布式工作流调度系统--DolphinScheduler的架构原理及实现思路。未完待续
+
+
diff --git a/docs/zh-cn/1.3.5/user_doc/build-docker-image.md b/docs/zh-cn/1.3.5/user_doc/build-docker-image.md
new file mode 100644
index 0000000..e813aa8
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/build-docker-image.md
@@ -0,0 +1,247 @@
+### 如何构建 DolphinScheduler 的 docker 镜像
+
+你能够在类 Unix 系统和 Windows 系统中构建一个 docker 镜像。
+
+类 Unix 系统, 如下:
+
+```bash
+$ cd path/incubator-dolphinscheduler
+$ sh ./docker/build/hooks/build
+```
+
+Windows系统, 如下:
+
+```bat
+c:\incubator-dolphinscheduler>.\docker\build\hooks\build.bat
+```
+
+如果你不理解这些脚本 `./docker/build/hooks/build` `./docker/build/hooks/build.bat`,请阅读里面的内容。
+
+## 环境变量
+
+DolphinScheduler 镜像使用了几个容易遗漏的环境变量。虽然这些变量不是必需的,但是可以帮助你更容易地配置镜像,并根据你的需求定义相应的服务配置。
+
+**`DATABASE_TYPE`**
+
+配置`database`的`TYPE`, 默认值 `postgresql`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_DRIVER`**
+
+配置`database`的`DRIVER`, 默认值 `org.postgresql.Driver`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_HOST`**
+
+配置`database`的`HOST`, 默认值 `127.0.0.1`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_PORT`**
+
+配置`database`的`PORT`, 默认值 `5432`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_USERNAME`**
+
+配置`database`的`USERNAME`, 默认值 `root`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_PASSWORD`**
+
+配置`database`的`PASSWORD`, 默认值 `root`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_DATABASE`**
+
+配置`database`的`DATABASE`, 默认值 `dolphinscheduler`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DATABASE_PARAMS`**
+
+配置`database`的`PARAMS`, 默认值 `characterEncoding=utf8`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`、`api-server`、`alert-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`DOLPHINSCHEDULER_ENV_PATH`**
+
+任务执行时的环境变量配置文件, 默认值 `/opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh`。
+
+**`DOLPHINSCHEDULER_DATA_BASEDIR_PATH`**
+
+用户数据目录, 由用户自己配置, 请确保这个目录存在并且用户有读写权限, 默认值 `/tmp/dolphinscheduler`。
+
+**`ZOOKEEPER_QUORUM`**
+
+配置`master-server`和`worker-server`的`Zookeeper`地址, 默认值 `127.0.0.1:2181`。
+
+**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+
+**`MASTER_EXEC_THREADS`**
+
+配置`master-server`中的执行线程数量,默认值 `100`。
+
+**`MASTER_EXEC_TASK_NUM`**
+
+配置`master-server`中的执行任务数量,默认值 `20`。
+
+**`MASTER_HEARTBEAT_INTERVAL`**
+
+配置`master-server`中的心跳交互时间,默认值 `10`。
+
+**`MASTER_TASK_COMMIT_RETRYTIMES`**
+
+配置`master-server`中的任务提交重试次数,默认值 `5`。
+
+**`MASTER_TASK_COMMIT_INTERVAL`**
+
+配置`master-server`中的任务提交交互时间,默认值 `1000`。
+
+**`MASTER_MAX_CPULOAD_AVG`**
+
+配置`master-server`中的CPU中的`load average`值,默认值 `100`。
+
+**`MASTER_RESERVED_MEMORY`**
+
+配置`master-server`的保留内存,默认值 `0.1`。
+
+**`MASTER_LISTEN_PORT`**
+
+配置`master-server`的端口,默认值 `5678`。
+
+**`WORKER_EXEC_THREADS`**
+
+配置`worker-server`中的执行线程数量,默认值 `100`。
+
+**`WORKER_HEARTBEAT_INTERVAL`**
+
+配置`worker-server`中的心跳交互时间,默认值 `10`。
+
+**`WORKER_FETCH_TASK_NUM`**
+
+配置`worker-server`中的获取任务的数量,默认值 `3`。
+
+**`WORKER_MAX_CPULOAD_AVG`**
+
+配置`worker-server`中的CPU中的最大`load average`值,默认值 `100`。
+
+**`WORKER_RESERVED_MEMORY`**
+
+配置`worker-server`的保留内存,默认值 `0.1`。
+
+**`WORKER_WEIGHT`**
+
+配置`worker-server`的权重,默认值 `100`。
+
+**`WORKER_LISTEN_PORT`**
+
+配置`worker-server`的端口,默认值 `1234`。
+
+**`WORKER_GROUP`**
+
+配置`worker-server`的分组,默认值 `default`。
+
+**`XLS_FILE_PATH`**
+
+配置`alert-server`的`XLS`文件的存储路径,默认值 `/tmp/xls`。
+
+**`MAIL_SERVER_HOST`**
+
+配置`alert-server`的邮件服务地址,默认值 `空`。
+
+**`MAIL_SERVER_PORT`**
+
+配置`alert-server`的邮件服务端口,默认值 `空`。
+
+**`MAIL_SENDER`**
+
+配置`alert-server`的邮件发送人,默认值 `空`。
+
+**`MAIL_USER=`**
+
+配置`alert-server`的邮件服务用户名,默认值 `空`。
+
+**`MAIL_PASSWD`**
+
+配置`alert-server`的邮件服务用户密码,默认值 `空`。
+
+**`MAIL_SMTP_STARTTLS_ENABLE`**
+
+配置`alert-server`的邮件服务是否启用TLS,默认值 `true`。
+
+**`MAIL_SMTP_SSL_ENABLE`**
+
+配置`alert-server`的邮件服务是否启用SSL,默认值 `false`。
+
+**`MAIL_SMTP_SSL_TRUST`**
+
+配置`alert-server`的邮件服务SSL的信任地址,默认值 `空`。
+
+**`ENTERPRISE_WECHAT_ENABLE`**
+
+配置`alert-server`的邮件服务是否启用企业微信,默认值 `false`。
+
+**`ENTERPRISE_WECHAT_CORP_ID`**
+
+配置`alert-server`的邮件服务企业微信`ID`,默认值 `空`。
+
+**`ENTERPRISE_WECHAT_SECRET`**
+
+配置`alert-server`的邮件服务企业微信`SECRET`,默认值 `空`。
+
+**`ENTERPRISE_WECHAT_AGENT_ID`**
+
+配置`alert-server`的邮件服务企业微信`AGENT_ID`,默认值 `空`。
+
+**`ENTERPRISE_WECHAT_USERS`**
+
+配置`alert-server`的邮件服务企业微信`USERS`,默认值 `空`。
+
+**`FRONTEND_API_SERVER_HOST`**
+
+配置`frontend`的连接`api-server`的地址,默认值 `127.0.0.1`。
+
+**注意**: 当单独运行`api-server`时,你应该将这个值指定为`api-server`所在的主机地址。
+
+**`FRONTEND_API_SERVER_PORT`**
+
+配置`frontend`的连接`api-server`的端口,默认值 `12345`。
+
+**注意**: 当单独运行`api-server`时,你应该将这个值指定为`api-server`的端口。
+
+## 初始化脚本
+
+如果你想在编译的时候或者运行的时候附加一些其它的操作及新增一些环境变量,你可以在`/root/start-init-conf.sh`文件中进行修改,同时如果涉及到配置文件的修改,请在`/opt/dolphinscheduler/conf/*.tpl`中修改相应的配置文件
+
+例如,在`/root/start-init-conf.sh`添加一个环境变量`API_SERVER_PORT`:
+
+```
+export API_SERVER_PORT=5555
+``` 
+
+当添加以上环境变量后,你应该在相应的模板文件`/opt/dolphinscheduler/conf/application-api.properties.tpl`中添加这个环境变量配置:
+```
+server.port=${API_SERVER_PORT}
+```
+
+`/root/start-init-conf.sh`将根据模板文件动态的生成配置文件:
+
+```sh
+echo "generate app config"
+ls ${DOLPHINSCHEDULER_HOME}/conf/ | grep ".tpl" | while read line; do
+eval "cat << EOF
+$(cat ${DOLPHINSCHEDULER_HOME}/conf/${line})
+EOF
+" > ${DOLPHINSCHEDULER_HOME}/conf/${line%.*}
+done
+
+echo "generate nginx config"
+sed -i "s/FRONTEND_API_SERVER_HOST/${FRONTEND_API_SERVER_HOST}/g" /etc/nginx/conf.d/dolphinscheduler.conf
+sed -i "s/FRONTEND_API_SERVER_PORT/${FRONTEND_API_SERVER_PORT}/g" /etc/nginx/conf.d/dolphinscheduler.conf
+```
\ No newline at end of file
diff --git a/docs/zh-cn/1.3.5/user_doc/cluster-deployment.md b/docs/zh-cn/1.3.5/user_doc/cluster-deployment.md
new file mode 100644
index 0000000..402b9ed
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/cluster-deployment.md
@@ -0,0 +1,475 @@
+# 集群部署(Cluster)
+
+# 1、基础软件安装(必装项请自行安装)
+
+ * PostgreSQL (8.2.15+) or MySQL (5.7系列)  :  两者任选其一即可, 如MySQL则需要JDBC Driver 5.1.47+
+ * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  必装,请安装好后在/etc/profile下配置 JAVA_HOME 及 PATH 变量
+ * ZooKeeper (3.4.6+) :必装 
+ * Hadoop (2.6+) or MinIO :选装,如果需要用到资源上传功能,可以选择上传到Hadoop or MinIO上
+
+```markdown
+ 注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的提交。
+```
+
+# 2、下载二进制tar.gz包
+
+- 请下载最新版本的后端安装包至服务器部署目录,比如创建 /opt/dolphinscheduler 做为安装部署目录,下载地址: [下载](/zh-cn/download/download.html),下载后上传tar包到该目录中,并进行解压
+
+```shell
+# 创建部署目录,部署目录请不要创建在/root、/home等高权限目录 
+mkdir -p /opt/dolphinscheduler;
+cd /opt/dolphinscheduler;
+# 解压缩
+tar -zxvf apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
+
+mv apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin  dolphinscheduler-bin
+```
+
+# 3、创建部署用户和hosts映射
+
+- 在**所有**部署调度的机器上创建部署用户,并且一定要配置sudo免密。假如我们计划在ds1,ds2,ds3,ds4这4台机器上部署调度,首先需要在每台机器上都创建部署用户
+
+```shell
+# 创建用户需使用root登录,设置部署用户名,请自行修改,后面以dolphinscheduler为例
+useradd dolphinscheduler;
+
+# 设置用户密码,请自行修改,后面以dolphinscheduler123为例
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# 配置sudo免密
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```
+ 注意:
+ - 因为是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。
+ - 如果发现/etc/sudoers文件中有"Defaults requiretty"这行,也请注释掉
+ - 如果用到资源上传的话,还需要在`HDFS或者MinIO`上给该部署用户分配读写的权限
+```
+
+# 4、配置hosts映射和ssh打通及修改目录权限
+
+- 以第一台机器(hostname为ds1)作为部署机,在ds1上配置所有待部署机器的hosts, 在ds1以root登录
+
+  ```shell
+  vi /etc/hosts
+  
+  #add ip hostname
+  192.168.xxx.xxx ds1
+  192.168.xxx.xxx ds2
+  192.168.xxx.xxx ds3
+  192.168.xxx.xxx ds4
+  ```
+
+  *注意:请删掉或者注释掉127.0.0.1这行*
+
+- 同步ds1上的/etc/hosts到所有部署机器
+
+  ```shell
+  for ip in ds2 ds3;     #请将此处ds2 ds3替换为自己要部署的机器的hostname
+  do
+      sudo scp -r /etc/hosts  $ip:/etc/          #在运行中需要输入root密码
+  done
+  ```
+
+  *备注:当然 通过`sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/`就可以省去输入密码了*
+
+  > centos下sshpass的安装:
+  >
+  > 1. 先安装epel
+  >
+  >    yum install -y epel-release
+  >
+  >    yum repolist
+  >
+  > 2. 安装完成epel之后,就可以安装sshpass了
+  >
+  >    yum install -y sshpass
+  >
+  >    
+
+- 在ds1上,切换到部署用户并配置ssh本机免密登录
+
+  ```shell
+   su dolphinscheduler;
+  
+  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+  chmod 600 ~/.ssh/authorized_keys
+  ```
+  注意:*正常设置后,dolphinscheduler用户在执行命令`ssh localhost`时是不需要再输入密码的*
+
+
+
+- 在ds1上,配置部署用户dolphinscheduler ssh打通到其他待部署的机器
+
+  ```shell
+  su dolphinscheduler;
+  for ip in ds2 ds3;     #请将此处ds2 ds3替换为自己要部署的机器的hostname
+  do
+      ssh-copy-id  $ip   #该操作执行过程中需要手动输入dolphinscheduler用户的密码
+  done
+  # 当然 通过 sshpass -p xxx ssh-copy-id $ip 就可以省去输入密码了
+  ```
+
+- 在ds1上,修改目录权限,使得部署用户对dolphinscheduler-bin目录有操作权限
+
+  ```shell
+  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
+  ```
+
+# 5、数据库初始化
+
+- 进入数据库,默认数据库是PostgreSQL,如选择MySQL的话,后续需要添加mysql-connector-java驱动包到DolphinScheduler的lib目录下,这里以MySQL为例
+``` 
+mysql -h192.168.xx.xx -P3306 -uroot -p
+```
+
+- 进入数据库命令行窗口后,执行数据库初始化命令,设置访问账号和密码。**注: {user} 和 {password} 需要替换为具体的数据库用户名和密码** 
+
+ ``` mysql
+    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+    mysql> flush privileges;
+ ```
+
+- 创建表和导入基础数据
+
+    - 修改 conf 目录下 datasource.properties 中的下列配置
+
+    ```shell
+      vi conf/datasource.properties
+    ```
+
+    - 如果选择 MySQL,请注释掉 PostgreSQL 相关配置(反之同理), 还需要手动添加 [[ mysql-connector-java 驱动 jar ](https://downloads.mysql.com/archives/c-j/)] 包到 lib 目录下,这里下载的是mysql-connector-java-5.1.47.jar,然后正确配置数据库连接相关信息
+    
+    ```properties
+      #postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     需要修改ip
+      spring.datasource.username=xxx						需要修改为上面的{user}值
+      spring.datasource.password=xxx						需要修改为上面的{password}值
+    ```
+
+    - 修改并保存完后,执行 script 目录下的创建表及导入基础数据脚本
+
+    ```shell
+    sh script/create-dolphinscheduler.sh
+    ```
+
+​       *注意: 如果执行上述脚本报 ”/bin/java: No such file or directory“ 错误,请在/etc/profile下配置  JAVA_HOME 及 PATH 变量*
+
+# 6、修改运行参数
+
+- 修改 conf/env 目录下的 `dolphinscheduler_env.sh` 环境变量(以相关用到的软件都安装在/opt/soft下为例)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        #export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+    ```
+
+     `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉`
+
+
+
+- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - 修改一键部署配置文件 `conf/config/install_config.conf`中的各参数,特别注意以下参数的配置
+
+    ```shell
+    # 这里填 mysql or postgresql
+    dbtype="mysql"
+
+    # 数据库连接地址
+    dbhost="192.168.xx.xx:3306"
+
+    # 数据库名
+    dbname="dolphinscheduler"
+
+    # 数据库用户名,此处需要修改为上面设置的{user}具体值
+    username="xxx"
+
+    # 数据库密码, 如果有特殊字符,请使用\转义,需要修改为上面设置的{password}具体值
+    password="xxx"
+
+    #Zookeeper地址
+    zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+    #将DS安装到哪个目录,如: /opt/soft/dolphinscheduler,不同于现在的目录
+    installPath="/opt/soft/dolphinscheduler"
+
+    #使用哪个用户部署,使用第3节创建的用户
+    deployUser="dolphinscheduler"
+
+    # 邮件配置,以qq邮箱为例
+    # 邮件协议
+    mailProtocol="SMTP"
+
+    # 邮件服务地址
+    mailServerHost="smtp.qq.com"
+
+    # 邮件服务端口
+    mailServerPort="25"
+
+    # mailSender和mailUser配置成一样即可
+    # 发送者
+    mailSender="xxx@qq.com"
+
+    # 发送用户
+    mailUser="xxx@qq.com"
+
+    # 邮箱密码
+    mailPassword="xxx"
+
+    # TLS协议的邮箱设置为true,否则设置为false
+    starttlsEnable="true"
+
+    # 开启SSL协议的邮箱配置为true,否则为false。注意: starttlsEnable和sslEnable不能同时为true
+    sslEnable="false"
+
+    # 邮件服务地址值,参考上面 mailServerHost
+    sslTrust="smtp.qq.com"
+   
+    # 业务用到的比如sql等资源文件上传到哪里,可以设置:HDFS,S3,NONE,单机如果想使用本地文件系统,请配置为HDFS,因为HDFS支持本地文件系统;如果不需要资源上传功能请选择NONE。强调一点:使用本地文件系统不需要部署hadoop
+    resourceStorageType="HDFS"
+
+    #如果上传资源保存想保存在hadoop上,hadoop集群的NameNode启用了HA的话,需要将hadoop的配置文件core-site.xml和hdfs-site.xml放到安装路径的conf目录下,本例即是放到/opt/soft/dolphinscheduler/conf下面,并配置namenode cluster名称;如果NameNode不是HA,则只需要将mycluster修改为具体的ip或者主机名即可
+    defaultFS="hdfs://mycluster:8020"
+
+
+    # 如果没有使用到Yarn,保持以下默认值即可;如果ResourceManager是HA,则配置为ResourceManager节点的主备ip或者hostname,比如"192.168.xx.xx,192.168.xx.xx";如果是单ResourceManager请配置yarnHaIps=""即可
+    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+    # 如果ResourceManager是HA或者没有使用到Yarn保持默认值即可;如果是单ResourceManager,请配置真实的ResourceManager主机名或者ip
+    singleYarnIp="yarnIp1"
+
+    # 资源上传根路径,支持HDFS和S3,由于hdfs支持本地文件系统,需要确保本地文件夹存在且有读写权限
+    resourceUploadPath="/data/dolphinscheduler"
+
+    # 具备权限创建resourceUploadPath的用户
+    hdfsRootUser="hdfs"
+
+
+
+    #在哪些机器上部署DS服务,本机选localhost
+    ips="ds1,ds2,ds3,ds4"
+
+    #ssh端口,默认22
+    sshPort="22"
+
+    #master服务部署在哪台机器上
+    masters="ds1,ds2"
+
+    #worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,下面示例的default即为组名
+    workers="ds3:default,ds4:default"
+
+    #报警服务部署在哪台机器上
+    alertServer="ds2"
+
+    #后端api服务部署在在哪台机器上
+    apiServers="ds1"
+
+    ```
+    
+    *特别注意:*
+    
+    - 如果需要使用资源上传到Hadoop集群的功能, 并且Hadoop集群的NameNode 配置了 HA的话,需要开启 HDFS类型的资源上传,同时需要将Hadoop集群下的core-site.xml和hdfs-site.xml复制到/opt/dolphinscheduler/conf,非NameNode HA可跳过此步骤
+   
+   
+   
+# 7、一键部署
+
+- 切换到部署用户dolphinscheduler,然后执行一键部署脚本
+
+    `sh install.sh` 
+
+    ```
+    注意:
+    第一次部署的话,在运行中第3步`3,stop server`出现5次以下信息,此信息可以忽略
+    sh: bin/dolphinscheduler-daemon.sh: No such file or directory
+    ```
+
+- 脚本完成后,会启动以下5个服务,使用`jps`命令查看服务是否启动(`jps`为`java JDK`自带)
+
+```aidl
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+如果以上服务都正常启动,说明自动部署成功
+
+
+部署成功后,可以进行日志查看,日志统一存放于logs文件夹内
+
+```日志路径
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    |—— dolphinscheduler-worker-server.log
+    |—— dolphinscheduler-api-server.log
+    |—— dolphinscheduler-logger-server.log
+```
+
+
+
+# 8、登录系统
+
+- 访问前端页面地址,接口ip(自行修改)
+http://192.168.xx.xx:12345/dolphinscheduler
+
+   <p align="center">
+     <img src="/img/login.png" width="60%" />
+   </p>
+
+
+
+# 9、启停服务
+
+* 一键停止集群所有服务
+
+  ` sh ./bin/stop-all.sh`
+
+* 一键开启集群所有服务
+
+  ` sh ./bin/start-all.sh`
+
+* 启停Master
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+```
+
+* 启停Worker
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+```
+
+* 启停Api
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+```
+
+* 启停Logger
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start logger-server
+sh ./bin/dolphinscheduler-daemon.sh stop logger-server
+```
+
+* 启停Alert
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+```
+
+`注:服务用途请具体参见《系统架构设计》小节`
+
+
+-----
+### 附录:
+
+ - 如果您需要使用到企业微信进行告警,请在安装完成后,修改 alert.properties 文件,然后重启 alert 服务即可:
+   
+    ```
+    # 设置企业微信告警功能是否开启:开启为 true,否则为 false。
+    enterprise.wechat.enable="true"
+    ```
+   
+    ```
+    # 设置 corpid,每个企业都拥有唯一的 corpid,获取此信息可在管理后台“我的企业”-“企业信息”下查看“企业 ID”(需要有管理员权限)
+    enterprise.wechat.corp.id="xxx"
+    ```
+    <p align="center">
+      <img src="/img/alert/corpid.png" width="60%" />
+    </p>
+    
+    ```
+    # 设置 secret,secret 是企业应用里面用于保障数据安全的“钥匙”,每一个应用都有一个独立的访问密钥。
+    enterprise.wechat.secret="xxx"
+    ```
+    <p align="center">
+     <img src="/img/alert/secret.png" width="60%" />
+    </p>
+    
+    ```
+    # 设置 agentid,每个应用都有唯一的 agentid。在管理后台->“应用与小程序”->“应用”,点进某个应用,即可看到 agentid。
+    enterprise.wechat.agent.id="xxxx"
+    ```
+   <p align="center">
+    <img src="/img/alert/agentid.png" width="60%" />
+   </p>
+   
+    ```
+    # 设置 userid,多个用逗号分隔。每个成员都有唯一的 userid,即所谓“帐号”。在管理后台->“通讯录”->点进某个成员的详情页,可以看到。
+    enterprise.wechat.users=zhangsan,lisi
+    ```
+      <p align="center">
+       <img src="/img/alert/userid.png" width="60%" />
+      </p>
+      
+    ```
+    # 获取 access_token 的地址,使用如下例子无需修改。
+    enterprise.wechat.token.url=https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid={corpId}&corpsecret={secret}
+   
+    # 发送应用消息地址,使用如下例子无需改动。
+    enterprise.wechat.push.url=https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token={token}
+    
+    #发送消息格式,无需改动
+    enterprise.wechat.user.send.msg={\"touser\":\"{toUser}\",\"agentid\":\"{agentId}\",\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"{msg}\"}}
+   ```
+ - 关于dolphinscheduler 在运行过程中,网卡使用说明:
+ 
+   > master服务,worker服务在zookeeper注册时,会以ip:port的形式创建相关信息。
+     
+      在明确通信网卡情况下,可以指定网卡名称的方式获取ip地址,配置方式是在`common.properties`中修改配置:
+                                                                                                                                                                                                                                                                                                                                                                                                                                                        
+      ```
+      dolphin.scheduler.network.interface.preferred=eth0
+      ```  
+                                                                                                                                                                                                                                                                                                                                                                                                                                                        
+     如linux系统通过`ifconfig`命令查看网络信息,以下图为例,配置eth0就是使用图中eth0的网卡作为通信网卡:
+     
+      <p align="center">
+           <img src="/img/network/network_config.png" width="60%" />
+      </p>
+                                       
+     还可以使用dolphinscheduler提供的三种策略,获取可用ip:
+   
+      1. default: 优先通过内网网卡获取ip地址,其次通过外网网卡获取ip地址,在前两项均失效的情况下,使用第一块可用网卡的地址。
+      2. inner: 使用内网网卡获取ip地址,如果获取失败抛出异常信息。
+      3. outter: 使用外网网卡获取ip地址,如果获取失败抛出异常信息。
+      
+      配置方式是在`common.properties`中修改相关配置:
+      
+      ```
+       # Network IP gets priority, default inner outer
+       #dolphin.scheduler.network.priority.strategy=default
+      ```
+      以上配置修改后重启服务生效。                        
diff --git a/docs/zh-cn/1.3.5/user_doc/configuration-file.md b/docs/zh-cn/1.3.5/user_doc/configuration-file.md
new file mode 100644
index 0000000..364c0c3
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/configuration-file.md
@@ -0,0 +1,405 @@
+
+
+# 前言
+本文档为dolphinscheduler配置文件说明文档,针对 dolphinscheduler-1.3.x 版本.
+
+# 目录结构
+目前dolphinscheduler 所有的配置文件都在 [conf ] 目录中.
+为了更直观的了解[conf]目录所在的位置以及包含的配置文件,请查看下面dolphinscheduler安装目录的简化说明.
+本文主要讲述dolphinscheduler的配置文件.其他部分先不做赘述.
+
+[注:以下 dolphinscheduler 简称为DS.]
+```
+
+├─bin                               DS命令存放目录
+│  ├─dolphinscheduler-daemon.sh         启动/关闭DS服务脚本
+│  ├─start-all.sh                       根据配置文件启动所有DS服务
+│  ├─stop-all.sh                        根据配置文件关闭所有DS服务
+├─conf                              配置文件目录
+│  ├─application-api.properties         api服务配置文件
+│  ├─datasource.properties              数据库配置文件
+│  ├─zookeeper.properties               zookeeper配置文件
+│  ├─master.properties                  master服务配置文件
+│  ├─worker.properties                  worker服务配置文件
+│  ├─quartz.properties                  quartz服务配置文件
+│  ├─common.properties                  公共服务[存储]配置文件
+│  ├─alert.properties                   alert服务配置文件
+│  ├─config                             环境变量配置文件夹
+│      ├─install_config.conf                DS环境变量配置脚本[用于DS安装/启动]
+│  ├─env                                运行脚本环境变量配置目录
+│      ├─dolphinscheduler_env.sh            运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper文件目录
+│  ├─i18n                               i18n配置文件目录
+│  ├─logback-api.xml                    api服务日志配置文件
+│  ├─logback-master.xml                 master服务日志配置文件
+│  ├─logback-worker.xml                 worker服务日志配置文件
+│  ├─logback-alert.xml                  alert服务日志配置文件
+├─sql                               DS的元数据创建升级sql文件
+│  ├─create                             创建SQL脚本目录
+│  ├─upgrade                            升级SQL脚本目录
+│  ├─dolphinscheduler-postgre.sql       postgre数据库初始化脚本
+│  ├─dolphinscheduler_mysql.sql         mysql数据库初始化脚本
+│  ├─soft_version                       当前DS版本标识文件
+├─script                            DS服务部署,数据库创建/升级脚本目录
+│  ├─create-dolphinscheduler.sh         DS数据库初始化脚本      
+│  ├─upgrade-dolphinscheduler.sh        DS数据库升级脚本                
+│  ├─monitor-server.sh                  DS服务监控启动脚本               
+│  ├─scp-hosts.sh                       安装文件传输脚本                                                    
+│  ├─remove-zk-node.sh                  清理zookeeper缓存文件脚本       
+├─ui                                前端WEB资源目录
+├─lib                               DS依赖的jar存放目录
+├─install.sh                        自动安装DS服务脚本
+
+
+```
+
+
+# 配置文件详解
+
+序号| 服务分类 |  配置文件|
+|--|--|--|
+1|启动/关闭DS服务脚本|dolphinscheduler-daemon.sh
+2|数据库连接配置 | datasource.properties
+3|zookeeper连接配置|zookeeper.properties
+4|公共[存储]配置|common.properties
+5|API服务配置|application-api.properties
+6|Master服务配置|master.properties
+7|Worker服务配置|worker.properties
+8|Alert 服务配置|alert.properties
+9|Quartz配置|quartz.properties
+10|DS环境变量配置脚本[用于DS安装/启动]|install_config.conf
+11|运行脚本加载环境变量配置文件 <br />[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|各服务日志配置文件|api服务日志配置文件 : logback-api.xml  <br /> master服务日志配置文件  : logback-master.xml    <br /> worker服务日志配置文件 : logback-worker.xml  <br /> alert服务日志配置文件 : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [启动/关闭DS服务脚本]
+dolphinscheduler-daemon.sh脚本负责DS的启动&关闭. 
+start-all.sh/stop-all.sh最终也是通过dolphinscheduler-daemon.sh对集群进行启动/关闭操作.
+目前DS只是做了一个基本的设置,JVM参数请根据各自资源的实际情况自行设置.
+
+默认简化参数如下:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> 不建议设置"-XX:DisableExplicitGC" , DS使用Netty进行通讯,设置该参数,可能会导致内存泄漏.
+
+## 2.datasource.properties [数据库连接]
+在DS中使用Druid对数据库连接进行管理,默认简化配置如下.
+|参数 | 默认值| 描述|
+|--|--|--|
+spring.datasource.driver-class-name| |数据库驱动
+spring.datasource.url||数据库连接地址
+spring.datasource.username||数据库用户名
+spring.datasource.password||数据库密码
+spring.datasource.initialSize|5| 初始连接池数量
+spring.datasource.minIdle|5| 最小连接池数量
+spring.datasource.maxActive|5| 最大连接池数量
+spring.datasource.maxWait|60000| 最大等待时长
+spring.datasource.timeBetweenEvictionRunsMillis|60000| 连接检测周期
+spring.datasource.timeBetweenConnectErrorMillis|60000| 重试间隔
+spring.datasource.minEvictableIdleTimeMillis|300000| 连接保持空闲而不被驱逐的最小时间
+spring.datasource.validationQuery|SELECT 1|检测连接是否有效的sql
+spring.datasource.validationQueryTimeout|3| 检测连接是否有效的超时时间[seconds]
+spring.datasource.testWhileIdle|true| 申请连接的时候检测,如果空闲时间大于timeBetweenEvictionRunsMillis,执行validationQuery检测连接是否有效。
+spring.datasource.testOnBorrow|true| 申请连接时执行validationQuery检测连接是否有效
+spring.datasource.testOnReturn|false| 归还连接时执行validationQuery检测连接是否有效
+spring.datasource.defaultAutoCommit|true| 是否开启自动提交
+spring.datasource.keepAlive|true| 连接池中的minIdle数量以内的连接,空闲时间超过minEvictableIdleTimeMillis,则会执行keepAlive操作。
+spring.datasource.poolPreparedStatements|true| 开启PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| 要启用PSCache,必须配置大于0,当大于0时,poolPreparedStatements自动触发修改为true。
+
+
+## 3.zookeeper.properties [zookeeper连接配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+zookeeper.quorum|localhost:2181| zk集群连接信息
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS在zookeeper存储根目录
+zookeeper.session.timeout|60000|  session 超时
+zookeeper.connection.timeout|30000|  连接超时
+zookeeper.retry.base.sleep|100| 基本重试时间差
+zookeeper.retry.max.sleep|30000| 最大重试时间
+zookeeper.retry.maxtime|10|最大重试次数
+
+
+## 4.common.properties [hadoop、s3、yarn配置]
+common.properties配置文件目前主要是配置hadoop/s3a相关的配置. 
+|参数 |默认值| 描述| 
+|--|--|--|
+resource.storage.type|NONE|资源文件存储类型: HDFS,S3,NONE
+resource.upload.path|/dolphinscheduler|资源文件存储路径
+data.basedir.path|/tmp/dolphinscheduler|本地工作目录,用于存放临时文件
+hadoop.security.authentication.startup.state|false|hadoop是否开启kerberos权限
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos配置目录
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos登录用户
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos登录用户keytab
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|资源中心支持的文件格式
+hdfs.root.user|hdfs|如果存储类型为HDFS,需要配置拥有对应操作权限的用户
+fs.defaultFS|hdfs://mycluster:8020|请求地址如果resource.storage.type=S3 ,该值类似为: s3a://dolphinscheduler. 如果resource.storage.type=HDFS, 如果 hadoop 配置了 HA ,需要复制core-site.xml 和 hdfs-site.xml 文件到conf目录
+fs.s3a.endpoint||s3 endpoint地址
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key|     |s3 secret key
+yarn.resourcemanager.ha.rm.ids|     |yarn resourcemanager 地址, 如果resourcemanager开启了HA, 输入HA的IP地址(以逗号分隔),如果resourcemanager为单节点, 该值为空即可.
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|如果resourcemanager开启了HA或者没有使用resourcemanager,保持默认值即可. 如果resourcemanager为单节点,你需要将ds1 配置为resourcemanager对应的hostname
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false|是否处于开发模式
+kerberos.expire.time|7|kerberos过期时间 [小时]
+
+
+## 5.application-api.properties [API服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+server.port|12345|api服务通讯端口
+server.servlet.session.timeout|7200|session超时时间
+server.servlet.context-path|/dolphinscheduler |请求路径
+spring.servlet.multipart.max-file-size|1024MB|最大上传文件大小
+spring.servlet.multipart.max-request-size|1024MB|最大请求大小
+server.jetty.max-http-post-size|5000000|jetty服务最大发送请求大小
+spring.messages.encoding|UTF-8|请求编码
+spring.jackson.time-zone|GMT+8|设置时区
+spring.messages.basename|i18n/messages|i18n配置
+security.authentication.type|PASSWORD|权限校验类型
+
+
+## 6.master.properties [Master服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+master.listen.port|5678|master通讯端口
+master.exec.threads|100| 工作线程数量
+master.exec.task.num|20|并行任务数量
+master.dispatch.task.num | 3|分发任务数量
+master.heartbeat.interval|10|心跳间隔
+master.task.commit.retryTimes|5|任务重试次数
+master.task.commit.interval|1000|任务提交间隔
+master.max.cpuload.avg|-1|cpu小于该配置时,master 服务才能工作.默认值为-1 :  cpu cores * 2
+master.reserved.memory|0.3|内存阈值限制,可用内存大于该值,master 服务才能工作.
+
+
+## 7.worker.properties [Worker服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+worker.listen.port|1234|worker通讯端口
+worker.exec.threads|100|工作线程数量
+worker.heartbeat.interval|10|心跳间隔
+worker.max.cpuload.avg|-1|cpu小于该配置时,worker 服务才能工作. 默认值为-1 :  cpu cores * 2
+worker.reserved.memory|0.3|内存阈值限制,可用内存大于该值,worker 服务才能工作.
+worker.group|default|workgroup分组配置. <br> worker启动时会根据该配置自动加入对应的分组.
+
+
+## 8.alert.properties [Alert 告警服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+alert.type|EMAIL|告警类型|
+mail.protocol|SMTP| 邮件服务器协议
+mail.server.host|xxx.xxx.com|邮件服务器地址
+mail.server.port|25|邮件服务器端口
+mail.sender|xxx@xxx.com|发送人邮箱
+mail.user|xxx@xxx.com|发送人邮箱名称
+mail.passwd|111111|发送人邮箱密码
+mail.smtp.starttls.enable|true|邮箱是否开启tls
+mail.smtp.ssl.enable|false|邮箱是否开启ssl
+mail.smtp.ssl.trust|xxx.xxx.com|邮箱ssl白名单
+xls.file.path|/tmp/xls|邮箱附件临时工作目录
+||以下为企业微信配置[选填]|
+enterprise.wechat.enable|false|企业微信是否启用
+enterprise.wechat.corp.id|xxxxxxx|
+enterprise.wechat.secret|xxxxxxx|
+enterprise.wechat.agent.id|xxxxxxx|
+enterprise.wechat.users|xxxxxxx|
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
+enterprise.wechat.user.send.msg||发送消息格式
+enterprise.wechat.team.send.msg||群发消息格式
+plugin.dir|/Users/xx/your/path/to/plugin/dir|插件目录
+
+
+## 9.quartz.properties [Quartz配置]
+这里面主要是quartz配置,请结合实际业务场景&资源进行配置,本文暂时不做展开.
+|参数 |默认值| 描述| 
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS环境变量配置脚本[用于DS安装/启动]]
+install_config.conf这个配置文件比较繁琐,这个文件主要有两个地方会用到.
+* 1.DS集群的自动安装. 
+
+> 调用install.sh脚本会自动加载该文件中的配置.并根据该文件中的内容自动配置上述的配置文件中的内容. 
+> 比如:dolphinscheduler-daemon.sh、datasource.properties、zookeeper.properties、common.properties、application-api.properties、master.properties、worker.properties、alert.properties、quartz.properties 等文件.
+
+
+* 2.DS集群的启动&关闭.
+>DS集群在启动&关闭的时候,会加载该配置文件中的masters,workers,alertServer,apiServers等参数,启动/关闭DS集群.
+
+文件内容如下:
+```bash
+
+# 注意: 该配置文件中如果包含特殊字符,如: `.*[]^${}\+?|()@#&`, 请转义,
+#      示例: `[` 转义为 `\[`
+
+# 数据库类型, 目前仅支持 postgresql 或者 mysql
+dbtype="mysql"
+
+# 数据库 地址 & 端口
+dbhost="192.168.xx.xx:3306"
+
+# 数据库 名称
+dbname="dolphinscheduler"
+
+
+# 数据库 用户名
+username="xx"
+
+# 数据库 密码
+password="xx"
+
+# Zookeeper地址
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# 将DS安装到哪个目录,如: /data1_1T/dolphinscheduler,
+installPath="/data1_1T/dolphinscheduler"
+
+# 使用哪个用户部署
+# 注意: 部署用户需要sudo 权限, 并且可以操作 hdfs .
+#     如果使用hdfs的话,根目录必须使用该用户进行创建.否则会有权限相关的问题.
+deployUser="dolphinscheduler"
+
+
+# 以下为告警服务配置
+# 邮件服务器地址
+mailServerHost="smtp.exmail.qq.com"
+
+# 邮件服务器 端口
+mailServerPort="25"
+
+# 发送者
+mailSender="xxxxxxxxxx"
+
+# 发送用户
+mailUser="xxxxxxxxxx"
+
+# 邮箱密码
+mailPassword="xxxxxxxxxx"
+
+# TLS协议的邮箱设置为true,否则设置为false
+starttlsEnable="true"
+
+# 开启SSL协议的邮箱配置为true,否则为false。注意: starttlsEnable和sslEnable不能同时为true
+sslEnable="false"
+
+# 邮件服务地址值,同 mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+#业务用到的比如sql等资源文件上传到哪里,可以设置:HDFS,S3,NONE。如果想上传到HDFS,请配置为HDFS;如果不需要资源上传功能请选择NONE。
+resourceStorageType="NONE"
+
+# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
+# Note,s3 be sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# 如果resourceStorageType 为S3 需要配置的参数如下:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# 如果ResourceManager是HA,则配置为ResourceManager节点的主备ip或者hostname,比如"192.168.xx.xx,192.168.xx.xx",否则如果是单ResourceManager或者根本没用到yarn,请配置yarnHaIps=""即可,如果没用到yarn,配置为""
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+# 如果是单ResourceManager,则配置为ResourceManager节点ip或主机名,否则保持默认值即可。
+singleYarnIp="yarnIp1"
+
+# 资源文件在 HDFS/S3  存储路径
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3  操作用户
+hdfsRootUser="hdfs"
+
+# 以下为 kerberos 配置
+
+# kerberos是否开启
+kerberosStartUp="false"
+# kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+# keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+# username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# api 服务端口
+apiServerPort="12345"
+
+
+# 部署DS的所有主机hostname
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# ssh 端口 , 默认 22
+sshPort="22"
+
+# 部署master服务主机
+masters="ds1,ds2"
+
+# 部署 worker服务的主机
+# 注意: 每一个worker都需要设置一个worker 分组的名称,默认值为 "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  部署alert服务主机
+alertServer="ds3"
+
+# 部署api服务主机 
+apiServers="ds1"
+```
+
+## 11.dolphinscheduler_env.sh [环境变量配置]
+通过类似shell方式提交任务的的时候,会加载该配置文件中的环境变量到主机中.
+涉及到的任务类型有: Shell任务、Python任务、Spark任务、Flink任务、Datax任务等等
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+## 12.各服务日志配置文件
+对应服务名称| 日志文件名 |
+|--|--|
+api服务日志配置文件 |logback-api.xml|
+master服务日志配置文件|logback-master.xml |
+worker服务日志配置文件|logback-worker.xml |
+alert服务日志配置文件|logback-alert.xml |
diff --git a/docs/zh-cn/1.3.5/user_doc/docker-deployment.md b/docs/zh-cn/1.3.5/user_doc/docker-deployment.md
new file mode 100644
index 0000000..6bb8623
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/docker-deployment.md
@@ -0,0 +1,143 @@
+## 快速试用 DolphinScheduler
+
+有 2 种方式可以快速试用 DolphinScheduler,下面分别介绍。
+### 一、以 docker-compose 的方式启动(推荐)
+这种方式需要先安装 docker-compose , docker-compose 的安装网上已经有非常多的资料,请自行安装即可
+
+##### 1、下载源码 zip 包
+
+- 请下载最新版本的源码包并进行解压
+
+```shell
+# 创建源码存放目录
+mkdir -p /opt/soft/dolphinscheduler;
+cd /opt/soft/dolphinscheduler;
+
+# 下载源码包
+wget https://mirrors.tuna.tsinghua.edu.cn/apache/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip
+
+# 解压缩
+unzip apache-dolphinscheduler-incubating-1.3.5-src.zip
+ 
+mv apache-dolphinscheduler-incubating-1.3.5-src-release  dolphinscheduler-src
+```
+
+##### 2、安装并启动服务
+```
+cd dolphinscheduler-src
+docker-compose -f ./docker/docker-swarm/docker-compose.yml up -d
+```
+
+##### 3、登录系统   
+访问前端界面: http://192.168.xx.xx:8888
+ <p align="center">
+   <img src="/img/login.png" width="60%" />
+ </p>
+然后参考用户手册章节的`快速上手`即可进行使用
+
+
+下面介绍第 2 种方式
+### 二、以 docker 方式启动
+这种方式需要先安装 docker , docker 的安装网上已经有非常多的资料,请自行安装即可
+##### 1、基础软件安装(请自行安装)
+ * PostgreSQL (8.2.15+)
+ * ZooKeeper (3.4.6+)
+ * Docker
+ 
+##### 2、请登录 PostgreSQL 数据库,创建名为 `dolphinscheduler` 数据库
+
+##### 3、初始化数据库,导入 `sql/dolphinscheduler-postgre.sql` 进行创建表及基础数据导入
+
+##### 4、下载 DolphinScheduler 镜像
+我们已将面向用户的 DolphinScheduler 镜像上传至 docker 仓库,用户无需在本地构建镜像,直接执行以下命令从 docker 仓库 pull 镜像:
+```
+docker pull apache/dolphinscheduler:latest
+```
+
+##### 5、运行一个 DolphinScheduler 实例
+
+如下:(注: {user} 和 {password} 需要替换为具体的数据库用户名和密码)
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="{user}" -e DATABASE_PASSWORD="{password}" \
+-p 8888:8888 \
+apache/dolphinscheduler:latest all
+```
+##### 6、登录系统   
+访问前端界面: http://192.168.xx.xx:8888
+ <p align="center">
+   <img src="/img/login.png" width="60%" />
+ </p>
+然后参考用户手册章节的`快速上手`即可进行使用
+
+## 附录
+
+### 在容器启动时,会自动启动以下服务:
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+### 如果你只是想运行 dolphinscheduler 中的部分服务
+
+你能够通过执行以下指令仅运行dolphinscheduler中的部分服务。
+
+* 启动一个 **master server**, 如下:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+apache/dolphinscheduler:latest master-server
+```
+
+* 启动一个 **worker server**, 如下:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+apache/dolphinscheduler:latest worker-server
+```
+
+* 启动一个 **api server**, 如下:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-p 12345:12345 \
+apache/dolphinscheduler:latest api-server
+```
+
+* 启动一个 **alert server**, 如下:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+dolphinscheduler alert-server
+```
+
+* 启动一个 **frontend**, 如下:
+
+```
+$ docker run -dit --name dolphinscheduler \
+-e FRONTEND_API_SERVER_HOST="192.168.x.x" -e FRONTEND_API_SERVER_PORT="12345" \
+-p 8888:8888 \
+apache/dolphinscheduler:latest frontend
+```
+
+**注意**: 当你运行dolphinscheduler中的部分服务时,你必须指定这些环境变量 `DATABASE_HOST` `DATABASE_PORT` `DATABASE_DATABASE` `DATABASE_USERNAME` `DATABASE_PASSWORD` `ZOOKEEPER_QUORUM`。
+
+
+
+
diff --git a/docs/zh-cn/1.3.5/user_doc/expansion-reduction.md b/docs/zh-cn/1.3.5/user_doc/expansion-reduction.md
new file mode 100644
index 0000000..8ea8c26
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/expansion-reduction.md
@@ -0,0 +1,257 @@
+
+# DolphinScheduler扩容/缩容 文档
+
+
+## 1. DolphinScheduler扩容文档
+本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.
+
+```
+ 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.
+       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 在masters或者workers参数中加入新增的节点,重启调度集群即可.
+```
+
+### 1.1. 基础软件安装(必装项请自行安装)
+
+* [必装] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  必装,请安装好后在/etc/profile下配置 JAVA_HOME 及 PATH 变量
+* [可选] 如果扩容的是worker类型的节点,需要考虑是否要安装外部客户端,比如Hadoop、Hive、Spark 的Client.
+
+
+```markdown
+ 注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的提交。
+```
+
+### 1.2. 获取安装包
+- 确认现有环境使用的DolphinScheduler是哪个版本,获取对应版本的安装包,如果版本不同,可能存在兼容性的问题.
+- 确认其他节点的统一安装目录,本文假设DolphinScheduler统一安装在 /opt/ 目录中,安装全路径为/opt/dolphinscheduler.
+- 请下载对应版本的安装包至服务器安装目录,解压并重命名为dolphinscheduler存放在/opt目录中. 
+- 添加数据库依赖包,本文使用Mysql数据库,添加mysql-connector-java驱动包到/opt/dolphinscheduler/lib目录中
+```shell
+# 创建安装目录,安装目录请不要创建在/root、/home等高权限目录 
+mkdir -p /opt
+cd /opt
+# 解压缩
+tar -zxvf apache-dolphinscheduler-incubating-1.3.2-dolphinscheduler-bin.tar.gz -C /opt 
+cd /opt
+mv apache-dolphinscheduler-incubating-1.3.2-dolphinscheduler-bin  dolphinscheduler
+```
+
+```markdown
+ 注意:安装包可以从现有的环境直接复制到扩容的物理机上使用.
+```
+
+### 1.3. 创建部署用户
+
+- 在**所有**扩容的机器上创建部署用户,并且一定要配置sudo免密。假如我们计划在ds1,ds2,ds3,ds4这四台扩容机器上部署调度,首先需要在每台机器上都创建部署用户
+
+```shell
+# 创建用户需使用root登录,设置部署用户名,请自行修改,后面以dolphinscheduler为例
+useradd dolphinscheduler;
+
+# 设置用户密码,请自行修改,后面以dolphinscheduler123为例
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# 配置sudo免密
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```markdown
+ 注意:
+ - 因为是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。
+ - 如果发现/etc/sudoers文件中有"Defaults requiretty"这行,也请注释掉
+ - 如果用到资源上传的话,还需要在`HDFS或者MinIO`上给该部署用户分配读写的权限
+```
+
+### 1.4. 修改配置
+
+- 从现有的节点比如Master/Worker节点,直接拷贝conf目录替换掉新增节点中的conf目录.拷贝之后检查一下配置项是否正确.
+    
+    ```markdown
+    重点检查:
+    datasource.properties 中的数据库连接信息. 
+    zookeeper.properties 中的连接zk的信息.
+    common.properties 中关于资源存储的配置信息(如果设置了hadoop,请检查是否存在core-site.xml和hdfs-site.xml配置文件).
+    env/dolphinscheduler_env.sh 中的环境变量
+    ```
+
+- 根据机器配置,修改 conf/env 目录下的 `dolphinscheduler_env.sh` 环境变量(以相关用到的软件都安装在/opt/soft下为例)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        #export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    
+    ```
+
+     `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉`
+
+
+- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+    
+    * 新增的master节点, 需要修改 ips 和 masters 参数.
+    * 新增的worker节点, 需要修改 ips 和  workers 参数.
+
+```shell
+#在哪些机器上新增部署DS服务,多个物理机之间用逗号隔开.
+ips="ds1,ds2,ds3,ds4"
+
+#ssh端口,默认22
+sshPort="22"
+
+#master服务部署在哪台机器上
+masters="现有master01,现有master02,ds1,ds2"
+
+#worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,下面示例的default即为组名
+workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
+
+```
+- 如果扩容的是worker节点,需要设置worker分组.请参考用户手册[5.7 创建worker分组 ](/zh-cn/docs/1.3.5/user_doc/system-manual.html)
+
+- 在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限
+
+```shell
+sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
+```
+
+
+
+### 1.5. 重启集群&验证
+
+- 重启集群
+
+```shell
+停止命令:
+bin/stop-all.sh 停止所有服务
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  停止 master 服务
+sh bin/dolphinscheduler-daemon.sh stop worker-server  停止 worker 服务
+sh bin/dolphinscheduler-daemon.sh stop logger-server  停止 logger  服务
+sh bin/dolphinscheduler-daemon.sh stop api-server     停止 api    服务
+sh bin/dolphinscheduler-daemon.sh stop alert-server   停止 alert  服务
+
+
+启动命令:
+bin/start-all.sh 启动所有服务
+
+sh bin/dolphinscheduler-daemon.sh start master-server  启动 master 服务
+sh bin/dolphinscheduler-daemon.sh start worker-server  启动 worker 服务
+sh bin/dolphinscheduler-daemon.sh start logger-server  启动 logger  服务
+sh bin/dolphinscheduler-daemon.sh start api-server     启动 api    服务
+sh bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
+
+```
+
+```
+ 注意: 使用start-all.sh或者stop-all.sh的时候,如果执行该命令的物理机没有配置到所有机器的ssh免密登录的话,会提示输入密码
+```
+
+
+- 脚本完成后,使用`jps`命令查看各个节点服务是否启动(`jps`为`java JDK`自带)
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+
+启动成功后,可以进行日志查看,日志统一存放于logs文件夹内
+
+```日志路径
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    |—— dolphinscheduler-worker-server.log
+    |—— dolphinscheduler-api-server.log
+    |—— dolphinscheduler-logger-server.log
+```
+如果以上服务都正常启动且调度系统页面正常,在web系统的[监控中心]查看是否有扩容的Master或者Worker服务.如果存在,则扩容完成
+
+-----------------------------------------------------------------------------
+
+## 2. 缩容
+缩容是针对现有的DolphinScheduler集群减少master或者worker服务,
+缩容一共分两个步骤,执行完以下两步,即可完成缩容操作.
+
+### 2.1 停止缩容节点上的服务
+ * 如果缩容master节点,要确定要缩容master服务所在的物理机,并在物理机上停止该master服务.
+ * 如果缩容worker节点,要确定要缩容worker服务所在的物理机,并在物理机上停止worker和logger服务.
+ 
+```shell
+停止命令:
+bin/stop-all.sh 停止所有服务
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  停止 master 服务
+sh bin/dolphinscheduler-daemon.sh stop worker-server  停止 worker 服务
+sh bin/dolphinscheduler-daemon.sh stop logger-server  停止 logger  服务
+sh bin/dolphinscheduler-daemon.sh stop api-server     停止 api    服务
+sh bin/dolphinscheduler-daemon.sh stop alert-server   停止 alert  服务
+
+
+启动命令:
+bin/start-all.sh 启动所有服务
+
+sh bin/dolphinscheduler-daemon.sh start master-server  启动 master 服务
+sh bin/dolphinscheduler-daemon.sh start worker-server  启动 worker 服务
+sh bin/dolphinscheduler-daemon.sh start logger-server  启动 logger  服务
+sh bin/dolphinscheduler-daemon.sh start api-server     启动 api    服务
+sh bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
+
+```
+
+```
+ 注意: 使用start-all.sh或者stop-all.sh的时候,如果执行该命令的机器没有配置到所有机器的ssh免密登录的话,会提示输入密码
+```
+
+- 脚本完成后,使用`jps`命令查看各个节点服务是否成功关闭(`jps`为`java JDK`自带)
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+如果对应的master服务或者worker服务不存在,则代表master/worker服务成功关闭.
+
+
+### 2.2 修改配置文件
+
+ - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+    
+    * 缩容master节点, 需要修改 ips 和 masters 参数.
+    * 缩容worker节点, 需要修改 ips 和  workers 参数.
+
+```shell
+#在哪些机器上部署DS服务,请将缩容节点的hostname从ips中移除
+ips="现有master01,现有master02,现有worker01,现有worker02"
+
+#ssh端口,默认22
+sshPort="22"
+
+#master服务部署在哪台机器上,请将缩容的master节点从中移除
+masters="现有master01,现有master02"
+
+#worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,请将缩容的worker节点从中移除,下面示例的default即为组名
+workers="现有worker01:default,现有worker02:default"
+
+```
+
+
+
+
diff --git a/docs/zh-cn/1.3.5/user_doc/hardware-environment.md b/docs/zh-cn/1.3.5/user_doc/hardware-environment.md
new file mode 100644
index 0000000..670740a
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/hardware-environment.md
@@ -0,0 +1,48 @@
+# 软硬件环境建议配置
+
+DolphinScheduler 作为一款开源分布式工作流任务调度系统,可以很好的部署和运行在 Intel 架构服务器环境及主流虚拟化环境下,并支持主流的Linux操作系统环境。
+
+## 1. Linux 操作系统版本要求
+
+| 操作系统       | 版本         |
+| :----------------------- | :----------: |
+| Red Hat Enterprise Linux | 7.0 及以上   |
+| CentOS                   | 7.0 及以上   |
+| Oracle Enterprise Linux  | 7.0 及以上   |
+| Ubuntu LTS               | 16.04 及以上 |
+
+> **注意:**
+>以上 Linux 操作系统可运行在物理服务器以及 VMware、KVM、XEN 主流虚拟化环境上。
+
+## 2. 服务器建议配置
+DolphinScheduler 支持运行在 Intel x86-64 架构的 64 位通用硬件服务器平台。对生产环境的服务器硬件配置有以下建议:
+### 生产环境
+
+| **CPU** | **内存** | **硬盘类型** | **网络** | **实例数量** |
+| --- | --- | --- | --- | --- |
+| 4核+ | 8 GB+ | SAS | 千兆网卡 | 1+ |
+
+> **注意:**
+> - 以上建议配置为部署 DolphinScheduler 的最低配置,生产环境强烈推荐使用更高的配置。
+> - 硬盘大小配置建议 50GB+ ,系统盘和数据盘分开。
+
+
+## 3. 网络要求
+
+DolphinScheduler正常运行提供如下的网络端口配置:
+
+| 组件 | 默认端口 | 说明 |
+|  --- | --- | --- |
+| MasterServer |  5678  | 非通信端口,只需本机端口不冲突即可 |
+| WorkerServer | 1234  | 非通信端口,只需本机端口不冲突即可 |
+| ApiApplicationServer |  12345 | 提供后端通信端口 |
+
+
+> **注意:**
+> - MasterServer 和 WorkerServer 不需要开启网络间通信,只需本机端口不冲突即可
+> - 管理员可根据实际环境中 DolphinScheduler 组件部署方案,在网络侧和主机侧开放相关端口
+
+## 4. 客户端 Web 浏览器要求
+
+DolphinScheduler 推荐 Chrome 以及使用 Chrome 内核的较新版本浏览器访问前端可视化操作界面。
+
diff --git a/docs/zh-cn/1.3.5/user_doc/load-balance.md b/docs/zh-cn/1.3.5/user_doc/load-balance.md
new file mode 100644
index 0000000..b4ac771
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/load-balance.md
@@ -0,0 +1,62 @@
+### 负载均衡
+负载均衡即通过路由算法(通常是集群环境),合理的分摊服务器压力,达到服务器性能的最大优化。
+
+### DolphinScheduler-Worker负载均衡算法
+
+DolphinScheduler-Master分配任务至worker,默认提供了三种算法:
+
+加权随机(random)
+
+平滑轮询(roundrobin)
+
+线性负载(lowerweight)
+
+默认配置为线性加权负载。
+
+由于路由是在客户端做的,即master服务,因此你可以更改master.properties 中的master.host.selector来配置你所想要的算法。
+
+eg:master.host.selector=random(不区分大小写)
+
+### Worker 负载均衡配置
+
+配置文件 worker.properties
+
+#### 权重
+
+上述所有的负载算法都是基于权重来进行加权分配的,权重影响分流结果。你可以在 worker.properties 中修改 worker.weight 的值来给不同的机器设置不同的权重。
+
+#### 预热
+
+考虑到JIT优化,我们会让worker在启动后低功率的运行一段时间,使其逐渐达到最佳状态,这段过程我们称之为预热。感兴趣的同学可以去阅读JIT相关的文章。
+
+因此worker在启动后,他的权重会随着时间逐渐达到最大(默认十分钟,我们没有提供配置项,如果需要,你可以修改并提交相关的PR)。
+
+### 负载均衡算法细述
+
+#### 随机(加权)
+
+该算法比较简单,即在符合的worker中随机选取一台(权重会影响他的比重)。
+
+#### 平滑轮询(加权)
+
+加权轮询算法有一个明显的缺陷,即在某些特殊的权重下,加权轮询调度会生成不均匀的实例序列,这种不平滑的负载可能会使某些实例出现瞬时高负载的现象,导致系统存在宕机的风险。为了解决这个调度缺陷,我们提供了平滑加权轮询算法。
+
+每台worker都有两个权重,即weight(预热完成后保持不变)和current_weight(动态变化)。每次路由时,都会遍历所有的worker,使其current_weight增加weight,同时累加所有worker的weight,计为total_weight;然后挑选current_weight最大的worker作为本次执行任务的worker,与此同时,将这台worker的current_weight减去total_weight。
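+
+下面给出平滑加权轮询的一个简化 Java 示意(worker 名称与权重均为假设值,仅演示 current_weight 的更新过程,并非 DolphinScheduler 源码实现):
+
+```java
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+// 平滑加权轮询的简化示意:每次选择 current_weight 最大的 worker,
+// 并将其 current_weight 减去所有 worker 权重之和 total_weight
+public class SmoothWeightedRoundRobinDemo {
+
+    public static void main(String[] args) {
+        // 假设三台 worker 及其权重
+        Map<String, Integer> weights = new LinkedHashMap<>();
+        weights.put("worker1", 5);
+        weights.put("worker2", 1);
+        weights.put("worker3", 1);
+
+        Map<String, Integer> currentWeights = new LinkedHashMap<>();
+        weights.keySet().forEach(k -> currentWeights.put(k, 0));
+
+        int totalWeight = weights.values().stream().mapToInt(Integer::intValue).sum();
+
+        for (int i = 0; i < 7; i++) {
+            String selected = null;
+            for (Map.Entry<String, Integer> e : weights.entrySet()) {
+                // current_weight += weight
+                currentWeights.merge(e.getKey(), e.getValue(), Integer::sum);
+                if (selected == null || currentWeights.get(e.getKey()) > currentWeights.get(selected)) {
+                    selected = e.getKey();
+                }
+            }
+            // 被选中的 worker:current_weight -= total_weight
+            currentWeights.merge(selected, -totalWeight, Integer::sum);
+            System.out.println("第" + (i + 1) + "次选择: " + selected);
+        }
+        // 输出序列为 worker1, worker1, worker2, worker1, worker3, worker1, worker1,
+        // 相比普通加权轮询更加平滑
+    }
+}
+```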
+
+#### 线性加权(默认算法)
+
+worker每隔一段时间会向注册中心上报自己的负载信息。master主要根据以下两个指标来进行判断:
+
+* load平均值(默认是CPU核数*2)
+* 可用物理内存  (默认是0.3,单位是G)
+
+如果load平均值高于配置的阈值,或者可用物理内存低于配置的阈值,那么这台worker将不参与负载。(即不分配流量)
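+
+结合上面两个指标,worker 是否参与本次分配可以用如下简化示意表达(参数与阈值均为示意,实际以 worker.properties 中的配置为准):
+
+```java
+// 仅为示意:根据心跳上报的负载信息判断 worker 是否参与本次任务分配
+public class WorkerLoadCheckDemo {
+
+    static boolean canAcceptTask(double loadAvg, double availableMemoryG,
+                                 double maxCpuLoadAvg, double reservedMemoryG) {
+        // load 平均值高于阈值,或可用物理内存低于保留内存时,该 worker 不参与负载
+        return loadAvg < maxCpuLoadAvg && availableMemoryG > reservedMemoryG;
+    }
+
+    public static void main(String[] args) {
+        // 假设阈值:maxCpuLoadAvg = CPU核数 * 2 = 16,reservedMemory = 0.3G
+        System.out.println(canAcceptTask(3.2, 12.5, 16, 0.3));  // true,参与负载
+        System.out.println(canAcceptTask(20.0, 12.5, 16, 0.3)); // false,load 过高
+        System.out.println(canAcceptTask(3.2, 0.1, 16, 0.3));   // false,可用内存不足
+    }
+}
+```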
+
+你可以在worker.properties修改下面的属性来自定义配置
+
+* worker.max.cpuload.avg=-1 (only less than cpu avg load, worker server can work. default value -1: the number of cpu cores * 2)
+
+* worker.reserved.memory=0.3 (only larger than reserved memory, worker server can work. default value: physical memory * 1/6, unit is G.)
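+
+下面是一个本地自查的示意脚本(仅为示意,阈值按上述默认语义计算,实际判断由 worker 服务内部完成),用于模拟该过滤条件:
+
+```shell
+# 示意:按照 lowerweight 算法的过滤条件,检查本机是否会参与任务分配
+MAX_CPU_LOAD_AVG=$(( $(nproc) * 2 ))   # 对应 worker.max.cpuload.avg 默认语义:CPU核数*2
+RESERVED_MEMORY_G=0.3                  # 对应 worker.reserved.memory,单位 G
+
+load_avg=$(awk '{print $1}' /proc/loadavg)
+avail_mem_g=$(awk '/MemAvailable/ {printf "%.2f", $2/1024/1024}' /proc/meminfo)
+
+awk -v l="$load_avg" -v m="$avail_mem_g" -v lmax="$MAX_CPU_LOAD_AVG" -v mmin="$RESERVED_MEMORY_G" 'BEGIN {
+    if (l > lmax || m < mmin) print "负载过高或可用内存不足,该 worker 不会被分配任务";
+    else print "该 worker 可参与任务分配";
+}'
+```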
+
diff --git a/docs/zh-cn/1.3.5/user_doc/metadata-1.3.md b/docs/zh-cn/1.3.5/user_doc/metadata-1.3.md
new file mode 100644
index 0000000..e298b48
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/metadata-1.3.md
@@ -0,0 +1,185 @@
+# Dolphin Scheduler 1.3元数据文档
+
+<a name="25Ald"></a>
+### 表概览
+| 表名 | 表信息 |
+| :---: | :---: |
+| t_ds_access_token | 访问ds后端的token |
+| t_ds_alert | 告警信息 |
+| t_ds_alertgroup | 告警组 |
+| t_ds_command | 执行命令 |
+| t_ds_datasource | 数据源 |
+| t_ds_error_command | 错误命令 |
+| t_ds_process_definition | 流程定义 |
+| t_ds_process_instance | 流程实例 |
+| t_ds_project | 项目 |
+| t_ds_queue | 队列 |
+| t_ds_relation_datasource_user | 用户关联数据源 |
+| t_ds_relation_process_instance | 子流程 |
+| t_ds_relation_project_user | 用户关联项目 |
+| t_ds_relation_resources_user | 用户关联资源 |
+| t_ds_relation_udfs_user | 用户关联UDF函数 |
+| t_ds_relation_user_alertgroup | 用户关联告警组 |
+| t_ds_resources | 资源文件 |
+| t_ds_schedules | 流程定时调度 |
+| t_ds_session | 用户登录的session |
+| t_ds_task_instance | 任务实例 |
+| t_ds_tenant | 租户 |
+| t_ds_udfs | UDF资源 |
+| t_ds_user | 用户 |
+| t_ds_version | ds版本信息 |
+
+<a name="VNVGr"></a>
+### 用户	队列	数据源
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- 一个租户下可以有多个用户<br />
+- t_ds_user中的queue字段存储的是队列表中的queue_name信息,t_ds_tenant下存的是queue_id,在流程定义执行过程中,用户队列优先级最高,用户队列为空则采用租户队列<br />
+- t_ds_datasource表中的user_id字段表示创建该数据源的用户,t_ds_relation_datasource_user中的user_id表示,对数据源有权限的用户<br />
+<a name="HHyGV"></a>
+### 项目	资源	告警
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- 一个用户可以有多个项目,用户项目授权通过t_ds_relation_project_user表完成project_id和user_id的关系绑定<br />
+- t_ds_projcet表中的user_id表示创建该项目的用户,t_ds_relation_project_user表中的user_id表示对项目有权限的用户<br />
+- t_ds_resources表中的user_id表示创建该资源的用户,t_ds_relation_resources_user中的user_id表示对资源有权限的用户<br />
+- t_ds_udfs表中的user_id表示创建该UDF的用户,t_ds_relation_udfs_user表中的user_id表示对UDF有权限的用户<br />
+<a name="Bg2Sn"></a>
+### 命令	流程	任务
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
+
+- 一个项目有多个流程定义,一个流程定义可以生成多个流程实例,一个流程实例可以生成多个任务实例<br />
+- t_ds_schedulers表存放流程定义的定时调度信息<br />
+- t_ds_relation_process_instance表存放的数据用于处理流程定义中含有子流程的情况,parent_process_instance_id表示含有子流程的主流程实例id,process_instance_id表示子流程实例的id,parent_task_instance_id表示子流程节点的任务实例id,流程实例表和任务实例表分别对应t_ds_process_instance表和t_ds_task_instance表
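+
+下面是一个查询示例(示意,假设元数据库为 MySQL、库名为 dolphinscheduler,id=100 仅为举例),用于查看某个主流程实例下子流程的对应关系:
+
+```shell
+mysql -u{user} -p -D dolphinscheduler -e "
+SELECT parent_process_instance_id, process_instance_id, parent_task_instance_id
+FROM t_ds_relation_process_instance
+WHERE parent_process_instance_id = 100;"
+```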
+<a name="Pv25P"></a>
+### 核心表Schema
+<a name="32Jzd"></a>
+#### t_ds_process_definition
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 流程定义名称 |
+| version | int | 流程定义版本 |
+| release_state | tinyint | 流程定义的发布状态:0 未上线  1已上线 |
+| project_id | int | 项目id |
+| user_id | int | 流程定义所属用户id |
+| process_definition_json | longtext | 流程定义json串 |
+| description | text | 流程定义描述 |
+| global_params | text | 全局参数 |
+| flag | tinyint | 流程是否可用:0 不可用,1 可用 |
+| locations | text | 节点坐标信息 |
+| connects | text | 节点连线信息 |
+| receivers | text | 收件人 |
+| receivers_cc | text | 抄送人 |
+| create_time | datetime | 创建时间 |
+| timeout | int | 超时时间 |
+| tenant_id | int | 租户id |
+| update_time | datetime | 更新时间 |
+| modify_by | varchar | 修改用户 |
+| resource_ids | varchar | 资源id集 |
+
+<a name="e6jfz"></a>
+#### t_ds_process_instance
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 流程实例名称 |
+| process_definition_id | int | 流程定义id |
+| state | tinyint | 流程实例状态:0 提交成功,1 正在运行,2 准备暂停,3 暂停,4 准备停止,5 停止,6 失败,7 成功,8 需要容错,9 kill,10 等待线程,11 等待依赖完成 |
+| recovery | tinyint | 流程实例容错标识:0 正常,1 需要被容错重启 |
+| start_time | datetime | 流程实例开始时间 |
+| end_time | datetime | 流程实例结束时间 |
+| run_times | int | 流程实例运行次数 |
+| host | varchar | 流程实例所在的机器 |
+| command_type | tinyint | 命令类型:0 启动工作流,1 从当前节点开始执行,2 恢复被容错的工作流,3 恢复暂停流程,4 从失败节点开始执行,5 补数,6 调度,7 重跑,8 暂停,9 停止,10 恢复等待线程 |
+| command_param | text | 命令的参数(json格式) |
+| task_depend_type | tinyint | 节点依赖类型:0 当前节点,1 向前执行,2 向后执行 |
+| max_try_times | tinyint | 最大重试次数 |
+| failure_strategy | tinyint | 失败策略 0 失败后结束,1 失败后继续 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组id |
+| schedule_time | datetime | 预期运行时间 |
+| command_start_time | datetime | 开始命令时间 |
+| global_params | text | 全局参数(固化流程定义的参数) |
+| process_instance_json | longtext | 流程实例json(copy的流程定义的json) |
+| flag | tinyint | 是否可用,1 可用,0不可用 |
+| update_time | timestamp | 更新时间 |
+| is_sub_process | int | 是否是子工作流 1 是,0 不是 |
+| executor_id | int | 命令执行用户 |
+| locations | text | 节点坐标信息 |
+| connects | text | 节点连线信息 |
+| history_cmd | text | 历史命令,记录所有对流程实例的操作 |
+| dependence_schedule_times | text | 依赖节点的预估时间 |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+| timeout | int | 超时时间 |
+| tenant_id | int | 租户id |
+
+<a name="IvHEc"></a>
+#### t_ds_task_instance
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 任务名称 |
+| task_type | varchar | 任务类型 |
+| process_definition_id | int | 流程定义id |
+| process_instance_id | int | 流程实例id |
+| task_json | longtext | 任务节点json |
+| state | tinyint | 任务实例状态:0 提交成功,1 正在运行,2 准备暂停,3 暂停,4 准备停止,5 停止,6 失败,7 成功,8 需要容错,9 kill,10 等待线程,11 等待依赖完成 |
+| submit_time | datetime | 任务提交时间 |
+| start_time | datetime | 任务开始时间 |
+| end_time | datetime | 任务结束时间 |
+| host | varchar | 执行任务的机器 |
+| execute_path | varchar | 任务执行路径 |
+| log_path | varchar | 任务日志路径 |
+| alert_flag | tinyint | 是否告警 |
+| retry_times | int | 重试次数 |
+| pid | int | 进程pid |
+| app_link | varchar | yarn app id |
+| flag | tinyint | 是否可用:0 不可用,1 可用 |
+| retry_interval | int | 重试间隔 |
+| max_retry_times | int | 最大重试次数 |
+| task_instance_priority | int | 任务实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+
+<a name="pPQkU"></a>
+#### t_ds_schedules
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| process_definition_id | int | 流程定义id |
+| start_time | datetime | 调度开始时间 |
+| end_time | datetime | 调度结束时间 |
+| crontab | varchar | crontab 表达式 |
+| failure_strategy | tinyint | 失败策略: 0 结束,1 继续 |
+| user_id | int | 用户id |
+| release_state | tinyint | 状态:0 未上线,1 上线 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组id |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+| create_time | datetime | 创建时间 |
+| update_time | datetime | 更新时间 |
+
+<a name="TkQzn"></a>
+#### t_ds_command
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| command_type | tinyint | 命令类型:0 启动工作流,1 从当前节点开始执行,2 恢复被容错的工作流,3 恢复暂停流程,4 从失败节点开始执行,5 补数,6 调度,7 重跑,8 暂停,9 停止,10 恢复等待线程 |
+| process_definition_id | int | 流程定义id |
+| command_param | text | 命令的参数(json格式) |
+| task_depend_type | tinyint | 节点依赖类型:0 当前节点,1 向前执行,2 向后执行 |
+| failure_strategy | tinyint | 失败策略:0结束,1继续 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组 |
+| schedule_time | datetime | 预期运行时间 |
+| start_time | datetime | 开始时间 |
+| executor_id | int | 执行用户id |
+| dependence | varchar | 依赖字段 |
+| update_time | datetime | 更新时间 |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+
+
+
diff --git a/docs/zh-cn/1.3.5/user_doc/quick-start.md b/docs/zh-cn/1.3.5/user_doc/quick-start.md
new file mode 100644
index 0000000..72a0a89
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/quick-start.md
@@ -0,0 +1,58 @@
+# 快速上手
+
+* 管理员用户登录
+  >地址:http://192.168.xx.xx:12345/dolphinscheduler 用户名密码:admin/dolphinscheduler123
+
+<p align="center">
+   <img src="/img/login.png" width="60%" />
+ </p>
+
+* 创建队列
+<p align="center">
+   <img src="/img/create-queue.png" width="60%" />
+ </p>
+
+  * 创建租户
+   <p align="center">
+    <img src="/img/addtenant.png" width="60%" />
+  </p>
+
+  * 创建普通用户
+<p align="center">
+   <img src="/img/useredit2.png" width="60%" />
+ </p>
+
+  * 创建告警组
+ <p align="center">
+    <img src="/img/mail_edit.png" width="60%" />
+  </p>
+
+ * 创建Worker分组
+ <p align="center">
+    <img src="/img/worker_group.png" width="60%" />
+  </p>
+ 
+ * 创建token令牌
+ <p align="center">
+    <img src="/img/creat_token.png" width="60%" />
+  </p>
+
+  * 使用普通用户登录
+  > 点击右上角用户名“退出”,重新使用普通用户登录。
+
+  * 项目管理->创建项目->点击项目名称
+<p align="center">
+   <img src="/img/project.png" width="60%" />
+ </p>
+
+  * 点击工作流定义->创建工作流定义->上线工作流定义
+
+<p align="center">
+   <img src="/img/dag1.png" width="60%" />
+ </p>
+
+  * 运行工作流定义->点击工作流实例->点击工作流实例名称->双击任务节点->查看任务执行日志
+
+ <p align="center">
+   <img src="/img/task-log.png" width="60%" />
+</p>
\ No newline at end of file
diff --git a/docs/zh-cn/1.3.5/user_doc/standalone-deployment.md b/docs/zh-cn/1.3.5/user_doc/standalone-deployment.md
new file mode 100644
index 0000000..baea805
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/standalone-deployment.md
@@ -0,0 +1,336 @@
+# 单机部署(Standalone)
+
+# 1、基础软件安装(必装项请自行安装)
+
+ * PostgreSQL (8.2.15+) or MySQL (5.7系列)  :  两者任选其一即可, 如MySQL则需要JDBC Driver 5.1.47+
+ * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  必装,请安装好后在/etc/profile下配置 JAVA_HOME 及 PATH 变量
+ * ZooKeeper (3.4.6+) :必装 
+ * Hadoop (2.6+) or MinIO :选装, 如果需要用到资源上传功能,针对单机可以选择本地文件目录作为上传文件夹(此操作不需要部署Hadoop);当然也可以选择上传到Hadoop or MinIO集群上
+
+```markdown
+ 注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的运行。
+```
+
+# 2、下载二进制tar.gz包
+
+- 请下载最新版本的后端安装包至服务器部署目录,比如创建 /opt/dolphinscheduler 作为安装部署目录,下载地址: [下载](/zh-cn/download/download.html),下载后上传tar包到该目录中,并进行解压
+
+```shell
+# 创建部署目录,部署目录请不要创建在/root、/home等高权限目录 
+mkdir -p /opt/dolphinscheduler;
+cd /opt/dolphinscheduler;
+# 解压缩
+tar -zxvf apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
+ 
+mv apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin  dolphinscheduler-bin
+```
+
+# 3、创建部署用户并赋予目录操作权限
+
+- 创建部署用户,并且一定要配置sudo免密。以创建dolphinscheduler用户为例
+
+```shell
+# 创建用户需使用root登录
+useradd dolphinscheduler;
+
+# 添加密码
+echo "dolphinscheduler" | passwd --stdin dolphinscheduler
+
+# 配置sudo免密
+sed -i '$adolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+# 修改目录权限,使得部署用户对dolphinscheduler-bin目录有操作权限
+chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
+```
+
+```
+ 注意:
+ - 因为任务执行服务是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。初学者不理解的话,完全可以暂时忽略这一点
+ - 如果发现/etc/sudoers文件中有"Defaults requiretty"这行,也请注释掉
+ - 如果用到资源上传的话,还需要给该部署用户分配操作`本地文件系统或者HDFS或者MinIO`的权限
+```
+
+# 4、ssh免密配置
+
+- 切换到部署用户并配置ssh本机免密登录
+
+```shell
+su dolphinscheduler;
+
+ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+```
+注意:*正常设置后,dolphinscheduler用户在执行命令`ssh localhost` 是不需要再输入密码的*
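+
+验证方式示例(配置正确时应直接返回结果,而不会提示输入密码):
+
+```shell
+ssh localhost "whoami"
+```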
+
+# 5、数据库初始化
+
+- 进入数据库,默认数据库是PostgreSQL,如选择MySQL的话,后续需要添加mysql-connector-java驱动包到DolphinScheduler的lib目录下
+``` 
+mysql -uroot -p
+```
+
+- 进入数据库命令行窗口后,执行数据库初始化命令,设置访问账号和密码。**注: {user} 和 {password} 需要替换为具体的数据库用户名和密码** 
+
+    ``` mysql
+    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+    mysql> flush privileges;
+    ```
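+
+    执行完成后,可以做一个简单验证(示意,{user} 和 {password} 替换为上面设置的值):
+
+    ```shell
+    mysql -u{user} -p{password} -e "SHOW DATABASES LIKE 'dolphinscheduler';"
+    ```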
+
+
+- 创建表和导入基础数据
+
+    - 修改 conf 目录下 datasource.properties 中的下列配置
+
+      - ```shell
+        vi conf/datasource.properties
+        ```
+
+    - 如果选择 MySQL,请注释掉 PostgreSQL 相关配置(反之同理), 还需要手动添加 [mysql-connector-java 驱动 jar](https://downloads.mysql.com/archives/c-j/) 包到 lib 目录下,这里下载的是mysql-connector-java-5.1.47.jar,然后正确配置数据库连接相关信息
+    
+    ```properties
+      # postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     需要修改ip,本机localhost即可
+      spring.datasource.username=xxx						需要修改为上面的{user}值
+      spring.datasource.password=xxx						需要修改为上面的{password}值
+    ```
+
+    - 修改并保存完后,执行 script 目录下的创建表及导入基础数据脚本
+
+    ```shell
+    sh script/create-dolphinscheduler.sh
+    ```
+
+​       *注意: 如果执行上述脚本报 ”/bin/java: No such file or directory“ 错误,请在/etc/profile下配置  JAVA_HOME 及 PATH 变量*
+
+# 6、修改运行参数
+
+- 修改 conf/env 目录下的 `dolphinscheduler_env.sh` 环境变量(以相关用到的软件都安装在/opt/soft下为例)
+
+    ```shell
+    export HADOOP_HOME=/opt/soft/hadoop
+    export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+    #export SPARK_HOME1=/opt/soft/spark1
+    export SPARK_HOME2=/opt/soft/spark2
+    export PYTHON_HOME=/opt/soft/python
+    export JAVA_HOME=/opt/soft/java
+    export HIVE_HOME=/opt/soft/hive
+    export FLINK_HOME=/opt/soft/flink
+    export DATAX_HOME=/opt/soft/datax/bin/datax.py
+    export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    ```
+
+     `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉;如果找不到dolphinscheduler_env.sh, 请运行 ls -a`
+
+    
+
+- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - 修改一键部署配置文件 `conf/config/install_config.conf`中的各参数,特别注意以下参数的配置
+
+    ```shell
+    # 这里填 mysql or postgresql
+    dbtype="mysql"
+    
+    # 数据库连接地址
+    dbhost="localhost:3306"
+    
+    # 数据库名
+    dbname="dolphinscheduler"
+    
+    # 数据库用户名,此处需要修改为上面设置的{user}具体值
+    username="xxx"    
+    
+    # 数据库密码, 如果有特殊字符,请使用\转义,需要修改为上面设置的{password}具体值
+    password="xxx"
+
+    #Zookeeper地址,单机本机是localhost:2181,记得把2181端口带上
+    zkQuorum="localhost:2181"
+    
+    #将DS安装到哪个目录,如: /opt/soft/dolphinscheduler,不同于现在的目录
+    installPath="/opt/soft/dolphinscheduler"
+    
+    #使用哪个用户部署,使用第3节创建的用户
+    deployUser="dolphinscheduler"
+    
+    # 邮件配置,以qq邮箱为例
+    # 邮件协议
+    mailProtocol="SMTP"
+
+    # 邮件服务地址
+    mailServerHost="smtp.qq.com"
+
+    # 邮件服务端口
+    mailServerPort="25"
+
+    # mailSender和mailUser配置成一样即可
+    # 发送者
+    mailSender="xxx@qq.com"
+
+    # 发送用户
+    mailUser="xxx@qq.com"
+
+    # 邮箱密码
+    mailPassword="xxx"
+
+    # TLS协议的邮箱设置为true,否则设置为false
+    starttlsEnable="true"
+
+    # 开启SSL协议的邮箱配置为true,否则为false。注意: starttlsEnable和sslEnable不能同时为true
+    sslEnable="false"
+
+    # 邮件服务地址值,参考上面 mailServerHost
+    sslTrust="smtp.qq.com"
+
+    # 业务用到的比如sql等资源文件上传到哪里,可以设置:HDFS,S3,NONE,单机如果想使用本地文件系统,请配置为HDFS,因为HDFS支持本地文件系统;如果不需要资源上传功能请选择NONE。强调一点:使用本地文件系统不需要部署hadoop
+    resourceStorageType="HDFS"
+
+    # 这里以保存到本地文件系统为例
+    #注:但是如果你想上传到HDFS的话,NameNode启用了HA,则需要将hadoop的配置文件core-site.xml和hdfs-site.xml放到conf目录下,本例即是放到/opt/dolphinscheduler/conf下面,并配置namenode cluster名称;如果NameNode不是HA,则修改为具体的ip或者主机名即可
+    defaultFS="file:///data/dolphinscheduler"    #hdfs://{具体的ip/主机名}:8020
+
+    # 如果没有使用到Yarn,保持以下默认值即可;如果ResourceManager是HA,则配置为ResourceManager节点的主备ip或者hostname,比如"192.168.xx.xx,192.168.xx.xx";如果是单ResourceManager请配置yarnHaIps=""即可
+    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+    # 如果ResourceManager是HA或者没有使用到Yarn保持默认值即可;如果是单ResourceManager,请配置真实的ResourceManager主机名或者ip
+    singleYarnIp="yarnIp1"
+
+    # 资源上传根路径,支持HDFS和S3,由于hdfs支持本地文件系统,需要确保本地文件夹存在且有读写权限
+    resourceUploadPath="/data/dolphinscheduler"
+
+    # 具备权限创建resourceUploadPath的用户
+    hdfsRootUser="hdfs"
+
+    #在哪些机器上部署DS服务,本机选localhost
+    ips="localhost"
+
+    #ssh端口,默认22
+    sshPort="22"
+    
+    #master服务部署在哪台机器上
+    masters="localhost"
+
+    #worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,下面示例的default即为组名
+    workers="localhost:default"
+    
+    #报警服务部署在哪台机器上
+    alertServer="localhost"
+    
+    #后端api服务部署在在哪台机器上
+    apiServers="localhost"
+
+    ```
+    
+
+    
+    *注:如果打算用到`资源中心`功能,请执行以下命令:*
+    
+    ```shell
+    sudo mkdir /data/dolphinscheduler
+    sudo chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler
+    ```
+
+# 7、一键部署
+
+- 切换到部署用户,执行一键部署脚本
+
+    `sh install.sh` 
+
+    ```
+    注意:
+    第一次部署的话,在运行到第3步`3,stop server`时会出现5次以下信息,此信息可以忽略
+    sh: bin/dolphinscheduler-daemon.sh: No such file or directory
+    ```
+
+- 脚本完成后,会启动以下5个服务,使用`jps`命令查看服务是否启动(`jps`为`java JDK`自带)
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+如果以上服务都正常启动,说明自动部署成功
+
+
+部署成功后,可以进行日志查看,日志统一存放于logs文件夹内
+
+```
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+    └── dolphinscheduler-logger-server.log
+```
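+
+例如,可用如下命令实时查看 master 服务日志(文件名以 logs 目录下实际生成的为准):
+
+```shell
+tail -f logs/dolphinscheduler-master-server.log
+```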
+
+
+
+# 8、登录系统
+
+- 访问前端页面地址(接口 ip 请自行修改为实际部署地址):
+http://192.168.xx.xx:12345/dolphinscheduler
+
+   <p align="center">
+     <img src="/img/login.png" width="60%" />
+   </p>
+
+# 9、启停服务
+
+* 一键停止集群所有服务
+
+  ` sh ./bin/stop-all.sh`
+
+* 一键开启集群所有服务
+
+  ` sh ./bin/start-all.sh`
+
+* 启停Master
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+```
+
+* 启停Worker
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+```
+
+* 启停Api
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+```
+
+* 启停Logger
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start logger-server
+sh ./bin/dolphinscheduler-daemon.sh stop logger-server
+```
+
+* 启停Alert
+
+```shell
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+```
+
+`注:服务用途请具体参见《系统架构设计》小节`
+
diff --git a/docs/zh-cn/1.3.5/user_doc/system-manual.md b/docs/zh-cn/1.3.5/user_doc/system-manual.md
new file mode 100644
index 0000000..6e9bb35
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/system-manual.md
@@ -0,0 +1,865 @@
+# 系统使用手册
+
+
+## 快速上手
+
+  > 请参照[快速上手](quick-start.html)
+
+## 操作指南
+
+### 1. 首页
+   首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。
+    <p align="center">
+     <img src="/img/home.png" width="80%" />
+    </p>
+
+### 2. 项目管理
+#### 2.1 创建项目
+  - 点击"项目管理"进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。
+  
+    <p align="center">
+        <img src="/img/project.png" width="80%" />
+    </p>
+
+#### 2.2 项目首页
+   - 在项目管理页面点击项目名称链接,进入项目首页,如下图所示,项目首页包含该项目的任务状态统计、流程状态统计、工作流定义统计。
+     <p align="center">
+        <img src="/img/project-home.png" width="80%" />
+     </p>
+ 
+ - 任务状态统计:在指定时间范围内,统计任务实例中状态为提交成功、正在运行、准备暂停、暂停、准备停止、停止、失败、成功、需要容错、kill、等待线程的个数
+ - 流程状态统计:在指定时间范围内,统计工作流实例中状态为提交成功、正在运行、准备暂停、暂停、准备停止、停止、失败、成功、需要容错、kill、等待线程的个数
+ - 工作流定义统计:统计用户创建的工作流定义及管理员授予该用户的工作流定义
+
+#### 2.3 工作流定义
+#### <span id=creatDag>2.3.1 创建工作流定义</span>
+  - 点击项目管理->工作流->工作流定义,进入工作流定义页面,点击“创建工作流”按钮,进入**工作流DAG编辑**页面,如下图所示:
+    <p align="center">
+        <img src="/img/dag0.png" width="80%" />
+    </p>  
+  - 工具栏中拖拽<img src="/img/shell.png" width="35"/>到画板中,新增一个Shell任务,如下图所示:
+    <p align="center">
+        <img src="/img/shell_dag.png" width="80%" />
+    </p>  
+  - **添加shell任务的参数设置:**
+  1. 填写“节点名称”,“描述”,“脚本”字段;
+  1. “运行标志”勾选“正常”,若勾选“禁止执行”,运行工作流不会执行该任务;
+  1. 选择“任务优先级”:当worker线程数不足时,级别高的任务在执行队列中会优先执行,相同优先级的任务按照先进先出的顺序执行;
+  1. 超时告警(非必选):勾选超时告警、超时失败,填写“超时时长”,当任务执行时间超过**超时时长**,会发送告警邮件并且任务超时失败;
+  1. 资源(非必选)。资源文件是资源中心->文件管理页面创建或上传的文件,如文件名为`test.sh`,脚本中调用资源命令为`sh test.sh`;
+  1. 自定义参数(非必填),参考[自定义参数](#UserDefinedParameters);
+  1. 点击"确认添加"按钮,保存任务设置。
+  
+  - **增加任务执行的先后顺序:** 点击右上角图标<img src="/img/line.png" width="35"/>连接任务;如下图所示,任务2和任务3并行执行,当任务1执行完,任务2、3会同时执行。
+
+    <p align="center">
+       <img src="/img/dag2.png" width="80%" />
+    </p>
+
+  - **删除依赖关系:** 点击右上角"箭头"图标<img src="/img/arrow.png" width="35"/>,选中连接线,点击右上角"删除"图标<img src="/img/delete.png" width="35"/>,删除任务间的依赖关系。
+    <p align="center">
+       <img src="/img/dag3.png" width="80%" />
+    </p>
+
+  - **保存工作流定义:** 点击”保存“按钮,弹出"设置DAG图名称"弹框,如下图所示,输入工作流定义名称,工作流定义描述,设置全局参数(选填,参考[自定义参数](#UserDefinedParameters)),点击"添加"按钮,工作流定义创建成功。
+    <p align="center">
+       <img src="/img/dag4.png" width="80%" />
+     </p>
+  > 其他类型任务,请参考 [任务节点类型和参数设置](#TaskParamers)。
+#### 2.3.2  工作流定义操作功能
+  点击项目管理->工作流->工作流定义,进入工作流定义页面,如下图所示:
+      <p align="center">
+          <img src="/img/work_list.png" width="80%" />
+      </p>
+  工作流定义列表的操作功能如下:
+  - **编辑:** 只能编辑"下线"的工作流定义。工作流DAG编辑同[创建工作流定义](#creatDag)。
+  - **上线:** 工作流状态为"下线"时,上线工作流,只有"上线"状态的工作流能运行,但不能编辑。
+  - **下线:** 工作流状态为"上线"时,下线工作流,下线状态的工作流可以编辑,但不能运行。
+  - **运行:** 只有上线的工作流能运行。运行操作步骤见[2.3.3 运行工作流](#runWorkflow)
+  - **定时:** 只有上线的工作流能设置定时,系统自动定时调度工作流运行。创建定时后的状态为"下线",需在定时管理页面上线定时才生效。定时操作步骤见[2.3.4 工作流定时](#creatTiming)。
+  - **定时管理:** 定时管理页面可编辑、上线/下线、删除定时。
+  - **删除:** 删除工作流定义。
+  - **下载:** 下载工作流定义到本地。
+  - **树形图:** 以树形结构展示任务节点的类型及任务状态,如下图所示:
+    <p align="center">
+        <img src="/img/tree.png" width="80%" />
+    </p>  
+
+#### <span id=runWorkflow>2.3.3 运行工作流</span>
+  - 点击项目管理->工作流->工作流定义,进入工作流定义页面,如下图所示,点击"上线"按钮<img src="/img/online.png" width="35"/>,上线工作流。
+    <p align="center">
+        <img src="/img/work_list.png" width="80%" />
+    </p>
+
+  - 点击”运行“按钮,弹出启动参数设置弹框,如下图所示,设置启动参数,点击弹框中的"运行"按钮,工作流开始运行,工作流实例页面生成一条工作流实例。
+     <p align="center">
+       <img src="/img/run-work.png" width="80%" />
+     </p>  
+  <span id=runParamers>工作流运行参数说明:</span> 
+       
+    * 失败策略:当某一个任务节点执行失败时,其他并行的任务节点需要执行的策略。”继续“表示:某一任务失败后,其他任务节点正常执行;”结束“表示:终止所有正在执行的任务,并终止整个流程。
+    * 通知策略:当流程结束,根据流程状态发送流程执行信息通知邮件,包含任何状态都不发,成功发,失败发,成功或失败都发。
+    * 流程优先级:流程运行的优先级,分五个等级:最高(HIGHEST),高(HIGH),中(MEDIUM),低(LOW),最低(LOWEST)。当master线程数不足时,级别高的流程在执行队列中会优先执行,相同优先级的流程按照先进先出的顺序执行。
+    * worker分组:该流程只能在指定的worker机器组里执行。默认是Default,可以在任一worker上执行。
+    * 通知组:选择通知策略||超时报警||发生容错时,会发送流程信息或邮件到通知组里的所有成员。
+    * 收件人:选择通知策略||超时报警||发生容错时,会发送流程信息或告警邮件到收件人列表。
+    * 抄送人:选择通知策略||超时报警||发生容错时,会抄送流程信息或告警邮件到抄送人列表。
+    * 启动参数: 在启动新的流程实例时,设置或覆盖全局参数的值。
+    * 补数:包括串行补数、并行补数2种模式。串行补数:指定时间范围内,从开始日期至结束日期依次执行补数,只生成一条流程实例;并行补数:指定时间范围内,多天同时进行补数,生成N条流程实例。 
+  * 补数: 执行指定日期的工作流定义,可以选择补数时间范围(目前只支持针对连续的天进行补数),比如需要补5月1号到5月10号的数据,如下图所示: 
+    <p align="center">
+        <img src="/img/complement.png" width="80%" />
+    </p>
+
+    >串行模式:补数从5月1号到5月10号依次执行,流程实例页面生成一条流程实例;
+    
+    >并行模式:同时执行5月1号到5月10号的任务,流程实例页面生成十条流程实例。
+
+#### <span id=creatTiming>2.3.4 工作流定时</span>
+  - 创建定时:点击项目管理->工作流->工作流定义,进入工作流定义页面,上线工作流,点击"定时"按钮<img src="/img/timing.png" width="35"/>,弹出定时参数设置弹框,如下图所示:
+    <p align="center">
+        <img src="/img/time-schedule.png" width="80%" />
+    </p>
+  - 选择起止时间。在起止时间范围内,定时运行工作流;不在起止时间范围内,不再产生定时工作流实例。
+  - 添加一个每天凌晨5点执行一次的定时,如下图所示:
+    <p align="center">
+        <img src="/img/time-schedule2.png" width="80%" />
+    </p>
+  - 失败策略、通知策略、流程优先级、Worker分组、通知组、收件人、抄送人同[工作流运行参数](#runParamers)。
+  - 点击"创建"按钮,创建定时成功,此时定时状态为"**下线**",定时需**上线**才生效。
+  - 定时上线:点击"定时管理"按钮<img src="/img/timeManagement.png" width="35"/>,进入定时管理页面,点击"上线"按钮,定时状态变为"上线",如下图所示,工作流定时生效。
+    <p align="center">
+        <img src="/img/time-schedule3.png" width="80%" />
+    </p>
+#### 2.3.5 导入工作流
+  点击项目管理->工作流->工作流定义,进入工作流定义页面,点击"导入工作流"按钮,导入本地工作流文件,工作流定义列表显示导入的工作流,状态为下线。
+
+#### 2.4 工作流实例
+#### 2.4.1 查看工作流实例
+   - 点击项目管理->工作流->工作流实例,进入工作流实例页面,如下图所示:
+        <p align="center">
+           <img src="/img/instance-list.png" width="80%" />
+        </p>           
+   -  点击工作流名称,进入DAG查看页面,查看任务执行状态,如下图所示。
+      <p align="center">
+        <img src="/img/instance-detail.png" width="80%" />
+      </p>
+#### 2.4.2 查看任务日志
+   - 进入工作流实例页面,点击工作流名称,进入DAG查看页面,双击任务节点,如下图所示:
+      <p align="center">
+        <img src="/img/instanceViewLog.png" width="80%" />
+      </p>
+   - 点击"查看日志",弹出日志弹框,如下图所示,任务实例页面也可查看任务日志,参考[任务查看日志](#taskLog)。
+      <p align="center">
+        <img src="/img/task-log.png" width="80%" />
+      </p>
+#### 2.4.3 查看任务历史记录
+   - 点击项目管理->工作流->工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面;
+   - 双击任务节点,如下图所示,点击"查看历史",跳转到任务实例页面,并展示该工作流实例运行的任务实例列表
+      <p align="center">
+        <img src="/img/task_history.png" width="80%" />
+      </p>
+      
+#### 2.4.4 查看运行参数
+   - 点击项目管理->工作流->工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面; 
+   - 点击左上角图标<img src="/img/run_params_button.png" width="35"/>,查看工作流实例的启动参数;点击图标<img src="/img/global_param.png" width="35"/>,查看工作流实例的全局参数和局部参数,如下图所示:
+      <p align="center">
+        <img src="/img/run_params.png" width="80%" />
+      </p>      
+ 
+#### 2.4.4 工作流实例操作功能
+   点击项目管理->工作流->工作流实例,进入工作流实例页面,如下图所示:          
+      <p align="center">
+        <img src="/img/instance-list.png" width="80%" />
+      </p>
+
+  - **编辑:** 只能编辑已终止的流程。点击"编辑"按钮或工作流实例名称进入DAG编辑页面,编辑后点击"保存"按钮,弹出保存DAG弹框,如下图所示,在弹框中勾选"是否更新到工作流定义",保存后则更新工作流定义;若不勾选,则不更新工作流定义。
+       <p align="center">
+         <img src="/img/editDag.png" width="80%" />
+       </p>
+  - **重跑:** 重新执行已经终止的流程。
+  - **恢复失败:** 针对失败的流程,可以执行恢复失败操作,从失败的节点开始执行。
+  - **停止:** 对正在运行的流程进行**停止**操作,后台会先`kill`worker进程,再执行`kill -9`操作
+  - **暂停:** 对正在运行的流程进行**暂停**操作,系统状态变为**等待执行**,会等待正在执行的任务结束,暂停下一个要执行的任务。
+  - **恢复暂停:** 对暂停的流程恢复,直接从**暂停的节点**开始运行
+  - **删除:** 删除工作流实例及工作流实例下的任务实例
+  - **甘特图:** Gantt图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:         
+       <p align="center">
+           <img src="/img/gant-pic.png" width="80%" />
+       </p>
+
+#### 2.5 任务实例
+  - 点击项目管理->工作流->任务实例,进入任务实例页面,如下图所示,点击工作流实例名称,可跳转到工作流实例DAG图查看任务状态。
+       <p align="center">
+          <img src="/img/task-list.png" width="80%" />
+       </p>
+
+  - <span id=taskLog>查看日志:</span>点击操作列中的“查看日志”按钮,可以查看任务执行的日志情况。
+       <p align="center">
+          <img src="/img/task-log2.png" width="80%" />
+       </p>
+
+### 3. 资源中心
+#### 3.1 hdfs资源配置
+  - 上传资源文件和udf函数,所有上传的文件和资源都会被存储到hdfs上,所以需要以下配置项:
+  
+```  
+conf/common.properties  
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。"/dolphinscheduler" is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type : HDFS,S3,NONE
+    resource.storage.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab    
+    # if resource.storage.type is HDFS,and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
+    # if resource.storage.type is S3,write S3 address,HA,for example :s3a://dolphinscheduler,
+    # Note,s3 be sure to create the root directory /dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020    
+    #resourcemanager ha note this need ips , this empty if single
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
+    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
+```
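+
+若 resource.storage.type 配置为 HDFS,可用如下命令确认资源上传根目录存在且有读写权限(示意,目录名以 resource.upload.path 的配置为准):
+
+```shell
+hdfs dfs -ls /dolphinscheduler
+# 目录不存在时,可由具备权限的用户创建
+hdfs dfs -mkdir -p /dolphinscheduler
+```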
+
+
+#### 3.2 文件管理
+
+  > 是对各种资源文件的管理,包括创建基本的txt/log/sh/conf/py/java等文件、上传jar包等各种类型文件,可进行编辑、重命名、下载、删除等操作。
+  <p align="center">
+   <img src="/img/file-manage.png" width="80%" />
+ </p>
+
+  * 创建文件
+ > 文件格式支持以下几种类型:txt、log、sh、conf、cfg、py、java、sql、xml、hql、properties
+
+<p align="center">
+   <img src="/img/file_create.png" width="80%" />
+ </p>
+
+  * 上传文件
+
+> 上传文件:点击"上传文件"按钮进行上传,将文件拖拽到上传区域,文件名会自动以上传的文件名称补全
+
+<p align="center">
+   <img src="/img/file_upload.png" width="80%" />
+ </p>
+
+
+  * 文件查看
+
+> 对可查看的文件类型,点击文件名称,可查看文件详情
+
+<p align="center">
+   <img src="/img/file_detail.png" width="80%" />
+ </p>
+
+  * 下载文件
+
+> 点击文件列表的"下载"按钮下载文件或者在文件详情中点击右上角"下载"按钮下载文件
+
+  * 文件重命名
+
+<p align="center">
+   <img src="/img/file_rename.png" width="80%" />
+ </p>
+
+  * 删除
+>  文件列表->点击"删除"按钮,删除指定文件
+
+#### 3.3 UDF管理
+#### 3.3.1 资源管理
+  > 资源管理和文件管理功能类似,不同之处是资源管理上传的是UDF函数,文件管理上传的是用户程序、脚本及配置文件
+  > 操作功能:重命名、下载、删除。
+
+  * 上传udf资源
+  > 和上传文件相同。
+  
+
+#### 3.3.2 函数管理
+
+  * 创建udf函数
+  > 点击“创建UDF函数”,输入udf函数参数,选择udf资源,点击“提交”,创建udf函数。
+
+ > 目前只支持HIVE的临时UDF函数
+
+  - UDF函数名称:输入UDF函数时的名称
+  - 包名类名:输入UDF函数的全路径  
+  - UDF资源:设置创建的UDF对应的资源文件
+
+<p align="center">
+   <img src="/img/udf_edit.png" width="80%" />
+ </p>
+
+
+### 4. 创建数据源
+  > 数据源中心支持MySQL、POSTGRESQL、HIVE/IMPALA、SPARK、CLICKHOUSE、ORACLE、SQLSERVER等数据源
+
+#### 4.1 创建/编辑MySQL数据源
+
+  - 点击“数据源中心->创建数据源”,根据需求创建不同类型的数据源。
+
+  - 数据源:选择MYSQL
+  - 数据源名称:输入数据源的名称
+  - 描述:输入数据源的描述
+  - IP主机名:输入连接MySQL的IP
+  - 端口:输入连接MySQL的端口
+  - 用户名:设置连接MySQL的用户名
+  - 密码:设置连接MySQL的密码
+  - 数据库名:输入连接MySQL的数据库名称
+  - Jdbc连接参数:用于MySQL连接的参数设置,以JSON形式填写
+
+<p align="center">
+   <img src="/img/mysql_edit.png" width="80%" />
+ </p>
+
+  > 点击“测试连接”,测试数据源是否可以连接成功。
+
+#### 4.2 创建/编辑POSTGRESQL数据源
+
+- 数据源:选择POSTGRESQL
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接POSTGRESQL的IP
+- 端口:输入连接POSTGRESQL的端口
+- 用户名:设置连接POSTGRESQL的用户名
+- 密码:设置连接POSTGRESQL的密码
+- 数据库名:输入连接POSTGRESQL的数据库名称
+- Jdbc连接参数:用于POSTGRESQL连接的参数设置,以JSON形式填写
+
+<p align="center">
+   <img src="/img/postgresql_edit.png" width="80%" />
+ </p>
+
+#### 4.3 创建/编辑HIVE数据源
+
+1.使用HiveServer2方式连接
+
+ <p align="center">
+    <img src="/img/hive_edit.png" width="80%" />
+  </p>
+
+  - 数据源:选择HIVE
+  - 数据源名称:输入数据源的名称
+  - 描述:输入数据源的描述
+  - IP/主机名:输入连接HIVE的IP
+  - 端口:输入连接HIVE的端口
+  - 用户名:设置连接HIVE的用户名
+  - 密码:设置连接HIVE的密码
+  - 数据库名:输入连接HIVE的数据库名称
+  - Jdbc连接参数:用于HIVE连接的参数设置,以JSON形式填写
+
+2.使用HiveServer2 HA Zookeeper方式连接
+
+ <p align="center">
+    <img src="/img/hive_edit2.png" width="80%" />
+  </p>
+
+
+注意:如果开启了**kerberos**,则需要填写 **Principal**
+<p align="center">
+    <img src="/img/hive_kerberos.png" width="80%" />
+  </p>
+
+
+
+
+#### 4.4 创建/编辑Spark数据源
+
+<p align="center">
+   <img src="/img/spark_datesource.png" width="80%" />
+ </p>
+
+- 数据源:选择Spark
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接Spark的IP
+- 端口:输入连接Spark的端口
+- 用户名:设置连接Spark的用户名
+- 密码:设置连接Spark的密码
+- 数据库名:输入连接Spark的数据库名称
+- Jdbc连接参数:用于Spark连接的参数设置,以JSON形式填写
+
+
+
+注意:如果开启了**kerberos**,则需要填写 **Principal**
+
+<p align="center">
+    <img src="/img/sparksql_kerberos.png" width="80%" />
+  </p>
+
+
+
+### 5. 安全中心(权限系统)
+
+  - 安全中心只有管理员账户才有权限操作,分别有队列管理、租户管理、用户管理、告警组管理、worker分组管理、令牌管理等功能,在用户管理模块可以对资源、数据源、项目等授权
+  - 管理员登录,默认用户名密码:admin/dolphinscheduler123
+
+#### 5.1 创建队列
+  - 队列是在执行spark、mapreduce等程序,需要用到“队列”参数时使用的。
+  - 管理员进入安全中心->队列管理页面,点击“创建队列”按钮,创建队列。
+ <p align="center">
+    <img src="/img/create-queue.png" width="80%" />
+  </p>
+
+
+#### 5.2 添加租户
+  - 租户对应的是Linux的用户,用于worker提交作业所使用的用户。如果linux没有这个用户,worker会在执行脚本的时候创建这个用户。
+  - 租户编码:**租户编码是Linux上的用户,唯一,不能重复**
+  - 管理员进入安全中心->租户管理页面,点击“创建租户”按钮,创建租户。
+
+ <p align="center">
+    <img src="/img/addtenant.png" width="80%" />
+  </p>
+
+#### 5.3 创建普通用户
+  -  用户分为**管理员用户**和**普通用户**
+  
+     * 管理员有授权和用户管理等权限,没有创建项目和工作流定义的权限。
+     * 普通用户可以创建项目,以及对工作流定义进行创建、编辑、执行等操作。
+     * 注意:如果该用户切换了租户,则该用户所在租户下的所有资源将复制到新租户下。
+  - 管理员进入安全中心->用户管理页面,点击“创建用户”按钮,创建用户。        
+<p align="center">
+   <img src="/img/useredit2.png" width="80%" />
+ </p>
+  
+  > **编辑用户信息** 
+   - 管理员进入安全中心->用户管理页面,点击"编辑"按钮,编辑用户信息。
+   - 普通用户登录后,点击用户名下拉框中的用户信息,进入用户信息页面,点击"编辑"按钮,编辑用户信息。
+  
+  > **修改用户密码** 
+   - 管理员进入安全中心->用户管理页面,点击"编辑"按钮,编辑用户信息时,输入新密码修改用户密码。
+   - 普通用户登录后,点击用户名下拉框中的用户信息,进入修改密码页面,输入密码并确认密码后点击"编辑"按钮,则修改密码成功。
+   
+
+#### 5.4 创建告警组
+  * 告警组是在启动时设置的参数,在流程结束以后会将流程的状态和其他信息以邮件形式发送给告警组。
+  - 管理员进入安全中心->告警组管理页面,点击“创建告警组”按钮,创建告警组。
+
+  <p align="center">
+    <img src="/img/mail_edit.png" width="80%" />
+  </p>
+
+
+#### 5.5 令牌管理
+  > 由于后端接口有登录检查,令牌管理提供了一种通过调用接口对系统进行各种操作的方式。
+  - 管理员进入安全中心->令牌管理页面,点击“创建令牌”按钮,选择失效时间与用户,点击"生成令牌"按钮,点击"提交"按钮,则选择用户的token创建成功。
+
+  <p align="center">
+      <img src="/img/creat_token.png" width="80%" />
+   </p>
+  
+  - 普通用户登录后,点击用户名下拉框中的用户信息,进入令牌管理页面,选择失效时间,点击"生成令牌"按钮,点击"提交"按钮,则该用户创建token成功。
+    
+  - 调用示例:
+  
+```java
+    /**
+     * test token
+     */
+    public  void doPOSTParam()throws Exception{
+        // create HttpClient
+        CloseableHttpClient httpclient = HttpClients.createDefault();
+
+        // create http post request
+        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/escheduler/projects/create");
+        httpPost.setHeader("token", "123");
+        // set parameters
+        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
+        parameters.add(new BasicNameValuePair("projectName", "qzw"));
+        parameters.add(new BasicNameValuePair("desc", "qzw"));
+        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
+        httpPost.setEntity(formEntity);
+        CloseableHttpResponse response = null;
+        try {
+            // execute
+            response = httpclient.execute(httpPost);
+            // response status code 200
+            if (response.getStatusLine().getStatusCode() == 200) {
+                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
+                System.out.println(content);
+            }
+        } finally {
+            if (response != null) {
+                response.close();
+            }
+            httpclient.close();
+        }
+    }
+```
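+
+除了上面的 Java 示例,也可以直接用 curl 调用(示意,token 值、接口地址与参数请按实际环境替换):
+
+```shell
+curl -X POST "http://127.0.0.1:12345/escheduler/projects/create" \
+     -H "token: 123" \
+     -d "projectName=qzw&desc=qzw"
+```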
+
+#### 5.6 授予权限
+
+  - 授予权限包括项目权限、资源权限、数据源权限、UDF函数权限。
+  - 管理员可以对普通用户授权其未创建的项目、资源、数据源和UDF函数。因为项目、资源、数据源和UDF函数的授权方式都是一样的,所以以项目授权为例介绍。
+  - 注意:对于用户自己创建的项目,该用户拥有所有的权限,因此不会显示在项目列表和已选项目列表中。
+ 
+  - 管理员进入安全中心->用户管理页面,点击需授权用户的“授权”按钮,如下图所示:
+  <p align="center">
+   <img src="/img/auth_user.png" width="80%" />
+ </p>
+
+  - 选择项目,进行项目授权。
+
+<p align="center">
+   <img src="/img/auth_project.png" width="80%" />
+ </p>
+  
+  - 资源、数据源、UDF函数授权同项目授权。
+#### 5.7 Worker分组
+每个worker节点都会归属于自己的Worker分组,默认分组为default.
+
+在任务执行时,可以将任务分配给指定worker分组,最终由该组中的worker节点执行该任务.
+
+> 新增/更新 worker分组
+
+- 打开要设置分组的worker节点上的"conf/worker.properties"配置文件. 修改worker.groups参数. 
+- worker.groups参数后面对应的为该worker节点对应的分组名称,默认为default.
+- 如果该worker节点对应多个分组,则以逗号隔开.
+```
+示例: 
+worker.groups=default,test
+```
+
+### 6. 监控中心
+
+#### 6.1 服务管理
+  - 服务管理主要是对系统中的各个服务的健康状况和基本信息的监控和显示
+
+#### 6.1.1 master监控
+  - 主要是master的相关信息。
+<p align="center">
+   <img src="/img/master-jk.png" width="80%" />
+ </p>
+
+#### 6.1.2 worker监控
+  - 主要是worker的相关信息。
+
+<p align="center">
+   <img src="/img/worker-jk.png" width="80%" />
+ </p>
+
+#### 6.1.3 Zookeeper监控
+  - 主要是zookeeper中各个worker和master的相关配置信息。
+
+<p align="center">
+   <img src="/img/zk-jk.png" width="80%" />
+ </p>
+
+#### 6.1.4 DB监控
+  - 主要是DB的健康状况
+
+<p align="center">
+   <img src="/img/mysql-jk.png" width="80%" />
+ </p>
+ 
+#### 6.2 统计管理
+<p align="center">
+   <img src="/img/Statistics.png" width="80%" />
+ </p>
+ 
+  - 待执行命令数:统计t_ds_command表的数据
+  - 执行失败的命令数:统计t_ds_error_command表的数据
+  - 待运行任务数:统计Zookeeper中task_queue的数据
+  - 待杀死任务数:统计Zookeeper中task_kill的数据
+ 
+### 7. <span id=TaskParamers>任务节点类型和参数设置</span>
+
+#### 7.1 Shell节点
+  > shell节点,在worker执行的时候,会生成一个临时shell脚本,使用租户同名的linux用户执行这个脚本。
+  - 点击项目管理-项目名称-工作流定义,点击"创建工作流"按钮,进入DAG编辑页面。
+  - 工具栏中拖动<img src="/img/shell.png" width="35"/>到画板中,如下图所示:
+
+    <p align="center">
+        <img src="/img/shell_dag.png" width="80%" />
+    </p> 
+
+- 节点名称:一个工作流定义中的节点名称是唯一的。
+- 运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。
+- 描述信息:描述该节点的功能。
+- 任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。
+- Worker分组:任务分配给worker组的机器执行,选择Default,会随机选择一台worker执行。
+- 失败重试次数:任务失败重新提交的次数,支持下拉和手填。
+- 失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。
+- 超时告警:勾选超时告警、超时失败,当任务超过"超时时长"后,会发送告警邮件并且任务执行失败.
+- 脚本:用户开发的SHELL程序。
+- 资源:是指脚本中需要调用的资源文件列表,资源中心-文件管理上传或创建的文件。
+- 自定义参数:是SHELL局部的用户自定义参数,会替换脚本中以${变量}的内容。
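+
+下面是一个最简单的脚本示例(示意,假设在"自定义参数"中添加了名为 bizdate 的参数,并在"资源"中选择了上文提到的 test.sh):
+
+```shell
+echo "本次任务的业务日期为: ${bizdate}"
+sh test.sh
+```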
+
+#### 7.2 子流程节点
+  - 子流程节点,就是把外部的某个工作流定义当做一个任务节点去执行。
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png)任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/subprocess_edit.png" width="80%" />
+ </p>
+
+- 节点名称:一个工作流定义中的节点名称是唯一的
+- 运行标志:标识这个节点是否能正常调度
+- 描述信息:描述该节点的功能
+- 超时告警:勾选超时告警、超时失败,当任务超过"超时时长"后,会发送告警邮件并且任务执行失败.
+- 子节点:是选择子流程的工作流定义,右上角进入该子节点可以跳转到所选子流程的工作流定义
+
+#### 7.3 依赖(DEPENDENT)节点
+  - 依赖节点,就是**依赖检查节点**。比如A流程依赖昨天的B流程执行成功,依赖节点会去检查B流程在昨天是否有执行成功的实例。
+
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png)任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/dependent_edit.png" width="80%" />
+ </p>
+
+  > 依赖节点提供了逻辑判断功能,比如检查昨天的B流程是否成功,或者C流程是否执行成功。
+
+  <p align="center">
+   <img src="/img/depend-node.png" width="80%" />
+ </p>
+
+  > 例如,A流程为周报任务,B、C流程为天任务,A任务需要B、C任务在上周的每一天都执行成功,如图示:
+
+ <p align="center">
+   <img src="/img/depend-node2.png" width="80%" />
+ </p>
+
+  > 假如,周报A同时还需要自身在上周二执行成功:
+
+ <p align="center">
+   <img src="/img/depend-node3.png" width="80%" />
+ </p>
+
+#### 7.4 存储过程节点
+  - 根据选择的数据源,执行存储过程。
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png)任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/procedure_edit.png" width="80%" />
+ </p>
+
+- 数据源:存储过程的数据源类型支持MySQL和POSTGRESQL两种,选择对应的数据源
+- 方法:是存储过程的方法名称
+- 自定义参数:存储过程的自定义参数类型支持IN、OUT两种,数据类型支持VARCHAR、INTEGER、LONG、FLOAT、DOUBLE、DATE、TIME、TIMESTAMP、BOOLEAN九种数据类型
+
+#### 7.5 SQL节点
+  - 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png)任务节点到画板中
+  - 非查询SQL功能:编辑非查询SQL任务信息,sql类型选择非查询,如下图所示:
+  <p align="center">
+   <img src="/img/sql-node.png" width="80%" />
+ </p>
+
+  - 查询SQL功能:编辑查询SQL任务信息,sql类型选择查询,选择表格或附件形式发送邮件到指定的收件人,如下图所示。
+
+<p align="center">
+   <img src="/img/sql-node2.png" width="80%" />
+ </p>
+
+- 数据源:选择对应的数据源
+- sql类型:支持查询和非查询两种,查询是select类型的查询,是有结果集返回的,可以指定邮件通知为表格、附件或表格附件三种模板。非查询是没有结果集返回的,是针对update、delete、insert三种类型的操作。
+- sql参数:输入参数格式为key1=value1;key2=value2…
+- sql语句:SQL语句
+- UDF函数:对于HIVE类型的数据源,可以引用资源中心中创建的UDF函数,其他类型的数据源暂不支持UDF函数。
+- 自定义参数:自定义参数类型和数据类型与存储过程任务类型一样。区别在于SQL任务类型的自定义参数会替换sql语句中的${变量},而存储过程是按自定义参数的顺序给方法设置值。
+- 前置sql:前置sql在sql语句之前执行。
+- 后置sql:后置sql在sql语句之后执行。
+
+
+#### 7.6 SPARK节点
+  - 通过SPARK节点,可以直接执行SPARK程序,对于spark节点,worker会使用`spark-submit`方式提交任务
+
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png)任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/spark_edit.png" width="80%" />
+ </p>
+
+- 程序类型:支持JAVA、Scala和Python三种语言
+- 主函数的class:是Spark程序的入口Main Class的全路径
+- 主jar包:是Spark的jar包
+- 部署方式:支持yarn-cluster、yarn-client和local三种模式
+- Driver内核数:可以设置Driver内核数及内存数
+- Executor数量:可以设置Executor数量、Executor内存数和Executor内核数
+- 命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。
+- 其他参数:支持 --jars、--files、--archives、--conf格式
+- 资源:如果其他参数中引用了资源文件,需要在资源中选择指定
+- 自定义参数:是Spark局部的用户自定义参数,会替换脚本中以${变量}的内容
+
+ 注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样
+
+#### 7.7 MapReduce(MR)节点
+  - 使用MR节点,可以直接执行MR程序。对于mr节点,worker会使用`hadoop jar`方式提交任务
+
+
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png)任务节点到画板中,如下图所示:
+
+ 1. JAVA程序
+
+ <p align="center">
+   <img src="/img/mr_java.png" width="80%" />
+ </p>
+ 
+- 主函数的class:是MR程序的入口Main Class的全路径
+- 程序类型:选择JAVA语言 
+- 主jar包:是MR的jar包
+- 命令行参数:是设置MR程序的输入参数,支持自定义参数变量的替换
+- 其他参数:支持 -D、-files、-libjars、-archives格式
+- 资源: 如果其他参数中引用了资源文件,需要在资源中选择指定
+- 自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容
+
+2. Python程序
+
+<p align="center">
+   <img src="/img/mr_edit.png" width="80%" />
+ </p>
+
+- 程序类型:选择Python语言 
+- 主jar包:是运行MR的Python jar包
+- 其他参数:支持 -D、-mapper、-reducer、-input、-output格式,这里可以设置用户自定义参数的输入,比如:
+- -mapper  "mapper.py 1"  -file mapper.py   -reducer reducer.py  -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}
+- 其中 -mapper 后的 mapper.py 1是两个参数,第一个参数是mapper.py,第二个参数是1
+- 资源: 如果其他参数中引用了资源文件,需要在资源中选择指定
+- 自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容
+
+#### 7.8 Python节点
+  - 使用python节点,可以直接执行python脚本,对于python节点,worker会使用`python **`方式提交任务。
+
+
+> 拖动工具栏中的![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png)任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/python_edit.png" width="80%" />
+ </p>
+
+- 脚本:用户开发的Python程序
+- 资源:是指脚本中需要调用的资源文件列表
+- 自定义参数:是Python局部的用户自定义参数,会替换脚本中以${变量}的内容
+- 注意:若引入资源目录树下的python文件,需添加 `__init__.py` 文件
+
+#### 7.9 Flink节点
+  - 拖动工具栏中的<img src="/img/flink.png" width="35"/>任务节点到画板中,如下图所示:
+
+<p align="center">
+  <img src="/img/flink_edit.png" width="80%" />
+</p>
+
+
+- 程序类型:支持JAVA、Scala和Python三种语言
+- 主函数的class:是Flink程序的入口Main Class的全路径
+- 主jar包:是Flink的jar包
+- 部署方式:支持cluster、local两种模式
+- slot数量:可以设置slot数
+- taskManager数量:可以设置taskManager数
+- jobManager内存数:可以设置jobManager内存数
+- taskManager内存数:可以设置taskManager内存数
+- 命令行参数:是设置Flink程序的输入参数,支持自定义参数变量的替换。
+- 其他参数:支持 --jars、--files、--archives、--conf格式
+- 资源:如果其他参数中引用了资源文件,需要在资源中选择指定
+- 自定义参数:是Flink局部的用户自定义参数,会替换脚本中以${变量}的内容
+
+ 注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Flink则没有主函数的class,其他都是一样
+
+#### 7.10 http节点  
+
+  - 拖动工具栏中的<img src="/img/http.png" width="35"/>任务节点到画板中,如下图所示:
+
+<p align="center">
+   <img src="/img/http_edit.png" width="80%" />
+ </p>
+
+- 节点名称:一个工作流定义中的节点名称是唯一的。
+- 运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。
+- 描述信息:描述该节点的功能。
+- 任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。
+- Worker分组:任务分配给worker组的机器执行,选择Default,会随机选择一台worker执行。
+- 失败重试次数:任务失败重新提交的次数,支持下拉和手填。
+- 失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。
+- 超时告警:勾选超时告警、超时失败,当任务超过"超时时长"后,会发送告警邮件并且任务执行失败.
+- 请求地址:http请求URL。
+- 请求类型:支持GET、POST、HEAD、PUT、DELETE。
+- 请求参数:支持Parameter、Body、Headers。
+- 校验条件:支持默认响应码、自定义响应码、内容包含、内容不包含。
+- 校验内容:当校验条件选择自定义响应码、内容包含、内容不包含时,需填写校验内容。
+- 自定义参数:是http局部的用户自定义参数,会替换脚本中以${变量}的内容。
+
+#### 7.11 DATAX节点
+
+  - 拖动工具栏中的<img src="/img/datax.png" width="35"/>任务节点到画板中
+
+  <p align="center">
+   <img src="/img/datax_edit.png" width="80%" />
+  </p>
+
+- 自定义模板:打开自定义模板开关时,可以自定义datax节点的json配置文件内容(适用于控件配置不满足需求时)
+- 数据源:选择抽取数据的数据源
+- sql语句:从源库抽取数据的sql语句,节点执行时自动解析sql查询列名,映射为目标表同步列名,源表和目标表列名不一致时,可以通过列别名(as)转换
+- 目标库:选择数据同步的目标库
+- 目标表:数据同步的目标表名
+- 前置sql:前置sql在sql语句之前执行(目标库执行)。
+- 后置sql:后置sql在sql语句之后执行(目标库执行)。
+- json:datax同步的json配置文件
+- 自定义参数:自定义参数类型和数据类型与存储过程任务类型一样。区别在于SQL任务类型的自定义参数会替换sql语句中的${变量},而存储过程是按自定义参数的顺序给方法设置值。
+
+#### 8. 参数
+#### 8.1 系统参数
+
+<table>
+    <tr><th>变量</th><th>含义</th></tr>
+    <tr>
+        <td>${system.biz.date}</td>
+        <td>日常调度实例定时的定时时间前一天,格式为 yyyyMMdd,补数据时,该日期 +1</td>
+    </tr>
+    <tr>
+        <td>${system.biz.curdate}</td>
+        <td>日常调度实例定时的定时时间,格式为 yyyyMMdd,补数据时,该日期 +1</td>
+    </tr>
+    <tr>
+        <td>${system.datetime}</td>
+        <td>日常调度实例定时的定时时间,格式为 yyyyMMddHHmmss,补数据时,该日期 +1</td>
+    </tr>
+</table>
+
+
+#### 8.2 时间自定义参数
+
+  - 支持代码中自定义变量名,声明方式:${变量名}。可以是引用 "系统参数" 或指定 "常量"。
+
+  - 我们定义这种基准变量为 $[...] 格式的,$[yyyyMMddHHmmss] 是可以任意分解组合的,比如:$[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd] 等
+
+  - 也可以使用以下格式:
+  
+
+        * 后 N 年:$[add_months(yyyyMMdd,12*N)]
+        * 前 N 年:$[add_months(yyyyMMdd,-12*N)]
+        * 后 N 月:$[add_months(yyyyMMdd,N)]
+        * 前 N 月:$[add_months(yyyyMMdd,-N)]
+        * 后 N 周:$[yyyyMMdd+7*N]
+        * 前 N 周:$[yyyyMMdd-7*N]
+        * 后 N 天:$[yyyyMMdd+N]
+        * 前 N 天:$[yyyyMMdd-N]
+        * 后 N 小时:$[HHmmss+N/24]
+        * 前 N 小时:$[HHmmss-N/24]
+        * 后 N 分钟:$[HHmmss+N/24/60]
+        * 前 N 分钟:$[HHmmss-N/24/60]
+
+#### 8.3 <span id=UserDefinedParameters>用户自定义参数</span>
+
+  - 用户自定义参数分为全局参数和局部参数。全局参数是保存工作流定义和工作流实例的时候传递的全局参数,全局参数可以在整个流程中的任何一个任务节点的局部参数引用。
+    例如:
+
+<p align="center">
+   <img src="/img/local_parameter.png" width="80%" />
+ </p>
+
+  - global_bizdate为全局参数,引用的是系统参数。
+
+<p align="center">
+   <img src="/img/global_parameter.png" width="80%" />
+ </p>
+
+ - 任务中local_param_bizdate通过\${global_bizdate}来引用全局参数,对于脚本可以通过\${local_param_bizdate}来引用全局变量global_bizdate的值,或通过JDBC直接将local_param_bizdate的值set进去
diff --git a/docs/zh-cn/1.3.5/user_doc/task-structure.md b/docs/zh-cn/1.3.5/user_doc/task-structure.md
new file mode 100644
index 0000000..f369116
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/task-structure.md
@@ -0,0 +1,1134 @@
+
+# 任务总体存储结构
+在dolphinscheduler中创建的所有任务都保存在t_ds_process_definition 表中.
+
+该数据库表结构如下表所示:
+
+
+序号 | 字段  | 类型  |  描述
+-------- | ---------| -------- | ---------
+1|id|int(11)|主键
+2|name|varchar(255)|流程定义名称
+3|version|int(11)|流程定义版本
+4|release_state|tinyint(4)|流程定义的发布状态:0 未上线 ,  1已上线
+5|project_id|int(11)|项目id
+6|user_id|int(11)|流程定义所属用户id
+7|process_definition_json|longtext|流程定义JSON
+8|description|text|流程定义描述
+9|global_params|text|全局参数
+10|flag|tinyint(4)|流程是否可用:0 不可用,1 可用
+11|locations|text|节点坐标信息
+12|connects|text|节点连线信息
+13|receivers|text|收件人
+14|receivers_cc|text|抄送人
+15|create_time|datetime|创建时间
+16|timeout|int(11) |超时时间
+17|tenant_id|int(11) |租户id
+18|update_time|datetime|更新时间
+19|modify_by|varchar(36)|修改用户
+20|resource_ids|varchar(255)|资源ids
+
+其中process_definition_json 字段为核心字段, 定义了 DAG 图中的任务信息.该数据以JSON 的方式进行存储.
+
+公共的数据结构如下表.
+序号 | 字段  | 类型  |  描述
+-------- | ---------| -------- | ---------
+1|globalParams|Array|全局参数
+2|tasks|Array|流程中的任务集合  [ 各个类型的结构请参考如下章节]
+3|tenantId|int|租户id
+4|timeout|int|超时时间
+
+数据示例:
+```json
+{
+    "globalParams":[
+        {
+            "prop":"golbal_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
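+
+如需直接查看某个流程定义实际存储的 JSON,可在元数据库中查询(示意,假设使用 MySQL,id=1 仅为举例):
+
+```shell
+mysql -u{user} -p -D dolphinscheduler -e \
+  "SELECT process_definition_json FROM t_ds_process_definition WHERE id = 1\G"
+```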
+
+# 各任务类型存储结构详解
+
+## Shell节点
+**节点数据结构如下:**
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SHELL
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |rawScript |String| Shell脚本 |
+6| | localParams| Array|自定义参数||
+7| | resourceList| Array|资源文件||
+8|description | |String|描述 | |
+9|runFlag | |String |运行标识| |
+10|conditionResult | |Object|条件分支 | |
+11| | successNode| Array|成功跳转节点| |
+12| | failedNode|Array|失败跳转节点 | 
+13| dependence| |Object |任务依赖 |与params互斥
+14|maxRetryTimes | |String|最大重试次数 | |
+15|retryInterval | |String |重试间隔| |
+16|timeout | |Object|超时控制 | |
+17| taskInstancePriority| |String|任务优先级 | |
+18|workerGroup | |String |Worker 分组| |
+19|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
+
+
+## SQL节点
+通过 SQL对指定的数据源进行数据查询、更新操作.
+
+**节点数据结构如下:**
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SQL
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |type |String | 数据库类型
+6| |datasource |Int | 数据源id
+7| |sql |String | 查询SQL语句
+8| |udfs | String| udf函数|UDF函数id,以逗号分隔.
+9| |sqlType | String| SQL节点类型 |0 查询  , 1 非查询
+10| |title |String | 邮件标题
+11| |receivers |String | 收件人
+12| |receiversCc |String | 抄送人
+13| |showType | String| 邮件显示类型|TABLE 表格  ,  ATTACHMENT附件
+14| |connParams | String| 连接参数
+15| |preStatements | Array| 前置SQL
+16| | postStatements| Array|后置SQL||
+17| | localParams| Array|自定义参数||
+18|description | |String|描述 | |
+19|runFlag | |String |运行标识| |
+20|conditionResult | |Object|条件分支 | |
+21| | successNode| Array|成功跳转节点| |
+22| | failedNode|Array|失败跳转节点 | 
+23| dependence| |Object |任务依赖 |与params互斥
+24|maxRetryTimes | |String|最大重试次数 | |
+25|retryInterval | |String |重试间隔| |
+26|timeout | |Object|超时控制 | |
+27| taskInstancePriority| |String|任务优先级 | |
+28|workerGroup | |String |Worker 分组| |
+29|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE[存储过程]节点
+**节点数据结构如下:**
+**节点数据样例:**
+
+## SPARK节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SPARK
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |deployMode |String | 部署模式  |local,client,cluster
+10| |driverCores | String| driver核数
+11| |driverMemory | String| driver 内存数
+12| |numExecutors |String | executor数量
+13| |executorMemory |String | executor内存
+14| |executorCores |String | executor核数
+15| |programType | String| 程序类型|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark 版本| SPARK1 , SPARK2
+17| | localParams| Array|自定义参数
+18| | resourceList| Array|资源文件
+19|description | |String|描述 | |
+20|runFlag | |String |运行标识| |
+21|conditionResult | |Object|条件分支 | |
+22| | successNode| Array|成功跳转节点| |
+23| | failedNode|Array|失败跳转节点 | 
+24| dependence| |Object |任务依赖 |与params互斥
+25|maxRetryTimes | |String|最大重试次数 | |
+26|retryInterval | |String |重试间隔| |
+27|timeout | |Object|超时控制 | |
+28| taskInstancePriority| |String|任务优先级 | |
+29|workerGroup | |String |Worker 分组| |
+30|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## MapReduce(MR)节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |MR
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |programType | String| 程序类型|JAVA,PYTHON
+10| | localParams| Array|自定义参数
+11| | resourceList| Array|资源文件
+12|description | |String|描述 | |
+13|runFlag | |String |运行标识| |
+14|conditionResult | |Object|条件分支 | |
+15| | successNode| Array|成功跳转节点| |
+16| | failedNode|Array|失败跳转节点 | 
+17| dependence| |Object |任务依赖 |与params互斥
+18|maxRetryTimes | |String|最大重试次数 | |
+19|retryInterval | |String |重试间隔| |
+20|timeout | |Object|超时控制 | |
+21| taskInstancePriority| |String|任务优先级 | |
+22|workerGroup | |String |Worker 分组| |
+23|preTasks | |Array|前置任务 | |
+
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python节点
+**节点数据结构如下:**
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |PYTHON
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |rawScript |String| Python脚本 |
+6| | localParams| Array|自定义参数||
+7| | resourceList| Array|资源文件||
+8|description | |String|描述 | |
+9|runFlag | |String |运行标识| |
+10|conditionResult | |Object|条件分支 | |
+11| | successNode| Array|成功跳转节点| |
+12| | failedNode|Array|失败跳转节点 | 
+13| dependence| |Object |任务依赖 |与params互斥
+14|maxRetryTimes | |String|最大重试次数 | |
+15|retryInterval | |String |重试间隔| |
+16|timeout | |Object|超时控制 | |
+17| taskInstancePriority| |String|任务优先级 | |
+18|workerGroup | |String |Worker 分组| |
+19|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+
+## Flink节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述 |描述
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |FLINK
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |deployMode |String | 部署模式  |local,client,cluster
+10| |slot | String| slot数量
+11| |taskManager |String | taskManager数量
+12| |taskManagerMemory |String | taskManager内存数
+13| |jobManagerMemory |String | jobManager内存数
+14| |programType | String| 程序类型|JAVA,SCALA,PYTHON
+15| | localParams| Array|自定义参数
+16| | resourceList| Array|资源文件
+17|description | |String|描述 | |
+18|runFlag | |String |运行标识| |
+19|conditionResult | |Object|条件分支 | |
+20| | successNode| Array|成功跳转节点| |
+21| | failedNode|Array|失败跳转节点 | 
+22| dependence| |Object |任务依赖 |与params互斥
+23|maxRetryTimes | |String|最大重试次数 | |
+24|retryInterval | |String |重试间隔| |
+25|timeout | |Object|超时控制 | |
+26| taskInstancePriority| |String|任务优先级 | |
+27|workerGroup | |String |Worker 分组| |
+28|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```json
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP Node
+**Node data structure:**
+
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |HTTP
+3| name| |String|task name |
+4| params| |Object| custom parameters |JSON format
+5| |url |String | request URL
+6| |httpMethod | String| request method|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|request parameters
+8| |httpCheckCondition | String| check condition|default: response code 200
+9| |condition |String | check content
+10| | localParams| Array|custom parameters
+11|description | |String|description | |
+12|runFlag | |String |run flag| |
+13|conditionResult | |Object|condition branch | |
+14| | successNode| Array|jump node on success| |
+15| | failedNode|Array|jump node on failure | |
+16| dependence| |Object |task dependency |mutually exclusive with params
+17|maxRetryTimes | |String|max retry times | |
+18|retryInterval | |String |retry interval| |
+19|timeout | |Object|timeout control | |
+20| taskInstancePriority| |String|task priority | |
+21|workerGroup | |String |Worker group| |
+22|preTasks | |Array|pre-tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## DataX Node
+
+**Node data structure:**
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |DATAX
+3| name| |String|task name |
+4| params| |Object| custom parameters |JSON format
+5| |customConfig |Int | custom config switch| 0: use the form fields below, 1: use a custom DataX JSON
+6| |dsType |String | source database type
+7| |dataSource |Int | source database ID
+8| |dtType | String| target database type
+9| |dataTarget | Int| target database ID
+10| |sql |String | SQL statement
+11| |targetTable |String | target table
+12| |jobSpeedByte |Int | speed limit (bytes)
+13| |jobSpeedRecord | Int| speed limit (record count)
+14| |preStatements | Array| pre-SQL
+15| | postStatements| Array|post-SQL
+16| | json| String|custom configuration|effective when customConfig=1
+17| | localParams| Array|custom parameters|effective when customConfig=1
+18|description | |String|description | |
+19|runFlag | |String |run flag| |
+20|conditionResult | |Object|condition branch | |
+21| | successNode| Array|jump node on success| |
+22| | failedNode|Array|jump node on failure | |
+23| dependence| |Object |task dependency |mutually exclusive with params
+24|maxRetryTimes | |String|max retry times | |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout control | |
+27| taskInstancePriority| |String|task priority | |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|pre-tasks | |
+
+
+
+**Node data example:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
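+
+The example above uses `customConfig=0`, so the source, target, SQL and throttling settings all come from the form fields described in the table. When `customConfig=1`, only `json` and `localParams` take effect, and `json` carries a complete DataX job definition serialized as a string. The fragment below is only a hedged sketch of such a `params` block; the embedded job (DataX's standard streamreader/streamwriter example) is illustrative and not taken from DolphinScheduler itself.
+
+```json
+{
+    "customConfig":1,
+    "localParams":[
+
+    ],
+    "json":"{\"job\":{\"setting\":{\"speed\":{\"channel\":1}},\"content\":[{\"reader\":{\"name\":\"streamreader\",\"parameter\":{\"column\":[{\"type\":\"string\",\"value\":\"hello\"}],\"sliceRecordCount\":10}},\"writer\":{\"name\":\"streamwriter\",\"parameter\":{\"print\":true}}}]}}"
+}
+```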
+
+## Sqoop Node
+
+**Node data structure:**
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |SQOOP
+3| name| |String|task name |
+4| params| |Object| custom parameters |JSON format
+5| | concurrency| Int|concurrency
+6| | modelType|String |direction|import,export
+7| |sourceType|String |data source type |
+8| |sourceParams |String| data source parameters| JSON format
+9| | targetType|String |target data type
+10| |targetParams | String|target data parameters|JSON format
+11| |localParams |Array |custom parameters
+12|description | |String|description | |
+13|runFlag | |String |run flag| |
+14|conditionResult | |Object|condition branch | |
+15| | successNode| Array|jump node on success| |
+16| | failedNode|Array|jump node on failure | |
+17| dependence| |Object |task dependency |mutually exclusive with params
+18|maxRetryTimes | |String|max retry times | |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout control | |
+21| taskInstancePriority| |String|task priority | |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|pre-tasks | |
+
+
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
+                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
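+
+Note that `sourceParams` and `targetParams` are themselves JSON objects serialized into strings, which makes the sample above hard to read. Pretty-printed purely for readability, the two embedded objects in this example are equivalent to:
+
+```json
+{
+    "sourceParams":{
+        "srcType":"MYSQL",
+        "srcDatasource":1,
+        "srcTable":"",
+        "srcQueryType":"1",
+        "srcQuerySql":"select id, name from user",
+        "srcColumnType":"0",
+        "srcColumns":"",
+        "srcConditionList":[],
+        "mapColumnHive":[
+            {"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}
+        ],
+        "mapColumnJava":[
+            {"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}
+        ]
+    },
+    "targetParams":{
+        "targetPath":"/user/hive/warehouse/ods.db/user",
+        "deleteTargetDir":false,
+        "fileType":"--as-avrodatafile",
+        "compressionCodec":"snappy",
+        "fieldsTerminated":",",
+        "linesTerminated":"@"
+    }
+}
+```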
+
+## Conditions (Conditional Branch) Node
+
+**Node data structure:**
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |CONDITIONS
+3| name| |String|task name |
+4| params| |Object| custom parameters | null
+5|description | |String|description | |
+6|runFlag | |String |run flag| |
+7|conditionResult | |Object|condition branch | |
+8| | successNode| Array|jump node on success| |
+9| | failedNode|Array|jump node on failure | |
+10| dependence| |Object |task dependency |mutually exclusive with params
+11|maxRetryTimes | |String|max retry times | |
+12|retryInterval | |String |retry interval| |
+13|timeout | |Object|timeout control | |
+14| taskInstancePriority| |String|task priority | |
+15|workerGroup | |String |Worker group| |
+16|preTasks | |Array|pre-tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
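+
+In this example the flow continues to `test04` when the condition is met and to `test05` when it is not, and the node itself runs after `test01` and `test02`. The `dependTaskList` in the sample is empty; the fragment below is only a hedged sketch of how a populated `dependence` block for a conditions node might look once branch conditions on the upstream task states are configured. The `depTasks` and `status` field names are assumptions for illustration and are not confirmed by this document.
+
+```json
+"dependence":{
+    "relation":"AND",
+    "dependTaskList":[
+        {
+            "relation":"AND",
+            "dependItemList":[
+                {"depTasks":"test01","status":"SUCCESS"},
+                {"depTasks":"test02","status":"SUCCESS"}
+            ]
+        }
+    ]
+}
+```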
+
+
+## Sub-Process Node
+**Node data structure:**
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |SUB_PROCESS
+3| name| |String|task name |
+4| params| |Object| custom parameters |JSON format
+5| |processDefinitionId |Int| process definition id
+6|description | |String|description | |
+7|runFlag | |String |run flag| |
+8|conditionResult | |Object|condition branch | |
+9| | successNode| Array|jump node on success| |
+10| | failedNode|Array|jump node on failure | |
+11| dependence| |Object |task dependency |mutually exclusive with params
+12|maxRetryTimes | |String|max retry times | |
+13|retryInterval | |String |retry interval| |
+14|timeout | |Object|timeout control | |
+15| taskInstancePriority| |String|task priority | |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|pre-tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## Dependent (DEPENDENT) Node
+**Node data structure:**
+No.|Parameter||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task code|
+2|type ||String |task type |DEPENDENT
+3| name| |String|task name |
+4| params| |Object| custom parameters |JSON format (empty for this node type)
+5|description | |String|description | |
+6|runFlag | |String |run flag| |
+7|conditionResult | |Object|condition branch | |
+8| | successNode| Array|jump node on success| |
+9| | failedNode|Array|jump node on failure | |
+10| dependence| |Object |task dependency |mutually exclusive with params
+11| | relation|String |relation |AND,OR
+12| | dependTaskList|Array |dependent task list |
+13|maxRetryTimes | |String|max retry times | |
+14|retryInterval | |String |retry interval| |
+15|timeout | |Object|timeout control | |
+16| taskInstancePriority| |String|task priority | |
+17|workerGroup | |String |Worker group| |
+18|preTasks | |Array|pre-tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
diff --git a/docs/zh-cn/1.3.5/user_doc/upgrade.md b/docs/zh-cn/1.3.5/user_doc/upgrade.md
new file mode 100644
index 0000000..3422d8f
--- /dev/null
+++ b/docs/zh-cn/1.3.5/user_doc/upgrade.md
@@ -0,0 +1,82 @@
+
+# DolphinScheduler Upgrade Guide
+
+## 1. Back Up the Files and Database of the Previous Version
+
+## 2. Stop All DolphinScheduler Services
+
+ `sh ./script/stop-all.sh`
+
+## 3. Download the Installation Package of the New Version
+
+- [Download](/zh-cn/download/download.html) the binary installation package of the latest version
+- All of the following upgrade operations must be performed in the directory of the new version
+
+## 4. Database Upgrade
+- Modify the following properties in conf/datasource.properties
+
+- If you choose MySQL, comment out the PostgreSQL related configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.MySQL.com/archives/c-j/) package to the lib directory (mysql-connector-java-5.1.47.jar is used here), and then configure the database connection information correctly
+
+    ```properties
+      # postgresql
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     change to your database IP (localhost if the database is on this machine)
+      spring.datasource.username=xxx						change to the {user} value above
+      spring.datasource.password=xxx						change to the {password} value above
+    ```
+
+- Execute the database upgrade script
+
+`sh ./script/upgrade-dolphinscheduler.sh`
+
+## 5. Service Upgrade
+
+### 5.1 Modify the `conf/config/install_config.conf` configuration
+For standalone deployment, refer to the `6. Modify the running parameters` section in [Standalone Deployment](/zh-cn/docs/1.3.4/user_doc/standalone-deployment.html)
+For cluster deployment, refer to the `6. Modify the running parameters` section in [Cluster Deployment](/zh-cn/docs/1.3.4/user_doc/cluster-deployment.html)
+
+### Notes
+Worker group creation is designed differently in version 1.3.1 than in earlier versions:
+
+- before 1.3.1, worker groups were created through the UI
+- since 1.3.1, worker groups are specified by modifying the worker configuration
+
+### How to keep worker groups consistent with the previous version when upgrading from a pre-1.3.1 version to 1.3.2
+
+1. Query the backed-up database and check the records in the t_ds_worker_group table, paying attention to the id, name and ip_list fields
+
+| id | name | ip_list    |
+| :---         |     :---:      |          ---: |
+| 1   | service1     | 192.168.xx.10    |
+| 2   | service2     | 192.168.xx.11,192.168.xx.12      |
+
+2. Modify the workers parameter in conf/config/install_config.conf
+
+Assume the following mapping between the worker hostnames and IPs to be deployed:
+| Hostname | IP |
+| :---  | :---:  |
+| ds1   | 192.168.xx.10     |
+| ds2   | 192.168.xx.11     |
+| ds3   | 192.168.xx.12     |
+
+To keep the worker groups consistent with the previous version, change the workers parameter as follows:
+
+```shell
+# which machine each worker service is deployed on, and which worker group this worker belongs to
+workers="ds1:service1,ds2:service2,ds3:service2"
+```
+
+### Worker group enhancement in 1.3.2
+A worker in 1.3.1 cannot belong to more than one worker group; 1.3.2 supports this.
+Therefore workers="ds1:service1,ds1:service2" is not supported in 1.3.1,
+while in 1.3.2 you can set workers="ds1:service1,ds1:service2".
+  
+### 5.2 Execute the deployment script
+```shell
+sh install.sh
+```
+
+
diff --git a/download/en-us/download.md b/download/en-us/download.md
index e53babb..f2663de 100644
--- a/download/en-us/download.md
+++ b/download/en-us/download.md
@@ -8,6 +8,8 @@ Use the links below to download the Apache DolphinScheduler from one of our mirr
 ## DolphinScheduler
 | Date | Version| | Downloads |
 |:---:|:--:|:--:|:--:|
+| Feb. 14th, 2021 | 1.3.5 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip.sha512)|
+| | | Binary Distribution| [[tar]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz.sha512)|
 | Dec. 29th, 2020 | 1.3.4 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip.sha512)|
 | | | Binary Distribution| [[tar]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz.sha512)|
 | Nov. 9th, 2020 | 1.3.3 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip.sha512)|
diff --git a/download/zh-cn/download.md b/download/zh-cn/download.md
index 6b53638..e7f467c 100644
--- a/download/zh-cn/download.md
+++ b/download/zh-cn/download.md
@@ -7,6 +7,8 @@
 ## DolphinScheduler
 | 日期 | 版本| | 下载 |
 |:---:|:--:|:--:|:--:|
+| Feb. 14th, 2021 | 1.3.5 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-src.zip.sha512)|
+| | | Binary Distribution| [[tar]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.5/apache-dolphinscheduler-incubating-1.3.5-dolphinscheduler-bin.tar.gz.sha512)|
 | Dec. 29th, 2020 | 1.3.4 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-src.zip.sha512)|
 | | | Binary Distribution| [[tar]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.4/apache-dolphinscheduler-incubating-1.3.4-dolphinscheduler-bin.tar.gz.sha512)|
 | Nov. 9th, 2020 | 1.3.3 | Source code| [[src]](https://www.apache.org/dyn/closer.cgi/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip) [[asc]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip.asc) [[sha512]](https://downloads.apache.org/incubator/dolphinscheduler/1.3.3/apache-dolphinscheduler-incubating-1.3.3-src.zip.sha512)|
diff --git a/site_config/docs1-3-5.js b/site_config/docs1-3-5.js
new file mode 100644
index 0000000..d14a9e8
--- /dev/null
+++ b/site_config/docs1-3-5.js
@@ -0,0 +1,154 @@
+export default {
+  'en-us': {
+    sidemenu: [
+      {
+        title: 'Deployment Document',
+        children: [
+          {
+            title: 'Hardware Environment',
+            link: '/en-us/docs/1.3.5/user_doc/hardware-environment.html',
+          },
+          {
+            title: 'Standalone Deployment',
+            link: '/en-us/docs/1.3.5/user_doc/standalone-deployment.html',
+          },
+          {
+            title: 'Cluster Deployment',
+            link: '/en-us/docs/1.3.5/user_doc/cluster-deployment.html',
+          },
+          {
+            title: 'Docker Deployment',
+            link: '/en-us/docs/1.3.5/user_doc/docker-deployment.html',
+          },
+        ],
+      },
+      {
+        title: 'User Manual',
+        children: [
+          {
+            title: 'Quick Start',
+            link: '/en-us/docs/1.3.5/user_doc/quick-start.html',
+          },
+          {
+            title: 'User Manual',
+            link: '/en-us/docs/1.3.5/user_doc/system-manual.html',
+          },
+          {
+            title: 'Metadata',
+            link: '/en-us/docs/1.3.5/user_doc/metadata-1.3.html',
+          },
+          {
+            title: 'Configuration File',
+            link: '/en-us/docs/1.3.5/user_doc/configuration-file.html',
+          },
+          {
+            title: 'Task Structure',
+            link: '/en-us/docs/1.3.5/user_doc/task-structure.html',
+          },
+        ],
+      },
+      {
+        title: 'Upgrade',
+        children: [
+          {
+            title: 'Upgrade',
+            link: '/en-us/docs/1.3.5/user_doc/upgrade.html',
+          },
+        ],
+      },
+      {
+        title: 'FAQ',
+        children: [
+          {
+            title: 'FAQ',
+            link: '/en-us/docs/release/faq.html',
+          },
+        ],
+      },
+    ],
+    barText: 'Documentation',
+  },
+  'zh-cn': {
+    sidemenu: [
+      {
+        title: '部署文档',
+        children: [
+          {
+            title: '软硬件环境建议配置',
+            link: '/zh-cn/docs/1.3.5/user_doc/hardware-environment.html',
+          },
+          {
+            title: '单机部署(Standalone)',
+            link: '/zh-cn/docs/1.3.5/user_doc/standalone-deployment.html',
+          },
+          {
+            title: '集群部署(Cluster)',
+            link: '/zh-cn/docs/1.3.5/user_doc/cluster-deployment.html',
+          },
+          {
+            title: 'Docker部署(Docker)',
+            link: '/zh-cn/docs/1.3.5/user_doc/docker-deployment.html',
+          },
+        ],
+      },
+      {
+        title: '用户手册',
+        children: [
+          {
+            title: '快速上手',
+            link: '/zh-cn/docs/1.3.5/user_doc/quick-start.html',
+          },
+          {
+            title: '用户手册',
+            link: '/zh-cn/docs/1.3.5/user_doc/system-manual.html',
+          },
+
+        ],
+      },
+      {
+        title: '架构设计',
+        children: [
+          {
+            title: '元数据文档',
+            link: '/zh-cn/docs/1.3.5/user_doc/metadata-1.3.html',
+          },
+          {
+            title: '架构设计',
+            link: '/zh-cn/docs/1.3.5/user_doc/architecture-design.html',
+          },
+          {
+            title: '配置文件',
+            link: '/zh-cn/docs/1.3.5/user_doc/configuration-file.html',
+          },
+          {
+            title: '任务结构',
+            link: '/zh-cn/docs/1.3.5/user_doc/task-structure.html',
+          },
+          {
+            title: '负载均衡',
+            link: '/zh-cn/docs/1.3.5/user_doc/load-balance.html',
+          },
+        ],
+      },
+      {
+        title: '版本升级',
+        children: [
+          {
+            title: '升级',
+            link: '/zh-cn/docs/1.3.5/user_doc/upgrade.html',
+          },
+        ],
+      },
+      {
+        title: 'FAQ',
+        children: [
+          {
+            title: 'FAQ',
+            link: '/zh-cn/docs/release/faq.html',
+          },
+        ],
+      },
+    ],
+    barText: '文档',
+  },
+};
diff --git a/site_config/home.jsx b/site_config/home.jsx
index f152d9d..96fd4c6 100644
--- a/site_config/home.jsx
+++ b/site_config/home.jsx
@@ -8,7 +8,7 @@ export default {
       buttons: [
         {
           text: '立即开始',
-          link: '/zh-cn/docs/1.3.4/user_doc/quick-start.html',
+          link: '/zh-cn/docs/1.3.5/user_doc/quick-start.html',
           type: 'primary',
         },
         {
@@ -61,7 +61,7 @@ export default {
       buttons: [
         {
           text: 'Quick Start',
-          link: '/en-us/docs/1.3.4/user_doc/quick-start.html',
+          link: '/en-us/docs/1.3.5/user_doc/quick-start.html',
           type: 'primary',
         },
         {
diff --git a/site_config/site.js b/site_config/site.js
index b70066e..d933ef8 100644
--- a/site_config/site.js
+++ b/site_config/site.js
@@ -15,40 +15,45 @@ export default {
       {
         key: 'docs',
         text: 'DOCS',
-        link: '/en-us/docs/1.3.4/user_doc/quick-start.html',
+        link: '/en-us/docs/1.3.5/user_doc/quick-start.html',
         children: [
           {
             key: 'docs1',
-            text: '1.3.4(Recommend)',
-            link: '/en-us/docs/1.3.4/user_doc/quick-start.html',
+            text: '1.3.5(Recommend)',
+            link: '/en-us/docs/1.3.5/user_doc/quick-start.html',
           },
           {
             key: 'docs2',
+            text: '1.3.4',
+            link: '/en-us/docs/1.3.4/user_doc/quick-start.html',
+          },
+          {
+            key: 'docs3',
             text: '1.3.3',
             link: '/en-us/docs/1.3.3/user_doc/quick-start.html',
           },
           {
-            key: 'docs3',
+            key: 'docs4',
             text: '1.3.2',
             link: '/en-us/docs/1.3.2/user_doc/quick-start.html',
           },
           {
-            key: 'docs4',
+            key: 'docs5',
             text: '1.3.1',
             link: '/en-us/docs/1.3.1/user_doc/quick-start.html',
           },
           {
-            key: 'docs5',
+            key: 'docs6',
             text: '1.2.1',
             link: '/en-us/docs/1.2.1/user_doc/quick-start.html',
           },
           {
-            key: 'docs6',
+            key: 'docs7',
             text: '1.2.0',
             link: '/en-us/docs/1.2.0/user_doc/quick-start.html',
           },
           {
-            key: 'docs7',
+            key: 'docs8',
             text: '1.1.0(Not Apache Release)',
             link: 'https://analysys.github.io/easyscheduler_docs_cn/',
           },
@@ -176,40 +181,45 @@ export default {
       {
         key: 'docs',
         text: '文档',
-        link: '/zh-cn/docs/1.3.4/user_doc/quick-start.html',
+        link: '/zh-cn/docs/1.3.5/user_doc/quick-start.html',
         children: [
           {
             key: 'docs1',
-            text: '1.3.4(推荐)',
-            link: '/zh-cn/docs/1.3.4/user_doc/quick-start.html',
+            text: '1.3.5(推荐)',
+            link: '/zh-cn/docs/1.3.5/user_doc/quick-start.html',
           },
           {
             key: 'docs2',
+            text: '1.3.4',
+            link: '/zh-cn/docs/1.3.4/user_doc/quick-start.html',
+          },
+          {
+            key: 'docs3',
             text: '1.3.3',
             link: '/zh-cn/docs/1.3.3/user_doc/quick-start.html',
           },
           {
-            key: 'docs3',
+            key: 'docs4',
             text: '1.3.2',
             link: '/zh-cn/docs/1.3.2/user_doc/quick-start.html',
           },
           {
-            key: 'docs4',
+            key: 'docs5',
             text: '1.3.1',
             link: '/zh-cn/docs/1.3.1/user_doc/quick-start.html',
           },
           {
-            key: 'docs5',
+            key: 'docs6',
             text: '1.2.1',
             link: '/zh-cn/docs/1.2.1/user_doc/quick-start.html',
           },
           {
-            key: 'docs6',
+            key: 'docs7',
             text: '1.2.0',
             link: '/zh-cn/docs/1.2.0/user_doc/quick-start.html',
           },
           {
-            key: 'docs7',
+            key: 'docs8',
             text: '1.1.0(Not Apache Release)',
             link: 'https://analysys.github.io/easyscheduler_docs_cn/',
           },