Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2020/10/15 06:48:56 UTC

[incubator-dolphinscheduler-website] branch asf-site updated: Automated deployment: Thu Oct 15 06:48:43 UTC 2020 f8cbcf7adbab60d6fc69fd3a0189fbc76728084f

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 5116999  Automated deployment: Thu Oct 15 06:48:43 UTC 2020 f8cbcf7adbab60d6fc69fd3a0189fbc76728084f
5116999 is described below

commit 511699983e428ffa5c5fbdab5191a0a3b43d3e35
Author: dailidong <da...@users.noreply.github.com>
AuthorDate: Thu Oct 15 06:48:43 2020 +0000

    Automated deployment: Thu Oct 15 06:48:43 UTC 2020 f8cbcf7adbab60d6fc69fd3a0189fbc76728084f
---
 zh-cn/docs/1.3.1/user_doc/system-manual.html | 23 +++++++++--------------
 zh-cn/docs/1.3.1/user_doc/system-manual.json |  2 +-
 zh-cn/docs/1.3.2/user_doc/system-manual.html | 23 +++++++++--------------
 zh-cn/docs/1.3.2/user_doc/system-manual.json |  2 +-
 4 files changed, 20 insertions(+), 30 deletions(-)

diff --git a/zh-cn/docs/1.3.1/user_doc/system-manual.html b/zh-cn/docs/1.3.1/user_doc/system-manual.html
index 4d1756a..40e6a9e 100644
--- a/zh-cn/docs/1.3.1/user_doc/system-manual.html
+++ b/zh-cn/docs/1.3.1/user_doc/system-manual.html
@@ -255,13 +255,13 @@
 <ul>
 <li>Upload resource files and UDF functions. All uploaded files and resources are stored on HDFS, so the following configuration items are required:</li>
 </ul>
-<pre><code>conf/common/common.properties  
+<pre><code>conf/common.properties  
     # Users who have permission to create directories under the HDFS root path
     hdfs.root.user=hdfs
-    # base data dir: resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. &quot;/escheduler&quot; is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
+    # base data dir: resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. &quot;/dolphinscheduler&quot; is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type : HDFS,S3,NONE
+    resource.storage.type=HDFS
     # whether kerberos starts
     hadoop.security.authentication.startup.state=false
     # java.security.krb5.conf path
@@ -269,11 +269,10 @@
     # loginUserFromKeytab user
     login.user.keytab.username=hdfs-mycluster@ESZ.COM
     # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-    
-conf/common/hadoop.properties      
-    # HA or single NameNode. For NameNode HA, copy core-site.xml and hdfs-site.xml
-    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    # if resource.storage.type is HDFS and your Hadoop cluster's NameNode has HA enabled, put core-site.xml and hdfs-site.xml in the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the NameNode cluster name; if the NameNode is not HA, use a specific IP or host name instead.
+    # if S3, write the S3 address, for example: s3a://dolphinscheduler
+    # note: for S3, be sure to create the root directory /dolphinscheduler
     fs.defaultFS=hdfs://mycluster:8020    
    # ResourceManager HA: list the ResourceManager IPs here; leave empty for a single ResourceManager
     yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
@@ -281,10 +280,6 @@ conf/common/hadoop.properties
     yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
 
 </code></pre>
-<ul>
-<li>Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured; set the other to empty.</li>
-<li>Copy core-site.xml and hdfs-site.xml from the Hadoop cluster's conf directory into the dolphinscheduler project's conf directory, then restart the api-server service.</li>
-</ul>
 <h4>3.2 File Management</h4>
 <blockquote>
 <p>Manages all kinds of resource files: you can create basic txt/log/sh/conf/py/java files, upload jar packages and other file types, and edit, rename, download, or delete them.</p>
diff --git a/zh-cn/docs/1.3.1/user_doc/system-manual.json b/zh-cn/docs/1.3.1/user_doc/system-manual.json
index c63df01..c2f3639 100644
--- a/zh-cn/docs/1.3.1/user_doc/system-manual.json
+++ b/zh-cn/docs/1.3.1/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img src=\"/img/project.png\" width=\"80%\" />\n</p>\n</li>\n</ul>\n<h4>2.2 [...]
+  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img src=\"/img/project.png\" width=\"80%\" />\n</p>\n</li>\n</ul>\n<h4>2.2 [...]
   "link": "/zh-cn/docs/1.3.1/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
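The hunk above renames the resource-storage keys (res.upload.startup.type becomes resource.storage.type, data.store2hdfs.basepath becomes resource.upload.path) and folds the former conf/common/hadoop.properties settings into conf/common.properties. The sanity checks described in the comments can be sketched as follows. This is a minimal illustration, not DolphinScheduler's actual configuration loader; the function names and the validation rules chosen here are the author's assumptions.

```python
# Minimal sketch (NOT DolphinScheduler's real loader): parse a
# conf/common.properties fragment and sanity-check the renamed keys.

def parse_properties(text):
    """Parse 'key=value' lines, skipping blanks and '#' comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def check_resource_config(props):
    """Return a list of problems with the resource-storage settings."""
    problems = []
    storage = props.get("resource.storage.type", "NONE")
    if storage not in ("HDFS", "S3", "NONE"):
        problems.append("unknown resource.storage.type: " + storage)
    if storage == "HDFS" and not props.get("fs.defaultFS", "").startswith("hdfs://"):
        problems.append("HDFS storage expects fs.defaultFS like hdfs://mycluster:8020")
    if storage == "S3" and not props.get("fs.defaultFS", "").startswith("s3a://"):
        problems.append("S3 storage expects fs.defaultFS like s3a://dolphinscheduler")
    if storage != "NONE" and not props.get("resource.upload.path", "").startswith("/"):
        problems.append("resource.upload.path should be absolute, e.g. /dolphinscheduler")
    return problems

fragment = """
# resource storage type : HDFS,S3,NONE
resource.storage.type=HDFS
resource.upload.path=/dolphinscheduler
fs.defaultFS=hdfs://mycluster:8020
"""
print(check_resource_config(parse_properties(fragment)))  # → []
```

With resource.storage.type=S3 the same check would flag an fs.defaultFS that still points at hdfs://, mirroring the comment in the diff that S3 addresses are written as s3a://dolphinscheduler.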
diff --git a/zh-cn/docs/1.3.2/user_doc/system-manual.html b/zh-cn/docs/1.3.2/user_doc/system-manual.html
index c0137bc..91c10dc 100644
--- a/zh-cn/docs/1.3.2/user_doc/system-manual.html
+++ b/zh-cn/docs/1.3.2/user_doc/system-manual.html
@@ -255,13 +255,13 @@
 <ul>
 <li>Upload resource files and UDF functions. All uploaded files and resources are stored on HDFS, so the following configuration items are required:</li>
 </ul>
-<pre><code>conf/common/common.properties  
+<pre><code>conf/common.properties  
     # Users who have permission to create directories under the HDFS root path
     hdfs.root.user=hdfs
-    # base data dir: resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. &quot;/escheduler&quot; is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
+    # base data dir: resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. &quot;/dolphinscheduler&quot; is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type : HDFS,S3,NONE
+    resource.storage.type=HDFS
     # whether kerberos starts
     hadoop.security.authentication.startup.state=false
     # java.security.krb5.conf path
@@ -269,11 +269,10 @@
     # loginUserFromKeytab user
     login.user.keytab.username=hdfs-mycluster@ESZ.COM
     # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-    
-conf/common/hadoop.properties      
-    # HA or single NameNode. For NameNode HA, copy core-site.xml and hdfs-site.xml
-    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    # if resource.storage.type is HDFS and your Hadoop cluster's NameNode has HA enabled, put core-site.xml and hdfs-site.xml in the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the NameNode cluster name; if the NameNode is not HA, use a specific IP or host name instead.
+    # if resource.storage.type is S3, write the S3 address, for example: s3a://dolphinscheduler
+    # note: for S3, be sure to create the root directory /dolphinscheduler
     fs.defaultFS=hdfs://mycluster:8020    
    # ResourceManager HA: list the ResourceManager IPs here; leave empty for a single ResourceManager
     yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
@@ -281,10 +280,6 @@ conf/common/hadoop.properties
     yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
 
 </code></pre>
-<ul>
-<li>Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured; set the other to empty.</li>
-<li>Copy core-site.xml and hdfs-site.xml from the Hadoop cluster's conf directory into the dolphinscheduler project's conf directory, then restart the api-server service.</li>
-</ul>
 <h4>3.2 File Management</h4>
 <blockquote>
 <p>Manages all kinds of resource files: you can create basic txt/log/sh/conf/py/java files, upload jar packages and other file types, and edit, rename, download, or delete them.</p>
diff --git a/zh-cn/docs/1.3.2/user_doc/system-manual.json b/zh-cn/docs/1.3.2/user_doc/system-manual.json
index c0230ad..0e5c435 100644
--- a/zh-cn/docs/1.3.2/user_doc/system-manual.json
+++ b/zh-cn/docs/1.3.2/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img src=\"/img/project.png\" width=\"80%\" />\n</p>\n</li>\n</ul>\n<h4>2.2 [...]
+  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img src=\"/img/project.png\" width=\"80%\" />\n</p>\n</li>\n</ul>\n<h4>2.2 [...]
   "link": "/zh-cn/docs/1.3.2/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
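Both hunks keep the setting yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s, where %s is a placeholder that is filled in with a YARN application id when querying the ResourceManager's REST API for an application's status. A small sketch of that substitution, under the assumption that plain %-formatting is used (the host xxxx and the application id below are illustrative, not values from this commit):

```python
# Sketch: fill the %s placeholder of yarn.application.status.address with a
# YARN application id to build the status-query URL. Host and id are
# illustrative placeholders, not real values.

def application_status_url(template, application_id):
    """Substitute the YARN application id into the configured URL template."""
    if "%s" not in template:
        raise ValueError("template must contain a %s placeholder")
    return template % application_id

template = "http://xxxx:8088/ws/v1/cluster/apps/%s"
print(application_status_url(template, "application_1602744000000_0001"))
# → http://xxxx:8088/ws/v1/cluster/apps/application_1602744000000_0001
```

For a ResourceManager HA setup, the notes removed by this commit said to configure only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address and leave the other empty, so only one of the two ever yields the address to query.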