Posted to commits@kylin.apache.org by sh...@apache.org on 2018/09/30 02:22:45 UTC

[kylin] branch document updated: review howto_use_cli document

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new c8dbef0  review howto_use_cli document
c8dbef0 is described below

commit c8dbef0f1e8fd0d27d7568bab0bb6a8d3c9e3017
Author: shaofengshi <sh...@apache.org>
AuthorDate: Sun Sep 30 10:22:30 2018 +0800

    review howto_use_cli document
---
 website/_data/docs-cn.yml                          |  2 +-
 website/_data/docs.yml                             |  2 +-
 website/_docs/gettingstarted/faq.md                |  4 +-
 website/_docs/howto/howto_backup_metadata.cn.md    | 22 +++---
 website/_docs/howto/howto_backup_metadata.md       | 10 +--
 .../tools.cn.md => howto/howto_use_cli.cn.md}      | 48 +++++++------
 .../{tutorial/tools.md => howto/howto_use_cli.md}  | 78 ++++++++++++----------
 7 files changed, 84 insertions(+), 82 deletions(-)

diff --git a/website/_data/docs-cn.yml b/website/_data/docs-cn.yml
index 93f29ec..27a69cd 100644
--- a/website/_data/docs-cn.yml
+++ b/website/_data/docs-cn.yml
@@ -53,7 +53,6 @@
   - tutorial/squirrel
   - tutorial/Qlik
   - tutorial/superset
-  - tutorial/tools
 
 
 - title: 帮助
@@ -64,4 +63,5 @@
   - howto/howto_optimize_build
   - howto/howto_backup_metadata
   - howto/howto_cleanup_storage
+  - howto/howto_use_cli
 
diff --git a/website/_data/docs.yml b/website/_data/docs.yml
index ead957e..49fc11e 100644
--- a/website/_data/docs.yml
+++ b/website/_data/docs.yml
@@ -65,7 +65,6 @@
   - tutorial/hue
   - tutorial/Qlik
   - tutorial/superset
-  - tutorial/tools
 
 - title: How To
   docs:
@@ -76,6 +75,7 @@
   - howto/howto_backup_metadata
   - howto/howto_cleanup_storage
   - howto/howto_upgrade
+  - howto/howto_use_cli
   - howto/howto_ldap_and_sso
   - howto/howto_use_beeline
   - howto/howto_update_coprocessor
diff --git a/website/_docs/gettingstarted/faq.md b/website/_docs/gettingstarted/faq.md
index 9ebcad4..23fd3ba 100644
--- a/website/_docs/gettingstarted/faq.md
+++ b/website/_docs/gettingstarted/faq.md
@@ -12,7 +12,7 @@ since: v0.6.x
 
 #### How to compare Kylin with other SQL engines like Hive, Presto, Spark SQL, Impala?
 
-  * They answer a query in different ways. Kylin is not a replacement for them, but a supplement (query accelerator). Many users run Kylin together with other SQL engines. For the high frequent query patterns, building Cubes can greatly improve the performance and also offload cluster workloads. For less queried patterns or ad-hoc queries, other engines are more flexible.
+  * They answer a query in different ways. Kylin is not a replacement for them, but a supplement (query accelerator). Many users run Kylin together with other SQL engines. For the high frequent query patterns, building Cubes can greatly improve the performance and also offload cluster workloads. For less queried patterns or ad-hoc queries, other MPP engines are more flexible.
 
 #### What's a typical scenario to use Apache Kylin?
 
@@ -86,7 +86,7 @@ But if you do want, there are some workarounds. 1) Add the primary key as a dime
 
   * Cube is stored in HBase. Each cube segment is an HBase table. The dimension values will be composed as the row key. The measures will be serialized in columns. To improve the storage efficiency, both dimension and measure values will be encoded to bytes. Kylin will decode the bytes to origin values after fetching from HBase. Without Kylin's metadata, the HBase tables are not readable.
 
-#### How to encrypt Cube Data?
+#### How to encrypt cube data?
 
   * You can enable encryption at HBase side. Refer https://hbase.apache.org/book.html#hbase.encryption.server for more details.
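The storage answer above says encoded dimension values are composed into the HBase row key and decoded back to the original values after fetching. A toy sketch of that idea (a hypothetical delimiter-based codec for illustration only, not Kylin's actual dictionary/fixed-length encoding):

```python
# Toy illustration of the layout described in the FAQ answer above:
# encoded dimension values are concatenated into a row key, and the
# bytes are decoded back after fetching. Hypothetical codec, not Kylin's.

def encode_row_key(dimension_values):
    # Join encoded dimension values in a fixed dimension order.
    return b"\x00".join(v.encode("utf-8") for v in dimension_values)

def decode_row_key(row_key):
    # Recover the original dimension values from the key bytes.
    return [part.decode("utf-8") for part in row_key.split(b"\x00")]

key = encode_row_key(["2018-09-30", "CN", "Mobile"])
assert decode_row_key(key) == ["2018-09-30", "CN", "Mobile"]
```

Without the codec (in the real system, Kylin's metadata), the raw bytes cannot be interpreted, which is why the HBase tables are not readable on their own.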
 
diff --git a/website/_docs/howto/howto_backup_metadata.cn.md b/website/_docs/howto/howto_backup_metadata.cn.md
index 58d3c99..4717597 100644
--- a/website/_docs/howto/howto_backup_metadata.cn.md
+++ b/website/_docs/howto/howto_backup_metadata.cn.md
@@ -5,18 +5,18 @@ categories: 帮助
 permalink: /cn/docs/howto/howto_backup_metadata.html
 ---
 
-Kylin将它全部的元数据(包括cube描述和实例、项目、倒排索引描述和实例、任务、表和字典)组织成层级文件系统的形式。然而,Kylin使用hbase来存储元数据,而不是一个普通的文件系统。如果你查看过Kylin的配置文件(kylin.properties),你会发现这样一行:
+Kylin将它全部的元数据(包括cube描述和实例、项目、倒排索引描述和实例、任务、表和字典)组织成层级文件系统的形式。然而,Kylin 使用 HBase 来存储元数据,而不是一个普通的文件系统。如果你查看过Kylin的配置文件(kylin.properties),你会发现这样一行:
 
 {% highlight Groff markup %}
 ## The metadata store in hbase
 kylin.metadata.url=kylin_metadata@hbase
 {% endhighlight %}
 
-这表明元数据会被保存在一个叫作“kylin_metadata”的htable里。你可以在hbase shell里scan该htbale来获取它。
+这表明元数据会被保存在一个叫作 “kylin_metadata” 的 htable 里。你可以在 hbase shell 里 scan 该 htable 来获取它。
 
-## 使用二进制包来备份Metadata Store
+## 使用二进制包来备份 metadata store
 
-有时你需要将Kylin的Metadata Store从hbase备份到磁盘文件系统。在这种情况下,假设你在部署Kylin的hadoop命令行(或沙盒)里,你可以到KYLIN_HOME并运行:
+有时你需要将 Kylin 的 metadata store 从 hbase 备份到磁盘文件系统。在这种情况下,假设你在部署 Kylin 的 hadoop 命令行(或沙盒)里,你可以到KYLIN_HOME并运行:
 
 {% highlight Groff markup %}
 ./bin/metastore.sh backup
@@ -24,26 +24,26 @@ kylin.metadata.url=kylin_metadata@hbase
 
 来将你的元数据导出到本地目录,这个目录在KYLIN_HOME/metadata_backps下,它的命名规则使用了当前时间作为参数:KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second 。
 
-## 使用二进制包来恢复Metatdara Store
+## 使用二进制包来恢复 metadata store
 
 万一你发现你的元数据被搞得一团糟,想要恢复先前的备份:
 
-首先,重置Metatdara Store(这个会清理Kylin在hbase的Metadata Store的所有信息,请确保先备份):
+首先,重置 metadata store(这个会清理 Kylin 在 HBase 的 metadata store 的所有信息,请确保先备份):
 
 {% highlight Groff markup %}
 ./bin/metastore.sh reset
 {% endhighlight %}
 
-然后上传备份的元数据到Kylin的Metadata Store:
+然后上传备份的元数据到 Kylin 的 metadata store:
 {% highlight Groff markup %}
 ./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
 {% endhighlight %}
 
-## 在开发环境备份/恢复元数据(0.7.3版本以上可用)
+## 在开发环境备份/恢复元数据
 
-在开发调试Kylin时,典型的环境是一台装有IDE的开发机上和一个后台的沙盒,通常你会写代码并在开发机上运行测试案例,但每次都需要将二进制包放到沙盒里以检查元数据是很麻烦的。这时有一个名为SandboxMetastoreCLI工具类可以帮助你在开发机本地下载/上传元数据。
+在开发调试 Kylin 时,典型的环境是一台装有 IDE 的开发机和一个后台的沙盒,通常你会写代码并在开发机上运行测试案例,但每次都需要将二进制包放到沙盒里以检查元数据是很麻烦的。这时有一个名为 SandboxMetastoreCLI 的工具类可以帮助你在开发机本地下载/上传元数据。
 
-## 从Metadata Store清理无用的资源(0.7.3版本以上可用)
+## 从 metadata store 清理无用的资源
 随着运行时间增长,类似字典、表快照的资源变得没有用(cube segment被丢弃或者合并了),但是它们依旧占用空间,你可以运行命令来找到并清除它们:
 
 首先,运行一个检查,这是安全的因为它不会改变任何东西:
@@ -53,7 +53,7 @@ kylin.metadata.url=kylin_metadata@hbase
 
 将要被删除的资源会被列出来:
 
-接下来,增加“--delete true”参数来清理这些资源;在这之前,你应该确保已经备份metadata store:
+接下来,增加 “--delete true” 参数来清理这些资源;在这之前,你应该确保已经备份 metadata store:
 {% highlight Groff markup %}
 ./bin/metastore.sh clean --delete true
 {% endhighlight %}
diff --git a/website/_docs/howto/howto_backup_metadata.md b/website/_docs/howto/howto_backup_metadata.md
index 2400f19..a559d9a 100644
--- a/website/_docs/howto/howto_backup_metadata.md
+++ b/website/_docs/howto/howto_backup_metadata.md
@@ -14,9 +14,9 @@ kylin.metadata.url=kylin_metadata@hbase
 
 This indicates that the metadata will be saved as a htable called `kylin_metadata`. You can scan the htable in hbase shell to check it out.
 
-## Backup Metadata Store with binary package
+## Backup metadata store with binary package
 
-Sometimes you need to backup the Kylin's Metadata Store from hbase to your disk file system.
+Sometimes you need to backup the Kylin's metadata store from hbase to your disk file system.
 In such cases, assuming you're on the hadoop CLI(or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run :
 
 {% highlight Groff markup %}
@@ -25,7 +25,7 @@ In such cases, assuming you're on the hadoop CLI(or sandbox) where you deployed
 
 to dump your metadata to your local folder a folder under KYLIN_HOME/metadata_backps, the folder is named after current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
 
-## Restore Metadata Store with binary package
+## Restore metadata store with binary package
 
 In case you find your metadata store messed up, and you want to restore to a previous backup:
 
@@ -40,11 +40,11 @@ Then upload the backup metadata to Kylin's metadata store:
 ./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
 {% endhighlight %}
 
-## Backup/restore metadata in development env (available since 0.7.3)
+## Backup/restore metadata in development env 
 
 When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases at dev machine. It would be troublesome if you always have to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally at your dev machine. Follow the Usage information and run it in your IDE.
 
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
+## Cleanup unused resources from metadata store
 As time goes on, some resources like dictionary, table snapshots became useless (as the cube segment be dropped or merged), but they still take space there; You can run command to find and cleanup them from metadata store:
 
 Firstly, run a check, this is safe as it will not change anything:
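The backup section of this file names each dump folder after the current time, with the syntax `KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second`. A sketch of that naming rule (illustrative only; `metastore.sh backup` produces this path internally):

```python
# Sketch of the backup-folder naming rule described above. The real
# logic lives inside metastore.sh; this only illustrates the format.
import os
from datetime import datetime

def backup_dir(kylin_home, now=None):
    now = now or datetime.now()
    name = now.strftime("meta_%Y_%m_%d_%H_%M_%S")
    return os.path.join(kylin_home, "meta_backups", name)

print(backup_dir("/usr/local/kylin", datetime(2018, 9, 30, 10, 22, 30)))
# /usr/local/kylin/meta_backups/meta_2018_09_30_10_22_30
```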
diff --git a/website/_docs/tutorial/tools.cn.md b/website/_docs/howto/howto_use_cli.cn.md
similarity index 72%
rename from website/_docs/tutorial/tools.cn.md
rename to website/_docs/howto/howto_use_cli.cn.md
index 20cb65b..fd18a49 100644
--- a/website/_docs/tutorial/tools.cn.md
+++ b/website/_docs/howto/howto_use_cli.cn.md
@@ -1,10 +1,10 @@
 ---
 layout: docs-cn
-title:  "Kylin 中的工具类"
-categories: tutorial
-permalink: /cn/docs/tutorial/tools.html
+title:  "实用 CLI 工具"
+categories: howto
+permalink: /cn/docs/howto/howto_use_cli.html
 ---
-Kylin 有很多好的工具类。这篇文档会介绍以下几个工具类:KylinConfigCLI.java,CubeMetaExtractor.java,CubeMetaIngester.java,CubeMigrationCLI.java 和 CubeMigrationCheckCLI.java。在使用这些工具类前,首先要切换到 KYLIN_HOME 目录下。
+Kylin 提供一些方便实用的工具类。这篇文档会介绍以下几个工具类:KylinConfigCLI.java,CubeMetaExtractor.java,CubeMetaIngester.java,CubeMigrationCLI.java 和 CubeMigrationCheckCLI.java。在使用这些工具类前,首先要切换到 KYLIN_HOME 目录下。
 
 ## KylinConfigCLI.java
 
@@ -42,7 +42,7 @@ sampling-percentage=100
 ## CubeMetaExtractor.java
 
 ### 作用
-CubeMetaExtractor.java 用于提取与 Cube 相关的信息以达到调试/分发的目的。  
+CubeMetaExtractor.java 用于提取与 cube 相关的信息以达到调试/分发的目的。  
 
 ### 如何使用
 类名后至少写两个参数。
@@ -54,7 +54,7 @@ CubeMetaExtractor.java 用于提取与 Cube 相关的信息以达到调试/分
 ./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube querycube -destDir /root/newconfigdir1
 {% endhighlight %}
 结果:
-命令执行成功后,您想要抽取的 Cube / project / hybrid 将会存在于您指定的 destDir 目录中。
+命令执行成功后,您想要抽取的 cube / project / hybrid 将会存在于您指定的 destDir 目录中。
 
 下面会列出所有支持的参数:
 
@@ -64,7 +64,6 @@ CubeMetaExtractor.java 用于提取与 Cube 相关的信息以达到调试/分
 | compress <compress>                                   | Specify whether to compress the output with zip. Default true.                                      | 
 | cube <cube>                                           | Specify which Cube to extract                                                                       |
 | destDir <destDir>                                     | (Required) Specify the dest dir to save the related information                                     |
-| engineType <engineType>                               | Specify the engine type to overwrite. Default is empty, keep origin.                                |
 | hybrid <hybrid>                                       | Specify which hybrid to extract                                                                     |
 | includeJobs <includeJobs>                             | Set this to true if want to extract job info/outputs too. Default false                             |
 | includeSegmentDetails <includeSegmentDetails>         | Set this to true if want to extract segment details too, such as dict, tablesnapshot. Default false |
@@ -72,16 +71,15 @@ CubeMetaExtractor.java 用于提取与 Cube 相关的信息以达到调试/分
 | onlyOutput <onlyOutput>                               | When include jobs, only extract output of job. Default true                                         |
 | packagetype <packagetype>                             | Specify the package type                                                                            |
 | project <project>                                     | Specify realizations in which project to extract                                                    |
-| storageType <storageType>                             | Specify the storage type to overwrite. Default is empty, keep origin.                               |
 | submodule <submodule>                                 | Specify whether this is a submodule of other CLI tool. Default false.                               |
 
 ## CubeMetaIngester.java
 
 ### 作用
-CubeMetaIngester.java 将提取的 Cube 吞并到另一个 metadata store 中。目前其只支持吞并 cube。  
+CubeMetaIngester.java 将提取的 cube 注入到另一个 metadata store 中。目前其只支持注入 cube。  
 
 ### 如何使用
-类名后至少写两个参数。请确保您想要吞并的 Cube 在要吞并到的 project 中不存在。注意:zip 文件解压后必须只能包含一个目录。
+类名后至少写两个参数。请确保您想要注入的 cube 在要注入的 project 中不存在。注意:zip 文件解压后必须只能包含一个目录。
 {% highlight Groff markup %}
 ./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project <target_project> -srcPath <your_src_dir>
 {% endhighlight %}
@@ -90,21 +88,21 @@ CubeMetaIngester.java 将提取的 Cube 吞并到另一个 metadata store 中。
 ./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /root/newconfigdir1/cubes.zip
 {% endhighlight %}
 结果:
-命令执行成功后,您想要吞并的 Cube 将会存在于您指定的 srcPath 目录中。
+命令执行成功后,您想要注入的 cube 将会出现在您指定的 project 中。
 
 下面会列出所有支持的参数:
 
 | Parameter                         | Description                                                                                                                                                                                        |
 | --------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| forceIngest <forceIngest>         | Skip the target Cube, model and table check and ingest by force. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false.                      |
+| forceIngest <forceIngest>         | Skip the target cube, model and table check and ingest by force. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false.                      |
 | overwriteTables <overwriteTables> | If table meta conflicts, overwrite the one in metadata store with the one in srcPath. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false. |
-| project <project>                 | (Required) Specify the target project for the new cubes.                                                                                                                                           |
+| project <project>                 | (Required) Specify the target project for the new cubes.                              
 | srcPath <srcPath>                 | (Required) Specify the path to the extracted Cube metadata zip file.                                                                                                                               |
 
 ##  CubeMigrationCLI.java
 
 ### 作用
-CubeMigrationCLI.java 用于迁移 cubes。例如:将 Cube 从开发环境迁移到测试(生产)环境,反之亦然。请注意,我们假设不同的环境是共享相同的 Hadoop 集群,包括 HDFS,HBase 和 HIVE。  
+CubeMigrationCLI.java 用于迁移 cubes。例如:将 cube 从测试环境迁移到生产环境。请注意,不同的环境需共享相同的 Hadoop 集群,包括 HDFS,HBase 和 HIVE。此 CLI 不支持跨 Hadoop 集群的数据迁移。
 
 ### 如何使用
 前八个参数必须有且次序不能改变。
@@ -113,25 +111,25 @@ CubeMigrationCLI.java 用于迁移 cubes。例如:将 Cube 从开发环境迁
 {% endhighlight %}
 例如:
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI /root/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties /root/me/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties querycube IngesterTest true false false true false
+./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI kylin-qa:7070 kylin-prod:7070 kylin_sales_cube learn_kylin true false false true false
 {% endhighlight %}
-命令执行成功后,请 reload metadata,您想要迁移的 Cube 将会存在于迁移后的 project 中。
+命令执行成功后,请 reload metadata,您想要迁移的 cube 将会存在于迁移后的 project 中。
 
 下面会列出所有支持的参数:
- 如果您使用 `cubeName` 这个参数,但想要迁移的 Cube 所对应的 model 在要迁移的环境中不存在,不必担心,model 的数据也会迁移过去。
- 如果您将 `overwriteIfExists` 设置为 false,且该 Cube 已存在于要迁移的环境中,当您运行命令,Cube 存在的提示信息将会出现。
+ 如果您使用 `cubeName` 这个参数,但想要迁移的 cube 所对应的 model 在要迁移的环境中不存在,model 的数据也会迁移过去。
+ 如果您将 `overwriteIfExists` 设置为 false,且该 cube 已存在于要迁移的环境中,当您运行命令,cube 存在的提示信息将会出现。
  如果您将 `migrateSegmentOrNot` 设置为 true,请保证 Kylin metadata 的 HDFS 目录存在且 Cube 的状态为 READY。
 
 | Parameter           | Description                                                                                |
 | ------------------- | :----------------------------------------------------------------------------------------- |
-| srcKylinConfigUri   | The KylinConfig of the Cube’s source                                                      |
-| dstKylinConfigUri   | The KylinConfig of the Cube’s new home                                                    |
+| srcKylinConfigUri   | The URL of the source environment's Kylin configuration. It can be `host:7070`, or an absolute file path to the `kylin.properties`.                                                     |
+| dstKylinConfigUri   | The URL of the target environment's Kylin configuration.                                                 |
 | cubeName            | the name of Cube to be migrated.(Make sure it exist)                                       |
 | projectName         | The target project in the target environment.(Make sure it exist)                          |
-| copyAclOrNot        | True or false: whether copy Cube ACL to target environment.                                |
-| purgeOrNot          | True or false: whether purge the Cube from src server after the migration.                 |
-| overwriteIfExists   | Overwrite Cube if it already exists in the target environment.                             |
-| realExecute         | If false, just print the operations to take, if true, do the real migration.               |
+| copyAclOrNot        | `true` or `false`: whether copy Cube ACL to target environment.                                |
+| purgeOrNot          | `true` or `false`: whether purge the Cube from src server after the migration.                 |
+| overwriteIfExists   | `true` or `false`: overwrite cube if it already exists in the target environment.                             |
+| realExecute         | `true` or `false`: if false, just print the operations to take, if true, do the real migration.               |
 | migrateSegmentOrNot | (Optional) true or false: whether copy segment data to target environment. Default true.   |
 
 ## CubeMigrationCheckCLI.java
@@ -145,7 +143,7 @@ CubeMigrationCheckCLI.java 用于在迁移 Cube 之后检查“KYLIN_HOST”属
 {% endhighlight %}
 例如:
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri /root/me/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties -cube querycube
+./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri kylin-prod:7070 -cube querycube
 {% endhighlight %}
 下面会列出所有支持的参数:
 
diff --git a/website/_docs/tutorial/tools.md b/website/_docs/howto/howto_use_cli.md
similarity index 64%
rename from website/_docs/tutorial/tools.md
rename to website/_docs/howto/howto_use_cli.md
index 853e9f7..1024295 100644
--- a/website/_docs/tutorial/tools.md
+++ b/website/_docs/howto/howto_use_cli.md
@@ -1,14 +1,14 @@
 ---
 layout: docs
-title:  Tool classes in Kylin
-categories: tutorial
-permalink: /docs/tutorial/tools.html
+title:  Use Utility CLIs
+categories: howto
+permalink: /docs/howto/howto_use_cli.html
 ---
-Kylin has many tool class. This document will introduce the following class: KylinConfigCLI.java, CubeMetaExtractor.java, CubeMetaIngester.java, CubeMigrationCLI.java and CubeMigrationCheckCLI.java. Before using these tools, you have to switch to the KYLIN_HOME directory. 
+Kylin has some client utility tools. This document will introduce the following classes: KylinConfigCLI.java, CubeMetaExtractor.java, CubeMetaIngester.java, CubeMigrationCLI.java and CubeMigrationCheckCLI.java. Before using these tools, you have to switch to the KYLIN_HOME directory.
 
 ## KylinConfigCLI.java
 
-### Intention
+### Function
 KylinConfigCLI.java outputs the value of Kylin properties. 
 
 ### How to use 
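As a rough sketch of what KylinConfigCLI does conceptually: a full key returns its value, while a prefix query lists the matching keys with the prefix stripped (compare the `sampling-percentage=100` output shown in the diff context). This is a toy re-implementation for illustration, not the tool's actual code:

```python
# Toy re-implementation of the KylinConfigCLI lookup idea: a full key
# returns its value; a prefix ending with '.' returns all matching
# keys with the prefix removed. Illustration only.

def query(properties, name):
    if name.endswith("."):
        return {k[len(name):]: v for k, v in properties.items()
                if k.startswith(name)}
    return properties.get(name)

props = {
    "kylin.server.mode": "all",
    "kylin.job.sampling-percentage": "100",
}
assert query(props, "kylin.server.mode") == "all"
assert query(props, "kylin.job.") == {"sampling-percentage": "100"}
```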
@@ -42,7 +42,7 @@ sampling-percentage=100
 
 ## CubeMetaExtractor.java
 
-### Intention
+### Function
 CubeMetaExtractor.java is to extract Cube related info for debugging / distributing purpose.  
 
 ### How to use
@@ -52,10 +52,10 @@ At least two parameters should be followed.
 {% endhighlight %}
 For example: 
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube querycube -destDir /root/newconfigdir1
+./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube kylin_sales_cube -destDir /tmp/kylin_sales_cube
 {% endhighlight %}
 Result:
-After the command is successfully executed, the Cube / project / hybrid you want to extract will exist in the destDir.
+After the command is executed, the cube, project or hybrid you want to extract will be dumped to the specified path.
 
 All supported parameters are listed below:  
 
@@ -65,33 +65,34 @@ All supported parameters are listed below:
 | compress <compress>                                   | Specify whether to compress the output with zip. Default true.                                      | 
 | cube <cube>                                           | Specify which Cube to extract                                                                       |
 | destDir <destDir>                                     | (Required) Specify the dest dir to save the related information                                     |
-| engineType <engineType>                               | Specify the engine type to overwrite. Default is empty, keep origin.                                |
 | hybrid <hybrid>                                       | Specify which hybrid to extract                                                                     |
 | includeJobs <includeJobs>                             | Set this to true if want to extract job info/outputs too. Default false                             |
 | includeSegmentDetails <includeSegmentDetails>         | Set this to true if want to extract segment details too, such as dict, tablesnapshot. Default false |
 | includeSegments <includeSegments>                     | Set this to true if want extract the segments info. Default true                                    |
 | onlyOutput <onlyOutput>                               | When include jobs, only extract output of job. Default true                                         |
 | packagetype <packagetype>                             | Specify the package type                                                                            |
-| project <project>                                     | Specify realizations in which project to extract                                                    |
-| storageType <storageType>                             | Specify the storage type to overwrite. Default is empty, keep origin.                               |
-| submodule <submodule>                                 | Specify whether this is a submodule of other CLI tool. Default false.                               |
+| project <project>                                     | Which project to extract                                                    |
+| submodule <submodule>                                 | Specify whether this is a submodule of other CLI tool. Default false.                               |
 
 ## CubeMetaIngester.java
 
-### Intention
-CubeMetaIngester.java is to ingest the extracted Cube meta into another metadata store. It only supports ingest cube now. 
+### Function
+CubeMetaIngester.java is to ingest the extracted cube metadata into another metadata store. It only supports ingesting cubes now. 
 
 ### How to use
-At least two parameters should be followed. Please make sure the cube you want to ingest does not exist in the target project. Note: The zip file must contain only one directory after it has been decompressed.
+At least two parameters should be specified. Please make sure the cube you want to ingest does not exist in the target project. 
+
+Note: The zip file must contain only one directory after it has been decompressed.
+
 {% highlight Groff markup %}
 ./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project <target_project> -srcPath <your_src_dir>
 {% endhighlight %}
 For example: 
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /root/newconfigdir1/cubes.zip
+./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /tmp/newconfigdir1/cubes.zip
 {% endhighlight %}
 Result:
-After the command is successfully executed, the Cube you want to ingest will exist in the srcPath.
+After the command is successfully executed, the cube you want to ingest will appear in the target project.
 
 All supported parameters are listed below:
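The note above requires the ingester's zip to contain exactly one directory after decompression. A sketch of such a pre-check (hypothetical helper, not part of CubeMetaIngester):

```python
# Sketch of a pre-check for the ingester's constraint described above:
# the zip must unpack to exactly one top-level directory.
# Hypothetical helper for illustration, not part of CubeMetaIngester.
import io
import zipfile

def has_single_top_dir(zf):
    # Every entry must share one top-level directory name.
    tops = {name.split("/", 1)[0] for name in zf.namelist()}
    return len(tops) == 1

# Demo with an in-memory zip that unpacks to a single directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as writer:
    writer.writestr("meta/cube.json", "{}")
    writer.writestr("meta/model.json", "{}")
assert has_single_top_dir(zipfile.ZipFile(buf))
```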
 
@@ -104,40 +105,43 @@ All supported parameters are listed below:
 
 ## CubeMigrationCLI.java
 
-### Intention
-CubeMigrationCLI.java serves for the purpose of migrating cubes. e.g. upgrade Cube from dev env to test(prod) env, or vice versa. Note that different envs are assumed to share the same Hadoop cluster, including HDFS, HBase and HIVE.  
+### Function
+CubeMigrationCLI.java can migrate a cube from one Kylin environment to another, for example, promoting a well-tested cube from the testing env to the production env. Note that the different Kylin environments should share the same Hadoop cluster, including HDFS, HBase and HIVE. 
+
+Please note, this tool will migrate the Kylin metadata, rename the Kylin HDFS folders and update HBase table's metadata. It doesn't migrate data across Hadoop clusters. 
 
 ### How to use
-The first eight parameters must have and the order cannot be changed.
+The first eight parameters are required and their order cannot be changed.
+
 {% highlight Groff markup %}
 ./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI <srcKylinConfigUri> <dstKylinConfigUri> <cubeName> <projectName> <copyAclOrNot> <purgeOrNot> <overwriteIfExists> <realExecute> <migrateSegmentOrNot>
 {% endhighlight %}
 For example: 
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI /root/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties /root/me/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties querycube IngesterTest true false false true false
+./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI kylin-qa:7070 kylin-prod:7070 kylin_sales_cube learn_kylin true false false true false
 {% endhighlight %}
-After the command is successfully executed, please reload metadata, the Cube you want to migrate will exist in the target project.
+After the command is successfully executed, please reload Kylin metadata; the cube you want to migrate will appear in the target environment.
 
 All supported parameters are listed below:
- If you use `cubeName`, and the model of Cube you want to migrate in target environment does not exist, it will also migrate the model resource.
- If you set `overwriteIfExists` to false, and the Cube exists in the target environment, when you run the command, the prompt message will show.
- If you set `migrateSegmentOrNot` to true, please make sure the metadata HDFS dir of Kylin exists and the Cube status is READY.
+ If the data model of the cube you want to migrate does not exist in the target environment, this tool will also migrate the model.
+ If you set `overwriteIfExists` to `false`, and the cube already exists in the target environment, the tool will not proceed.
+ If you set `migrateSegmentOrNot` to `true`, please make sure the cube has `READY` segments; they will be migrated to the target environment together.
 
 | Parameter           | Description                                                                                |
 | ------------------- | :----------------------------------------------------------------------------------------- |
-| srcKylinConfigUri   | The KylinConfig of the Cube’s source                                                      |
-| dstKylinConfigUri   | The KylinConfig of the Cube’s new home                                                    |
-| cubeName            | the name of Cube to be migrated.(Make sure it exist)                                       |
-| projectName         | The target project in the target environment.(Make sure it exist)                          |
-| copyAclOrNot        | True or false: whether copy Cube ACL to target environment.                                |
-| purgeOrNot          | True or false: whether purge the Cube from src server after the migration.                 |
-| overwriteIfExists   | Overwrite Cube if it already exists in the target environment.                             |
-| realExecute         | If false, just print the operations to take, if true, do the real migration.               |
-| migrateSegmentOrNot | (Optional) true or false: whether copy segment data to target environment. Default true.   |
+| srcKylinConfigUri   | The URL of the source environment's Kylin configuration. It can be `host:7070`, or an absolute file path to the `kylin.properties`.                                                      |
+| dstKylinConfigUri   | The URL of the target environment's Kylin configuration.                                                     |
+| cubeName            | The name of the cube to be migrated.                                        |
+| projectName         | The target project in the target environment. If it doesn't exist, create it before running this command.                          |
+| copyAclOrNot        | `true` or `false`: whether to copy the cube ACL to the target environment.                                |
+| purgeOrNot          | `true` or `false`: whether to purge the cube from the source environment after it is migrated to the target environment.                 |
+| overwriteIfExists   | `true` or `false`: whether to overwrite the cube if it already exists in the target environment.                             |
+| realExecute         | `true` or `false`: if false, just print the operations to take (dry-run mode); if true, do the real migration.               |
+| migrateSegmentOrNot | (Optional) `true` or `false`: whether to copy segment info to the target environment. Default true.   |
 
 ## CubeMigrationCheckCLI.java
 
-### Intention
+### Function
 CubeMigrationCheckCLI.java serves for the purpose of checking the "KYLIN_HOST" property to be consistent with the dst's MetadataUrlPrefix for all of Cube segments' corresponding HTables after migrating a Cube. CubeMigrationCheckCLI.java will be called in CubeMigrationCLI.java and is usually not used separately. 
 
 ### How to use
@@ -146,7 +150,7 @@ CubeMigrationCheckCLI.java serves for the purpose of checking the "KYLIN_HOST" p
 {% endhighlight %}
 For example: 
 {% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri /root/me/apache-kylin-2.5.0-bin-hbase1x/conf/kylin.properties -cube querycube
+./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri kylin-prod:7070 -cube querycube
 {% endhighlight %}
 All supported parameters are listed below:
 
@@ -154,4 +158,4 @@ All supported parameters are listed below:
 | ------------------- | :---------------------------------------------------------------------------- |
 | fix                 | Fix the inconsistent Cube segments' HOST, default false                       |
 | dstCfgUri           | The KylinConfig of the Cube’s new home                                       |
-| cube                | The name of Cube migrated                                                     |
\ No newline at end of file
+| cube                | The cube name.                                                     |
\ No newline at end of file
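The parameter tables in this commit note that `srcKylinConfigUri`/`dstKylinConfigUri` (and `dstCfgUri`) accept either a `host:port` address or an absolute file path to a `kylin.properties`. A toy classifier of the two accepted forms (hypothetical, not Kylin's actual parsing):

```python
# Toy classifier for the two config-URI forms described in the tables
# above: a host:port REST address, or an absolute kylin.properties path.
# Hypothetical sketch, not Kylin's actual argument handling.
import os

def config_uri_kind(uri):
    if uri.endswith("kylin.properties") and os.path.isabs(uri):
        return "properties-file"
    if ":" in uri:
        return "rest-address"
    raise ValueError("unrecognized config uri: %r" % uri)

assert config_uri_kind("kylin-prod:7070") == "rest-address"
assert config_uri_kind("/root/apache-kylin/conf/kylin.properties") == "properties-file"
```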