Posted to commits@kylin.apache.org by sh...@apache.org on 2018/07/02 11:25:06 UTC

[kylin] branch document updated: KYLIN-2554 update chinese doc for v2.4

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new 54b17bf  KYLIN-2554 update chinese doc for v2.4
54b17bf is described below

commit 54b17bf86874f1dc8d2a1aadb09c2f743bf99d6a
Author: shaofengshi <sh...@apache.org>
AuthorDate: Mon Jul 2 19:24:53 2018 +0800

    KYLIN-2554 update chinese doc for v2.4
---
 website/_data/docs-cn.yml                          |  36 +-
 website/_docs/howto/howto_optimize_cubes.cn.md     | 212 ++++++++++
 website/_docs/howto/howto_upgrade.md               |   3 -
 .../howto/howto_use_restapi.cn.md                  |   6 +-
 website/_docs/install/advance_settings.cn.md       | 113 ++++++
 website/_docs/install/configuration.cn.md          | 223 +++++++++++
 website/_docs/install/index.cn.md                  |  84 ++--
 .../{_docs23 => _docs}/install/kylin_aws_emr.cn.md |   6 +-
 website/_docs/install/kylin_aws_emr.md             |   2 +-
 .../{_docs23 => _docs}/install/kylin_cluster.cn.md |   6 +-
 website/_docs/install/kylin_docker.cn.md           |  10 +
 website/_docs/tutorial/create_cube.cn.md           |  10 +-
 website/_docs/tutorial/create_cube.md              |   8 +-
 website/_docs/tutorial/cube_build_job.cn.md        |   2 +-
 .../_docs/tutorial/cube_build_performance.cn.md    | 266 +++++++++++++
 website/_docs/tutorial/cube_build_performance.md   |   8 +-
 website/_docs/tutorial/cube_spark.cn.md            | 169 ++++++++
 website/_docs/tutorial/cube_streaming.cn.md        | 219 +++++++++++
 website/_docs/tutorial/cube_streaming.md           |   2 +-
 website/_docs/tutorial/jdbc.cn.md                  |   2 +-
 website/_docs/tutorial/kylin_client_tool.cn.md     |   2 +-
 website/_docs/tutorial/kylin_sample.cn.md          |  34 ++
 website/_docs/tutorial/odbc.cn.md                  |   4 +-
 website/_docs/tutorial/powerbi.cn.md               |   2 +-
 website/_docs/tutorial/project_level_acl.cn.md     |  63 +++
 website/_docs/tutorial/setup_jdbc_datasource.cn.md |  93 +++++
 website/_docs/tutorial/setup_systemcube.cn.md      | 438 +++++++++++++++++++++
 website/_docs/tutorial/spark.cn.md                 |  90 +++++
 website/_docs/tutorial/squirrel.cn.md              | 112 ++++++
 website/_docs/tutorial/tableau.cn.md               |   8 +-
 website/_docs/tutorial/tableau_91.cn.md            |   4 -
 website/_docs/tutorial/tableau_91.md               |   4 -
 website/_docs/tutorial/use_cube_planner.cn.md      | 127 ++++++
 website/_docs/tutorial/use_cube_planner.md         |   8 +-
 website/_docs/tutorial/use_dashboard.cn.md         |  99 +++++
 website/_docs/tutorial/web.cn.md                   |   2 +-
 website/_docs23/howto/howto_use_restapi.cn.md      |   2 +-
 website/_docs23/install/kylin_aws_emr.cn.md        |   2 +-
 website/_docs23/install/kylin_cluster.cn.md        |   2 +-
 website/_docs23/install/manual_install_guide.cn.md |  29 --
 40 files changed, 2399 insertions(+), 113 deletions(-)

diff --git a/website/_data/docs-cn.yml b/website/_data/docs-cn.yml
index ef35796..421ff2f 100644
--- a/website/_data/docs-cn.yml
+++ b/website/_data/docs-cn.yml
@@ -18,25 +18,47 @@
 
 - title: 安装
   docs:
-  - install/manual_install_guide
+  - install/index
+  - install/kylin_cluster
+  - install/configuration
+  - install/advance_settings
+  - install/kylin_aws_emr
+  - install/kylin_docker
 
 - title: 教程
   docs:
+  - tutorial/kylin_sample
+  - tutorial/web
   - tutorial/create_cube
   - tutorial/cube_build_job
   - tutorial/project_level_acl
-  - tutorial/web
+  - tutorial/cube_spark
+  - tutorial/cube_streaming
+  - tutorial/cube_build_performance
   - tutorial/kylin_client_tool
+  - tutorial/setup_systemcube
+  - tutorial/use_cube_planner
+  - tutorial/use_dashboard
+  - tutorial/setup_jdbc_datasource
+
+- title: 工具集成
+  docs:
+  - tutorial/odbc
+  - tutorial/jdbc
   - tutorial/tableau
   - tutorial/tableau_91
   - tutorial/powerbi
-  - tutorial/odbc
+  - tutorial/microstrategy
+  - tutorial/squirrel
   - tutorial/Qlik
 
+
 - title: 帮助
-  docs:
-  - howto/howto_backup_metadata
+  docs:  
+  - howto/howto_use_restapi
   - howto/howto_build_cube_with_restapi
-  - howto/howto_cleanup_storage
-  - howto/howto_jdbc
+  - howto/howto_optimize_cubes
   - howto/howto_optimize_build
+  - howto/howto_backup_metadata
+  - howto/howto_cleanup_storage
+
diff --git a/website/_docs/howto/howto_optimize_cubes.cn.md b/website/_docs/howto/howto_optimize_cubes.cn.md
new file mode 100644
index 0000000..6cd201d
--- /dev/null
+++ b/website/_docs/howto/howto_optimize_cubes.cn.md
@@ -0,0 +1,212 @@
+---
+layout: docs-cn
+title:  优化 Cube 设计
+categories: howto
+permalink: /cn/docs/howto/howto_optimize_cubes.html
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create that many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when doing drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
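+
+Spelled out, that case needs only these four group-bys (2^4=16 combinations reduced to 4):
+
+group by year
+group by year, quarter
+group by year, quarter, month
+group by year, quarter, month, date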
+
+If we denote the hierarchy dimensions as H1, H2, H3, the typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case for scenario A, where the PK on the lookup table is accidentally part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A*, what you need is another optimization called "Derived Columns".
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on a lookup table; these columns are called "derived") can be deduced from another (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table that we join with the fact table via "where DimA = DimX". Notice that in Kylin, if you choose the FK as a dimension, the corresponding PK becomes automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB, DimC only.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value in DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC,AB,AC,BC,A,B,C
+
+combinations when deriving B from A:
+AC,A,C
+
+At runtime, a query like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB" expects a cuboid containing DimB to answer it. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first, giving an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both sets of values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
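+
+Conceptually, the rewrite looks like the following sketch (for illustration only; the actual transformation happens inside Kylin's query planner, not in user SQL):
+
+{% highlight Groff markup %}
+-- original query: would need a cuboid containing DimB, which does not exist
+select count(*) from fact_table inner join lookup1 group by lookup1.dimB;
+
+-- rewritten to group by the host column DimA first
+select count(*) from fact_table inner join lookup1 group by lookup1.dimA;
+-- DimA values are then mapped to DimB via the in-memory lookup table,
+-- and Calcite re-aggregates the intermediate result by DimB
+{% endhighlight %}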
diff --git a/website/_docs/howto/howto_upgrade.md b/website/_docs/howto/howto_upgrade.md
index 0f698fe..1110f21 100644
--- a/website/_docs/howto/howto_upgrade.md
+++ b/website/_docs/howto/howto_upgrade.md
@@ -20,9 +20,6 @@ Running as a Hadoop client, Apache Kylin's metadata and Cube data are persistend
 Below are version-specific guides:
 
 
-## Upgrade from v2.3.0 to v2.4.0
-Metadata is compitable, but the coprocessor need be updated.
-
 ## Upgrade from v2.1.0 to v2.2.0
 
 Kylin v2.2.0 cube metadata is compatible with v2.1.0, but you need to be aware of the following changes:
diff --git a/website/_docs23/howto/howto_use_restapi.cn.md b/website/_docs/howto/howto_use_restapi.cn.md
similarity index 99%
copy from website/_docs23/howto/howto_use_restapi.cn.md
copy to website/_docs/howto/howto_use_restapi.cn.md
index a3399d0..2bbebeb 100644
--- a/website/_docs23/howto/howto_use_restapi.cn.md
+++ b/website/_docs/howto/howto_use_restapi.cn.md
@@ -1,8 +1,8 @@
 ---
-layout: docs23-cn
-title:  Use RESTful API
+layout: docs-cn
+title:  RESTful API
 categories: howto
-permalink: /cn/docs23/howto/howto_use_restapi.html
+permalink: /cn/docs/howto/howto_use_restapi.html
 since: v0.7.1
 ---
 
diff --git a/website/_docs/install/advance_settings.cn.md b/website/_docs/install/advance_settings.cn.md
new file mode 100644
index 0000000..37f6a48
--- /dev/null
+++ b/website/_docs/install/advance_settings.cn.md
@@ -0,0 +1,113 @@
+---
+layout: docs-cn
+title: "高级设置"
+categories: install
+permalink: /cn/docs/install/advance_settings.html
+---
+
+## Overriding the default kylin.properties at the Cube level
+`conf/kylin.properties` contains many parameters that control or affect Kylin's behavior. Most of them are global configurations, such as security or job related parameters, but some are Cube related and can be customized at the level of each Cube. The corresponding GUI page is the "Configuration Overwrites" step of the Cube creation wizard, as shown in the figure below.
+
+![]( /images/install/overwrite_config_v2.png)
+
+Two examples:
+
+ * `kylin.cube.algorithm`: defines the cubing algorithm selected by the job engine; the default value is "auto", meaning the engine dynamically picks an algorithm ("layer" or "inmem") based on sampled data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly.
+
+ * `kylin.storage.hbase.region-cut-gb`: defines the size of a region when creating HBase tables. The default is "5" (GB) per region, which may be too big for a small or medium cube, so you can set a smaller value to get more regions and thus better query performance.
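+
+As a sketch, overriding these two parameters at the Cube level amounts to adding key/value pairs like the following (the values here are only examples):
+{% highlight Groff markup %}
+kylin.cube.algorithm=inmem
+kylin.storage.hbase.region-cut-gb=1
+{% endhighlight %}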
+
+## Overriding the default Hadoop job conf values at the Cube level
+`conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` manage the default configurations for Hadoop jobs. If you want to customize them per cube, you can do it in a similar way as above, but with the prefix `kylin.engine.mr.config-override.`; these configurations are parsed and applied when the jobs are submitted. Below are two examples:
+
+ * To let the job get more memory from Yarn, you can define: `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
+ * To let the cube's job use a different Yarn resource queue, you can define: `kylin.engine.mr.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just an example; replace it with your queue name)
+
+## Overriding the default Hive job conf values at the Cube level
+
+`conf/kylin_hive_conf.xml` manages the default configurations for the Hive jobs run by Kylin (e.g., creating the flat Hive table). If you want to customize them per cube, you can do it in a similar way as above, but with another prefix, `kylin.source.hive.config-override.`; these configurations are parsed and applied when running the "hive -e" or "beeline" commands. See the example below:
+
+ * To let Hive use a different Yarn resource queue, you can define: `kylin.source.hive.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just an example; replace it with your queue name)
+
+## Overriding the default Spark conf values at the Cube level
+
+Spark configurations are managed in `conf/kylin.properties` with the prefix `kylin.engine.spark-conf.`. For example, if you want to run Spark with the job queue "myQueue", setting "kylin.engine.spark-conf.spark.yarn.queue=myQueue" lets Spark get "spark.yarn.queue=myQueue" when submitting the application. These parameters can also be configured at the Cube level, overriding the default values in `conf/kylin.properties`.
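+
+For instance, a Cube-level sketch that overrides the queue and the executor memory could look like this ("myQueue" and the memory size are only examples):
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.queue=myQueue
+kylin.engine.spark-conf.spark.executor.memory=4G
+{% endhighlight %}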
+
+## Compression settings
+
+By default, Kylin does not enable compression. This is not a recommended setting for a production environment, but a trade-off for new Kylin users. A suitable compression algorithm will reduce the storage load, while an unsupported algorithm will break the Kylin job build. Kylin can use three types of compression: HBase table compression, Hive output compression, and MR jobs output compression.
+
+* HBase table compression
+The compression codec is defined by `kylin.storage.hbase.compression-codec` in `kylin.properties`, with default value *none*. Valid values include *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before switching the compression codec, please make sure your HBase cluster supports the selected codec; snappy, lzo and lz4 in particular are not included in every Hadoop distribution.
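+
+For example, to enable snappy compression for HBase tables (assuming your cluster ships with the snappy codec):
+{% highlight Groff markup %}
+kylin.storage.hbase.compression-codec=snappy
+{% endhighlight %}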
+
+* Hive output compression
+The compression is defined in `kylin_hive_conf.xml`. The default setting is empty, leveraging Hive's default configuration. To override it, add (or replace) the following properties in `kylin_hive_conf.xml`, taking snappy compression as an example:
+{% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.output.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+    <property>
+        <name>mapreduce.output.fileoutputformat.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+{% endhighlight %}
+
+* MR jobs output compression
+The compression is defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, leveraging MR's default configuration. To override it, add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`, taking snappy compression as an example:
+{% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.output.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+    <property>
+        <name>mapreduce.output.fileoutputformat.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+{% endhighlight %}
+
+Compression settings only take effect after restarting the Kylin server instance.
+
+## Allocating more memory to the Kylin instance
+
+Open `bin/setenv.sh`, which contains two sample settings for the `KYLIN_JVM_SETTINGS` environment variable. The default setting is small (4 GB at most); you can comment it out and uncomment the next line to allocate 16 GB:
+
+{% highlight Groff markup %}
+export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
+# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
+{% endhighlight %}
+
+## Enabling multiple job engines
+Since 2.0, Kylin supports running multiple job engines together. Compared with the default single-job-engine configuration, multiple engines ensure high availability of job building.
+
+To use multiple job engines, configure the role of multiple Kylin nodes as `job` or `all`. To avoid contention between them, the distributed job lock needs to be enabled; please configure the following in `kylin.properties`:
+
+```
+kylin.job.scheduler.default=2
+kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
+```
+And remember to register the addresses of all job and query nodes in `kylin.server.cluster-servers`.
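+
+A sketch for a two-node deployment (hostnames are only examples):
+{% highlight Groff markup %}
+kylin.server.mode=all
+kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
+{% endhighlight %}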
+
+## Enabling LDAP or SSO authentication
+
+Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
+
+
+## Enabling email notification
+
+Kylin can send email notifications when a job completes or fails; edit `conf/kylin.properties` and set the following parameters to enable it:
+{% highlight Groff markup %}
+mail.enabled=true
+mail.host=your-smtp-server
+mail.username=your-smtp-account
+mail.password=your-smtp-pwd
+mail.sender=your-sender-address
+kylin.job.admin.dls=adminstrator-address
+{% endhighlight %}
+
+Restart the Kylin server for the change to take effect. Set `mail.enabled` back to `false` to disable it.
+
+Administrators will receive notifications for all jobs. Modelers and analysts need to fill in their email addresses in the "Notification List" on the first page of cube creation; they will then receive notifications about that cube.
diff --git a/website/_docs/install/configuration.cn.md b/website/_docs/install/configuration.cn.md
new file mode 100644
index 0000000..6523df3
--- /dev/null
+++ b/website/_docs/install/configuration.cn.md
@@ -0,0 +1,223 @@
+---
+layout: docs-cn
+title:  "Kylin 配置"
+categories: install
+permalink: /cn/docs/install/configuration.html
+---
+
+Kylin automatically detects the Hadoop/Hive/HBase configurations from the environment, such as "core-site.xml", "hbase-site.xml" and others. Besides those, Kylin has its own configurations, located in the "conf" folder.
+
+{% highlight Groff markup %}
+-bash-4.1# ls -l $KYLIN_HOME/conf
+
+kylin_hive_conf.xml
+kylin_job_conf_inmem.xml
+kylin_job_conf.xml
+kylin-kafka-consumer.xml
+kylin.properties
+kylin-server-log4j.properties
+kylin-tools-log4j.properties
+setenv.sh 
+{% endhighlight %}
+
+## kylin_hive_conf.xml
+
+The Hive configurations that Kylin applies when fetching data from Hive.
+
+## kylin_job_conf.xml and kylin_job_conf_inmem.xml
+
+The Hadoop MR configurations for Kylin's MapReduce jobs. For Kylin's "in-mem cubing" jobs, "kylin_job_conf_inmem.xml" requests more memory for the mappers.
+
+## kylin-kafka-consumer.xml
+
+The Kafka configurations that Kylin applies when fetching data from Kafka brokers.
+
+
+## kylin-server-log4j.properties
+
+Log configuration for the Kylin server.
+
+## kylin-tools-log4j.properties
+
+Log configuration for the Kylin command-line tools.
+
+## setenv.sh 
+
+A shell script that sets environment variables. It is invoked by "kylin.sh" and the other scripts in the "bin" folder. Typically, you can adjust the Kylin JVM heap size here, and set "KAFKA_HOME" and other environment variables.
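+
+For example, a minimal `setenv.sh` sketch (the KAFKA_HOME path is only an example):
+{% highlight Groff markup %}
+export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M"
+export KAFKA_HOME=/usr/local/kafka
+{% endhighlight %}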
+
+## kylin.properties
+
+Kylin's main configuration file.
+
+
+| Key                                                   | Default value        | Description                                                  | Overridable at Cube level |
+| ----------------------------------------------------- | -------------------- | ------------------------------------------------------------ | ------------------------- |
+| kylin.env                                             | Dev                  | Whether this env is a Dev, QA, or Prod environment           | No                        |
+| kylin.env.hdfs-working-dir                            | /kylin               | Working directory on HDFS                                    | No                        |
+| kylin.env.zookeeper-base-path                         | /kylin               | Path on ZK                                                   | No                        |
+| kylin.env.zookeeper-connect-string                    |                      | ZK connection string; If blank, use HBase's ZK               | No                        |
+| kylin.env.zookeeper-acl-enabled                       | false                |                                                              | No                        |
+| kylin.env.zookeeper.zk-auth                           | digest:ADMIN:KYLIN   |                                                              | No                        |
+| kylin.env.zookeeper.zk-acl                            | world:anyone:rwcda   |                                                              | No                        |
+| kylin.metadata.url                                    | kylin_metadata@hbase | Kylin metadata storage                                       | No                        |
+| kylin.metadata.sync-retries                           | 3                    |                                                              | No                        |
+| kylin.metadata.sync-error-handler                     |                      |                                                              | No                        |
+| kylin.metadata.check-copy-on-write                    | false                |                                                              | No                        |
+| kylin.metadata.hbase-client-scanner-timeout-period    | 10000                |                                                              | No                        |
+| kylin.metadata.hbase-rpc-timeout                      | 5000                 |                                                              | No                        |
+| kylin.metadata.hbase-client-retries-number            | 1                    |                                                              | No                        |
+| kylin.dictionary.use-forest-trie                      | true                 |                                                              | No                        |
+| kylin.dictionary.forest-trie-max-mb                   | 500                  |                                                              | No                        |
+| kylin.dictionary.max-cache-entry                      | 3000                 |                                                              | No                        |
+| kylin.dictionary.growing-enabled                      | false                |                                                              | No                        |
+| kylin.dictionary.append-entry-size                    | 10000000             |                                                              | No                        |
+| kylin.dictionary.append-max-versions                  | 3                    |                                                              | No                        |
+| kylin.dictionary.append-version-ttl                   | 259200000            |                                                              | No                        |
+| kylin.snapshot.max-cache-entry                        | 500                  |                                                              | No                        |
+| kylin.snapshot.max-mb                                 | 300                  |                                                              | No                        |
+| kylin.snapshot.ext.shard-mb                           | 500                  |                                                              | No                        |
+| kylin.snapshot.ext.local.cache.path                   | lookup_cache         |                                                              | No                        |
+| kylin.snapshot.ext.local.cache.max-size-gb            | 200                  |                                                              | No                        |
+| kylin.cube.size-estimate-ratio                        | 0.25                 |                                                              | Yes                       |
+| kylin.cube.size-estimate-memhungry-ratio              | 0.05                 | Deprecated                                                   | Yes                       |
+| kylin.cube.size-estimate-countdistinct-ratio          | 0.05                 |                                                              | Yes                       |
+| kylin.cube.algorithm                                  | auto                 | Cubing algorithm for MR engine, other options: layer, inmem  | Yes                       |
+| kylin.cube.algorithm.layer-or-inmem-threshold         | 7                    |                                                              | Yes                       |
+| kylin.cube.algorithm.inmem-split-limit                | 500                  |                                                              | Yes                       |
+| kylin.cube.algorithm.inmem-concurrent-threads         | 1                    |                                                              | Yes                       |
+| kylin.cube.ignore-signature-inconsistency             | false                |                                                              |                           |
+| kylin.cube.aggrgroup.max-combination                  | 4096                 | Max number of cuboids in a Cube                              | Yes                       |
+| kylin.cube.aggrgroup.is-mandatory-only-valid          | false                | Whether to allow a Cube that only has the base cuboid.       | Yes                       |
+| kylin.cube.rowkey.max-size                            | 63                   | Max columns in Rowkey                                        | No                        |
+| kylin.metadata.dimension-encoding-max-length          | 256                  | Max length for one dimension's encoding                      | Yes                       |
+| kylin.cube.max-building-segments                      | 10                   | Max building segments in one Cube                            | Yes                       |
+| kylin.cube.allow-appear-in-multiple-projects          | false                | Whether to allow a Cube to appear in multiple projects       | No                        |
+| kylin.cube.gtscanrequest-serialization-level          | 1                    |                                                              |                           |
+| kylin.cube.is-automerge-enabled                       | true                 | Whether to enable auto merge.                                | Yes                       |
+| kylin.job.log-dir                                     | /tmp/kylin/logs      |                                                              |                           |
+| kylin.job.allow-empty-segment                         | true                 | Whether to tolerate an empty data source.                    | Yes                       |
+| kylin.job.max-concurrent-jobs                         | 10                   | Max concurrent running jobs                                  | No                        |
+| kylin.job.sampling-percentage                         | 100                  | Data sampling percentage for calculating Cube statistics; defaults to all. | Yes                       |
+| kylin.job.notification-enabled                        | false                | Whether to send an email notification when a job errors or succeeds. | No                        |
+| kylin.job.notification-mail-enable-starttls           | false                |                                                              | No                        |
+| kylin.job.notification-mail-port                      | 25                   |                                                              | No                        |
+| kylin.job.notification-mail-host                      |                      |                                                              | No                        |
+| kylin.job.notification-mail-username                  |                      |                                                              | No                        |
+| kylin.job.notification-mail-password                  |                      |                                                              | No                        |
+| kylin.job.notification-mail-sender                    |                      |                                                              | No                        |
+| kylin.job.notification-admin-emails                   |                      |                                                              | No                        |
+| kylin.job.retry                                       | 0                    |                                                              | No                        |
+| kylin.job.scheduler.priority-considered               | false                |                                                              | No                        |
+| kylin.job.scheduler.priority-bar-fetch-from-queue     | 20                   |                                                              | No                        |
+| kylin.job.scheduler.poll-interval-second              | 30                   |                                                              | No                        |
+| kylin.job.error-record-threshold                      | 0                    |                                                              | No                        |
+| kylin.source.hive.keep-flat-table                     | false                | Whether to keep the intermediate Hive table after the job finishes. | No                        |
+| kylin.source.hive.database-for-flat-table             | default              | Hive database to create the intermediate table.              | No                        |
+| kylin.source.hive.flat-table-storage-format           | SEQUENCEFILE         |                                                              | No                        |
+| kylin.source.hive.flat-table-field-delimiter          | \u001F               |                                                              | No                        |
+| kylin.source.hive.redistribute-flat-table             | true                 | Whether or not to redistribute the flat table.               | Yes                       |
+| kylin.source.hive.client                              | cli                  |                                                              | No                        |
+| kylin.source.hive.beeline-shell                       | beeline              |                                                              | No                        |
+| kylin.source.hive.beeline-params                      |                      |                                                              | No                        |
+| kylin.source.hive.enable-sparksql-for-table-ops       | false                |                                                              | No                        |
+| kylin.source.hive.sparksql-beeline-shell              |                      |                                                              | No                        |
+| kylin.source.hive.sparksql-beeline-params             |                      |                                                              | No                        |
+| kylin.source.hive.table-dir-create-first              | false                |                                                              | No                        |
+| kylin.source.hive.flat-table-cluster-by-dict-column   |                      |                                                              |                           |
+| kylin.source.hive.default-varchar-precision           | 256                  |                                                              | No                        |
+| kylin.source.hive.default-char-precision              | 255                  |                                                              | No                        |
+| kylin.source.hive.default-decimal-precision           | 19                   |                                                              | No                        |
+| kylin.source.hive.default-decimal-scale               | 4                    |                                                              | No                        |
+| kylin.source.jdbc.connection-url                      |                      |                                                              |                           |
+| kylin.source.jdbc.driver                              |                      |                                                              |                           |
+| kylin.source.jdbc.dialect                             | default              |                                                              |                           |
+| kylin.source.jdbc.user                                |                      |                                                              |                           |
+| kylin.source.jdbc.pass                                |                      |                                                              |                           |
+| kylin.source.jdbc.sqoop-home                          |                      |                                                              |                           |
+| kylin.source.jdbc.sqoop-mapper-num                    | 4                    |                                                              |                           |
+| kylin.source.jdbc.field-delimiter                     | \|                   |                                                              |                           |
+| kylin.storage.default                                 | 2                    |                                                              | No                        |
+| kylin.storage.hbase.table-name-prefix                 | KYLIN_               |                                                              | No                        |
+| kylin.storage.hbase.namespace                         | default              |                                                              | No                        |
+| kylin.storage.hbase.cluster-fs                        |                      |                                                              |                           |
+| kylin.storage.hbase.cluster-hdfs-config-file          |                      |                                                              |                           |
+| kylin.storage.hbase.coprocessor-local-jar             |                      |                                                              |                           |
+| kylin.storage.hbase.min-region-count                  | 1                    |                                                              |                           |
+| kylin.storage.hbase.max-region-count                  | 500                  |                                                              |                           |
+| kylin.storage.hbase.hfile-size-gb                     | 2.0                  |                                                              |                           |
+| kylin.storage.hbase.run-local-coprocessor             | false                |                                                              |                           |
+| kylin.storage.hbase.coprocessor-mem-gb                | 3.0                  |                                                              |                           |
+| kylin.storage.partition.aggr-spill-enabled            | true                 |                                                              |                           |
+| kylin.storage.partition.max-scan-bytes                | 3221225472           |                                                              |                           |
+| kylin.storage.hbase.coprocessor-timeout-seconds       | 0                    |                                                              |                           |
+| kylin.storage.hbase.max-fuzzykey-scan                 | 200                  |                                                              |                           |
+| kylin.storage.hbase.max-fuzzykey-scan-split           | 1                    |                                                              |                           |
+| kylin.storage.hbase.max-visit-scanrange               | 1000000              |                                                              |                           |
+| kylin.storage.hbase.scan-cache-rows                   | 1024                 |                                                              |                           |
+| kylin.storage.hbase.region-cut-gb                     | 5.0                  |                                                              |                           |
+| kylin.storage.hbase.max-scan-result-bytes             | 5242880              |                                                              |                        |
+| kylin.storage.hbase.compression-codec                 | none                 |                                                              |                           |
+| kylin.storage.hbase.rowkey-encoding                   | FAST_DIFF            |                                                              |                           |
+| kylin.storage.hbase.block-size-bytes                  | 1048576              |                                                              |                           |
+| kylin.storage.hbase.small-family-block-size-bytes     | 65536                |                                                              |                           |
+| kylin.storage.hbase.owner-tag                         |                      |                                                              |                           |
+| kylin.storage.hbase.endpoint-compress-result          | true                 |                                                              |                           |
+| kylin.storage.hbase.max-hconnection-threads           | 2048                 |                                                              |                           |
+| kylin.storage.hbase.core-hconnection-threads          | 2048                 |                                                              |                           |
+| kylin.storage.hbase.hconnection-threads-alive-seconds | 60                   |                                                              |                           |
+| kylin.engine.mr.lib-dir                               |                      |                                                              |                           |
+| kylin.engine.mr.reduce-input-mb                       | 500                  |                                                              |                           |
+| kylin.engine.mr.reduce-count-ratio                    | 1.0                  |                                                              |                           |
+| kylin.engine.mr.min-reducer-number                    | 1                    |                                                              |                           |
+| kylin.engine.mr.max-reducer-number                    | 500                  |                                                              |                           |
+| kylin.engine.mr.mapper-input-rows                     | 1000000              |                                                              |                           |
+| kylin.engine.mr.max-cuboid-stats-calculator-number    | 1                    |                                                              |                           |
+| kylin.engine.mr.uhc-reducer-count                     | 1                    |                                                              |                           |
+| kylin.engine.mr.build-uhc-dict-in-additional-step     | false                |                                                              |                           |
+| kylin.engine.mr.build-dict-in-reducer                 | true                 |                                                              |                           |
+| kylin.engine.mr.yarn-check-interval-seconds           | 10                   |                                                              |                           |
+| kylin.env.hadoop-conf-dir                             |                      | Hadoop conf directory; if not specified, parsed from the environment. | No                        |
+| kylin.engine.spark.rdd-partition-cut-mb               | 10.0                 | Spark Cubing RDD partition split size.                       | Yes                       |
+| kylin.engine.spark.min-partition                      | 1                    | Spark Cubing RDD min partition number                        | Yes                       |
+| kylin.engine.spark.max-partition                      | 5000                 | RDD max partition number                                     | Yes                       |
+| kylin.engine.spark.storage-level                      | MEMORY_AND_DISK_SER  | RDD persistent level.                                        | Yes                       |
+| kylin.query.skip-empty-segments                       | true                 | Whether to directly skip empty segments (size 0 in metadata) when running SQL queries. | Yes                       |
+| kylin.query.force-limit                               | -1                   |                                                              |                           |
+| kylin.query.max-scan-bytes                            | 0                    |                                                              |                           |
+| kylin.query.max-return-rows                           | 5000000              |                                                              |                           |
+| kylin.query.large-query-threshold                     | 1000000              |                                                              |                           |
+| kylin.query.cache-threshold-duration                  | 2000                 |                                                              |                           |
+| kylin.query.cache-threshold-scan-count                | 10240                |                                                              |                           |
+| kylin.query.cache-threshold-scan-bytes                | 1048576              |                                                              |                           |
+| kylin.query.security-enabled                          | true                 |                                                              |                           |
+| kylin.query.cache-enabled                             | true                 |                                                              |                           |
+| kylin.query.timeout-seconds                           | 0                    |                                                              |                           |
+| kylin.query.pushdown.runner-class-name                |                      |                                                              |                           |
+| kylin.query.pushdown.update-enabled                   | false                |                                                              |                           |
+| kylin.query.pushdown.cache-enabled                    | false                |                                                              |                           |
+| kylin.query.pushdown.jdbc.url                         |                      |                                                              |                           |
+| kylin.query.pushdown.jdbc.driver                      |                      |                                                              |                           |
+| kylin.query.pushdown.jdbc.username                    |                      |                                                              |                           |
+| kylin.query.pushdown.jdbc.password                    |                      |                                                              |                           |
+| kylin.query.pushdown.jdbc.pool-max-total              | 8                    |                                                              |                           |
+| kylin.query.pushdown.jdbc.pool-max-idle               | 8                    |                                                              |                           |
+| kylin.query.pushdown.jdbc.pool-min-idle               | 0                    |                                                              |                           |
+| kylin.query.security.table-acl-enabled                | true                 |                                                              | No                        |
+| kylin.server.mode                                     | all                  | Kylin node mode: all\|job\|query.                            | No                        |
+| kylin.server.cluster-servers                          | localhost:7070       |                                                              | No                        |
+| kylin.server.cluster-name                             |                      |                                                              | No                        |
+| kylin.server.query-metrics-enabled                    | false                |                                                              | No                        |
+| kylin.server.query-metrics2-enabled                   | false                |                                                              | No                        |
+| kylin.server.auth-user-cache.expire-seconds           | 300                  |                                                              | No                        |
+| kylin.server.auth-user-cache.max-entries              | 100                  |                                                              | No                        |
+| kylin.server.external-acl-provider                    |                      |                                                              | No                        |
+| kylin.security.ldap.user-search-base                  |                      |                                                              | No                        |
+| kylin.security.ldap.user-group-search-base            |                      |                                                              | No                        |
+| kylin.security.acl.admin-role                         |                      |                                                              | No                        |
+| kylin.web.timezone                                    | PST                  |                                                              | No                        |
+| kylin.web.cross-domain-enabled                        | true                 |                                                              | No                        |
+| kylin.web.export-allow-admin                          | true                 |                                                              | No                        |
+| kylin.web.export-allow-other                          | true                 |                                                              | No                        |
+| kylin.web.dashboard-enabled                           | false                |                                                              | No            |
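+
+For the parameters marked "Yes" in the last column, a Cube-level overwrite sketch could look like the following (values are only examples):
+
+{% highlight Groff markup %}
+kylin.cube.algorithm=layer
+kylin.cube.aggrgroup.max-combination=8192
+kylin.query.skip-empty-segments=false
+{% endhighlight %}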
+
diff --git a/website/_docs/install/index.cn.md b/website/_docs/install/index.cn.md
index 5610c0f..a0c7527 100644
--- a/website/_docs/install/index.cn.md
+++ b/website/_docs/install/index.cn.md
@@ -1,41 +1,79 @@
 ---
-layout: docs
-title:  "Installation Guide"
+layout: docs-cn
+title:  "安装指南"
 categories: install
 permalink: /cn/docs/install/index.html
-version: v0.7.2
-since: v0.7.1
 ---
 
-### Environment
+## Software Requirements
 
-Kylin requires a properly setup hadoop environment to run. Following are the minimal request to run Kylin, for more detial, please check this reference: [Hadoop Environment](hadoop_env.html).
+* Hadoop: 2.7+
+* Hive: 0.13 - 1.2.1+
+* HBase: 1.1+
+* Spark (optional) 2.1.1+
+* Kafka (optional) 0.10.0+
+* JDK: 1.7+
+* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
 
-## Prerequisites on Hadoop
+Tested on Hortonworks HDP 2.2 - 2.6, Cloudera CDH 5.7 - 5.11, AWS EMR 5.7 - 5.10, and Azure HDInsight 3.5 - 3.6.
 
-* Hadoop: 2.4+
-* Hive: 0.13+
-* HBase: 0.98+, 1.x
-* JDK: 1.7+  
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
+For trial and development purposes, we recommend trying Kylin with an all-in-one sandbox, such as the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), with at least 10 GB of memory allocated. We recommend using bridged mode instead of NAT mode in the Virtual Box settings.
 
+## Hardware Requirements
 
-It is most common to install Kylin on a Hadoop client machine. It can be used for demo use, or for those who want to host their own web site to provide Kylin service. The scenario is depicted as:
+The minimum server configuration for running Kylin is 4 core CPU, 16 GB memory and 100 GB disk. For high-load scenarios, a 24 core CPU, 64 GB memory or higher configuration is recommended.
 
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
 
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
+## Hadoop Environment
 
-Except for some prerequisite software installations, the core of Kylin installation is accomplished by running a single script. After running the script, you will be able to build sample cube and query the tables behind the cubes via a unified web interface.
+Kylin relies on a Hadoop cluster to process large data sets. You need to prepare a well-configured Hadoop cluster for Kylin to run on, with services including HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and others. It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines (`hive`, `hbase`, `hadoop`, and others).
 
-### Install Kylin
+Kylin can be launched on any node of the Hadoop cluster. For convenience, you can run Kylin on the master node. But for better stability, we recommend deploying it on a clean Hadoop client node, where the `hive`, `hbase`, `hadoop` and `hdfs` command lines are installed and the client configurations (core-site.xml, hive-site.xml, hbase-site.xml, etc.) are properly set up and automatically synced with the other nodes. The Linux account running Kylin must have permission to access the Hadoop cluster, including creating/writing HDFS folders, Hive tables and HBase tables, and submitting MR jobs.
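+
+A quick sanity check of those permissions could look like this (the folder name is only an example):
+{% highlight Groff markup %}
+hadoop fs -mkdir /kylin
+hive -e "show databases"
+echo "list" | hbase shell
+{% endhighlight %}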
 
-1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run hadoop, hive and hbase cmd in shell. If you are not so sure, you can run **bin/check-env.sh**, it will print out the detail information if you have some environment issues.
-4. To start Kylin, simply run **bin/kylin.sh start**
-5. To stop Kylin, simply run **bin/kylin.sh stop**
+## Install Kylin
 
-After Kylin started you can visit <http://your_hostname:7070/kylin>. The username/password is ADMIN/KYLIN. 
+ * Download a Kylin binary package for your Hadoop version from the latest Apache download site. For example, Kylin 2.3.1 for HBase 1.x, from the US mirror:
+{% highlight Groff markup %}
+cd /usr/local
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.3.1/apache-kylin-2.3.1-hbase1x-bin.tar.gz
+{% endhighlight %}
+ * Uncompress the tarball, then set the environment variable KYLIN_HOME to point to the Kylin folder
+{% highlight Groff markup %}
+tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz
+cd apache-kylin-2.3.1-bin
+export KYLIN_HOME=`pwd`
+{% endhighlight %}
+ * Make sure the user has permission to run the hadoop, hive and hbase commands in the shell. If you are not sure, run the `$KYLIN_HOME/bin/check-env.sh` script; it will print detailed information if there is any problem with your environment. If there is no error, the environment is fine.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/check-env.sh
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+-bash-4.1#
+{% endhighlight %}
+ * Run the `$KYLIN_HOME/bin/kylin.sh start` script to start Kylin. After the server starts, you can check the runtime log in `$KYLIN_HOME/logs/kylin.log`.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh start
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Retrieving hive dependency...
+Retrieving hbase dependency...
+Retrieving hadoop conf dir...
+Retrieving kafka dependency...
+Retrieving Spark dependency...
+...
+A new Kylin instance is started by root. To stop it, run 'kylin.sh stop'
+Check the log at /usr/local/apache-kylin-2.3.1-bin/logs/kylin.log
+Web UI is at http://<hostname>:7070/kylin
+-bash-4.1#
+{% endhighlight %}
+ * After Kylin starts, you can visit <http://hostname:7070/kylin> in your browser. The initial username/password is ADMIN/KYLIN.
+ * Run the `$KYLIN_HOME/bin/kylin.sh stop` script to stop Kylin.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh stop
+Retrieving hadoop conf dir... 
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Stopping Kylin: 7014
+Kylin with pid 7014 has been stopped.
+{% endhighlight %}
 
 
diff --git a/website/_docs23/install/kylin_aws_emr.cn.md b/website/_docs/install/kylin_aws_emr.cn.md
similarity index 98%
copy from website/_docs23/install/kylin_aws_emr.cn.md
copy to website/_docs/install/kylin_aws_emr.cn.md
index a7b5274..be1a8b9 100644
--- a/website/_docs23/install/kylin_aws_emr.cn.md
+++ b/website/_docs/install/kylin_aws_emr.cn.md
@@ -1,8 +1,8 @@
 ---
-layout: docs23-cn
-title:  "在 AWS EMR 上 安装 Kylin"
+layout: docs-cn
+title:  "在 AWS EMR 上安装 Kylin"
 categories: install
-permalink: /cn/docs23/install/kylin_aws_emr.html
+permalink: /cn/docs/install/kylin_aws_emr.html
 ---
 
 Many users run Hadoop on public clouds like AWS today. Apache Kylin, built with standard Hadoop/HBase APIs, supports most mainstream Hadoop distributions; the current version, Kylin v2.2, supports AWS EMR 5.0 - 5.10. This document introduces how to run Kylin on EMR.
diff --git a/website/_docs/install/kylin_aws_emr.md b/website/_docs/install/kylin_aws_emr.md
index dd1357f..dc8003c 100644
--- a/website/_docs/install/kylin_aws_emr.md
+++ b/website/_docs/install/kylin_aws_emr.md
@@ -72,7 +72,7 @@ If using HDFS as Kylin working directory, you just leave configurations unchange
 kylin.env.hdfs-working-dir=/kylin
 ```
 
-Before you shudown/restart the cluster, you must backup the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lost data and couldn't recover the cluster later.
+Before you shutdown/restart the cluster, you must backup the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lose data and be unable to recover the cluster later.
 
 - Use S3 as "kylin.env.hdfs-working-dir" 
 
diff --git a/website/_docs23/install/kylin_cluster.cn.md b/website/_docs/install/kylin_cluster.cn.md
similarity index 94%
copy from website/_docs23/install/kylin_cluster.cn.md
copy to website/_docs/install/kylin_cluster.cn.md
index 9250e95..39e3ffa 100644
--- a/website/_docs23/install/kylin_cluster.cn.md
+++ b/website/_docs/install/kylin_cluster.cn.md
@@ -1,8 +1,8 @@
 ---
-layout: docs23-cn
-title:  "Cluster 模式下部署"
+layout: docs-cn
+title:  "集群模式部署"
 categories: install
-permalink: /cn/docs23/install/kylin_cluster.html
+permalink: /cn/docs/install/kylin_cluster.html
 ---
 
 
diff --git a/website/_docs/install/kylin_docker.cn.md b/website/_docs/install/kylin_docker.cn.md
new file mode 100644
index 0000000..a02218a
--- /dev/null
+++ b/website/_docs/install/kylin_docker.cn.md
@@ -0,0 +1,10 @@
+---
+layout: docs-cn
+title:  "用 Docker 运行 Kylin"
+categories: install
+permalink: /cn/docs/install/kylin_docker.html
+version: v1.5.3
+since: v1.5.2
+---
+
+Apache Kylin runs as a client of a Hadoop cluster, so running it in a Docker container is reasonable; please check the github project [kylin-docker](https://github.com/Kyligence/kylin-docker/).
diff --git a/website/_docs/tutorial/create_cube.cn.md b/website/_docs/tutorial/create_cube.cn.md
index 324df7b..612738f 100644
--- a/website/_docs/tutorial/create_cube.cn.md
+++ b/website/_docs/tutorial/create_cube.cn.md
@@ -93,7 +93,7 @@ since: v0.7.1
 12. Click `Save` and then select `Yes` to save the data model. Once created, the data model will be listed in the `Models` list on the left.
    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
 
-### III. Create Cube
+### IV. Create Cube
 
 After the data model is created, you can start creating the cube.
 Click `Model` at the top, then click the `Models` tab. Click the `+New` button and select `New Cube` from the drop-down list.
@@ -157,12 +157,12 @@ cube 名字可以使用字母,数字和下划线(空格不允许)。`Notif
    * EXTENDED_COLUMN
   Extended_Column as a measure rather than a dimension saves space. One column together with another column can generate new columns.
    
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.png)
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.PNG)
 
    * PERCENTILE
   Percentile represents the percentage. The larger the value, the smaller the error. 100 is the most suitable value.
 
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.png)
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.PNG)
 
 **Step 4. Refresh Setting**
 
@@ -212,12 +212,12 @@ cube 名字可以使用字母,数字和下划线(空格不允许)。`Notif
 
 Kylin allows overriding part of the kylin.properties configuration at the Cube level; you can define the overriding properties here. If you have nothing to configure, click the `Next` button.
 
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.png)
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.PNG)
 
 **步骤7. 概览 & 保存**
 
 你可以概览你的 cube 并返回之前的步骤进行修改。点击 `Save` 按钮完成 cube 创建。
 
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.png)
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.PNG)
 
 恭喜,cube 创建好了,你可以去构建和玩它了。
diff --git a/website/_docs/tutorial/create_cube.md b/website/_docs/tutorial/create_cube.md
index 9001b03..9d92cbd 100644
--- a/website/_docs/tutorial/create_cube.md
+++ b/website/_docs/tutorial/create_cube.md
@@ -150,12 +150,12 @@ You can use letters, numbers and '_' to name your cube (blank space in name is n
 * EXTENDED_COLUMN
    Extended_Column as a measure rather than a dimension is to save space. One column with another column can generate new columns.
 
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.png)
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.PNG)
 
 * PERCENTILE
    Percentile represent the percentage. The larger the value, the smaller the error. 100 is the most suitable.
 
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.png)
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.PNG)
 
 **Step 4. Refresh Setting**
 
@@ -205,12 +205,12 @@ Please note: "Global Dictionary" and "Segment Dictionary" are one-way dictionary
 
 Kylin allows overwriting system configurations (conf/kylin.properties) at Cube level. You can add the key/values that you want to overwrite here. If you don't have anything to config, click the `Next` button.
 
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.png)
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.PNG)
 
 **Step 7. Overview & Save**
 
 You can overview your cube and go back to previous step to modify it. Click the `Save` button to complete the cube creation.
 
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.png)
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.PNG)
 
 Cheers! Now the cube is created, you can go ahead to build and play it.
diff --git a/website/_docs/tutorial/cube_build_job.cn.md b/website/_docs/tutorial/cube_build_job.cn.md
index 3ba1b0d..c1b1be1 100644
--- a/website/_docs/tutorial/cube_build_job.cn.md
+++ b/website/_docs/tutorial/cube_build_job.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs-cn
-title: Cube 构建和 Job 监控
+title: "Cube 构建和 Job 监控"
 categories: 教程
 permalink: /cn/docs/tutorial/cube_build_job.html
 version: v1.2
diff --git a/website/_docs/tutorial/cube_build_performance.cn.md b/website/_docs/tutorial/cube_build_performance.cn.md
new file mode 100644
index 0000000..b9e770c
--- /dev/null
+++ b/website/_docs/tutorial/cube_build_performance.cn.md
@@ -0,0 +1,266 @@
+---
+layout: docs-cn
+title: "优化 Cube 构建"
+categories: tutorial
+permalink: /cn/docs/tutorial/cube_build_performance.html
+---
+ *本教程是关于如何一步步优化 cube build 的样例。* 
+ 
+在这个场景中我们尝试优化一个简单的 Cube,它使用 1 张 fact 表和 1 张 lookup 表 (日期 Dimension)。在真正调整之前,请先通过 [优化 Cube Build](/docs20/howto/howto_optimize_build.html) 大体了解 Cube build 的过程。
+
+![]( /images/tutorial/2.0/cube_build_performance/01.png)
+
+基准是:
+
+* 一个 Measure:余额 (Balance),总是计算其 Max,Min 和 Count
+* 所有 Dim_date (10 项) 会被用作 dimensions 
+* 输入为 Hive CSV 外部表 
+* 输出为 HBase 中未压缩的 Cube 
+
+使用这些配置,结果为:13 分钟 build 一个 20 Mb 的 cube (Cube_01)
+
+### Cube_02:减少组合
+第一次提升,在 Dimensions 上使用 Joint 和 Hierarchy 来减少组合 (cuboids 的数量)。
+
+使用月,周,工作日和季度的 Joint Dimension 将所有的 ID 和 Text 组合在一起
+
+![]( /images/tutorial/2.0/cube_build_performance/02.png)
+
+	
+定义 Id_date 和 Year 作为 Hierarchy Dimension
+
+这将其大小减至 0.72 MB 而时间减至 5 分钟
+
+[Kylin 2149](https://issues.apache.org/jira/browse/KYLIN-2149),理想情况下,这些 Hierarchies 也能够这样定义:
+* Id_weekday > Id_date
+* Id_Month > Id_date
+* Id_Quarter > Id_date
+* Id_week > Id_date
+
+现在,还不能对同一 dimension 一起使用 Joint 和 Hierarchy。
+
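+上述 Joint 与 Hierarchy 在 cube desc JSON 的 aggregation group 中大致如下(仅为示意,维度名为假设,并非该 cube 的真实定义):
+
+{% highlight Groff markup %}
+"aggregation_groups": [
+  {
+    "includes": ["YEAR", "ID_DATE", "ID_MONTH", "MONTH", "ID_QUARTER", "QUARTER", "ID_WEEK", "WEEK", "ID_WEEKDAY", "WEEKDAY"],
+    "select_rule": {
+      "hierarchy_dims": [["YEAR", "ID_DATE"]],
+      "mandatory_dims": [],
+      "joint_dims": [
+        ["ID_MONTH", "MONTH"],
+        ["ID_QUARTER", "QUARTER"],
+        ["ID_WEEK", "WEEK"],
+        ["ID_WEEKDAY", "WEEKDAY"]
+      ]
+    }
+  }
+]
+{% endhighlight %}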
+
+### Cube_03:输出压缩
+下一次提升,使用 Snappy 压缩 HBase Cube:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/03.png)
+
+另一个选项为 Gzip:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/04.png)
+
+
+压缩输出的结果为:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/05.png)
+
+Snappy 和 Gzip 的区别在时间上少于 1%,但在大小上有 18% 的差别
+
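+HBase 压缩编码也可以通过配置指定,而不只是在 GUI 中选择;以下仅为示意(属性名以您所用版本的配置文档为准):
+
+{% highlight Groff markup %}
+kylin.storage.hbase.compression-codec=snappy
+{% endhighlight %}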
+
+### Cube_04:压缩 Hive 表
+时间分布如下:
+
+![]( /images/tutorial/2.0/cube_build_performance/06.png)
+
+
+按概念分组的详细信息 :
+
+![]( /images/tutorial/2.0/cube_build_performance/07.png)
+
+67% 的时间用来 build / process flat 表,约 30% 用来 build cube
+
+大量时间用在了第一步。
+
+这种时间分布在 measures 很少且 dim 很少 (或优化得很好) 的 cube 中是很典型的。
+
+
+尝试在 Hive 输入表中使用 ORC 格式和压缩(Snappy):
+
+![]( /images/tutorial/2.0/cube_build_performance/08.png)
+
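+一个最简的转换示例(表名与列名均为假设),将 CSV 外部表转存为 ORC + Snappy:
+
+{% highlight Groff markup %}
+CREATE TABLE fact_posiciones_orc
+STORED AS ORC TBLPROPERTIES ("orc.compress"="SNAPPY")
+AS SELECT * FROM fact_posiciones;
+{% endhighlight %}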
+
+前三步 (Flat Table) 的时间已经提升了一半。
+
+其他列式格式可以被测试:
+
+![]( /images/tutorial/2.0/cube_build_performance/19.png)
+
+
+* ORC
+* 使用 Snappy 的 ORC 压缩
+
+但结果比使用 Sequence 文件的效果差。
+
+请看:[Shaofengshi in MailList](http://apache-kylin.74782.x6.nabble.com/Kylin-Performance-td6713.html#a6767) 关于这个的评论
+
+第二步是重新分配 Flat Hive 表:
+
+![]( /images/tutorial/2.0/cube_build_performance/20.png)
+
+这一步只是一个简单的 row count,可以有两种近似优化:
+* 如果不需要精确值,可以直接统计 fact 表的行数 → 这可以与步骤 1 并行执行 (且 99% 的情况下是精确的)
+
+![]( /images/tutorial/2.0/cube_build_performance/21.png)
+
+
+* 将来的版本中 (KYLIN-2165,v2.0),这一步将利用 Hive 表的统计信息实现。
+
+
+
+### Cube_05:Hive 表 (失败) 分区
+Rows 的分布为:
+
+Table | Rows
+--- | --- 
+Fact Table | 3,900,000 
+Dim Date | 2,100 
+
+build flat 表的查询语句 (简单版本):
+{% highlight Groff markup %}
+SELECT
+ DIM_DATE.X
+,DIM_DATE.Y
+,FACT_POSICIONES.BALANCE
+FROM FACT_POSICIONES INNER JOIN DIM_DATE
+    ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA
+WHERE (ID_DATE >= '2016-12-08' AND ID_DATE < '2016-12-23')
+{% endhighlight %}
+
+这里存在的问题是,Hive 只使用 1 个 Map 创建 Flat 表。重要的是我们要改变这种行为。解决方案是在同一列将 DIM 和 FACT 分区
+
+* 选项 1:在 Hive 表中使用 id_date 作为分区列。这有一个大问题:Hive metastore 适合管理几百个分区,而不是几千个 (在 [Hive 9452](https://issues.apache.org/jira/browse/HIVE-9452) 中有一个解决该问题的方法,但现在还未完成)
+* 选项 2:生成一个新列,如 Monthslot(示例见下)。
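+
+例如(仅为示意,表名与派生逻辑均为假设),可以在重建表时从 id_fecha 派生出 monthslot:
+
+{% highlight Groff markup %}
+CREATE TABLE fact_posiciones_p STORED AS ORC AS
+SELECT f.*, substr(f.id_fecha, 1, 7) AS monthslot
+FROM fact_posiciones f;
+{% endhighlight %}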
+
+![]( /images/tutorial/2.0/cube_build_performance/09.png)
+
+
+为 dim 和 fact 表添加同一个列
+
+现在,用这个新的条件 join 表来更新数据模型
+
+![]( /images/tutorial/2.0/cube_build_performance/10.png)
+
+	
+生成 flat 表的新查询类似于:
+{% highlight Groff markup %}
+SELECT *
+    FROM FACT_POSICIONES INNER JOIN DIM_DATE
+        ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA
+        AND FACT_POSICIONES.MONTHSLOT = DIM_DATE.MONTHSLOT
+{% endhighlight %}
+
+用这个数据模型 rebuild 新 cube
+
+结果,性能更糟了 :(。尝试了几种方法后,还是没找到解决方案
+
+![]( /images/tutorial/2.0/cube_build_performance/11.png)
+
+
+问题是分区没有被用来生成几个 Mappers
+
+![]( /images/tutorial/2.0/cube_build_performance/12.png)
+
+	
+(我和 ShaoFeng Shi 检查了这个问题。他认为问题在于这里的 rows 太少,而且我们使用的不是真实的 Hadoop 集群。请看这个 [tech note](http://kylin.apache.org/docs16/howto/howto_optimize_build.html))。
+	
+
+### 结果摘要
+
+![]( /images/tutorial/2.0/cube_build_performance/13.png)
+
+
+调整进度如下:
+* Hive 输入表压缩了
+* HBase 输出压缩了
+* 应用了减少 cardinality 组合的技术 (Joint,Derived,Hierarchy 和 Mandatory)
+* 为每一个 Dim 定制了编码器,并为 Row Key 中的 Dim 选择了最佳顺序
+
+
+
+现在,这里有三种类型的 cubes:
+* 在 dimensions 中使用低 cardinality 的 Cubes(如 cube 4,大多数时间用在 flat 表这一步)
+* 在 dimensions 中使用高 cardinality 的 Cubes(如 cube 6,大多数时间用于 Build cube,flat 表这一步少于 10%)
+* 第三种类型,超高 cardinality (UHC) 其超出了本文的范围
+
+
+### Cube 6:用高 cardinality Dimensions 的 Cube
+
+![]( /images/tutorial/2.0/cube_build_performance/22.png)
+
+在这个用例中 **72%** 的时间用来 build Cube
+
+这一步是 MapReduce 任务,您可以在 ![alt text](/images/tutorial/2.0/cube_build_performance/23.png) > ![alt text](/images/tutorial/2.0/cube_build_performance/24.png) 查看 YARN 中关于这一步的日志
+
+Map – Reduce 的性能怎样才能提升呢?简单的方式是增加 Mappers 和 Reducers 的数量 (等于增加并行度)。
+
+
+![]( /images/tutorial/2.0/cube_build_performance/25.png)
+
+
+**注意:** YARN / MapReduce 有很多参数可以配置,以适应您的系统。这里只关注其中一小部分。 
+
+(在我的系统中我可以分配 12 – 14 GB 和 8 cores 给 YARN 资源):
+
+* yarn.nodemanager.resource.memory-mb = 15 GB
+* yarn.scheduler.maximum-allocation-mb = 8 GB
+* yarn.nodemanager.resource.cpu-vcores = 8 cores
+有了这些配置,我们的最大理论并行度为 8。然而这里有一个问题:“3600 秒后超时了”
+
+![]( /images/tutorial/2.0/cube_build_performance/26.png)
+
+
+参数 mapreduce.task.timeout (默认为 1 小时) 定义了 Application Master (AM) 在未收到 Yarn Container 的 ACK 时等待的最长时间。一旦超时,AM 会杀死该 container 并重试 4 次 (结果都相同)
+
+问题在哪?问题是启动了 4 个 mappers,但每一个 mapper 需要超过 4 GB 内存才能完成
+
+* 解决方案 1:增加 RAM 给 YARN 
+* 解决方案 2:增加在 Mapper 步骤中使用的 vCores 数量来减少 RAM 使用
+* 解决方案 3:您可以调整每个 node 分配给 YARN 的最大 RAM (yarn.nodemanager.resource.memory-mb),并对每个 container 的最小 RAM (yarn.scheduler.minimum-allocation-mb) 进行实验(示意配置见下)。如果您增加了每个 container 的最小 RAM,YARN 将会减少 Mappers 的数量。
+
+![]( /images/tutorial/2.0/cube_build_performance/27.png)
+
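+上面提到的两个 YARN 参数在 yarn-site.xml 中大致如下(数值仅为示意,需按集群实际资源调整):
+
+{% highlight Groff markup %}
+<property>
+  <name>yarn.nodemanager.resource.memory-mb</name>
+  <value>15360</value>
+</property>
+<property>
+  <name>yarn.scheduler.minimum-allocation-mb</name>
+  <value>4096</value>
+</property>
+{% endhighlight %}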
+
+在最后两个用例中结果是相同的:降低并行度 ==> 
+* 现在我们只同时启动 3 个 mappers,第四个必须等待空闲资源
+* RAM 在 3 个 mappers 之间分配,这样它们就有足够的内存完成 task
+
+一个正常的 “Build Cube” 步骤中您将会在 YARN 日志中看到相似的消息:
+
+![]( /images/tutorial/2.0/cube_build_performance/28.png)
+
+
+如果您没有周期性地看见这类消息,也许您遇到了内存瓶颈。
+
+
+
+### Cube 7:提升 cube 响应时间
+我们尝试使用不同 aggregations groups 来提升一些非常重要 Dim 或有高 cardinality 的 Dim 的查询性能。
+
+在我们的用例中定义 3 个 Aggregations Groups:
+1. “Normal cube”
+2. 使用日期 Dim 和 Currency 的 Cube(就像 mandatory)
+3. 使用日期 Dim 和 Carteras_Desc 的 Cube(就像 mandatory)
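+
+在 cube desc JSON 中,这三个 Aggregation Groups 大致如下(仅为示意;DIM_A,DIM_B 代表其余维度):
+
+{% highlight Groff markup %}
+"aggregation_groups": [
+  { "includes": ["ID_DATE", "CURRENCY", "CARTERAS_DESC", "DIM_A", "DIM_B"],
+    "select_rule": { "hierarchy_dims": [], "mandatory_dims": [], "joint_dims": [] } },
+  { "includes": ["ID_DATE", "CURRENCY", "DIM_A", "DIM_B"],
+    "select_rule": { "hierarchy_dims": [], "mandatory_dims": ["ID_DATE", "CURRENCY"], "joint_dims": [] } },
+  { "includes": ["ID_DATE", "CARTERAS_DESC", "DIM_A", "DIM_B"],
+    "select_rule": { "hierarchy_dims": [], "mandatory_dims": ["ID_DATE", "CARTERAS_DESC"], "joint_dims": [] } }
+]
+{% endhighlight %}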
+
+![]( /images/tutorial/2.0/cube_build_performance/29.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/30.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/31.png)
+
+
+
+比较未使用 / 使用 AGGs:
+
+![]( /images/tutorial/2.0/cube_build_performance/32.png)
+
+
+多花 3% 的 build 时间和 0.6% 的空间,使用 currency 或 Carteras_Desc 的查询就会快很多。
+
+
+
+
diff --git a/website/_docs/tutorial/cube_build_performance.md b/website/_docs/tutorial/cube_build_performance.md
index 831d0c8..2979836 100755
--- a/website/_docs/tutorial/cube_build_performance.md
+++ b/website/_docs/tutorial/cube_build_performance.md
@@ -79,7 +79,7 @@ Try to use ORC Format and compression on Hive input table (Snappy):
 ![]( /images/tutorial/2.0/cube_build_performance/08.png)
 
 
-The time in the first three stree steps (Flat Table) has been improved by half.
+The time in the first three steps (Flat Table) has been improved by half.
 
 Other columnar formats can be tested:
 
@@ -200,14 +200,14 @@ How can the performance of Map – Reduce be improved? The easy way is to increa
 ![]( /images/tutorial/2.0/cube_build_performance/25.png)
 
 
-**NOTE:** YARN / MapReduce have a lot parameters to configure and adapt to theyour system. The focus here is only on small parts. 
+**NOTE:** YARN / MapReduce have a lot of parameters to configure and adapt to your system. The focus here is only on small parts. 
 
 (In my system I can assign 12 – 14 GB and 8 cores to YARN Resources):
 
 * yarn.nodemanager.resource.memory-mb = 15 GB
 * yarn.scheduler.maximum-allocation-mb = 8 GB
 * yarn.nodemanager.resource.cpu-vcores = 8 cores
-With this config our max theoreticaleorical grade of parallelismelist is 8. However, but this has a problem: “Timed out after 3600 secs”
+With this config our max theoretical grade of parallelism is 8. However, this has a problem: “Timed out after 3600 secs”
 
 ![]( /images/tutorial/2.0/cube_build_performance/26.png)
 
@@ -218,7 +218,7 @@ Where is the problem? The problem is that 4 mappers started, but each mapper nee
 
 * The solution 1: add more RAM to YARN 
 * The solution 2: increase vCores number used in Mapper step to reduce the RAM used
-* The solution 3: you can play with max RAM to YARN by node  (yarn.nodemanager.resource.memory-mb) and experiment with mimin RAM perto container (yarn.scheduler.minimum-allocation-mb). If you increase minimum RAM per container, YARN will reduce the numbers of Mappers     
+* The solution 3: you can play with max RAM to YARN by node (yarn.nodemanager.resource.memory-mb) and experiment with minimum RAM per container (yarn.scheduler.minimum-allocation-mb). If you increase minimum RAM per container, YARN will reduce the number of Mappers     
 
 ![]( /images/tutorial/2.0/cube_build_performance/27.png)
 
diff --git a/website/_docs/tutorial/cube_spark.cn.md b/website/_docs/tutorial/cube_spark.cn.md
new file mode 100644
index 0000000..be0b2c7
--- /dev/null
+++ b/website/_docs/tutorial/cube_spark.cn.md
@@ -0,0 +1,169 @@
+---
+layout: docs-cn
+title:  "用 Spark 构建 Cube"
+categories: tutorial
+permalink: /cn/docs/tutorial/cube_spark.html
+---
+Kylin v2.0 引入了 Spark cube engine,在 build cube 步骤中使用 Apache Spark 代替 MapReduce;您可以通过查看 [这篇博客](/blog/2017/02/23/by-layer-spark-cubing/) 的图片了解整体情况。本文档使用样例 cube 演示了如何试用这个新引擎。
+
+
+## 准备阶段
+您需要一个安装了 Kylin v2.1.0 及以上版本的 Hadoop 环境。本文使用 Hortonworks HDP 2.4 Sandbox VM,其中 Hadoop 组件和 Hive/HBase 已经启动。 
+
+## 安装 Kylin v2.1.0 及以上版本
+
+从 Kylin 的下载页面下载适用于 HBase 1.x 的 Kylin v2.1.0,然后在 */usr/local/* 文件夹中解压 tar 包:
+
+{% highlight Groff markup %}
+
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.1.0/apache-kylin-2.1.0-bin-hbase1x.tar.gz -P /tmp
+
+tar -zxvf /tmp/apache-kylin-2.1.0-bin-hbase1x.tar.gz -C /usr/local/
+
+export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin-hbase1x
+{% endhighlight %}
+
+## 准备 "kylin.env.hadoop-conf-dir"
+
+为使 Spark 运行在 Yarn 上,需指定 **HADOOP_CONF_DIR** 环境变量,其是一个包含 Hadoop(客户端) 配置文件的目录。许多 Hadoop 发行版将该目录设置为 "/etc/hadoop/conf";但 Kylin 不仅需要访问 HDFS,Yarn 和 Hive,还有 HBase,因此默认的目录可能并未包含所有需要的文件。在此用例中,您需要创建一个新的目录,然后将这些客户端文件 (core-site.xml,hdfs-site.xml,yarn-site.xml,hive-site.xml 和 hbase-site.xml) 拷贝或者链接到这个目录下。在 HDP 2.4 中,hive-tez 和 Spark 之间有个冲突,因此当为 Kylin 复制 hive-site.xml 时,需要将默认的 engine 由 "tez" 换为 "mr"。
+
+{% highlight Groff markup %}
+
+mkdir $KYLIN_HOME/hadoop-conf
+ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
+ln -s /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml 
+ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
+ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
+cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
+vi $KYLIN_HOME/hadoop-conf/hive-site.xml (change "hive.execution.engine" value from "tez" to "mr")
+
+{% endhighlight %}
+
+现在,在 kylin.properties 中设置属性 "kylin.env.hadoop-conf-dir" 好让 Kylin 知道这个目录:
+
+{% highlight Groff markup %}
+kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.1.0-bin-hbase1x/hadoop-conf
+{% endhighlight %}
+
+如果这个属性没有设置,Kylin 将会使用 "hive-site.xml" 中的默认目录;然而那个文件夹可能并没有 "hbase-site.xml",会导致 Spark 的 HBase/ZK 连接错误。
+
+## 检查 Spark 配置
+
+Kylin 在 $KYLIN_HOME/spark 中嵌入一个 Spark binary (v2.1.0),所有使用 *"kylin.engine.spark-conf."* 作为前缀的 Spark 配置属性都能在 $KYLIN_HOME/conf/kylin.properties 中进行管理。这些属性当运行提交 Spark job 时会被提取并应用;例如,如果您配置 "kylin.engine.spark-conf.spark.executor.memory=4G",Kylin 将会在执行 "spark-submit" 操作时使用 "--conf spark.executor.memory=4G" 作为参数。
+
+运行 Spark cubing 前,建议查看一下这些配置并根据您集群的情况进行自定义。下面是默认配置,也是 sandbox 最低要求的配置 (1 个 1GB memory 的 executor);通常一个集群,需要更多的 executors 且每一个至少有 4GB memory 和 2 cores:
+
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.yarn.queue=default
+kylin.engine.spark-conf.spark.executor.memory=1G
+kylin.engine.spark-conf.spark.executor.cores=2
+kylin.engine.spark-conf.spark.executor.instances=1
+kylin.engine.spark-conf.spark.eventLog.enabled=true
+kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
+kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
+
+#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
+
+## uncomment for HDP
+#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+
+{% endhighlight %}
+
+为了在 Hortonworks 平台上运行,需要将 "hdp.version" 指定为 Yarn 容器的 Java 选项,因此请取消注释 kylin.properties 中的最后三行。 
+
+除此之外,为了避免重复上传 Spark jar 包到 Yarn,您可以手动上传一次,然后配置 jar 包的 HDFS 路径;请注意,HDFS 路径必须是全限定名。
+
+{% highlight Groff markup %}
+jar cv0f spark-libs.jar -C $KYLIN_HOME/spark/jars/ .
+hadoop fs -mkdir -p /kylin/spark/
+hadoop fs -put spark-libs.jar /kylin/spark/
+{% endhighlight %}
+
+然后,要在 kylin.properties 中进行如下配置:
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.archive=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-libs.jar
+kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+{% endhighlight %}
+
+所有 "kylin.engine.spark-conf.*" 参数都可以在 Cube 或 Project 级别进行重写,这为用户提供了极大的灵活性。
+
+## 创建和修改样例 cube
+
+运行 sample.sh 创建样例 cube,然后启动 Kylin 服务器:
+
+{% highlight Groff markup %}
+
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+
+{% endhighlight %}
+
+Kylin 启动后,访问 Kylin 网站,在 "Advanced Setting" 页,编辑名为 "kylin_sales" 的 cube,将 "Cube Engine" 由 "MapReduce" 换成 "Spark":
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
+
+点击 "Next" 进入 "Configuration Overwrites" 页面,点击 "+Property" 添加属性 "kylin.engine.spark.rdd-partition-cut-mb" 其值为 "500" (理由如下):
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
+
+样例 cube 有两个耗尽内存的度量:"COUNT DISTINCT" 和 "TOPN(100)";当源数据较小时,它们的大小估计得不太准确:预估的大小会比真实的大很多,导致了更多的 RDD partitions 被切分,使得 build 的速度降低。500 对于其是一个较为合理的数字。点击 "Next" 和 "Save" 保存 cube。
+
+
+## 用 Spark 构建 Cube
+
+点击 "Build",选择当前日期为 end date。Kylin 会在 "Monitor" 页生成一个构建 job,第 7 步是 Spark cubing。Job engine 开始按照顺序执行每一步。 
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
+
+当 Kylin 执行这一步时,您可以监视 Yarn 资源管理器里的状态。点击 "Application Master" 链接将会打开 Spark 的 UI 网页,它会显示每一个 stage 的进度以及详细的信息。
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
+
+
+所有步骤成功执行后,Cube 的状态变为 "Ready" 且您可以像往常那样进行查询。
+
+## 疑难解答
+
+当出现 error 时,您可以首先查看 "logs/kylin.log"。其中包含 Kylin 执行的所有 Spark 命令,例如:
+
+{% highlight Groff markup %}
+2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/usr/local/apache-kylin-2.1.0-bin-hbase1x/hadoop-conf && /usr/local/apache-kylin-2.1.0-bin-hbase1x/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-his [...]
+
+{% endhighlight %}
+
+您可以拷贝 cmd 以便在 shell 中手动执行,然后快速进行参数调整;执行期间,您可以访问 Yarn 资源管理器查看更多的消息。如果 job 已经完成了,您可以在 Spark history server 中查看历史信息。 
+
+Kylin 默认将历史信息输出到 "hdfs:///kylin/spark-history",您需要在该目录下启动 Spark history server,或将 conf/kylin.properties 中的参数 "kylin.engine.spark-conf.spark.eventLog.dir" 和 "kylin.engine.spark-conf.spark.history.fs.logDirectory" 修改为您已有的 Spark history server 的事件目录。
+
+下面的命令可以在 Kylin 的输出目录下启动一个 Spark history server 实例,运行前请确保 sandbox 中已存在的 Spark history server 关闭了:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
+{% endhighlight %}
+
+浏览器访问 "http://sandbox:18080" 将会显示 job 历史:
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
+
+点击一个具体的 job,运行时的具体信息将会展示,该信息对疑难解答和性能调整有极大的帮助。
+
+## 进一步
+
+如果您是 Kylin 的管理员但是对于 Spark 是新手,建议您浏览 [Spark 文档](https://spark.apache.org/docs/2.1.0/),别忘记相应地去更新配置。您可以让 Spark 的 [Dynamic Resource Allocation](https://spark.apache.org/docs/2.1.0/job-scheduling.html#dynamic-resource-allocation) 生效以便其对于不同的工作负载能自动伸缩。Spark 性能依赖于集群的内存和 CPU 资源,当有复杂数据模型和巨大的数据集一次构建时 Kylin 的 Cube 构建将会是一项繁重的任务。如果您的集群资源不能够执行,Spark executors 就会抛出如 "OutOfMemorry" 这样的错误,因此请合理的使用。对于有 UHC dimension,过多组合 (例如,一个 cube 超过 12 dimensions),或耗尽内存的度量 (Count Distinct,Top-N) 的 Cube,建议您使用 MapReduce e [...]
+
+如果您有任何问题,意见,或 bug 修复,欢迎在 dev@kylin.apache.org 中讨论。
diff --git a/website/_docs/tutorial/cube_streaming.cn.md b/website/_docs/tutorial/cube_streaming.cn.md
new file mode 100644
index 0000000..00d5429
--- /dev/null
+++ b/website/_docs/tutorial/cube_streaming.cn.md
@@ -0,0 +1,219 @@
+---
+layout: docs-cn
+title:  "从 Kafka 流构建 Cube"
+categories: tutorial
+permalink: /cn/docs/tutorial/cube_streaming.html
+---
+Kylin v1.6 发布了可扩展的 streaming cubing 功能,它利用 Hadoop 消费 Kafka 数据的方式构建 cube;您可以查看 [这篇博客](/blog/2016/10/18/new-nrt-streaming/) 了解高层设计。本文档一步步演示如何创建和构建样例 cube;
+
+## 前期准备
+您需要一个安装了 kylin v1.6.0 或以上版本和可运行的 Kafka(v0.10.0 或以上版本)的 Hadoop 环境;先前的 Kylin 版本有一定的问题因此请首先升级您的 Kylin 实例。
+
+本教程中我们使用 Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) 作为环境。
+
+## 安装 Kafka 0.10.0.0 和 Kylin
+不要使用 HDP 2.2.4 自带的 Kafka,因为它太旧了,如果其运行着请先停掉。
+{% highlight Groff markup %}
+curl -s http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
+
+cd /usr/local/kafka_2.10-0.10.0.0/
+
+bin/kafka-server-start.sh config/server.properties &
+
+{% endhighlight %}
+
+从下载页下载 Kylin v1.6,在 /usr/local/ 文件夹中解压 tar 包。
+
+## 创建样例 Kafka topic 并填充数据
+
+创建样例名为 "kylin_streaming_topic" 具有三个分区的 topic:
+
+{% highlight Groff markup %}
+
+bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylin_streaming_topic
+Created topic "kylin_streaming_topic".
+{% endhighlight %}
+
+将样例数据放入 topic;Kylin 有一个实用类可以做这项工作;
+
+{% highlight Groff markup %}
+export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
+export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin
+
+cd $KYLIN_HOME
+./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylin_streaming_topic --broker localhost:9092
+{% endhighlight %}
+
+工具每一秒会向 Kafka 发送 100 条记录。直至本教程结束请让其一直运行。现在您可以用 kafka-console-consumer.sh 查看样例消息:
+
+{% highlight Groff markup %}
+cd $KAFKA_HOME
+bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylin_streaming_topic --from-beginning
+{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
+{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
+
+ {% endhighlight %}
+
+## 用 streaming 定义一张表
+用 "$KYLIN_HOME/bin/kylin.sh start" 启动 Kylin 服务器,输入 http://sandbox:7070/kylin/ 登陆 Kylin Web GUI,选择一个已存在的 project 或创建一个新的 project;点击 "Model" -> "Data Source",点击 "Add Streaming Table" 图标;
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
+
+在弹出的对话框中,输入您从 kafka-console-consumer 中获得的样例记录,点击 ">>" 按钮,Kylin 会解析 JSON 消息并列出所有的消息;
+
+您需要为这个 streaming 数据源起一个逻辑表名;该名字会在后续用于 SQL 查询;这里是在 "Table Name" 字段输入 "STREAMING_SALES_TABLE" 作为样例。
+
+您需要选择一个时间戳字段用来标识消息的时间;Kylin 可以从这列值中派生其他时间值,如 "year_start","quarter_start",这为您构建和查询 cube 提供了更高的灵活性。这里选择 "order_time"。您可以取消选择那些 cube 不需要的属性。这里我们保留了所有字段。
+
+注意 Kylin 从 1.6 版本开始支持结构化 (或称为 "嵌入") 消息,会将其转换成一个 flat table structure。默认使用 "_" 作为结构化属性的分隔符。
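+
+例如,上文样例消息中的嵌套字段会被展平成如下的列(仅为示意):
+
+{% highlight Groff markup %}
+{"user":{"id":"...","age":21,"gender":"Male"}}  ->  USER_ID, USER_AGE, USER_GENDER
+{% endhighlight %}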
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
+
+
+点击 "Next"。在这个页面,提供了 Kafka 集群信息;输入 "kylin_streaming_topic" 作为 "Topic" 名;集群有 1 个 broker,其主机名为 "sandbox",端口为 "9092",点击 "Save"。
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
+
+在 "Advanced setting" 部分,"timeout" 和 "buffer size" 是和 Kafka 进行连接的配置,保留它们。 
+
+在 "Parser Setting",Kylin 默认您的消息为 JSON 格式,每一个记录的时间戳列 (由 "tsColName" 指定) 是 bigint (新纪元时间) 类型值;在这个例子中,您只需设置 "tsColumn" 为 "order_time";
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
+
+在现实情况中如果时间戳值为 string 如 "Jul 20,2016 9:59:17 AM",您需要用 "tsParser" 指定解析类和时间模式例如:
+
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
+
+点击 "Submit" 保存设置。现在 "Streaming" 表就创建好了。
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
+
+## 定义数据模型
+有了上一步创建的表,现在我们可以创建数据模型了。步骤和您创建普通数据模型是一样的,但有两个要求:
+
+* Streaming Cube 不支持与 lookup 表进行 join;当定义数据模型时,只选择 fact 表,不选 lookup 表;
+* Streaming Cube 必须进行分区;如果您想要在分钟级别增量的构建 Cube,选择 "MINUTE_START" 作为 cube 的分区日期列。如果是在小时级别,选择 "HOUR_START"。
+
+这里我们选择 13 个 dimension 和 2 个 measure 列:
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
+保存数据模型。
+
+## 创建 Cube
+
+Streaming Cube 和普通的 cube 大致上一样. 有以下几点需要您注意:
+
+* 分区时间列应该是 Cube 的一个 dimension。在 Streaming OLAP 中时间总是一个查询条件,Kylin 利用它来缩小扫描分区的范围。
+* 不要使用 "order\_time" 作为 dimension,因为它粒度太细;建议使用 "minute\_start","hour\_start" 或其他,取决于您如何检查数据。
+* 定义 "year\_start","quarter\_start","month\_start","day\_start","hour\_start","minute\_start" 作为层级以减少组合计算。
+* 在 "refersh setting" 这一步,创建更多合并的范围,如 0.5 小时,4 小时,1 天,然后是 7 天;这将会帮助您控制 cube segment 的数量。
+* 在 "rowkeys" 部分,拖拽 "minute\_start" 到最上面的位置,对于 streaming 查询,时间条件会一直显示;将其放到前面将会帮助您缩小扫描范围。
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
+
+保存 cube。
+
+## 运行 build
+
+您可以在 web GUI 触发 build,通过点击 "Actions" -> "Build",或用 'curl' 命令发送一个请求到 Kylin RESTful API:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0,"sourceOffsetEnd": 9223372036854775807,"buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+{% endhighlight %}
+
+请注意 API 端点和普通 cube 不一样 (这个 URL 以 "build2" 结尾)。
+
+这里的 0 表示从最后一个位置开始,9223372036854775807 (Long 类型的最大值) 表示到 Kafka topic 的结束位置。如果这是第一次 build (没有以前的 segment),Kylin 将会寻找 topics 的开头作为开始位置。 
+
+在 "Monitor" 页面,一个新的 job 生成了;等待其直到 100% 完成。
+
+## 点击 "Insight" 标签,编写 SQL 运行,例如:
+
+ {% highlight Groff markup %}
+select minute_start,count(*),sum(amount),sum(qty) from streaming_sales_table group by minute_start order by minute_start
+ {% endhighlight %}
+
+结果如下。
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
+
+
+## 自动 build
+
+一旦第一个 build 和查询成功了,您可以按照一定的频率调度增量 build。Kylin 将会记录每一次 build 的 offsets;当收到一个 build 请求,它将会从上一个结束的位置开始,然后从 Kafka 获取最新的 offsets。有了 REST API,您可以使用任何像 Linux cron 这样的调度工具触发它:
+
+  {% highlight Groff markup %}
+crontab -e
+*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0,"sourceOffsetEnd": 9223372036854775807,"buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+ {% endhighlight %}
+
+现在您可以看到 cube 从 streaming 中自动 build 了。当 cube segments 累积到更大的时间范围,Kylin 将会自动地将其合并到一个更大的 segment 中。
+
+## 疑难解答
+
+ * 运行 "kylin.sh" 时您可能遇到以下错误:
+{% highlight Groff markup %}
+Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
+	at java.lang.Class.getDeclaredMethods0(Native Method)
+	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
+	at java.lang.Class.getMethod0(Class.java:2856)
+	at java.lang.Class.getMethod(Class.java:1668)
+	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
+	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
+Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
+	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
+	... 6 more
+{% endhighlight %}
+
+原因是 Kylin 不能找到正确的 Kafka client jars;确保您设置了正确的 "KAFKA_HOME" 环境变量。
+
+ * "Build Cube" 步骤中的 "killed by admin" 错误 
+
+ 在 Sandbox VM 中,YARN 不能给 MR job 分配请求的内存资源,因为 "inmem" cubing 算法需要更多的内存。您可以通过请求更少的内存来绕过这一步: 编辑 "conf/kylin_job_conf_inmem.xml",将这两个参数改为如下这样:
+
+ {% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1072</value>
+        <description></description>
+    </property>
+
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx800m</value>
+        <description></description>
+    </property>
+ {% endhighlight %}
+
+ * 如果 Kafka 里已经有一组历史 message 且您不想从最开始 build,您可以触发一个调用来将当前的结束位置设为 cube 的开始:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0,"sourceOffsetEnd": 9223372036854775807,"buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
+{% endhighlight %}
+
+ * 如果一些 build job 出错了并且您将其 discard,Cube 中就会留有一个洞 (或称为空隙)。由于 Kylin 每次都从上一次结束的位置 build,您不能期望通过正常的 build 把洞填上。Kylin 提供了 API 来检查和填补洞。
+
+检查洞:
+ {% highlight Groff markup %}
+curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
+如果查询结果是一个空的数组,意味着没有洞;否则,触发 Kylin 填补它们:
+ {% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
diff --git a/website/_docs/tutorial/cube_streaming.md b/website/_docs/tutorial/cube_streaming.md
index 0f8046a..65d99d8 100644
--- a/website/_docs/tutorial/cube_streaming.md
+++ b/website/_docs/tutorial/cube_streaming.md
@@ -59,7 +59,7 @@ Start Kylin server with "$KYLIN_HOME/bin/kylin.sh start", login Kylin Web GUI at
 
    ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
 
-In the pop-up dialogue, enter a sample record which you got from the kafka-console-consumer, click the ">>" button, Kylin parses the JSON message and listS all the properties;
+In the pop-up dialogue, enter a sample record which you got from the kafka-console-consumer, click the ">>" button, Kylin parses the JSON message and lists all the properties;
 
 You need give a logic table name for this streaming data source; The name will be used for SQL query later; here enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
 
diff --git a/website/_docs/tutorial/jdbc.cn.md b/website/_docs/tutorial/jdbc.cn.md
index 405e714..0feaa3b 100644
--- a/website/_docs/tutorial/jdbc.cn.md
+++ b/website/_docs/tutorial/jdbc.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs-cn
-title:  Kylin JDBC Driver
+title:  "JDBC 驱动"
 categories: 教程
 permalink: /cn/docs/tutorial/jdbc.html
 ---
diff --git a/website/_docs/tutorial/kylin_client_tool.cn.md b/website/_docs/tutorial/kylin_client_tool.cn.md
index 47dfbd9..d991044 100644
--- a/website/_docs/tutorial/kylin_client_tool.cn.md
+++ b/website/_docs/tutorial/kylin_client_tool.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs-cn
-title:  Python 客户端工具库
+title:  "Python 客户端"
 categories: 教程
 permalink: /cn/docs/tutorial/kylin_client_tool.html
 ---
diff --git a/website/_docs/tutorial/kylin_sample.cn.md b/website/_docs/tutorial/kylin_sample.cn.md
new file mode 100644
index 0000000..8673af0
--- /dev/null
+++ b/website/_docs/tutorial/kylin_sample.cn.md
@@ -0,0 +1,34 @@
+---
+layout: docs-cn
+title:  "样例 Cube 快速入门"
+categories: tutorial
+permalink: /cn/docs/tutorial/kylin_sample.html
+---
+
+Kylin 提供了一个创建样例 Cube 的脚本;脚本还会创建五个样例 hive 表:
+
+1. 运行 ${KYLIN_HOME}/bin/sample.sh;重启 Kylin 服务器以刷新缓存;
+2. 用默认的用户名和密码 ADMIN/KYLIN 登录 Kylin 网站,选择 project 下拉框(左上角)中的 "learn_kylin" 工程;
+3. 选择名为 "kylin_sales_cube" 的样例 cube,点击 "Actions" -> "Build",选择一个 2014-01-01 之后的日期(以覆盖所有的 10000 条样例记录);
+4. 点击 "Monitor" 标签,查看 build 进度直至 100%;
+5. 点击 "Insight" 标签,执行 SQLs,例如:
+	select part_dt,sum(price) as total_selled,count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+6. 您可以验证查询结果,并与 hive 的响应时间进行比较;
+
+   
+## Streaming 样例 Cube 快速入门
+
+Kylin 也提供了 streaming 样例 cube 脚本。该脚本将会创建 Kafka topic 且不断的向生成的 topic 发送随机 messages。
+
+1. 首先设置 KAFKA_HOME,然后启动 Kylin。
+2. 运行 ${KYLIN_HOME}/bin/sample.sh,它会在 learn_kylin 工程中生成 DEFAULT.KYLIN_STREAMING_TABLE 表,kylin_streaming_model 模型,Cube kylin_streaming_cube。
+3. 运行 ${KYLIN_HOME}/bin/sample-streaming.sh,它会在 localhost:9092 broker 中创建名为 kylin_streaming_topic 的 Kafka Topic,并每秒向 kylin_streaming_topic 随机发送 100 条 messages。
+4. 遵循标准 cube build 过程,并触发 Cube kylin_streaming_cube build。  
+5. 点击 "Monitor" 标签,查看 build 进度直至至少有一个 job 达到 100%。
+6. 点击 "Insight" 标签,执行 SQLs,例如:
+         select count(*),HOUR_START from kylin_streaming_table group by HOUR_START
+7. 验证查询结果。
+ 
+## 下一步干什么
+
+您可以通过接下来的教程用同一张表创建另一个 cube。
diff --git a/website/_docs/tutorial/odbc.cn.md b/website/_docs/tutorial/odbc.cn.md
index a9f754d..084c29f 100644
--- a/website/_docs/tutorial/odbc.cn.md
+++ b/website/_docs/tutorial/odbc.cn.md
@@ -1,8 +1,8 @@
 ---
 layout: docs-cn
-title:  ODBC 驱动程序
+title:  "ODBC 驱动"
 categories: 教程
-permalink: /cn/docs23/tutorial/odbc.html
+permalink: /cn/docs/tutorial/odbc.html
 version: v1.2
 since: v0.7.1
 ---
diff --git a/website/_docs/tutorial/powerbi.cn.md b/website/_docs/tutorial/powerbi.cn.md
index a523893..f43a6f5 100644
--- a/website/_docs/tutorial/powerbi.cn.md
+++ b/website/_docs/tutorial/powerbi.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs-cn
-title:  MS Excel及Power BI教程
+title:  "Excel 及 Power BI 教程"
 categories: tutorial
 permalink: /cn/docs/tutorial/powerbi.html
 version: v1.2
diff --git a/website/_docs/tutorial/project_level_acl.cn.md b/website/_docs/tutorial/project_level_acl.cn.md
new file mode 100644
index 0000000..e33b706
--- /dev/null
+++ b/website/_docs/tutorial/project_level_acl.cn.md
@@ -0,0 +1,63 @@
+---
+layout: docs-cn
+title: Project Level ACL
+categories: tutorial
+permalink: /cn/docs/tutorial/project_level_acl.html
+since: v2.1.0
+---
+
+Whether a user can access a project and use some functionalities within the project is determined by project-level access control. There are four types of access permission roles set at the project-level in Apache Kylin: *ADMIN*, *MANAGEMENT*, *OPERATION* and *QUERY*. Each role defines a list of functionalities a user may perform in Apache Kylin.
+
+- *QUERY*: designed to be used by analysts who only need access permission to query tables/cubes in the project.
+- *OPERATION*: designed to be used by operation team in a corporate/organization who need permission to maintain the Cube. OPERATION access permission includes QUERY.
+- *MANAGEMENT*: designed to be used by a Modeler or Designer who is fully knowledgeable of the business meaning of the data/model and will be in charge of Model and Cube design. MANAGEMENT access permission includes OPERATION and QUERY.
+- *ADMIN*: Designed to fully manage the project. ADMIN access permission includes MANAGEMENT, OPERATION and QUERY.
+
+Access permissions are independent between different projects.
+
+### How Access Permission is Determined
+
+Once project-level access permission has been set for a user, access permission on data source, model and Cube will be inherited based on the access permission role defined at project-level. For the detailed functionalities each access permission role has access to, see the table below.
+
+|                                          | System Admin | Project Admin | Management | Operation | Query |
+| ---------------------------------------- | ------------ | ------------- | ---------- | --------- | ----- |
+| Create/delete project                    | Yes          | No            | No         | No        | No    |
+| Edit project                             | Yes          | Yes           | No         | No        | No    |
+| Add/edit/delete project access permission | Yes          | Yes           | No         | No        | No    |
+| Check model page                         | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check data source page                   | Yes          | Yes           | Yes        | No        | No    |
+| Load, unload table, reload table         | Yes          | Yes           | No         | No        | No    |
+| View model in read only mode             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, edit, clone, drop model             | Yes          | Yes           | Yes        | No        | No    |
+| Check cube detail definition             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, disable/enable, clone cube, edit, drop cube, purge cube | Yes          | Yes           | Yes        | No        | No    |
+| Build, refresh, merge cube               | Yes          | Yes           | Yes        | Yes       | No    |
+| Edit, view cube json                     | Yes          | Yes           | Yes        | No        | No    |
+| Check insight page                       | Yes          | Yes           | Yes        | Yes       | Yes   |
+| View table in insight page               | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check monitor page                       | Yes          | Yes           | Yes        | Yes       | No    |
+| Check system page                        | Yes          | No            | No         | No        | No    |
+| Reload metadata, disable cache, set config, diagnosis | Yes          | No            | No         | No        | No    |
+
+
+Additionally, when Query Pushdown is enabled, QUERY access permission on a project allows users to issue push-down queries on all tables in the project even though no cube can serve them. This is impossible if a user has not yet been granted QUERY permission at project-level.
+
+### Manage Access Permission at Project-level
+
+1. Click the small gear shape icon on the top-left corner of Model page. You will be redirected to project page
+
+   ![](/images/Project-level-acl/ACL-1.png)
+
+2. In project page, expand a project and choose Access.
+3. Click `Grant` to grant permission to a user.
+
+	![](/images/Project-level-acl/ACL-2.png)
+
+4. Fill in the name of the user or role, choose permission and then click `Grant` to grant permission.
+
+5. You can also revoke and update permission on this page.
+
+   ![](/images/Project-level-acl/ACL-3.png)
+
+   Please note that in order to grant permission to the default users (MODELER and ANALYST), these users need to log in at least once. 
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.cn.md b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
new file mode 100644
index 0000000..952a220
--- /dev/null
+++ b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
@@ -0,0 +1,93 @@
+---
+layout: docs-cn
+title:  建立 JDBC 数据源
+categories: howto
+permalink: /cn/docs/tutorial/setup_jdbc_datasource.html
+---
+
+> 自 Apache Kylin v2.3.x 起有效
+
+## 支持 JDBC 数据源
+
+自 v2.3.0 起,Apache Kylin 开始支持 JDBC 作为第三种数据源 (继 Hive,Kafka 之后)。用户可以直接集成 Kylin 与他们的 SQL 数据库或数据仓库,如 MySQL,Microsoft SQL Server 和 HP Vertica。其他相关的数据库也很容易支持。
+
+## 配置 JDBC 数据源
+
+1. 准备 Sqoop
+
+Kylin 使用 Apache Sqoop 从关系型数据库加载数据到 HDFS。请在与 Kylin 同一台机器上下载并安装最新版本的 Sqoop。本指南中我们使用 `SQOOP_HOME` 环境变量指代 Sqoop 的安装路径。
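+
+例如(安装路径仅为假设):
+
+```
+export SQOOP_HOME=/usr/local/sqoop
+```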
+
+2. 准备 JDBC driver
+
+需要下载您数据库的 JDBC Driver 到 Kylin server。JDBC driver jar 需要被添加到 `$KYLIN_HOME/ext` 和 `$SQOOP_HOME/lib` 文件夹下。
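+
+以 MySQL 为例(jar 包文件名仅为假设):
+
+```
+cp mysql-connector-java-5.1.44.jar $KYLIN_HOME/ext/
+cp mysql-connector-java-5.1.44.jar $SQOOP_HOME/lib/
+```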
+
+3. 配置 Kylin
+
+在 `$KYLIN_HOME/conf/kylin.properties` 中,添加以下配置。
+
+MySQL 样例:
+
+```
+kylin.source.default=8
+kylin.source.jdbc.connection-url=jdbc:mysql://hostname:3306/employees
+kylin.source.jdbc.driver=com.mysql.jdbc.Driver
+kylin.source.jdbc.dialect=mysql
+kylin.source.jdbc.user=your_username
+kylin.source.jdbc.pass=your_password
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.filed-delimiter=|
+```
+
+SQL Server 样例:
+
+```
+kylin.source.default=8
+kylin.source.jdbc.connection-url=jdbc:sqlserver://hostname:1433;database=sample
+kylin.source.jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
+kylin.source.jdbc.dialect=mssql
+kylin.source.jdbc.user=your_username
+kylin.source.jdbc.pass=your_password
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.filed-delimiter=|
+```
+
+Amazon Redshift 样例:
+
+```
+kylin.source.default=8
+kylin.source.jdbc.connection-url=jdbc:redshift://hostname:5439/sample
+kylin.source.jdbc.driver=com.amazon.redshift.jdbc.Driver
+kylin.source.jdbc.dialect=default
+kylin.source.jdbc.user=user
+kylin.source.jdbc.pass=pass
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.filed-delimiter=|
+```
+
+这里有另一个参数指定应该分为多少个切片。Sqoop 将为每一个切片运行一个 mapper。
+
+```
+kylin.source.jdbc.sqoop-mapper-num=4
+```
+
+为了使每个 mapper 获得均匀的输入,分割列按以下规则进行选择:
+ * ShardBy 列,如果存在;
+ * Partition date 列,如果存在;
+ * High cardinality 列,如果存在;
+ * Numeric 列,如果存在;
+ * 随便选一个。
+
+请注意,当在 `conf/kylin.properties` 中配置这些参数时,您所有的 projects 都会使用 JDBC 作为数据源。如果您需要访问不同类型的数据源,您需要在 project 级别配置这些参数,这也是推荐的方式 (自 Kylin v2.4.0 起)。
+
+## 从 JDBC 数据源加载表
+
+重启 Kylin 让改变生效。您现在可以从 JDBC 数据源加载表。访问 Kylin web 然后导航到数据源面板。 
+
+点击 **Load table** 按钮然后输入表名,或点击 "Load Table From Tree" 按钮然后选择要加载的表。不要勾选 **Calculate column cardinality**,因为 JDBC 数据源尚不支持该功能。
+
+点击 "Sync",Kylin 通过 JDBC 接口加载表定义。当表加载成功后您可以查看表和列,和 Hive 相似。
+
+![](/images/docs/jdbc-datasource/load_table_03.png)
+
+继续向前设计您的 model 和 Cube。当 building Cube 时,Kylin 将会使用 Sqoop 将数据从数据库导入到 HDFS,然后在其上运行 build。
\ No newline at end of file
diff --git a/website/_docs/tutorial/setup_systemcube.cn.md b/website/_docs/tutorial/setup_systemcube.cn.md
new file mode 100644
index 0000000..ab0a5ef
--- /dev/null
+++ b/website/_docs/tutorial/setup_systemcube.cn.md
@@ -0,0 +1,438 @@
+---
+layout: docs-cn
+title:  建立系统 Cube
+categories: tutorial
+permalink: /cn/docs/tutorial/setup_systemcube.html
+---
+
+> 自 Apache Kylin v2.3.0 起有效
+
+## 什么是系统 Cube
+
+为了更好地支持自我监控,Kylin 在名为 "KYLIN_SYSTEM" 的系统 project 下创建了一组系统 Cubes。目前共有五个 Cubes:三个用于查询指标,"METRICS_QUERY","METRICS_QUERY_CUBE","METRICS_QUERY_RPC";另外两个用于 job 指标,"METRICS_JOB","METRICS_JOB_EXCEPTION"。
+
+## 如何建立系统 Cube
+
+### 准备
+在 KYLIN_HOME 目录下创建一个配置文件 SCSinkTools.json。
+
+例如:
+
+```
+[
+  [
+    "org.apache.kylin.tool.metrics.systemcube.util.HiveSinkTool",
+    {
+      "storage_type": 2,
+      "cube_desc_override_properties": [
+        "java.util.HashMap",
+        {
+          "kylin.cube.algorithm": "INMEM",
+          "kylin.cube.max-building-segments": "1"
+        }
+      ]
+    }
+  ]
+]
+```
+
+### 1. 生成 Metadata
+在 KYLIN_HOME 文件夹下运行以下命令生成相关的 metadata:
+
+```
+./bin/kylin.sh org.apache.kylin.tool.metrics.systemcube.SCCreator \
+-inputConfig SCSinkTools.json \
+-output <output_folder>
+```
+
+通过这个命令,相关的 metadata 将会生成在 `<output_folder>` 下。如下所示,system_cube 就是我们的 `<output_folder>`:
+
+![metadata](/images/SystemCube/metadata.png)
+
+### 2. 建立数据源
+运行下列命令生成 hive 源表:
+
+```
+hive -f <output_folder>/create_hive_tables_for_system_cubes.sql
+```
+
+通过这个命令,相关的 hive 表将会被创建。
+
+![hive_table](/images/SystemCube/hive_table.png)
+
+### 3. 为 System Cubes 上传 Metadata 
+然后我们需要通过下列命令将 metadata 上传到 HBase:
+
+```
+./bin/metastore.sh restore <output_folder>
+```
+
+### 4. 重载 Metadata
+最终,我们需要在 Kylin web UI 重载 metadata。
+
+
+然后,一组系统 Cubes 将会被创建在系统 project 下,称为 "KYLIN_SYSTEM"。
+
+
+### 5. 系统 Cube build
+当系统 Cube 被创建,我们需要定期 build Cube。
+
+1. 创建一个 shell 脚本,通过调用 org.apache.kylin.tool.job.CubeBuildingCLI 来 build 系统 Cube
+  
+	例如:
+
+{% highlight Groff markup %}
+#!/bin/bash
+
+dir=$(dirname ${0})
+export KYLIN_HOME=${dir}/../
+
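+# 参数(与下文 crontab 示例一致):$1 = Cube 名;$2 = build 间隔(毫秒),end time 按其向下对齐;$3 = 延迟(毫秒),end time 向前回退该时长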
+CUBE=$1
+INTERVAL=$2
+DELAY=$3
+CURRENT_TIME_IN_SECOND=`date +%s`
+CURRENT_TIME=$((CURRENT_TIME_IN_SECOND * 1000))
+END_TIME=$((CURRENT_TIME-DELAY))
+END=$((END_TIME - END_TIME%INTERVAL))
+
+ID="$END"
+echo "building for ${CUBE}_${ID}" >> ${KYLIN_HOME}/logs/build_trace.log
+sh ${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.job.CubeBuildingCLI --cube ${CUBE} --endTime ${END} > ${KYLIN_HOME}/logs/system_cube_${CUBE}_${END}.log 2>&1 &
+
+{% endhighlight %}
+
+2. 然后定期运行这个 shell 脚本
+
+	例如,像接下来这样添加一个 cron job:
+
+{% highlight Groff markup %}
+0 */2 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_QA 3600000 1200000
+
+20 */2 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_CUBE_QA 3600000 1200000
+
+40 */4 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_RPC_QA 3600000 1200000
+
+30 */4 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_JOB_QA 3600000 1200000
+
+50 */12 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_JOB_EXCEPTION_QA 3600000 12000
+
+{% endhighlight %}
+
+## 系统 Cube 的细节
+
+### 普通 Dimension
+对于这些 Cube,admins 能够按四种时间粒度进行查询。从高到低依次为:
+
+<table>
+  <tr>
+    <td>KYEAR_BEGIN_DATE</td>
+    <td>year</td>
+  </tr>
+  <tr>
+    <td>KMONTH_BEGIN_DATE</td>
+    <td>month</td>
+  </tr>
+  <tr>
+    <td>KWEEK_BEGIN_DATE</td>
+    <td>week</td>
+  </tr>
+  <tr>
+    <td>KDAY_DATE</td>
+    <td>date</td>
+  </tr>
+</table>
+
+### METRICS_QUERY
+这个 Cube 用于在最高级别收集查询 metrics。细节如下:
+
+<table>
+  <tr>
+    <th colspan="2">Dimension</th>
+  </tr>
+  <tr>
+    <td>HOST</td>
+    <td>the host of server for query engine</td>
+  </tr>
+  <tr>
+    <td>PROJECT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>REALIZATION</td>
+    <td>in Kylin,there are two OLAP realizations: Cube,or Hybrid of Cubes</td>
+  </tr>
+  <tr>
+    <td>REALIZATION_TYPE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>QUERY_TYPE</td>
+    <td>users can query on different data sources,CACHE,OLAP,LOOKUP_TABLE,HIVE</td>
+  </tr>
+  <tr>
+    <td>EXCEPTION</td>
+    <td>when doing query,exceptions may happen. It's for classifying different exception types</td>
+  </tr>
+</table>
+
+<table>
+  <tr>
+    <th colspan="2">Measure</th>
+  </tr>
+  <tr>
+    <td>COUNT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of QUERY_TIME_COST</td>
+    <td>the time cost for the whole query</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of CALCITE_SIZE_RETURN</td>
+    <td>the row count of the result Calcite returns</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_SIZE_RETURN</td>
+    <td>the row count of the input to Calcite</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
+    <td>the row count of Calcite aggregates and filters</td>
+  </tr>
+  <tr>
+    <td>COUNT DISTINCT of QUERY_HASH_CODE</td>
+    <td>the number of different queries</td>
+  </tr>
+</table>
+
+### METRICS_QUERY_RPC
+这个 Cube 用于在最低级别收集查询 metrics。对于一个查询,相关的 aggregation 和 filter 能够下推到每一个 rpc 目标服务器。Rpc 目标服务器的健壮性是更好查询性能的基础。细节如下:
+
+<table>
+  <tr>
+    <th colspan="2">Dimension</th>
+  </tr>
+  <tr>
+    <td>HOST</td>
+    <td>the host of server for query engine</td>
+  </tr>
+  <tr>
+    <td>PROJECT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>REALIZATION</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>RPC_SERVER</td>
+    <td>the rpc related target server</td>
+  </tr>
+  <tr>
+    <td>EXCEPTION</td>
+    <td>the exception of a rpc call. If no exception,"NULL" is used</td>
+  </tr>
+</table>
+
+<table>
+  <tr>
+    <th colspan="2">Measure</th>
+  </tr>
+  <tr>
+    <td>COUNT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of CALL_TIME</td>
+    <td>the time cost of an rpc call</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of COUNT_SKIP</td>
+    <td>based on fuzzy filters or else,a few rows will be skipped. This indicates the skipped row count</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of SIZE_SCAN</td>
+    <td>the row count actually scanned</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of SIZE_RETURN</td>
+    <td>the row count actually returned</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of SIZE_AGGREGATE</td>
+    <td>the row count actually aggregated</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of SIZE_AGGREGATE_FILTER</td>
+    <td>the row count actually aggregated and filtered,= SIZE_SCAN - SIZE_RETURN</td>
+  </tr>
+</table>
+
+### METRICS_QUERY_CUBE
+这个 Cube 用于在 Cube 级别收集查询 metrics。最重要的是 cuboids 相关的,其为 Cube planner 提供服务。细节如下:
+
+<table>
+  <tr>
+    <th colspan="2">Dimension</th>
+  </tr>
+  <tr>
+    <td>CUBE_NAME</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>CUBOID_SOURCE</td>
+    <td>source cuboid parsed based on query and Cube design</td>
+  </tr>
+  <tr>
+    <td>CUBOID_TARGET</td>
+    <td>target cuboid already precalculated and served for source cuboid</td>
+  </tr>
+  <tr>
+    <td>IF_MATCH</td>
+    <td>whether source cuboid and target cuboid are equal</td>
+  </tr>
+  <tr>
+    <td>IF_SUCCESS</td>
+    <td>whether a query on this Cube is successful or not</td>
+  </tr>
+</table>
+
+<table>
+  <tr>
+    <th colspan="2">Measure</th>
+  </tr>
+  <tr>
+    <td>COUNT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_CALL_COUNT</td>
+    <td>the number of rpc calls for a query hit on this Cube</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_CALL_TIME_SUM</td>
+    <td>sum of time cost for the rpc calls of a query</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_CALL_TIME_MAX</td>
+    <td>max of time cost among the rpc calls of a query</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_COUNT_SKIP</td>
+    <td>the sum of row count skipped for the related rpc calls</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_SIZE_SCAN</td>
+    <td>the sum of row count scanned for the related rpc calls</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_SIZE_RETURN</td>
+    <td>the sum of row count returned for the related rpc calls</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_SIZE_AGGREGATE</td>
+    <td>the sum of row count aggregated for the related rpc calls</td>
+  </tr>
+  <tr>
+    <td>MAX,SUM of STORAGE_SIZE_AGGREGATE_FILTER</td>
+    <td>the sum of row count aggregated and filtered for the related rpc calls,= STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
+  </tr>
+</table>
+
+### METRICS_JOB
+在 Kylin 中,主要有三种类型的 job:
+- "BUILD",为了从 **HIVE** 中 building Cube segments。
+- "MERGE",为了在 **HBASE** 中 merging Cube segments。
+- "OPTIMIZE",为了在 **HBASE** 中基于 **base cuboid** 动态调整预计算 cuboid tree。
+
+这个 Cube 是用来收集 job 指标。细节如下:
+
+<table>
+  <tr>
+    <th colspan="2">Dimension</th>
+  </tr>
+  <tr>
+    <td>PROJECT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>CUBE_NAME</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>JOB_TYPE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>CUBING_TYPE</td>
+    <td>in kylin,there are two cubing algorithms,Layered & Fast(InMemory)</td>
+  </tr>
+</table>
+
+<table>
+  <tr>
+    <th colspan="2">Measure</th>
+  </tr>
+  <tr>
+    <td>COUNT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of DURATION</td>
+    <td>the duration from a job start to finish</td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of TABLE_SIZE</td>
+    <td>the size of data source in bytes</td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of CUBE_SIZE</td>
+    <td>the size of created Cube segment in bytes</td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of PER_BYTES_TIME_COST</td>
+    <td>= DURATION / TABLE_SIZE</td>
+  </tr>
+  <tr>
+    <td>MIN,MAX,SUM of WAIT_RESOURCE_TIME</td>
+    <td>a job may include several MR (map reduce) jobs. Those MR jobs may wait because of a lack of Hadoop resources.</td>
+  </tr>
+</table>
+
+### METRICS_JOB_EXCEPTION
+这个 Cube 是用来收集 job exception 指标。细节如下:
+
+<table>
+  <tr>
+    <th colspan="2">Dimension</th>
+  </tr>
+  <tr>
+    <td>PROJECT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>CUBE_NAME</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>JOB_TYPE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>CUBING_TYPE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>EXCEPTION</td>
+    <td>when running a job,exceptions may happen. It's for classifying different exception types</td>
+  </tr>
+</table>
+
+<table>
+  <tr>
+    <th>Measure</th>
+  </tr>
+  <tr>
+    <td>COUNT</td>
+  </tr>
+</table>
diff --git a/website/_docs/tutorial/spark.cn.md b/website/_docs/tutorial/spark.cn.md
new file mode 100644
index 0000000..ae4ddb4
--- /dev/null
+++ b/website/_docs/tutorial/spark.cn.md
@@ -0,0 +1,90 @@
+---
+layout: docs-cn
+title:  Apache Spark
+categories: tutorial
+permalink: /cn/docs/tutorial/spark.html
+---
+
+
+### Introduction
+
+Apache Kylin provides a JDBC driver to query the Cube data, and Apache Spark supports JDBC data sources. With them, you can connect to Kylin from your Spark application and then analyze a very large data set in an interactive way.
+
+Please keep in mind, Kylin is an OLAP system, which has already aggregated the raw data by the given dimensions. If you simply load the source table as from a normal database, you may not gain the benefit of Cubes, and it may crash your application.
+
+The right way is to start from a summarized view (e.g., a query with "group by"), load it as a data frame, and then do transformations and other actions.
+
+This document describes how to use Kylin as a data source in Apache Spark. You need to install Kylin and build a Cube before running it. And remember to put Kylin's JDBC driver (in the 'lib' folder of the Kylin binary package) onto Spark's class path. 
+
+### The wrong way
+
+The below Python application tries to directly load Kylin's table as a data frame, and then get the total row count with "df.count()", but the result is incorrect.
+
+{% highlight Groff markup %}
+
+from pyspark import SparkConf, SparkContext
+from pyspark.sql import SQLContext
+
+conf = SparkConf()
+conf.setMaster('yarn')
+conf.setAppName('Kylin jdbc example')
+
+sc = SparkContext(conf=conf)
+sqlContext = SQLContext(sc)
+
+url = 'jdbc:kylin://sandbox:7070/default'
+df = sqlContext.read.format('jdbc').options(
+    url=url, user='ADMIN', password='KYLIN',
+    driver='org.apache.kylin.jdbc.Driver',
+    dbtable='kylin_sales').load()
+
+print df.count()
+
+{% endhighlight %}
+
+The output is:
+{% highlight Groff markup %}
+132
+
+{% endhighlight %}
+
+
+The result "132" is not the total count of the origin table. Because Spark didn't send a "select count(*)" query to Kylin as you thought, but send a "select * " and then try to count within Spark; This would be inefficient and, as Kylin doesn't have the raw data, the "select * " query will be answered with the base Cuboid (summarized by all dimensions). The "132" is the row number of the base Cuboid, not original data. 
+
+
+### The right way
+
+The right behavior is to push down possible aggregations to Kylin, so that the Cube can be leveraged and the performance would be much better. Below is the correct code:
+
+{% highlight Groff markup %}
+
+from pyspark import SparkConf, SparkContext
+from pyspark.sql import SQLContext
+
+conf = SparkConf()
+conf.setMaster('yarn')
+conf.setAppName('Kylin jdbc example')
+
+sc = SparkContext(conf=conf)
+sqlContext = SQLContext(sc)
+
+url = 'jdbc:kylin://sandbox:7070/default'
+# push the aggregation down to Kylin via a subquery alias
+tab_name = '(select count(*) as total from kylin_sales) the_alias'
+
+df = sqlContext.read.format('jdbc').options(
+        url=url, user='ADMIN', password='KYLIN',
+        driver='org.apache.kylin.jdbc.Driver',
+        dbtable=tab_name).load()
+
+df.show()
+
+{% endhighlight %}
+
+Here is the output; the result is correct as Spark pushes down the aggregation to Kylin:
+
+{% highlight Groff markup %}
++-----+
+|TOTAL|
++-----+
+| 2000|
++-----+
+
+{% endhighlight %}
+
+Thanks for the input and sample code from Shuxin Yang (shuxinyang.oss@gmail.com).
+
diff --git a/website/_docs/tutorial/squirrel.cn.md b/website/_docs/tutorial/squirrel.cn.md
new file mode 100644
index 0000000..67bbdba
--- /dev/null
+++ b/website/_docs/tutorial/squirrel.cn.md
@@ -0,0 +1,112 @@
+---
+layout: docs-cn
+title:  SQuirreL
+categories: tutorial
+permalink: /cn/docs/tutorial/squirrel.html
+---
+
+### Introduction
+
+[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform Universal SQL Client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
+
+### Used Software
+
+* [Kylin v1.6.0](/download/) & ODBC 1.6
+* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
+
+## Pre-requisites
+
+* Find the Kylin JDBC driver jar
+  From the Kylin download page, choose the binary package for the **correct version of Kylin and HBase**,
+	then download and unpack it; the JDBC driver jar is in **./lib**: 
+  ![](/images/SQuirreL-Tutorial/01.png)
+
+
+* Need an instance of Kylin, with a Cube; the [Sample Cube](kylin_sample.html) is enough.
+
+  ![](/images/SQuirreL-Tutorial/02.png)
+
+
+* [Download and install SquirreL](http://www.squirrelsql.org/#installation)
+
+## Add Kylin JDBC Driver
+
+On the left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) >![alt text](/images/SQuirreL-Tutorial/04.png)  > ![alt text](/images/SQuirreL-Tutorial/05.png)  > ![alt text](/images/SQuirreL-Tutorial/06.png)
+
+And locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
+
+Configure these parameters:
+
+* Put a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
+* Example URL ![alt text](/images/SQuirreL-Tutorial/09.png)
+
+  jdbc:kylin://172.17.0.2:7070/learn_kylin
+* Put Class Name: ![alt text](/images/SQuirreL-Tutorial/10.png)
+	Tip: if auto complete does not work, type: org.apache.kylin.jdbc.Driver 
+	
+Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
+
+## Add Aliases
+
+On the left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  > ![alt text](/images/SQuirreL-Tutorial/13.png) (default login/password: ADMIN / KYLIN)
+
+  ![](/images/SQuirreL-Tutorial/14.png)
+
+
+And the connection launches automatically:
+
+  ![](/images/SQuirreL-Tutorial/15.png)
+
+
+## Connect and Execute
+
+The startup window when connected:
+
+  ![](/images/SQuirreL-Tutorial/16.png)
+
+
+Choose the SQL tab and write a query (we use Kylin's example cube):
+
+  ![](/images/SQuirreL-Tutorial/17.png)
+
+
+```
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
+from kylin_sales group by part_dt 
+order by part_dt
+```
+
+Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
+
+  ![](/images/SQuirreL-Tutorial/19.png)
+
+
+And it works!
+
+## Tips
+
+SQuirreL isn't the most stable SQL client, but it is very flexible and surfaces a lot of information; it can be used for PoCs and for checking connectivity issues.
+
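+To rule out server-side problems first, you can bypass JDBC entirely and hit Kylin's REST query API. Below is a minimal sketch in Python with the `requests` library; the host, project and credentials are the defaults used in this tutorial, so adjust them to your environment:
+
+```
+import requests
+
+# Kylin's REST query endpoint (same server as the JDBC URL above)
+url = 'http://172.17.0.2:7070/kylin/api/query'
+payload = {'sql': 'select count(*) from kylin_sales',
+           'project': 'learn_kylin'}
+
+# Default credentials, the same as the SQuirreL alias
+resp = requests.post(url, json=payload, auth=('ADMIN', 'KYLIN'))
+resp.raise_for_status()
+print(resp.json()['results'])
+```
+
+If this call succeeds but SQuirreL still fails, the problem is on the client side (driver JAR, URL or alias configuration).
+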
+List of tables: 
+
+  ![](/images/SQuirreL-Tutorial/21.png)
+
+
+List of columns of a table:
+
+  ![](/images/SQuirreL-Tutorial/22.png)
+
+
+List of columns of a query:
+
+  ![](/images/SQuirreL-Tutorial/23.png)
+
+
+Export query results:
+
+  ![](/images/SQuirreL-Tutorial/24.png)
+
+
+Info about query execution time:
+
+  ![](/images/SQuirreL-Tutorial/25.png)
diff --git a/website/_docs/tutorial/tableau.cn.md b/website/_docs/tutorial/tableau.cn.md
index 0e49c12..e1dc579 100644
--- a/website/_docs/tutorial/tableau.cn.md
+++ b/website/_docs/tutorial/tableau.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs-cn
-title:  Tableau
+title:  Tableau 8
 categories: 教程
 permalink: /cn/docs/tutorial/tableau.html
 version: v1.2
@@ -13,11 +13,7 @@ since: v0.7.1
 > * Do not try to join multiple fact tables or multiple lookup tables;
 > * You can try high-cardinality dimensions such as seller id in a Tableau filter, but the engine will only return a limited number of seller ids to the Tableau filter.
 > 
-> For more details or any questions, please contact the Kylin team: `kylinolap@gmail.com`
-
-
-### For Tableau 9.x users
-Please refer to the [Tableau 9 tutorial](./tableau_91.html) for more detailed help.
+> For more details or any questions, please contact the Kylin team: `dev@kylin.apache.org`
 
 ### Step 1. Install the Kylin ODBC driver
 Refer to the [Kylin ODBC Driver Tutorial](./odbc.html) page.
diff --git a/website/_docs/tutorial/tableau_91.cn.md b/website/_docs/tutorial/tableau_91.cn.md
index 30108e9..25ea701 100644
--- a/website/_docs/tutorial/tableau_91.cn.md
+++ b/website/_docs/tutorial/tableau_91.cn.md
@@ -9,10 +9,6 @@ since: v1.2
 
 Tableau 9 has been out for a while, and many users in the community hope Apache Kylin can further support this version. With an updated Kylin ODBC driver, you can now use Tableau 9 to interact with the Kylin service.
 
-
-### For Tableau 8.x users
-Please refer to the [Tableau tutorial](./tableau.html) for more detailed help.
-
 ### Install ODBC Driver
 Refer to the [Kylin ODBC Driver Tutorial](./odbc.html) page, and make sure to download and install Kylin ODBC Driver __v1.5__. If you have an earlier version installed, uninstall it first.
 
diff --git a/website/_docs/tutorial/tableau_91.md b/website/_docs/tutorial/tableau_91.md
index 7e37cdb..39c23ef 100644
--- a/website/_docs/tutorial/tableau_91.md
+++ b/website/_docs/tutorial/tableau_91.md
@@ -7,10 +7,6 @@ permalink: /docs/tutorial/tableau_91.html
 
 Tableau 9.x has been released for a while, and many users are asking about support for this version with Apache Kylin. With the updated Kylin ODBC Driver, users can now interact with the Kylin service through Tableau 9.x.
 
-
-### For Tableau 8.x User
-Please refer to [Kylin and Tableau Tutorial](./tableau.html) for detail guide.
-
 ### Install Kylin ODBC Driver
 Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
 Please make sure to download and install Kylin ODBC Driver __v1.5__. If you have already installed an ODBC Driver on your system, please uninstall it first. 
diff --git a/website/_docs/tutorial/use_cube_planner.cn.md b/website/_docs/tutorial/use_cube_planner.cn.md
new file mode 100644
index 0000000..9fdc162
--- /dev/null
+++ b/website/_docs/tutorial/use_cube_planner.cn.md
@@ -0,0 +1,127 @@
+---
+layout: docs-cn
+title:  Use Cube Planner
+categories: tutorial
+permalink: /cn/docs/tutorial/use_cube_planner.html
+---
+
+> Available since Apache Kylin v2.3.0
+
+# Cube Planner
+
+## What is Cube Planner
+
+OLAP solutions trade off online query speed against offline Cube build cost (the computing resources to build a Cube and the storage resources to save the Cube data). Resource efficiency is the most important capability of an OLAP engine. To improve resource utilization, it is critical to pre-build only the most valuable cuboids.
+
+Cube Planner makes Apache Kylin more resource-efficient. It intelligently builds a partial Cube to minimize the cost of building the Cube while maximizing the benefit of serving end-user queries, then learns patterns from the running queries and recommends cuboids dynamically accordingly.
+
+![CubePlanner](/images/CubePlanner/CubePlanner.png)
+
+## Prerequisites
+
+To enable the Dashboard on the WebUI, you need to set **kylin.cube.cubeplanner.enabled=true** and other properties in **kylin.properties**.
+
+
+{% highlight Groff markup %}
+kylin.cube.cubeplanner.enabled=true
+kylin.server.query-metrics2-enabled=true
+kylin.metrics.reporter-query-enabled=true
+kylin.metrics.reporter-job-enabled=true
+kylin.metrics.monitor-enabled=true
+{% endhighlight %}
+
+## How to use
+
+*Note: Cube Planner optimization is not suitable for a new Cube. Before optimization, the Cube should have been online in production for a while (e.g., 3 months), so that the Kylin platform has collected enough real queries from end users to optimize the Cube with.*
+
+#### Step 1:
+
+Select a Cube.
+
+#### Step 2:
+
+1. Click the '**Planner**' button to view the '**Current Cuboid Distribution**' of the Cube.
+
+  Make sure the status of the Cube is '**READY**'.
+
+  If the status of the Cube is '**DISABLED**', you cannot use Cube Planner.
+
+  If the Cube has been built before, change its status from '**DISABLED**' to '**READY**' by building or enabling it.
+
+
+#### Step 3:
+
+a. Click the '**Planner**' button to view the '**Current Cuboid Distribution**' of the Cube.
+
+- The data is displayed in a **Sunburst Chart**.
+
+- Each part represents a cuboid and is shown in a different color, determined by the query **frequency** against that cuboid.
+
+     ![CubePlanner](/images/CubePlanner/CP.png)
+
+
+-  You can hover the mouse over the chart to show the details of a cuboid.
+
+   The details include these attributes: '**Name**', '**ID**', '**Query Count**', '**Exactly Match Count**', '**Row Count**' and '**Rollup Rate**'.
+
+   The cuboid **Name** consists of a string of '0's and '1's and denotes a combination of dimensions: '0' means the dimension is absent from the combination, '1' means it is present. All dimensions are ordered by the HBase row keys configured in the Cube's advanced settings.
+
+   Example:
+
+   ![CubePlanner](/images/CubePlanner/Leaf.png)
+
+   "1111111110000000" means the dimension combination is ["MONTH_BEG_DT","USER_CNTRY_SITE_CD","RPRTD_SGMNT_VAL","RPRTD_IND","SRVY_TYPE_ID","QSTN_ID","L1_L2_IND","PRNT_L1_ID","TRANCHE_ID"], ordered by row key (see the decoding sketch at the end of this subsection).
+
+   **ID** is the unique id of the cuboid.
+
+   **Query Count** is the total number of queries served by this cuboid, including queries against cuboids that are not pre-calculated but are aggregated online from this one.
+
+   **Exactly Match Count** is the number of queries that exactly target this cuboid.
+
+   **Row Count** is the total row count of this cuboid across all segments.
+
+   **Rollup Rate** = (Row Count of the cuboid / Row Count of its parent cuboid) * 100%
+
+-  The center of the sunburst chart represents the base cuboid, whose '**Name**' consists of all '1's.
+
+For a leaf, the '**Name**' consists of both '0's and '1's.
+
+-    To focus on a leaf, click it; the view changes accordingly.
+
+     ![Leaf-Specify](/images/CubePlanner/Leaf-Specify.png)
+
+-    To focus on a leaf's parent, click the **center of the circle** (the part marked in yellow).
+
+![Leaf-Specify-Parent](/images/CubePlanner/Leaf-Specify-Parent.png)
+
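+To make the naming scheme concrete, here is a small sketch in Python (a hypothetical helper, not part of Kylin) that decodes a cuboid name into dimension names and computes the rollup rate defined above. The first nine dimension names come from the example; the remaining ones are placeholders:
+
+{% highlight Groff markup %}
+# Dimensions in HBase row-key order; DIM_10..DIM_16 are placeholders
+row_key_dims = ["MONTH_BEG_DT", "USER_CNTRY_SITE_CD", "RPRTD_SGMNT_VAL",
+                "RPRTD_IND", "SRVY_TYPE_ID", "QSTN_ID", "L1_L2_IND",
+                "PRNT_L1_ID", "TRANCHE_ID", "DIM_10", "DIM_11", "DIM_12",
+                "DIM_13", "DIM_14", "DIM_15", "DIM_16"]
+
+def decode_cuboid(name, dims):
+    # '1' keeps the dimension at that row-key position, '0' drops it
+    return [d for bit, d in zip(name, dims) if bit == '1']
+
+def rollup_rate(row_count, parent_row_count):
+    # Rollup Rate = (row count of a cuboid / row count of its parent) * 100%
+    return 100.0 * row_count / parent_row_count
+
+print(decode_cuboid("1111111110000000", row_key_dims))
+print("%.1f%%" % rollup_rate(120, 2400))  # hypothetical row counts -> 5.0%
+{% endhighlight %}
+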
+b. Click the '**Recommend**' button to view the '**Recommend Cuboid Distribution**' of the Cube.
+
+If the Cube is building, the '**Recommend**' function of Cube Planner cannot run; please wait until the Cube build finishes.
+
+-  The data is calculated by a dedicated algorithm, so it is normal to see this window.
+
+   ![Recommending](/images/CubePlanner/Recommending.png)
+
+-  The data is displayed in a sunburst chart.
+
+   - Each part is shown in a different color, determined by the query **frequency**.
+
+![CubePlanner_Recomm](/images/CubePlanner/CPRecom.png)
+
+- The '**Recommend Cuboid Distribution**' chart is operated in the same way as the '**Current Cuboid Distribution**' chart.
+- When hovering over the sunburst chart, the user can read the dimension names of a cuboid, as shown in the picture below.
+- The user can click **Export** to export the popular dimension combinations (top-N cuboids; currently Top 10, Top 50 and Top 100 options) of an existing Cube as a JSON file and download it to the local file system, either for record keeping or for importing the dimension combinations when creating a Cube in the future.
+
+![export cuboids](/images/CubePlanner/export_cuboids.png)
+
+c. Click the '**Optimize**' button to optimize the Cube.
+
+- A confirmation window pops up; click '**Yes**' to start the optimization, or '**Cancel**' to abort it.
+
+- The user can check the latest optimization time of the Cube on the Cube Planner tab.
+
+![column name+optimize time](/images/CubePlanner/column_name+optimize_time.png)
+
+- The user can receive an email notification for a Cube optimization job.
+
+![optimize email](/images/CubePlanner/optimize_email.png)
diff --git a/website/_docs/tutorial/use_cube_planner.md b/website/_docs/tutorial/use_cube_planner.md
index 3d5340d..457ff4b 100644
--- a/website/_docs/tutorial/use_cube_planner.md
+++ b/website/_docs/tutorial/use_cube_planner.md
@@ -21,7 +21,7 @@ Read more at [eBay tech blog](https://www.ebayinc.com/stories/blogs/tech/cube-pl
 
 ## Prerequisites
 
-To enable Dashboard on WebUI, you need to set `kylin.cube.cubeplanner.enabled=true` and other properties in`kylin.properties`
+To enable Dashboard on WebUI, you need to set `kylin.cube.cubeplanner.enabled=true` and other properties in `kylin.properties`
 
 {% highlight Groff markup %}
 kylin.cube.cubeplanner.enabled=true
@@ -45,14 +45,16 @@ kylin.metrics.monitor-enabled=true
 
   You should make sure the status of the Cube is '**READY**'
 
-  If the status of the Cube is '**DISABLED**', you will not be able to use the Cube planner. You should change the status of the Cube from '**DISABLED**' to '**READY**' by building it or enabling it if it has been built before.
+  If the status of the Cube is '**DISABLED**', you will not be able to use the Cube planner.
+
+  You should change the status of the Cube from '**DISABLED**' to '**READY**' by building it or enabling it if it has been built before.
 
 
 #### Step 3:
 
 a. Click the '**Planner**' button to view the '**Current Cuboid Distribution**' of the Cube.
 
-- The data will be displayed in Sunburst Chart. 
+- The data will be displayed in a **Sunburst Chart**. 
 
 - Each part refers to a cuboid and is shown in a different color, determined by the query **frequency** against this cuboid.
 
diff --git a/website/_docs/tutorial/use_dashboard.cn.md b/website/_docs/tutorial/use_dashboard.cn.md
new file mode 100644
index 0000000..03463ea
--- /dev/null
+++ b/website/_docs/tutorial/use_dashboard.cn.md
@@ -0,0 +1,99 @@
+---
+layout: docs-cn
+title:  Use Dashboard
+categories: tutorial
+permalink: /cn/docs/tutorial/use_dashboard.html
+---
+
+> Available since Apache Kylin v2.3.0
+
+# Dashboard
+
+As a project owner, do you want to know the usage metrics of your Cubes? How many queries hit your Cubes each day? What the average query latency is? Would you like to know the average Cube build time per GB of source data, which is very helpful for estimating the time cost of upcoming Cube build jobs? You can find all of this on the Kylin Dashboard.
+
+The Kylin Dashboard shows useful Cube usage statistics, which are very important to users.
+
+## Prerequisites
+
+To enable the Dashboard on the WebUI, make sure the following are set:
+* Set **kylin.web.dashboard-enabled=true** in **kylin.properties** (see the snippet below).
+* Set up the System Cubes according to this [tutorial](setup_systemcube.html).
+
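+For reference, the first item is a one-line change in **kylin.properties** (the metrics-collection properties needed by the System Cubes are covered in the linked tutorial):
+
+{% highlight Groff markup %}
+kylin.web.dashboard-enabled=true
+{% endhighlight %}
+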
+## How to use
+
+#### Step 1:
+
+Click the '**Dashboard**' button on the navigation bar.
+
+There are 9 boxes on this page for you to work with.
+
+Each box represents a different attribute: '**Time Period**', '**Total Cube Count**', '**Avg Cube Expansion**', '**Query Count**', '**Average Query Latency**', '**Job Count**', '**Average Build Time per MB**', '**Data grouped by Project**' and '**Data grouped by Time**'.
+
+![Kylin Dashboard](/images/Dashboard/QueryCount.jpg)
+
+#### Step 2:
+
+Click the calendar to modify the '**Time Period**'.
+
+![SelectPeriod](/images/Dashboard/SelectPeriod.png)
+
+- '**Time Period**' defaults to '**Last 7 Days**'.
+
+- There are **2** ways to modify the time period: *using a standard time period* or *customizing your own time period*.
+
+  1. To *use a standard time period*, click '**Last 7 Days**' to select only the data of the last 7 days, '**This Month**' for the data of the current month, or '**Last Month**' for the data of the previous month.
+
+  2. To *customize the time period*, click '**Custom Range**'.
+
+     There are **2** ways to customize it: *typing the dates in the text box* or *selecting the dates from the calendar*.
+
+     1. If you *type the dates in the text box*, make sure both dates are valid.
+     2. If you *select the dates from the calendar*, make sure you click two specific dates.
+
+- After modifying the time period, click '**Apply**' to make the change take effect, or '**Cancel**' to discard it.
+
+#### Step 3:
+
+The analytics now refresh and display on the same page (sensitive information in the screenshots has been blurred).
+
+- The figures for '**Total Cube Count**' and '**Avg Cube Expansion**' are shown in **blue**.
+
+  You can click '**More Details**' in these two boxes, and you will be directed to the '**Model**' page.
+
+- The figures for '**Query Count**', '**Average Query Latency**', '**Job Count**' and '**Average Build Time per MB**' are shown in **green**.
+
+  You can click these four boxes to get detailed information about the data you selected. The details are displayed as charts in the '**Data grouped by Project**' and '**Data grouped by Time**' boxes.
+
+  1. '**Query Count**' and '**Average Query Latency**'
+
+     Click '**Query Count**' for detailed information.
+
+     ![QueryCount](/images/Dashboard/QueryCount.jpg)
+
+     Click '**Average Query Latency**' for detailed information.
+
+     ![AVG-Query-Latency](/images/Dashboard/AVGQueryLatency.jpg)
+
+     You can click '**More Details**' in these two boxes, and you will be directed to the '**Insight**' page.
+
+  2. '**Job Count**' and '**Average Build Time per MB**'
+
+     Click '**Job Count**' for detailed information.
+
+     ![Job-Count](/images/Dashboard/JobCount.jpg)
+
+     Click '**Average Build Time per MB**' for detailed information.
+
+     ![AVG-Build-Time](/images/Dashboard/AVGBuildTimePerMB.jpg)
+
+     You can click '**More Details**' in these two boxes, and you will be directed to the '**Monitor**' page. It is normal to see 'Please wait...' in the browser.
+
+#### Step 4:
+
+**Advanced Operations**
+
+'**Data grouped by Project**' and '**Data grouped by Time**' display the data as charts.
+
+In '**Data grouped by Project**' there is a radio button named '**showValue**'; you can choose to display the numbers on the chart.
+
+In '**Data grouped by Time**' there is a single-select dropdown; you can choose to display the chart over different timelines.
diff --git a/website/_docs/tutorial/web.cn.md b/website/_docs/tutorial/web.cn.md
index c40f102..2aecc1b 100644
--- a/website/_docs/tutorial/web.cn.md
+++ b/website/_docs/tutorial/web.cn.md
@@ -103,7 +103,7 @@ Kylin's web UI provides a simple pivot and visualization analysis tool for users
 
    Note: the line chart is available only when at least one dimension from the Hive table has a column with a real "Date" data type.
 
-* Bar Chart:
+* Pie Chart:
 
    ![](/images/tutorial/1.5/Kylin-Web-Tutorial/15 bar-chart.png)
 
diff --git a/website/_docs23/howto/howto_use_restapi.cn.md b/website/_docs23/howto/howto_use_restapi.cn.md
index a3399d0..dae1431 100644
--- a/website/_docs23/howto/howto_use_restapi.cn.md
+++ b/website/_docs23/howto/howto_use_restapi.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs23-cn
-title:  Use RESTful API
+title:  RESTful API
 categories: howto
 permalink: /cn/docs23/howto/howto_use_restapi.html
 since: v0.7.1
diff --git a/website/_docs23/install/kylin_aws_emr.cn.md b/website/_docs23/install/kylin_aws_emr.cn.md
index a7b5274..e40caff 100644
--- a/website/_docs23/install/kylin_aws_emr.cn.md
+++ b/website/_docs23/install/kylin_aws_emr.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs23-cn
-title:  "在 AWS EMR 上 安装 Kylin"
+title:  "在 AWS EMR 上安装 Kylin"
 categories: install
 permalink: /cn/docs23/install/kylin_aws_emr.html
 ---
diff --git a/website/_docs23/install/kylin_cluster.cn.md b/website/_docs23/install/kylin_cluster.cn.md
index 9250e95..467a44a 100644
--- a/website/_docs23/install/kylin_cluster.cn.md
+++ b/website/_docs23/install/kylin_cluster.cn.md
@@ -1,6 +1,6 @@
 ---
 layout: docs23-cn
-title:  "Cluster 模式下部署"
+title:  "集群模式部署"
 categories: install
 permalink: /cn/docs23/install/kylin_cluster.html
 ---
diff --git a/website/_docs23/install/manual_install_guide.cn.md b/website/_docs23/install/manual_install_guide.cn.md
deleted file mode 100644
index ba95bc5..0000000
--- a/website/_docs23/install/manual_install_guide.cn.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-layout: docs23-cn
-title:  "手动安装指南"
-categories: 安装
-permalink: /cn/docs23/install/manual_install_guide.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Introduction
-
-In most cases our automated script, described in the [Installation Guide](./index.html), can help you launch Kylin in your Hadoop sandbox or even on your Hadoop cluster. But in case the deployment script fails, this document serves as a reference guide to fix your problems.
-
-It basically explains each step in the automated script. We assume you are already very familiar with Hadoop operations on Linux.
-
-## Prerequisites
-* Copy the Kylin binary package to your local machine and extract it, then reference it with $KYLIN_HOME:
-`export KYLIN_HOME=/path/to/kylin`
-`cd $KYLIN_HOME`
-
-### Start and stop Kylin
-
-Start Kylin with
-
-`./bin/kylin.sh start`
-
-and stop it with
-
-`./bin/kylin.sh stop`