Posted to commits@kylin.apache.org by sh...@apache.org on 2018/09/18 23:59:18 UTC

[kylin] 02/02: Update document for v2.5

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 20f7e1bba4284b82f62975bb5dea1fdc4d6fb32b
Author: shaofengshi <sh...@apache.org>
AuthorDate: Tue Sep 18 18:14:13 2018 +0800

    Update document for v2.5
---
 website/_dev/howto_hbase_branches.cn.md            |    9 +-
 website/_dev/howto_hbase_branches.md               |   13 +-
 website/_dev/howto_release.cn.md                   |    9 +-
 website/_dev/howto_release.md                      |    9 +-
 website/_docs/gettingstarted/events.md             |    1 +
 website/_docs/gettingstarted/faq.md                |  188 ++-
 website/_docs/howto/howto_upgrade.md               |    7 +
 website/_docs/index.cn.md                          |    8 +-
 website/_docs/index.md                             |    8 +-
 website/_docs/install/advance_settings.cn.md       |   27 +-
 website/_docs/install/advance_settings.md          |   34 +-
 website/_docs/release_notes.md                     |  109 +-
 website/_docs/tutorial/hybrid.cn.md                |   12 +-
 website/_docs/tutorial/hybrid.md                   |   33 +-
 website/_docs/tutorial/setup_jdbc_datasource.cn.md |    6 +-
 website/_docs/tutorial/setup_jdbc_datasource.md    |    4 +-
 website/_docs16/gettingstarted/best_practices.md   |   27 -
 website/_docs16/gettingstarted/concepts.md         |   64 -
 website/_docs16/gettingstarted/events.md           |   24 -
 website/_docs16/gettingstarted/faq.md              |  119 --
 website/_docs16/gettingstarted/terminology.md      |   25 -
 website/_docs16/howto/howto_backup_metadata.md     |   60 -
 .../_docs16/howto/howto_build_cube_with_restapi.md |   53 -
 website/_docs16/howto/howto_cleanup_storage.md     |   22 -
 website/_docs16/howto/howto_jdbc.md                |   92 --
 website/_docs16/howto/howto_ldap_and_sso.md        |  128 --
 website/_docs16/howto/howto_optimize_build.md      |  190 ---
 website/_docs16/howto/howto_optimize_cubes.md      |  212 ----
 website/_docs16/howto/howto_update_coprocessor.md  |   14 -
 website/_docs16/howto/howto_upgrade.md             |   66 -
 website/_docs16/howto/howto_use_beeline.md         |   14 -
 .../howto/howto_use_distributed_scheduler.md       |   16 -
 website/_docs16/howto/howto_use_restapi.md         | 1113 ----------------
 website/_docs16/howto/howto_use_restapi_in_js.md   |   46 -
 website/_docs16/index.cn.md                        |   26 -
 website/_docs16/index.md                           |   57 -
 website/_docs16/install/advance_settings.md        |   98 --
 website/_docs16/install/hadoop_evn.md              |   40 -
 website/_docs16/install/index.cn.md                |   46 -
 website/_docs16/install/index.md                   |   35 -
 website/_docs16/install/kylin_cluster.md           |   32 -
 website/_docs16/install/kylin_docker.md            |   10 -
 website/_docs16/install/manual_install_guide.cn.md |   48 -
 website/_docs16/release_notes.md                   | 1333 --------------------
 website/_docs16/tutorial/acl.cn.md                 |   35 -
 website/_docs16/tutorial/acl.md                    |   32 -
 website/_docs16/tutorial/create_cube.cn.md         |  129 --
 website/_docs16/tutorial/create_cube.md            |  198 ---
 website/_docs16/tutorial/cube_build_job.cn.md      |   66 -
 website/_docs16/tutorial/cube_build_job.md         |   67 -
 website/_docs16/tutorial/cube_streaming.md         |  219 ----
 website/_docs16/tutorial/flink.md                  |  249 ----
 website/_docs16/tutorial/kylin_sample.md           |   21 -
 website/_docs16/tutorial/odbc.cn.md                |   34 -
 website/_docs16/tutorial/odbc.md                   |   49 -
 website/_docs16/tutorial/powerbi.cn.md             |   56 -
 website/_docs16/tutorial/powerbi.md                |   54 -
 website/_docs16/tutorial/squirrel.md               |  112 --
 website/_docs16/tutorial/tableau.cn.md             |  116 --
 website/_docs16/tutorial/tableau.md                |  113 --
 website/_docs16/tutorial/tableau_91.cn.md          |   51 -
 website/_docs16/tutorial/tableau_91.md             |   50 -
 website/_docs16/tutorial/web.cn.md                 |  134 --
 website/_docs16/tutorial/web.md                    |  123 --
 website/archive/docs16.tar.gz                      |  Bin 0 -> 91609 bytes
 website/download/index.cn.md                       |   12 +
 website/download/index.md                          |   19 +-
 67 files changed, 407 insertions(+), 6019 deletions(-)

diff --git a/website/_dev/howto_hbase_branches.cn.md b/website/_dev/howto_hbase_branches.cn.md
index 4b0319a..9605dab 100644
--- a/website/_dev/howto_hbase_branches.cn.md
+++ b/website/_dev/howto_hbase_branches.cn.md
@@ -11,10 +11,11 @@ permalink: /cn/development/howto_hbase_branches.html
 
 The branching design is
 
-- The `master` branch compiles with HBase 0.98 and is also the main branch for development. All bug fixes and new features are committed to `master` only.
-- The `master-hbase1.x` branch compiles with HBase 1.x. This branch is created by applying one patch on top of `master`. In other words, `master-hbase1.x` = `master` + `a patch to support HBase 1.x`.
-- Similarly, there is `master-cdh5.7` = `master-hbase1.x` + `a patch to support CDH 5.7`.
-- No code changes should happen directly on `master-hbase1.x` or `master-cdh5.7` (apart from the last commit on the branch that adapts HBase calls).
+- The `master` branch compiles with HBase 1.1 and is also the main branch for development. All bug fixes and new features are committed to `master` only.
+- The `master-hadoop3.1` branch compiles with Hadoop 3.1 and HBase 2.x. This branch is created by applying several patches on top of `master`. In other words, `master-hadoop3.1` = `master` + `patches to support HBase 2.x`.
+- The `master-hbase0.98` branch is deprecated; HBase 0.98 users are advised to upgrade.
+- There are also several Kylin release maintenance branches, such as `2.5.x` and `2.4.x`. If you submit a patch or pull request, please tell the reviewer which versions need it; the reviewer will merge the patch into those branches besides `master`.
+- No code changes should happen directly on `master-hadoop3.1` (apart from the last commit on the branch that adapts HBase calls).
 
 There is a script that helps keep these branches in sync: `dev-support/sync_hbase_cdh_branches.sh`.
 
diff --git a/website/_dev/howto_hbase_branches.md b/website/_dev/howto_hbase_branches.md
index f871b23..48a2dd5 100644
--- a/website/_dev/howto_hbase_branches.md
+++ b/website/_dev/howto_hbase_branches.md
@@ -1,20 +1,21 @@
 ---
 layout: dev
-title:  How to Maintain HBase Branches
+title:  How to Maintain Hadoop/HBase Branches
 categories: development
 permalink: /development/howto_hbase_branches.html
 ---
 
-### Kylin Branches for Different HBase Versions
+### Kylin Branches for Different Hadoop/HBase Versions
 
 Because the HBase API diverges across versions and vendors, different code branches have to be maintained for different HBase versions.
 
 The branching design is
 
-- The `master` branch compiles with HBase 0.98, and is also the main branch for development. All bug fixes and new features commits to `master` only.
-- The `master-hbase1.x` branch compiles with HBase 1.x. This branch is created by applying one patch on top of `master`. In other word, `master-hbase1.x` = `master` + `a patch to support HBase 1.x`.
-- Similarly, there is `master-cdh5.7` = `master-hbase1.x` + `a patch to support CDH 5.7`.
-- No code changes should happen on `master-hbase1.x` and `master-cdh5.7` directly (apart from the last commit on the branch that adapts HBase calls).
+- The `master` branch compiles with HBase 1.1, and is also the main branch for development. All bug fixes and new features are committed to `master` only.
+- The `master-hadoop3.1` branch compiles with Hadoop 3.1 and HBase 2.x. This branch is created by applying several patches on top of `master`. In other words, `master-hadoop3.1` = `master` + `patches to support Hadoop 3 and HBase 2.x`.
+- The `master-hbase0.98` branch is deprecated;
+- There are several release maintenance branches like `2.5.x` and `2.4.x`. If you have a PR or patch, please let the reviewer know which branches it should be applied to; the reviewer will cherry-pick the patch to those branches after it is merged into `master`.
+- No code changes should happen on `master-hadoop3.1` or `master-hbase0.98` directly (apart from the last commit on the branch that adapts HBase calls).
 
 There is a script that helps keep these branches in sync: `dev-support/sync_hbase_cdh_branches.sh`.
 
diff --git a/website/_dev/howto_release.cn.md b/website/_dev/howto_release.cn.md
index 69a7117..e7b1476 100644
--- a/website/_dev/howto_release.cn.md
+++ b/website/_dev/howto_release.cn.md
@@ -18,8 +18,10 @@ _For users in China, please use a proxy with caution to avoid potential firewall issues._
 * Apache Nexus (maven 仓库): [https://repository.apache.org](https://repository.apache.org)  
 * Apache Kylin dist 仓库: [https://dist.apache.org/repos/dist/dev/kylin](https://dist.apache.org/repos/dist/dev/kylin)  
 
-## Install Java 8 and Maven 3.5.3+
-Before you start, make sure Java 8 and Maven 3.5.3 or above are installed.
+## Software requirements
+* Java 8 or above;
+* Maven 3.5.3 or above;
+* If you make the release on Mac OS X, please install GNU tar, following [this post](http://macappstore.org/gnu-tar/).
 
 ## Set up GPG signing keys  
 Follow the instructions at [http://www.apache.org/dev/release-signing](http://www.apache.org/dev/release-signing) to create a key pair  
@@ -383,7 +385,8 @@ $ mkdir -p ~/dist/release
 $ cd ~/dist/release
 $ svn co https://dist.apache.org/repos/dist/release/kylin
 $ cd kylin
-$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN apache-kylin-X.Y.Z
+$ mkdir apache-kylin-X.Y.Z
+$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN/apache-kylin* apache-kylin-X.Y.Z/
 $ svn add apache-kylin-X.Y.Z
 
 # Check in.
diff --git a/website/_dev/howto_release.md b/website/_dev/howto_release.md
index 9eda756..9821f25 100644
--- a/website/_dev/howto_release.md
+++ b/website/_dev/howto_release.md
@@ -18,8 +18,10 @@ Make sure you have an available account and privileges for the following applications:
 * Apache Nexus (maven repo): [https://repository.apache.org](https://repository.apache.org)  
 * Apache Kylin dist repo: [https://dist.apache.org/repos/dist/dev/kylin](https://dist.apache.org/repos/dist/dev/kylin)  
 
-## Install Java 8 and Maven 3.5.3+
-Make sure you have Java 8 and Maven 3.5.3 or above installed.
+## Software requirements
+* Java 8 or above; 
+* Maven 3.5.3 or above;
+* If you're on Apple Mac OS X, please install GNU tar; see [this post](http://macappstore.org/gnu-tar/).
 
 ## Setup GPG signing keys  
 Follow instructions at [http://www.apache.org/dev/release-signing](http://www.apache.org/dev/release-signing) to create a key pair  
@@ -386,7 +388,8 @@ $ mkdir -p ~/dist/release
 $ cd ~/dist/release
 $ svn co https://dist.apache.org/repos/dist/release/kylin
 $ cd kylin
-$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN apache-kylin-X.Y.Z
+$ mkdir apache-kylin-X.Y.Z
+$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN/apache-kylin* apache-kylin-X.Y.Z/
 $ svn add apache-kylin-X.Y.Z
 
 # Check in.
diff --git a/website/_docs/gettingstarted/events.md b/website/_docs/gettingstarted/events.md
index f617907..3b35c19 100644
--- a/website/_docs/gettingstarted/events.md
+++ b/website/_docs/gettingstarted/events.md
@@ -7,6 +7,7 @@ permalink: /docs/gettingstarted/events.html
 
 __Conferences__
 
+* [Refactor your data warehouse with mobile analytics products](https://conferences.oreilly.com/strata/strata-ny/public/schedule/speaker/313314) by Zhi Zhu and Luke Han at Strata Data Conference, New York, September 11–13, 2018
 * [Apache Kylin on HBase: Extreme OLAP engine for big data](https://www.slideshare.net/ShiShaoFeng1/apache-kylin-on-hbase-extreme-olap-engine-for-big-data) by Shaofeng Shi at [HBaseCon Asia 2018](https://hbase.apache.org/hbaseconasia-2018/)
 * [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
 * [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
diff --git a/website/_docs/gettingstarted/faq.md b/website/_docs/gettingstarted/faq.md
index 751a4ad..26bce81 100644
--- a/website/_docs/gettingstarted/faq.md
+++ b/website/_docs/gettingstarted/faq.md
@@ -6,7 +6,169 @@ permalink: /docs/gettingstarted/faq.html
 since: v0.6.x
 ---
 
-#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
+#### Is Kylin a generic SQL engine for big data?
+
+  * No, Kylin is an OLAP engine with a SQL interface. The SQL queries need to match the pre-defined OLAP model.
+
+#### How to compare Kylin with other SQL engines like Hive, Presto, Spark SQL, Impala?
+
+  * They answer queries in different ways. Kylin is not a replacement for them, but a supplement (a query accelerator). Many users run Kylin together with other SQL engines. For frequently used query patterns, building cubes can greatly improve performance and also offload cluster workloads; for less-queried patterns or ad-hoc queries, other engines are more flexible.
+
+#### What's a typical scenario to use Apache Kylin?
+
+  * Kylin can be the best option if you have a huge table (e.g., >100 million rows) joined with lookup tables, your queries need to finish at second-level latency (dashboards, interactive reports, business intelligence, etc.), and there can be dozens or hundreds of concurrent users.
+
+#### How large a data scale can Kylin support? How about the performance?
+
+  * Kylin delivers second-level query latency on TB- to PB-scale datasets. This has been verified by users like eBay, Meituan, and Toutiao. Take Meituan's case as an example (as of 2018-08): 973 cubes, 3.8 million queries per day, 8.9 trillion rows of raw data, and 971 TB of total cube size (the original data is bigger); 50% of queries finished in < 0.5 seconds, and 90% in < 1.2 seconds.
+
+#### Who are using Apache Kylin?
+
+  * Please check Kylin's [powered-by page](https://kylin.apache.org/community/poweredby.html).
+
+#### What's the expansion rate of Cube (compared with raw data)?
+
+  * It depends on a couple of factors, for example, the number of dimensions and measures, dimension cardinality, cuboid count, compression algorithm, etc. You can optimize cube expansion in many ways to control the size.
+
+#### How to compare Kylin with Druid?
+
+  * Druid is more suitable for real-time analysis, while Kylin focuses more on OLAP cases. Druid has good integration with Kafka for real-time streaming; Kylin fetches data from Hive or Kafka in batches. Kylin's real-time capability is still under development.
+
+  * Many internet service providers host both Druid and Kylin, serving different purposes (real-time and historical).
+
+  * Some other highlights of Kylin: support for star and snowflake schemas; ANSI SQL; JDBC/ODBC for BI integrations. Kylin also has a web GUI with LDAP/SSO user authentication.
+
+  * For more information, please do a search or check this [mail thread](https://mail-archives.apache.org/mod_mbox/kylin-dev/201503.mbox/%3CCAKmQrOY0fjZLUU0MGo5aajZ2uLb3T0qJknHQd+Wv1oxd5PKixQ@mail.gmail.com%3E).
+
+#### How to quick start with Kylin?
+
+  * To get a quick start, you can run Kylin in a Hadoop sandbox VM or in the cloud; for example, start a small AWS EMR or Azure HDInsight cluster and then install Kylin on one of the nodes.
+
+#### How many nodes of the Hadoop are needed to run Kylin?
+
+  * Kylin can run on a Hadoop cluster of anywhere from a couple of nodes to thousands, depending on how much data you have. The architecture is horizontally scalable.
+
+  * Because most of the computation happens in Hadoop (MapReduce/Spark/HBase), usually you just need to install Kylin on a couple of nodes.
+
+#### How many dimensions can be in a cube?
+
+  * The maximum number of physical dimensions (excluding derived columns on lookup tables) in a cube is 63. If you can normalize some dimensions into lookup tables, then with derived dimensions you can create a cube with more than 100 dimensions.
+
+  * But a cube with more than 30 physical dimensions is not recommended; you couldn't even save it in Kylin without optimizing the aggregation groups. Please search for "curse of dimensionality".
+
+#### Why do I get an error when running a "select *" query?
+
+  * The cube only has aggregated data, so all your queries should be aggregated queries ("GROUP BY"). You can write a query that groups by all dimensions to get a result close to the detailed data, but that is still not the raw data.
+
+  * To support connections from some BI tools, Kylin tries to answer "select *" queries, but please be aware that the result might not be what you expect. Please make sure each query sent to Kylin is aggregated.
+
+#### How can I query raw data from a cube?
+
+  * A cube is not the right option for raw data.
+
+But if you really need it, there are some workarounds: 1) add the primary key as a dimension, so that "group by pk" returns the raw data (see the sketch below); 2) configure Kylin to push the query down to another SQL engine like Hive, though the performance is not assured.
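+
+  For illustration, assuming a fact table FACT_ORDER with primary key ORDER_ID (hypothetical names), a query like the following returns one row per source record:
+
+  {% highlight Groff markup %}
+-- group by the primary key to reach raw-data granularity
+select ORDER_ID, sum(PRICE) as TOTAL_PRICE
+from FACT_ORDER
+group by ORDER_ID
+  {% endhighlight %}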
+
+#### What is the UHC dimension?
+
+  * UHC means Ultra High Cardinality. Cardinality is the number of distinct values of a dimension. Usually, a dimension's cardinality ranges from tens to millions; if it is above a million, we call it a UHC dimension, for example, user id, cell number, etc.
+
+  * Kylin supports UHC dimensions, but you need to pay attention to them, especially the encoding and the cuboid combinations; they may make your cube very large and your queries slow.
+
+#### Can I specify a cube to answer my SQL statements?
+
+  * No, you couldn't; the cube is transparent to the end user. If you have multiple cubes for the same data model, separating them into different projects is a good idea.
+
+#### Is there a REST API to create the project/model/cube?
+
+  * Yes, but they are private APIs that are inclined to change across versions (without notification). By design, Kylin expects the user to create a new project/model/cube in Kylin's web GUI.
+
+#### Where does the cube locate, can I directly read cube from HBase without going through Kylin API?
+
+  * The cube is stored in HBase. Each cube segment is an HBase table. The dimension values are composed into the row key, and the measures are serialized in columns. To improve storage efficiency, both dimension and measure values are encoded to bytes; Kylin decodes the bytes back to the original values after fetching them from HBase. Without Kylin's metadata, the HBase tables are not readable.
+
+#### How to encrypt Cube Data?
+
+  * You can enable encryption on the HBase side. Refer to https://hbase.apache.org/book.html#hbase.encryption.server for more details.
+
+#### How to schedule the cube build at a fixed frequency, in an automatic way?
+
+  * Kylin doesn't have a built-in scheduler for this. You can trigger builds through the REST API from an external scheduler service, like a Linux cron job or Apache Airflow; see the sketch below.
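+
+  For example, a cron job could invoke Kylin's build API with curl (a minimal sketch; the host, credentials, cube name, and end time in epoch milliseconds are placeholders to adapt):
+
+  {% highlight Groff markup %}
+# trigger an incremental build of cube "my_cube"; schedule this command via crontab
+curl -X PUT -u ADMIN:KYLIN -H "Content-Type: application/json" \
+  -d '{"startTime": 0, "endTime": 1537228800000, "buildType": "BUILD"}' \
+  http://localhost:7070/kylin/api/cubes/my_cube/rebuild
+  {% endhighlight %}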
+
+#### Does Kylin support Hadoop 3 and HBase 2.0?
+
+  * Yes; from v2.5.0 on, Kylin provides a binary package for Hadoop 3 and HBase 2.
+
+#### The Cube is ready, but why does the table not appear in the "Insight" tab?
+
+  * Make sure the "kylin.server.cluster-servers" property in `conf/kylin.properties` lists EVERY Kylin node, all job and query nodes; Kylin nodes use this configuration to notify each other to flush caches. Also please ensure the network among them is healthy.
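+
+  For example (host names are illustrative):
+
+  {% highlight Groff markup %}
+kylin.server.cluster-servers=job-node:7070,query-node-1:7070,query-node-2:7070
+  {% endhighlight %}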
+
+#### What should I do if I encounter a "java.lang.NoClassDefFoundError" error?
+
+  * Kylin doesn't ship the Hadoop jars, because they should already exist on the Hadoop node. So Kylin tries to find them and add them to its classpath. Due to Hadoop's complexity, there might be cases where a jar isn't found; in such a case, please look at "bin/find-*-dependency.sh" and "bin/kylin.sh" and modify them to fit your environment.
+
+#### How to query Kylin in Python?
+
+  * Please check [https://github.com/Kyligence/kylinpy](https://github.com/Kyligence/kylinpy).
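+
+  For example, kylinpy provides a SQLAlchemy dialect; a minimal sketch (credentials, host, and project are placeholders; the table is from the sample cube) looks like:
+
+  {% highlight Groff markup %}
+from sqlalchemy import create_engine
+
+# kylinpy connection URL format: kylin://<username>:<password>@<host>:<port>/<project>
+engine = create_engine('kylin://ADMIN:KYLIN@localhost:7070/learn_kylin')
+rows = engine.execute('SELECT count(*) FROM KYLIN_SALES')
+print(rows.fetchall())
+  {% endhighlight %}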
+
+#### How to add a dimension/measure to a cube?
+
+  * Once a cube is built, its structure couldn't be modified. To add a dimension/measure, you need to clone a new cube and then add it there.
+
+When the new cube is built, please disable or drop the old one.
+
+If you can accept the absence of the new dimensions for historical data, you can build the new cube starting from the end time of the old cube, and then create a hybrid model over the old and new cubes.
+
+#### The query result doesn't exactly match that in Hive; what are the possible reasons?
+
+  * Possible reasons:
+a) Source data changed in Hive after built into the cube;
+b) Cube's time range is not the same as in Hive;
+c) Another cube answered your query;
+d) The data model has inner joins, but the query doesn't join all tables;
+e) Cube has some approximate measures like HyperLogLog, TopN;
+f) In v2.3 and before, Kylin may have data loss when fetching from Hive, see KYLIN-3388.
+
+#### What to do if the source data changed after being built into the cube?
+
+  * You need to refresh the cube. If the cube is partitioned, you can refresh only certain segments.
+
+#### What is the possible reason for getting the error ‘bulk load aborted with some files not yet loaded’ in the ‘Load HFile to HBase Table’ step?
+
+  * Kylin doesn't have permission to execute HBase CompleteBulkLoad. Check whether the current user (the one that runs the Kylin service) has permission to access HBase.
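+
+  For example, if HBase authorization is enabled and the Kylin service runs as user "kylin" (an assumed account name), permissions can be granted in the HBase shell:
+
+  {% highlight Groff markup %}
+# in `hbase shell`: grant read/write/exec/create/admin permissions to the service account
+grant 'kylin', 'RWXCA'
+  {% endhighlight %}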
+
+#### Why can't `bin/sample.sh` create the `/tmp/kylin` folder on HDFS?
+
+  * Run `./bin/find-hadoop-conf-dir.sh -v`, check the error message, and then check your environment according to the reported information.
+
+#### In Chrome, web console shows net::ERR_CONTENT_DECODING_FAILED, what should I do?
+
+  * Edit `$KYLIN_HOME/tomcat/conf/server.xml`, find the "compress=on" setting, and change it to "off".
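+
+  The setting lives on the HTTP Connector element; in a stock Tomcat server.xml the attribute is typically named "compression", for example (other attributes elided):
+
+  {% highlight Groff markup %}
+<Connector port="7070" protocol="HTTP/1.1"
+           compression="off" ... />
+  {% endhighlight %}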
+
+#### How to configure one cube to be built using a chosen YARN queue?
+
+  * Set the YARN queue in the cube's Configuration Overwrites page; then it will affect only that cube. Here are the three parameters:
+
+  {% highlight Groff markup %}
+kylin.engine.mr.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
+kylin.source.hive.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
+kylin.engine.spark-conf.spark.yarn.queue=YOUR_QUEUE_NAME
+  {% endhighlight %}
+
+#### How to add a new JDBC data source dialect?
+
+  * It is easy to add a new type of JDBC data source. You can follow these steps:
+
+1) Add the dialect in source-hive/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
+
+2) Implement a new IJdbcMetadata if the metadata fetching of the database that you want to add differs from the others, and then register it in JdbcMetadataFactory
+
+3) You may need to customize the SQL for creating/dropping tables in JdbcExplorer for the database that you want to add.
+
+#### How to ask a question?
+
+  * Check the Kylin documents first; a Google search can also help. Sometimes the question has already been answered, so you don't need to ask again. If there is no match, please send your question to the Apache Kylin user mailing list: user@kylin.apache.org. You need to drop an email to user-subscribe@kylin.apache.org to subscribe if you haven't done so. In the email, please provide your Kylin and Hadoop versions, the specific error logs (as much as possible), and the steps to reproduce the problem.  
+
+#### "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
 
  * Kylin needs many dependent jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but Kylin doesn't ship them. It seeks these jars on your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The found jars' paths are appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this). But in some Hadoop distributions (like AWS EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPA [...]
 
@@ -22,12 +184,12 @@ since: v0.6.x
   export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
   {% endhighlight %}
 
-#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
+#### Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
 
   * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the de [...]
 
 
-#### 3. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
+#### How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
 
   * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
 
@@ -42,16 +204,16 @@ since: v0.6.x
   {% endhighlight %}
 
 
-#### 4. SUM(field) returns a negtive result while all the numbers in this field are > 0
+#### SUM(field) returns a negative result while all the numbers in this field are > 0
  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need rebuilding). Keep in mind that, always declare as BIGINT in hive for an integer column which  [...]
 
-#### 5. Why Kylin need extract the distinct columns from Fact Table before building cube?
+#### Why does Kylin need to extract the distinct columns from the fact table before building the cube?
  * Kylin uses dictionaries to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values of each column.
 
-#### 6. Why Kylin calculate the HIVE table cardinality?
+#### Why does Kylin calculate the Hive table cardinality?
  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention, and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
 
-#### 7. How to add new user or change the default password?
+#### How to add a new user or change the default password?
  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
 
    {% highlight Groff markup %}
@@ -61,7 +223,7 @@ since: v0.6.x
  * The password hash for the pre-defined test users can be found in the profile "sandbox,testing" part; to change the default password, you need to generate a new hash and then update it here; please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
 
-#### 8. Using sub-query for un-supported SQL
+#### Using a sub-query for unsupported SQL
 
 {% highlight Groff markup %}
 Original SQL:
@@ -90,7 +252,7 @@ from (
 group by a.slr_sgmt
 {% endhighlight %}
 
-#### 9. Build kylin meet NPM errors (users in mainland China, please pay special attention to this issue)
+#### Building Kylin meets NPM errors (users in mainland China, please pay special attention to this issue)
 
  * Please add a proxy for your NPM:  
   `npm config set proxy http://YOUR_PROXY_IP`
@@ -98,14 +260,14 @@ group by a.slr_sgmt
  * Please update your local NPM repository to use a mirror of npmjs.org, like the Taobao NPM mirror:  
   [http://npm.taobao.org](http://npm.taobao.org)
 
-#### 10. Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
+#### Failed to run BuildCubeWithEngineTest, saying failed to connect to HBase while HBase is active
  * Users may get this error the first time they run the HBase client; please check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
 
-#### 11. Kylin JDBC driver returns a different Date/time than the REST API, seems it add the timezone to parse the date.
+#### Kylin JDBC driver returns a different Date/time than the REST API; it seems to apply the timezone when parsing the date
   * Please check the [post in mailing list](http://apache-kylin.74782.x6.nabble.com/JDBC-query-result-Date-column-get-wrong-value-td5370.html)
 
 
-#### 12. How to update the default password for 'ADMIN'?
+#### How to update the default password for 'ADMIN'?
  * By default, Kylin uses a simple, configuration-based user registry; the default administrator 'ADMIN' with password 'KYLIN' is hard-coded in `kylinSecurity.xml`. To modify the password, you need to first get the new password's encrypted value (with BCrypt), and then set it in `kylinSecurity.xml`. Here is a sample with password 'ABCDE'
   
 {% highlight Groff markup %}
@@ -141,7 +303,7 @@ Replace the origin encrypted password with the new one:
 
 Restart Kylin to take effect. If you have multiple Kylin servers as a cluster, do the same on each instance. 
 
-#### 13. What kind of data be left in 'kylin.env.hdfs-working-dir' ? We often execute kylin cleanup storage command, but now our working dir folder is about 300 GB size, can we delete old data manually?
+#### What kind of data is left in 'kylin.env.hdfs-working-dir'? We often execute the Kylin storage cleanup command, but now our working-dir folder is about 300 GB in size; can we delete old data manually?
 
 The data in 'hdfs-working-dir' ('hdfs:///kylin/kylin_metadata/' by default) includes intermediate files (which will be garbage-collected) and cuboid data (which won't be). The cuboid data is kept for future segment merges, as Kylin couldn't merge from HBase. If you're sure those segments won't be merged, you can move the data to other paths or even delete it.
 
diff --git a/website/_docs/howto/howto_upgrade.md b/website/_docs/howto/howto_upgrade.md
index 1110f21..e11edfa 100644
--- a/website/_docs/howto/howto_upgrade.md
+++ b/website/_docs/howto/howto_upgrade.md
@@ -19,6 +19,13 @@ Running as a Hadoop client, Apache Kylin's metadata and Cube data are persistend
 
 Below are versions specific guides:
 
+## Upgrade from 2.4 to 2.5.0
+
+* Kylin 2.5 needs Java 8; please upgrade Java if you're running Java 7.
+* Kylin metadata is compatible between 2.4 and 2.5; no migration is needed.
+* The Spark engine moves more steps from MapReduce to Spark; you may see a performance difference for the same cube after the upgrade.
+* The property `kylin.source.jdbc.sqoop-home` now needs to be the location of the Sqoop installation, not its "bin" subfolder; please modify it if you're using an RDBMS as the data source (see the example after this list). 
+* Cube Planner is enabled by default now; new cubes will be optimized by it on their first build. The system cube and dashboard still need manual enablement.
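+
+For example, with an HDP-style layout the property changes like this (the path is illustrative):
+
+{% highlight Groff markup %}
+# 2.4 and earlier:
+# kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+# 2.5:
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
+{% endhighlight %}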
 
 ## Upgrade from v2.1.0 to v2.2.0
 
diff --git a/website/_docs/index.cn.md b/website/_docs/index.cn.md
index c741e05..f30068f 100644
--- a/website/_docs/index.cn.md
+++ b/website/_docs/index.cn.md
@@ -12,10 +12,10 @@ permalink: /cn/docs/index.html
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documents of prior versions: 
-* [v2.3.x document](/cn/docs23/)
-* [v2.1.x and v2.2.x document](/cn/docs21/)
-* [v2.0.x document](/cn/docs20/)
-* [v1.6.x document](/cn/docs16/)
+* [v2.4 document](/cn/docs24/)
+* [v2.3 document](/cn/docs23/)
+* [v2.1 and v2.2 document](/cn/docs21/)
+* [v2.0 document](/cn/docs20/)
 * [Archive](/archive/)
 
 Installation 
diff --git a/website/_docs/index.md b/website/_docs/index.md
index 3f3a2e9..c16deb3 100644
--- a/website/_docs/index.md
+++ b/website/_docs/index.md
@@ -12,10 +12,10 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 This is the document for the latest released version (v2.5). Documents of prior versions: 
-* [v2.3.x document](/docs23)
-* [v2.1.x and v2.2.x document](/docs21/)
-* [v2.0.x document](/docs20/)
-* [v1.6.x document](/docs16/)
+* [v2.4 document](/docs24/)
+* [v2.3 document](/docs23)
+* [v2.1 and v2.2 document](/docs21/)
+* [v2.0 document](/docs20/)
 * [Archive](/archive/)
 
 Installation & Setup
diff --git a/website/_docs/install/advance_settings.cn.md b/website/_docs/install/advance_settings.cn.md
index ce42f2c..43de7ba 100644
--- a/website/_docs/install/advance_settings.cn.md
+++ b/website/_docs/install/advance_settings.cn.md
@@ -116,9 +116,12 @@ kylin.job.admin.dls=adminstrator-address
 ## Enable MySQL as Kylin metadata storage (beta)
 
 Kylin supports MySQL as the metadata storage; to enable this, you need to perform the following steps:
-<ol>
-<li>Create a new database named kylin in the MySQL database</li>
-<li>Edit `conf/kylin.properties`, and set the following parameters</li>
+
+* Install a MySQL server, e.g., v5.1.17;
+* Download and copy the MySQL JDBC connector "mysql-connector-java-<version>.jar" to the $KYLIN_HOME/ext folder (if the folder does not exist, create it yourself);
+* Create a new MySQL database dedicated to Kylin metadata, for example kylin_metadata;
+* Edit `conf/kylin.properties`, and set the following parameters:
+
 {% highlight Groff markup %}
 kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
 kylin.metadata.jdbc.dialect=mysql
@@ -127,11 +130,13 @@ kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
 kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
 kylin.metadata.jdbc.max-cell-size=1mb
 {% endhighlight %}
-The configuration items have the following meanings; `url`, `username`, and `password` are required. If the others are not configured, the default values will be used:
+
+More JDBC connection properties can be added in `kylin.metadata.url`; `url`, `username`, and `password` are required. If the others are not configured, the default values will be used:
+
 {% highlight Groff markup %}
-Url: JDBC url
-Username: JDBC username
-Password: JDBC password; if encryption is selected, please put the encrypted password here
+url: the JDBC connection URL
+username: JDBC user name
+password: JDBC password; if encryption is selected, please put the encrypted password here
 driverClassName: JDBC driver class name, the default value is com.mysql.jdbc.Driver
 maxActive: the maximum number of database connections, the default value is 5
 maxIdle: the maximum number of idle connections, the default value is 5
@@ -140,13 +145,13 @@ removeAbandoned: whether to automatically reclaim timed-out connections, the default value is true
 removeAbandonedTimeout: the timeout in seconds, the default is 300
 passwordEncrypted: whether the JDBC password is encrypted, the default is false
 {% endhighlight %}
-<li>(Optional) Encrypt the password in this way:</li>
+
+You can encrypt the JDBC connection password:
 {% highlight Groff markup %}
 cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
 java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
 {% endhighlight %}
-<li>Copy the JDBC connector jar to the $KYLIN_HOME/ext folder (if it does not exist, create it yourself)</li>
-<li>Start Kylin</li>
-</ol>
+
+* Start Kylin
 
 *Note: this feature is still in beta; please use it with caution*
\ No newline at end of file
diff --git a/website/_docs/install/advance_settings.md b/website/_docs/install/advance_settings.md
index e3a8307..595fb66 100644
--- a/website/_docs/install/advance_settings.md
+++ b/website/_docs/install/advance_settings.md
@@ -113,40 +113,42 @@ Restart Kylin server to take effect. To disable, set `mail.enabled` back to `
 Administrators will get notifications for all jobs. Modelers and analysts need to enter their email address into the "Notification List" at the first page of the cube wizard, and will then get notified for that cube.
 
 
-## Enable MySQL as Kylin metadata storage(Beta)
+## Enable MySQL as Kylin metadata storage (beta)
 
-Kylin supports MySQL as metadata storage; To enable this, you should perform the following steps: 
-<ol>
-<li>Create a new database named kylin in the MySQL database</li>
-<li>Edit `conf/kylin.properties`, set the following parameters:</li>
+Kylin can use MySQL as the metadata storage for scenarios where HBase is not the best option; to enable this, you can perform the following steps: 
+
+* Install a MySQL server, e.g., v5.1.17;
+* Create a new MySQL database for Kylin metadata, for example "kylin_metadata";
+* Download and copy MySQL JDBC connector "mysql-connector-java-<version>.jar" to $KYLIN_HOME/ext (if the folder does not exist, create it yourself);
+* Edit `conf/kylin.properties`, set the following parameters:
 {% highlight Groff markup %}
-kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
+kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password},driverClassName=com.mysql.jdbc.Driver
 kylin.metadata.jdbc.dialect=mysql
 kylin.metadata.jdbc.json-always-small-cell=true
 kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
 kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
 kylin.metadata.jdbc.max-cell-size=1mb
 {% endhighlight %}
-The configuration items have the following meanings, `url`, `username`, and `password` are required configuration items. If not configured, the default configuration items will be used:
+In "kylin.metadata.url" more configuration items can be added; The `url`, `username`, and `password` are required items. If not configured, the default configuration items will be used:
 {% highlight Groff markup %}
-Url: JDBC url
-Username: JDBC username
-Password: JDBC password, if encryption is selected, please write the encrypted password here;
+url: the JDBC connection URL;
+username: JDBC user name
+password: JDBC password, if encryption is selected, please put the encrypted password here;
 driverClassName: JDBC driver class name, the default value is com.mysql.jdbc.Driver
 maxActive: the maximum number of database connections, the default value is 5;
 maxIdle: the maximum number of idle connections, the default value is 5;
 maxWait: The maximum number of milliseconds to wait for connection. The default value is 1000.
 removeAbandoned: Whether to automatically reclaim timeout connections, the default value is true;
 removeAbandonedTimeout: the number of seconds in the timeout period, the default is 300;
-passwordEncrypted: Whether to encrypt the JDBC password, the default is false;
+passwordEncrypted: Whether the JDBC password is encrypted or not, the default is false;
 {% endhighlight %}
-<li>(Optional) Encrypt password in this way:</li>
+
+* You can encrypt your password:
 {% highlight Groff markup %}
 cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
 java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
 {% endhighlight %}
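+
+* To use the encrypted password, put the cipher text produced by the command above into `kylin.metadata.url` and add `passwordEncrypted` (a sketch; `{cipher_text}` is a placeholder):
+{% highlight Groff markup %}
+kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={cipher_text},passwordEncrypted=true
+{% endhighlight %}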
-<li>Copy the JDBC connector jar to $KYLIN_HOME/ext (if it does not exist, create it yourself)</li>
-<li>Start Kylin</li>
-</ol>
 
-*Note: The function is still in the test, it is recommended that you use it with caution*
+* Start Kylin
+
+**Note: The feature is in beta now.**
diff --git a/website/_docs/release_notes.md b/website/_docs/release_notes.md
index f92e9ea..558a378 100644
--- a/website/_docs/release_notes.md
+++ b/website/_docs/release_notes.md
@@ -15,10 +15,115 @@ or send to Apache Kylin mailing list:
 * User relative: [user@kylin.apache.org](mailto:user@kylin.apache.org)
 * Development relative: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
 
+## v2.5.0 - 2018-09-16
+_Tag:_ [kylin-2.5.0](https://github.com/apache/kylin/tree/kylin-2.5.0)
+This is a major release after 2.4, with 96 bug fixes and enhancements. Check [How to upgrade](/docs/howto/howto_upgrade.html).
+
+__New Feature__
+* [KYLIN-2565] - Support Hadoop 3.0
+* [KYLIN-3488] - Support MySQL as Kylin metadata storage
+
+__Improvement__
+* [KYLIN-2998] - Kill spark app when cube job was discarded
+* [KYLIN-3033] - Support HBase 2.0
+* [KYLIN-3071] - Add config to reuse dict to reduce dict size
+* [KYLIN-3094] - Upgrade zookeeper to 3.4.12
+* [KYLIN-3146] - Response code and exception should be standardised for cube checking
+* [KYLIN-3186] - Add support for partitioning columns that combine date and time (e.g. YYYYMMDDHHMISS)
+* [KYLIN-3250] - Upgrade jetty version to 9.3.22
+* [KYLIN-3259] - When a cube is deleted, remove it from the hybrid cube definition
+* [KYLIN-3321] - Set MALLOC_ARENA_MAX in script
+* [KYLIN-3355] - Improve the HTTP return code of Rest API
+* [KYLIN-3370] - Enhance segment pruning
+* [KYLIN-3384] - Allow setting REPLICATION_SCOPE on newly created tables
+* [KYLIN-3414] - Optimize the cleanup of project L2 cache
+* [KYLIN-3418] - User interface for hybrid model
+* [KYLIN-3419] - Upgrade to Java 8
+* [KYLIN-3421] - Improve job scheduler fetch performance
+* [KYLIN-3423] - Performance improvement in FactDistinctColumnsMapper
+* [KYLIN-3424] - Missing invoke addCubingGarbageCollectionSteps in the cleanup step for HBaseMROutput2Transition
+* [KYLIN-3427] - Convert to HFile in Spark
+* [KYLIN-3434] - Support prepare statement in Kylin server side
+* [KYLIN-3441] - Merge cube segments in Spark
+* [KYLIN-3442] - Fact distinct columns in Spark
+* [KYLIN-3449] - Should allow deleting a segment in NEW status
+* [KYLIN-3452] - Optimize spark cubing memory footprint
+* [KYLIN-3453] - Improve cube size estimation for TOPN, COUNT DISTINCT
+* [KYLIN-3454] - Fix potential thread-safe problem in ResourceTool
+* [KYLIN-3457] - Distribute by multiple columns if not set shard-by column
+* [KYLIN-3463] - Improve optimize job by avoiding creating empty output files on HDFS
+* [KYLIN-3464] - Less user confirmation
+* [KYLIN-3470] - Add cache for execute and execute_output to speed up list job api
+* [KYLIN-3471] - Merge dictionary and statistics on Yarn
+* [KYLIN-3472] - TopN merge in Spark engine performance tunning
+* [KYLIN-3475] - Make calcite case handling and quoting method more configurable.
+* [KYLIN-3478] - Enhance backwards compatibility
+* [KYLIN-3479] - Model can save when kafka partition date column not select
+* [KYLIN-3480] - Change the conformance of calcite from default to lenient
+* [KYLIN-3481] - Kylin Jdbc: Shaded dependencies should not be transitive
+* [KYLIN-3485] - Make unloading table more flexible
+* [KYLIN-3489] - Improve the efficiency of enumerating dictionary values
+* [KYLIN-3490] - For single column queries, only dictionaries are enough
+* [KYLIN-3491] - Improve the cube building process when using global dictionary
+* [KYLIN-3503] - Missing java.util.logging.config.file when starting kylin instance
+* [KYLIN-3507] - Query NPE when project is not found
+* [KYLIN-3509] - Allocate more memory for "Merge dictionary on yarn" step
+* [KYLIN-3510] - Correct sqoopHome at 'createSqoopToFlatHiveStep'
+* [KYLIN-3521] - Enable Cube Planner by default
+* [KYLIN-3539] - Hybrid segment overlap not cover some case
+* [KYLIN-3317] - Replace UUID.randomUUID with deterministic PRNG
+* [KYLIN-3436] - Refactor code related to loading hive/stream table
+
+__Bug fix__
+* [KYLIN-2522] - Compilation fails with Java 8 when upgrading to hbase 1.2.5
+* [KYLIN-2662] - NegativeArraySizeException in "Extract Fact Table Distinct Columns"
+* [KYLIN-2933] - Fix compilation against the Kafka 1.0.0 release
+* [KYLIN-3025] - kylin odbc error : {fn CONVERT} for bigint type in tableau 10.4
+* [KYLIN-3255] - Cannot save cube
+* [KYLIN-3258] - No check for duplicate cube name when creating a hybrid cube
+* [KYLIN-3379] - timestampadd bug fix and add test
+* [KYLIN-3382] - YARN job link wasn't displayed when job is running
+* [KYLIN-3385] - Error when have sum(1) measure
+* [KYLIN-3390] - QueryInterceptorUtil.queryInterceptors is not thread safe
+* [KYLIN-3391] - BadQueryDetector only detect first query
+* [KYLIN-3399] - Leaked lookup table in DictionaryGeneratorCLI#processSegment
+* [KYLIN-3403] - Querying sample cube with filter "KYLIN_CAL_DT.WEEK_BEG_DT >= CAST('2001-09-09' AS DATE)" returns unexpected empty result set
+* [KYLIN-3428] - java.lang.OutOfMemoryError: Requested array size exceeds VM limit
+* [KYLIN-3438] - mapreduce.job.queuename does not work at 'Convert Cuboid Data to HFile' Step
+* [KYLIN-3446] - Convert to HFile in spark reports ZK connection refused
+* [KYLIN-3451] - Cloned cube doesn't have Mandatory Cuboids copied
+* [KYLIN-3456] - Cube level's snapshot config does not work
+* [KYLIN-3458] - Enabling config kylin.job.retry will cause log info incomplete
+* [KYLIN-3461] - "metastore.sh refresh-cube-signature" not updating cube signature as expected
+* [KYLIN-3462] - "dfs.replication=2" and compression not work in Spark cube engine
+* [KYLIN-3476] - Fix TupleExpression verification when parsing sql
+* [KYLIN-3477] - Spark job size not available when deployMode is cluster
+* [KYLIN-3482] - Unclosed SetAndUnsetThreadLocalConfig in SparkCubingByLayer
+* [KYLIN-3483] - Imprecise comparison between double and integer division
+* [KYLIN-3492] - Wrong constant value in KylinConfigBase.getDefaultVarcharPrecision
+* [KYLIN-3500] - kylin 2.4 use jdbc datasource :Unknown column 'A.A.CRT_DATE' in 'where clause'
+* [KYLIN-3505] - DataType.getType wrong usage of cache
+* [KYLIN-3516] - Job status not updated after job discarded
+* [KYLIN-3517] - Couldn't update coprocessor on HBase 2.0
+* [KYLIN-3518] - Coprocessor reports NPE when execute a query on HBase 2.0
+* [KYLIN-3522] - PrepareStatement cache issue
+* [KYLIN-3525] - kylin.source.hive.keep-flat-table=true will delete data
+* [KYLIN-3529] - Prompt not friendly
+* [KYLIN-3533] - Can not save hybrid
+* [KYLIN-3534] - Failed at update cube info step
+* [KYLIN-3535] - "kylin-port-replace-util.sh" changed port but not uncomment it
+* [KYLIN-3536] - PrepareStatement cache issue when there are new segments built
+* [KYLIN-3538] - Automatic cube enabled functionality is not merged into 2.4.0
+* [KYLIN-3547] - DimensionRangeInfo: Unsupported data type boolean
+* [KYLIN-3550] - "kylin.source.hive.flat-table-field-delimiter" has extra "\"
+* [KYLIN-3551] - Spark job failed with "FileNotFoundException"
+* [KYLIN-3553] - Upgrade Tomcat to 7.0.90.
+* [KYLIN-3554] - Spark job failed but Yarn shows SUCCEED, causing Kylin move to next step
+* [KYLIN-3557] - PreparedStatement should be closed in JDBCResourceDAO#checkTableExists
 
 ## v2.4.1 - 2018-09-09
 _Tag:_ [kylin-2.4.1](https://github.com/apache/kylin/tree/kylin-2.4.1)
-This is a bug fix release after 2.4.0, with 22 bug fixes and enhancement. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. Check [How to upgrade](/docs/howto/howto_upgrade.html).
 
 __Improvement__
 * [KYLIN-3421] - Improve job scheduler fetch performance
@@ -28,7 +133,7 @@ __Improvement__
 * [KYLIN-3503] - Missing java.util.logging.config.file when starting kylin instance
 * [KYLIN-3507] - Query NPE when project is not found
 
-__Bug__
+__Bug fix__
 * [KYLIN-2662] - NegativeArraySizeException in "Extract Fact Table Distinct Columns
 * [KYLIN-3025] - kylin odbc error : {fn CONVERT} for bigint type in tableau 10.4
 * [KYLIN-3255] - Cannot save cube
diff --git a/website/_docs/tutorial/hybrid.cn.md b/website/_docs/tutorial/hybrid.cn.md
index a93b9c6..b812bb8 100644
--- a/website/_docs/tutorial/hybrid.cn.md
+++ b/website/_docs/tutorial/hybrid.cn.md
@@ -4,10 +4,10 @@ title:  Hybrid Model
 categories: 教程
 permalink: /cn/docs/tutorial/hybrid.html
 version: v1.2
-since: v0.7.1
+since: v2.5.0
 ---
 
-This tutorial will guide you to create a hybrid model. 
+This tutorial will guide you to create a hybrid model. For the concept of hybrid, please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/).
 
 ### I. Create a hybrid model
 One hybrid model can contain multiple cubes.
@@ -39,9 +39,9 @@ since: v0.7.1
 2. Click `Yes` to delete the hybrid model. 
 
 ### IV. Run a query
-After the hybrid model is created, you can run a query directly. 
+After the hybrid model is created, you can run a query directly. Because a hybrid has higher priority than a cube, queries that can hit the cube will be answered by the hybrid first and then delegated to the cubes.
 
-Click `Insight` at the top, and then enter your sql statement.
+Click `Insight` at the top, and then enter your SQL statement.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
-    
-Please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/) for other matters.
\ No newline at end of file
+
+*Please note: the hybrid model is not suitable for re-merging "bitmap" count distinct measures across cubes; be sure to include the date dimension in your queries.*
\ No newline at end of file
diff --git a/website/_docs/tutorial/hybrid.md b/website/_docs/tutorial/hybrid.md
index d81c196..fff16d2 100644
--- a/website/_docs/tutorial/hybrid.md
+++ b/website/_docs/tutorial/hybrid.md
@@ -3,45 +3,44 @@ layout: docs
 title: Hybrid Model
 categories: tutorial
 permalink: /docs/tutorial/hybrid.html
-since: v0.7.1
+since: v2.5.0
 ---
 
-This tutorial will guide you to create a Hybrid. 
+This tutorial will guide you to create a hybrid model. Regarding the concept of hybrid, please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/).
 
-### I. Create Hybrid Model
-One Hybrid model can be referenced by multiple cubes.
+### I. Create a hybrid model
+One hybrid model can contain multiple cubes.
 
 1. Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Hybrid`.
 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png)
 
-2. Enter a name for the Hybrid, then choose the model including cubes that you want to query, and then check the box before cube name, click > button to add cube(s) to hybrid.
+2. Enter a name for the hybrid, select the data model, check the boxes for the cubes that you want to add, and then click the > button to add the cubes to this hybrid.
 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/2 hybrid-name.png)
     
-*Note: If you want to change the model, you should remove all the cubes that you selected.* 
+*Note: If you want to change the data model, you need to remove all the cubes that you already selected.* 
 
-3. Click `Submit` and then select `Yes` to save the Hybrid model. After created, the Hybrid model will be shown in the left `Hybrids` list.
+3. Click `Submit` to save the hybrid model. After it is created, the hybrid model will be shown in the left `Hybrids` list.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 hybrid-created.png)
 
-### II. Update Hybrid Model
-1. Place the mouse over the Hybrid name, then click `Action` button, in the drop-down list select `Edit`. Then you can update Hybrid by adding(> button) or deleting(< button) cubes. 
+### II. Update a hybrid model
+1. Place the mouse over the hybrid name, click the `Action` button, and select `Edit` in the drop-down list. You can then update the hybrid by adding (> button) or deleting (< button) cubes. 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png)
 
-2. Click `Submit` and then select `Yes` to save the Hybrid model. 
+2. Click `Submit` to save the Hybrid model. 
 
-Now you only can view Hybrid details by click `Edit` button.
+Now you can only view hybrid details by clicking the `Edit` button.
 
-### III. Drop Hybrid Model
+### III. Drop a hybrid model
 1. Place the mouse over the Hybrid name, then click `Action` button, in the drop-down list select `Drop`. Then the window will pop up. 
 
-2. Click `Yes` to delete the Hybrid model. 
+2. Click `Yes` to drop the Hybrid model. 
 
 ### IV. Run Query
-After the Hybrid model is created, you can run a query directly. 
+After the hybrid model is created, you can run a query. As a hybrid has higher priority than its cubes, queries that would hit the cubes will first hit the hybrid model and then be delegated to the cubes. 
 
-Click `Insight` in top bar, and then input sql statement of you needs.
+Click `Insight` in the top bar, and input a SQL statement to execute.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
 
-
-Please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/) for other matters.
\ No newline at end of file
+*Please note: the hybrid model is not suitable for merging "bitmap" count distinct measures across cubes; please include the partition date as a group-by field in the SQL query.*
\ No newline at end of file
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.cn.md b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
index a3816f4..380bff7 100644
--- a/website/_docs/tutorial/setup_jdbc_datasource.cn.md
+++ b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
@@ -34,7 +34,7 @@ kylin.source.jdbc.driver=com.mysql.jdbc.Driver
 kylin.source.jdbc.dialect=mysql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -47,7 +47,7 @@ kylin.source.jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
 kylin.source.jdbc.dialect=mssql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -60,7 +60,7 @@ kylin.source.jdbc.driver=com.amazon.redshift.jdbc.Driver
 kylin.source.jdbc.dialect=default
 kylin.source.jdbc.user=user
 kylin.source.jdbc.pass=pass
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.default=8
 kylin.source.jdbc.filed-delimiter=|
 ```
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.md b/website/_docs/tutorial/setup_jdbc_datasource.md
index 8b0f38b..c845296 100644
--- a/website/_docs/tutorial/setup_jdbc_datasource.md
+++ b/website/_docs/tutorial/setup_jdbc_datasource.md
@@ -47,7 +47,7 @@ kylin.source.jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
 kylin.source.jdbc.dialect=mssql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -60,7 +60,7 @@ kylin.source.jdbc.driver=com.amazon.redshift.jdbc.Driver
 kylin.source.jdbc.dialect=default
 kylin.source.jdbc.user=user
 kylin.source.jdbc.pass=pass
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.default=8
 kylin.source.jdbc.filed-delimiter=|
 ```
diff --git a/website/_docs16/gettingstarted/best_practices.md b/website/_docs16/gettingstarted/best_practices.md
deleted file mode 100644
index 5c3a12d..0000000
--- a/website/_docs16/gettingstarted/best_practices.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-layout: docs16
-title:  "Community Best Practices"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/best_practices.html
-since: v1.3.x
----
-
-List of articles about Kylin best practices contributed by community. Some of them are from Chinese community. Many thanks!
-
-* [Apache Kylin在百度地图的实践](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
-
-* [Apache Kylin 大数据时代的OLAP利器](http://www.bitstech.net/2016/01/04/kylin-olap/)(网易案例)
-
-* [Apache Kylin在云海的实践](http://www.csdn.net/article/2015-11-27/2826343)(京东案例)
-
-* [Kylin, Mondrian, Saiku系统的整合](http://tech.youzan.com/kylin-mondrian-saiku/)(有赞案例)
-
-* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
-
-* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
-
-* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file
diff --git a/website/_docs16/gettingstarted/concepts.md b/website/_docs16/gettingstarted/concepts.md
deleted file mode 100644
index cf5ce07..0000000
--- a/website/_docs16/gettingstarted/concepts.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: docs16
-title:  "Technical Concepts"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/concepts.html
-since: v1.2
----
- 
-Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
-For domain terminology, please refer to: [Terminology](terminology.html)
-
-## CUBE
-* __Table__ - This is the definition of Hive tables that serve as the source of cubes; tables must be synced before cubes can be built.
-![](/images/docs/concepts/DataSource.png)
-
-* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and filter conditions.
-![](/images/docs/concepts/DataModel.png)
-
-* __Cube Descriptor__ - This describes the definition and settings of a cube instance: which data model to use, what dimensions and measures to have, how to partition into segments, how to handle auto-merge, etc.
-![](/images/docs/concepts/CubeDesc.png)
-
-* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor and consisting of one or more cube segments according to the partition settings.
-![](/images/docs/concepts/CubeInstance.png)
-
-* __Partition__ - The user can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments covering different date periods.
-![](/images/docs/concepts/Partition.png)
-
-* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. Once data changes in a given period, we can refresh the related segments to avoid rebuilding the whole cube.
-![](/images/docs/concepts/CubeSegment.png)
-
-* __Aggregation Group__ - Each aggregation group is a subset of dimensions and builds cuboids from the combinations within it. It aims at pruning cuboids for optimization.
-![](/images/docs/concepts/AggregationGroup.png)
-
-## DIMENSION & MEASURE
-* __Mandatory__ - This dimension type is used for cuboid pruning: if a dimension is specified as "mandatory", the combinations without that dimension are pruned.
-* __Hierarchy__ - This dimension type is used for cuboid pruning: if dimensions A, B, C form a "hierarchy" relation, only the combinations with A, AB or ABC are retained. 
-* __Derived__ - On lookup tables, some dimensions can be deduced from the PK, so there is a specific mapping between them and the FK of the fact table. Those dimensions are DERIVED and don't participate in cuboid generation.
-![](/images/docs/concepts/Dimension.png)
-
-* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is expensive to calculate, so an approximate algorithm, [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), is introduced to keep the error rate at a low level. 
-* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int and bigint are supported.
-* __Top N__ - For example, with this measure type, the user can easily get the specified number of top sellers/buyers, etc. 
-![](/images/docs/concepts/Measure.png)
-
-## CUBE ACTIONS
-* __BUILD__ - Given an interval of partition column, this action is to build a new cube segment.
-* __REFRESH__ - This action rebuilds a cube segment in a given partition period; it is used when the source table has changed.
-* __MERGE__ - This action merges multiple contiguous cube segments into a single one. It can be automated with the auto-merge settings in the cube descriptor.
-* __PURGE__ - Clears the segments under a cube instance. This only updates metadata and won't delete cube data from HBase.
-![](/images/docs/concepts/CubeAction.png)
-
-## JOB STATUS
-* __NEW__ - This denotes that a job has just been created.
-* __PENDING__ - This denotes that a job has been paused by the job scheduler and is waiting for resources.
-* __RUNNING__ - This denotes that a job is in progress.
-* __FINISHED__ - This denotes that a job has finished successfully.
-* __ERROR__ - This denotes that a job has been aborted with errors.
-* __DISCARDED__ - This denotes that a job has been cancelled by the end user.
-![](/images/docs/concepts/Job.png)
-
-## JOB ACTION
-* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
-* __DISCARD__ - No matter what status a job is in, the user can end it and release resources with the DISCARD action.
-![](/images/docs/concepts/JobAction.png)
diff --git a/website/_docs16/gettingstarted/events.md b/website/_docs16/gettingstarted/events.md
deleted file mode 100644
index 277d580..0000000
--- a/website/_docs16/gettingstarted/events.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-layout: docs16
-title:  "Events and Conferences"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/events.html
----
-
-__Conferences__
-
-* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
-* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
-* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015  [...]
-* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
-* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
-* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
-* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
-* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
-* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
-* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
-
-__Meetup__
-
-* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
-
diff --git a/website/_docs16/gettingstarted/faq.md b/website/_docs16/gettingstarted/faq.md
deleted file mode 100644
index 0ecb44e..0000000
--- a/website/_docs16/gettingstarted/faq.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-layout: docs16
-title:  "FAQ"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/faq.html
-since: v0.6.x
----
-
-#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat"
-
-  * Kylin needs many dependent jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but Kylin doesn't ship them. It seeks these jars from your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The paths of the found jars are appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this). But in some Hadoop distributions (like EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPATH`  [...]
-
-  * To fix this, find the hbase shell script (in the hbase/bin folder), and search for *HBASE_CLASSPATH*; check whether it overwrites the value like:
-
-  {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
-  {% endhighlight %}
-
-  * If so, change it to keep the original value, like:
-
-   {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
-  {% endhighlight %}
-
-#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
-
-  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the de [...]
-
-#### 3. Build cube failed due to "error check status"
-
-  * Check if `kylin.log` contains *yarn.resourcemanager.webapp.address:http://0.0.0.0:8088* and *java.net.ConnectException: Connection refused*
-  * If yes, the problem is that the resource manager address is not available in yarn-site.xml
-  * A workaround is to update `kylin.properties` and set `kylin.job.yarn.app.rest.check.status.url=http://YOUR_RM_NODE:8088/ws/v1/cluster/apps/${job_id}?anonymous=true` (see the check below)
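-
-  * For example, you can quickly verify that the resource manager REST endpoint is reachable before editing `kylin.properties` (hypothetical RM host):
-
-  {% highlight Groff markup %}
-  curl "http://YOUR_RM_NODE:8088/ws/v1/cluster/info"
-  {% endhighlight %}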
-
-#### 4. HBase cannot get master address from ZooKeeper on Hortonworks Sandbox
-   
-  * By default Hortonworks disables HBase; you'll have to start HBase from the Ambari homepage first.
-
-#### 5. Map Reduce Job information cannot display on Hortonworks Sandbox
-   
-  * Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
-
-#### 6. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
-
-  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
-
-  {% highlight Groff markup %}
-  I was able to deploy Kylin with following option in POM.
-  <hadoop2.version>2.5.0</hadoop2.version>
-  <yarn.version>2.5.0</yarn.version>
-  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
-  <zookeeper.version>3.4.5</zookeeper.version>
-  <hive.version>0.13.1</hive.version>
-  My Cluster is running on Cloudera Distribution CDH 5.2.0.
-  {% endhighlight %}
-
-
-#### 7. SUM(field) returns a negative result while all the numbers in this field are > 0
-  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type of "SUM(field)", while the aggregated value on this field may exceed the range of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need a rebuild). Keep in mind that you should always declare as BIGINT in Hive for an integer column which  [...]
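-
-  * A minimal sketch of that type change, assuming a Hive table `my_fact` with an integer column `price` (names are hypothetical):
-
-  {% highlight Groff markup %}
-  hive -e "ALTER TABLE my_fact CHANGE price price BIGINT;"
-  {% endhighlight %}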
-
-#### 8. Why does Kylin need to extract the distinct columns from the fact table before building a cube?
-  * Kylin uses dictionaries to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values of each column.
-
-#### 9. Why does Kylin calculate the Hive table cardinality?
-  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
-
-#### 10. How to add new user or change the default password?
-  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
-
-   {% highlight Groff markup %}
-   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-   {% endhighlight %}
-
-  * The password hashes for the pre-defined test users can be found in the profile "sandbox,testing" part; to change the default password, you need to generate a new hash and then update it there; please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
-
-#### 11. Using a sub-query for unsupported SQL
-
-{% highlight Groff markup %}
-Original SQL:
-select fact.slr_sgmt,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from ih_daily_fact fact
-inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-group by fact.slr_sgmt
-{% endhighlight %}
-
-{% highlight Groff markup %}
-Using sub-query
-select a.slr_sgmt,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then a.gmv else 0 end) as W36,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then a.gmv else 0 end) as W35
-from (
-    select fact.slr_sgmt as slr_sgmt,
-    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
-    sum(gmv) as gmv
-    from ih_daily_fact fact
-    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
-) a
-group by a.slr_sgmt
-{% endhighlight %}
-
-#### 12. Building Kylin meets NPM errors (users in mainland China, please pay special attention to this issue)
-
-  * Please add a proxy for your NPM:  
-  `npm config set proxy http://YOUR_PROXY_IP`
-
-  * Please update your local NPM repository to use a mirror of npmjs.org, like the Taobao NPM mirror:  
-  [http://npm.taobao.org](http://npm.taobao.org)
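-
-  * For example, to switch your registry to the Taobao mirror:
-
-  {% highlight Groff markup %}
-  npm config set registry https://registry.npm.taobao.org
-  {% endhighlight %}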
-
-#### 13. Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
-  * You may get this error the first time you run the hbase client. Please check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
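-
-  * For example, using the folder from the error trace above:
-
-  {% highlight Groff markup %}
-  mkdir -p /hadoop/hbase/local/jars
-  {% endhighlight %}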
-
-
-
-
diff --git a/website/_docs16/gettingstarted/terminology.md b/website/_docs16/gettingstarted/terminology.md
deleted file mode 100644
index 7c3a108..0000000
--- a/website/_docs16/gettingstarted/terminology.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: docs16
-title:  "Terminology"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/terminology.html
-since: v0.5.x
----
- 
-
-Here are some domain terms we are using in Apache Kylin; please check them for your reference.   
-They are basic knowledge of Apache Kylin, and will also help you understand the concepts, terms, and theory of Data Warehousing and Business Intelligence for analytics. 
-
-* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
-* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
-* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
-* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
-* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
-* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
-* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
-* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
-* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
-* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
-
-
-
diff --git a/website/_docs16/howto/howto_backup_metadata.md b/website/_docs16/howto/howto_backup_metadata.md
deleted file mode 100644
index 0d295aa..0000000
--- a/website/_docs16/howto/howto_backup_metadata.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-layout: docs16
-title:  Backup Metadata
-categories: howto
-permalink: /docs16/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties) you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved as an htable called `kylin_metadata`. You can scan the htable in hbase shell to check it out.
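-
-For example, to take a quick look at a few metadata rows (table name as configured above):
-
-{% highlight Groff markup %}
-hbase shell
-scan 'kylin_metadata', {LIMIT => 5}
-{% endhighlight %}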
-
-## Backup Metadata Store with binary package
-
-Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
-In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
-
-## Restore Metadata Store with binary package
-
-In case you find your metadata store messed up, and you want to restore to a previous backup:
-
-Firstly, reset the metadata store (this will clean everything in the Kylin metadata store in HBase, so make sure you have a backup):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Backup/restore metadata in development env (available since 0.7.3)
-
-When developing/debugging Kylin, you typically have a dev machine with an IDE and a backend sandbox. Usually you'll write code and run test cases on the dev machine. It would be troublesome if you always had to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow its usage information and run it in your IDE.
-
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find and clean them up from the metadata store:
-
-Firstly, run a check; this is safe as it will not change anything:
-{% highlight Groff markup %}
-./bin/metastore.sh clean
-{% endhighlight %}
-
-The resources that would be dropped are listed;
-
-Next, add the "--delete true" parameter to clean up those resources; before doing this, make sure you have made a backup of the metadata store;
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_build_cube_with_restapi.md b/website/_docs16/howto/howto_build_cube_with_restapi.md
deleted file mode 100644
index 0ccd486..0000000
--- a/website/_docs16/howto/howto_build_cube_with_restapi.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs16
-title:  Build Cube with RESTful API
-categories: howto
-permalink: /docs16/howto/howto_build_cube_with_restapi.html
----
-
-### 1.	Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add an `Authorization` header to the first request for authentication,
-*   or you can do a specific request via `POST http://localhost:7070/kylin/api/user/authentication`.
-*   Once authenticated, the client can issue subsequent requests with cookies (see the curl example below).
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-    
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
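-
-An equivalent call with curl (assuming the default ADMIN/KYLIN account; `-c` saves the session cookie for later requests):
-{% highlight Groff markup %}
-curl -c cookies.txt -X POST \
-    -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
-    http://localhost:7070/kylin/api/user/authentication
-{% endhighlight %}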
-
-### 2.	Get details of the cube. 
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   The client can find the cube segment date ranges in the returned cube details.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-### 3.	Then submit a build job of the cube. 
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For PUT request body details please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
-    *   `startTime` and `endTime` should be UTC timestamps, in milliseconds.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` for merging multiple existing segments into one bigger segment.
-*   This method returns a newly created job instance, whose uuid is the unique id used to track the job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
-
-### 4.	Track job status. 
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   The returned `job_status` represents the current status of the job.
-
-### 5.	If the job encounters errors, you can resume it. 
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
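-
-For example, with the cookie saved in step 1 and a hypothetical job uuid:
-{% highlight Groff markup %}
-curl -b cookies.txt http://localhost:7070/kylin/api/jobs/c143e0e4-ac5f-434d-acf3-46b0d15e3dc6
-curl -b cookies.txt -X PUT http://localhost:7070/kylin/api/jobs/c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/resume
-{% endhighlight %}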
diff --git a/website/_docs16/howto/howto_cleanup_storage.md b/website/_docs16/howto/howto_cleanup_storage.md
deleted file mode 100644
index 233d32d..0000000
--- a/website/_docs16/howto/howto_cleanup_storage.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-layout: docs16
-title:  Cleanup Storage (HDFS & HBase)
-categories: howto
-permalink: /docs16/howto/howto_cleanup_storage.html
----
-
-Kylin generates intermediate files in HDFS during cube building. Besides, when you purge/drop/merge cubes, some HBase tables may be left in HBase and will no longer be queried. Although Kylin has started to do some 
-automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this will not remove anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
-{% endhighlight %}
-2. Pick 1 or 2 resources to check whether they are no longer referenced, then add the "--delete true" option to start the cleanup:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
-{% endhighlight %}
-When it finishes, the intermediate HDFS locations and HTables will have been dropped.
diff --git a/website/_docs16/howto/howto_jdbc.md b/website/_docs16/howto/howto_jdbc.md
deleted file mode 100644
index 9990df6..0000000
--- a/website/_docs16/howto/howto_jdbc.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs16
-title:  Use JDBC Driver
-categories: howto
-permalink: /docs16/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
-* user : username 
-* password : password
-* ssl : true/false. Default is false; if true, all service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
-* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
-* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported prepared statement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata listing methods:
-list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_ldap_and_sso.md b/website/_docs16/howto/howto_ldap_and_sso.md
deleted file mode 100644
index 0a377b1..0000000
--- a/website/_docs16/howto/howto_ldap_and_sso.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-layout: docs16
-title: Enable Security with LDAP and SSO
-categories: howto
-permalink: /docs16/howto/howto_ldap_and_sso.html
----
-
-## Enable LDAP authentication
-
-Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, like the LDAP server URL, username/password, and search patterns.
-
-#### Configure LDAP server info
-
-Firstly, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value (please note, the password's length should be less than 16 characters, see [KYLIN-2416](https://issues.apache.org/jira/browse/KYLIN-2416)):
-
-```
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-1.6.0.jar:spring-beans-3.2.17.RELEASE.jar:spring-core-3.2.17.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-```
-
-Configure them in conf/kylin.properties:
-
-```
-ldap.server=ldap://<your_ldap_host>:<port>
-ldap.username=<your_user_name>
-ldap.password=<your_password_encrypted>
-```
-
-Secondly, provide the user search patterns; this depends on your LDAP design, so here is just a sample:
-
-```
-ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
-ldap.user.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
-ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
-```
-
-If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
-
-### Configure the administrator group and default role
-
-To map an LDAP group to the admin group in Kylin, you need to set "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
-
-```
-acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
-acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
-```
-
-The "acl.defaultRole" is a list of the default roles that grant to everyone, keep it as-is.
-
-#### Enable LDAP
-
-Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
-
-## Enable SSO authentication
-
-From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it, as the SSO server may only do authentication; Kylin needs to search LDAP to get the user's detailed information.
-
-### Generate IDP metadata xml
-Contact your IDP (identity provider) and ask them to generate the SSO metadata file; usually you need to provide three pieces of information:
-
-  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
-  3. Public certificate of the Kylin server; the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
-
-Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-It will put the keys into "samlKeystore.jks" with alias "kylin";
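-
-You can verify the result with keytool (alias and password as chosen above):
-
-```
-$ keytool -list -keystore samlKeystore.jks -storepass changeit
-```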
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher-level crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore files need to be deployed in the Kylin web app's classpath in $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes 
-	
-  1. Rename the IDP file to sso_metadata.xml and copy it to Kylin's classpath;
-  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
-  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note, Kylin assumes there is an "email" attribute in the SAML message representing the login user, and the name before @ will be used to search LDAP. 
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
-
diff --git a/website/_docs16/howto/howto_optimize_build.md b/website/_docs16/howto/howto_optimize_build.md
deleted file mode 100644
index 627ddcc..0000000
--- a/website/_docs16/howto/howto_optimize_build.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-layout: docs16
-title:  Optimize Cube Build
-categories: howto
-permalink: /docs16/howto/howto_optimize_build.html
----
-
-Kylin decomposes a Cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types of jobs. When you have many Cubes to build daily, you definitely want to speed up this process. Here are some practices that you probably want to know, organized in the same order as the step sequence.
-
-
-
-## Create Intermediate Flat Hive Table
-
-This step extracts data from the source Hive tables (with all tables joined) and inserts it into an intermediate flat table. If the Cube is partitioned, Kylin will add a time condition so that only data in the range is fetched. You can check the related Hive command in the log of this step, e.g.: 
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-
-```
-
-Kylin applies the configuration in conf/kylin\_hive\_conf.xml while the Hive commands are running, for instance, using lower replication and enabling Hive's mapper-side join. If needed, you can add other configurations suitable for your cluster.
-
-If the Cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, then filtering on it lets Hive smartly skip the non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the Cube's partition column. This is almost a must for very large tables; otherwise Hive has to scan all files each time in this step, costing terribly long time.
-
-If your Hive enables file merge, you can disable it in "conf/kylin\_hive\_conf.xml", as Kylin has its own way to merge files (in the next step): 
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-
-## Redistribute intermediate table
-
-After the previous step, Hive generates the data files in an HDFS folder: while some files are large, some are small or even empty. The imbalanced file distribution would lead subsequent MR jobs to be imbalanced as well: some mappers finish quickly, while others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-
-Redistribute table, cmd: 
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-
-```
-
-
-
-Firstly, Kylin gets the row count of this intermediate table; then, based on the row count, it calculates the number of files needed for redistributing the data. By default, Kylin allocates one file per 1 million rows. In this sample, there are 160 million rows, so there are 160 reducers, and each reducer writes 1 file. In the following MR steps over this table, Hadoop will start the same number of mappers as there are files to process (usually 1 million rows' data size is smaller than an HDFS block size) [...]
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-
-Secondly, Kylin runs an *"INSERT OVERWRITE TABLE .... DISTRIBUTE BY "* HiveQL to distribute the rows among a specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distribute the rows among reducers, producing files very close in size. The distribute clause is "DISTRIBUTE BY RAND()".
-
-If your Cube has specified a "shard by" dimension (in the Cube's "Advanced setting" page), which is a high-cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute data by that column's value. Then the rows that have the same value in this column will go to the same file. This is much better than "by random", because the data will be not only redistributed but also pre-categorized without additional cost, thus benefiting the subsequent Cube build process. Unde [...]
-
-**Please note:** 1) The "shard by" column should be a high-cardinality dimension column, and it should appear in many cuboids (not just a few). Utilizing it properly gets an even distribution in every time range; otherwise it will cause data skew, which reduces the build speed. Typical good cases are: "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER", and so forth, whose cardinality is higher than one thousand (should be much more than the reducer numbers [...]
-
-
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs an MR job to fetch the distinct values of the dimensions that use dictionary encoding. 
-
-Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each Cuboid. If you find that the mappers work incredibly slowly, it usually indicates that the Cube design is too complex; please check [optimize cube design](/docs16/howto/howto_optimize_cubes.html) to make the Cube thinner. If the reducers get an OutOfMemory error, it indicates that the Cuboid combinations explode, or the default YARN memory allocation cannot meet deman [...]
-
-You can reduce the sampling percentage (kylin.job.cubing.inmem.sampling.percent in kylin.properties) to accelerate this step, but it may not help much and can impact the accuracy of the Cube statistics, so we don't recommend it.  
-
-
-
-## Build Dimension Dictionary
-
-With the distinct values fetched in the previous step, Kylin builds dictionaries in memory (in a future version this will be moved to MR). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For a UHC column, please use another encoding method, such as "fixed_length", "integer" and so on.
-
-
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-
-
-## Build Base Cuboid 
-
-This step builds the base cuboid from the intermediate table; it is the first round of MR of the "by-layer" cubing algorithm. The mapper number equals the reducer number of step 2; the reducer number is estimated from the cube statistics: by default it uses 1 reducer for every 500MB of output. If you observe that the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g.: `kylin.job.mapreduce.defaul [...]
-
-
-## Build N-Dimension Cuboid 
-
-These steps are the "by-layer" cubing process, each step uses the output of previous step as the input, and then cut off one dimension to aggregate to get one child cuboid. For example, from cuboid ABCD, cut off A get BCD, cut off B get ACD etc. 
-
-Some cuboid can be aggregated from more than 1 parent cubiods, in this case, Kylin will select the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) and ABD (id: 1101), so ABD will be used as its id is smaller than ABC. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low cardinality dimensions to the tail position. This not only benefit the Cube build, but al [...]
-
-Usually from the N-D to (N/2)-D the building is slow, because it is the cuboid explosion process: N-D has 1 Cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1) cuboids, etc. After (N/2)-D step, the building gets faster gradually.
-
-
-
-## Build Cube
-
-This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but it requests more memory than normal. The "conf/kylin\_job\_conf\_inmem.xml" is made for this step. By default it requests 3GB memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will use as much memory as possible to hold the data and gain better performance, e.g.:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note, Kylin will automatically select the best algorithm based on the data distribution (obtained from the Cube statistics). The steps of the non-selected algorithm will be skipped. You don't need to select the algorithm explicitly.
-
-
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the Cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number from the Cube statistics, by default 1 region per 5GB. The more regions, the more reducers utilized. If you observe that the reducer number is small and performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you're not sure what size a region should be, contact your HBase administrator. 
-
-
-## Load HFile to HBase Table
-
-This step uses the HBase API to load the HFiles to the region servers; it is lightweight and fast.
-
-
-
-## Update Cube Info
-
-After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
-
-
-
-## Cleanup
-
-Drop the intermediate table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step gets an error, no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](/docs16/howto/howto_cleanup_storage.html).
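-
-For reference, a sketch of that cleanup call (check the linked guide for the current options):
-
-```
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
-```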
-
-
-## Summary
-There are also many other methods to boost performance. If you have practices to share, you are welcome to discuss them at [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
\ No newline at end of file
diff --git a/website/_docs16/howto/howto_optimize_cubes.md b/website/_docs16/howto/howto_optimize_cubes.md
deleted file mode 100644
index 17e7fb8..0000000
--- a/website/_docs16/howto/howto_optimize_cubes.md
+++ /dev/null
@@ -1,212 +0,0 @@
----
-layout: docs16
-title:  Optimize Cube Design
-categories: howto
-permalink: /docs16/howto/howto_optimize_cubes.html
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case.
-
-If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case for scenario A, where the PK of the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-Derived columns are used when one or more dimensions (they must be dimensions on a lookup table; these columns are called "derived") can be deduced from another (usually the corresponding FK; this is called the "host column").
-
-For example, suppose we have a lookup table which we join with the fact table via "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK becomes automatically queryable, without any extra cost. The secret is that since FK and PK are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB, DimC in our cube, we can safely choose DimA, DimB, DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value in DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
-At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB is expected to answer the query. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first; we'll get an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs16/howto/howto_update_coprocessor.md b/website/_docs16/howto/howto_update_coprocessor.md
deleted file mode 100644
index 1aa8b0e..0000000
--- a/website/_docs16/howto/howto_update_coprocessor.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs16
-title:  How to Update HBase Coprocessor
-categories: howto
-permalink: /docs16/howto/howto_update_coprocessor.html
----
-
-Kylin leverages HBase coprocessors to optimize query performance. After a new version is released, the RPC protocol may change, so users need to redeploy the coprocessor to the HTables.
-
-There's a CLI tool to update HBase Coprocessor:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI $KYLIN_HOME/lib/kylin-coprocessor-*.jar all
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_upgrade.md b/website/_docs16/howto/howto_upgrade.md
deleted file mode 100644
index ed53116..0000000
--- a/website/_docs16/howto/howto_upgrade.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: docs16
-title:  Upgrade From Old Versions
-categories: howto
-permalink: /docs16/howto/howto_upgrade.html
-since: v1.5.1
----
-
-Running as a Hadoop client, Apache Kylin's metadata and Cube data are persisted in Hadoop (HBase and HDFS), so the upgrade is relatively easy and the user doesn't need to worry about data loss. The upgrade can be performed in the following steps (see the sketch after this list):
-
-* Download the new Apache Kylin binary package for your Hadoop version from Kylin's download page;
-* Uncompress the new version Kylin package to a new folder, e.g., /usr/local/kylin/apache-kylin-1.6.0/ (directly overwriting the old instance is not recommended);
-* Copy the configuration files (`$KYLIN_HOME/conf/*`) from old instance (e.g /usr/local/kylin/apache-kylin-1.5.4/) to the new instance's `conf` folder if you have customized configurations; It is recommended to do a compare and merge since there might be new parameters introduced. If you have modified tomcat configuration ($KYLIN_HOME/tomcat/conf/), also remember to do the same.
-* Stop the current Kylin instance with `./bin/kylin.sh stop`;
-* Set the `KYLIN_HOME` env variable to the new instance's installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
-* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`; after it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
-* [Upgrade coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
-* Verify your SQL queries can be performed successfully.
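-
-A condensed sketch of the sequence above, assuming the old instance is apache-kylin-1.5.4 and the new one apache-kylin-1.6.0 under /usr/local/kylin (paths and versions are illustrative):
-
-{% highlight Groff markup %}
-cd /usr/local/kylin
-tar -zxf apache-kylin-1.6.0-bin.tar.gz
-# compare and merge customized configs instead of blindly copying
-diff -r apache-kylin-1.5.4/conf apache-kylin-1.6.0/conf
-/usr/local/kylin/apache-kylin-1.5.4/bin/kylin.sh stop
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-1.6.0
-$KYLIN_HOME/bin/kylin.sh start
-{% endhighlight %}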
-
-Below are version-specific guides:
-
-## Upgrade from v1.5.4 to v1.6.0
-Kylin v1.5.4 and v1.6.0 are compatible in metadata; please follow the common upgrade steps above.
-
-## Upgrade from v1.5.3 to v1.5.4
-Kylin v1.5.3 and v1.5.4 are compatible in metadata; please follow the common upgrade steps above.
-
-## Upgrade from 1.5.2 to v1.5.3
-Kylin v1.5.3 metadata is compatible with v1.5.2; your cubes don't need to be rebuilt. As usual, some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables of existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update;
-
-#### 2. Update conf/kylin_hive_conf.xml
-From 1.5.3, Kylin no longer needs Hive to merge small files. For users who copied conf/ from a previous version, please remove the "merge" related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
-
-
-## Upgrade from 1.5.1 to v1.5.2
-Kylin v1.5.2 metadata is compatible with v1.5.1; your cubes don't need an upgrade, while some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables of existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update;
-
-#### 2. Update conf/kylin.properties
-In v1.5.2 several properties are deprecated, and several new ones are added:
-
-Deprecated:
-
-* kylin.hbase.region.cut.small=5
-* kylin.hbase.region.cut.medium=10
-* kylin.hbase.region.cut.large=50
-
-New:
-
-* kylin.hbase.region.cut=5
-* kylin.hbase.hfile.size.gb=2
-
-These new parameters determine how to split HBase regions; to use a different size you can overwrite these parameters at the Cube level.
-
-When copying from an old kylin.properties file, it is suggested to remove the deprecated parameters and add the new ones; a minimal Cube-level overwrite sketch follows.
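-
-For example, a minimal Cube-level overwrite sketch (the sizes are samples, not recommendations):
-
-{% highlight Groff markup %}
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-{% endhighlight %}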
-
-#### 3. Add conf/kylin\_job\_conf\_inmem.xml
-A new job conf file named "kylin\_job\_conf\_inmem.xml" is added to the "conf" folder. Kylin 1.5 introduced the "fast cubing" algorithm, which leverages more memory for in-memory aggregation; Kylin uses this new conf file when submitting in-memory cube build jobs, which request different memory than normal jobs. Please tune it according to your cluster capacity, for example as sketched below.
-
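-A sketch of the memory-related entries in kylin\_job\_conf\_inmem.xml (the 8 GB figures are samples, not defaults):
-
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>8192</value>
-    </property>
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx7g</value>
-    </property>
-{% endhighlight %}
-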
-Besides, if you have used separate config files for cubes of different capacities, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different Yarn job queue), you can customize them at the cube level; check [KYLIN-1706](https://issues.apache.org/jira [...]
-
diff --git a/website/_docs16/howto/howto_use_beeline.md b/website/_docs16/howto/howto_use_beeline.md
deleted file mode 100644
index 7c3148a..0000000
--- a/website/_docs16/howto/howto_use_beeline.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs16
-title:  Use Beeline for Hive Commands
-categories: howto
-permalink: /docs16/howto/howto_use_beeline.html
----
-
-Beeline (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors as a replacement for the Hive CLI. By default Kylin uses the Hive CLI to synchronize Hive tables, create flat intermediate tables, etc. With simple configuration changes you can make Kylin use Beeline instead.
-
-Edit $KYLIN_HOME/conf/kylin.properties by:
-
-  1. change kylin.hive.client=cli to kylin.hive.client=beeline
-  2. add "kylin.hive.beeline.params", which is where you can specify beeline command parameters, like the username (-n), the JDBC URL (-u), etc. A sample kylin.hive.beeline.params is included in the default kylin.properties, but commented out; modify the sample to fit your environment, as sketched below.
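-
-A minimal sketch of the two changes in kylin.properties (the user and JDBC URL are samples):
-
-{% highlight Groff markup %}
-kylin.hive.client=beeline
-kylin.hive.beeline.params=-n root -u 'jdbc:hive2://localhost:10000'
-{% endhighlight %}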
-
diff --git a/website/_docs16/howto/howto_use_distributed_scheduler.md b/website/_docs16/howto/howto_use_distributed_scheduler.md
deleted file mode 100644
index 01bb097..0000000
--- a/website/_docs16/howto/howto_use_distributed_scheduler.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: docs16
-title:  Use distributed job scheduler
-categories: howto
-permalink: /docs16/howto/howto_use_distributed_scheduler.html
----
-
-Since Kylin 2.0, Kylin supports a distributed job scheduler,
-which is more extensible, available and reliable than the default job scheduler.
-To enable the distributed job scheduler, you need to set or update three configs in kylin.properties (a concrete sketch follows the block below):
-
-```
-1. kylin.job.scheduler.default=2
-2. kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-3. add all job servers and query servers to the kylin.server.cluster-servers
-```
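-
-A minimal kylin.properties sketch of the three settings (the hostnames are samples):
-
-```
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-kylin.server.cluster-servers=server1:7070,server2:7070
-```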
diff --git a/website/_docs16/howto/howto_use_restapi.md b/website/_docs16/howto/howto_use_restapi.md
deleted file mode 100644
index 8d1a575..0000000
--- a/website/_docs16/howto/howto_use_restapi.md
+++ /dev/null
@@ -1,1113 +0,0 @@
----
-layout: docs16
-title:  Use RESTful API
-categories: howto
-permalink: /docs16/howto/howto_use_restapi.html
-since: v0.7.1
----
-
-This page lists the major RESTful APIs provided by Kylin.
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Enable cube](#enable-cube)
-* JOB
-   * [Resume job](#resume-job)
-   * [Pause job](#pause-job)
-   * [Discard job](#discard-job)
-   * [Get job status](#get-job-status)
-   * [Get job step output](#get-job-step-output)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Table (Extend Info)](#get-hive-table-extend-info)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-* Streaming
-   * [Initiate cube start position](#initiate-cube-start-position)
-   * [Build stream cube](#build-stream-cube)
-   * [Check segment holes](#check-segment-holes)
-   * [Fill segment holes](#fill-segment-holes)
-
-## Authentication
-`POST /kylin/api/user/authentication`
-
-#### Request Header
-Authorization data encoded by basic auth is needed in the header, for example:
-`Authorization: Basic {data}`
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-#### Curl Example
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If the login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
-```
-
-Alternatively, you can provide the username/password with the "--user" option in each curl call; please note this risks leaking the password into the shell history:
-
-
-```
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
-```
-
-***
-
-## Query
-`POST /kylin/api/query`
-
-#### Request Body
-* sql - `required` `string` The text of sql statement.
-* offset - `optional` `int` Query offset. If offset is set in sql, curIndex will be ignored.
-* limit - `optional` `int` Query limit. If limit is set in sql, perPage will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; the default is "false". Set to "false" for production use.
-* project - `optional` `string` Project to perform query. Default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Curl Example
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
-```
-
-#### Response Body
-* columnMetas - Column metadata information of result set.
-* results - Data set of result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of rows affected by this sql statement.
-* isException - Whether this response is an exception.
-* ExceptionMessage - Message content of the exception.
-* Duration - Time cost of this query.
-* Partial - Whether the response is a partial result or not, determined by the `acceptPartial` field of the request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-
-## List queryable tables
-`GET /kylin/api/tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to load tables
-
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## List cubes
-`GET /kylin/api/cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int` Cubes per page.
-* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
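-#### Curl Example
-A sample paged listing (host and credentials follow the earlier examples):
-
-```
-curl -X GET --user ADMIN:KYLIN "http://localhost:7070/kylin/api/cubes?offset=0&limit=10&projectName=learn_kylin"
-```
-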
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /kylin/api/cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /kylin/api/cube_desc/{cubeName}`
-Get the descriptor for the specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /kylin/api/model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it is the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /kylin/api/cubes/{cubeName}/build`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-1-1
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
-#### Curl Example
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
-```
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /kylin/api/cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /kylin/api/cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /kylin/api/cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-***
-
-## Resume Job
-`PUT /kylin/api/jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-## Pause Job
-`PUT /kylin/api/jobs/{jobId}/pause`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Discard Job
-`PUT /kylin/api/jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Get Job Status
-`GET /kylin/api/jobs/{jobId}`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume Job")
-
-## Get job step output
-`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id; for example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
-
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-***
-
-## Get Hive Table
-`GET /kylin/api/tables/{tableName}`
-
-#### Request Parameters
-* tableName - `required` `string` table name to find.
-
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Table (Extend Info)
-`GET /kylin/api/tables/{tableName}/exd-map`
-
-#### Request Parameters
-* tableName - `optional` `string` table name to find.
-
-#### Response Sample
-```
-{
-    "minFileSize": "46055",
-    "totalNumberFiles": "1",
-    "location": "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_07",
-    "lastAccessTime": "1418374103365",
-    "lastUpdateTime": "1398176493340",
-    "columns": "struct columns { string code, string description, i32 total_emp, i32 salary}",
-    "partitionColumns": "",
-    "EXD_STATUS": "true",
-    "maxFileSize": "46055",
-    "inputformat": "org.apache.hadoop.mapred.TextInputFormat",
-    "partitioned": "false",
-    "tableName": "sample_07",
-    "owner": "hue",
-    "totalFileSize": "46055",
-    "outputformat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-}
-```
-
-## Get Hive Tables
-`GET /kylin/api/tables`
-
-#### Request Parameters
-* project - `required` `string` List all tables in this project.
-* ext - `optional` `boolean` Set true to get extended info of the tables.
-
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /kylin/api/tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` Comma-separated Hive table names you want to load.
-* project - `required` `string` The project into which the tables will be loaded.
-
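-#### Curl Example
-A sample call (the table and project names are samples):
-
-```
-curl -X POST --user ADMIN:KYLIN http://localhost:7070/kylin/api/tables/SAMPLE_07,SAMPLE_08/learn_kylin
-```
-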
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`PUT /kylin/api/cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g. the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
-
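-#### Curl Example
-For example, asking Kylin to refresh the cache entry of a cube (the cube name is a sample):
-
-```
-curl -X PUT --user ADMIN:KYLIN http://localhost:7070/kylin/api/cache/CUBE/kylin_sales_cube/update
-```
-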
-***
-
-## Initiate cube start position
-Set the streaming cube's start position to the current latest offsets; this avoids building from the earliest position of the Kafka topic (useful if the topic has a long retention time).
-
-`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
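-#### Curl Example
-A sample call (the cube name is a sample):
-
-```sh
-curl -X PUT --user ADMIN:KYLIN http://localhost:7070/kylin/api/cubes/your_streaming_cube/init_start_offsets
-```
-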
-#### Response Sample
-```sh
-{
-    "result": "success", 
-    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
-}
-```
-
-## Build stream cube
-`PUT /kylin/api/cubes/{cubeName}/build2`
-
-This API is specific to streaming cube builds.
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Request Body
-
-* sourceOffsetStart - `required` `long` The start offset; 0 means continuing from the previous position;
-* sourceOffsetEnd  - `required` `long` The end offset; 9223372036854775807 represents the end position of the current stream data
-* buildType - `required` Build type, "BUILD", "MERGE" or "REFRESH"
-
-#### Request Sample
-
-```sh
-{  
-   "sourceOffsetStart": 0, 
-   "sourceOffsetEnd": 9223372036854775807, 
-   "buildType": "BUILD"
-}
-```
-
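-#### Curl Example
-A sample call combining the request body above (the cube name is a sample):
-
-```sh
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json" -d '{"sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/your_streaming_cube/build2
-```
-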
-#### Response Sample
-```sh
-{
-    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
-    "last_modified": 1480402541240, 
-    "version": "1.6.0", 
-    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
-    "type": "BUILD", 
-    "duration": 0, 
-    "related_cube": "embedded_cube_clone", 
-    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
-    "exec_start_time": 0, 
-    "exec_end_time": 0, 
-    "mr_waiting": 0, 
- ...
-}
-```
-
-## Check segment holes
-`GET /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-## Fill segment holes
-`PUT /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
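-
-#### Curl Example
-A sample pair of calls that first checks and then fills the holes (the cube name is a sample):
-
-```sh
-curl -X GET --user ADMIN:KYLIN http://localhost:7070/kylin/api/cubes/your_cube/holes
-curl -X PUT --user ADMIN:KYLIN http://localhost:7070/kylin/api/cubes/your_cube/holes
-```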
diff --git a/website/_docs16/howto/howto_use_restapi_in_js.md b/website/_docs16/howto/howto_use_restapi_in_js.md
deleted file mode 100644
index ebc5699..0000000
--- a/website/_docs16/howto/howto_use_restapi_in_js.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs16
-title:  Use RESTful API in Javascript
-categories: howto
-permalink: /docs16/howto/howto_use_restapi_in_js.html
----
-Kylin security is based on basic access authorization; if you want to use the API in your JavaScript, you need to add the authorization info to the HTTP headers.
-
-## Example on Query API.
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-## Key points
-1. Add basic access authorization info in the HTTP headers.
-2. Use the right ajax type and data syntax.
-
-## Basic access authorization
-For what basic access authorization is, refer to the [Wikipedia Page](http://en.wikipedia.org/wiki/Basic_access_authentication).
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
- 
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
diff --git a/website/_docs16/index.cn.md b/website/_docs16/index.cn.md
deleted file mode 100644
index 005a713..0000000
--- a/website/_docs16/index.cn.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-layout: docs16-cn
-title: Overview
-categories: docs16
-permalink: /cn/docs16/index.html
----
-
-Welcome to Apache Kylin™
-------------  
-> Extreme OLAP Engine for Big Data
-
-Apache Kylin™ is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
-
-Documents of prior versions: 
-* [v1.5](/cn/docs15/)
-* [v1.3](/cn/docs/) 
-
-Installation 
-------------  
-Please refer to the installation guide to install Apache Kylin: [Installation Guide](/cn/docs16/install/)
-
-
-
-
-
-
diff --git a/website/_docs16/index.md b/website/_docs16/index.md
deleted file mode 100644
index b4eee3b..0000000
--- a/website/_docs16/index.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-layout: docs16
-title: Overview
-categories: docs
-permalink: /docs16/index.html
----
-
-Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
-------------  
-
-Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
-
-Documents of prior versions: 
-
-* [v1.5.x document](/docs15/)
-* [v1.3.x document](/docs/) 
-
-Installation & Setup
-------------  
-1. [Hadoop Env](install/hadoop_env.html)
-2. [Installation Guide](install/index.html)
-3. [Advanced settings](install/advance_settings.html)
-4. [Deploy in cluster mode](install/kylin_cluster.html)
-5. [Run Kylin with Docker](install/kylin_docker.html)
-
-
-Tutorial
-------------  
-1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
-2. [Cube Creation](tutorial/create_cube.html)
-3. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
-4. [Web Interface](tutorial/web.html)
-5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
-6. [Build Cube with Streaming Data (beta)](tutorial/cube_streaming.html)
-
-
-Connectivity and APIs
-------------  
-1. [ODBC driver](tutorial/odbc.html)
-2. [JDBC driver](howto/howto_jdbc.html)
-3. [RESTful API list](howto/howto_use_restapi.html)
-4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
-5. [Call RESTful API in Javascript](howto/howto_use_restapi_in_js.html)
-6. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
-7. [Connect from Tableau 8](tutorial/tableau.html)
-8. [Connect from Tableau 9](tutorial/tableau_91.html)
-9. [Connect from SQuirreL](tutorial/squirrel.html)
-10. [Connect from Apache Flink](tutorial/flink.html)
-
-Operations
-------------  
-1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
-2. [Cleanup storage (HDFS & HBase)](howto/howto_cleanup_storage.html)
-3. [Upgrade from old version](howto/howto_upgrade.html)
-
-
-
diff --git a/website/_docs16/install/advance_settings.md b/website/_docs16/install/advance_settings.md
deleted file mode 100644
index 8b5c24d..0000000
--- a/website/_docs16/install/advance_settings.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-layout: docs16
-title:  "Advanced Settings"
-categories: install
-permalink: /docs16/install/advance_settings.html
----
-
-## Overwrite default kylin.properties at Cube level
-In `conf/kylin.properties` there are many parameters which control or impact Kylin's behavior. Most parameters are global configs, like security or job related ones, while some are Cube related; these Cube-related parameters can be customized at each Cube level, so you can control the behavior more flexibly. The GUI for this is the "Configuration Overwrites" step of the Cube wizard, as in the screenshot below.
-
-![]( /images/install/overwrite_config.png)
-
-Here are two examples: 
-
- * `kylin.cube.algorithm`: it defines the cubing algorithm that the job engine will select; its default value is "auto", meaning the engine will dynamically pick an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly (usually "inmem" has better performance but requests more memory).
-
- * `kylin.hbase.region.cut`: it defines how big a region is when creating the HBase table. The default value is "5" (GB) per region. It might be too big for a small or medium cube, so you can set a smaller value to get more regions created, which can yield better query performance; a sample overwrite sketch follows.
-
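-For example, a minimal "Configuration Overwrites" sketch for a small cube (the values are samples, not recommendations):
-
-{% highlight Groff markup %}
-kylin.cube.algorithm=inmem
-kylin.hbase.region.cut=2
-{% endhighlight %}
-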
-## Overwrite default Hadoop job conf at Cube level
-The `conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` files manage the default configurations for Hadoop jobs. If you need to customize the configs per cube, you can do so in a similar way as above, but with the prefix `kylin.job.mr.config.override.`; these configs will be parsed out and applied when submitting jobs. See the two examples below:
-
- * If you want a cube's jobs to get more memory from Yarn, you can define: `kylin.job.mr.config.override.mapreduce.map.java.opts=-Xmx7g` and `kylin.job.mr.config.override.mapreduce.map.memory.mb=8192`
- * If you want a cube's jobs to go to a different Yarn resource queue, you can define: `kylin.job.mr.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
-
-## Overwrite default Hive job conf at Cube level
-The `conf/kylin_hive_conf.xml` file manages the default configurations for running Hive jobs (like creating the intermediate flat hive table). If you need to customize the configs per cube, you can do so in a similar way as above, but with another prefix `kylin.hive.config.override.`; these configs will be parsed out and applied when running the "hive -e" or "beeline" commands. See the example below:
-
- * If you want Hive to use a different Yarn resource queue, you can define: `kylin.hive.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
-
-
-## Enable compression
-
-By default, Kylin does not enable compression; this is not the recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable compression algorithm will reduce storage overhead, but an unsupported algorithm will also break the Kylin job build. There are three kinds of compression used in Kylin: HBase table compression, Hive output compression and MR job output compression. 
-
-* HBase table compression
-This compression setting is defined in `kylin.properties` by `kylin.hbase.default.compression.codec`; the default value is *none*. Valid values include *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure the selected algorithm is supported on your HBase cluster; especially for snappy, lzo and lz4, not all Hadoop distributions include them. 
-
-* Hive output compression
-These compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages the Hive default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_hive_conf.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-* MR jobs output compression
-These compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the MR default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-Compression settings only take effect after restarting the Kylin server instance.
-
-## Allocate more memory to Kylin instance
-
-Open `bin/setenv.sh`, which has two sample settings for the `KYLIN_JVM_SETTINGS` environment variable; the default setting is small (4 GB at max). You can comment it out and then un-comment the next line to allocate 16 GB:
-
-{% highlight Groff markup %}
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
-{% endhighlight %}
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
-
-
-## Enable email notification
-
-Kylin can send email notifications on job completion/failure; to enable this, edit `conf/kylin.properties` and set the following parameters:
-{% highlight Groff markup %}
-mail.enabled=true
-mail.host=your-smtp-server
-mail.username=your-smtp-account
-mail.password=your-smtp-pwd
-mail.sender=your-sender-address
-kylin.job.admin.dls=administrator-address
-{% endhighlight %}
-
-Restart the Kylin server for the change to take effect. To disable, set `mail.enabled` back to `false`.
-
-Administrators will get notifications for all jobs. Modelers and analysts need to enter their email addresses into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.
diff --git a/website/_docs16/install/hadoop_evn.md b/website/_docs16/install/hadoop_evn.md
deleted file mode 100644
index cea96cf..0000000
--- a/website/_docs16/install/hadoop_evn.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: docs16
-title:  "Hadoop Environment"
-categories: install
-permalink: /docs16/install/hadoop_env.html
----
-
-Kylin needs to run on a Hadoop node. For better stability, we suggest you deploy it on a pure Hadoop client machine, on which command line tools like `hive`, `hbase`, `hadoop` and `hdfs` are already installed and configured. The Linux account that runs Kylin must have permissions on the Hadoop cluster, including creating/writing HDFS files, Hive tables and HBase tables, and submitting MR jobs. 
-
-## Recommended Hadoop Versions
-
-* Hadoop: 2.6 - 2.7
-* Hive: 0.13 - 1.2.1
-* HBase: 0.98 - 0.99, 1.x
-* JDK: 1.7+
-
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1. Windows and MacOS have known issues._
-
-To make things easier we strongly recommend you try Kylin with an all-in-one sandbox VM, like [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB memory. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**. 
-
-To avoid permission issues in the sandbox, you can use its `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
-
-We also suggest using bridged mode instead of NAT mode in the VirtualBox settings. Bridged mode will assign your sandbox an independent IP address so that you can avoid issues like [this](https://github.com/KylinOLAP/Kylin/issues/12).
-
-### Start Hadoop
-Use Ambari to launch Hadoop:
-
-```
-ambari-agent start
-ambari-server start
-```
-
-With both commands successfully run, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check the status of everything. **By default Hortonworks Ambari disables HBase; you need to manually start the `HBase` service on the Ambari homepage.**
-
-![start hbase in ambari](https://raw.githubusercontent.com/KylinOLAP/kylinolap.github.io/master/docs/installation/starthbase.png)
-
-**Additional info for setting up the Hortonworks Sandbox on VirtualBox**
-
-	Please make sure the HBase Master port [default 60000] and ZooKeeper port [default 2181] are forwarded to the host OS.
- 
diff --git a/website/_docs16/install/index.cn.md b/website/_docs16/install/index.cn.md
deleted file mode 100644
index 5c4d321..0000000
--- a/website/_docs16/install/index.cn.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs16
-title:  "Installation Guide"
-categories: install
-permalink: /cn/docs16/install/index.html
-version: v0.7.2
-since: v0.7.1
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
-
-## Prerequisites on Hadoop
-
-* Hadoop: 2.4+
-* Hive: 0.13+
-* HBase: 0.98+, 1.x
-* JDK: 1.7+  
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
-
-
-It is most common to install Kylin on a Hadoop client machine. It can be used for demos, or for those who want to host their own web site to provide the Kylin service. The scenario is depicted as:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
-
-Except for some prerequisite software installations, the core of Kylin installation is accomplished by running a single script. After running the script, you will be able to build sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run the hadoop, hive and hbase commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if you have environment issues.
-4. To start Kylin, simply run **bin/kylin.sh start**
-5. To stop Kylin, simply run **bin/kylin.sh stop**
-
-> If you want to have multiple Kylin nodes please refer to [this](kylin_cluster.html)
-
-After Kylin starts you can visit <http://your_hostname:7070/kylin>. The username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in it yet. To start, you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build your own cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs16/install/index.md b/website/_docs16/install/index.md
deleted file mode 100644
index 3086b42..0000000
--- a/website/_docs16/install/index.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs16
-title:  "Installation Guide"
-categories: install
-permalink: /docs16/install/index.html
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check [Hadoop Environment](hadoop_env.html).
-
-It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines including `hive`, `hbase`, `hadoop`, etc. The scenario is depicted as:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
-
-Apart from some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download the latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run the hadoop, hive and hbase commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information about any environment issues.
-4. To start Kylin, run **bin/kylin.sh start**; after the server starts, you can watch logs/kylin.log for runtime logs
-5. To stop Kylin, run **bin/kylin.sh stop** (see the verification sketch below)
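-
-For example, after starting the server you might verify it along these lines (a sketch only; the hostname placeholder and default port 7070 are taken from the paragraph below):
-
-```
-# start Kylin and follow the runtime log mentioned in step 4
-$KYLIN_HOME/bin/kylin.sh start
-tail -f $KYLIN_HOME/logs/kylin.log
-
-# once started, the web UI should respond on the default port
-curl http://hostname:7070/kylin
-```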
-
-> If you want to have multiple Kylin nodes running to provide high availability, please refer to [this guide](kylin_cluster.html)
-
-After Kylin starts you can visit <http://hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. You will see a clean Kylin homepage with nothing in it. To get started you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build a cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs16/install/kylin_cluster.md b/website/_docs16/install/kylin_cluster.md
deleted file mode 100644
index 1938000..0000000
--- a/website/_docs16/install/kylin_cluster.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: docs16
-title:  "Deploy in Cluster Mode"
-categories: install
-permalink: /docs16/install/kylin_cluster.html
----
-
-
-### Kylin Server modes
-
-Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (the kylin.metadata.url config in conf/kylin.properties). For load balancing it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.)
-
-Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying its runtime mode. There are three options: 1. "job" for running the job engine only; 2. "query" for running the query engine only; and 3. "all" for running both. Notice that only one server can run the job engine ("all" mode or "job" mode); all the others must be in "query" mode.
-
-A typical scenario is depicted in the following chart:
-
-![]( /images/install/kylin_server_modes.png)
-
-### Setting up Multiple Kylin REST servers
-
-If you are running Kylin in a cluster where you have multiple Kylin REST server instances, please make sure the following properties are correctly configured in ${KYLIN_HOME}/conf/kylin.properties for EVERY server instance.
-
-1. kylin.rest.servers
-	The list of web servers in use; this enables one web server instance to sync up with the other servers. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
-
-2. kylin.server.mode
-	Make sure there is only one instance whose "kylin.server.mode" is set to "all" (or "job"); the others should be "query" (see the sketch below)
-	
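-For illustration, a minimal sketch of these entries, reusing the sandbox1/sandbox2 hosts from the example above and letting sandbox1 run the job engine (the kylin.metadata.url line uses the common default form and must be identical on every instance):
-
-```
-# conf/kylin.properties on sandbox1 (job engine + query engine)
-kylin.metadata.url=kylin_metadata@hbase
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-kylin.server.mode=all
-
-# conf/kylin.properties on sandbox2 (query engine only)
-kylin.metadata.url=kylin_metadata@hbase
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-kylin.server.mode=query
-```
-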
-### Set up a load balancer
-
-To enable Kylin high availability, you need to set up a load balancer in front of these servers and let it route the incoming requests to the cluster. Clients send all requests to the load balancer, instead of talking to a specific instance.
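-
-As an illustration only (the document does not prescribe a particular load balancer), an nginx front end along these lines could route requests across the two instances above:
-
-```
-# hypothetical nginx reverse proxy in front of two Kylin instances
-upstream kylin_servers {
-    server sandbox1:7070;
-    server sandbox2:7070;
-}
-server {
-    listen 80;
-    location /kylin {
-        proxy_pass http://kylin_servers;
-    }
-}
-```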
-	
diff --git a/website/_docs16/install/kylin_docker.md b/website/_docs16/install/kylin_docker.md
deleted file mode 100644
index 57d4e20..0000000
--- a/website/_docs16/install/kylin_docker.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-layout: docs16
-title:  "Run Kylin with Docker"
-categories: install
-permalink: /docs16/install/kylin_docker.html
-version: v1.5.3
-since: v1.5.2
----
-
-Apache Kylin runs as a client of the Hadoop cluster, so it is reasonable to run it within a Docker container; please check [this project](https://github.com/Kyligence/kylin-docker/) on GitHub.
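-
-As a purely hypothetical sketch (the actual image name and options depend on how you build the image from that project; only Kylin's default web port 7070 is assumed):
-
-```
-# run a containerized Kylin and expose the default web port
-docker run -d --name kylin -p 7070:7070 kylin-docker
-```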
diff --git a/website/_docs16/install/manual_install_guide.cn.md b/website/_docs16/install/manual_install_guide.cn.md
deleted file mode 100644
index b192dfa..0000000
--- a/website/_docs16/install/manual_install_guide.cn.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-layout: docs16-cn
-title:  "Manual Installation Guide"
-categories: install
-permalink: /cn/docs16/install/manual_install_guide.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Introduction
-
-In most cases, our automated script described in the [Installation Guide](./index.html) can help you get Kylin up and running in your Hadoop sandbox, or even your Hadoop cluster. However, in case the deployment script fails, we wrote this document as a reference guide to troubleshoot your problems.
-
-Basically, this document explains every step in the automated script. We assume you are already very familiar with Hadoop operations on Linux.
-
-## Prerequisites
-* Tomcat installed, with CATALINA_HOME exported.
-* The Kylin binary package copied to the local machine and extracted, referred to as $KYLIN_HOME afterwards
-
-## Steps
-
-### Prepare Jars
-
-Kylin requires two jar files; both are configured in the default kylin.properties:
-
-```
-kylin.job.jar=/tmp/kylin/kylin-job-latest.jar
-
-```
-
-This is the job jar that Kylin uses for MR jobs. You need to copy $KYLIN_HOME/job/target/kylin-job-latest.jar to /tmp/kylin/
-
-```
-kylin.coprocessor.local.jar=/tmp/kylin/kylin-coprocessor-latest.jar
-
-```
-
-This is the HBase coprocessor jar that Kylin will put on HBase. It is used to improve performance. You need to copy $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar to /tmp/kylin/
-
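-A minimal shell sketch of the two copy steps described above (the /tmp/kylin target directory comes from the default kylin.properties entries shown earlier):
-
-```
-# create the target directory and copy the two jars Kylin expects
-mkdir -p /tmp/kylin
-cp $KYLIN_HOME/job/target/kylin-job-latest.jar /tmp/kylin/
-cp $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar /tmp/kylin/
-```
-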
-### Start Kylin
-
-Start Kylin with `./kylin.sh start`, and stop it with `./kylin.sh stop`.
diff --git a/website/_docs16/release_notes.md b/website/_docs16/release_notes.md
deleted file mode 100644
index 235d752..0000000
--- a/website/_docs16/release_notes.md
+++ /dev/null
@@ -1,1333 +0,0 @@
----
-layout: docs16
-title:  Apache Kylin Release Notes
-categories: gettingstarted
-permalink: /docs16/release_notes.html
----
-
-To download the latest release, please visit: [http://kylin.apache.org/download/](http://kylin.apache.org/download/),
-where the source code package, binary package, ODBC driver and installation guide are available.
-
-For any problem or issue, please report to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
-
-or send to Apache Kylin mailing list:
-
-* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
-* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
-
-## v1.6.0 - 2016-11-26
-_Tag:_ [kylin-1.6.0](https://github.com/apache/kylin/tree/kylin-1.6.0)
-This is a major release with better support for using Apache Kafka as a data source. Check [how to upgrade](/docs16/howto/howto_upgrade.html) for the upgrade steps.
-
-__New Feature__
-
-* [KYLIN-1726] - Scalable streaming cubing
-* [KYLIN-1919] - Support Embedded Structure when Parsing Streaming Message
-* [KYLIN-2055] - Add an encoder for Boolean type
-* [KYLIN-2067] - Add API to check and fill segment holes
-* [KYLIN-2079] - add explicit configuration knob for coprocessor timeout
-* [KYLIN-2088] - Support intersect count for calculation of retention or conversion rates
-* [KYLIN-2125] - Support using beeline to load hive table metadata
-
-__Bug__
-
-* [KYLIN-1565] - Read the kv max size from HBase config
-* [KYLIN-1820] - Column autocomplete should remove the user input in model designer
-* [KYLIN-1828] - java.lang.StringIndexOutOfBoundsException in org.apache.kylin.storage.hbase.util.StorageCleanupJob
-* [KYLIN-1967] - Dictionary rounding can cause IllegalArgumentException in GTScanRangePlanner
-* [KYLIN-1978] - kylin.sh compatible issue on Ubuntu
-* [KYLIN-1990] - The SweetAlert at the front page may out of the page if the content is too long.
-* [KYLIN-2007] - CUBOID_CACHE is not cleared when rebuilding ALL cache
-* [KYLIN-2012] - more robust approach to hive schema changes
-* [KYLIN-2024] - kylin TopN only support the first measure 
-* [KYLIN-2027] - Error "connection timed out" occurs when zookeeper's port is set in hbase.zookeeper.quorum of hbase-site.xml
-* [KYLIN-2028] - find-*-dependency script fail on Mac OS
-* [KYLIN-2035] - Auto Merge Submit Continuously
-* [KYLIN-2041] - Wrong parameter definition in Get Hive Tables REST API
-* [KYLIN-2043] - Rollback httpclient to 4.2.5 to align with Hadoop 2.6/2.7
-* [KYLIN-2044] - Unclosed DataInputByteBuffer in BitmapCounter#peekLength
-* [KYLIN-2045] - Wrong argument order in JobInstanceExtractor#executeExtract()
-* [KYLIN-2047] - Ineffective null check in MetadataManager
-* [KYLIN-2050] - Potentially ineffective call to close() in QueryCli
-* [KYLIN-2051] - Potentially ineffective call to IOUtils.closeQuietly()
-* [KYLIN-2052] - Edit "Top N" measure, the "group by" column wasn't displayed
-* [KYLIN-2059] - Concurrent build issue in CubeManager.calculateToBeSegments()
-* [KYLIN-2069] - NPE in LookupStringTable
-* [KYLIN-2078] - Can't see generated SQL at Web UI
-* [KYLIN-2084] - Unload sample table failed
-* [KYLIN-2085] - PrepareStatement return incorrect result in some cases
-* [KYLIN-2086] - Still report error when there is more than 12 dimensions in one agg group
-* [KYLIN-2093] - Clear cache in CubeMetaIngester
-* [KYLIN-2097] - Get 'Column does not exist in row key desc" on cube has TopN measure
-* [KYLIN-2099] - Import table error of sample table KYLIN_CAL_DT
-* [KYLIN-2106] - UI bug - Advanced Settings - Rowkeys - new Integer dictionary encoding - could possibly impact also cube metadata
-* [KYLIN-2109] - Deploy coprocessor only this server own the table
-* [KYLIN-2110] - Ineffective comparison in BooleanDimEnc#equals()
-* [KYLIN-2114] - WEB-Global-Dictionary bug fix and improve
-* [KYLIN-2115] - some extended column query returns wrong answer
-* [KYLIN-2116] - when hive field delimitor exists in table field values, fields order is wrong
-* [KYLIN-2119] - Wrong chart value and sort when process scientific notation 
-* [KYLIN-2120] - kylin1.5.4.1 with cdh5.7 cube sql Oops Faild to take action
-* [KYLIN-2121] - Failed to pull data to PowerBI or Excel on some query
-* [KYLIN-2127] - UI bug fix for Extend Column
-* [KYLIN-2130] - QueryMetrics concurrent bug fix
-* [KYLIN-2132] - Unable to pull data from Kylin Cube ( learn_kylin cube ) to Excel or Power BI for Visualization and some dimensions are not showing up.
-* [KYLIN-2134] - Kylin will treat empty string as NULL by mistake
-* [KYLIN-2137] - Failed to run mr job when user put a kafka jar in hive's lib folder
-* [KYLIN-2138] - Unclosed ResultSet in BeelineHiveClient
-* [KYLIN-2146] - "Streaming Cluster" page should remove "Margin" inputbox
-* [KYLIN-2152] - TopN group by column does not distinguish between NULL and ""
-* [KYLIN-2154] - source table rows will be skipped if TOPN's group column contains NULL values
-* [KYLIN-2158] - Delete joint dimension not right
-* [KYLIN-2159] - Redistribution Hive Table Step always requires row_count filename as 000000_0 
-* [KYLIN-2167] - FactDistinctColumnsReducer may get wrong max/min partition col value
-* [KYLIN-2173] - push down limit leads to wrong answer when filter is loosened
-* [KYLIN-2178] - CubeDescTest is unstable
-* [KYLIN-2201] - Cube desc and aggregation group rule combination max check fail
-* [KYLIN-2226] - Build Dimension Dictionary Error
-
-__Improvement__
-
-* [KYLIN-1042] - Horizontal scalable solution for streaming cubing
-* [KYLIN-1827] - Send mail notification when runtime exception throws during build/merge cube
-* [KYLIN-1839] - improvement set classpath before submitting mr job
-* [KYLIN-1917] - TopN counter merge performance improvement
-* [KYLIN-1962] - Split kylin.properties into two files
-* [KYLIN-1999] - Use some compression at UT/IT
-* [KYLIN-2019] - Add license checker into checkstyle rule
-* [KYLIN-2033] - Refactor broadcast of metadata change
-* [KYLIN-2042] - QueryController puts entry in Cache w/o checking QueryCacheEnabled
-* [KYLIN-2054] - TimedJsonStreamParser should support other time format
-* [KYLIN-2068] - Import hive comment when sync tables
-* [KYLIN-2070] - UI changes for allowing concurrent build/refresh/merge
-* [KYLIN-2073] - Need timestamp info for diagnose  
-* [KYLIN-2075] - TopN measure: need select "constant" + "1" as the SUM|ORDER parameter
-* [KYLIN-2076] - Improve sample cube and data
-* [KYLIN-2080] - UI: allow multiple building jobs for the same cube
-* [KYLIN-2082] - Support to change streaming configuration
-* [KYLIN-2089] - Make update HBase coprocessor concurrent
-* [KYLIN-2090] - Allow updating cube level config even the cube is ready
-* [KYLIN-2091] - Add API to init the start-point (of each parition) for streaming cube
-* [KYLIN-2095] - Hive mr job use overrided MR job configuration by cube properties
-* [KYLIN-2098] - TopN support query UHC column without sorting by sum value
-* [KYLIN-2100] - Allow cube to override HIVE job configuration by properties
-* [KYLIN-2108] - Support usage of schema name "default" in SQL
-* [KYLIN-2111] - only allow columns from Model dimensions when add group by column to TOP_N
-* [KYLIN-2112] - Allow a column be a dimension as well as "group by" column in TopN measure
-* [KYLIN-2113] - Need sort by columns in SQLDigest
-* [KYLIN-2118] - allow user view CubeInstance json even cube is ready
-* [KYLIN-2122] - Move the partition offset calculation before submitting job
-* [KYLIN-2126] - use column name as default dimension name when auto generate dimension for lookup table
-* [KYLIN-2140] - rename packaged js with different name when build
-* [KYLIN-2143] - allow more options from Extended Columns,COUNT_DISTINCT,RAW_TABLE
-* [KYLIN-2162] - Improve the cube validation error message
-* [KYLIN-2221] - rethink on KYLIN-1684
-* [KYLIN-2083] - more RAM estimation test for MeasureAggregator and GTAggregateScanner
-* [KYLIN-2105] - add QueryId
-* [KYLIN-1321] - Add derived checkbox for lookup table columns on Auto Generate Dimensions panel
-* [KYLIN-1995] - Upgrade MapReduce properties which are deprecated
-
-__Task__
-
-* [KYLIN-2072] - Cleanup old streaming code
-* [KYLIN-2081] - UI change to support embeded streaming message
-* [KYLIN-2171] - Release 1.6.0
-
-
-## v1.5.4.1 - 2016-09-28
-_Tag:_ [kylin-1.5.4.1](https://github.com/apache/kylin/tree/kylin-1.5.4.1)
-This version fixes two major bugs introduced in 1.5.4; the metadata and HBase coprocessor are compatible with 1.5.4.
-
-__Bug__
-
-* [KYLIN-2010] - Date dictionary return wrong SQL result
-* [KYLIN-2026] - NPE occurs when build a cube without partition column
-* [KYLIN-2032] - Cube build failed when partition column isn't in dimension list
-
-## v1.5.4 - 2016-09-15
-_Tag:_ [kylin-1.5.4](https://github.com/apache/kylin/tree/kylin-1.5.4)
-This version includes bug fixes/enhancements as well as new features; it is backward compatible with v1.5.3. After the upgrade, you still need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1732] - Support Window Function
-* [KYLIN-1767] - UI for TopN: specify encoding and multiple "group by"
-* [KYLIN-1849] - Search cube by name in Web UI
-* [KYLIN-1908] - Collect Metrics to JMX
-* [KYLIN-1921] - Support Grouping Funtions
-* [KYLIN-1964] - Add a companion tool of CubeMetaExtractor for cube importing
-
-__Bug__
-
-* [KYLIN-962] - [UI] Cube Designer can't drag rowkey normally
-* [KYLIN-1194] - Filter(CubeName) on Jobs/Monitor page works only once
-* [KYLIN-1488] - When modifying a model, Save after deleting a lookup table. The internal error will pop up.
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1808] - unload non existing table cause NPE
-* [KYLIN-1834] - java.lang.IllegalArgumentException: Value not exists! - in Step 4 - Build Dimension Dictionary
-* [KYLIN-1883] - Consensus Problem when running the tool, MetadataCleanupJob
-* [KYLIN-1889] - Didn't deal with the failure of renaming folder in hdfs when running the tool CubeMigrationCLI
-* [KYLIN-1929] - Error to load slow query in "Monitor" page for non-admin user
-* [KYLIN-1933] - Deploy in cluster mode, the "query" node report "scheduler has not been started" every second
-* [KYLIN-1934] - 'Value not exist' During Cube Merging Caused by Empty Dict
-* [KYLIN-1939] - Linkage error while executing any queries
-* [KYLIN-1942] - Models are missing after change project's name
-* [KYLIN-1953] - Error handling for diagnosis
-* [KYLIN-1956] - Can't query from child cube of a hybrid cube after its status changed from disabled to enabled
-* [KYLIN-1961] - Project name is always constant instead of real project name in email notification
-* [KYLIN-1970] - System Menu UI ACL issue
-* [KYLIN-1972] - Access denied when query seek to hybrid
-* [KYLIN-1973] - java.lang.NegativeArraySizeException when Build Dimension Dictionary
-* [KYLIN-1982] - CubeMigrationCLI: associate model with project
-* [KYLIN-1986] - CubeMigrationCLI: make global dictionary unique
-* [KYLIN-1992] - Clear ThreadLocal Contexts when query failed before scaning HBase
-* [KYLIN-1996] - Keep original column order when designing cube
-* [KYLIN-1998] - Job engine lock is not release at shutdown
-* [KYLIN-2003] - error start time at query result page
-* [KYLIN-2005] - Move all storage side behavior hints to GTScanRequest
-
-__Improvement__
-
-* [KYLIN-672] - Add Env and Project Info in job email notification
-* [KYLIN-1702] - The Key of the Snapshot to the related lookup table may be not informative
-* [KYLIN-1855] - Should exclude those joins in whose related lookup tables no dimensions are used in cube
-* [KYLIN-1858] - Remove all InvertedIndex(Streaming purpose) related codes and tests
-* [KYLIN-1866] - Add tip for field at 'Add Streaming' table page.
-* [KYLIN-1867] - Upgrade dependency libraries
-* [KYLIN-1874] - Make roaring bitmap version determined
-* [KYLIN-1898] - Upgrade to Avatica 1.8 or higher
-* [KYLIN-1904] - WebUI for GlobalDictionary
-* [KYLIN-1906] - Add more comments and default value for kylin.properties
-* [KYLIN-1910] - Support Separate HBase Cluster with NN HA and Kerberos Authentication
-* [KYLIN-1920] - Add view CubeInstance json function
-* [KYLIN-1922] - Improve the logic to decide whether to pre aggregate on Region server
-* [KYLIN-1923] - Add access controller to query
-* [KYLIN-1924] - Region server metrics: replace int type for long type for scanned row count
-* [KYLIN-1925] - Do not allow cross project clone for cube
-* [KYLIN-1926] - Loosen the constraint on FK-PK data type matching
-* [KYLIN-1936] - Improve enable limit logic (exactAggregation is too strict)
-* [KYLIN-1940] - Add owner for DataModel
-* [KYLIN-1941] - Show submitter for slow query
-* [KYLIN-1954] - BuildInFunctionTransformer should be executed per CubeSegmentScanner
-* [KYLIN-1963] - Delegate the loading of certain package (like slf4j) to tomcat's parent classloader
-* [KYLIN-1965] - Check duplicated measure name
-* [KYLIN-1966] - Refactor IJoinedFlatTableDesc
-* [KYLIN-1979] - Move hackNoGroupByAggregation to cube-based storage implementations
-* [KYLIN-1984] - Don't use compression in packaging configuration
-* [KYLIN-1985] - SnapshotTable should only keep the columns described in tableDesc
-* [KYLIN-1997] - Add pivot feature back in query result page
-* [KYLIN-2004] - Make the creating intermediate hive table steps configurable (two options)
-
-## v1.5.3 - 2016-07-28
-_Tag:_ [kylin-1.5.3](https://github.com/apache/kylin/tree/kylin-1.5.3)
-This version includes many bug fixes/enhancements as well as new features; it is backward compatible with v1.5.2. After the upgrade, you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1478] - TopN measure should support non-dictionary encoding for ultra high cardinality
-* [KYLIN-1693] - Support multiple group-by columns for TOP_N meausre
-* [KYLIN-1752] - Add an option to fail cube build job when source table is empty
-* [KYLIN-1756] - Allow user to run MR jobs against different Hadoop queues
-
-__Bug__
-
-* [KYLIN-1499] - Couldn't save query, error in backend
-* [KYLIN-1568] - Calculate row value buffer size instead of hard coded ROWVALUE_BUFFER_SIZE
-* [KYLIN-1645] - Exception inside coprocessor should report back to the query thread
-* [KYLIN-1646] - Column appeared twice if it was declared as both dimension and measure
-* [KYLIN-1676] - High CPU in TrieDictionary due to incorrect use of HashMap
-* [KYLIN-1679] - bin/get-properties.sh cannot get property which contains space or equals sign
-* [KYLIN-1684] - query on table "kylin_sales" return empty resultset after cube "kylin_sales_cube" which generated by sample.sh is ready
-* [KYLIN-1694] - make multiply coefficient configurable when estimating cuboid size
-* [KYLIN-1695] - Skip cardinality calculation job when loading hive table
-* [KYLIN-1703] - The not-thread-safe ToolRunner.run() will cause concurrency issue in job engine
-* [KYLIN-1704] - When load empty snapshot, NULL Pointer Exception occurs
-* [KYLIN-1723] - GTAggregateScanner$Dump.flush() must not write the WHOLE metrics buffer
-* [KYLIN-1738] - MRJob Id is not saved to kylin jobs if MR job is killed
-* [KYLIN-1742] - kylin.sh should always set KYLIN_HOME to an absolute path
-* [KYLIN-1755] - TopN Measure IndexOutOfBoundsException
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1762] - Query threw NPE with 3 or more join conditions
-* [KYLIN-1769] - There is no response when click "Property" button at Cube Designer
-* [KYLIN-1777] - Streaming cube build shouldn't check working segment
-* [KYLIN-1780] - Potential issue in SnapshotTable.equals()
-* [KYLIN-1781] - kylin.properties encoding error while contain chinese prop key or value
-* [KYLIN-1783] - Can't add override property at cube design 'Configuration Overwrites' step.
-* [KYLIN-1785] - NoSuchElementException when Mandatory Dimensions contains all Dimensions
-* [KYLIN-1787] - Properly deal with limit clause in CubeHBaseEndpointRPC (SELECT * problem)
-* [KYLIN-1788] - Allow arbitrary number of mandatory dimensions in one aggregation group
-* [KYLIN-1789] - Couldn't use View as Lookup when join type is "inner"
-* [KYLIN-1795] - bin/sample.sh doesn't work when configured hive client is beeline
-* [KYLIN-1800] - IllegalArgumentExceptio: Too many digits for NumberDictionary: -0.009999999999877218. Expect 19 digits before decimal point at max.
-* [KYLIN-1803] - ExtendedColumn Measure Encoding with Non-ascii Characters
-* [KYLIN-1811] - Error step may be skipped sometimes when resume a cube job
-* [KYLIN-1816] - More than one base KylinConfig exist in spring JVM
-* [KYLIN-1817] - No result from JDBC with Date filter in prepareStatement
-* [KYLIN-1838] - Fix sample cube definition
-* [KYLIN-1848] - Can't sort cubes by any field in Web UI
-* [KYLIN-1862] - "table not found" in "Build Dimension Dictionary" step
-* [KYLIN-1879] - RestAPI /api/jobs always returns 0 for exec_start_time and exec_end_time fields
-* [KYLIN-1882] - it report can't find the intermediate table in '#4 Step Name: Build Dimension Dictionary' when use hive view as lookup table
-* [KYLIN-1896] - JDBC support mybatis
-* [KYLIN-1905] - Wrong Default Date in Cube Build Web UI
-* [KYLIN-1909] - Wrong access control to rest get cubes
-* [KYLIN-1911] - NPE when extended column has NULL value
-* [KYLIN-1912] - Create Intermediate Flat Hive Table failed when using beeline
-* [KYLIN-1913] - query log printed abnormally if the query contains "\r" (not "\r\n")
-* [KYLIN-1918] - java.lang.UnsupportedOperationException when unload hive table
-
-__Improvement__
-
-* [KYLIN-1319] - Find a better way to check hadoop job status
-* [KYLIN-1379] - More stable and functional precise count distinct implements after KYLIN-1186
-* [KYLIN-1656] - Improve performance of MRv2 engine by making each mapper handles a configured number of records
-* [KYLIN-1657] - Add new configuration kylin.job.mapreduce.min.reducer.number
-* [KYLIN-1669] - Deprecate the "Capacity" field from DataModel
-* [KYLIN-1677] - Distribute source data by certain columns when creating flat table
-* [KYLIN-1705] - Global (and more scalable) dictionary
-* [KYLIN-1706] - Allow cube to override MR job configuration by properties
-* [KYLIN-1714] - Make job/source/storage engines configurable from kylin.properties
-* [KYLIN-1717] - Make job engine scheduler configurable
-* [KYLIN-1718] - Grow ByteBuffer Dynamically in Cube Building and Query
-* [KYLIN-1719] - Add config in scan request to control compress the query result or not
-* [KYLIN-1724] - Support Amazon EMR
-* [KYLIN-1725] - Use KylinConfig inside coprocessor
-* [KYLIN-1728] - Introduce dictionary metadata
-* [KYLIN-1731] - allow non-admin user to edit 'Advenced Setting' step in CubeDesigner
-* [KYLIN-1747] - Calculate all 0 (except mandatory) cuboids
-* [KYLIN-1749] - Allow mandatory only cuboid
-* [KYLIN-1751] - Make kylin log configurable
-* [KYLIN-1766] - CubeTupleConverter.translateResult() is slow due to date conversion
-* [KYLIN-1775] - Add Cube Migrate Support for Global Dictionary
-* [KYLIN-1782] - API redesign for CubeDesc
-* [KYLIN-1786] - Frontend work for KYLIN-1313 (extended columns as measure)
-* [KYLIN-1792] - behaviours for non-aggregated queries
-* [KYLIN-1805] - It's easily got stuck when deleting HTables during running the StorageCleanupJob
-* [KYLIN-1815] - Cleanup package size
-* [KYLIN-1818] - change kafka dependency to provided
-* [KYLIN-1821] - Reformat all of the java files and enable checkstyle to enforce code formatting
-* [KYLIN-1823] - refactor kylin-server packaging
-* [KYLIN-1846] - minimize dependencies of JDBC driver
-* [KYLIN-1884] - Reload metadata automatically after migrating cube
-* [KYLIN-1894] - GlobalDictionary may corrupt when server suddenly crash
-* [KYLIN-1744] - Separate concepts of source offset and date range on cube segments
-* [KYLIN-1654] - Upgrade httpclient dependency
-* [KYLIN-1774] - Update Kylin's tomcat version to 7.0.69
-* [KYLIN-1861] - Hive may fail to create flat table with "GC overhead error"
-
-## v1.5.2.1 - 2016-06-07
-_Tag:_ [kylin-1.5.2.1](https://github.com/apache/kylin/tree/kylin-1.5.2.1)
-
-This is a hot-fix release on top of v1.5.2 with no new features introduced; please upgrade to this version.
-
-__Bug__
-
-* [KYLIN-1758] - createLookupHiveViewMaterializationStep will create intermediate table for fact table
-* [KYLIN-1739] - kylin_job_conf_inmem.xml can impact non-inmem MR job
-
-
-## v1.5.2 - 2016-05-26
-_Tag:_ [kylin-1.5.2](https://github.com/apache/kylin/tree/kylin-1.5.2)
-
-This version is backward compatible with v1.5.1. But after upgrading to v1.5.2 from v1.5.1, you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__New Feature__
-
-* [KYLIN-1016] - Count distinct on any dimension should work even not a predefined measure
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1441] - Display time column as partition column
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__Improvement__
-
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1313] - Enable deriving dimensions on non PK/FK
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1340] - Tools to extract all cube/hybrid/project related metadata to facilitate diagnosing/debugging/sharing
-* [KYLIN-1381] - change RealizationCapacity from three profiles to specific numbers
-* [KYLIN-1391] - quicker and better response to v2 storage engine's rpc timeout exception
-* [KYLIN-1418] - Memory hungry cube should select LAYER and INMEM cubing smartly
-* [KYLIN-1432] - For GUI, to add one option "yyyy-MM-dd HH:MM:ss" for Partition Date Column
-* [KYLIN-1453] - cuboid sharding based on specific column
-* [KYLIN-1487] - attach a hyperlink to introduce new aggregation group
-* [KYLIN-1526] - Move query cache back to query controller level
-* [KYLIN-1542] - Hfile owner is not hbase
-* [KYLIN-1544] - Make hbase encoding and block size configurable just like hbase compression
-* [KYLIN-1561] - Refactor storage engine(v2) to be extension friendly
-* [KYLIN-1566] - Add and use a separate kylin_job_conf.xml for in-mem cubing
-* [KYLIN-1567] - Front-end work for KYLIN-1557
-* [KYLIN-1578] - Coprocessor thread voluntarily stop itself when it reaches timeout
-* [KYLIN-1579] - IT preparation classes like BuildCubeWithEngine should exit with status code upon build exception
-* [KYLIN-1580] - Use 1 byte instead of 8 bytes as column indicator in fact distinct MR job
-* [KYLIN-1584] - Specify region cut size in cubedesc and leave the RealizationCapacity in model as a hint
-* [KYLIN-1585] - make MAX_HBASE_FUZZY_KEYS in GTScanRangePlanner configurable
-* [KYLIN-1587] - show cube level configuration overwrites properties in CubeDesigner
-* [KYLIN-1591] - enabling different block size setting for small column families
-* [KYLIN-1599] - Add "isShardBy" flag in rowkey panel
-* [KYLIN-1601] - Need not to shrink scan cache when hbase rows can be large
-* [KYLIN-1602] - User could dump hbase usage for diagnosis
-* [KYLIN-1614] - Bring more information in diagnosis tool
-* [KYLIN-1621] - Use deflate level 1 to enable compression "on the fly"
-* [KYLIN-1623] - Make the hll precision for data samping configurable
-* [KYLIN-1624] - HyperLogLogPlusCounter will become inaccurate when there're billions of entries
-* [KYLIN-1625] - GC log overwrites old one after restart Kylin service
-* [KYLIN-1627] - add backdoor toggle to dump binary cube storage response for further analysis
-* [KYLIN-1731] - allow non-admin user to edit 'Advenced Setting' step in CubeDesigner
-
-__Bug__
-
-* [KYLIN-989] - column width is too narrow for timestamp field
-* [KYLIN-1197] - cube data not updated after purge
-* [KYLIN-1305] - Can not get more than one system admin email in config
-* [KYLIN-1551] - Should check and ensure TopN measure has two parameters specified
-* [KYLIN-1563] - Unsafe check of initiated in HybridInstance#init()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-* [KYLIN-1574] - Unclosed ResultSet in QueryService#getMetadata()
-* [KYLIN-1581] - NPE in Job engine when execute MR job
-* [KYLIN-1593] - Agg group info will be blank when trying to edit cube
-* [KYLIN-1595] - columns in metric could also be in filter/groupby
-* [KYLIN-1596] - UT fail, due to String encoding CharsetEncoder mismatch
-* [KYLIN-1598] - cannot run complete UT at windows dev machine
-* [KYLIN-1604] - Concurrent write issue on hdfs when deploy coprocessor
-* [KYLIN-1612] - Cube is ready but insight tables not result
-* [KYLIN-1615] - UT 'HiveCmdBuilderTest' fail on 'testBeeline'
-* [KYLIN-1619] - Can't find any realization coursed by Top-N measure
-* [KYLIN-1622] - sql not executed and report topN error
-* [KYLIN-1631] - Web UI of TopN, "group by" column couldn't be a dimension column
-* [KYLIN-1634] - Unclosed OutputStream in SSHClient#scpFileToLocal()
-* [KYLIN-1637] - Sample cube build error
-* [KYLIN-1638] - Unclosed HBaseAdmin in ToolUtil#getHBaseMetaStoreId()
-* [KYLIN-1639] - Wrong logging of JobID in MapReduceExecutable.java
-* [KYLIN-1643] - Kylin's hll counter count "NULL" as a value
-* [KYLIN-1647] - Purge a cube, and then build again, the start date is not updated
-* [KYLIN-1650] - java.io.IOException: Filesystem closed - in Cube Build Step 2 (MapR)
-* [KYLIN-1655] - function name 'getKylinPropertiesAsInputSteam' misspelt
-* [KYLIN-1660] - Streaming/kafka config not match with table name
-* [KYLIN-1662] - tableName got truncated during request mapping for /tables/tableName
-* [KYLIN-1666] - Should check project selection before add a stream table
-* [KYLIN-1667] - Streaming table name should allow enter "DB.TABLE" format
-* [KYLIN-1673] - make sure metadata in 1.5.2 compatible with 1.5.1
-* [KYLIN-1678] - MetaData clean just clean FINISHED and DISCARD jobs,but job correct status is SUCCEED
-* [KYLIN-1685] - error happens while execute a sql contains '?' using Statement
-* [KYLIN-1688] - Illegal char on result dataset table
-* [KYLIN-1721] - KylinConfigExt lost base properties when store into file
-* [KYLIN-1722] - IntegerDimEnc serialization exception inside coprocessor
-
-## v1.5.1 - 2016-04-13
-_Tag:_ [kylin-1.5.1](https://github.com/apache/kylin/tree/kylin-1.5.1)
-
-This version is backward compatible with v1.5.0. But after upgrading to v1.5.1 from v1.5.0, you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1546] - Tool to dump information for diagnosis
-
-__New Feature__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1378] - Add UI for TopN measure
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1501] - Run some classes at the beginning of kylin server startup
-* [KYLIN-1503] - Print version information with kylin.sh
-* [KYLIN-1531] - Add smoke test scripts
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1540] - REST API for deleting segment
-* [KYLIN-1541] - IntegerDimEnc, custom dimension encoding for integers
-* [KYLIN-1546] - Tool to dump information for diagnosis
-* [KYLIN-1550] - Persist some recent bad query
-
-__Improvement__
-
-* [KYLIN-1490] - Use InstallShield 2015 to generate ODBC Driver setup files
-* [KYLIN-1498] - cube desc signature not calculated correctly
-* [KYLIN-1500] - streaming_fillgap cause out of memory
-* [KYLIN-1502] - When cube is not empty, only signature consistent cube desc updates are allowed
-* [KYLIN-1504] - Use NavigableSet to store rowkey and use prefix filter to check resource path prefix instead String comparison on tomcat side
-* [KYLIN-1505] - Combine guava filters with Predicates.and
-* [KYLIN-1543] - GTFilterScanner performance tuning
-* [KYLIN-1557] - Enhance the check on aggregation group dimension number
-
-__Bug__
-
-* [KYLIN-1373] - need to encode export query url to get right result in query page
-* [KYLIN-1434] - Kylin Job Monitor API: /kylin/api/jobs is too slow in large kylin deployment
-* [KYLIN-1472] - Export csv get error when there is a plus sign in the sql
-* [KYLIN-1486] - java.lang.IllegalArgumentException: Too many digits for NumberDictionary
-* [KYLIN-1491] - Should return base cuboid as valid cuboid if no aggregation group matches
-* [KYLIN-1493] - make ExecutableManager.getInstance thread safe
-* [KYLIN-1497] - Make three <class>.getInstance thread safe
-* [KYLIN-1507] - Couldn't find hive dependency jar on some platform like CDH
-* [KYLIN-1513] - Time partitioning doesn't work across multiple days
-* [KYLIN-1514] - MD5 validation of Tomcat does not work when package tar
-* [KYLIN-1521] - Couldn't refresh a cube segment whose start time is before 1970-01-01
-* [KYLIN-1522] - HLLC is incorrect when result is feed from cache
-* [KYLIN-1524] - Get "java.lang.Double cannot be cast to java.lang.Long" error when Top-N metris data type is BigInt
-* [KYLIN-1527] - Columns with all NULL values can't be queried
-* [KYLIN-1537] - Failed to create flat hive table, when name is too long
-* [KYLIN-1538] - DoubleDeltaSerializer cause obvious error after deserialize and serialize
-* [KYLIN-1553] - Cannot find rowkey column "COL_NAME" in cube CubeDesc
-* [KYLIN-1564] - Unclosed table in BuildCubeWithEngine#checkHFilesInHBase()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-
-## v1.5.0 - 2016-03-12
-_Tag:_ [kylin-1.5.0](https://github.com/apache/kylin/tree/kylin-1.5.0)
-
-__This version is not backward compatible.__ The format of cube and metadata has been refactored in order to achieve several-fold performance improvements. We recommend this version, but do not suggest upgrading directly from a previous deployment. A clean, new deployment of this version is strongly recommended. If you have to upgrade from a previous deployment, an upgrade guide will be provided by the community later.
-
-__Highlights__
-
-* [KYLIN-875] - A plugin-able architecture, to allow alternative cube engine / storage engine / data source.
-* [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster by comparing hundreds of jobs.
-* [KYLIN-942] - A better storage engine, makes query roughly 2 times faster (especially for slow queries) by comparing tens of thousands sqls.
-* [KYLIN-738] - Streaming cubing EXPERIMENTAL support, source from kafka, build cube in-mem at minutes interval.
-* [KYLIN-242] - Redesign aggregation group, support of 20+ dimensions made easy.
-* [KYLIN-976] - Custom aggregation types (or UDF in other words).
-* [KYLIN-943] - TopN aggregation type.
-* [KYLIN-1065] - ODBC compatible with Tableau 9.1, MS Excel, MS PowerBI.
-* [KYLIN-1219] - Kylin support SSO with Spring SAML.
-
-__New Feature__
-
-* [KYLIN-528] - Build job flow for Inverted Index building
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-596] - Support Excel and Power BI
-* [KYLIN-599] - Near real-time support
-* [KYLIN-607] - More efficient cube building
-* [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
-* [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
-* [KYLIN-728] - IGTStore implementation which use disk when memory runs short
-* [KYLIN-738] - StreamingOLAP
-* [KYLIN-749] - support timestamp type in II and cube
-* [KYLIN-774] - Automatically merge cube segments
-* [KYLIN-868] - add a metadata backup/restore script in bin folder
-* [KYLIN-886] - Data Retention for streaming data
-* [KYLIN-906] - cube retention
-* [KYLIN-943] - Approximate TopN supported by Cube
-* [KYLIN-986] - Generalize Streaming scripts and put them into code repository
-* [KYLIN-1219] - Kylin support SSO with Spring SAML
-* [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model
-* [KYLIN-1458] - Checking the consistency of cube segment host with the environment after cube migration
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-* [KYLIN-1483] - Command tool to visualize all cuboids in a cube/segment
-
-__Improvement__
-
-* [KYLIN-225] - Support edit "cost" of cube
-* [KYLIN-410] - table schema not expand when clicking the database text
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-633] - Support Timestamp for cube partition
-* [KYLIN-649] - move the cache layer from service tier back to storage tier
-* [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
-* [KYLIN-663] - Push time condition down to ii endpoint
-* [KYLIN-668] - Out of memory in mapper when building cube in mem
-* [KYLIN-671] - Implement fine grained cache for cube and ii
-* [KYLIN-674] - IIEndpoint return metrics as well
-* [KYLIN-675] - cube&model designer refactor
-* [KYLIN-678] - optimize RowKeyColumnIO
-* [KYLIN-697] - Reorganize all test cases to unit test and integration tests
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS
-* [KYLIN-708] - replace BitSet for AggrKey
-* [KYLIN-712] - some enhancement after code review
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
-* [KYLIN-719] - bundle statistics info in endpoint response
-* [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
-* [KYLIN-721] - streaming cli support third-party streammessage parser
-* [KYLIN-726] - add remote cli port configuration for KylinConfig
-* [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
-* [KYLIN-734] - Push cache layer to each storage engine
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
-* [KYLIN-760] - Improve the hasing performance in Sampling cuboid size
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-773] - performance is slow list jobs
-* [KYLIN-783] - update hdp version in test cases to 2.2.4
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-* [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
-* [KYLIN-816] - Allow gap in cube segments, for streaming case
-* [KYLIN-822] - list cube overview in one page
-* [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-844] - add backdoor toggles to control query behavior
-* [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-870] - Speed up hbase segments info by caching
-* [KYLIN-871] - growing dictionary for streaming case
-* [KYLIN-874] - script for fill streaming gap automatically
-* [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
-* [KYLIN-879] - add a tool to collect orphan hbases
-* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-902] - move streaming related parameters into StreamingConfig
-* [KYLIN-909] - Adapt GTStore to hbase endpoint
-* [KYLIN-919] - more friendly UI for 0.8
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-927] - Real time cubes merging skipping gaps
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-* [KYLIN-942] - Cube parallel scan on Hbase
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-960] - Split storage module to core-storage and storage-hbase
-* [KYLIN-973] - add a tool to analyse streaming output logs
-* [KYLIN-984] - Behavior change in streaming data consuming
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1018] - make TimedJsonStreamParser default parser
-* [KYLIN-1019] - Remove v1 cube model classes from code repository
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1036] - Code Clean, remove code which never used at front end
-* [KYLIN-1041] - ADD Streaming UI
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1068] - Optimize the memory footprint for TopN counter
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1096] - Deprecate minicluster
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
-* [KYLIN-1135] - Pscan use share thread pool
-* [KYLIN-1136] - Distinguish fast build mode and complete build mode
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1190] - Make memory budget per query configurable
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
-* [KYLIN-1237] - Revisit on cube size estimation
-* [KYLIN-1239] - attribute each htable with team contact and owner name
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
-* [KYLIN-1266] - Tune release package size
-* [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
-* [KYLIN-1268] - Fix 2 kylin logs
-* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
-* [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
-* [KYLIN-1283] - Replace GTScanRequest's SerDer form Kryo to manual
-* [KYLIN-1287] - UI update for streaming build action
-* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
-* [KYLIN-1301] - fix segment pruning failure
-* [KYLIN-1308] - query storage v2 enable parallel cube visiting
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1318] - enable gc log for kylin server instance
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1327] - Tool for batch updating host information of htables
-* [KYLIN-1333] - Kylin Entity Permission Control
-* [KYLIN-1334] - allow truncating string for fixed length dimensions
-* [KYLIN-1341] - Display JSON of Data Model in the dialog
-* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-* [KYLIN-1424] - Should support multiple selection in picking up dimension/measure column step in data model wizard
-* [KYLIN-1438] - auto generate aggregation group
-* [KYLIN-1474] - expose list, remove and cat in metastore.sh
-* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
-* [KYLIN-242] - Redesign aggregation group
-* [KYLIN-770] - optimize memory usage for GTSimpleMemStore GTAggregationScanner
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-980] - FactDistinctColumnsJob to support high cardinality columns
-* [KYLIN-1079] - Manager large number of entries in metadata store
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1222] - restore testing v1 query engine in case need it as a fallback for v2
-* [KYLIN-1232] - Refine ODBC Connection UI
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1366] - Bind metadata version with release version
-* [KYLIN-1389] - Formatting ODBC Drive C++ code
-* [KYLIN-1405] - Aggregation group validation
-* [KYLIN-1465] - Beautify kylin log to convenience both production trouble shooting and CI debuging
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-521] - No verification even if join condition is invalid
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-635] - IN clause within CASE when is not working
-* [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-665] - buffer error while in mem cubing
-* [KYLIN-688] - possible memory leak for segmentIterator
-* [KYLIN-731] - Parallel stream build will throw OOM
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-748] - II returned result not correct when decimal omits precision and scal
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-766] - round BigDecimal according to the DataType scale
-* [KYLIN-769] - empty segment build fail due to no dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-778] - can't build cube after package to binary
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted
-* [KYLIN-801] - fix remaining issues on query cache and storage cache
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
-* [KYLIN-817] - Support Extract() on timestamp column
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
-* [KYLIN-834] - optimize StreamingUtil binary search perf
-* [KYLIN-837] - fix submit build type when refresh cube
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-905] - Boolean type not supported
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-951] - Drop RowBlock concept from GridTable general API
-* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Don't suppoprt aggregation AVG while executing SQL
-* [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
-* [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
-* [KYLIN-993] - implement substr support in kylin
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-* [KYLIN-1027] - HBase Token not added after KYLIN-1007
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
-* [KYLIN-1115] - Clean up ODBC driver code
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1127] - Refactor CacheService
-* [KYLIN-1137] - TopN measure need support dictionary merge
-* [KYLIN-1138] - Bad CubeDesc signature cause segment be delete when enable a cube
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1151] - Menu items should be aligned when create new model
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.3 to 1.4
-* [KYLIN-1171] - KylinConfig truncate bug
-* [KYLIN-1179] - Cannot use String as partition column
-* [KYLIN-1180] - Some NPE in Dictionary
-* [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
-* [KYLIN-1182] - DataModelDesc needs to be updated from v1.x to v2.0
-* [KYLIN-1192] - Cannot edit data model desc without name change
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-* [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
-* [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
-* [KYLIN-1225] - Only 15 cubes listed in the /models page
-* [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1269] - Can not get last error message in email
-* [KYLIN-1271] - Create streaming table layer will disappear if click on outside
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1282] - Comparison filter on Date/Time column not work for query
-* [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
-* [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
-* [KYLIN-1306] - Null strings are not applied during fast cubing
-* [KYLIN-1314] - Display issue for aggregation groups
-* [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
-* [KYLIN-1337] - Sort cube name doesn't work well
-* [KYLIN-1346] - IllegalStateException happens in SparkCubing
-* [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
-* [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
-* [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
-* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
-* [KYLIN-1357] - Cloned cube has build time information
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
-* [KYLIN-1387] - Streaming cubing doesn't generate cuboids files on HDFS, cause cube merge failure
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
-* [KYLIN-1413] - Row key column's sequence is wrong after saving the cube
-* [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
-* [KYLIN-1417] - TimedJsonStreamParser is case sensitive for message's property name
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1420] - Query returns empty result on partition column's boundary condition
-* [KYLIN-1421] - Cube "source record" is always zero for streaming
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1430] - Not add "STREAMING_" prefix when import a streaming table
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1344] - Bitmap measure defined after TopN measure can cause merge to fail
-* [KYLIN-1386] - Duplicated projects appear in connection dialog after clicking CONNECT button multiple times
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-* [KYLIN-1473] - Cannot have comments in the end of New Query textbox
-
-__Task__
-
-* [KYLIN-529] - Migrate ODBC source code to Apache Git
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-762] - remove quartz dependency
-* [KYLIN-763] - remove author name
-* [KYLIN-820] - support streaming cube of exact timestamp range
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-1112] - Reorganize InvertedIndex source codes into plug-in architecture
-* [KYLIN-808] - streaming cubing support split by data timestamp
-* [KYLIN-1427] - Enable partition date column to support date and hour as separate columns for increment cube build
-
-__Test__
-
-* [KYLIN-677] - benchmark for Endpoint without dictionary
-* [KYLIN-826] - create new test case for streaming building & queries
-
-
-## v1.3.0 - 2016-03-14
-_Tag:_ [kylin-1.3.0](https://github.com/apache/kylin/tree/kylin-1.3.0)
-
-__New Feature__
-
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-
-__Improvement__
-
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1132] - make filtering input easier in creating cube
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1240] - Fix link and typo in README
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1333] - Kylin Entity Permission Control 
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-
-__Bug__
-
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1078] - Cannot have comments in the end of New Query textbox
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1110] - can not see project options after clearing browser cookie and cache
-* [KYLIN-1159] - problem about kylin web UI
-* [KYLIN-1214] - Remove "Back to My Cubes" link in non-edit mode
-* [KYLIN-1215] - minor, update website member's info on community page
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1254] - cube model will be overridden while creating a new cube with the same name
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1342] - Typo in doc
-* [KYLIN-1354] - Couldn't edit a cube if it has no "partition date" set
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column"  is too small to select
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-
-__Test__
-
-* [KYLIN-1335] - Disable PrintResult in KylinQueryTest
-
-
-## v1.2 - 2015-12-15
-_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
-
-__New Feature__
-
-* [KYLIN-596] - Support Excel and Power BI
-    
-__Improvement__
-
-* [KYLIN-389] - Can't edit cube name for existing cubes
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName=  is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after be migrated
-* [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
-* [KYLIN-1190] - Make memory budget per query configurable
-
-__Bug__
-
-* [KYLIN-693] - Couldn't change a cube's name after it be created
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-966] - When user creates a cube, if entering a name which already exists, Kylin will throw an exception on the last step
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1070] - changing case in table name in model desc
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1120] - MapReduce job read local meta issue
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
-* [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-
-__Task__
-
-* [KYLIN-1170] - Update website and status files to TLP
-
-
-## v1.1.1-incubating - 2015-11-04
-_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
-
-__Improvement__
-
-* [KYLIN-999] - License check and cleanup for release
-
-## v1.1-incubating - 2015-10-25
-_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
-
-__New Feature__
-
-* [KYLIN-222] - Web UI to Display CubeInstance Information
-* [KYLIN-906] - cube retention
-* [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
-
-__Bug__
-
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
-* [KYLIN-847] - "select * from fact" does not work on 0.7 branch
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-952] - User can trigger a Refresh job on a non-existing cube segment via REST API
-* [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
-* [KYLIN-961] - Can't get cube source record count.
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Don't support aggregation AVG while executing SQL
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
-* [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it is still restricted to less than 4 million
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-
-__Improvement__
-
-* [KYLIN-343] - Enable timeout on query 
-* [KYLIN-367] - automatically backup metadata everyday
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-965] - Allow user to configure the region split size for cube
-* [KYLIN-971] - kylin display timezone on UI
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
-* [KYLIN-999] - License check and cleanup for release
-* [KYLIN-1013] - Make hbase client configurations like timeout configurable
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
-* [KYLIN-1035] - Validate [Project] before create Cube on UI
-* [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-
-
-## v1.0-incubating - 2015-09-06
-_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
-
-__New Feature__
-
-* [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-626] - JDBC error for float and double values
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-757] - Cache wasn't flushed in cluster mode
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
-* [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
-* [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log 
-* [KYLIN-940] - NPE when closing the null resource
-* [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
-
-__Improvement__
-
-* [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-792] - kylin performance insight [dashboard]
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-842] - Add version and commit id into binary package
-* [KYLIN-844] - add backdoor toggles to control query behavior 
-* [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-867] - Hybrid model for multiple realizations/cubes
-* [KYLIN-880] -  Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-
-__Task__
-
-* [KYLIN-884] - Restructure docs and website
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-954] - Release v1.0 (formerly v0.7.3)
-* [KYLIN-863] - create empty segment when there is no data in one single streaming batch
-* [KYLIN-908] - Help community developer to setup develop/debug environment
-* [KYLIN-931] - Port KYLIN-921 to 0.8 branch
-
-## v0.7.2-incubating - 2015-07-21
-_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
-
-__Main Changes:__  
-Critical bug fixes after the v0.7.1 release; please use this version directly for new cases and upgrade to it for existing deployments.
-
-__Bug__  
-
-* [KYLIN-514] - Error message is not helpful to user when doing something in JSON Editor window
-* [KYLIN-598] - Kylin detecting hive table delim failure
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-765] - When a cube job is failed, still be possible to submit a new job
-* [KYLIN-814] - Duplicate columns error for subqueries on fact table
-* [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optic)
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
-* [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
-* [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
-* [KYLIN-840] - HBase table compress not enabled even LZO is installed
-* [KYLIN-848] - Couldn't resume or discard a cube job
-* [KYLIN-849] - Couldn't query metrics on lookup table PK
-* [KYLIN-865] - Cube has been built but couldn't query; In log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found"
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-888] - "Jobs" page only shows 15 job at max, the "Load more" button was disappeared
-
-__Improvement__
-
-* [KYLIN-159] - Metadata migrate tool 
-* [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
-* [KYLIN-207] - Support SQL pagination
-* [KYLIN-209] - Merge tail small MR jobs into one
-* [KYLIN-210] - Split heavy MR job to more small jobs
-* [KYLIN-221] - Convert cleanup and GC to job 
-* [KYLIN-284] - add log for all Rest API Request
-* [KYLIN-488] - Increase HDFS block size 1GB
-* [KYLIN-600] - measure return type update
-* [KYLIN-611] - Allow Implicit Joins
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-773] - performance is slow list jobs
-* [KYLIN-839] - Optimize Snapshot table memory usage 
-
-__New Feature__
-
-* [KYLIN-211] - Bitmap Inverted Index
-* [KYLIN-285] - Enhance alert program for whole system
-* [KYLIN-467] - Validation Rule: Check duplicate rows in lookup table
-* [KYLIN-471] - Support "Copy" on grid result
-
-__Task__
-
-* [KYLIN-7] - Enable maven checkstyle plugin
-* [KYLIN-885] - Release v0.7.2
-* [KYLIN-812] - Upgrade to Calcite 0.9.2
-
-## v0.7.1-incubating (First Apache Release) - 2015-06-10  
-_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
-
-Apache Kylin v0.7.1-incubating rolled out on June 10, 2015. This is also the first Apache release after joining the incubator. 
-
-__Main Changes:__
-
-* Package renamed from com.kylinolap to org.apache.kylin
-* Code cleaned up to apply Apache License policy
-* Easy install and setup with bunch of scripts and automation
-* Job engine refactored into a generic job manager for all jobs, with improved efficiency
-* Support Hive database other than 'default'
-* JDBC driver available for clients to interact with Kylin server
-* Binary package available for download 
-
-__New Feature__
-
-* [KYLIN-327] - Binary distribution 
-* [KYLIN-368] - Move MailService to Common module
-* [KYLIN-540] - Data model upgrade for legacy cube descs
-* [KYLIN-576] - Refactor expansion rate expression
-
-__Task__
-
-* [KYLIN-361] - Rename package name with Apache Kylin
-* [KYLIN-531] - Rename package name to org.apache.kylin
-* [KYLIN-533] - Job Engine Refactoring
-* [KYLIN-585] - Simplify deployment
-* [KYLIN-586] - Add Apache License header in each source file
-* [KYLIN-587] - Remove hard copy of javascript libraries
-* [KYLIN-624] - Add dimension and metric info into DataModel
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-669] - Release v0.7.1 as first apache release
-* [KYLIN-670] - Update pom with "incubating" in version number
-* [KYLIN-737] - Generate and sign release package for review and vote
-* [KYLIN-795] - Release after success vote
-
-__Bug__
-
-* [KYLIN-132] - Job framework
-* [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
-* [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
-* [KYLIN-230] - Error when create HTable
-* [KYLIN-255] - Error when an aggregated function appears twice in select clause
-* [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
-* [KYLIN-399] - refreshed segment not correctly published to cube
-* [KYLIN-412] - No exception or message when sync up table which can't access
-* [KYLIN-421] - Hive table metadata issue
-* [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
-* [KYLIN-508] - Too high cardinality is not suitable for dictionary!
-* [KYLIN-509] - Order by on fact table not works correctly
-* [KYLIN-517] - Always delete the last one of Add Lookup page button even if deleting the first join condition
-* [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
-* [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
-* [KYLIN-556] - error tip enhance when cube detail return empty
-* [KYLIN-570] - Need not to call API before sending login request
-* [KYLIN-571] - Dimensions lost when creating cube through JSON Editor
-* [KYLIN-572] - HTable size is wrong
-* [KYLIN-581] - unable to build cube
-* [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
-* [KYLIN-588] - Error when run package.sh
-* [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
-* [KYLIN-594] - Making changes in build and packaging with respect to apache release process
-* [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
-* [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
-* [KYLIN-614] - find hive dependency shell file is unable to set the hive dependency correctly
-* [KYLIN-615] - Unable add measures in Kylin web UI
-* [KYLIN-619] - Cube build fails with hive+tez
-* [KYLIN-620] - Wrong duration number
-* [KYLIN-621] - SecurityException when running MR job
-* [KYLIN-627] - Hive tables' partition column was not sync into Kylin
-* [KYLIN-628] - Couldn't build a new created cube
-* [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
-* [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
-* [KYLIN-638] - kylin.sh stop not working
-* [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
-* [KYLIN-640] - sum of float not working
-* [KYLIN-642] - Couldn't refresh cube segment
-* [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
-* [KYLIN-644] - join table as null error when build the cube
-* [KYLIN-652] - Lookup table alias will be set to null
-* [KYLIN-657] - JDBC Driver not register into DriverManager
-* [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
-* [KYLIN-666] - Select float type column got class cast exception
-* [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
-* [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
-* [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
-* [KYLIN-691] - update tomcat download address in download-tomcat.sh
-* [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
-* [KYLIN-703] - UT failed due to unknown host issue
-* [KYLIN-711] - UT failure in REST module
-* [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
-* [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
-
-__Improvement__
-
-* [KYLIN-168] - Installation fails if multiple ZK
-* [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
-* [KYLIN-204] - Kylin web not works properly in IE
-* [KYLIN-217] - Enhance coprocessor with endpoints 
-* [KYLIN-251] - job engine refactoring
-* [KYLIN-261] - derived column validate when create cube
-* [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
-* [KYLIN-324] - Refactor metadata to support InvertedIndex
-* [KYLIN-407] - Validation: There's should no Hive table column using "binary" data type
-* [KYLIN-445] - Rename cube_desc/cube folder
-* [KYLIN-452] - Automatically create local cluster for running tests
-* [KYLIN-498] - Merge metadata tables 
-* [KYLIN-532] - Refactor data model in kylin front end
-* [KYLIN-539] - use hbase command to launch tomcat
-* [KYLIN-542] - add project property feature for cube
-* [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
-* [KYLIN-563] - Wrap kylin start and stop with a script 
-* [KYLIN-567] - More flexible validation of new segments
-* [KYLIN-569] - Support increment+merge job
-* [KYLIN-578] - add more generic configuration for ssh
-* [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
-* [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
-* [KYLIN-634] - Script to import sample data and cube metadata
-* [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
-* [KYLIN-637] - add start&end date for hbase info in cubeDesigner
-* [KYLIN-714] - Add Apache RAT to pom.xml
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-758] - Updating port forwarding issue Hadoop Installation on Hortonworks Sandbox.
-* [KYLIN-779] - [UI] jump to cube list after create cube
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-
-__Wish__
-
-* [KYLIN-608] - Distinct count for ii storage
-
diff --git a/website/_docs16/tutorial/acl.cn.md b/website/_docs16/tutorial/acl.cn.md
deleted file mode 100644
index 006f831..0000000
--- a/website/_docs16/tutorial/acl.cn.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube 权限授予教程
-categories: 教程
-permalink: /cn/docs16/tutorial/acl.html
-version: v1.2
-since: v0.7.1
----
-
-  
-
-在`Cubes`页面,双击cube行查看详细信息。在这里我们关注`Access`标签。
-点击`+Grant`按钮进行授权。
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-一个cube有四种不同的权限。将你的鼠标移动到`?`图标查看详细信息。
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-授权对象也有两种:`User`和`Role`。`Role`是指一组拥有同样权限的用户。
-
-### 1. 授予用户权限
-* 选择`User`类型,输入你想要授权的用户的用户名并选择相应的权限。
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* 然后点击`Grant`按钮提交请求。在这一操作成功后,你会在表中看到一个新的表项。你可以选择不同的访问权限来修改用户权限。点击`Revoke`按钮可以删除一个拥有权限的用户。
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. 授予角色权限
-* 选择`Role`类型,通过点击下拉按钮选择你想要授权的一组用户并选择一个权限。
-
-* 然后点击`Grant`按钮提交请求。在这一操作成功后,你会在表中看到一个新的表项。你可以选择不同的访问权限来修改组权限。点击`Revoke`按钮可以删除一个拥有权限的组。
diff --git a/website/_docs16/tutorial/acl.md b/website/_docs16/tutorial/acl.md
deleted file mode 100644
index be59d60..0000000
--- a/website/_docs16/tutorial/acl.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Permission
-categories: tutorial
-permalink: /docs16/tutorial/acl.html
-since: v0.7.1
----
-
-On the `Cubes` page, double-click a cube row to see its detail information. Here we focus on the `Access` tab.
-Click the `+Grant` button to grant permission.
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see detailed information.
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-There are also two types of grantee for a permission: `User` and `Role`. A `Role` means a group of users who have the same role.
-
-### 1. Grant User Permission
-* Select `User` type, enter the username of the user you want to grant access to, and select the related permission.
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* Then click the `Grant` button to send the request. After this operation succeeds, you will see a new entry show up in the table. You can select a different access permission to change a user's permission. To delete a user's permission, just click the `Revoke` button.
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. Grant Role Permission
-* Select `Role` type, choose the group of users that you want to grant access to by clicking the drop-down button, and select a permission.
-
-* Then click the `Grant` button to send the request. After this operation succeeds, you will see a new entry show up in the table. You can select a different access permission to change a group's permission. To delete a group's permission, just click the `Revoke` button.
diff --git a/website/_docs16/tutorial/create_cube.cn.md b/website/_docs16/tutorial/create_cube.cn.md
deleted file mode 100644
index 0f44010..0000000
--- a/website/_docs16/tutorial/create_cube.cn.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube 创建教程
-categories: 教程
-permalink: /cn/docs16/tutorial/create_cube.html
-version: v1.2
-since: v0.7.1
----
-  
-  
-### I. 新建一个项目
-1. 由顶部菜单栏进入`Query`页面,然后点击`Manage Projects`。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. 点击`+ Project`按钮添加一个新的项目。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
-
-3. 填写下列表单并点击`submit`按钮提交请求。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. 成功后,底部会显示通知。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. 同步一张表
-1. 在顶部菜单栏点击`Tables`,然后点击`+ Sync`按钮加载hive表元数据。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/4 %2Btable.png)
-
-2. 输入表名并点击`Sync`按钮提交请求。
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-### III. 新建一个cube
-首先,在顶部菜单栏点击`Cubes`。然后点击`+Cube`按钮进入cube designer页面。
-
-![](/images/Kylin-Cube-Creation-Tutorial/6 %2Bcube.png)
-
-**步骤1. Cube信息**
-
-填写cube基本信息。点击`Next`进入下一步。
-
-你可以使用字母、数字和“_”来为你的cube命名(注意名字中不能使用空格)。
-
-![](/images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-
-**步骤2. 维度**
-
-1. 建立事实表。
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
-
-2. 点击`+Dimension`按钮添加一个新的维度。
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-%2Bdim.png)
-
-3. 可以选择不同类型的维度加入一个cube。我们在这里列出其中一部分供你参考。
-
-    * 从事实表获取维度。
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
-
-    * 从查找表获取维度。
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
-
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
-   
-    * 从有分级结构的查找表获取维度。
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
-
-    * 从有衍生维度(derived dimensions)的查找表获取维度。
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
-
-4. 用户可以在保存维度后进行编辑。
-   ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
-
-**步骤3. 度量**
-
-1. 点击`+Measure`按钮添加一个新的度量。
-   ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-%2Bmeas.png)
-
-2. 根据它的表达式共有5种不同类型的度量:`SUM`、`MAX`、`MIN`、`COUNT`和`COUNT_DISTINCT`。请谨慎选择返回类型,它与`COUNT(DISTINCT)`的误差率相关。
-   * SUM
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
-
-   * MIN
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
-
-   * MAX
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
-
-   * COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
-
-   * DISTINCT_COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
-
-**步骤4. 过滤器**
-
-这一步骤是可选的。你可以使用`SQL`格式添加一些条件过滤器。
-
-![](/images/Kylin-Cube-Creation-Tutorial/10 filter.png)
-
-**步骤5. 更新设置**
-
-这一步骤是为增量构建cube而设计的。
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
-
-选择分区类型、分区列和开始日期。
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
-
-**步骤6. 高级设置**
-
-![](/images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
-
-**步骤7. 概览 & 保存**
-
-你可以概览你的cube并返回之前的步骤进行修改。点击`Save`按钮完成cube创建。
-
-![](/images/Kylin-Cube-Creation-Tutorial/13 overview.png)
diff --git a/website/_docs16/tutorial/create_cube.md b/website/_docs16/tutorial/create_cube.md
deleted file mode 100644
index 25b304f..0000000
--- a/website/_docs16/tutorial/create_cube.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Creation
-categories: tutorial
-permalink: /docs16/tutorial/create_cube.html
----
-
-This tutorial will guide you through creating a cube. It requires you to have at least one sample table in Hive. If you don't have one, you can follow this to create some data.
-  
-### I. Create a Project
-1. Go to `Query` page in top menu bar, then click `Manage Projects`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. Click the `+ Project` button to add a new project.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/2 +project.png)
-
-3. Enter a project name, e.g, "Tutorial", with a description (optional), then click `submit` button to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. After success, the project will show in the table.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. Sync up Hive Table
-1. Click `Model` in the top bar and then click the `Data Source` tab on the left, which lists all the tables loaded into Kylin; click the `Load Hive Table` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
-
-2. Enter the Hive table names, separated with commas (e.g. `DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT`), and then click `Sync` to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-3. [Optional] If you want to browse the Hive database to pick tables, click the `Load Hive Table From Tree` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
-
-4. [Optional] Expand the database node, click to select the table to load, and then click `Sync`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
-
-5. A success message will pop up. In the left `Tables` section, the newly loaded table is added. Clicking the table name will expand the columns.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
-
-6. In the background, Kylin will run a MapReduce job to calculate the approximate cardinality for the newly synced table. After the job finishes, refresh the web page and then click the table name; the cardinality will be shown in the table info.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
-
-
-### III. Create Data Model
-Before creating a cube, you need to define a data model. The data model defines the star schema. One data model can be reused by multiple cubes.
-
-1. Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Model`.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
-
-2. Enter a name for the model, with an optional description.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
-
-3. In the `Fact Table` box, select the fact table of this data model.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
-
-4. [Optional] Click the `Add Lookup Table` button to add a lookup table. Select the table name and join type (inner or left).
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
-
-5. [Optional] Click the `New Join Condition` button, select the FK column of the fact table on the left, and select the PK column of the lookup table on the right side. Repeat this if there is more than one join column.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
-
-6. Click "OK", repeat step 4 and 5 to add more lookup tables if any. After finished, click "Next".
-
-7. The "Dimensions" page allows to select the columns that will be used as dimension in the child cubes. Click the `Columns` cell of a table, in the drop-down list select the column to the list. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
-
-8. Click "Next" go to the "Measures" page, select the columns that will be used in measure/metrics. The measure column can only from fact table. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
-
-9. Click "Next" to the "Settings" page. If the data in fact table increases by day, select the corresponding date column in the `Partition Date Column`, and select the date format, otherwise leave it as blank.
-
-10. [Optional] Select `Cube Size`, which is an indicator on the scale of the cube, by default it is `MEDIUM`.
-
-11. [Optional] If some records should be excluded from the cube, like dirty data, you can input the condition in `Filter`.
-
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
-
-12. Click `Save` and then select `Yes` to save the data model. Once created, the data model will be shown in the left `Models` list.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
-
-### IV. Create Cube
-After the data model is created, you can start to create a cube. 
-
-Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Cube`.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
-
-
-**Step 1. Cube Info**
-
-Select the data model, enter the cube name; Click `Next` to enter the next step.
-
-You can use letters, numbers and '_' to name your cube (blank spaces in the name are not allowed). `Notification List` is a list of email addresses which will be notified on cube job success/failure.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-    
-
-**Step 2. Dimensions**
-
-1. Click `Add Dimension`; it pops up two options, "Normal" and "Derived": "Normal" is to add a normal, independent dimension column, while "Derived" is to add a derived dimension column. Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
-
-2. Click "Normal" and then select a dimension column, give it a meaningful name.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-normal.png)
-    
-3. [Optional] Click "Derived" and then pickup 1 more multiple columns on lookup table, give them a meaningful name.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-derived.png)
-
-4. Repeat steps 2 and 3 to add all dimension columns; you can do this in batch for "Normal" dimensions with the `Auto Generator` button. 
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
-
-5. Click "Next" after select all dimensions.
-
-**Step 3. Measures**
-
-1. Click the `+Measure` to add a new measure.
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
-
-2. There are 6 types of measures according to their expressions: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT` and `TOP_N`. Properly select the return type for `COUNT_DISTINCT` and `TOP_N`, as it will impact the cube size.
-   * SUM
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
-
-   * MIN
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
-
-   * MAX
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
-
-   * COUNT
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
-
-   * DISTINCT_COUNT
-   This measure has two implementations: 
-   a) an approximate implementation with HyperLogLog: select an acceptable error rate; a lower error rate takes more storage.
-   b) a precise implementation with bitmap (see the limitations in https://issues.apache.org/jira/browse/KYLIN-1186). 
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
-
-   Please note: distinct count is a very heavy data type; it is slower to build and query compared to other measures.
-
-   * TOP_N
-   The approximate TopN measure pre-calculates the top records in each dimension combination; it provides much better query performance than doing no pre-calculation. Two parameters need to be specified here: the first is the column that will be used as the metric for top records (aggregated with SUM and then sorted in descending order); the second is the literal ID, which represents the record, like seller_id. A sample query that this measure accelerates is sketched after the screenshot below.
-
-   Properly select the return type, depending on how many top records you want to inspect: top 10, top 100 or top 1000. 
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
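-
-A sketch of the kind of TopN query this measure accelerates, assuming a `seller_id` dimension and a `price` metric as in Kylin's sample data:
-
-{% highlight Groff markup %}
-SELECT seller_id, SUM(price)
-FROM kylin_sales
-GROUP BY seller_id
-ORDER BY SUM(price) DESC
-LIMIT 100;
-{% endhighlight %}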
-
-
-**Step 4. Refresh Setting**
-
-This step is designed for incremental cube build. 
-
-`Auto Merge Time Ranges (days)`: merge small segments into medium and large segments automatically. If you don't want auto merge, remove the two default ranges.
-
-`Retention Range (days)`: only keep segments whose data falls within the past given number of days in the cube; older segments will be automatically dropped from the head. 0 means this feature is disabled.
-
-`Partition Start Date`: the start date of this cube.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
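-
-For reference, these settings map to fields in the cube descriptor JSON. A minimal sketch (values are illustrative; the time ranges are epoch milliseconds, here 7 and 28 days):
-
-{% highlight Groff markup %}
-"partition_date_start": 1388534400000,
-"auto_merge_time_ranges": [604800000, 2419200000],
-"retention_range": 0,
-{% endhighlight %}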
-
-**Step 5. Advanced Setting**
-
-`Aggregation Groups`: by default Kylin puts all dimensions into one aggregation group; if you know your query patterns well, you can create multiple aggregation groups. For the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions", read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
-
-`Rowkeys`: the rowkeys are composed of the dimensions' encoded values. "Dictionary" is the default encoding method; if a dimension is not suitable for dictionary encoding (e.g., cardinality > 10 million), select "false" and then enter a fixed length for that dimension, usually the max length of that column; if a value is longer than that size it will be truncated. Please note, without dictionary encoding the cube size might be much bigger.
-
-You can drag & drop a dimension column to adjust its position in the rowkey. Put the mandatory dimension at the beginning, followed by the dimensions heavily involved in filters (where conditions). Put high-cardinality dimensions ahead of low-cardinality dimensions. A sketch of how an aggregation group appears in the cube descriptor follows.
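-
-A rough sketch of an aggregation group in the cube descriptor JSON, with dimension names taken from Kylin's sample cube for illustration:
-
-{% highlight Groff markup %}
-"aggregation_groups": [
-  {
-    "includes": ["PART_DT", "LSTG_FORMAT_NAME", "SELLER_ID"],
-    "select_rule": {
-      "mandatory_dims": ["PART_DT"],
-      "hierarchy_dims": [],
-      "joint_dims": [["LSTG_FORMAT_NAME", "SELLER_ID"]]
-    }
-  }
-]
-{% endhighlight %}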
-
-
-**Step 6. Overview & Save**
-
-You can overview your cube and go back to previous step to modify it. Click the `Save` button to complete the cube creation.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 overview.png)
-
-Cheers! Now the cube is created; you can go ahead to build and play with it.
diff --git a/website/_docs16/tutorial/cube_build_job.cn.md b/website/_docs16/tutorial/cube_build_job.cn.md
deleted file mode 100644
index 8a8822c..0000000
--- a/website/_docs16/tutorial/cube_build_job.cn.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube 建立和Job监控教程
-categories: 教程
-permalink: /cn/docs16/tutorial/cube_build_job.html
-version: v1.2
-since: v0.7.1
----
-
-### Cube建立
-首先,确认你拥有你想要建立的cube的权限。
-
-1. 在`Cubes`页面中,点击cube栏右侧的`Action`下拉按钮并选择`Build`操作。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. 选择后会出现一个弹出窗口。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
-
-3. 点击`END DATE`输入框选择增量构建这个cube的结束日期。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-4. 点击`Submit`提交请求。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
-
-   提交请求成功后,你将会看到`Jobs`页面新建了job。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
-
-5. 如要放弃这个job,点击`Discard`按钮。
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-### Job监控
-在`Jobs`页面,点击job详情按钮查看显示于右侧的详细信息。
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-job详细信息为跟踪一个job提供了它的每一步记录。你可以将光标停放在一个步骤状态图标上查看基本状态和信息。
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-点击每个步骤显示的图标按钮查看详情:`Parameters`、`Log`、`MRJob`、`EagleMonitoring`。
-
-* Parameters
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
diff --git a/website/_docs16/tutorial/cube_build_job.md b/website/_docs16/tutorial/cube_build_job.md
deleted file mode 100644
index b19ef5a..0000000
--- a/website/_docs16/tutorial/cube_build_job.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Build and Job Monitoring
-categories: tutorial
-permalink: /docs16/tutorial/cube_build_job.html
----
-
-### Cube Build
-First of all, make sure that you have permission on the cube you want to build.
-
-1. On the `Models` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. A window will pop up after the selection; click the `END DATE` input box to select the end date of this incremental cube build.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-3. Click `Submit` to send the build request. After success, you will see the new job on the `Monitor` page.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
-
-4. The new job is in "pending" status; after a while, it will start to run, and you can see the progress by refreshing the web page or clicking the refresh button.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
-
-
-5. Wait for the job to finish. In between, if you want to discard it, click the `Actions` -> `Discard` button.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-6. After the job is 100% finished, the cube's status becomes "Ready", meaning it is ready to serve SQL queries. In the `Model` tab, find the cube and click the cube name to expand the section; the "HBase" tab lists the cube segments. Each segment has a start/end time; its underlying HBase table information is also listed.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
-
-If you have more source data, repeat the steps above to build them into the cube. You can also trigger a build through the REST API, as sketched below.
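-
-A sketch of triggering an incremental build via Kylin's REST API with curl; the cube name "my_cube" is hypothetical, the credentials are Kylin's defaults, and the times are epoch milliseconds:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN \
-  -H "Content-Type: application/json" \
-  -d '{"startTime": 0, "endTime": 1451606400000, "buildType": "BUILD"}' \
-  http://sandbox:7070/kylin/api/cubes/my_cube/rebuild
-{% endhighlight %}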
-
-### Job Monitoring
-On the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-The detailed information of a job provides a step-by-step record to trace the job. You can hover over a step status icon to see its basic status and information.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-Click the icon buttons showing in each step to see the details: `Parameters`, `Log`, `MRJob`.
-
-* Parameters
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
-
-
diff --git a/website/_docs16/tutorial/cube_streaming.md b/website/_docs16/tutorial/cube_streaming.md
deleted file mode 100644
index 63d81eb..0000000
--- a/website/_docs16/tutorial/cube_streaming.md
+++ /dev/null
@@ -1,219 +0,0 @@
----
-layout: docs16
-title:  Scalable Cubing from Kafka (beta)
-categories: tutorial
-permalink: /docs16/tutorial/cube_streaming.html
----
-Kylin v1.6 releases the scalable streaming cubing function; it leverages Hadoop to consume data from Kafka and build the cube. You can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This doc is a step-by-step tutorial illustrating how to create and build a sample cube.
-
-## Preparation
-To finish this tutorial, you need a Hadoop environment which has Kylin v1.6.0 or above installed, and a Kafka (v0.10.0 or above) running. Previous Kylin versions have a couple of issues, so please upgrade your Kylin instance first.
-
-In this tutorial, we will use Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) as the environment.
-
-## Install Kafka 0.10.0.0 and Kylin
-Don't use HDP 2.2.4's built-in Kafka as it is too old; stop it first if it is running.
-{% highlight Groff markup %}
-curl -s https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
-
-cd /usr/local/kafka_2.10-0.10.0.0/
-
-bin/kafka-server-start.sh config/server.properties &
-
-{% endhighlight %}
-
-Download Kylin v1.6 from the download page and expand the tarball into the /usr/local/ folder, for example as below.
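-
-A sketch using the Apache archive (the exact mirror URL and package name may differ for your HBase version):
-
-{% highlight Groff markup %}
-curl -s https://archive.apache.org/dist/kylin/apache-kylin-1.6.0/apache-kylin-1.6.0-bin.tar.gz \
-  | tar -xz -C /usr/local/
-{% endhighlight %}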
-
-## Create sample Kafka topic and populate data
-
-Create a sample topic "kylindemo", with 3 partitions:
-
-{% highlight Groff markup %}
-
-bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylindemo
-Created topic "kylindemo".
-{% endhighlight %}
-
-Put sample data into this topic; Kylin has a utility class which can do this:
-
-{% highlight Groff markup %}
-export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
-export KYLIN_HOME=/usr/local/apache-kylin-1.6.0-bin
-
-cd $KYLIN_HOME
-./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylindemo --broker localhost:9092
-{% endhighlight %}
-
-This tool will send 100 records to Kafka every second. Please keep it running during this tutorial. You can check the sample message with kafka-console-consumer.sh now:
-
-{% highlight Groff markup %}
-cd $KAFKA_HOME
-bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server localhost:9092 --topic kylindemo --from-beginning
-{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
-{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
-
-{% endhighlight %}
-
-## Define a table from streaming
-Start the Kylin server with "$KYLIN_HOME/bin/kylin.sh start", log in to the Kylin web GUI at http://sandbox:7070/kylin/, and select an existing project or create a new one. Click "Model" -> "Data Source", then click the "Add Streaming Table" icon.
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
-
-In the pop-up dialog, enter a sample record which you got from the kafka-console-consumer and click the ">>" button; Kylin parses the JSON message and lists all the properties.
-
-You need to give a logical table name for this streaming data source; the name will be used in SQL queries later. Here enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
-
-You need to select a timestamp field which will be used to identify the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here check "order_time". You can deselect those properties which are not needed for the cube; here let's keep all fields.
-
-Note that Kylin supports structured (or "embedded") messages from v1.6; it converts them into a flat table structure, by default using "_" as the separator of the nested properties.
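-
-For example, the nested "user" object in the sample message above is flattened into columns like:
-
-{% highlight Groff markup %}
-"user":{"id":"...","age":21,"gender":"Male"}   -- nested object in the message
-user_id, user_age, user_gender                 -- resulting flat columns
-{% endhighlight %}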
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
-
-
-Click "Next". On this page, provide the Kafka cluster information; Enter "kylindemo" as "Topic" name; The cluster has 1 broker, whose host name is "sandbox", port is "9092", click "Save".
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
-
-In "Advanced setting" section, the "timeout" and "buffer size" are the configurations for connecting with Kafka, keep them. 
-
-In "Parser Setting", by default Kylin assumes your message is JSON format, and each record's timestamp column (specified by "tsColName") is a bigint (epoch time) value; in this case, you just need set the "tsColumn" to "order_time"; 
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
-
-In real case if the timestamp value is a string valued timestamp like "Jul 20, 2016 9:59:17 AM", you need specify the parser class with "tsParser" and the time pattern with "tsPattern" like this:
-
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
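-
-As a sketch, the two settings would look like the following (the parser class name here is only an illustration -- check the parser classes shipped with your Kylin version; the pattern uses Java SimpleDateFormat syntax and matches "Jul 20, 2016 9:59:17 AM"):
-
-{% highlight Groff markup %}
-tsParser=org.apache.kylin.source.kafka.DateTimeParser
-tsPattern=MMM dd, yyyy h:mm:ss a
-{% endhighlight %}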
-
-Click "Submit" to save the configurations. Now a "Streaming" table is created.
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
-
-## Define data model
-With the table defined in the previous step, we can now create the data model. The steps are almost the same as creating a normal data model, but there are two requirements:
-
-* A streaming cube doesn't support joins with lookup tables; when defining the data model, select only the fact table, no lookup tables;
-* A streaming cube must be partitioned; if you're going to build the cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column; if at the hour level, select "HOUR_START".
-
-Here we pick 13 dimension columns and 2 measure columns:
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
-Save the data model.
-
-## Create Cube
-
-The streaming cube is almost the same as a normal cube; just a couple of points need your attention:
-
-* The partition time column should be a dimension of the Cube. In Streaming OLAP the time is always a query condition, and Kylin will leverage this to narrow down the scanned partitions.
-* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest to use "mintue\_start", "hour\_start" or other, depends on how you will inspect the data.
-* Define "year\_start", "quarter\_start", "month\_start", "day\_start", "hour\_start", "minute\_start" as a hierarchy to reduce the combinations to calculate.
-* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 hours, 1 day, and then 7 days; This will help to control the cube segment number.
-* In the "rowkeys" section, drag&drop the "minute\_start" to the head position, as for streaming queries, the time condition is always appeared; putting it to head will help to narrow down the scan range.
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
-
-Save the cube.
-
-## Run a build
-
-You can trigger the build from the web GUI, by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
-{% endhighlight %}
-
-Please note the API endpoint is different from that of a normal cube (this URL ends with "build2").
-
-Here 0 means starting from the last position, and 9223372036854775807 (Long.MAX_VALUE) means ending at the current end of the Kafka topic. If this is the first build (no previous segment), Kylin will seek to the beginning of the topic as the start position. 
-
-In the "Monitor" page, a new job is generated; Wait it 100% finished.
-
-## Click the "Insight" tab, compose a SQL to run, e.g:
-
- {% highlight Groff markup %}
-select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
- {% endhighlight %}
-
-The result looks like below.
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
-
-
-## Automate the build
-
-Once the first build and query succeed, you can schedule incremental builds at a certain frequency. Kylin records the offsets of each build; when receiving a build request, it will start from the last end position and seek the latest offsets from Kafka. You can trigger builds through the REST API with any scheduling tool, such as Linux cron:
-
-  {% highlight Groff markup %}
-crontab -e
-*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
- {% endhighlight %}
-
-Now you can sit back and watch the cube be built automatically from streaming data. When the cube segments accumulate to a bigger time range, Kylin will automatically merge them into a bigger segment.
-
-## Troubleshooting
-
- * You may encounter the following error when running "kylin.sh":
-{% highlight Groff markup %}
-Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
-	at java.lang.Class.getDeclaredMethods0(Native Method)
-	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
-	at java.lang.Class.getMethod0(Class.java:2856)
-	at java.lang.Class.getMethod(Class.java:1668)
-	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
-	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
-Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
-	at java.security.AccessController.doPrivileged(Native Method)
-	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
-	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
-	... 6 more
-{% endhighlight %}
-
-The reason is that Kylin wasn't able to find the proper Kafka client jars; make sure you have properly set the "KAFKA_HOME" environment variable, for example:
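-
-{% highlight Groff markup %}
-export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylindemo --broker localhost:9092
-{% endhighlight %}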
-
- * Get "killed by admin" error in the "Build Cube" step
-
- Within a sandbox VM, YARN may not allocate the requested memory to the MR job, as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml" and change the following two parameters like this:
-
- {% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>1072</value>
-        <description></description>
-    </property>
-
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx800m</value>
-        <description></description>
-    </property>
- {% endhighlight %}
-
- * If there are already lots of history messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
-{% endhighlight %}
-
- * If a build job fails and you discard it, a hole (or gap) is left in the cube. Since Kylin always builds from the last position, you can't expect the hole to be filled by normal builds. Kylin provides APIs to check and fill the holes: 
-
-Check holes:
- {% highlight Groff markup %}
-curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
-
-If the result is an empty array, there is no hole; otherwise, trigger Kylin to fill them:
- {% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
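-
-A minimal sketch that combines the two calls in a cron-friendly script (it assumes the 'jq' utility is available for parsing the JSON response):
-
-{% highlight Groff markup %}
-HOLES=$(curl -s -X GET --user ADMIN:KYLIN http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes)
-# fill the holes only when the returned array is non-empty
-if [ "$(echo "$HOLES" | jq 'length')" -gt 0 ]; then
-  curl -X PUT --user ADMIN:KYLIN http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-fi
-{% endhighlight %}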
-
diff --git a/website/_docs16/tutorial/flink.md b/website/_docs16/tutorial/flink.md
deleted file mode 100644
index f3cb99f..0000000
--- a/website/_docs16/tutorial/flink.md
+++ /dev/null
@@ -1,249 +0,0 @@
----
-layout: docs16
-title:  Connect from Apache Flink
-categories: tutorial
-permalink: /docs16/tutorial/flink.html
----
-
-
-### Introduction
-
-This document describes how to use Kylin as a data source in Apache Flink; 
-
-There were several attempts to do this in Scala with JDBC, but none of them worked: 
-
-* [attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)  
-* [attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)  
-* [attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)  
-* [attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
-
-We will use createInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode and access Kylin via JDBC. However, it isn't implemented in Scala, only in Java ([MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html)). This doc will go step by step through solving these problems.
-
-### Pre-requisites
-
-* You need an instance of Kylin with a cube; the [Sample Cube](/docs16/tutorial/kylin_sample.html) is good enough.
-* [Scala](http://www.scala-lang.org/) and [Apache Flink](http://flink.apache.org/) Installed
-* [IntelliJ](https://www.jetbrains.com/idea/) Installed and configured for Scala/Flink (see [Flink IDE setup guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html) )
-
-### Used software:
-
-* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
-* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
-* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
-* [Scala](https://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
-
-### Starting point:
-
-This can be our initial skeleton: 
-
-{% highlight Groff markup %}
-import org.apache.flink.api.scala._
-val env = ExecutionEnvironment.getExecutionEnvironment
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .finish()
-  val dataset =env.createInput(inputFormat)
-{% endhighlight %}
-
-The first error is: ![alt text](/images/Flink-Tutorial/02.png)
-
-Add this to the Scala file: 
-{% highlight Groff markup %}
-import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
-{% endhighlight %}
-
-The next error is  ![alt text](/images/Flink-Tutorial/03.png)
-
-We can solve this dependency [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); add this to your pom.xml:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-jdbc</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-## Solve dependencies of Row 
-
-Similar to the previous point, we need to solve the dependencies of the Row class [(mvn repository: Table) ](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
-
-  ![](/images/Flink-Tutorial/03b.png)
-
-
-* In pom.xml add:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-table_2.10</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-* In Scala: 
-{% highlight Groff markup %}
-import org.apache.flink.api.table.Row
-{% endhighlight %}
-
-## Solve RowTypeInfo property (and their new dependencies)
-
-This is the new error to solve:
-
-  ![](/images/Flink-Tutorial/04.png)
-
-
-* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new (and mandatory) property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69) added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750)  Manual [JDBCInputFormat](https://ci.apa [...]
-
-   Add the new Property: **setRowTypeInfo**
-   
-{% highlight Groff markup %}
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .setRowTypeInfo(DB_ROWTYPE)
-  .finish()
-{% endhighlight %}
-
-* How can we configure this property in Scala? [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala) has an incorrect solution
-   
-   We can check the types using the intellisense: ![alt text](/images/Flink-Tutorial/05.png)
-   
-   Then we will need to add more dependencies; add to the Scala file:
-
-{% highlight Groff markup %}
-import org.apache.flink.api.table.typeutils.RowTypeInfo
-import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
-{% endhighlight %}
-
-   Create an Array or Seq of TypeInformation[ ]
-
-  ![](/images/Flink-Tutorial/06.png)
-
-
-   Solution:
-   
-{% highlight Groff markup %}
-   var stringColum: TypeInformation[String] = createTypeInformation[String]
-   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
-{% endhighlight %}
-
-## Solve ClassNotFoundException
-
-  ![](/images/Flink-Tutorial/07.png)
-
-We need to find the kylin-jdbc-x.x.x.jar and then expose it to Flink
-
-1. Find the Kylin JDBC jar
-
-   From Kylin [Download](http://kylin.apache.org/download/) choose **Binary** and the **correct version of Kylin and HBase**
-   
-   Download & Unpack: in ./lib: 
-   
-  ![](/images/Flink-Tutorial/08.png)
-
-
-2. Make this JAR accessible to Flink
-
-   If you run Flink as a service, you need to put this JAR on your Java classpath using your .bashrc 
-
-  ![](/images/Flink-Tutorial/09.png)
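-
-  For example (the install path is the one used in these tutorials; adjust the jar name to the Kylin release you downloaded):
-
-{% highlight Groff markup %}
-export CLASSPATH=$CLASSPATH:/usr/local/apache-kylin-1.6.0-bin/lib/kylin-jdbc-1.6.0.jar
-{% endhighlight %}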
-
-
-  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
-  
-  Check the permissions on this file (it must be readable by you):
-
-  ![](/images/Flink-Tutorial/11.png)
-
- 
-  If you are executing from the IDE, you need to add the classpath manually:
-  
-  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt text](/images/Flink-Tutorial/13.png) > ![alt text](/images/Flink-Tutorial/14.png) > ![alt text](/images/Flink-Tutorial/15.png)
-  
-  The result, will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
-  
-## Solve "Couldn’t access resultSet" error
-
-  ![](/images/Flink-Tutorial/17.png)
-
-
-It is related to [FLINK-4108](https://issues.apache.org/jira/browse/FLINK-4108)  [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415), for which Timo Walther [made a PR](https://github.com/apache/flink/pull/2619)
-
-If you are running Flink <= 1.2, you will need to apply this patch and rebuild with a clean install
-
-## Solve the casting error
-
-  ![](/images/Flink-Tutorial/18.png)
-
-The error message tells you both the problem and the solution. Nice!
-
-## The result
-
-The output should be similar to this, printing the query result to standard output:
-
-  ![](/images/Flink-Tutorial/19.png)
-
-
-## Now, more complex
-
-Try a multi-column, multi-type query:
-
-{% highlight Groff markup %}
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales 
-group by part_dt 
-order by part_dt
-{% endhighlight %}
-
-This needs changes in DB_ROWTYPE:
-
-  ![](/images/Flink-Tutorial/20.png)
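-
-A sketch of what the screenshot shows; the exact TypeInformation choices are assumptions, so match them to the types your driver actually returns:
-
-{% highlight Groff markup %}
-// one TypeInformation per selected column, in the order of the SELECT list
-val DB_ROWTYPE = new RowTypeInfo(Seq(
-  createTypeInformation[java.sql.Date],        // part_dt
-  createTypeInformation[java.math.BigDecimal], // total_selled
-  createTypeInformation[java.lang.Long]        // sellers
-))
-{% endhighlight %}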
-
-
-And import the Java type libraries to work with Java data types ![alt text](/images/Flink-Tutorial/21.png)
-
-The new result will be: 
-
-  ![](/images/Flink-Tutorial/23.png)
-
-
-## Error:  Reused Connection
-
-
-  ![](/images/Flink-Tutorial/24.png)
-
-Check whether your HBase and Kylin are working; you can also use the Kylin UI for this.
-
-
-## Error:  java.lang.AbstractMethodError:  ….Avatica Connection
-
-See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
-
-It is a problem with the kylin-jdbc-1.x.x JAR: you need Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
-
-  ![](/images/Flink-Tutorial/25.png)
-
-
-
-## Error: can't expand macros compiled by previous versions of scala
-
-This is a problem with Scala versions; check your actual version with "scala -version" and choose the correct POM.
-
-Perhaps you will need IntelliJ > File > Invalidate Caches / Restart.
-
-I added a POM for Scala 2.11
-
-
-## Final Words
-
-Now you can read Kylin’s data from Apache Flink, great!
-
-[Full Code Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
-
-All integration problems are solved, and it has been tested with different types of data (Long, BigDecimal and Date). The patch was committed on 15 Oct and will be part of Flink 1.2.
diff --git a/website/_docs16/tutorial/kylin_sample.md b/website/_docs16/tutorial/kylin_sample.md
deleted file mode 100644
index f60ed8b..0000000
--- a/website/_docs16/tutorial/kylin_sample.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-layout: docs16
-title:  Quick Start with Sample Cube
-categories: tutorial
-permalink: /docs16/tutorial/kylin_sample.html
----
-
-Kylin provides a script for you to create a sample cube; the script will also create three sample Hive tables:
-
-1. Run ${KYLIN_HOME}/bin/sample.sh; restart the Kylin server to flush the caches;
-2. Log on to the Kylin web UI with the default user ADMIN/KYLIN, and select project "learn_kylin" in the project dropdown list (upper left corner);
-3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick a date later than 2014-01-01 (to cover all 10,000 sample records);
-4. Check the build progress in the "Monitor" tab, until it reaches 100%;
-5. Execute SQLs in the "Insight" tab, for example:
-	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
-6. You can verify the query result and compare the response time with Hive, e.g. with the command shown below;
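-
-For comparison, the same query can be run directly in Hive (assuming the sample tables were created in Hive's default database):
-
-{% highlight Groff markup %}
-hive -e "select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt"
-{% endhighlight %}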
-
-   
-## What's next
-
-You can create another cube with the sample tables, by following the tutorials.
diff --git a/website/_docs16/tutorial/odbc.cn.md b/website/_docs16/tutorial/odbc.cn.md
deleted file mode 100644
index 9ebe8dc..0000000
--- a/website/_docs16/tutorial/odbc.cn.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin ODBC Driver Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/odbc.html
-version: v1.2
-since: v0.7.1
----
-
-> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both 32-bit and 64-bit versions of the driver are available.
-> 
-> Tested operating systems: Windows 7, Windows Server 2008 R2
-> 
-> Tested applications: Tableau 8.0.4 and Tableau 8.1.3
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable
-   * For 32-bit Windows or 32-bit Tableau Desktop, download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64-bit Windows or 64-bit Tableau Desktop, download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-2. The ODBC driver internally gets results from a REST server; make sure you have access to one
-
-## Installation
-1. If you installed an earlier version, uninstall the existing Kylin ODBC first
-2. Download the driver installer from [download](../../download/) and run it.
-   * For 32-bit Tableau Desktop: please install KylinODBCDriver (x86).exe
-   * For 64-bit Tableau Desktop: please install KylinODBCDriver (x64).exe
-
-3. With both drivers installed on the Tableau Server, you should be able to publish there without issues
-
-## Bug Report
-If you hit any problem, please report bugs in the Apache Kylin JIRA, or send an email to the dev mailing list.
diff --git a/website/_docs16/tutorial/odbc.md b/website/_docs16/tutorial/odbc.md
deleted file mode 100644
index 06fbf8b..0000000
--- a/website/_docs16/tutorial/odbc.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-layout: docs16
-title:  Kylin ODBC Driver
-categories: tutorial
-permalink: /docs16/tutorial/odbc.html
-since: v0.7.1
----
-
-> We provide Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both 32-bit version or 64-bit version driver are available.
-> 
-> Tested operating systems: Windows 7, Windows Server 2008 R2
-> 
-> Tested applications: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable 
-   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-
-2. The ODBC driver internally gets results from a Kylin REST server; make sure you have access to one. A quick check from the command line (host and credentials are placeholders):
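-
-{% highlight Groff markup %}
-curl -X GET --user ADMIN:KYLIN http://hostname:7070/kylin/api/projects
-{% endhighlight %}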
-
-## Installation
-1. Uninstall the existing Kylin ODBC first, if you already installed it before
-2. Download ODBC Driver from [download](../../download/).
-   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
-   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
-
-3. With both drivers installed on the Tableau Server, you should be able to publish there without issues
-
-## DSN configuration
-1. Open ODBCAD to configure DSN.
-	* For 32 bit driver, please use the 32bit version in C:\Windows\SysWOW64\odbcad32.exe
-	* For 64 bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrator Tools
-![]( /images/Kylin-ODBC-DSN/1.png)
-
-2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed as an option, Click "Finish" to continue.
-![]( /images/Kylin-ODBC-DSN/2.png)
-
-3. In the pop-up dialog, fill in all the blanks; the server host is where your Kylin REST server is started.
-![]( /images/Kylin-ODBC-DSN/3.png)
-
-4. Click "Done", and you will see your new DSN listed in the "System Data Sources", you can use this DSN afterwards.
-![]( /images/Kylin-ODBC-DSN/4.png)
-
-## Bug Report
-Please open an Apache Kylin JIRA to report bugs, or send them to the dev mailing list.
diff --git a/website/_docs16/tutorial/powerbi.cn.md b/website/_docs16/tutorial/powerbi.cn.md
deleted file mode 100644
index 112c32b..0000000
--- a/website/_docs16/tutorial/powerbi.cn.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-layout: docs16-cn
-title:  MS Excel and Power BI Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/powerbi.html
-version: v1.2
-since: v1.2
----
-
-Microsoft Excel is one of the most popular data processing tools on the Windows platform today; it supports many data processing functions and, using Power Query, can read data from an ODBC data source into spreadsheets.
-
-Microsoft Power BI is a professional business intelligence and analysis tool from Microsoft, providing users with simple yet rich data visualization and analysis functions.
-
-> The current version of Apache Kylin does not support queries on raw data; some queries may therefore fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better presentation of query results.
-
-
-> Power BI and Excel do not support the "connect live" mode; please take care to add WHERE conditions when querying very large data sets, to avoid pulling too much data from the server to the client, or even failing the query in some cases.
-
-### Install ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html); please make sure to download and install Kylin ODBC Driver __v1.2__. If an earlier version is installed, uninstall it first, then install again. 
-
-### Connect Excel to Kylin
-1. Download and install Power Query from Microsoft's official website. After installation, you will see the Power Query fast tab in Excel; click the `From other sources` dropdown button and select the `From ODBC` item
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2. In the pop-up `From ODBC` data connection wizard, enter the connection string of the Apache Kylin server; you can also type the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the query result will be loaded into the Excel spreadsheet
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> To simplify the connection string, creating a DSN for Apache Kylin is recommended, which shortens the connection string to DSN=[YOUR_DSN_NAME]. For details about creating a DSN, refer to: [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
-
- 
-3. If you choose not to enter a SQL statement, Power Query will list all database tables, and you can load data from a whole table as needed. However, since Apache Kylin does not support queries on raw data yet, loading some tables may be limited
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4. Wait a moment, and the data is loaded into Excel
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  Once the data is updated on the server side, the data in Excel needs to be synchronized: right-click the data source in the right panel and select `Refresh`; the latest data will then be refreshed into the spreadsheet.
-
-6.  To improve performance, you can open `Query Options` in Power Query and enable `Fast data load`; this speeds up data loading but may make the UI temporarily unresponsive
-
-### Power BI
-1.  Launch the Power BI Desktop you have installed, click the `Get data` button, and select the ODBC data source.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  In the pop-up `From ODBC` data connection wizard, enter the database connection string of the Apache Kylin server; you can also type the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the query result will be loaded into Power BI
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you choose not to enter a SQL statement, Power BI will list all tables in the project, and you can load data from a whole table as needed. However, since Apache Kylin does not support queries on raw data yet, loading some tables may be limited
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can go further and do visual analysis with Power BI:
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  Click the `Refresh` button on the toolbar to reload the data and update the charts
-
diff --git a/website/_docs16/tutorial/powerbi.md b/website/_docs16/tutorial/powerbi.md
deleted file mode 100644
index 00612da..0000000
--- a/website/_docs16/tutorial/powerbi.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs16
-title:  MS Excel and Power BI
-categories: tutorial
-permalink: /docs16/tutorial/powerbi.html
-since: v1.2
----
-
-Microsoft Excel is one of the most famous data tools on the Windows platform, with plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets. 
-
-Microsoft Power BI is a business intelligence tool providing rich functionality and experience for data visualization and processing to users.
-
-> Apache Kylin currently doesn't support queries on raw data yet; some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better presentation of query results.
-
-> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet; please be careful when you query a huge dataset, as it may pull too much data into your client, which can take a while or even fail in the end.
-
-### Install ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.2__. If you already installed ODBC Driver in your system, please uninstall it first. 
-
-### Kylin and Excel
-1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` fast tab, click the `From Other Sources` dropdown list, and select the `ODBC` item.
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2.  You'll see the `From ODBC` dialog; just type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> Tips: In order to simplify the Database Connection String, DSN is recommended, which can shorten the Connection String like `DSN=[YOUR_DSN_NAME]`. Details about DSN, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
- 
-3. If you didn't input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. But, since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4.  Hold on for a while, and the data is in Excel now.
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you'll see the latest data.
-
-6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this will make your UI unresponsive for a while. 
-
-### Power BI
-1.  Run Power BI Desktop, and click `Get Data` button, then select `ODBC` as data source type.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  As with Excel, just type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you didn't input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. But, since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can start to enjoy analyzing with Power BI.
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` fast tab.
-
diff --git a/website/_docs16/tutorial/squirrel.md b/website/_docs16/tutorial/squirrel.md
deleted file mode 100644
index 5e69780..0000000
--- a/website/_docs16/tutorial/squirrel.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-layout: docs16
-title:  Connect from SQuirreL
-categories: tutorial
-permalink: /docs16/tutorial/squirrel.html
----
-
-### Introduction
-
-[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform universal SQL client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
-
-### Used Software
-
-* [Kylin v1.6.0](/download/) & ODBC 1.6
-* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
-
-## Pre-requisites
-
-* Find the Kylin JDBC driver jar
-  From the Kylin Download page, choose Binary and the **correct version of Kylin and HBase**
-	Download and unpack it; the driver is in **./lib**: 
-  ![](/images/SQuirreL-Tutorial/01.png)
-
-
-* You need an instance of Kylin with a cube; the [Sample Cube](/docs16/tutorial/kylin_sample.html) is enough.
-
-  ![](/images/SQuirreL-Tutorial/02.png)
-
-
-* [Download and install SQuirreL](http://www.squirrelsql.org/#installation)
-
-## Add Kylin JDBC Driver
-
-On left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) >![alt text](/images/SQuirreL-Tutorial/04.png)  > ![alt text](/images/SQuirreL-Tutorial/05.png)  > ![alt text](/images/SQuirreL-Tutorial/06.png)
-
-And locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
-
-Configure these parameters:
-
-* Put a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
-* Example URL ![alt text](/images/SQuirreL-Tutorial/09.png)
-
-  jdbc:kylin://172.17.0.2:7070/learn_kylin
-* Put Class Name: ![alt text](/images/SQuirreL-Tutorial/10.png)
-	Tip: if auto-complete doesn't work, type: org.apache.kylin.jdbc.Driver 
-	
-Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
-
-## Add Aliases
-
-On the left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  > ![alt text](/images/SQuirreL-Tutorial/13.png) : (default login/password: ADMIN / KYLIN)
-
-  ![](/images/SQuirreL-Tutorial/14.png)
-
-
-And the connection launches automatically:
-
-  ![](/images/SQuirreL-Tutorial/15.png)
-
-
-## Connect and Execute
-
-The startup window when connected:
-
-  ![](/images/SQuirreL-Tutorial/16.png)
-
-
-Choose the SQL tab and write a query (we use Kylin's sample cube):
-
-  ![](/images/SQuirreL-Tutorial/17.png)
-
-
-```
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales group by part_dt 
-order by part_dt
-```
-
-Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
-
-  ![](/images/SQuirreL-Tutorial/19.png)
-
-
-And it works!
-
-## Tips:
-
-SQuirreL isn't the most stable SQL client, but it is very flexible and provides a lot of info; it can be used for PoCs and for checking connectivity issues.
-
-List of tables: 
-
-  ![](/images/SQuirreL-Tutorial/21.png)
-
-
-List of columns of table:
-
-  ![](/images/SQuirreL-Tutorial/22.png)
-
-
-List of columns of a query:
-
-  ![](/images/SQuirreL-Tutorial/23.png)
-
-
-Export the result of queries:
-
-  ![](/images/SQuirreL-Tutorial/24.png)
-
-
- Info about query execution time:
-
-  ![](/images/SQuirreL-Tutorial/25.png)
diff --git a/website/_docs16/tutorial/tableau.cn.md b/website/_docs16/tutorial/tableau.cn.md
deleted file mode 100644
index fdbd8eb..0000000
--- a/website/_docs16/tutorial/tableau.cn.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-layout: docs16-cn
-title:  Tableau Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/tableau.html
-version: v1.2
-since: v0.7.1
----
-
-> There are some limitations of the Kylin ODBC driver with Tableau; please read this instruction carefully before you try it.
-> * Only the "managed" analysis path is supported; the Kylin engine will raise errors for unexpected dimensions or measures
-> * Please always select the fact table first, then add the lookup tables with the correct join conditions (the join types already defined in the cube)
-> * Do not try to join between fact tables or between lookup tables;
-> * You can try to use high-cardinality dimensions like seller id in Tableau filters, but the engine will only return a limited number of seller ids in the filter for now.
-> 
-> For more details or any questions, please contact the Kylin team: `kylinolap@gmail.com`
-
-
-### For Tableau 9.x users
-Please refer to the [Tableau 9 Tutorial](./tableau_91.html) for a more detailed guide.
-
-### Step 1. Install the Kylin ODBC driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to the Kylin server
-> We recommend using Connect Using Driver instead of Using DSN.
-
-Connect Using Driver: select "Other Database(ODBC)" in the left panel and "KylinODBCDriver" in the pop-up window.
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![](/images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects that you have permission to access. For details about permissions, please refer to the [Kylin Cube Permission Grant Tutorial](https://github.com/KylinOLAP/Kylin/wiki/Kylin-Cube-Permission-Grant-Tutorial). Then choose the project you want to connect to in the drop-down list.
-
-![](/images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![](/images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using a single table or multiple tables
-> Limitations
->    * The fact table must be selected first
->    * Selecting from a lookup table only is not supported
->    * The join conditions must match the cube definition
-
-**Select the fact table**
-
-Select `Multiple Tables`.
-
-![](/images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select the lookup tables**
-
-Click `Add Table...` to add a lookup table.
-
-![](/images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully.
-
-![](/images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the lookup tables have been added properly. Give the connection a name for use in Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Use Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option.
-
-![](/images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add additional lookup tables**
-
-Click `Data` in the top menu bar and select `Edit Tables...` to update the lookup table information.
-
-![](/images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using customized SQL
-Using customized SQL resembles using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and follow the same instructions as above.
-
-![](/images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
-Click `Server` in the top menu bar and select `Publish Workbook...`.
-
-![](/images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in to your Tableau Server and prepare to publish.
-
-![](/images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you're using Connect Using Driver instead of a DSN connection, you'll also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
-
-![](/images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide table names in Tableau
-
-    * Tableau displays columns grouped by source table name, but users may want to organize columns in a different arrangement. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![](/images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs16/tutorial/tableau.md b/website/_docs16/tutorial/tableau.md
deleted file mode 100644
index 0d9e38c..0000000
--- a/website/_docs16/tutorial/tableau.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-layout: docs16
-title:  Tableau 8
-categories: tutorial
-permalink: /docs16/tutorial/tableau.html
----
-
-> There are some limitations of the Kylin ODBC driver with Tableau; please read this instruction carefully before you try it.
-> 
-> * Only support "managed" analysis path, Kylin engine will raise exception for unexpected dimension or metric
-> * Please always select Fact Table first, then add lookup tables with correct join condition (defined join type in cube)
-> * Do not try to join between fact tables or lookup tables;
-> * You can try to use high cardinality dimensions like seller id as Tableau Filter, but the engine will only return limited seller id in Tableau's filter now.
-
-### For Tableau 9.x User
-Please refer to the [Tableau 9.x Tutorial](./tableau_91.html) for a detailed guide.
-
-### Step 1. Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to Kylin Server
-> We recommend using Connect Using Driver instead of Using DSN.
-
-Connect Using Driver: Select "Other Database(ODBC)" in the left panel and choose KylinODBCDriver in the pop-up window. 
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects that you have permission to access. See details about permission in [Kylin Cube Permission Grant Tutorial](./acl.html). Then choose the project you want to connect in the drop down list. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using Single Table or Multiple Tables
-> Limitation
-> 
->    * The fact table must be selected first
->    * Selecting from a lookup table only is not supported
->    * The join conditions must match the cube definition
-
-**Select Fact Table**
-
-Select `Multiple Tables`.
-
-![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select Look-up Table**
-
-Click `Add Table...` to add a look-up table. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Using Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add additional look-up Tables**
-
-Click `Data` in the top menu bar, select `Edit Tables...` to update the look-up table information.
-
-![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using Customized SQL
-Using customized SQL resembles using a single table/multiple tables, except that you just need to paste your SQL in the `Custom SQL` tab and follow the same instructions as above.
-
-![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
-Click `Server` in the top menu bar, select `Publish Workbook...`. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in your Tableau Server and prepare to publish. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you're using Driver Connect instead of DSN connect, you'll additionally need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
-
-![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide Table name in Tableau
-
-    * Tableau displays columns grouped by source table name, but users may want to organize columns with a different structure. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs16/tutorial/tableau_91.cn.md b/website/_docs16/tutorial/tableau_91.cn.md
deleted file mode 100644
index fddc464..0000000
--- a/website/_docs16/tutorial/tableau_91.cn.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-layout: docs16-cn
-title:  Tableau 9 Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/tableau_91.html
-version: v1.2
-since: v1.2
----
-
-Tableau 9 has been released for a while, and many users in the community hope Apache Kylin can further support this version. With the updated Kylin ODBC driver, you can now use Tableau 9 to interact with the Kylin service.
-
-
-### For Tableau 8.x users
-Please refer to the [Tableau Tutorial](./tableau.html) for a more detailed guide.
-
-### Install ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html); please make sure to download and install Kylin ODBC Driver __v1.5__. If an earlier version is installed, uninstall it first, then install again. 
-
-### Connect to Kylin Server
-Create a new data connection in Tableau 9.1: click `Other Database(ODBC)` in the left panel, and select `KylinODBCDriver` in the pop-up window 
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Enter your server address, port, project, username and password; clicking `Connect` gets the list of all projects you have permission to access. For details about permissions, refer to the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping the data model
-In the left panel, select database `defaultCatalog` and click the "Search" button; all queryable tables get listed. Drag tables to the right region to add them as the data source, and create the join relationships between the tables 
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two types of data source connection in Tableau 9.1; choose the `Live` option to make sure the 'Connect Live' mode is used 
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use customized SQL, click `New Custom SQL` in the left panel and type the SQL statement in the pop-up dialog; it is then added as a data source 
-![](/images/tutorial/odbc/tableau_91/5.png)
-
-### Visualization
-Now you can go further and do visual analysis with Tableau:
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-If you want to publish to a Tableau Server, click the `Server` menu and select `Publish Workbook`
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- Please refer to the [Tableau Tutorial](./tableau.html) for more information
-- You can also refer to this guide shared by community user Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs16/tutorial/tableau_91.md b/website/_docs16/tutorial/tableau_91.md
deleted file mode 100644
index 39d70ff..0000000
--- a/website/_docs16/tutorial/tableau_91.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-layout: docs16
-title:  Tableau 9
-categories: tutorial
-permalink: /docs16/tutorial/tableau_91.html
----
-
-Tableau 9.x has been released for a while, and many users are asking about support for this version with Apache Kylin. With the updated Kylin ODBC driver, users can now interact with the Kylin service through Tableau 9.x.
-
-
-### For Tableau 8.x User
-Please refer to the [Kylin and Tableau Tutorial](./tableau.html) for a detailed guide.
-
-### Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.5__. If you already installed ODBC Driver in your system, please uninstall it first. 
-
-### Connect to Kylin Server
-Connect Using Driver: Start Tableau 9.1 desktop, click `Other Database(ODBC)` in the left panel and choose KylinODBCDriver in the pop-up window. 
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Provide your server location, credentials and project. Clicking the `Connect` button, you get the list of projects that you have permission to access; see details in the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping Data Model
-In the left panel, select `defaultCatalog` as the database and click the `Search` button in the table search box; all tables get listed. Drag and drop tables to the right region to make them the data source. Make sure the JOINs are configured correctly.
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two types of `Connection`; choose the `Live` option to make sure you are using Connect Live mode.
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use customized SQL, click `New Custom SQL` in left panel and type SQL statement in pop-up dialog.
-![](/images/tutorial/odbc/tableau_91/5.png)
-
-### Visualization
-Now you can start to enjoy analyzing with Tableau 9.1.
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-If you want to publish a local dashboard to a Tableau Server, just expand the `Server` menu and select `Publish Workbook`.
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- You can refer to the [Kylin and Tableau Tutorial](./tableau.html) for more details.
-- Here is a good tutorial written by Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs16/tutorial/web.cn.md b/website/_docs16/tutorial/web.cn.md
deleted file mode 100644
index 7f5e82c..0000000
--- a/website/_docs16/tutorial/web.cn.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Web Interface Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/web.html
-version: v1.2
----
-
-> **Supported Browsers**
-> 
-> Windows: Google Chrome, FireFox
-> 
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![]( /images/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Hive tables available in Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, since it is a pre-built OLAP (MOLAP) system so far. To make tables available in Kylin, use the "Sync" function to conveniently sync them up from Hive.
-
-![]( /images/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-calculated datasets from star schema Hive tables; this is the web page for users to explore and manage all cubes. Go to the `Cubes` page from the menu bar, and all cubes available in the system will be listed.
-
-![]( /images/Kylin-Web-Tutorial/3 cubes.png)
-
-To explore more details about a cube
-
-* Form view:
-
-   ![]( /images/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL view (the Hive query that reads data to generate the cube):
-
-   ![]( /images/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Visualization (shows the star schema behind this cube):
-
-   ![]( /images/Kylin-Web-Tutorial/6 visualization.png)
-
-* Access (grant user/role permissions; in the beta version the grant operation is only open to administrators):
-
-   ![]( /images/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and run SQL on the web
-Kylin's web version provides a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set with the pivot analysis and visualization described in section 5.
-
-> **Query limits**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. To avoid huge network traffic from the server to the client, the scan range threshold is set to 1,000,000 in the beta version.
-> 
-> 3. In the beta version, SQL whose data cannot be found in a cube will not be redirected to Hive
-
-Go to the "Query" page from the menu bar:
-
-![]( /images/Kylin-Web-Tutorial/8 query.png)
-
-* Source tables:
-
-   Browse the currently available tables (same structure and metadata as in Hive):
-  
-   ![]( /images/Kylin-Web-Tutorial/9 query-table.png)
-
-* New query:
-
-   You can write and run your query and explore the result. Here is a sample query for your reference:
-
-   ![]( /images/Kylin-Web-Tutorial/10 query-result.png)
-
-* Saved queries:
-
-   Saved queries are associated with your user account, so you can access them from different browsers and even machines.
-   Click "Save" in the result area; a popup asks for a name and description to save the current query:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to explore all your saved queries; you can directly resubmit one to run it again, or remove it:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query-2.png)
-
-* Query history:
-
-   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history is lost if you clean the browser cache. Click the "Query History" tab; you can directly resubmit any query to run it again.
-
-## 5. Pivot Analysis and Visualization
-Kylin's web version provides a simple pivot and visualization analysis tool for users to explore their query results:
-
-* General information:
-
-   When a query runs successfully, it presents a success indicator and the name of the cube that was hit.
-   It also shows how long the query ran in the backend engine (not including network traffic from the Kylin server to the browser):
-
-   ![]( /images/Kylin-Web-Tutorial/12 general.png)
-
-* Query result:
-
-   It is easy to sort on a column.
-
-   ![]( /images/Kylin-Web-Tutorial/13 results.png)
-
-* Export to a CSV file
-
-   Click the "Export" button to save the current result as a CSV file.
-
-* Pivot table:
-
-   Drag and drop one or more columns into the header; the result will be grouped by those columns' values:
-
-   ![]( /images/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   The result set can also be conveniently shown with different charts under "Visualization":
-
-   Note: the line chart is only available when at least one dimension from the Hive table has a real "Date" data type column.
-
-   * Bar chart:
-
-   ![]( /images/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie chart:
-
-   ![]( /images/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line chart:
-
-   ![]( /images/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/_docs16/tutorial/web.md b/website/_docs16/tutorial/web.md
deleted file mode 100644
index 314ff48..0000000
--- a/website/_docs16/tutorial/web.md
+++ /dev/null
@@ -1,123 +0,0 @@
----
-layout: docs16
-title:  Kylin Web Interface
-categories: tutorial
-permalink: /docs16/tutorial/web.html
----
-
-> **Supported Browsers**
-> Windows: Google Chrome, FireFox
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Sync Hive Table into Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, since it is a pre-built OLAP (MOLAP) system so far. To make a table available in Kylin, it is easy to use the "Sync" function to sync up tables from Hive.
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-calculated datasets from star schema tables. This is the web interface for users to explore and manage all cubes. Go to the `Model` menu; it will list all cubes available in the system:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/3 cubes.png)
-
-To explore more detail about the Cube
-
-* Form View:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL View (Hive Query to read data to generate the cube):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Access (Grant user/role privileges, grant operation only open to Admin):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and Execute SQL on web
-Kylin's web UI offers a simple query tool for users to run SQL against existing cubes, verify results, and explore the result set using the pivot analysis and visualization described in section 5.
-
-> **Query Limit**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. SQL will not be redirected to Hive
-
-Go to "Insight" menu:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/8 query.png)
-
-* Source Tables:
-
-   Browse the currently available tables (same structure and metadata as in Hive):
-  
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/9 query-table.png)
-
-* New Query:
-
-   You can write and execute your query and explore the result.
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/10 query-result.png)
-
-* Saved Queries (only works after enabling LDAP security):
-
-   Saved queries are associated with your user account, so you can access them from different browsers and even machines.
-   Click "Save" in the result area; a popup asks for a name and description to save the current query:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to browser all your saved queries, you could direct submit it or remove it.
-
-* Query History:
-
-   Only the current user's query history is kept in the current browser; this requires cookies to be enabled, and the history is lost if you clean up the browser's cache. Click the "Query History" tab; you can directly resubmit any query to execute it again.
-
-## 5. Pivot Analysis and Visualization
-There's a simple pivot and visualization analysis tool in Kylin's web UI for users to explore their query results:
-
-* General Information:
-
-   When the query executes successfully, it presents a success indicator and also the name of the cube that was hit. 
-   It also shows how long the query ran in the backend engine (not covering the network traffic from the Kylin server to the browser):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/12 general.png)
-
-* Query Result:
-
-   It's easy to sort on a column.
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/13 results.png)
-
-* Export to CSV File
-
-   Click "Export" button to save current result as CSV file.
-
-* Pivot Table:
-
-   Drag and drop one or more columns into the header; the result will be grouped by those columns' values:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   Also, the result set can easily be shown with different charts under "Visualization":
-
-   Note: the line chart is only available when there is at least one dimension column with a real "Date" data type from the Hive table.
-
-   * Bar Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line Chart
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/archive/docs16.tar.gz b/website/archive/docs16.tar.gz
new file mode 100644
index 0000000..4862a79
Binary files /dev/null and b/website/archive/docs16.tar.gz differ
diff --git a/website/download/index.cn.md b/website/download/index.cn.md
index d7e88f9..bd4dc42 100644
--- a/website/download/index.cn.md
+++ b/website/download/index.cn.md
@@ -5,6 +5,18 @@ title: 下载
 
 You can verify the download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/kylin/KEYS).
 
+#### v2.5.0
+- This is a major release after 2.4, with 96 bug fixes and enhancements. For the detailed list, please check the release notes. 
+- [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
+- Source download: [apache-kylin-2.5.0-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.sha256)\]
+- Binary download:
+  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.5.0-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.sha256)\]
+  - for CDH 5.7+ - [apache-kylin-2.5.0-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.sha256)\]
+
+- Hadoop 3 API binary packages (beta):
+  - for Hadoop 3.1 + HBase 2.0 (includes Hortonworks HDP 3.0) - [apache-kylin-2.5.0-bin-hadoop3.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.sha256)\]
+  - for CDH 6.0 - [apache-kylin-2.5.0-bin-cdh60.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.sha256)\]
+
 #### v2.4.1
 - This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. For the detailed list, please check the release notes. 
 - [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
diff --git a/website/download/index.md b/website/download/index.md
index 7774e91..5fc0efa 100644
--- a/website/download/index.md
+++ b/website/download/index.md
@@ -6,6 +6,17 @@ permalink: /download/index.html
 
 You can verify the download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/kylin/KEYS).
 
+#### v2.5.0
+- This is a major release after 2.4, with 96 bug fixes and enhancements. For the detailed list, please check the release notes. 
+- [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
+- Source download: [apache-kylin-2.5.0-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.sha256)\]
+- Binary download:
+  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.5.0-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.sha256)\]
+  - for CDH 5.7+ - [apache-kylin-2.5.0-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.sha256)\]
+
+- Hadoop 3 API binary packages (beta):
+  - for Hadoop 3.1 + HBase 2.0 (includes Hortonworks HDP 3.0) - [apache-kylin-2.5.0-bin-hadoop3.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.sha256)\]
+  - for CDH 6.0 - [apache-kylin-2.5.0-bin-cdh60.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.sha256)\]
 
 #### v2.4.1
 - This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. For the detailed list, please check the release notes. 
@@ -15,13 +26,7 @@ You can verify the download by following these [procedures](https://www.apache.o
   - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.4.1-bin-hbase1x.tar.gz](http://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz.sha256)\]
   - for CDH 5.7+ - [apache-kylin-2.4.1-bin-cdh57.tar.gz](http://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz.sha256)\]
 
-#### v2.3.2
-- This is a bug fix release after 2.3.1, with 12 bug fixes and enhancements. For the detail list please check release notes. 
-- [Release notes](/docs23/release_notes.html) and [upgrade guide](/docs23/howto/howto_upgrade.html)
-- Source download: [apache-kylin-2.3.2-src.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip.asc)\] \[[sha1](https://www.apache.org/dist/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip.sha1)\]
-- Binary download:
-  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.3.2-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz.asc)\] \[[md5](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz.md5)\]
-  - for CDH 5.7+ - [apache-kylin-2.3.2-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz.asc)\] \[[md5](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz.md5)\]
+
 
 #### JDBC Driver