Posted to commits@kylin.apache.org by ni...@apache.org on 2020/02/21 08:43:58 UTC

[kylin] branch document updated: Update doc

This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new 7838bfb  Update doc
7838bfb is described below

commit 7838bfb9abae486de2012d86caa3a098c61cf6a9
Author: nichunen <ni...@apache.org>
AuthorDate: Fri Feb 21 16:22:13 2020 +0800

    Update doc
---
 website/_docs/index.cn.md                          |    3 +-
 website/_docs/index.md                             |    3 -
 website/_docs21/gettingstarted/best_practices.md   |   27 -
 website/_docs21/gettingstarted/concepts.md         |   64 -
 website/_docs21/gettingstarted/events.md           |   24 -
 website/_docs21/gettingstarted/faq.md              |  153 --
 website/_docs21/gettingstarted/terminology.md      |   25 -
 website/_docs21/howto/howto_backup_metadata.cn.md  |   59 -
 website/_docs21/howto/howto_backup_metadata.md     |   60 -
 .../howto/howto_build_cube_with_restapi.cn.md      |   54 -
 .../_docs21/howto/howto_build_cube_with_restapi.md |   53 -
 website/_docs21/howto/howto_cleanup_storage.cn.md  |   21 -
 website/_docs21/howto/howto_cleanup_storage.md     |   22 -
 .../_docs21/howto/howto_enable_zookeeper_acl.md    |   20 -
 .../howto/howto_install_ranger_kylin_plugin.md     |    8 -
 website/_docs21/howto/howto_jdbc.cn.md             |   92 -
 website/_docs21/howto/howto_jdbc.md                |   92 -
 website/_docs21/howto/howto_ldap_and_sso.md        |  128 --
 website/_docs21/howto/howto_optimize_build.cn.md   |  166 --
 website/_docs21/howto/howto_optimize_build.md      |  190 ---
 website/_docs21/howto/howto_optimize_cubes.md      |  212 ---
 website/_docs21/howto/howto_setup_systemcube.md    |  437 -----
 website/_docs21/howto/howto_update_coprocessor.md  |   14 -
 website/_docs21/howto/howto_upgrade.md             |  105 --
 website/_docs21/howto/howto_use_beeline.md         |   14 -
 website/_docs21/howto/howto_use_cube_planner.md    |  133 --
 website/_docs21/howto/howto_use_dashboard.md       |  110 --
 .../howto/howto_use_distributed_scheduler.md       |   16 -
 website/_docs21/howto/howto_use_restapi.md         | 1206 -------------
 website/_docs21/howto/howto_use_restapi_in_js.md   |   46 -
 website/_docs21/index.cn.md                        |   23 -
 website/_docs21/index.md                           |   62 -
 website/_docs21/install/advance_settings.md        |  102 --
 website/_docs21/install/hadoop_evn.md              |   36 -
 website/_docs21/install/index.cn.md                |   46 -
 website/_docs21/install/index.md                   |   35 -
 website/_docs21/install/kylin_aws_emr.md           |  167 --
 website/_docs21/install/kylin_cluster.md           |   32 -
 website/_docs21/install/kylin_docker.md            |   10 -
 website/_docs21/install/manual_install_guide.cn.md |   29 -
 website/_docs21/release_notes.md                   | 1792 --------------------
 website/_docs21/tutorial/Qlik.cn.md                |  153 --
 website/_docs21/tutorial/Qlik.md                   |  156 --
 website/_docs21/tutorial/acl.cn.md                 |   35 -
 website/_docs21/tutorial/acl.md                    |   37 -
 website/_docs21/tutorial/create_cube.cn.md         |  129 --
 website/_docs21/tutorial/create_cube.md            |  198 ---
 website/_docs21/tutorial/cube_build_job.cn.md      |   66 -
 website/_docs21/tutorial/cube_build_job.md         |   67 -
 website/_docs21/tutorial/cube_build_performance.md |  266 ---
 website/_docs21/tutorial/cube_spark.md             |  169 --
 website/_docs21/tutorial/cube_streaming.md         |  219 ---
 website/_docs21/tutorial/flink.md                  |  249 ---
 website/_docs21/tutorial/hue.md                    |  246 ---
 website/_docs21/tutorial/kylin_client_tool.cn.md   |  121 --
 website/_docs21/tutorial/kylin_client_tool.md      |  125 --
 website/_docs21/tutorial/kylin_sample.md           |   34 -
 website/_docs21/tutorial/microstrategy.md          |   84 -
 website/_docs21/tutorial/odbc.cn.md                |   34 -
 website/_docs21/tutorial/odbc.md                   |   49 -
 website/_docs21/tutorial/powerbi.cn.md             |   56 -
 website/_docs21/tutorial/powerbi.md                |   54 -
 website/_docs21/tutorial/project_level_acl.md      |   63 -
 website/_docs21/tutorial/query_pushdown.cn.md      |   50 -
 website/_docs21/tutorial/query_pushdown.md         |   61 -
 website/_docs21/tutorial/squirrel.md               |  112 --
 website/_docs21/tutorial/tableau.cn.md             |  116 --
 website/_docs21/tutorial/tableau.md                |  113 --
 website/_docs21/tutorial/tableau_91.cn.md          |   51 -
 website/_docs21/tutorial/tableau_91.md             |   50 -
 website/_docs21/tutorial/web.cn.md                 |  134 --
 website/_docs21/tutorial/web.md                    |  123 --
 website/_docs23/index.cn.md                        |    2 -
 website/_docs23/index.md                           |    2 -
 website/_docs24/index.cn.md                        |    1 -
 website/_docs24/index.md                           |    1 -
 website/_docs30/index.cn.md                        |    2 -
 website/_docs30/index.md                           |    3 -
 website/_docs31/index.cn.md                        |    2 +-
 website/_docs31/index.md                           |    2 +-
 website/archive/docs21.tar.gz                      |  Bin 0 -> 139241 bytes
 website/download/index.md                          |    4 +-
 82 files changed, 5 insertions(+), 9325 deletions(-)

diff --git a/website/_docs/index.cn.md b/website/_docs/index.cn.md
index 85e4cd4..71c8347 100644
--- a/website/_docs/index.cn.md
+++ b/website/_docs/index.cn.md
@@ -12,8 +12,7 @@ permalink: /cn/docs/index.html
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documents of other versions: 
-* [v3.1 document](/docs31)
-* [v3.0 document](/docs30)
+* [v3.0-alpha document](/docs30)
 * [v2.4 document](/cn/docs24/)
 * [v2.3 document](/cn/docs23/)
 * [v2.1 and v2.2 document](/cn/docs21/)
diff --git a/website/_docs/index.md b/website/_docs/index.md
index a95417a..c70c10e 100644
--- a/website/_docs/index.md
+++ b/website/_docs/index.md
@@ -12,12 +12,9 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 This is the document for the latest released version (v3.0). Documents of other versions: 
-* [v3.1 document](/docs31)
-* [v3.0 document](/docs30)
 * [v3.0-alpha document](/docs30)
 * [v2.4 document](/docs24)
 * [v2.3 document](/docs23)
-* [v2.1 and v2.2 document](/docs21/)
 * [Archived](/archive/)
 
 Installation & Setup
diff --git a/website/_docs21/gettingstarted/best_practices.md b/website/_docs21/gettingstarted/best_practices.md
deleted file mode 100644
index 5ff501a..0000000
--- a/website/_docs21/gettingstarted/best_practices.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-layout: docs21
-title:  "Community Best Practices"
-categories: gettingstarted
-permalink: /docs21/gettingstarted/best_practices.html
-since: v1.3.x
----
-
-A list of articles about Kylin best practices contributed by the community. Some of them are from the Chinese community. Many thanks!
-
-* [The Practice of Apache Kylin at Baidu Maps](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
-
-* [Apache Kylin: an OLAP Power Tool for the Big Data Era](http://www.bitstech.net/2016/01/04/kylin-olap/) (NetEase case study)
-
-* [The Practice of Apache Kylin at Yunhai](http://www.csdn.net/article/2015-11-27/2826343) (JD.com case study)
-
-* [Integrating Kylin, Mondrian and Saiku](http://tech.youzan.com/kylin-mondrian-saiku/) (Youzan case study)
-
-* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
-
-* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
-
-* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file
diff --git a/website/_docs21/gettingstarted/concepts.md b/website/_docs21/gettingstarted/concepts.md
deleted file mode 100644
index 0760438..0000000
--- a/website/_docs21/gettingstarted/concepts.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: docs21
-title:  "Technical Concepts"
-categories: gettingstarted
-permalink: /docs21/gettingstarted/concepts.html
-since: v1.2
----
- 
-Here are some basic technical concepts used in Apache Kylin; please check them for reference.
-For domain terminology, please refer to: [Terminology](terminology.html)
-
-## CUBE
-* __Table__ - This is the definition of Hive tables as the source of cubes; tables must be synced before building cubes.
-![](/images/docs/concepts/DataSource.png)
-
-* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and the filter condition.
-![](/images/docs/concepts/DataModel.png)
-
-* __Cube Descriptor__ - This describes the definition and settings of a cube instance, defining which data model to use, what dimensions and measures to have, how to partition into segments, how to handle auto-merge, etc.
-![](/images/docs/concepts/CubeDesc.png)
-
-* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor, and consists of one or more cube segments according to the partition settings.
-![](/images/docs/concepts/CubeInstance.png)
-
-* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments with different date periods.
-![](/images/docs/concepts/Partition.png)
-
-* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. Once data changes in a specified data period, we can refresh the related segments to avoid rebuilding the whole cube.
-![](/images/docs/concepts/CubeSegment.png)
-
-* __Aggregation Group__ - Each aggregation group is a subset of dimensions and builds cuboids from the combinations inside it. It aims at pruning cuboids for optimization.
-![](/images/docs/concepts/AggregationGroup.png)
-
-## DIMENSION & MEASURE
-* __Mandatory__ - This dimension type is used for cuboid pruning: if a dimension is specified as "mandatory", then combinations without that dimension are pruned.
-* __Hierarchy__ - This dimension type is used for cuboid pruning: if dimensions A, B, C form a "hierarchy" relation, then only combinations with A, AB or ABC shall remain. 
-* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK of the fact table. Those dimensions are DERIVED and don't participate in cuboid generation.
-![](/images/docs/concepts/Dimension.png)
-
-* __Count Distinct (HyperLogLog)__ - Computing COUNT DISTINCT exactly on the fly is expensive, so an approximate algorithm, [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), is introduced to keep the error rate at a low level. 
-* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int and bigint are supported.
-* __Top N__ - For example, with this measure type, users can easily get the specified number of top sellers/buyers, etc. (see the sketch below)
-![](/images/docs/concepts/Measure.png)
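-
-For instance, a Top-N measure defined on a seller dimension with a sum of price can pre-answer queries of the following shape (a sketch; table and column names are illustrative):
-
-{% highlight Groff markup %}
-SELECT seller_id, SUM(price)
-FROM test_kylin_fact
-GROUP BY seller_id
-ORDER BY SUM(price) DESC
-LIMIT 100
-{% endhighlight %}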
-
-## CUBE ACTIONS
-* __BUILD__ - Given an interval of the partition column, this action builds a new cube segment.
-* __REFRESH__ - This action rebuilds a cube segment for a given partition period; it is used when the source table has changed.
-* __MERGE__ - This action merges multiple continuous cube segments into a single one. This can be automated with the auto-merge settings in the cube descriptor.
-* __PURGE__ - Clear the segments under a cube instance. This only updates metadata and won't delete cube data from HBase.
-![](/images/docs/concepts/CubeAction.png)
-
-## JOB STATUS
-* __NEW__ - This denotes a job has just been created.
-* __PENDING__ - This denotes a job is paused by the job scheduler and waiting for resources.
-* __RUNNING__ - This denotes a job is in progress.
-* __FINISHED__ - This denotes a job has successfully finished.
-* __ERROR__ - This denotes a job was aborted with errors.
-* __DISCARDED__ - This denotes a job was cancelled by the end user.
-![](/images/docs/concepts/Job.png)
-
-## JOB ACTION
-* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
-* __DISCARD__ - No matter what status a job is in, users can end it and release resources with the DISCARD action.
-![](/images/docs/concepts/JobAction.png)
diff --git a/website/_docs21/gettingstarted/events.md b/website/_docs21/gettingstarted/events.md
deleted file mode 100644
index ed1ba6b..0000000
--- a/website/_docs21/gettingstarted/events.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-layout: docs21
-title:  "Events and Conferences"
-categories: gettingstarted
-permalink: /docs21/gettingstarted/events.html
----
-
-__Conferences__
-
-* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
-* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
-* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015  [...]
-* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
-* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
-* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
-* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
-* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
-* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
-* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
-
-__Meetup__
-
-* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
-
diff --git a/website/_docs21/gettingstarted/faq.md b/website/_docs21/gettingstarted/faq.md
deleted file mode 100644
index fd6f02a..0000000
--- a/website/_docs21/gettingstarted/faq.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-layout: docs21
-title:  "FAQ"
-categories: gettingstarted
-permalink: /docs21/gettingstarted/faq.html
-since: v0.6.x
----
-
-#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
-
-  * Kylin needs many dependent jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but Kylin doesn't ship them. It will seek these jars from your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The found jars' paths will be appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this). But in some Hadoop distributions (like AWS EMR 5.0), the `hbase` shell doesn't keep the original `HBASE_CLASSPA [...]
-
-  * To fix this, find the hbase shell script (in the hbase/bin folder), search for *HBASE_CLASSPATH*, and check whether it overwrites the value, like:
-
-  {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
-  {% endhighlight %}
-
-  * If so, change it to keep the original value, like:
-
-  {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
-  {% endhighlight %}
-
-#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
-
-  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); usually a dimension's cardinality is less than a million, so the "Dict" encoding is good to use. As the dictionary needs to be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin adds a check on this. If you see this error, it is suggested to identify the UHC dimension first and then re-evaluate the de [...]
-
-#### 3. Build cube failed due to "error check status"
-
-  * Check if `kylin.log` contains *yarn.resourcemanager.webapp.address:http://0.0.0.0:8088* and *java.net.ConnectException: Connection refused*
-  * If yes, the problem is that the address of the resource manager is not available in yarn-site.xml
-  * A workaround is to update `kylin.properties`, setting `kylin.job.yarn.app.rest.check.status.url=http://YOUR_RM_NODE:8088/ws/v1/cluster/apps/${job_id}?anonymous=true`
-
-#### 4. HBase cannot get master address from ZooKeeper on Hortonworks Sandbox
-   
-  * By default, Hortonworks disables HBase; you'll have to start HBase on the Ambari homepage first.
-
-#### 5. Map Reduce Job information cannot display on Hortonworks Sandbox
-   
-  * Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
-
-#### 6. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
-
-  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
-
-  {% highlight Groff markup %}
-  I was able to deploy Kylin with following option in POM.
-  <hadoop2.version>2.5.0</hadoop2.version>
-  <yarn.version>2.5.0</yarn.version>
-  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
-  <zookeeper.version>3.4.5</zookeeper.version>
-  <hive.version>0.13.1</hive.version>
-  My Cluster is running on Cloudera Distribution CDH 5.2.0.
-  {% endhighlight %}
-
-
-#### 7. SUM(field) returns a negative result while all the numbers in this field are > 0
-  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type of "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive (see the sketch below), and then sync the table schema to Kylin (the cube doesn't need a rebuild). Keep in mind to always declare BIGINT in Hive for an integer column which  [...]
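-
-  A sketch of the Hive-side fix (table and column names are illustrative):
-
-  {% highlight Groff markup %}
-  hive -e "ALTER TABLE my_fact_table CHANGE price price BIGINT;"
-  {% endhighlight %}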
-
-#### 8. Why does Kylin need to extract the distinct columns from the fact table before building the cube?
-  * Kylin uses a dictionary to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values of each column.
-
-#### 9. Why does Kylin calculate the Hive table cardinality?
-  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer to build and the slower to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
-
-#### 10. How to add new user or change the default password?
-  * Kylin's web security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
-
-   {% highlight Groff markup %}
-   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-   {% endhighlight %}
-
-  * The password hashes of the pre-defined test users can be found in the "sandbox,testing" profile part; to change the default password, you need to generate a new hash (see the sketch below) and then update it here; please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
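-
-  A minimal sketch that generates such a hash with Spring Security's BCryptPasswordEncoder (the class name and setup here are illustrative; the Spring Security jars shipped under Kylin's WEB-INF/lib can be put on the classpath):
-
-  {% highlight Groff markup %}
-  import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
-
-  public class GenBCryptHash {
-      public static void main(String[] args) {
-          // Print the BCrypt hash of the password passed as the first argument
-          System.out.println(new BCryptPasswordEncoder().encode(args[0]));
-      }
-  }
-  {% endhighlight %}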
-
-#### 11. Using sub-query for un-supported SQL
-
-{% highlight Groff markup %}
-Original SQL:
-select fact.slr_sgmt,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from ih_daily_fact fact
-inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-group by fact.slr_sgmt
-{% endhighlight %}
-
-{% highlight Groff markup %}
-Using sub-query
-select a.slr_sgmt,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from (
-    select fact.slr_sgmt as slr_sgmt,
-    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
-    sum(gmv) as gmv36,
-    sum(gmv) as gmv35
-    from ih_daily_fact fact
-    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
-) a
-group by a.slr_sgmt
-{% endhighlight %}
-
-#### 12. Build Kylin meets NPM errors (users in mainland China, please pay special attention to this issue)
-
-  * Please add a proxy for your NPM:  
-  `npm config set proxy http://YOUR_PROXY_IP`
-
-  * Please update your local NPM repository to use a mirror of npmjs.org, like the Taobao NPM mirror:  
-  [http://npm.taobao.org](http://npm.taobao.org)
-
-#### 13. Failed to run BuildCubeWithEngineTest, saying failed to connect to HBase while HBase is active
-  * Users may get this error when running the HBase client for the first time; please check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
-
-
-#### 14. How to update the default password for 'ADMIN'?
-  * By default, Kylin uses a simple, configuration-based user registry; the default administrator 'ADMIN' with password 'KYLIN' is hard-coded in `kylinSecurity.xml`. To modify the password, you first need to get the new password's encrypted value (with BCrypt), and then set it in `kylinSecurity.xml`. Here is a sample with the password 'ABCDE':
-  
-{% highlight Groff markup %}
-
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-
-java -classpath kylin-server-base-2.2.0.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:spring-security-core-4.2.3.RELEASE.jar:commons-codec-1.7.jar:commons-logging-1.1.3.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer BCrypt ABCDE
-
-BCrypt encrypted password is:
-$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu
-
-{% endhighlight %}
-
-Then you can set it into `kylinSecurity.xml`:
-
-{% highlight Groff markup %}
-
-vi ./tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-
-{% endhighlight %}
-
-Replace the original encrypted password with the new one: 
-{% highlight Groff markup %}
-
-        <bean class="org.springframework.security.core.userdetails.User" id="adminUser">
-            <constructor-arg value="ADMIN"/>
-            <constructor-arg
-                    value="$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu"/>
-            <constructor-arg ref="adminAuthorities"/>
-        </bean>
-        
-{% endhighlight %}
-
-Restart Kylin for the change to take effect. If you have multiple Kylin servers as a cluster, do the same on each instance. 
-
diff --git a/website/_docs21/gettingstarted/terminology.md b/website/_docs21/gettingstarted/terminology.md
deleted file mode 100644
index d65cba7..0000000
--- a/website/_docs21/gettingstarted/terminology.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: docs21
-title:  "Terminology"
-categories: gettingstarted
-permalink: /docs21/gettingstarted/terminology.html
-since: v0.5.x
----
- 
-
-Here are some domain terms used in Apache Kylin; please check them for reference.   
-They are basic knowledge of Apache Kylin, and they will also help you understand the related concepts and theory of data warehousing and business intelligence for analytics. 
-
-* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
-* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
-* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
-* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
-* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
-* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
-* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
-* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
-* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
-* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
-
-
-
diff --git a/website/_docs21/howto/howto_backup_metadata.cn.md b/website/_docs21/howto/howto_backup_metadata.cn.md
deleted file mode 100644
index c7eefb4..0000000
--- a/website/_docs21/howto/howto_backup_metadata.cn.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-layout: docs21-cn
-title:  Backup Metadata
-categories: howto
-permalink: /cn/docs21/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it rather than a normal file system. If you check the Kylin configuration file (kylin.properties) you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved in an htable called "kylin_metadata". You can scan the htable in the hbase shell to check it out.
-
-## Backup Metadata Store with binary package
-
-Sometimes you need to back up the Kylin metadata store from HBase to your disk file system. In such cases, assuming you're on the hadoop CLI (or sandbox) where Kylin is deployed, you can go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second.
-
-## Restore Metadata Store with binary package
-
-In case you find your metadata store messed up and want to restore a previous backup:
-
-First, reset the metadata store (this cleans up everything of the Kylin metadata store in HBase; make sure to back up first):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to the Kylin metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Backup/restore metadata in development env (available since 0.7.3)
-
-When developing/debugging Kylin, typically you have a dev machine with an IDE and a backend sandbox. Usually you write code and run test cases on the dev machine, but it is troublesome to put a binary package into the sandbox every time just to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine.
-
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find and clean them up:
-
-First, run a check; this is safe as it does not change anything:
-{% highlight Groff markup %}
-./bin/metastore.sh clean
-{% endhighlight %}
-
-The resources that will be dropped will be listed.
-
-Next, add the "--delete true" option to clean up those resources; before doing this, make sure you have backed up the metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true
-{% endhighlight %}
diff --git a/website/_docs21/howto/howto_backup_metadata.md b/website/_docs21/howto/howto_backup_metadata.md
deleted file mode 100644
index cd0c210..0000000
--- a/website/_docs21/howto/howto_backup_metadata.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-layout: docs21
-title:  Backup Metadata
-categories: howto
-permalink: /docs21/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties) you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved as an htable called `kylin_metadata`. You can scan the htable in the hbase shell to check it out.
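-
-For a quick sanity check in the HBase shell (`LIMIT` just caps the number of rows printed):
-
-{% highlight Groff markup %}
-hbase shell
-scan 'kylin_metadata', {LIMIT => 10}
-{% endhighlight %}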
-
-## Backup Metadata Store with binary package
-
-Sometimes you need to back up the Kylin metadata store from HBase to your disk file system.
-In such cases, assuming you're on the hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
-
-## Restore Metadata Store with binary package
-
-In case you find your metadata store messed up and you want to restore a previous backup:
-
-Firstly, reset the metadata store (this will clean everything of the Kylin metadata store in HBase; make sure to back up first):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Backup/restore metadata in development env (available since 0.7.3)
-
-When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases on the dev machine. It would be troublesome if you always had to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow the usage information and run it in your IDE.
-
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find and clean them up from the metadata store:
-
-Firstly, run a check; this is safe as it will not change anything:
-{% highlight Groff markup %}
-./bin/metastore.sh clean
-{% endhighlight %}
-
-The resources that will be dropped will be listed;
-
-Next, add the "--delete true" parameter to clean up those resources; before this, make sure you have made a backup of the metadata store;
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true
-{% endhighlight %}
diff --git a/website/_docs21/howto/howto_build_cube_with_restapi.cn.md b/website/_docs21/howto/howto_build_cube_with_restapi.cn.md
deleted file mode 100644
index f0a38d2..0000000
--- a/website/_docs21/howto/howto_build_cube_with_restapi.cn.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs21-cn
-title:  Build Cube with API
-categories: howto
-permalink: /cn/docs21/howto/howto_build_cube_with_restapi.html
----
-
-### 1. Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add the `Authorization` header to the first request for authentication.
-*   Or you can do a specific request: `POST http://localhost:7070/kylin/api/user/authentication`.
-*   Once authenticated, the client can issue subsequent requests with cookies.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2. Get details of the cube
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   Users can find the cube segment date ranges in the returned cube detail.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 3. Then submit a build job of the cube
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For the PUT request body details, please refer to the Build Cube API.
-    *   `startTime` and `endTime` should be UTC timestamps.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` for merging multiple existing segments into one bigger segment.
-*   This method returns a newly created job instance, whose uuid is the unique id of the job, used to track the job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
-
-### 4. Track the job status
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   The returned `job_status` represents the current status of the job.
-
-### 5. If the job got errors, you can resume it
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
diff --git a/website/_docs21/howto/howto_build_cube_with_restapi.md b/website/_docs21/howto/howto_build_cube_with_restapi.md
deleted file mode 100644
index 8808b7b..0000000
--- a/website/_docs21/howto/howto_build_cube_with_restapi.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs21
-title:  Build Cube with API
-categories: howto
-permalink: /docs21/howto/howto_build_cube_with_restapi.html
----
-
-### 1.	Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add the `Authorization` header to the first request for authentication
-*   Or you can do a specific request by `POST http://localhost:7070/kylin/api/user/authentication`
-*   Once authenticated, the client can issue subsequent requests with cookies.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-    
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2. Get details of the cube
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   Clients can find cube segment date ranges in the returned cube detail.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-### 3. Then submit a build job of the cube
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For PUT request body details please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
-    *   `startTime` and `endTime` should be UTC timestamps.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` for merging multiple existing segments into one bigger segment.
-*   This method will return a newly created job instance, whose uuid is the unique id of the job, used to track job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
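-
-Putting steps 1 and 3 together, here is a minimal, self-contained Java sketch that triggers a build over HTTP with basic authentication (host, credentials and cube name are illustrative, matching the samples above; error handling is omitted):
-
-{% highlight Groff markup %}
-import java.io.OutputStream;
-import java.net.HttpURLConnection;
-import java.net.URL;
-import java.util.Base64;
-
-public class RebuildCube {
-    public static void main(String[] args) throws Exception {
-        String auth = Base64.getEncoder().encodeToString("ADMIN:KYLIN".getBytes("UTF-8"));
-        URL url = new URL("http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild");
-        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-        conn.setRequestMethod("PUT");
-        conn.setRequestProperty("Authorization", "Basic " + auth);
-        conn.setRequestProperty("Content-Type", "application/json;charset=UTF-8");
-        conn.setDoOutput(true);
-        String body = "{\"startTime\": 0, \"endTime\": 1388563200000, \"buildType\": \"BUILD\"}";
-        try (OutputStream os = conn.getOutputStream()) {
-            os.write(body.getBytes("UTF-8"));
-        }
-        // A 200 response means the job was accepted; the response JSON contains the job uuid
-        System.out.println("HTTP " + conn.getResponseCode());
-    }
-}
-{% endhighlight %}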
-
-### 4. Track job status
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   The returned `job_status` represents the current status of the job.
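-
-And a matching sketch for checking the status (pass the job uuid returned by the rebuild call; the JSON is printed raw rather than parsed, to keep the example dependency-free):
-
-{% highlight Groff markup %}
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
-import java.net.HttpURLConnection;
-import java.net.URL;
-import java.util.Base64;
-
-public class CheckJobStatus {
-    public static void main(String[] args) throws Exception {
-        String auth = Base64.getEncoder().encodeToString("ADMIN:KYLIN".getBytes("UTF-8"));
-        URL url = new URL("http://localhost:7070/kylin/api/jobs/" + args[0]);
-        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-        conn.setRequestProperty("Authorization", "Basic " + auth);
-        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
-            String line;
-            while ((line = in.readLine()) != null) {
-                System.out.println(line); // look for the "job_status" field, e.g. "RUNNING" or "FINISHED"
-            }
-        }
-    }
-}
-{% endhighlight %}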
-
-### 5.	If the job got errors, you can resume it. 
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
diff --git a/website/_docs21/howto/howto_cleanup_storage.cn.md b/website/_docs21/howto/howto_cleanup_storage.cn.md
deleted file mode 100644
index 2bbfa3c..0000000
--- a/website/_docs21/howto/howto_cleanup_storage.cn.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-layout: docs21-cn
-title:  Cleanup Storage
-categories: howto
-permalink: /cn/docs21/howto/howto_cleanup_storage.html
----
-
-Kylin generates intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left in HBase and will never be queried again; although Kylin has started to do automated garbage collection, it may not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this step does not delete anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
-{% endhighlight %}
-Please replace (version) here with the Kylin jar version of your installation.
-2. You can pick one or two resources to check whether they are no longer referred to; then add the "--delete true" option to clean them up.
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-{% endhighlight %}
-When finished, the intermediate files on HDFS and the HTables will be dropped.
diff --git a/website/_docs21/howto/howto_cleanup_storage.md b/website/_docs21/howto/howto_cleanup_storage.md
deleted file mode 100644
index f488040..0000000
--- a/website/_docs21/howto/howto_cleanup_storage.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-layout: docs21
-title:  Cleanup Storage
-categories: howto
-permalink: /docs21/howto/howto_cleanup_storage.html
----
-
-Kylin will generate intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left in HBase and will no longer be queried; although Kylin has started to do some
-automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this will not remove anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
-{% endhighlight %}
-Here please replace (version) with the specific Kylin jar version in your installation;
-2. You can pick one or two resources to check whether they are no longer referred to; then add the "--delete true" option to start the cleanup:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-{% endhighlight %}
-On finish, the intermediate HDFS files and HTables will be dropped;
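-
-If you want to run the cleanup on a schedule, here is a crontab sketch (path and schedule are illustrative; keep running the "--delete false" check until you trust the result):
-
-{% highlight Groff markup %}
-# Every Sunday at 1 AM, clean up unused storage
-0 1 * * 0 KYLIN_HOME=/path/to/kylin_home /path/to/kylin_home/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-{% endhighlight %}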
diff --git a/website/_docs21/howto/howto_enable_zookeeper_acl.md b/website/_docs21/howto/howto_enable_zookeeper_acl.md
deleted file mode 100644
index 2d6cb13..0000000
--- a/website/_docs21/howto/howto_enable_zookeeper_acl.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-layout: docs21
-title:  Enable zookeeper acl
-categories: howto
-permalink: /docs21/howto/howto_enable_zookeeper_acl.html
----
-
-Edit $KYLIN_HOME/conf/kylin.properties to add the following configuration items:
-
-* Add "kylin.env.zookeeper.zk-auth". It is the configuration item you can specify the zookeeper authenticated information. Its formats is "scheme:id". The value of scheme that the zookeeper supports is "world", "auth", "digest", "ip" or "super". The "id" is the authenticated information of the scheme. For example:
-
-    `kylin.env.zookeeper.zk-auth=digest:ADMIN:KYLIN`
-
-    The scheme is "digest". The id is "ADMIN:KYLIN", which expresses "username:password".
-
-* Add "kylin.env.zookeeper.zk-acl". It is the configuration item you can set access permission. Its formats is "scheme:id:permissions". The value of permissions that the zookeeper supports is "READ", "WRITE", "CREATE", "DELETE" or "ADMIN". For example, we configure that everyone has all the permissions:
-
-    `kylin.env.zookeeper.zk-acl=world:anyone:rwcda`
-
-    The scheme is "world", the id is "anyone", and the permissions are "rwcda".
\ No newline at end of file
diff --git a/website/_docs21/howto/howto_install_ranger_kylin_plugin.md b/website/_docs21/howto/howto_install_ranger_kylin_plugin.md
deleted file mode 100644
index 530f914..0000000
--- a/website/_docs21/howto/howto_install_ranger_kylin_plugin.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-layout: docs21
-title:  The Ranger Kylin Plugin Installation Guide
-categories: howto
-permalink: /docs21/howto/howto_install_ranger_kylin_plugin.html
----
-
-Please refer to [https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin).
\ No newline at end of file
diff --git a/website/_docs21/howto/howto_jdbc.cn.md b/website/_docs21/howto/howto_jdbc.cn.md
deleted file mode 100644
index b1603dc..0000000
--- a/website/_docs21/howto/howto_jdbc.cn.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin JDBC Driver
-categories: howto
-permalink: /cn/docs21/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
-* user : username
-* password : password
-* ssl: true or false. Defaults to false; if true, all service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" is true, the "port" should be the Kylin server's HTTPS port.
-* If "port" is not specified, the driver uses the default ports: HTTP 80, HTTPS 443.
-* The "kylin_project_name" must be specified, and users need to ensure it exists on the Kylin server.
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported PreparedStatement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata listing methods:
-list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
diff --git a/website/_docs21/howto/howto_jdbc.md b/website/_docs21/howto/howto_jdbc.md
deleted file mode 100644
index 4885a84..0000000
--- a/website/_docs21/howto/howto_jdbc.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs21
-title:  Kylin JDBC Driver
-categories: howto
-permalink: /docs21/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
-* user : username 
-* password : password
-* ssl: true/false. Defaults to false; if true, all service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
-* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
-* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported prepared statement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata list methods:
-list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
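-
-For completeness, a self-contained sketch of the same flow using try-with-resources (same illustrative URL and credentials as above; `DriverManager` picks up the driver loaded via `Class.forName`):
-
-{% highlight Groff markup %}
-import java.sql.*;
-import java.util.Properties;
-
-public class KylinJdbcExample {
-    public static void main(String[] args) throws Exception {
-        Class.forName("org.apache.kylin.jdbc.Driver");
-        Properties info = new Properties();
-        info.put("user", "ADMIN");
-        info.put("password", "KYLIN");
-        try (Connection conn = DriverManager.getConnection("jdbc:kylin://localhost:7070/kylin_project_name", info);
-             Statement state = conn.createStatement();
-             ResultSet rs = state.executeQuery("select count(*) from test_table")) {
-            while (rs.next()) {
-                System.out.println(rs.getLong(1)); // print the row count
-            }
-        }
-    }
-}
-{% endhighlight %}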
diff --git a/website/_docs21/howto/howto_ldap_and_sso.md b/website/_docs21/howto/howto_ldap_and_sso.md
deleted file mode 100644
index 1a7442d..0000000
--- a/website/_docs21/howto/howto_ldap_and_sso.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-layout: docs21
-title: Secure with LDAP and SSO
-categories: howto
-permalink: /docs21/howto/howto_ldap_and_sso.html
----
-
-## Enable LDAP authentication
-
-Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, like the LDAP server URL, username/password, and search patterns.
-
-#### Configure LDAP server info
-
-Firstly, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value:
-
-```
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-\<version\>.jar:spring-beans-3.2.17.RELEASE.jar:spring-core-3.2.17.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-```
-
-Configure them in conf/kylin.properties:
-
-```
-kylin.security.ldap.connection-server=ldap://<your_ldap_host>:<port>
-kylin.security.ldap.connection-username=<your_user_name>
-kylin.security.ldap.connection-password=<your_password_encrypted>
-```
-
-Secondly, provide the user search patterns; these depend on your LDAP design, so here is just a sample:
-
-```
-kylin.security.ldap.user-search-base=OU=UserAccounts,DC=mycompany,DC=com
-kylin.security.ldap.user-search-pattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
-kylin.security.ldap.user-group-search-base=OU=Group,DC=mycompany,DC=com
-```
-
-If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in kylin.security.ldap.service-*; otherwise, leave them empty.
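-
-If you do configure them, here is a sketch of the shape these properties take (values are illustrative; check your Kylin version's kylin.properties template for the exact keys):
-
-```
-kylin.security.ldap.service-search-base=OU=ServiceAccounts,DC=mycompany,DC=com
-kylin.security.ldap.service-search-pattern=(&(cn={0}))
-kylin.security.ldap.service-group-search-base=OU=Group,DC=mycompany,DC=com
-```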
-
-#### Configure the administrator group and default role
-
-To map an LDAP group to the admin group in Kylin, you need to set "kylin.security.acl.admin-role" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
-
-```
-kylin.security.acl.admin-role=ROLE_KYLIN-ADMIN-GROUP
-kylin.security.acl.default-role=ROLE_ANALYST,ROLE_MODELER
-```
-
-The "kylin.security.acl.default-role" is a list of the default roles that grant to everyone, keep it as-is.
-
-#### Enable LDAP
-
-Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
-
-## Enable SSO authentication
-
-From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it; as the SSO server may only do authentication, Kylin needs to search LDAP to get the user's detailed information.
-
-### Generate IDP metadata xml
-Contact your IDP (identity provider), asking it to generate the SSO metadata file; usually you need to provide three pieces of info:
-
-  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
-  3. Public certificate of the Kylin server; the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
-
-Assume kylin.crt is the public certificate file and kylin.key is the private key file; firstly create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-It will put the keys into "samlKeystore.jks" with the alias "kylin";
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher-level crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore file need to be deployed on the Kylin web app's classpath in $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes: 
-	
-  1. Name the IDP file sso_metadata.xml and then copy it to Kylin's classpath;
-  2. Name the keystore "samlKeystore.jks" and then copy it to Kylin's classpath;
-  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note, Kylin assumes the SAML message contains an "email" attribute representing the login user, and the name before @ will be used to search LDAP. 
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
-
diff --git a/website/_docs21/howto/howto_optimize_build.cn.md b/website/_docs21/howto/howto_optimize_build.cn.md
deleted file mode 100644
index 5610c09..0000000
--- a/website/_docs21/howto/howto_optimize_build.cn.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-layout: docs21-cn
-title:  Optimize Cube Build
-categories: howto
-permalink: /cn/docs21/howto/howto_optimize_build.html
----
-
-Kylin decomposes a cube build job into several sequential steps, including Hive operations, MapReduce jobs and other types of operations. If you have many cube build jobs to run every day, you will surely want to reduce the time spent on them. Below are some optimization tips, ordered by the cube build steps.
-
-## 创建Hive的中间平表
-
-这一步将数据从源Hive表提取出来(和所有join的表一起)并插入到一个中间平表。如果Cube是分区的,Kylin会加上一个时间条件以确保只有在时间范围内的数据才会被提取。你可以在这个步骤的log查看相关的Hive命令,比如:
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-
-```
-
-While the Hive command runs, Kylin applies the configuration in `conf/kylin_hive_conf.xml`, for example, keeping fewer redundant replicas and enabling Hive's mapper side join. You can add other configurations according to your cluster if needed.
-
-If the Cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, filtering on it lets Hive smartly skip the non-matched partitions. Therefore it is highly recommended to use the Hive table's partition column (if it is a date column) as the Cube's partition column. This is almost a must for very large tables; otherwise Hive has to scan all the files in this step each time, which takes a very long time.
-
-If file merge is enabled in your Hive, you can disable it in `conf/kylin_hive_conf.xml`, as Kylin has its own way to merge files (see the next section):
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-## Redistribute Intermediate Table
-
-After the previous step, Hive generates the data files in an HDFS folder: some are large while some are small or even empty. The imbalanced file distribution causes data skew in the subsequent MR jobs: some mappers finish quickly while others are very slow. To address this, Kylin adds this step to "redistribute" the data; here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-The command to redistribute the table:
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-```
-
-First, Kylin calculates the row count of the intermediate table; then, based on that row count, it derives the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this example, there are 160 million rows and 160 reducers, and each reducer writes one file. In the subsequent MR steps over this table, Hadoop starts as many mappers as there are files to process (usually 1 million rows is smaller than one HDFS block). If your daily data scale is not that large, or your Hadoop cluster has enough resources, you may want more concurrency; in that case set `kylin.job.mapreduce.mapper.input.rows` in `conf/kylin.properties` to a smaller value, e.g.:
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-Second, Kylin runs an *"INSERT OVERWRITE TABLE ... DISTRIBUTE BY "* HiveQL statement to distribute the rows among the specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distribute the rows among the reducers to get files of similar size; the distribute clause is "DISTRIBUTE BY RAND()".
-
-If your Cube specifies a high-cardinality column, such as "USER_ID", as the "shard by" dimension (in the Cube's "Advanced Setting" page), Kylin will ask Hive to redistribute the data by that column's value, so rows with the same value in this column go to the same file. This is much better than random distribution, because the data is not only redistributed but also pre-categorized at no extra cost, which benefits the subsequent cube build. In typical scenarios, this optimization cuts 40% of the build time. In this case the distribute clause is "DISTRIBUTE BY USER_ID":
-
-Please note: 1) The "shard by" column should be a high-cardinality dimension column that appears in many cuboids (not just a few). Using it to distribute properly achieves an even distribution within each time range; otherwise it causes data skew, which reduces build efficiency. Typical good candidates are "USER_ID", "SELLER_ID", "PRODUCT", "CELL_NUMBER" and so on, whose cardinality should be greater than one thousand (much larger than the number of reducers). 2) The "shard by" column also benefits the cube's storage, but that is beyond the scope of this document.
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs an MR job to extract the distinct values of the dimension columns that use dictionary encoding.
-
-Actually this step does more: it collects cube statistics with HyperLogLog counters, which are used to estimate the row count of each cuboid. If you find the mappers run very slowly, it usually indicates the cube design is too complex; please refer to
-[Optimize Cube Design](howto_optimize_cubes.html) to simplify the cube. If the reducers get OutOfMemory errors, it indicates the cuboid combinations really explode, or YARN's memory allocation cannot meet the demand. If this step cannot finish in a reasonable time by any means, you can give up the job and consider redesigning the cube, because continuing will take even longer.
-
-You can accelerate this step by reducing the sampling percentage (kylin.job.cubing.inmem.sampling.percent), but it may not help much and it affects the accuracy of the cube statistics, so we don't recommend it.
-
-## Build Dimension Dictionary
-
-With the distinct values extracted in the previous step, Kylin builds dictionaries in memory (this will be changed to a MapReduce job in the next version). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For UHC columns, please use another encoding, such as "fixed_length", "integer" and so on.
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-## Build Base Cuboid
-
-This step builds the base cuboid from the intermediate Hive table; it is the first round of MR in the "by-layer" cubing algorithm. The number of mappers equals the number of reducers in step 2; the number of reducers is estimated from the cube statistics: by default one reducer per 500MB of output. If you observe the number of reducers is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g.:
-
-`kylin.job.mapreduce.default.reduce.input.mb=200`
-
-## Build N-Dimension Cuboid
-
-These steps are the "by-layer" cubing process: each step takes the output of the previous step as its input, and removes one dimension to aggregate into a child cuboid. For example, from cuboid ABCD, removing A gives BCD, and removing B gives ACD.
-
-Some cuboids can be aggregated from more than one parent cuboid; in that case, Kylin selects the smallest parent cuboid. For example, AB can be generated from ABC (id: 1110) or ABD (id: 1101); ABD is chosen because its id is smaller than ABC's. On this basis, if D's cardinality is small, the aggregation is cheap. So, when designing the rowkey sequence, remember to put low-cardinality dimensions at the tail. This not only benefits the cube build but also the cube query, because post-aggregation follows the same rule.
-
-Usually building from the N-D level to the (N/2)-D level is slow, because this is where the cuboid count explodes: the N-D level has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1)/2 cuboids, and so on. After the (N/2)-D step, the build gradually gets faster.
-
-## Build Cube
-
-This step builds the cube with a new algorithm: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to compute all the cuboids, but it needs more memory than usual. The configuration file "conf/kylin_job_conf_inmem.xml" is made for this step. By default it requests 3GB of memory for each mapper. If your cluster has enough memory, you can allocate more in that file, so the mappers will use as much memory as possible to cache data and gain better performance, e.g.:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note that Kylin automatically selects the best algorithm based on the data distribution (obtained from the cube statistics); the steps of the algorithm that is not selected are skipped. You don't need to choose the algorithm explicitly.
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the cuboid files (in sequence file format) into HBase's HFile format. Kylin calculates the number of HBase regions from the cube statistics, by default one region per 5GB of data. The more regions, the more reducers the MR job uses. If you observe the number of reducers is small and the performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, e.g.:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you are not sure how large a region should be, contact your HBase administrator.
-
-## Load HFile to HBase Table
-
-This step uses the HBase API to load the HFiles to the region servers; it is lightweight and fast.
-
-## Update Cube Info
-
-After loading the data into HBase, Kylin marks the corresponding cube segment as ready in the metadata.
-
-## Cleanup
-
-Drop the intermediate flat table from Hive. This step doesn't block anything, because the segment has already been marked ready in the previous step. If this step fails, don't worry; the garbage can be collected later by Kylin's [StorageCleanupJob](howto_cleanup_storage.html).
-
-## Summary
-There are many other ways to improve Kylin's performance; if you have practices to share, you are welcome to discuss them at [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
\ No newline at end of file
diff --git a/website/_docs21/howto/howto_optimize_build.md b/website/_docs21/howto/howto_optimize_build.md
deleted file mode 100644
index a4f3522..0000000
--- a/website/_docs21/howto/howto_optimize_build.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-layout: docs21
-title:  Optimize Cube Build
-categories: howto
-permalink: /docs21/howto/howto_optimize_build.html
----
-
-Kylin decomposes a Cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types of jobs. When you have many Cubes to build daily, you definitely want to speed up this process. Here are some practices that you probably want to know, organized in the same order as the build steps.
-
-
-
-## Create Intermediate Flat Hive Table
-
-This step extracts data from source Hive tables (with all tables joined) and inserts them into an intermediate flat table. If Cube is partitioned, Kylin will add a time condition so that only the data in the range would be fetched. You can check the related Hive command in the log of this step, e.g: 
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-
-```
-
-Kylin applies the configuration in conf/kylin\_hive\_conf.xml while the Hive commands are running, for instance, using less replication and enabling Hive's mapper side join. If needed, you can add other configurations which are good for your cluster.
-
-If the Cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, then filtering on it will let Hive smartly skip those non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the Cube's partition column. This is almost required for very large tables; otherwise Hive has to scan all files each time in this step, which costs a terribly long time.
-
-If file merge is enabled in your Hive, you can disable it in "conf/kylin\_hive\_conf.xml", as Kylin has its own way to merge files (in the next step): 
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-
-## Redistribute intermediate table
-
-After the previous step, Hive generates the data files in an HDFS folder: some files are large, while some are small or even empty. The imbalanced file distribution would lead the subsequent MR jobs to be imbalanced as well: some mappers finish quickly yet others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-
-The command to redistribute the table: 
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-
-```
-
-
-
-Firstly, Kylin gets the row count of this intermediate table; then, based on the row count, it calculates the number of files needed to get the data redistributed. By default, Kylin allocates one file per 1 million rows. In this sample, there are 160 million rows and 160 reducers, and each reducer writes 1 file. In the following MR steps over this table, Hadoop will start the same number of mappers as there are files to process (usually 1 million rows' data size is smaller than an HDFS block size). If your daily data scale is not so large, or your Hadoop cluster has enough resources, you may want more concurrency; in that case set `kylin.job.mapreduce.mapper.input.rows` in `conf/kylin.properties` to a smaller value, e.g.:
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-
-Secondly, Kylin runs an *"INSERT OVERWRITE TABLE .... DISTRIBUTE BY "* HiveQL to distribute the rows among a specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distribute the rows among the reducers, producing files that are very close in size. The distribute clause is "DISTRIBUTE BY RAND()".
-
-If your Cube has specified a "shard by" dimension (in the Cube's "Advanced Setting" page), which is a high cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute data by that column's value. Then the rows that have the same value in this column will go to the same file. This is much better than "by random", because the data will be not only redistributed but also pre-categorized without additional cost, thus benefiting the subsequent Cube build process. Under typical scenarios, this optimization can cut 40% of the build time. In this case the distribute clause is "DISTRIBUTE BY USER\_ID".
-
-**Please note:** 1) The "shard by" column should be a high cardinality dimension column, and it appears in many cuboids (not just in a few). Utilizing it to distribute properly can achieve equidistribution in every time range; otherwise it will cause data skew, which will reduce the building speed. Typical good cases are "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER" and so forth, whose cardinality should be higher than one thousand (much more than the number of reducers). 2) The "shard by" column also benefits the Cube's storage, however that is beyond the scope of this document.
-
-
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs an MR job to fetch the distinct values of the dimension columns that use dictionary encoding. 
-
-Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each cuboid. If you find that the mappers work incredibly slowly, it usually indicates that the Cube design is too complex; please check [optimize cube design](howto_optimize_cubes.html) to make the Cube thinner. If the reducers get OutOfMemory errors, it indicates that the cuboid combinations really explode, or the default YARN memory allocation cannot meet demands. If this step cannot finish in a reasonable time by any means, you can give up the job and consider redesigning the Cube, as continuing will cost even more time.
-
-You can reduce the sampling percentage (kylin.job.cubing.inmem.sampling.percent in kylin.properties) to get this step accelerated, but this may not help much and it impacts the accuracy of the Cube statistics, thus we don't recommend it.
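-
-For example, to sample half of the rows (the default is 100; a lower value is faster but gives less accurate statistics):
-
-```
-kylin.job.cubing.inmem.sampling.percent=50
-```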
-
-
-
-## Build Dimension Dictionary
-
-With the distinct values fetched in the previous step, Kylin will build dictionaries in memory (in the next version this will be moved to MR). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For a UHC column, please use another encoding method, such as "fixed_length", "integer" and so on.
-
-
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-
-
-## Build Base Cuboid 
-
-This step builds the base cuboid from the intermediate table; it is the first round of MR in the "by-layer" cubing algorithm. The mapper number equals the reducer number of step 2; the reducer number is estimated from the cube statistics: by default it uses one reducer for every 500MB of output. If you observe that the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g: `kylin.job.mapreduce.default.reduce.input.mb=200`
-
-
-## Build N-Dimension Cuboid 
-
-These steps are the "by-layer" cubing process; each step uses the output of the previous step as its input, and then cuts off one dimension to aggregate into a child cuboid. For example, from cuboid ABCD, cutting off A gets BCD, cutting off B gets ACD, etc. 
-
-Some cuboids can be aggregated from more than 1 parent cuboid; in this case, Kylin will select the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) and ABD (id: 1101), so ABD will be used, as its id is smaller than ABC's. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low cardinality dimensions at the tail position. This not only benefits the Cube build, but also the Cube query, since post-aggregation follows the same rule.
-
-Usually the building from N-D to (N/2)-D is slow, because it is the cuboid explosion process: the N-D level has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1)/2 cuboids, etc. After the (N/2)-D step, the building gets faster gradually.
-
-
-
-## Build Cube
-
-This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but it requests more memory than normal. The "conf/kylin\_job\_conf\_inmem.xml" is made for this step. By default it requests 3GB memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will use as much memory as possible to hold the data and gain better performance, e.g:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note, Kylin will automatically select the best algorithm based on the data distribution (obtained from the Cube statistics). The steps of the not-selected algorithm will be skipped. You don't need to select the algorithm explicitly.
-
-
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number from the Cube statistics, by default 1 region per 5GB. The more regions there are, the more reducers will be utilized. If you observe that the reducer number is small and performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you're not sure what size a region should be, contact your HBase administrator. 
-
-
-## Load HFile to HBase Table
-
-This step uses HBase API to load the HFile to region servers, it is lightweight and fast.
-
-
-
-## Update Cube Info
-
-After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
-
-
-
-## Cleanup
-
-Drop the intermediate table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step gets an error, no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](howto_cleanup_storage.html).
-
-
-## Summary
-There are also many other methods to boost the performance. If you have practices to share, welcome to discuss in [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
\ No newline at end of file
diff --git a/website/_docs21/howto/howto_optimize_cubes.md b/website/_docs21/howto/howto_optimize_cubes.md
deleted file mode 100644
index 2693f38..0000000
--- a/website/_docs21/howto/howto_optimize_cubes.md
+++ /dev/null
@@ -1,212 +0,0 @@
----
-layout: docs21
-title:  Optimize Cube Design
-categories: howto
-permalink: /docs21/howto/howto_optimize_cubes.html
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three combinations of group by when you do drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case. In general, a hierarchy of h levels contributes only h+1 choices (none, H1, H1+H2, ..., the full path) instead of 2^h combinations.
-
-If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case for scenario A, where the PK on the lookup table is accidentally part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another (usually the corresponding FK; this is called the "host column").
-
-For example, suppose we have a lookup table that we join with the fact table by "where DimA = DimX". Notice that in Kylin, if you choose the FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK are always identical, Kylin can apply filters/groupby on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB, DimC in our cube, we can safely choose DimA, DimB, DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA, and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
-At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB is expected to answer the query. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first; we'll get an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping for them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs21/howto/howto_setup_systemcube.md b/website/_docs21/howto/howto_setup_systemcube.md
deleted file mode 100644
index 8f05554..0000000
--- a/website/_docs21/howto/howto_setup_systemcube.md
+++ /dev/null
@@ -1,437 +0,0 @@
----
-layout: docs21
-title:  Set Up System Cube
-categories: howto
-permalink: /docs21/howto/howto_setup_systemcube.html
----
-
-> Available since Apache Kylin v2.3.x
-
-## What is System Cube
-
-To better support self-monitoring, a set of system cubes are created under the system project "KYLIN_SYSTEM". Currently, there are five cubes: three for query metrics, "METRICS_QUERY", "METRICS_QUERY_CUBE", "METRICS_QUERY_RPC", and two for job metrics, "METRICS_JOB", "METRICS_JOB_EXCEPTION".
-
-## How to Set Up System Cube
-
-### Prepare
-Create a configuration file SCSinkTools.json in the KYLIN_HOME directory.
-
-For example:
-
-```
-[
-  [
-    "org.apache.kylin.tool.metrics.systemcube.util.HiveSinkTool",
-    {
-      "storage_type": 2,
-      "cube_desc_override_properties": [
-        "java.util.HashMap",
-        {
-          "kylin.cube.algorithm": "INMEM",
-          "kylin.cube.max-building-segments": "1"
-        }
-      ]
-    }
-  ]
-]
-```
-
-### 1. Generate Metadata
-Run the following command in the KYLIN_HOME folder to generate the related metadata:
-
-```
-./bin/kylin.sh org.apache.kylin.tool.metrics.systemcube.SCCreator \
--inputConfig SCSinkTools.json \
--output <output_folder>
-```
-
-With this command, the related metadata will be generated under the directory `<output_folder>`. The details are as follows (system_cube is our `<output_folder>`):
-
-![metadata](/images/SystemCube/metadata.png)
-
-### 2. Set Up Datasource
-Run the following command to create the source Hive tables:
-
-```
-hive -f <output_folder>/create_hive_tables_for_system_cubes.sql
-```
-
-With this command, the related Hive tables will be created.
-
-![hive_table](/images/SystemCube/hive_table.png)
-
-### 3. Upload Metadata for System Cubes
-Then we need to upload the metadata to HBase with the following command:
-
-```
-./bin/metastore.sh restore <output_folder>
-```
-
-### 4. Reload Metadata
-Finally, we need to reload the metadata in the Kylin web UI.
-
-![reload_metadata](/images/SystemCube/reload_metadata.png)
-
-Then, a set of system cubes will be created under the system project, called "KYLIN_SYSTEM".
-
-![kylin_system](/images/SystemCube/kylin_system.png)
-
-### 5. System Cube build
-Once the system cubes are created, we need to build them regularly.
-
-1. Create a shell script that builds the system cube by calling org.apache.kylin.tool.job.CubeBuildingCLI
-  
-	For example:
-
-	```
-	#!/bin/bash
-
-	dir=$(dirname ${0})
-	export KYLIN_HOME=${dir}/../
-
-	CUBE=$1       # name of the cube to build
-	INTERVAL=$2   # build interval in milliseconds
-	DELAY=$3      # build delay in milliseconds
-	CURRENT_TIME_IN_SECOND=`date +%s`
-	CURRENT_TIME=$((CURRENT_TIME_IN_SECOND * 1000))   # current time in milliseconds
-	END_TIME=$((CURRENT_TIME-DELAY))
-	END=$((END_TIME - END_TIME%INTERVAL))   # align the segment end time to the interval boundary
-
-	ID="$END"
-	echo "building for ${CUBE}_${ID}" >> ${KYLIN_HOME}/logs/build_trace.log
-	sh ${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.job.CubeBuildingCLI --cube ${CUBE} --endTime ${END} > ${KYLIN_HOME}/logs/system_cube_${CUBE}_${END}.log 2>&1 &
-	```
-
-2. Then run this shell script regularly
-
-	For example, add a cron job as follows:
-
-	```
-	0 */2 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_DEV 3600000 1200000
-
-	20 */2 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_CUBE_DEV 3600000 1200000
-
-	40 */4 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_QUERY_RPC_DEV 3600000 1200000
-
-	30 */4 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_JOB_DEV 3600000 1200000
-
-	50 */12 * * * sh ${KYLIN_HOME}/bin/system_cube_build.sh KYLIN_HIVE_METRICS_JOB_EXCEPTION_DEV 3600000 12000
-	```
-
-## Details of System Cube
-### Common Dimension
-For all of these cubes, admins can query at four time granularities. From the highest level to the lowest, they are as follows:
-
-<table>
-  <tr>
-    <td>KYEAR_BEGIN_DATE</td>
-    <td>year</td>
-  </tr>
-  <tr>
-    <td>KMONTH_BEGIN_DATE</td>
-    <td>month</td>
-  </tr>
-  <tr>
-    <td>KWEEK_BEGIN_DATE</td>
-    <td>week</td>
-  </tr>
-  <tr>
-    <td>KDAY_DATE</td>
-    <td>date</td>
-  </tr>
-</table>
-
-### METRICS_QUERY
-This cube is for collecting query metrics at the highest level. The details are as follows:
-
-<table>
-  <tr>
-    <th colspan="2">Dimension</th>
-  </tr>
-  <tr>
-    <td>HOST</td>
-    <td>the host of server for query engine</td>
-  </tr>
-  <tr>
-    <td>PROJECT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>REALIZATION</td>
-    <td>in kylin, there are two OLAP realizations: cube & hybrid of cubes</td>
-  </tr>
-  <tr>
-    <td>REALIZATION_TYPE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>QUERY_TYPE</td>
-    <td>users can query on different data sources, CACHE, OLAP, LOOKUP_TABLE, HIVE</td>
-  </tr>
-  <tr>
-    <td>EXCEPTION</td>
-    <td>when executing a query, exceptions may happen; this classifies the different exception types</td>
-  </tr>
-</table>
-
-<table>
-  <tr>
-    <th colspan="2">Measure</th>
-  </tr>
-  <tr>
-    <td>COUNT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of QUERY_TIME_COST</td>
-    <td>the time cost for the whole query</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of CALCITE_SIZE_RETURN</td>
-    <td>the row count of the result Calcite returns</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_SIZE_RETURN</td>
-    <td>the row count of the input to Calcite</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
-    <td>the row count of Calcite aggregates and filters</td>
-  </tr>
-  <tr>
-    <td>COUNT DISTINCT of QUERY_HASH_CODE</td>
-    <td>the number of different queries</td>
-  </tr>
-</table>
-
-### METRICS_QUERY_RPC
-This cube is for collecting query metrics at the lowest level. For a query, the related aggregation and filter can be pushed down to each rpc target server. The robustness of rpc target servers is the foundation for better serving queries. The details are as follows:
-
-<table>
-  <tr>
-    <th colspan="2">Dimension</th>
-  </tr>
-  <tr>
-    <td>HOST</td>
-    <td>the host of server for query engine</td>
-  </tr>
-  <tr>
-    <td>PROJECT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>REALIZATION</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>RPC_SERVER</td>
-    <td>the rpc related target server</td>
-  </tr>
-  <tr>
-    <td>EXCEPTION</td>
-    <td>the exception of an rpc call. If there is no exception, "NULL" is used</td>
-  </tr>
-</table>
-
-<table>
-  <tr>
-    <th colspan="2">Measure</th>
-  </tr>
-  <tr>
-    <td>COUNT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of CALL_TIME</td>
-    <td>the time cost of an rpc call</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of COUNT_SKIP</td>
-    <td>based on fuzzy filters or the like, a few rows will be skipped. This indicates the skipped row count</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of SIZE_SCAN</td>
-    <td>the row count actually scanned</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of SIZE_RETURN</td>
-    <td>the row count actually returned</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of SIZE_AGGREGATE</td>
-    <td>the row count actually aggregated</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of SIZE_AGGREGATE_FILTER</td>
-    <td>the row count actually aggregated and filtered, = SIZE_SCAN - SIZE_RETURN</td>
-  </tr>
-</table>
-
-### METRICS_QUERY_CUBE
-This cube is for collecting query metrics at the cube level. The most important are cuboids related, which will serve for cube planner. The details are as follows:
-
-<table>
-  <tr>
-    <th colspan="2">Dimension</th>
-  </tr>
-  <tr>
-    <td>CUBE_NAME</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>CUBOID_SOURCE</td>
-    <td>source cuboid parsed based on query and cube design</td>
-  </tr>
-  <tr>
-    <td>CUBOID_TARGET</td>
-    <td>target cuboid already precalculated and served for source cuboid</td>
-  </tr>
-  <tr>
-    <td>IF_MATCH</td>
-    <td>whether source cuboid and target cuboid are equal</td>
-  </tr>
-  <tr>
-    <td>IF_SUCCESS</td>
-    <td>whether a query on this cube is successful or not</td>
-  </tr>
-</table>
-
-<table>
-  <tr>
-    <th colspan="2">Measure</th>
-  </tr>
-  <tr>
-    <td>COUNT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_CALL_COUNT</td>
-    <td>the number of rpc calls for a query hit on this cube</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_CALL_TIME_SUM</td>
-    <td>sum of time cost for the rpc calls of a query</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_CALL_TIME_MAX</td>
-    <td>max of time cost among the rpc calls of a query</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_COUNT_SKIP</td>
-    <td>the sum of row count skipped for the related rpc calls</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_SIZE_SCAN</td>
-    <td>the sum of row count scanned for the related rpc calls</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_SIZE_RETURN</td>
-    <td>the sum of row count returned for the related rpc calls</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_SIZE_AGGREGATE</td>
-    <td>the sum of row count aggregated for the related rpc calls</td>
-  </tr>
-  <tr>
-    <td>MAX, SUM of STORAGE_SIZE_AGGREGATE_FILTER</td>
-    <td>the sum of row count aggregated and filtered for the related rpc calls, = STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
-  </tr>
-</table>
-
-### METRICS_JOB
-In kylin, there are mainly three types of jobs:
-- "BUILD", for building cube segments from **HIVE**.
-- "MERGE", for merging cube segments in **HBASE**.
-- "OPTIMIZE", for dynamically adjusting the precalculated cuboid tree based on the **base cuboid** in **HBASE**.
-
-This cube is for collecting job metrics. The details are as follows:
-
-<table>
-  <tr>
-    <th colspan="2">Dimension</th>
-  </tr>
-  <tr>
-    <td>PROJECT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>CUBE_NAME</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>JOB_TYPE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>CUBING_TYPE</td>
-    <td>in kylin, there are two cubing algorithms, Layered & Fast(InMemory)</td>
-  </tr>
-</table>
-
-<table>
-  <tr>
-    <th colspan="2">Measure</th>
-  </tr>
-  <tr>
-    <td>COUNT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of DURATION</td>
-    <td>the duration from a job start to finish</td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of TABLE_SIZE</td>
-    <td>the size of data source in bytes</td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of CUBE_SIZE</td>
-    <td>the size of created cube segment in bytes</td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of PER_BYTES_TIME_COST</td>
-    <td>= DURATION / TABLE_SIZE</td>
-  </tr>
-  <tr>
-    <td>MIN, MAX, SUM of WAIT_RESOURCE_TIME</td>
-    <td>a job may include several MR (MapReduce) jobs. Those MR jobs may wait because of a lack of Hadoop resources.</td>
-  </tr>
-</table>
-
-### METRICS_JOB_EXCEPTION
-This cube is for collecting job exception metrics. The details are as follows:
-
-<table>
-  <tr>
-    <th colspan="2">Dimension</th>
-  </tr>
-  <tr>
-    <td>PROJECT</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>CUBE_NAME</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>JOB_TYPE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>CUBING_TYPE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>EXCEPTION</td>
-    <td>when running a job, exceptions may happen; this classifies the different exception types</td>
-  </tr>
-</table>
-
-<table>
-  <tr>
-    <th>Measure</th>
-  </tr>
-  <tr>
-    <td>COUNT</td>
-  </tr>
-</table>
diff --git a/website/_docs21/howto/howto_update_coprocessor.md b/website/_docs21/howto/howto_update_coprocessor.md
deleted file mode 100644
index 99f5d0c..0000000
--- a/website/_docs21/howto/howto_update_coprocessor.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs21
-title:  Update Coprocessor
-categories: howto
-permalink: /docs21/howto/howto_update_coprocessor.html
----
-
-Kylin leverages HBase coprocessors to optimize query performance. After a new version is released, the RPC protocol may change, so users need to redeploy the coprocessor to the HTables.
-
-There's a CLI tool to update HBase Coprocessor:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI default all
-{% endhighlight %}
diff --git a/website/_docs21/howto/howto_upgrade.md b/website/_docs21/howto/howto_upgrade.md
deleted file mode 100644
index a45abf1..0000000
--- a/website/_docs21/howto/howto_upgrade.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-layout: docs21
-title:  Upgrade From Old Versions
-categories: howto
-permalink: /docs21/howto/howto_upgrade.html
-since: v1.5.1
----
-
-Running as a Hadoop client, Apache Kylin's metadata and Cube data are persisted in Hadoop (HBase and HDFS), so the upgrade is relatively easy and users do not need to worry about data loss. The upgrade can be performed in the following steps:
-
-* Download the new Apache Kylin binary package for your Hadoop version from Kylin download page.
-* Unpack the new version Kylin package to a new folder, e.g, /usr/local/kylin/apache-kylin-2.1.0/ (directly overwriting the old instance is not recommended).
-* Merge the old configuration files (`$KYLIN_HOME/conf/*`) into the new ones (see the example below). It is not recommended to overwrite the new configuration files, although that works in most cases. If you have modified the tomcat configuration ($KYLIN_HOME/tomcat/conf/), do the same for it.
-* Stop the current Kylin instance with `bin/kylin.sh stop`
-* Set the `KYLIN_HOME` env variable to the new installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
-* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`. After it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
-* [Upgrade coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
-* Verify your SQL queries can be performed successfully.
-
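-A minimal sketch of comparing the old and new config files before merging (the installation paths are illustrative):
-
-```
-diff -u /usr/local/kylin/apache-kylin-2.0.0/conf/kylin.properties \
-        /usr/local/kylin/apache-kylin-2.1.0/conf/kylin.properties
-```
-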
-Below are version-specific guides:
-
-
-## Upgrade from v2.1.0 to v2.2.0
-
-Kylin v2.2.0 cube metadata is compatible with v2.1.0, but you need to be aware of the following changes:
-
-* Cube ACL is removed, use Project Level ACL instead. You need to manually configure Project Permissions to migrate your existing Cube Permissions. Please refer to [Project Level ACL](/docs21/tutorial/project_level_acl.html).
-* Update HBase coprocessor. The HBase tables for existing cubes need be updated to the latest coprocessor. Follow [this guide](/docs21/howto/howto_update_coprocessor.html) to update.
-
-
-## Upgrade from v2.0.0 to v2.1.0
-
-Kylin v2.1.0 cube metadata is compatible with v2.0.0, but you need to be aware of the following changes. 
-
-1) In previous versions, Kylin used two additional HBase tables, "kylin_metadata_user" and "kylin_metadata_acl", to persist the user and ACL info. From 2.1, Kylin consolidates all the info into one table: "kylin_metadata". This makes backup/restore and maintenance easier. When you start Kylin 2.1.0, it will detect whether migration is needed; if so, it will print the command to do the migration:
-
-```
-ERROR: Legacy ACL metadata detected. Please migrate ACL metadata first. Run command 'bin/kylin.sh org.apache.kylin.tool.AclTableMigrationCLI MIGRATE'.
-```
-
-After the migration finishes, you can delete the legacy "kylin_metadata_user" and "kylin_metadata_acl" tables from HBase.
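-
-For example, from the HBase shell (a sketch; run it only after confirming the migration succeeded):
-
-```
-disable 'kylin_metadata_user'
-drop 'kylin_metadata_user'
-disable 'kylin_metadata_acl'
-drop 'kylin_metadata_acl'
-```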
-
-2) From v2.1, Kylin hides the default settings in "conf/kylin.properties"; you only need to uncomment or add your customized properties in it.
-
-3) Spark is upgraded from v1.6.3 to v2.1.1, if you customized Spark configurations in kylin.properties, please upgrade them as well by referring to [Spark documentation](https://spark.apache.org/docs/2.1.0/).
-
-4) If you are running Kylin with two clusters (compute/query separated), need copy the big metadata files (which are persisted in HDFS instead of HBase) from the Hadoop cluster to HBase cluster.
-
-```
-hadoop distcp hdfs://compute-cluster:8020/kylin/kylin_metadata/resources hdfs://query-cluster:8020/kylin/kylin_metadata/resources
-```
-
-
-## Upgrade from v1.6.0 to v2.0.0
-
-Kylin v2.0.0 can read v1.6.0 metadata directly. Please follow the common upgrade steps above.
-
-Configuration names in `kylin.properties` have changed since v2.0.0. While the old property names still work, it is recommended to use the new property names as they follow [the naming convention](/development/coding_naming_convention.html) and are easier to understand. There is [a mapping from the old properties to the new properties](https://github.com/apache/kylin/blob/2.0.x/core-common/src/main/resources/kylin-backward-compatibility.properties).
-
-## Upgrade from v1.5.4 to v1.6.0
-
-Kylin v1.5.4 and v1.6.0 are compatible in metadata. Please follow the common upgrade steps above.
-
-## Upgrade from v1.5.3 to v1.5.4
-Kylin v1.5.3 and v1.5.4 are compatible in metadata. Please follow the common upgrade steps above.
-
-## Upgrade from 1.5.2 to v1.5.3
-Kylin v1.5.3 metadata is compatible with v1.5.2; your cubes don't need to be rebuilt, but as usual, some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update them.
-
-#### 2. Update conf/kylin_hive_conf.xml
-From 1.5.3, Kylin doesn't need Hive to merge small files anymore. For users who copied conf/ from a previous version, please remove the "merge"-related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
-
-
-## Upgrade from 1.5.1 to v1.5.2
-Kylin v1.5.2 metadata is compatible with v1.5.1; your cubes don't need an upgrade, but some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update them.
-
-#### 2. Update conf/kylin.properties
-In v1.5.2 several properties are deprecated, and several new ones are added:
-
-Deprecated:
-
-* kylin.hbase.region.cut.small=5
-* kylin.hbase.region.cut.medium=10
-* kylin.hbase.region.cut.large=50
-
-New:
-
-* kylin.hbase.region.cut=5
-* kylin.hbase.hfile.size.gb=2
-
-These new parameters determine how to split HBase regions; to use a different size you can override these params at the Cube level, as sketched below.
-
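-For example, a cube with bigger segments might override them at the Cube level (the values are illustrative):
-
-```
-kylin.hbase.region.cut=10
-kylin.hbase.hfile.size.gb=5
-```
-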
-When copying from the old kylin.properties file, it is suggested to remove the deprecated ones and add the new ones.
-
-#### 3. Add conf/kylin\_job\_conf\_inmem.xml
-A new job conf file named "kylin\_job\_conf\_inmem.xml" is added in the "conf" folder. As Kylin 1.5 introduced the "fast cubing" algorithm, which aims to leverage more memory for in-mem aggregation, Kylin will use this new conf file when submitting the in-mem cube build job, which requests different memory than a normal job. Please update it properly according to your cluster capacity.
-
-Besides, if you have used separate config files for different capacity cubes, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different YARN job queue), you can customize them at the cube level; check [KYLIN-1706](https://issues.apache.org/jira/browse/KYLIN-1706) for details.
-
diff --git a/website/_docs21/howto/howto_use_beeline.md b/website/_docs21/howto/howto_use_beeline.md
deleted file mode 100644
index c730cc1..0000000
--- a/website/_docs21/howto/howto_use_beeline.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs21
-title:  Use Beeline for Hive
-categories: howto
-permalink: /docs21/howto/howto_use_beeline.html
----
-
-Beeline (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors to replace Hive CLI. By default Kylin uses Hive CLI to synchronize Hive tables, create flat intermediate tables, etc. With simple configuration changes you can set Kylin to use Beeline instead.
-
-Edit $KYLIN_HOME/conf/kylin.properties by:
-
-  1. change kylin.hive.client=cli to kylin.hive.client=beeline
-  2. add "kylin.hive.beeline.params", where you can specify beeline command parameters, like username (-n), JDBC URL (-u), etc. There is a sample kylin.hive.beeline.params included in the default kylin.properties, however it is commented out; you can modify the sample based on your real environment, as sketched below.
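-
-A minimal sketch of the two changes (the username and JDBC URL are illustrative; adjust them to your environment):
-
-```
-kylin.hive.client=beeline
-kylin.hive.beeline.params=-n hive -u 'jdbc:hive2://localhost:10000'
-```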
-
diff --git a/website/_docs21/howto/howto_use_cube_planner.md b/website/_docs21/howto/howto_use_cube_planner.md
deleted file mode 100644
index ab216c2..0000000
--- a/website/_docs21/howto/howto_use_cube_planner.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-layout: docs21
-title:  Use Cube Planner
-categories: howto
-permalink: /docs21/howto/howto_use_cube_planner.html
----
-
-> Available since Apache Kylin v2.3.0
-
-# Cube Planner
-
-## What is Cube Planner
-
-An OLAP solution trades off online query speed against offline cube build cost (the compute resources to build the cube and the storage resources to save the cube data). Resource efficiency is the most important competency of an OLAP engine. To be resource efficient, it is critical to pre-build only the most valuable cuboids.
-
-Cube Planner makes Apache Kylin more resource efficient. It intelligently builds a partial cube to minimize the cost of building a cube while maximizing the benefit of serving end-user queries; it then learns patterns from queries at runtime and dynamically recommends cuboids accordingly. 
-
-![CubePlanner](/images/CubePlanner/CubePlanner.png)
-
-## Prerequisites
-
-To enable Cube Planner on the WebUI, you need to set **kylin.cube.cubeplanner.enabled=true** in **kylin.properties**, as sketched below.
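-
-A minimal sketch of the entry in `conf/kylin.properties` (a restart is typically required for the change to take effect):
-
-```
-kylin.cube.cubeplanner.enabled=true
-```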
-
-## How to use it
-
-*Note: Cube Planner optimization is not suitable for a new cube. A cube should be online in production for a while (like 3 months) before optimizing it, so that the Kylin platform collects enough real queries from end users and uses them to optimize the cube.*  
-
-#### Step 1:
-
-​	Select a cube
-
-#### Step 2:
-
-1. Click the '**Planner**' button to view the '**Current Cuboid Distribution**' of the cube.
-
-  You should make sure the status of the cube is '**READY**'![Status_Ready](/images/CubePlanner/Status_Ready.png).
-
-  If the status of the cube is '**DISABLED**'![Status_Disabled](/images/CubePlanner/Status_Disabled.png), you will not be able to use the cube planner.
-
-  ​
-
-  You should change the status of the cube from '**DISABLED**' to '**READY**' by clicking the '**Enable**' button.
-
-  ![DISABLED2READY](/images/CubePlanner/DISABLED2READY.png)
-
-#### Step 3:
-
-a. Click the '**Planner**' button to view the '**Current Cuboid Distribution**' of the cube.
-
-- The data will be displayed in *[Sunburst Chart](https://en.wikipedia.org/wiki/Pie_chart#Ring_chart_.2F_Sunburst_chart_.2F_Multilevel_pie_chart)*. 
-
-- Each part refers to a cuboid and is shown in a different color determined by the query **frequency** against that cuboid.
-
-     ![CubePlanner](/images/CubePlanner/CP.png)
-
-
--  You can move the cursor over the chart and it will display the detailed information of the cuboid.
-
-   The detailed information contains six attributes: '**Name**', '**ID**', '**Query Count**', '**Exactly Match Count**', '**Row Count**' and '**Rollup Rate**'. 
-
-   Cuboid **Name** is composed of several '0's and '1's, representing a combination of dimensions: '0' means the dimension doesn't exist in this combination, while '1' means the dimension exists in it. All the dimensions are ordered by the HBase row keys in the advanced settings. 
-
-   Here is an example: 
-
-   ![CubePlanner](/images/CubePlanner/Leaf.png)
-
-   Name:1111111110000000 means the dimension combination is ["MONTH_BEG_DT","USER_CNTRY_SITE_CD","RPRTD_SGMNT_VAL","RPRTD_IND","SRVY_TYPE_ID","QSTN_ID","L1_L2_IND","PRNT_L1_ID","TRANCHE_ID"] based on the row key orders below:
-
-   ![CubePlanner](/images/CubePlanner/Rowkeys.png)
-
-   **ID** is the unique id of the cuboid.
-
-   **Query Count** is the total count of the queries served by this cuboid, including queries against other not-precalculated cuboids that are aggregated online from this cuboid.  
-
-   **Exactly Match Count** is the count of queries that actually hit this cuboid exactly.
-
-   **Row Count** is the total row count of all the segments for this cuboid.
-
-   **Rollup Rate** = (Cuboid's Row Count / its parent cuboid's Row Count) * 100%. For example, a cuboid with 1,000 rows whose parent has 10,000 rows has a rollup rate of 10%.  
-
--  The center of the sunburst chart contains the combined information of the base cuboid; its '**Name**' is composed of several '1's.
-
-![Root](/images/CubePlanner/Root.png)
-
-As for a leaf, its '**Name**' is composed of several '0's and '1's. 
-
-![Leaf](/images/CubePlanner/Leaf.png)
-
--    If you want to **specify** a leaf, just **click on** it. The view will change automatically.
-
-     ![Leaf-Specify](/images/CubePlanner/Leaf-Specify.png)
-
--    If you want to specify the parent leaf of a leaf, click on the **center circle** (the part marked yellow).
-
-![Leaf-Specify-Parent](/images/CubePlanner/Leaf-Specify-Parent.png)
-
-b. Click the '**Recommend**' button to view the '**Recommend Cuboid Distribution**' of the cube.
-
-If the cube is currently building![Running](/images/CubePlanner/Running.png), the cube planner '**Recommend**' function will not perform correctly. Please **stop the build job** of the cube first.
-
--  The recommended data will be calculated by dedicated algorithms. It is common to see this window:
-
-   ![Recommending](/images/CubePlanner/Recommending.png)
-
--  The data will be displayed in *[Sunburst Chart](https://en.wikipedia.org/wiki/Pie_chart#Ring_chart_.2F_Sunburst_chart_.2F_Multilevel_pie_chart)*.
-
-   - Each part is shown in different colors determined by the **frequency**.
-
-![CubePlanner_Recomm](/images/CubePlanner/CPRecom.png)
-
-- Detailed operation of the '**Recommend Cuboid Distribution**' chart is the same as '**Current Cuboid Distribution**' chart.
-- Users can tell the dimension names of a cuboid when the mouse hovers over the sunburst chart, as the figure below shows.
-- Users can click **'Export'** to export hot dimension combinations (TopN cuboids, currently with options of Top 10, Top 50, Top 100) from an existing cube as a JSON file, which will be downloaded to your local file system for recording or for future import of dimension combinations when creating a cube.
-
-![export cuboids](/images/CubePlanner/export_cuboids.png)
-
-c. Click the '**Optimize**' button to optimize the cube.
-
-- A window will pop up to confirm your decision.
-
-  ​	![CubePlanner_Optimize](/images/CubePlanner/CubePlanner_Optimize.png)
-
-  Click '**Yes**' to start the optimization.
-
-  Click '**Cancel**' to give up the optimization.
-
-- Users can see the last optimization time of the cube on the Cube Planner tab page. 
-
-![column name+optimize time](/images/CubePlanner/column_name+optimize_time.png)
-
-- Users can receive an email notification for a cube optimization job.
-
-![optimize email](/images/CubePlanner/optimize_email.png)
diff --git a/website/_docs21/howto/howto_use_dashboard.md b/website/_docs21/howto/howto_use_dashboard.md
deleted file mode 100644
index b90b9f9..0000000
--- a/website/_docs21/howto/howto_use_dashboard.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-layout: docs21
-title:  Use Dashboard
-categories: howto
-permalink: /docs21/howto/howto_use_dashboard.html
----
-
-> Available since Apache Kylin v2.3.0
-
-# Dashboard
-
-As a project owner, do you want to know your cube usage metrics? Do you want to know how many queries hit your cube every day? What is the average query latency? Do you want to know the average cube build time per GB of source data, which is very helpful to foresee the time cost of a coming cube build job? You can find all this information on the Kylin Dashboard. 
-
-Kylin Dashboard shows useful cube usage statistics, which are very important to users.
-
-## Prerequisites
-
-To enable Dashboard on WebUI, you need to ensure these are all set:
-* Set **kylin.web.dashboard-enabled=true** in **kylin.properties**.
-* Set up the system cubes according to the [tutorial](howto_setup_systemcube.html).
-
-## How to use it
-
-#### Step 1:
-
-​	Click the '**Dashboard**' button on the navigation bar.
-
-There are nine boxes on this page that you can operate.
-
-The boxes represent different attributes, including '**Time Period**', '**Total Cube Count**', '**Avg Cube Expansion**', '**Query Count**', '**Average Query Latency**', '**Job Count**', '**Average Build Time per MB**', '**Data grouped by Project**' and '**Data grouped by Time**'. 
-
-![Kylin Dashboard](/images/Dashboard/QueryCount.jpg)
-
-#### Step 2:
-
-You should now click on the calendar to modify the '**Time Period**'.
-
-![SelectPeriod](/images/Dashboard/SelectPeriod.png)
-
-- '**Time Period**' is set by default to '**Last 7 Days**'.
-
-- There are **2** ways to modify the time period: one is *using standard time periods* and the other is *customizing your time period*.
-
-  1. If you want to *use standard time periods*, you can click on '**Last 7 Days**' to choose data only from the last 7 days, or click on '**This Month**' to choose data only from this month, or click on '**Last Month**' to choose data only from last month. 
-
-  2. If you want to *customize your time period*, you can click on '**Custom Range**'.
-
-     There are **2** ways to customize the time period: one is *typing dates in the text field* and the other is *selecting dates in the calendar*.
-
-     1. If you want to *type dates in the text field*, please make sure that both dates are valid.
-     2. If you want to *select dates in the calendar*, please make sure that you have clicked on two specific dates.
-
-- After you have modified the time period, click '**Apply**' to apply the changes, or click '**Cancel**' to discard them.
-
-#### Step 3:
-
-Now the data analysis will be changed and shown on the same page. (Important information has been pixelated.)
-
-- Numbers in '**Total Cube Count**' and '**Avg Cube Expansion**' are in **Blue**.
-
-  You can click the '**More Details**' in these two boxes and you will be led to the '**Model**' page. 
-
-  ![Cube-Info-Page](/images/Dashboard/Cube-Info-Page.png)
-
-
-- Numbers in '**Query Count**', '**Average Query Latency**', '**Job Count**' and '**Average Build Time per MB**' are in **Green**.
-
-  You can click on these four rectangles to get detailed information about the data you selected. The detailed information will then be shown as diagrams and displayed in the '**Data grouped by Project**' and '**Data grouped by Time**' boxes.
-
-  1. '**Query Count**' and '**Average Query Latency**'
-
-     You can click on '**Query Count**' to get detailed information. 
-
-     ![QueryCount](/images/Dashboard/QueryCount.jpg)
-
-     You can click on '**Average Query Latency**' to get detailed information. 
-
-     ![AVG-Query-Latency](/images/Dashboard/AVGQueryLatency.jpg)
-
-     You can click the '**More Details**' in these two boxes and you will be led to the '**Insight**' page. 
-
-     ![Query-Link-Page](/images/Dashboard/Query-Link-Page.png)
-
-  2. '**Job Count**' and '**Average Build Time per MB**'
-
-     You can click on '**Job Count**' to get detailed information. 
-
-     ![Job-Count](/images/Dashboard/JobCount.jpg)
-
-     You can click on '**Average Build Time per MB**' to get detailed information. 
-
-     ![AVG-Build-Time](/images/Dashboard/AVGBuildTimePerMB.jpg)
-
-     You can click the '**More Details**' in these two boxes and you will be led to the '**Monitor**' page.
-
-     ![Job-Link-Page](/images/Dashboard/Job-Link-Page.png)
-
-     It is common to see the browser showing 'Please wait...'.
-
-     ![Job-Link-Page-Waiting](/images/Dashboard/Job-Link-Page-Waiting.png)
-
-#### Step 4:
-
-**Advanced Operations**
-
-'**Data grouped by Project**' and '**Data grouped by Time**' display data in the form of diagrams.
-
-There is a radio button called '**showValue**' in '**Data grouped by Project**'; you can choose to show the numbers in the diagram.
-
-There is a drop-down menu in '**Data grouped by Time**'; you can choose to show the diagram on different timelines.
-There is a radio drop-down menu in '**Data grouped by Time**', you can choose to show the diagram in different timelines.
diff --git a/website/_docs21/howto/howto_use_distributed_scheduler.md b/website/_docs21/howto/howto_use_distributed_scheduler.md
deleted file mode 100644
index c141560..0000000
--- a/website/_docs21/howto/howto_use_distributed_scheduler.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: docs21
-title:  Use distributed job scheduler
-categories: howto
-permalink: /docs21/howto/howto_use_distributed_scheduler.html
----
-
-Since Kylin 2.0, Kylin supports a distributed job scheduler,
-which is more extensible, available and reliable than the default job scheduler.
-To enable the distributed job scheduler, you need to set or update three configs in kylin.properties (a full sketch follows the list):
-
-```
-1. kylin.job.scheduler.default=2
-2. kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-3. add all job servers and query servers to the kylin.server.cluster-servers
-```
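-
-A minimal sketch of the resulting kylin.properties entries (the host names are illustrative):
-
-```
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
-```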
diff --git a/website/_docs21/howto/howto_use_restapi.md b/website/_docs21/howto/howto_use_restapi.md
deleted file mode 100644
index 172d784..0000000
--- a/website/_docs21/howto/howto_use_restapi.md
+++ /dev/null
@@ -1,1206 +0,0 @@
----
-layout: docs21
-title:  Use RESTful API
-categories: howto
-permalink: /docs21/howto/howto_use_restapi.html
-since: v0.7.1
----
-
-This page lists the major RESTful APIs provided by Kylin.
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Enable cube](#enable-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Delete segment](#delete-segment)
-* JOB
-   * [Resume job](#resume-job)
-   * [Pause job](#pause-job)
-   * [Discard job](#discard-job)
-   * [Get job status](#get-job-status)
-   * [Get job step output](#get-job-step-output)
-   * [Get job list](#get-job-list)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-* Streaming
-   * [Initiate cube start position](#initiate-cube-start-position)
-   * [Build stream cube](#build-stream-cube)
-   * [Check segment holes](#check-segment-holes)
-   * [Fill segment holes](#fill-segment-holes)
-
-## Authentication
-`POST /kylin/api/user/authentication`
-
-#### Request Header
-Authorization data encoded with basic auth is needed in the header, for example:
-Authorization: Basic {data}
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-#### Curl Example
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If the login succeeds, the JSESSIONID will be saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
-```
-
-Alternatively, you can provide the username/password with the "--user" option in each curl call; please note this risks leaking the password into the shell history:
-
-
-```
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
-```
-
-***
-
-## Query
-`POST /kylin/api/query`
-
-#### Request Body
-* sql - `required` `string` The text of the SQL statement.
-* offset - `optional` `int` Query offset. If offset is set in the SQL, curIndex will be ignored.
-* limit - `optional` `int` Query limit. If limit is set in the SQL, perPage will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; the default is "false". Keep it "false" for production use. 
-* project - `optional` `string` Project to perform the query. The default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Curl Example
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
-```
-
-#### Response Body
-* columnMetas - Column metadata information of the result set.
-* results - Data set of the result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of rows affected by this SQL statement.
-* isException - Whether this response is an exception.
-* ExceptionMessage - Message content of the exception.
-* Duration - Time cost of this query.
-* Partial - Whether the response is a partial result or not, decided by the `acceptPartial` field of the request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-
-## List queryable tables
-`GET /kylin/api/tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to load tables
-
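-#### Curl Example
-
-A sketch of a request (the project name is illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://<host>:<port>/kylin/api/tables_and_columns?project=learn_kylin"
-```
-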
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## List cubes
-`GET /kylin/api/cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int` Cubes per page.
-* cubeName - `optional` `string` Keyword to match cube names; finds cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
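-#### Curl Example
-
-A sketch of a request (the parameter values are illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://<host>:<port>/kylin/api/cubes?offset=0&limit=10&projectName=learn_kylin"
-```
-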
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /kylin/api/cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /kylin/api/cube_desc/{cubeName}`
-Get the descriptor of the specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /kylin/api/model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it should be the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /kylin/api/cubes/{cubeName}/build`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-01-01
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
-#### Curl Example
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
-```
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /kylin/api/cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
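-#### Curl Example
-
-A sketch of a call (the cube name is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/kylin_sales/enable
-```
-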
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /kylin/api/cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /kylin/api/cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-
-## Delete Segment
-`DELETE /kylin/api/cubes/{cubeName}/segs/{segmentName}`
-
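-A sketch of a call (the cube and segment names are illustrative; the segment name format follows the response samples above):
-
-```
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/kylin_sales/segs/19700101000000_20140731160000
-```
-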
-***
-
-## Resume Job
-`PUT /kylin/api/jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-## Pause Job
-`PUT /kylin/api/jobs/{jobId}/pause`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Discard Job
-`PUT /kylin/api/jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
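-#### Curl Example
-
-A sketch of discarding a job (the job id is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/cancel
-```
-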
-## Get Job Status
-`GET /kylin/api/jobs/{jobId}`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume Job")
-
-## Get job step output
-`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id; for example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
-
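-#### Curl Example
-
-Following the example above:
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/steps/fb479e54-837f-49a2-b457-651fc50be110-3/output
-```
-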
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-## Get job list
-`GET /kylin/api/jobs`
-
-#### Request Variables
-* cubeName - `optional` `string` Cube name.
-* projectName - `required` `string` Project name.
-* status - `optional` `int` Job status, e.g. (NEW: 0, PENDING: 1, RUNNING: 2, STOPPED: 32, FINISHED: 4, ERROR: 8, DISCARDED: 16)
-* offset - `required` `int` Offset used by pagination.
-* limit - `required` `int` Jobs per page.
-* timeFilter - `required` `int`, e.g. (LAST ONE DAY: 0, LAST ONE WEEK: 1, LAST ONE MONTH: 2, LAST ONE YEAR: 3, ALL: 4)
-
-For example, to get the job list in project 'learn_kylin' for cube 'kylin_sales_cube' in the last week: 
-
-```
-GET: /kylin/api/jobs?cubeName=kylin_sales_cube&limit=15&offset=0&projectName=learn_kylin&timeFilter=1
-```
-
-#### Response Sample
-```
-[
-  { 
-    "uuid": "9eb7bccf-4448-4578-9c29-552658b5a2ca", 
-    "last_modified": 1490957579843, 
-    "version": "2.0.0", 
-    "name": "Sample_Cube - 19700101000000_20150101000000 - BUILD - GMT+08:00 2017-03-31 18:36:08", 
-    "type": "BUILD", 
-    "duration": 936, 
-    "related_cube": "Sample_Cube", 
-    "related_segment": "53a5d7f7-7e06-4ea1-b3ee-b7f30343c723", 
-    "exec_start_time": 1490956581743, 
-    "exec_end_time": 1490957518131, 
-    "mr_waiting": 0, 
-    "steps": [
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-00", 
-        "name": "Create Intermediate Flat Hive Table", 
-        "sequence_id": 0, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 1490957508721, 
-        "exec_end_time": 1490957518102, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": { "endTime": "1490957518102", "startTime": "1490957508721" }, 
-        "run_async": false 
-      }, 
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-01", 
-        "name": "Redistribute Flat Hive Table", 
-        "sequence_id": 1, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 0, 
-        "exec_end_time": 0, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": {}, 
-        "run_async": false 
-      }
-    ],
-    "submitter": "ADMIN", 
-    "job_status": "FINISHED", 
-    "progress": 100.0 
-  }
-]
-```
-***
-
-## Get Hive Table
-`GET /kylin/api/tables/{project}/{tableName}`
-
-#### Path Parameters
-* project - `required` `string` Project name.
-* tableName - `required` `string` Table name to find.
-
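-#### Curl Example
-
-A sketch of a call (the project and table names are illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/tables/learn_kylin/SAMPLE_07
-```
-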
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Tables
-`GET /kylin/api/tables`
-
-#### Request Parameters
-* project - `required` `string` List all tables in this project.
-* ext - `optional` `boolean` Set true to get extended info of the tables.
-
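-#### Curl Example
-
-A sketch of a call (the project name is illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://<host>:<port>/kylin/api/tables?project=learn_kylin&ext=true"
-```
-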
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /kylin/api/tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` Table names you want to load from Hive, separated by commas.
-* project - `required` `string` The project which the tables will be loaded into.
-
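-#### Curl Example
-
-A sketch of a call (the table and project names are illustrative):
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/tables/DEFAULT.SAMPLE_07,DEFAULT.SAMPLE_08/learn_kylin
-```
-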
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`PUT /kylin/api/cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g. the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
-
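-#### Curl Example
-
-A sketch of updating the cache entry of a cube (the cube name is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cache/CUBE/kylin_sales/update
-```
-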
-***
-
-## Initiate cube start position
-Set the stream cube's start position to the current latest offsets; this avoids building from the earliest position of the Kafka topic (if you have set a long retention time). 
-
-`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
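-#### Curl Example
-
-A sketch of a call (the cube name is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/your_streaming_cube/init_start_offsets
-```
-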
-#### Response Sample
-```sh
-{
-    "result": "success", 
-    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
-}
-```
-
-## Build stream cube
-`PUT /kylin/api/cubes/{cubeName}/build2`
-
-This API is specific to stream cube building.
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Request Body
-
-* sourceOffsetStart - `required` `long` The start offset; 0 means starting from the previous position.
-* sourceOffsetEnd  - `required` `long` The end offset; 9223372036854775807 represents the end position of the current stream data.
-* buildType - `required` Build type: "BUILD", "MERGE" or "REFRESH"
-
-#### Request Sample
-
-```sh
-{  
-   "sourceOffsetStart": 0, 
-   "sourceOffsetEnd": 9223372036854775807, 
-   "buildType": "BUILD"
-}
-```
-
-#### Response Sample
-```sh
-{
-    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
-    "last_modified": 1480402541240, 
-    "version": "1.6.0", 
-    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
-    "type": "BUILD", 
-    "duration": 0, 
-    "related_cube": "embedded_cube_clone", 
-    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
-    "exec_start_time": 0, 
-    "exec_end_time": 0, 
-    "mr_waiting": 0, 
- ...
-}
-```
-
-## Check segment holes
-`GET /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-## Fill segment holes
-`PUT /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
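-#### Curl Example
-
-A sketch of checking and then filling the holes of a cube (the cube name is illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/kylin_sales/holes
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/kylin_sales/holes
-```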
-
-
-## Use RESTful API in Javascript
-
-Key points of calling the Kylin RESTful API in a web page are:
-
-1. Add basic access authorization info in http headers.
-
-2. Use the proper request type and data syntax.
-
-Kylin security is based on basic access authentication; if you want to use the API in your JavaScript, you need to add authorization info to the HTTP headers; for example:
-
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js).
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
-
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
\ No newline at end of file
diff --git a/website/_docs21/howto/howto_use_restapi_in_js.md b/website/_docs21/howto/howto_use_restapi_in_js.md
deleted file mode 100644
index 585cdd5..0000000
--- a/website/_docs21/howto/howto_use_restapi_in_js.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs21
-title:  Use RESTful API in Javascript
-categories: howto
-permalink: /docs21/howto/howto_use_restapi_in_js.html
----
-Kylin security is based on basic access authentication; if you want to use the API in your JavaScript, you need to add authorization info to the HTTP headers.
-
-## Example on the Query API
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-## Key points
-1. Add basic access authorization info in the HTTP headers.
-2. Use the right ajax request type and data syntax.
-
-## Basic access authorization
-For what basic access authentication is, refer to the [Wikipedia page](http://en.wikipedia.org/wiki/Basic_access_authentication).
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
- 
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
diff --git a/website/_docs21/index.cn.md b/website/_docs21/index.cn.md
deleted file mode 100644
index 1f504be..0000000
--- a/website/_docs21/index.cn.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-layout: docs21-cn
-title: Overview
-categories: docs
-permalink: /cn/docs21/index.html
----
-
-Welcome to Apache Kylin™
-------------  
-> Extreme OLAP Engine for Big Data
-
-Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
-
-
-Installation
-------------  
-Please refer to the installation documentation to install Apache Kylin: [Installation Guide](/cn/docs20/install/)
-
-
-
-
-
-
diff --git a/website/_docs21/index.md b/website/_docs21/index.md
deleted file mode 100644
index e2f9b01..0000000
--- a/website/_docs21/index.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-layout: docs21
-title: Overview
-categories: docs
-permalink: /docs21/index.html
----
-
-
-Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
-------------  
-
-Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets.
-
-Installation & Setup
-------------  
-1. [Hadoop Env](install/hadoop_env.html)
-2. [Installation Guide](install/index.html)
-3. [Advanced settings](install/advance_settings.html)
-4. [Deploy in cluster mode](install/kylin_cluster.html)
-5. [Run Kylin with Docker](install/kylin_docker.html)
-6. [Install Kylin on AWS EMR](install/kylin_aws_emr.html)
-
-
-
-Tutorial
-------------  
-1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
-2. [Web Interface](tutorial/web.html)
-3. [Cube Wizard](tutorial/create_cube.html)
-4. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
-5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
-6. [Build Cube with Streaming Data](tutorial/cube_streaming.html)
-7. [Build Cube with Spark Engine](tutorial/cube_spark.html)
-8. [Cube Build Tuning](tutorial/cube_build_performance.html)
-9. [Enable Query Pushdown](tutorial/query_pushdown.html)
-
-
-
-Connectivity and APIs
-------------  
-1. [ODBC driver](tutorial/odbc.html)
-2. [JDBC driver](howto/howto_jdbc.html)
-3. [RESTful API list](howto/howto_use_restapi.html)
-4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
-5. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
-6. [Connect from Tableau 8](tutorial/tableau.html)
-7. [Connect from Tableau 9](tutorial/tableau_91.html)
-8. [Connect from MicroStrategy](tutorial/microstrategy.html)
-9. [Connect from SQuirreL](tutorial/squirrel.html)
-10. [Connect from Apache Flink](tutorial/flink.html)
-11. [Connect from Hue](tutorial/hue.html)
-12. [Connect from Qlik Sense](tutorial/Qlik.html)
-
-
-Operations
-------------  
-1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
-2. [Cleanup storage](howto/howto_cleanup_storage.html)
-3. [Upgrade from old version](howto/howto_upgrade.html)
-
-
-
diff --git a/website/_docs21/install/advance_settings.md b/website/_docs21/install/advance_settings.md
deleted file mode 100644
index 2e36448..0000000
--- a/website/_docs21/install/advance_settings.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-layout: docs21
-title:  "Advanced Settings"
-categories: install
-permalink: /docs21/install/advance_settings.html
----
-
-## Overwrite default kylin.properties at Cube level
-In `conf/kylin.properties` there are many parameters that control or impact Kylin's behavior; most are global configs, such as security or job related ones, while some are Cube related. The Cube-related parameters can be customized at each Cube level, so you can control the behavior more flexibly. The GUI for this is the "Configuration Overwrites" step of the Cube wizard, as in the screenshot below.
-
-![]( /images/install/overwrite_config_v2.png)
-
-Here are two examples: 
-
- * `kylin.cube.algorithm`: it defines the Cubing algorithm that the job engine will select; its default value is "auto", meaning the engine will dynamically pick an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly.   
-
- * `kylin.storage.hbase.region-cut-gb`: it defines how big a region is when creating the HBase table. The default value is "5" (GB) per region. That might be too big for a small or medium cube, so you can give it a smaller value to get more regions created, which can yield better query performance; see the sketch after this list.
-
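-As a sketch, the two overrides above would be entered as key/value pairs like the following (the values are illustrative):
-
-{% highlight Groff markup %}
-kylin.cube.algorithm=layer
-kylin.storage.hbase.region-cut-gb=1
-{% endhighlight %}
-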
-## Overwrite default Hadoop job conf at Cube level
-The `conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` manage the default configurations for Hadoop jobs. If you need to customize the configs per cube, you can do so in a similar way as above, but with the prefix `kylin.engine.mr.config-override.`; these configs will be parsed out and applied when submitting jobs. See the two examples below:
-
- * If you want a cube's jobs to get more memory from Yarn, you can define: `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
- * If you want a cube's jobs to go to a different Yarn resource queue, you can define: `kylin.engine.mr.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just a sample, change it to your queue name)
-
-## Overwrite default Hive job conf at Cube level
-
-The `conf/kylin_hive_conf.xml` manages the default configurations when running Hive jobs (like creating the intermediate flat hive table). If you need to customize the configs per cube, you can do so in a similar way as above, but with another prefix, `kylin.source.hive.config-override.`; these configs will be parsed out and applied when running "hive -e" or "beeline" commands. See the example below:
-
- * If you want Hive to go to a different Yarn resource queue, you can define: `kylin.source.hive.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just a sample, change it to your queue name)
-
-## Overwrite default Spark conf at Cube level
-
- The configurations for Spark are managed in `conf/kylin.properties` with the prefix `kylin.engine.spark-conf.`. For example, if you want to use the job queue "myQueue" to run Spark, setting "kylin.engine.spark-conf.spark.yarn.queue=myQueue" will pass "spark.yarn.queue=myQueue" to Spark when submitting applications. The parameters can be configured at Cube level, which will override the default values in `conf/kylin.properties`. 
-
-## Enable compression
-
-By default, Kylin does not enable compression; this is not the recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable compression algorithm will reduce the storage overhead, but an unsupported algorithm will break the Kylin job build. There are three kinds of compression used in Kylin: HBase table compression, Hive output compression and MR job output compression. 
-
-* HBase table compression
-The compression settings are defined in `kylin.properties` by `kylin.hbase.default.compression.codec`; the default value is *none*. Valid values include *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure the selected algorithm is supported on your HBase cluster; especially for snappy, lzo and lz4, not all Hadoop distributions include these. 
-
-* Hive output compression
-The compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages the Hive default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_hive_conf.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-* MR jobs output compression
-The compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the MR default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-Compression settings only take effect after restarting the Kylin server instance.
-
-## Allocate more memory to Kylin instance
-
-Open `bin/setenv.sh`, which has two sample settings for the `KYLIN_JVM_SETTINGS` environment variable; the default setting is small (4GB max). You can comment it out and un-comment the next line to allocate 16GB:
-
-{% highlight Groff markup %}
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
-{% endhighlight %}
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
-
-
-## Enable email notification
-
-Kylin can send email notifications on job completion/failure. To enable this, edit `conf/kylin.properties` and set the following parameters:
-{% highlight Groff markup %}
-mail.enabled=true
-mail.host=your-smtp-server
-mail.username=your-smtp-account
-mail.password=your-smtp-pwd
-mail.sender=your-sender-address
-kylin.job.admin.dls=administrator-address
-{% endhighlight %}
-
-Restart the Kylin server for the changes to take effect. To disable, set `mail.enabled` back to `false`.
-
-Administrators will get notifications for all jobs. Modelers and analysts need to enter their email address into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.
diff --git a/website/_docs21/install/hadoop_evn.md b/website/_docs21/install/hadoop_evn.md
deleted file mode 100644
index c20d40a..0000000
--- a/website/_docs21/install/hadoop_evn.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-layout: docs21
-title:  "Hadoop Environment"
-categories: install
-permalink: /docs21/install/hadoop_env.html
----
-
-Kylin needs to run on a Hadoop node; for better stability, we suggest you deploy it on a pure Hadoop client machine, on which command lines like `hive`, `hbase`, `hadoop`, `hdfs` are already installed and configured. The Linux account running Kylin must have permissions on the Hadoop cluster, including creating/writing HDFS files, Hive tables and HBase tables, and submitting MR jobs. 
-
-## Minimal Hadoop Versions
-
-* Hadoop: 2.7+
-* Hive: 0.13 - 1.2.1+
-* HBase: 0.98 - 0.99, 1.1+
-* JDK: 1.7+
-
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1. Windows and MacOS have known issues._
-
-To make things easier, we strongly recommend you try Kylin with an all-in-one sandbox VM, like the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB of memory. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**. 
-
-To avoid permission issues in the sandbox, you can use its `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
-
-We also suggest using bridged mode instead of NAT mode in the VirtualBox settings. Bridged mode will assign your sandbox an independent IP address so that you can avoid issues like [this](https://github.com/KylinOLAP/Kylin/issues/12).
-
-### Start Hadoop
-Use Ambari to launch Hadoop:
-
-```
-ambari-agent start
-ambari-server start
-```
-
-Once both commands run successfully, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check everything's status. **By default Hortonworks Ambari disables HBase; you need to manually start the `HBase` service on the Ambari homepage.**
-
-
- 
diff --git a/website/_docs21/install/index.cn.md b/website/_docs21/install/index.cn.md
deleted file mode 100644
index f01cb2e..0000000
--- a/website/_docs21/install/index.cn.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs21
-title:  "Installation Guide"
-categories: install
-permalink: /cn/docs21/install/index.html
-version: v0.7.2
-since: v0.7.1
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
-
-## Prerequisites on Hadoop
-
-* Hadoop: 2.4+
-* Hive: 0.13+
-* HBase: 0.98+, 1.x
-* JDK: 1.7+  
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
-
-
-It is most common to install Kylin on a Hadoop client machine. It can be used for demo purposes, or by those who want to host their own web site to provide the Kylin service. The scenario is depicted as:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture refers to Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like Hive and HBase.
-
-Except for some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build the sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if you have environment issues.
-4. To start Kylin, simply run **bin/kylin.sh start**
-5. To stop Kylin, simply run **bin/kylin.sh stop**
-
-> If you want to have multiple Kylin nodes please refer to [this](kylin_cluster.html)
-
-After Kylin has started you can visit <http://your_hostname:7070/kylin>. The username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in there. To start, you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build your own cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs21/install/index.md b/website/_docs21/install/index.md
deleted file mode 100644
index 6770f0f..0000000
--- a/website/_docs21/install/index.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs21
-title:  "Installation Guide"
-categories: install
-permalink: /docs21/install/index.html
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check [Hadoop Environment](hadoop_env.html).
-
-It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines including `hive`, `hbase`, `hadoop`, etc. The scenario is depicted as:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture refers to Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like Hive and HBase.
-
-Except for some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build the sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download the latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run the hadoop, hive and hbase commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if you have any environment issues.
-4. To start Kylin, run **bin/kylin.sh start**; after the server starts, you can watch logs/kylin.log for runtime logs (see the example below);
-5. To stop Kylin, run **bin/kylin.sh stop**
-
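-For example, to follow the runtime log after starting (a sketch; the path is relative to the extracted Kylin folder):
-
-```
-tail -f $KYLIN_HOME/logs/kylin.log
-```
-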
-> If you want to have multiple Kylin nodes running to provide high availability, please refer to [this](kylin_cluster.html)
-
-After Kylin starts you can visit <http://hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in there. To start, you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build a cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs21/install/kylin_aws_emr.md b/website/_docs21/install/kylin_aws_emr.md
deleted file mode 100644
index 55b922a..0000000
--- a/website/_docs21/install/kylin_aws_emr.md
+++ /dev/null
@@ -1,167 +0,0 @@
----
-layout: docs21
-title:  "Install Kylin on AWS EMR"
-categories: install
-permalink: /docs21/install/kylin_aws_emr.html
----
-
-Many users run Hadoop on a public cloud like AWS today. Apache Kylin, compiled with the standard Hadoop/HBase API, supports most mainstream Hadoop releases; the current version, Kylin v2.2, supports AWS EMR 5.0 to 5.10. This document introduces how to run Kylin on EMR.
-
-### Recommended Version
-* AWS EMR 5.7 (for EMR 5.8 and above, please check [KYLIN-3129](https://issues.apache.org/jira/browse/KYLIN-3129))
-* Apache Kylin v2.2.0 for HBase 1.x
-
-### Start EMR cluster
-
-Launch an EMR cluster with the AWS web console, command line or API. Select "**HBase**" in the applications, as Kylin needs the HBase service. 
-
-You can select "HDFS" or "S3" as the storage for HBase, depending on whether you need the Cube data to be persisted after shutting down the cluster. EMR HDFS uses the local disks of the EC2 instances, which are erased when the cluster is stopped, so Kylin metadata and Cube data can be lost.
-
-If you use "S3" as HBase's storage, you need to customize its configuration for "**hbase.rpc.timeout**", because the bulk load to S3 is a copy operation; when the data size is huge, the HBase region server needs to wait much longer to finish than on HDFS.
-
-```
-[  {
-    "Classification": "hbase-site",
-    "Properties": {
-      "hbase.rpc.timeout": "3600000",
-      "hbase.rootdir": "s3://yourbucket/EMRROOT"
-    }
-  },
-  {
-    "Classification": "hbase",
-    "Properties": {
-      "hbase.emr.storageMode": "s3"
-    }
-  }
-]
-```
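-
-For reference, one way to apply this classification JSON is to pass it when launching the cluster from the AWS CLI. This is only a sketch: the cluster name, release label, instance type and count, and the `hbase-s3.json` file name are assumptions, not values prescribed by this guide:
-
-```
-aws emr create-cluster \
-  --name "kylin-emr" \
-  --release-label emr-5.7.0 \
-  --applications Name=Hadoop Name=Hive Name=HBase \
-  --configurations file://hbase-s3.json \
-  --instance-type m4.xlarge \
-  --instance-count 3 \
-  --use-default-roles
-```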
-
-### Install Kylin
-
-When the EMR cluster is in "Waiting" status, you can SSH into its master node, download Kylin and then uncompress the tar ball:
-
-```
-sudo mkdir /usr/local/kylin
-sudo chown hadoop /usr/local/kylin
-cd /usr/local/kylin
-wget http://www-us.apache.org/dist/kylin/apache-kylin-2.2.0/apache-kylin-2.2.0-bin-hbase1x.tar.gz 
-tar -zxvf apache-kylin-2.2.0-bin-hbase1x.tar.gz
-```
-
-### Configure Kylin
-
-Before starting Kylin, you need to do a couple of configurations:
-
-- Copy the "hbase.zookeeper.quorum" property from /etc/hbase/conf/hbase-site.xml to $KYLIN\_HOME/conf/kylin\_job\_conf.xml, like this:
-
-
-```
-<property>
-  <name>hbase.zookeeper.quorum</name>
-  <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
-</property>
-```
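-
-One way to look up this value on the master node (a sketch):
-
-```
-grep -A 1 "hbase.zookeeper.quorum" /etc/hbase/conf/hbase-site.xml
-```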
-
-- Use HDFS as "kylin.env.hdfs-working-dir" (Recommended)
-
-EMR recommends that you **"use HDFS for intermediate data storage while the cluster is running and Amazon S3 only to input the initial data and output the final results"**. Kylin's 'hdfs-working-dir' holds the intermediate data for Cube building, the cuboid files and also some metadata files (like dictionaries and table snapshots, which are not a good fit for HBase); so it is best to configure HDFS for this. 
-
-If you use HDFS as the Kylin working directory, just leave the configuration unchanged, as EMR's default FS is HDFS:
-
-```
-kylin.env.hdfs-working-dir=/kylin
-```
-
-Before you shut down/restart the cluster, you must back up the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lose data and be unable to recover the cluster later.
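-
-A sketch of such a backup with the `s3-dist-cp` tool available on the EMR master node (the bucket name and backup prefix are placeholders):
-
-```
-s3-dist-cp --src hdfs:///kylin --dest s3://yourbucket/kylin_backup
-```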
-
-- Use S3 as "kylin.env.hdfs-working-dir" 
-
-If you want to use S3 as storage (assuming HBase is also on S3), you need to configure the following parameters:
-
-```
-kylin.env.hdfs-working-dir=s3://yourbucket/kylin
-kylin.storage.hbase.cluster-fs=s3://yourbucket
-kylin.source.hive.redistribute-flat-table=false
-```
-
-The intermediate files and the HFiles will all be written to S3. Build performance will be slower than on HDFS. Make sure you have a good understanding of the differences between S3 and HDFS. Read the following articles from AWS:
-
-[Input and Output Errors](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
-[Are you having trouble loading data to or from Amazon S3 into Hive](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-hive.html#emr-troubleshoot-error-hive-3)
-
-
-- Hadoop configurations
-
-Some Hadoop configurations need to be applied for better performance and data consistency on S3, according to [emr-troubleshoot-errors-io](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html):
-
-```
-<property>
-  <name>io.file.buffer.size</name>
-  <value>65536</value>
-</property>
-<property>
-  <name>mapred.map.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapred.reduce.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.map.speculative</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.reduce.speculative</name>
-  <value>false</value>
-</property>
-
-```
-
-
-- Create the working-dir folder if it doesn't exist
-
-```
-hadoop fs -mkdir /kylin 
-```
-
-or
-
-```
-hadoop fs -mkdir s3://yourbucket/kylin
-```
-
-### Start Kylin
-
-Starting Kylin is the same as on normal Hadoop:
-
-```
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.2.0-bin
-$KYLIN_HOME/bin/sample.sh
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-Don't forget to allow access to port 7070 in the security group for the EMR master ("ElasticMapReduce-master"), or set up an SSH tunnel to the master node; then you can access the Kylin Web GUI at http://\<master\-dns\>:7070/kylin
-
-Build the sample Cube, and then run queries when the Cube is ready. You can browse S3 to see whether the data is safely persisted.
-
-### Shut down EMR Cluster
-
-Before you shut down the EMR cluster, we suggest you take a backup of the Kylin metadata and upload it to S3.
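-
-A sketch of that backup using Kylin's bundled metastore tool and the AWS CLI; the dump location under $KYLIN_HOME/meta_backups and the bucket name are assumptions to adapt to your environment:
-
-```
-# dump the metadata to a local folder under $KYLIN_HOME/meta_backups/
-$KYLIN_HOME/bin/metastore.sh backup
-# upload the dump to S3
-aws s3 cp $KYLIN_HOME/meta_backups s3://yourbucket/kylin_meta_backup --recursive
-```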
-
-To shut down an Amazon EMR cluster without losing data that hasn't been written to Amazon S3, the MemStore cache needs to flush to Amazon S3 to write new store files. To do this, you can run a shell script provided on the EMR cluster. 
-
-```
-bash /usr/lib/hbase/bin/disable_all_tables.sh
-```
-
-To restart a cluster with the same HBase data, specify the same Amazon S3 location as the previous cluster either in the AWS Management Console or using the "hbase.rootdir" configuration property. For more information about EMR HBase, refer to [HBase on Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hbase-s3.html)
-
-	
-## Deploy Kylin in a dedicated EC2 
-
-Running Kylin on a dedicated client node (not a master, core or task node) is recommended. You can start a separate EC2 instance within the same VPC and subnet as your EMR cluster, copy the Hadoop clients from the master node to it, and then install Kylin on it. This improves the stability of the services on the master node as well as of Kylin itself. 
-	
-## Known issues on EMR
-* [KYLIN-3028](https://issues.apache.org/jira/browse/KYLIN-3028)
-* [KYLIN-3032](https://issues.apache.org/jira/browse/KYLIN-3032)
diff --git a/website/_docs21/install/kylin_cluster.md b/website/_docs21/install/kylin_cluster.md
deleted file mode 100644
index 10d63d0..0000000
--- a/website/_docs21/install/kylin_cluster.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: docs21
-title:  "Deploy in Cluster Mode"
-categories: install
-permalink: /docs21/install/kylin_cluster.html
----
-
-
-### Kylin Server modes
-
-Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (the kylin.metadata.url config in conf/kylin.properties). For load balancing purposes it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.).
-
-Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying the runtime mode. It has three options: "job" for running the job engine only, "query" for running the query engine only, and "all" for running both. Notice that only one server can run the job engine ("all" mode or "job" mode); the others must all be in "query" mode.
-
-A typical scenario is depicted in the following chart:
-
-![]( /images/install/kylin_server_modes.png)
-
-### Setting up Multiple Kylin REST servers
-
-If you are running Kylin in a cluster with multiple Kylin REST server instances, please make sure you have the following properties correctly configured in ${KYLIN_HOME}/conf/kylin.properties for EVERY server instance.
-
-1. kylin.rest.servers 
-	The list of web servers in use; this enables one web server instance to sync up with the other servers. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
-  
-2. kylin.server.mode
-	Make sure there is only one instance whose "kylin.server.mode" is set to "all" (or "job"); the others should be "query". A sketch of the resulting properties follows this list.
-	
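-A sketch of the resulting properties, assuming two instances on the example hosts sandbox1 and sandbox2 (the hostnames are placeholders):
-
-```
-# on sandbox1, the only instance running the job engine
-kylin.server.mode=all
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-
-# on sandbox2, query-only
-kylin.server.mode=query
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-```
-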
-## Setup load balancer 
-
-To enable Kylin high availability, you need to set up a load balancer in front of these servers and let it route incoming requests to the cluster. Clients send all requests to the load balancer instead of talking to a specific instance. A minimal sketch follows.
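-
-For illustration only, a minimal reverse-proxy sketch assuming Nginx as the balancer (any HTTP load balancer will do; the hostnames are the same placeholders as above):
-
-```
-upstream kylin_servers {
-    server sandbox1:7070;
-    server sandbox2:7070;
-}
-
-server {
-    listen 80;
-    location /kylin {
-        proxy_pass http://kylin_servers;
-    }
-}
-```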
-	
diff --git a/website/_docs21/install/kylin_docker.md b/website/_docs21/install/kylin_docker.md
deleted file mode 100644
index 2278e44..0000000
--- a/website/_docs21/install/kylin_docker.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-layout: docs21
-title:  "Run Kylin with Docker"
-categories: install
-permalink: /docs21/install/kylin_docker.html
-version: v1.5.3
-since: v1.5.2
----
-
-Apache Kylin runs as a client of the Hadoop cluster, so it is reasonable to run it within a Docker container; please check [this project](https://github.com/Kyligence/kylin-docker/) on GitHub.
diff --git a/website/_docs21/install/manual_install_guide.cn.md b/website/_docs21/install/manual_install_guide.cn.md
deleted file mode 100644
index b222ea5..0000000
--- a/website/_docs21/install/manual_install_guide.cn.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-layout: docs21-cn
-title:  "Manual Installation Guide"
-categories: install
-permalink: /cn/docs21/install/manual_install_guide.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Introduction
-
-In most cases our automated script ([Installation Guide](./index.html)) can help you launch Kylin in your hadoop sandbox or even your hadoop cluster. However, in case the deployment script fails, we wrote this document as a reference guide to fix your problem.
-
-Basically this document explains every step in the automated script. We assume that you are already very familiar with Hadoop operations on Linux.
-
-## Prerequisites
-* Copy the Kylin binary package to the local machine and extract it, then reference it via $KYLIN_HOME
-`export KYLIN_HOME=/path/to/kylin`
-`cd $KYLIN_HOME`
-
-### Start Kylin
-
-Start Kylin with
-
-`./bin/kylin.sh start`
-
-and stop Kylin with
-
-`./bin/kylin.sh stop`
diff --git a/website/_docs21/release_notes.md b/website/_docs21/release_notes.md
deleted file mode 100644
index d4bfa87..0000000
--- a/website/_docs21/release_notes.md
+++ /dev/null
@@ -1,1792 +0,0 @@
----
-layout: docs21
-title:  Release Notes
-categories: gettingstarted
-permalink: /docs21/release_notes.html
----
-
-To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/), 
-where source code packages, binary packages, the ODBC driver and installation guides are available.
-
-For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
-
-or send it to an Apache Kylin mailing list:
-
-* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
-* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
-
-## v2.2.0 - 2017-11-03
-
-_Tag:_ [kylin-2.2.0](https://github.com/apache/kylin/tree/kylin-2.2.0)
-This is a major release after 2.1, with more than 70 bug fixes and enhancements. Check [How to upgrade](/docs21/howto/howto_upgrade.html).
-
-__New Feature__
-* [KYLIN-2703] - Manage ACL through Apache Ranger
-* [KYLIN-2752] - Make HTable name prefix configurable
-* [KYLIN-2761] - Table Level ACL
-* [KYLIN-2775] - Streaming Cube Sample
-
-__Improvement__
-* [KYLIN-2535] - Use ResourceStore to manage ACL files
-* [KYLIN-2604] - Use global dict as the default encoding for precise distinct count in web
-* [KYLIN-2606] - Only return counter for precise count_distinct if query is exactAggregate
-* [KYLIN-2622] - AppendTrieDictionary support not global
-* [KYLIN-2623] - Move output(Hbase) related code from MR engine to outputside
-* [KYLIN-2653] - Spark Cubing read metadata from HDFS
-* [KYLIN-2717] - Move concept Table under Project
-* [KYLIN-2790] - Add an extending point to support other types of column family
-* [KYLIN-2795] - Improve REST API document, add get/list jobs
-* [KYLIN-2803] - Pushdown non "select" query
-* [KYLIN-2818] - Refactor dateRange & sourceOffset on CubeSegment
-* [KYLIN-2819] - Add "kylin.env.zookeeper-base-path" for zk path
-* [KYLIN-2823] - Trim TupleFilter after dictionary-based filter optimization
-* [KYLIN-2844] - Override "max-visit-scanrange" and "max-fuzzykey-scan" at cube level
-* [KYLIN-2854] - Remove duplicated controllers
-* [KYLIN-2856] - Log pushdown query as a kind of BadQuery
-* [KYLIN-2857] - MR configuration should be overwritten by user specified parameters when resuming MR jobs
-* [KYLIN-2858] - Add retry in cache sync
-* [KYLIN-2879] - Upgrade Spring & Spring Security to fix potential vulnerability
-* [KYLIN-2891] - Upgrade Tomcat to 7.0.82.
-* [KYLIN-2963] - Remove Beta for Spark Cubing
-
-__Bug__
-* [KYLIN-1794] - Enable job list even some job metadata parsing failed
-* [KYLIN-2600] - Incorrectly set the range start when filtering by the minimum value
-* [KYLIN-2705] - Allow removing model's "partition_date_column" on web
-* [KYLIN-2706] - Fix the bug for the comparator in SortedIteratorMergerWithLimit
-* [KYLIN-2707] - Fix NPE in JobInfoConverter
-* [KYLIN-2716] - Non-thread-safe WeakHashMap leading to high CPU
-* [KYLIN-2718] - Overflow when calculating combination amount based on static rules
-* [KYLIN-2753] - Job duration may become negative
-* [KYLIN-2766] - Kylin uses default FS to put the coprocessor jar, instead of the working dir
-* [KYLIN-2773] - Should not push down join condition related columns are compatible while not consistent
-* [KYLIN-2781] - Make 'find-hadoop-conf-dir.sh' executable
-* [KYLIN-2786] - Miss "org.apache.kylin.source.kafka.DateTimeParser"
-* [KYLIN-2788] - HFile is not written to S3
-* [KYLIN-2789] - Cube's last build time is wrong
-* [KYLIN-2791] - Fix bug in readLong function in BytesUtil
-* [KYLIN-2798] - Can't rearrange the order of rowkey columns though web UI
-* [KYLIN-2799] - Building cube with percentile measure encounter with NullPointerException
-* [KYLIN-2800] - All dictionaries should be built based on the flat hive table
-* [KYLIN-2806] - Empty results from JDBC with Date filter in prepareStatement
-* [KYLIN-2812] - Save to wrong database when loading Kafka Topic
-* [KYLIN-2814] - HTTP connection may not be released in RestClient
-* [KYLIN-2815] - Empty results with prepareStatement but OK with KylinStatement
-* [KYLIN-2824] - Parse Boolean type in JDBC driver
-* [KYLIN-2832] - Table meta missing from system diagnosis
-* [KYLIN-2833] - Storage cleanup job could delete the intermediate hive table used by running jobs
-* [KYLIN-2834] - Bug in metadata sync, Broadcaster lost listener after cache wipe
-* [KYLIN-2838] - Should get storageType in changeHtableHost of CubeMigrationCLI
-* [KYLIN-2862] - BasicClientConnManager in RestClient can't do well with syncing many query severs
-* [KYLIN-2863] - Double caret bug in sample.sh for old version bash
-* [KYLIN-2865] - Wrong fs when use two cluster
-* [KYLIN-2868] - Include and exclude filters not work on ResourceTool
-* [KYLIN-2870] - Shortcut key description is error at Kylin-Web
-* [KYLIN-2871] - Ineffective null check in SegmentRange
-* [KYLIN-2877] - Unclosed PreparedStatement in QueryService#execute()
-* [KYLIN-2906] - Check model/cube name is duplicated when creating model/cube
-* [KYLIN-2915] - Exception during query on lookup table
-* [KYLIN-2920] - Failed to get streaming config on WebUI
-* [KYLIN-2944] - HLLCSerializer, RawSerializer, PercentileSerializer returns shared object in serialize()
-* [KYLIN-2949] - Couldn't get authorities with LDAP in RedHat Linux
-
-
-__Task__
-* [KYLIN-2782] - Replace DailyRollingFileAppender with RollingFileAppender to allow log retention
-* [KYLIN-2925] - Provide document for Ranger security integration
-
-__Sub-task__
-* [KYLIN-2549] - Modify tools that related to Acl
-* [KYLIN-2728] - Introduce a new cuboid scheduler based on cuboid tree rather than static rules
-* [KYLIN-2729] - Introduce greedy algorithm for cube planner
-* [KYLIN-2730] - Introduce genetic algorithm for cube planner
-* [KYLIN-2802] - Enable cube planner phase one
-* [KYLIN-2826] - Add basic support classes for cube planner algorithms
-* [KYLIN-2961] - Provide user guide for Ranger Kylin Plugin
-
-## v2.1.0 - 2017-08-17
-
-_Tag:_ [kylin-2.1.0](https://github.com/apache/kylin/tree/kylin-2.1.0)
-This is a major release after 2.0, with more than 100 bug fixes and enhancements. Check [How to upgrade](/docs21/howto/howto_upgrade.html).
-
-__New Feature__
-
-* [KYLIN-1351] - Support RDBMS as data source
-* [KYLIN-2515] - Route unsupported query back to source
-* [KYLIN-2646] - Project level query authorization
-* [KYLIN-2665] - Add model JSON edit in web 
-
-__Improvement__
-
-* [KYLIN-2506] - Refactor Global Dictionary
-* [KYLIN-2562] - Allow configuring yarn app tracking URL pattern
-* [KYLIN-2578] - Refactor DistributedLock
-* [KYLIN-2579] - Improvement on subqueries: reorder subqueries joins with RelOptRule
-* [KYLIN-2580] - Improvement on subqueries: allow grouping by columns from subquery
-* [KYLIN-2586] - use random port for CacheServiceTest as fixed port 7777 might have been occupied
-* [KYLIN-2596] - Enable generating multiple streaming messages with one input message in streaming parser
-* [KYLIN-2597] - Deal with trivial expression in filters like x = 1 + 2
-* [KYLIN-2598] - Should not translate filter to a in-clause filter with too many elements
-* [KYLIN-2599] - select * in subquery fail due to bug in hackSelectStar 
-* [KYLIN-2602] - Add optional job threshold arg for MetadataCleanupJob
-* [KYLIN-2603] - Push 'having' filter down to storage
-* [KYLIN-2607] - Add http timeout for RestClient
-* [KYLIN-2610] - Optimize BuiltInFunctionTransformer performance
-* [KYLIN-2616] - GUI for multiple column count distinct measure
-* [KYLIN-2624] - Correct reporting of HBase errors
-* [KYLIN-2627] - ResourceStore to support simple rollback
-* [KYLIN-2628] - Remove synchronized modifier for reloadCubeLocalAt
-* [KYLIN-2633] - Upgrade Spark to 2.1
-* [KYLIN-2642] - Relax check in RowKeyColDesc to keep backward compatibility
-* [KYLIN-2667] - Ignore whitespace when caching query
-* [KYLIN-2668] - Support Calcites Properties in JDBC URL
-* [KYLIN-2673] - Support change the fact table when the cube is disable
-* [KYLIN-2676] - Keep UUID in metadata constant 
-* [KYLIN-2677] - Add project configuration view page
-* [KYLIN-2689] - Only dimension columns can join when create a model
-* [KYLIN-2691] - Support delete broken cube
-* [KYLIN-2695] - Allow override spark conf in cube
-* [KYLIN-2696] - Check SQL injection in model filter condition
-* [KYLIN-2700] - Allow override Kafka conf at cube level
-* [KYLIN-2704] - StorageCleanupJob should deal with a new metadata path
-* [KYLIN-2742] - Specify login page for Spring security 4.x
-* [KYLIN-2757] - Get cube size when using Azure Data Lake Store
-* [KYLIN-2783] - Refactor CuboidScheduler to be extensible
-* [KYLIN-2784] - Set User-Agent for ODBC/JDBC Drivers
-* [KYLIN-2793] - ODBC Driver - Bypass cert validation when connect to SSL service
-
-__Bug__
-
-* [KYLIN-1668] - Rowkey column shouldn't allow delete and add
-* [KYLIN-1683] - Row key could drag and drop in view state of cube - advanced settings tabpage
-* [KYLIN-2472] - Support Unicode chars in kylin.properties
-* [KYLIN-2493] - Fix BufferOverflowException in FactDistinctColumnsMapper when value exceeds 4096 bytes
-* [KYLIN-2540] - concat cascading is not supported
-* [KYLIN-2544] - Fix wrong left join type when editing lookup table
-* [KYLIN-2557] - Fix creating HBase table conflict when multiple kylin instances are starting concurrently
-* [KYLIN-2559] - Enhance check-env.sh to check 'kylin.env.hdfs-working-dir' to be mandatory
-* [KYLIN-2563] - Fix preauthorize-annotation bugs in query authorization
-* [KYLIN-2568] - 'kylin_port_replace_util.sh' should only modify the kylin port and keep other properties unchanged. 
-* [KYLIN-2571] - Return correct driver version from kylin jdbc driver
-* [KYLIN-2572] - Fix parsing 'hive_home' error in 'find-hive-dependency.sh'
-* [KYLIN-2573] - Enhance 'kylin.sh stop' to terminate kylin process finally
-* [KYLIN-2574] - RawQueryLastHacker should group by all possible dimensions
-* [KYLIN-2581] - Fix deadlock bugs in broadcast sync
-* [KYLIN-2582] - 'Server Config' should be refreshed automatically in web page 'System', after we update it successfully. 
-* [KYLIN-2588] - Query failed when two top-n measure with order by count(*) exists in one cube
-* [KYLIN-2589] - Enhance thread-safe in Authentication
-* [KYLIN-2592] - Fix distinct count measure build failed issue with spark cubing 
-* [KYLIN-2593] - Fix NPE issue when querying with Ton-N by count(*) 
-* [KYLIN-2594] - After reloading metadata, the project list should refresh
-* [KYLIN-2595] - Display column alias name when query with keyword 'As'
-* [KYLIN-2601] - The return type of tinyint for sum measure should be bigint
-* [KYLIN-2605] - Remove the hard-code sample data path in 'sample.sh'
-* [KYLIN-2608] - Bubble sort bug in JoinDesc
-* [KYLIN-2609] - Fix grant role access issue on project page.
-* [KYLIN-2611] - Unclosed HBaseAdmin in AclTableMigrationTool#checkTableExist
-* [KYLIN-2612] - Potential NPE accessing familyMap in AclTableMigrationTool#getAllAceInfo
-* [KYLIN-2613] - Wrong variable is used in DimensionDesc#hashCode
-* [KYLIN-2621] - Fix issue on mapping LDAP group to the admin group
-* [KYLIN-2637] - Show tips after creating project successfully
-* [KYLIN-2641] - The current selected project is incorrect after we delete a project.
-* [KYLIN-2643] - PreparedStatement should be closed in QueryServiceV2#execute()
-* [KYLIN-2644] - Fix "Add Project" after refreshing Insight page
-* [KYLIN-2647] - Should get FileSystem from HBaseConfiguration in HBaseResourceStore
-* [KYLIN-2648] - kylin.env.hdfs-working-dir should be qualified and absolute path
-* [KYLIN-2652] - Make KylinConfig threadsafe in CubeVisitService
-* [KYLIN-2655] - Fix wrong job duration issue when resuming the error or stopped job.
-* [KYLIN-2657] - Fix Cube merge NPE whose TopN dictionary not found
-* [KYLIN-2658] - Unclosed ResultSet in JdbcExplorer#loadTableMetadata()
-* [KYLIN-2660] - Show error tips if load hive error occurs and can not be connected.
-* [KYLIN-2661] - Fix Cube list page display issue when using MODELER or ANALYST
-* [KYLIN-2664] - Fix Extended column bug in web
-* [KYLIN-2670] - Fix CASE WHEN issue in orderby clause
-* [KYLIN-2674] - Should not catch OutOfMemoryError in coprocessor
-* [KYLIN-2678] - Fix minor issues in KylinConfigCLITest
-* [KYLIN-2684] - Fix Object.class not registered in Kyro issue with spark cubing
-* [KYLIN-2687] - When the model has a ready cube, should not allow user to edit model JSON in web.
-* [KYLIN-2688] - When the model has a ready cube, should not allow user to edit model JSON in web.
-* [KYLIN-2693] - Should use overrideHiveConfig for LookupHiveViewMaterialization and RedistributeFlatHiveTable
-* [KYLIN-2694] - Fix ArrayIndexOutOfBoundsException in SparkCubingByLayer
-* [KYLIN-2699] - Tomcat LinkageError for curator-client jar file conflict
-* [KYLIN-2701] - Unclosed PreparedStatement in QueryService#getPrepareOnlySqlResponse
-* [KYLIN-2702] - Ineffective null check in DataModelDesc#initComputedColumns()
-* [KYLIN-2707] - Fix NPE in JobInfoConverter
-* [KYLIN-2708] - cube merge operations can not execute success
-* [KYLIN-2711] - NPE if job output is lost
-* [KYLIN-2713] - Fix ITJdbcSourceTableLoaderTest.java and ITJdbcTableReaderTest.java missing license header
-* [KYLIN-2719] - serviceStartTime of CubeVisitService should not be an attribute which may be shared by multi-thread
-* [KYLIN-2743] - Potential corrupt TableDesc when loading an existing Hive table
-* [KYLIN-2748] - Calcite code generation can not gc cause OOM
-* [KYLIN-2754] - Fix Sync issue when reload existing hive table
-* [KYLIN-2758] - Query pushdown should be able to skip database prefix
-* [KYLIN-2762] - Get "Owner required" error on saving data model
-* [KYLIN-2767] - 404 error on click "System" tab
-* [KYLIN-2768] - Wrong UI for count distinct measure
-* [KYLIN-2769] - Non-partitioned cube doesn't need show start/end time
-* [KYLIN-2778] - Sample cube doesn't have ACL info
-* [KYLIN-2780] - QueryController.getMetadata and CacheController.wipeCache may be deadlock
-
-
-__Sub-task__
-
-* [KYLIN-2548] - Keep ACL information backward compatibile
-
-## v2.0.0 - 2017-04-30
-
-_Tag:_ [kylin-2.0.0](https://github.com/apache/kylin/tree/kylin-2.0.0)
-This is a major release with **Spark Cubing** and **Snowflake Data Model** support, and it runs the **TPC-H Benchmark**. Check out [the download](/download/) and the [how to upgrade guide](/docs20/howto/howto_upgrade.html).
-
-__New Feature__
-
-* [KYLIN-744] - Spark Cube Build Engine
-* [KYLIN-2006] - Make job engine distributed and HA
-* [KYLIN-2031] - New Fix_length_Hex encoding to support hash value and better Integer encoding to support negative value
-* [KYLIN-2180] - Add project config and make config priority become "cube > project > server"
-* [KYLIN-2240] - Add a toggle to ignore all cube signature inconsistency temporally
-* [KYLIN-2317] - Hybrid Cube CLI Tools
-* [KYLIN-2331] - By layer Spark cubing
-* [KYLIN-2351] - Support Cloud DFS as kylin.env.hdfs-working-dir
-* [KYLIN-2388] - Hot load kylin config from web
-* [KYLIN-2394] - Upgrade Calcite to 1.11 and Avatica to 1.9
-* [KYLIN-2396] - Percentile pre-aggregation implementation
-
-__Improvements__
-
-* [KYLIN-227] - Support "Pause" on Kylin Job
-* [KYLIN-490] - Support multiple column distinct count
-* [KYLIN-995] - Enable kylin to support joining the same lookup table more than once
-* [KYLIN-1832] - HyperLogLog codec performance improvement
-* [KYLIN-1875] - Snowflake schema support
-* [KYLIN-1971] - Cannot support columns with same name under different table
-* [KYLIN-2029] - lookup table support count(distinct column)
-* [KYLIN-2030] - lookup table support group by primary key when no derived dimension
-* [KYLIN-2096] - Support "select version()" SQL statement
-* [KYLIN-2131] - Load Kafka client configuration from properties files
-* [KYLIN-2133] - Check web server port availability when startup
-* [KYLIN-2135] - Enlarge FactDistinctColumns reducer number
-* [KYLIN-2136] - Enhance cubing algorithm selection
-* [KYLIN-2141] - Add include/exclude interface for ResourceTool
-* [KYLIN-2144] - move useful operation tools to org.apache.kylin.tool
-* [KYLIN-2163] - Refine kylin scripts, less verbose during start up
-* [KYLIN-2165] - Use hive table statistics data to get the total count
-* [KYLIN-2169] - Refactor AbstractExecutable to respect KylinConfig
-* [KYLIN-2170] - Mapper/Reducer cleanup() exception handling
-* [KYLIN-2175] - cubestatsreader support reading unfinished segments
-* [KYLIN-2181] - remove integer as fixed_length in test_kylin_cube_with_slr_empty desc
-* [KYLIN-2187] - Enhance TableExt metadata
-* [KYLIN-2192] - More Robust Global Dictionary
-* [KYLIN-2193] - parameterise org.apache.kylin.storage.translate.DerivedFilterTranslator#IN_THRESHOLD
-* [KYLIN-2195] - Setup naming convention for kylin properties
-* [KYLIN-2196] - Update Tomcat clas loader to parallel loader
-* [KYLIN-2198] - Add a framework to allow major changes in DimensionEncoding
-* [KYLIN-2205] - Use column name as the default dimension name
-* [KYLIN-2215] - Refactor DimensionEncoding.encode(byte[]) to encode(String)
-* [KYLIN-2217] - Reducers build dictionaries locally
-* [KYLIN-2220] - Enforce same name between Cube & CubeDesc
-* [KYLIN-2222] - web ui uses rest api to decide which dim encoding is valid for different typed columns
-* [KYLIN-2227] - rename kylin-log4j.properties to kylin-tools-log4j.properties and move it to global conf folder
-* [KYLIN-2238] - Add query server scan threshold
-* [KYLIN-2244] - "kylin.job.cuboid.size.memhungry.ratio" shouldn't be applied on measures like TopN
-* [KYLIN-2246] - redesign the way to decide layer cubing reducer count
-* [KYLIN-2248] - TopN merge further optimization after KYLIN-1917
-* [KYLIN-2252] - Enhance project/model/cube name check
-* [KYLIN-2255] - Drop v1 CubeStorageQuery, Storage Engine ID=0
-* [KYLIN-2263] - Display reasonable exception message if could not find kafka dependency for streaming build
-* [KYLIN-2266] - Reduce memory usage for building global dict
-* [KYLIN-2269] - Reduce MR memory usage for global dict
-* [KYLIN-2280] - A easier way to change all the conflict ports when start multi kylin instance in the same server
-* [KYLIN-2283] - Have a general purpose data generation tool
-* [KYLIN-2287] - Speed up model and cube list load in Web
-* [KYLIN-2290] - minor improvements on limit
-* [KYLIN-2294] - Refactor CI, merge with_slr and without_slr cubes
-* [KYLIN-2295] - Refactor CI, blend view cubes into the rest
-* [KYLIN-2296] - Allow cube to override kafka configuration
-* [KYLIN-2304] - Only copy latest version dict for global dict
-* [KYLIN-2306] - Tolerate Class missing when loading job list
-* [KYLIN-2307] - Make HBase 1.x the default of master
-* [KYLIN-2308] - Allow user to set more columnFamily in web
-* [KYLIN-2310] - Refactor CI, add IT for date/time encoding & extended column
-* [KYLIN-2312] - Display Server Config/Environment by order in system tab
-* [KYLIN-2314] - Add Integration Test (IT) for snowflake
-* [KYLIN-2323] - Refine Table load/unload error message
-* [KYLIN-2328] - Reduce the size of metadata uploaded to distributed cache
-* [KYLIN-2338] - refactor BitmapCounter.DataInputByteBuffer
-* [KYLIN-2349] - Serialize BitmapCounter with peekLength
-* [KYLIN-2353] - Serialize BitmapCounter with distinct count
-* [KYLIN-2358] - CuboidReducer has too many "if (aggrMask[i])" checks
-* [KYLIN-2359] - Update job build step name
-* [KYLIN-2364] - Output table name to error info in LookupTable
-* [KYLIN-2375] - Default cache size (10M) is too small
-* [KYLIN-2377] - Add kylin client query timeout
-* [KYLIN-2378] - Set job thread name with job uuid
-* [KYLIN-2379] - Add UseCMSInitiatingOccupancyOnly to KYLIN_JVM_SETTINGS
-* [KYLIN-2380] - Refactor DbUnit assertions
-* [KYLIN-2387] - A new BitmapCounter with better performance
-* [KYLIN-2389] - Improve resource utilization for DistributedScheduler
-* [KYLIN-2393] - Add "hive.auto.convert.join" and "hive.stats.autogather" to kylin_hive_conf.xml
-* [KYLIN-2400] - Simplify Dictionary interface
-* [KYLIN-2404] - Add "hive.merge.mapfiles" and "hive.merge.mapredfiles" to kylin_hive_conf.xml
-* [KYLIN-2409] - Performance tunning for in-mem cubing
-* [KYLIN-2411] - Kill MR job on pause
-* [KYLIN-2414] - Distinguish UHC columns from normal columns in KYLIN-2217
-* [KYLIN-2415] - Change back default metadata name to "kylin_metadata"
-* [KYLIN-2418] - Refactor pom.xml, drop unused parameter
-* [KYLIN-2422] - NumberDictionary support for decimal with extra 0 after "."
-* [KYLIN-2423] - Model should always include PK/FK as dimensions
-* [KYLIN-2424] - Optimize the integration test's performance
-* [KYLIN-2428] - Cleanup unnecessary shaded libraries for job/coprocessor/jdbc/server
-* [KYLIN-2436] - add a configuration knob to disable spilling of aggregation cache
-* [KYLIN-2437] - collect number of bytes scanned to query metrics
-* [KYLIN-2438] - replace scan threshold with max scan bytes
-* [KYLIN-2442] - Re-calculate expansion rate, count raw data size regardless of flat table compression
-* [KYLIN-2443] - Report coprocessor error information back to client
-* [KYLIN-2446] - Support project names filter in DeployCoprocessorCLI
-* [KYLIN-2451] - Set HBASE_RPC_TIMEOUT according to kylin.storage.hbase.coprocessor-timeout-seconds
-* [KYLIN-2489] - Upgrade zookeeper dependency to 3.4.8
-* [KYLIN-2494] - Model has no dup column on dimensions and measures
-* [KYLIN-2501] - Stream Aggregate GTRecords at Query Server
-* [KYLIN-2503] - Spark cubing step should show YARN app link
-* [KYLIN-2518] - Improve the sampling performance of FactDistinctColumns step
-* [KYLIN-2525] - Smooth upgrade to 2.0.0 from older metadata
-* [KYLIN-2527] - Speedup LookupStringTable, use HashMap instead of ConcurrentHashMap
-* [KYLIN-2528] - refine job email notification to support starttls and customized port
-* [KYLIN-2529] - Allow thread-local override of KylinConfig
-* [KYLIN-2545] - Number2BytesConverter could tolerate malformed numbers
-* [KYLIN-2560] - Fix license headers for 2.0.0 release
-
-__Bugs__
-
-* [KYLIN-1603] - Building job still finished even MR job error happened.
-* [KYLIN-1770] - Upgrade Calcite dependency (v1.10)
-* [KYLIN-1793] - Job couldn't stop when hive commands got error with beeline
-* [KYLIN-1945] - Cuboid.translateToValidCuboid method throw exception while cube building or query execute
-* [KYLIN-2077] - Inconsistent cube desc signature for CubeDesc
-* [KYLIN-2153] - Allow user to skip the check in CubeMetaIngester
-* [KYLIN-2155] - get-properties.sh doesn't support parameters starting with "-n"
-* [KYLIN-2166] - Unclosed HBaseAdmin in StorageCleanupJob#cleanUnusedHBaseTables
-* [KYLIN-2172] - Potential NPE in ExecutableManager#updateJobOutput
-* [KYLIN-2174] - partitoin column format visibility issue
-* [KYLIN-2176] - org.apache.kylin.rest.service.JobService#submitJob will leave orphan NEW segment in cube when exception is met
-* [KYLIN-2191] - Integer encoding error for width from 5 to 7
-* [KYLIN-2197] - Has only base cuboid for some cube desc
-* [KYLIN-2202] - Fix the conflict between KYLIN-1851 and KYLIN-2135
-* [KYLIN-2207] - Ineffective null check in ExtendCubeToHybridCLI#createFromCube()
-* [KYLIN-2208] - Unclosed FileReader in HiveCmdBuilder#build()
-* [KYLIN-2209] - Potential NPE in StreamingController#deserializeTableDesc()
-* [KYLIN-2211] - IDictionaryValueEnumerator should return String instead of byte[]
-* [KYLIN-2212] - 'NOT' operator in filter on derived column may get incorrect result
-* [KYLIN-2213] - UnsupportedOperationException when excute 'not like' query on cube v1
-* [KYLIN-2216] - Potential NPE in model#findTable() call
-* [KYLIN-2224] - "select * from fact inner join lookup " does not return values for look up columns
-* [KYLIN-2232] - cannot set partition date column pattern when edit a model
-* [KYLIN-2236] - JDBC statement.setMaxRows(10) is not working
-* [KYLIN-2237] - Ensure dimensions and measures of model don't have null column
-* [KYLIN-2242] - Directly write hdfs file in reducer is dangerous
-* [KYLIN-2243] - TopN memory estimation is inaccurate in some cases
-* [KYLIN-2251] - JDBC Driver httpcore dependency conflict
-* [KYLIN-2254] - A kind of sub-query does not work
-* [KYLIN-2262] - Get "null" error when trigger a build with wrong cube name
-* [KYLIN-2268] - Potential NPE in ModelDimensionDesc#init()
-* [KYLIN-2271] - Purge cube may delete building segments
-* [KYLIN-2275] - Remove dimensions cause wrong remove in advance settings
-* [KYLIN-2277] - SELECT * query returns a "COUNT__" column, which is not expected
-* [KYLIN-2282] - Step name "Build N-Dimension Cuboid Data : N-Dimension" is inaccurate
-* [KYLIN-2284] - intersect_count function error
-* [KYLIN-2288] - Kylin treat empty string as error measure which is inconsistent with hive
-* [KYLIN-2292] - workaround for CALCITE-1540
-* [KYLIN-2297] - Manually edit cube segment start/end time will throw error in UI
-* [KYLIN-2298] - timer component get wrong seconds
-* [KYLIN-2300] - Show MapReduce waiting time for each build step
-* [KYLIN-2301] - ERROR when executing query with subquery in "NOT IN" clause.
-* [KYLIN-2305] - Unable to use long searchBase/Pattern for LDAP
-* [KYLIN-2313] - Cannot find a cube in a subquery case with count distinct
-* [KYLIN-2316] - Build Base Cuboid Data ERROR
-* [KYLIN-2320] - TrieDictionaryForest incorrect getSizeOfId() when empty dictionary
-* [KYLIN-2326] - ERROR: ArrayIndexOutOfBoundsException: -1
-* [KYLIN-2329] - Between 0.06 - 0.01 and 0.06 + 0.01, returns incorrect result
-* [KYLIN-2330] - CubeDesc returns redundant DerivedInfo
-* [KYLIN-2337] - Remove expensive toString in SortedIteratorMergerWithLimit
-* [KYLIN-2340] - Some subquery returns incorrect result
-* [KYLIN-2341] - sum(case .. when ..) is not supported
-* [KYLIN-2342] - When NoClassDefFoundError occurred in building cube, no error in kylin.log
-* [KYLIN-2343] - When syn hive table, got error but actually the table is synced
-* [KYLIN-2347] - TPC-H query 13, too many HLLC objects exceed memory budget
-* [KYLIN-2348] - TPC-H query 20, requires multiple models in one query
-* [KYLIN-2356] - Incorrect result when filter on numeric columns
-* [KYLIN-2357] - Make ERROR_RECORD_LOG_THRESHOLD configurable
-* [KYLIN-2362] - Unify shell interpreter in scripts to avoid syntax diversity
-* [KYLIN-2367] - raw query like select * where ... returns empty columns
-* [KYLIN-2376] - Upgrade checkstyle plugin
-* [KYLIN-2382] - The column order of "select *" is not as defined in the table
-* [KYLIN-2383] - count distinct should not include NULL
-* [KYLIN-2390] - Wrong argument order for WinAggResetContextImpl()
-* [KYLIN-2391] - Unclosed FileInputStream in KylinConfig#getConfigAsString()
-* [KYLIN-2395] - Lots of warning messages about failing to scan jars in kylin.out
-* [KYLIN-2406] - TPC-H query 20, prevent NPE and give error hint
-* [KYLIN-2407] - TPC-H query 20, routing bug in lookup query and cube query
-* [KYLIN-2410] - Global dictionary does not respect the Hadoop configuration in mapper & reducer
-* [KYLIN-2416] - Max LDAP password length is 15 chars
-* [KYLIN-2419] - Rollback KYLIN-2292 workaround
-* [KYLIN-2426] - Tests will fail if env not satisfy hardcoded path in ITHDFSResourceStoreTest
-* [KYLIN-2429] - Variable initialized should be declared volatile in SparkCubingByLayer#execute()
-* [KYLIN-2430] - Unnecessary exception catching in BulkLoadJob
-* [KYLIN-2432] - Couldn't select partition column in some old browser (such as Google Chrome 18.0.1025.162)
-* [KYLIN-2433] - Handle the column that all records is null in MergeCuboidMapper
-* [KYLIN-2434] - Spark cubing does not respect config kylin.source.hive.database-for-flat-table
-* [KYLIN-2440] - Query failed if join condition columns not appear on cube
-* [KYLIN-2448] - Cloning a Model with a '-' in the name
-* [KYLIN-2449] - Rewrite should not run on OLAPAggregateRel if has no OLAPTable
-* [KYLIN-2452] - Throw NoSuchElementException when AggregationGroup size is 0
-* [KYLIN-2454] - Data generation tool will fail if column name is hive reserved keyword
-* [KYLIN-2457] - Should copy the latest dictionaries on dimension tables in a batch merge job
-* [KYLIN-2462] - PK and FK both as dimensions causes save cube failure
-* [KYLIN-2464] - Use ConcurrentMap instead of ConcurrentHashMap to avoid runtime errors
-* [KYLIN-2465] - Web page still has "Streaming cube build is not supported on UI" statements
-* [KYLIN-2474] - Build snapshot should check lookup PK uniqueness
-* [KYLIN-2481] - NoRealizationFoundException when there are similar cubes and models
-* [KYLIN-2487] - IN condition will convert to subquery join when its elements number exceeds 20
-* [KYLIN-2490] - Couldn't get cube size on Azure HDInsight
-* [KYLIN-2491] - Cube with error job can be dropped
-* [KYLIN-2502] - "Create flat table" and "redistribute table" steps don't show YARN application link
-* [KYLIN-2504] - Clone cube didn't keep the "engine_type" property
-* [KYLIN-2508] - Trans the time to UTC time when set the range of building cube
-* [KYLIN-2510] - Unintended NPE in CubeMetaExtractor#requireProject()
-* [KYLIN-2514] - Joins in data model fail to save when they disorder
-* [KYLIN-2516] - a table field can not be used as both dimension and measure in kylin 2.0
-* [KYLIN-2530] - Build cube failed with NoSuchObjectException, hive table not found 'default.kylin_intermediate_xxxx'
-* [KYLIN-2536] - Replace the use of org.codehaus.jackson
-* [KYLIN-2537] - HBase Read/Write separation bug introduced by KYLIN-2351
-* [KYLIN-2539] - Useless filter dimension will impact cuboid selection.
-* [KYLIN-2541] - Beeline SQL not printed in logs
-* [KYLIN-2543] - Still build dictionary for TopN group by column even using non-dict encoding
-* [KYLIN-2555] - minor issues about acl and granted autority
-
-__Tasks__
-
-* [KYLIN-1799] - Add a document to setup kylin on spark engine?
-* [KYLIN-2293] - Refactor KylinConfig to remove test related code
-* [KYLIN-2327] - Enable check-style for test code
-* [KYLIN-2344] - Package spark into Kylin binary package
-* [KYLIN-2368] - Enable Findbugs plugin
-* [KYLIN-2386] - Revert KYLIN-2349 and KYLIN-2353
-* [KYLIN-2521] - upgrade to calcite 1.12.0
-
-
-## v1.6.0 - 2016-11-26
-
-_Tag:_ [kylin-1.6.0](https://github.com/apache/kylin/tree/kylin-1.6.0)
-This is a major release with better support for using Apache Kafka as a data source. Check [how to upgrade](/docs16/howto/howto_upgrade.html) for the upgrade steps.
-
-__New Feature__
-
-* [KYLIN-1726] - Scalable streaming cubing
-* [KYLIN-1919] - Support Embedded Structure when Parsing Streaming Message
-* [KYLIN-2055] - Add an encoder for Boolean type
-* [KYLIN-2067] - Add API to check and fill segment holes
-* [KYLIN-2079] - add explicit configuration knob for coprocessor timeout
-* [KYLIN-2088] - Support intersect count for calculation of retention or conversion rates
-* [KYLIN-2125] - Support using beeline to load hive table metadata
-
-__Bug__
-
-* [KYLIN-1565] - Read the kv max size from HBase config
-* [KYLIN-1820] - Column autocomplete should remove the user input in model designer
-* [KYLIN-1828] - java.lang.StringIndexOutOfBoundsException in org.apache.kylin.storage.hbase.util.StorageCleanupJob
-* [KYLIN-1967] - Dictionary rounding can cause IllegalArgumentException in GTScanRangePlanner
-* [KYLIN-1978] - kylin.sh compatible issue on Ubuntu
-* [KYLIN-1990] - The SweetAlert at the front page may out of the page if the content is too long.
-* [KYLIN-2007] - CUBOID_CACHE is not cleared when rebuilding ALL cache
-* [KYLIN-2012] - more robust approach to hive schema changes
-* [KYLIN-2024] - kylin TopN only support the first measure 
-* [KYLIN-2027] - Error "connection timed out" occurs when zookeeper's port is set in hbase.zookeeper.quorum of hbase-site.xml
-* [KYLIN-2028] - find-*-dependency script fail on Mac OS
-* [KYLIN-2035] - Auto Merge Submit Continuously
-* [KYLIN-2041] - Wrong parameter definition in Get Hive Tables REST API
-* [KYLIN-2043] - Rollback httpclient to 4.2.5 to align with Hadoop 2.6/2.7
-* [KYLIN-2044] - Unclosed DataInputByteBuffer in BitmapCounter#peekLength
-* [KYLIN-2045] - Wrong argument order in JobInstanceExtractor#executeExtract()
-* [KYLIN-2047] - Ineffective null check in MetadataManager
-* [KYLIN-2050] - Potentially ineffective call to close() in QueryCli
-* [KYLIN-2051] - Potentially ineffective call to IOUtils.closeQuietly()
-* [KYLIN-2052] - Edit "Top N" measure, the "group by" column wasn't displayed
-* [KYLIN-2059] - Concurrent build issue in CubeManager.calculateToBeSegments()
-* [KYLIN-2069] - NPE in LookupStringTable
-* [KYLIN-2078] - Can't see generated SQL at Web UI
-* [KYLIN-2084] - Unload sample table failed
-* [KYLIN-2085] - PrepareStatement return incorrect result in some cases
-* [KYLIN-2086] - Still report error when there is more than 12 dimensions in one agg group
-* [KYLIN-2093] - Clear cache in CubeMetaIngester
-* [KYLIN-2097] - Get 'Column does not exist in row key desc" on cube has TopN measure
-* [KYLIN-2099] - Import table error of sample table KYLIN_CAL_DT
-* [KYLIN-2106] - UI bug - Advanced Settings - Rowkeys - new Integer dictionary encoding - could possibly impact also cube metadata
-* [KYLIN-2109] - Deploy coprocessor only this server own the table
-* [KYLIN-2110] - Ineffective comparison in BooleanDimEnc#equals()
-* [KYLIN-2114] - WEB-Global-Dictionary bug fix and improve
-* [KYLIN-2115] - some extended column query returns wrong answer
-* [KYLIN-2116] - when hive field delimitor exists in table field values, fields order is wrong
-* [KYLIN-2119] - Wrong chart value and sort when process scientific notation 
-* [KYLIN-2120] - kylin1.5.4.1 with cdh5.7 cube sql Oops Faild to take action
-* [KYLIN-2121] - Failed to pull data to PowerBI or Excel on some query
-* [KYLIN-2127] - UI bug fix for Extend Column
-* [KYLIN-2130] - QueryMetrics concurrent bug fix
-* [KYLIN-2132] - Unable to pull data from Kylin Cube ( learn_kylin cube ) to Excel or Power BI for Visualization and some dimensions are not showing up.
-* [KYLIN-2134] - Kylin will treat empty string as NULL by mistake
-* [KYLIN-2137] - Failed to run mr job when user put a kafka jar in hive's lib folder
-* [KYLIN-2138] - Unclosed ResultSet in BeelineHiveClient
-* [KYLIN-2146] - "Streaming Cluster" page should remove "Margin" inputbox
-* [KYLIN-2152] - TopN group by column does not distinguish between NULL and ""
-* [KYLIN-2154] - source table rows will be skipped if TOPN's group column contains NULL values
-* [KYLIN-2158] - Delete joint dimension not right
-* [KYLIN-2159] - Redistribution Hive Table Step always requires row_count filename as 000000_0 
-* [KYLIN-2167] - FactDistinctColumnsReducer may get wrong max/min partition col value
-* [KYLIN-2173] - push down limit leads to wrong answer when filter is loosened
-* [KYLIN-2178] - CubeDescTest is unstable
-* [KYLIN-2201] - Cube desc and aggregation group rule combination max check fail
-* [KYLIN-2226] - Build Dimension Dictionary Error
-
-__Improvement__
-
-* [KYLIN-1042] - Horizontal scalable solution for streaming cubing
-* [KYLIN-1827] - Send mail notification when runtime exception throws during build/merge cube
-* [KYLIN-1839] - improvement set classpath before submitting mr job
-* [KYLIN-1917] - TopN counter merge performance improvement
-* [KYLIN-1962] - Split kylin.properties into two files
-* [KYLIN-1999] - Use some compression at UT/IT
-* [KYLIN-2019] - Add license checker into checkstyle rule
-* [KYLIN-2033] - Refactor broadcast of metadata change
-* [KYLIN-2042] - QueryController puts entry in Cache w/o checking QueryCacheEnabled
-* [KYLIN-2054] - TimedJsonStreamParser should support other time format
-* [KYLIN-2068] - Import hive comment when sync tables
-* [KYLIN-2070] - UI changes for allowing concurrent build/refresh/merge
-* [KYLIN-2073] - Need timestamp info for diagnose  
-* [KYLIN-2075] - TopN measure: need select "constant" + "1" as the SUM|ORDER parameter
-* [KYLIN-2076] - Improve sample cube and data
-* [KYLIN-2080] - UI: allow multiple building jobs for the same cube
-* [KYLIN-2082] - Support to change streaming configuration
-* [KYLIN-2089] - Make update HBase coprocessor concurrent
-* [KYLIN-2090] - Allow updating cube level config even the cube is ready
-* [KYLIN-2091] - Add API to init the start-point (of each parition) for streaming cube
-* [KYLIN-2095] - Hive mr job use overrided MR job configuration by cube properties
-* [KYLIN-2098] - TopN support query UHC column without sorting by sum value
-* [KYLIN-2100] - Allow cube to override HIVE job configuration by properties
-* [KYLIN-2108] - Support usage of schema name "default" in SQL
-* [KYLIN-2111] - only allow columns from Model dimensions when add group by column to TOP_N
-* [KYLIN-2112] - Allow a column be a dimension as well as "group by" column in TopN measure
-* [KYLIN-2113] - Need sort by columns in SQLDigest
-* [KYLIN-2118] - allow user view CubeInstance json even cube is ready
-* [KYLIN-2122] - Move the partition offset calculation before submitting job
-* [KYLIN-2126] - use column name as default dimension name when auto generate dimension for lookup table
-* [KYLIN-2140] - rename packaged js with different name when build
-* [KYLIN-2143] - allow more options from Extended Columns,COUNT_DISTINCT,RAW_TABLE
-* [KYLIN-2162] - Improve the cube validation error message
-* [KYLIN-2221] - rethink on KYLIN-1684
-* [KYLIN-2083] - more RAM estimation test for MeasureAggregator and GTAggregateScanner
-* [KYLIN-2105] - add QueryId
-* [KYLIN-1321] - Add derived checkbox for lookup table columns on Auto Generate Dimensions panel
-* [KYLIN-1995] - Upgrade MapReduce properties which are deprecated
-
-__Task__
-
-* [KYLIN-2072] - Cleanup old streaming code
-* [KYLIN-2081] - UI change to support embeded streaming message
-* [KYLIN-2171] - Release 1.6.0
-
-
-## v1.5.4.1 - 2016-09-28
-_Tag:_ [kylin-1.5.4.1](https://github.com/apache/kylin/tree/kylin-1.5.4.1)
-This version fixes two major bugs introduced in 1.5.4; the metadata and HBase coprocessor are compatible with 1.5.4.
-
-__Bug__
-
-* [KYLIN-2010] - Date dictionary return wrong SQL result
-* [KYLIN-2026] - NPE occurs when build a cube without partition column
-* [KYLIN-2032] - Cube build failed when partition column isn't in dimension list
-
-## v1.5.4 - 2016-09-15
-_Tag:_ [kylin-1.5.4](https://github.com/apache/kylin/tree/kylin-1.5.4)
-This version includes bug fixes and enhancements as well as new features. It is backward compatible with v1.5.3; but after the upgrade, you still need to update the coprocessor, refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1732] - Support Window Function
-* [KYLIN-1767] - UI for TopN: specify encoding and multiple "group by"
-* [KYLIN-1849] - Search cube by name in Web UI
-* [KYLIN-1908] - Collect Metrics to JMX
-* [KYLIN-1921] - Support Grouping Funtions
-* [KYLIN-1964] - Add a companion tool of CubeMetaExtractor for cube importing
-
-__Bug__
-
-* [KYLIN-962] - [UI] Cube Designer can't drag rowkey normally
-* [KYLIN-1194] - Filter(CubeName) on Jobs/Monitor page works only once
-* [KYLIN-1488] - When modifying a model, Save after deleting a lookup table. The internal error will pop up.
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1808] - unload non existing table cause NPE
-* [KYLIN-1834] - java.lang.IllegalArgumentException: Value not exists! - in Step 4 - Build Dimension Dictionary
-* [KYLIN-1883] - Consensus Problem when running the tool, MetadataCleanupJob
-* [KYLIN-1889] - Didn't deal with the failure of renaming folder in hdfs when running the tool CubeMigrationCLI
-* [KYLIN-1929] - Error to load slow query in "Monitor" page for non-admin user
-* [KYLIN-1933] - Deploy in cluster mode, the "query" node report "scheduler has not been started" every second
-* [KYLIN-1934] - 'Value not exist' During Cube Merging Caused by Empty Dict
-* [KYLIN-1939] - Linkage error while executing any queries
-* [KYLIN-1942] - Models are missing after change project's name
-* [KYLIN-1953] - Error handling for diagnosis
-* [KYLIN-1956] - Can't query from child cube of a hybrid cube after its status changed from disabled to enabled
-* [KYLIN-1961] - Project name is always constant instead of real project name in email notification
-* [KYLIN-1970] - System Menu UI ACL issue
-* [KYLIN-1972] - Access denied when query seek to hybrid
-* [KYLIN-1973] - java.lang.NegativeArraySizeException when Build Dimension Dictionary
-* [KYLIN-1982] - CubeMigrationCLI: associate model with project
-* [KYLIN-1986] - CubeMigrationCLI: make global dictionary unique
-* [KYLIN-1992] - Clear ThreadLocal Contexts when query failed before scaning HBase
-* [KYLIN-1996] - Keep original column order when designing cube
-* [KYLIN-1998] - Job engine lock is not release at shutdown
-* [KYLIN-2003] - error start time at query result page
-* [KYLIN-2005] - Move all storage side behavior hints to GTScanRequest
-
-__Improvement__
-
-* [KYLIN-672] - Add Env and Project Info in job email notification
-* [KYLIN-1702] - The Key of the Snapshot to the related lookup table may be not informative
-* [KYLIN-1855] - Should exclude those joins in whose related lookup tables no dimensions are used in cube
-* [KYLIN-1858] - Remove all InvertedIndex(Streaming purpose) related codes and tests
-* [KYLIN-1866] - Add tip for field at 'Add Streaming' table page.
-* [KYLIN-1867] - Upgrade dependency libraries
-* [KYLIN-1874] - Make roaring bitmap version determined
-* [KYLIN-1898] - Upgrade to Avatica 1.8 or higher
-* [KYLIN-1904] - WebUI for GlobalDictionary
-* [KYLIN-1906] - Add more comments and default value for kylin.properties
-* [KYLIN-1910] - Support Separate HBase Cluster with NN HA and Kerberos Authentication
-* [KYLIN-1920] - Add view CubeInstance json function
-* [KYLIN-1922] - Improve the logic to decide whether to pre aggregate on Region server
-* [KYLIN-1923] - Add access controller to query
-* [KYLIN-1924] - Region server metrics: replace int type for long type for scanned row count
-* [KYLIN-1925] - Do not allow cross project clone for cube
-* [KYLIN-1926] - Loosen the constraint on FK-PK data type matching
-* [KYLIN-1936] - Improve enable limit logic (exactAggregation is too strict)
-* [KYLIN-1940] - Add owner for DataModel
-* [KYLIN-1941] - Show submitter for slow query
-* [KYLIN-1954] - BuildInFunctionTransformer should be executed per CubeSegmentScanner
-* [KYLIN-1963] - Delegate the loading of certain package (like slf4j) to tomcat's parent classloader
-* [KYLIN-1965] - Check duplicated measure name
-* [KYLIN-1966] - Refactor IJoinedFlatTableDesc
-* [KYLIN-1979] - Move hackNoGroupByAggregation to cube-based storage implementations
-* [KYLIN-1984] - Don't use compression in packaging configuration
-* [KYLIN-1985] - SnapshotTable should only keep the columns described in tableDesc
-* [KYLIN-1997] - Add pivot feature back in query result page
-* [KYLIN-2004] - Make the creating intermediate hive table steps configurable (two options)
-
-## v1.5.3 - 2016-07-28
-_Tag:_ [kylin-1.5.3](https://github.com/apache/kylin/tree/kylin-1.5.3)
-This version includes many bug fixes and enhancements as well as new features. It is backward compatible with v1.5.2; but after the upgrade, you need to update the coprocessor, refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1478] - TopN measure should support non-dictionary encoding for ultra high cardinality
-* [KYLIN-1693] - Support multiple group-by columns for TOP_N meausre
-* [KYLIN-1752] - Add an option to fail cube build job when source table is empty
-* [KYLIN-1756] - Allow user to run MR jobs against different Hadoop queues
-
-__Bug__
-
-* [KYLIN-1499] - Couldn't save query, error in backend
-* [KYLIN-1568] - Calculate row value buffer size instead of hard coded ROWVALUE_BUFFER_SIZE
-* [KYLIN-1645] - Exception inside coprocessor should report back to the query thread
-* [KYLIN-1646] - Column appeared twice if it was declared as both dimension and measure
-* [KYLIN-1676] - High CPU in TrieDictionary due to incorrect use of HashMap
-* [KYLIN-1679] - bin/get-properties.sh cannot get property which contains space or equals sign
-* [KYLIN-1684] - query on table "kylin_sales" return empty resultset after cube "kylin_sales_cube" which generated by sample.sh is ready
-* [KYLIN-1694] - make multiply coefficient configurable when estimating cuboid size
-* [KYLIN-1695] - Skip cardinality calculation job when loading hive table
-* [KYLIN-1703] - The not-thread-safe ToolRunner.run() will cause concurrency issue in job engine
-* [KYLIN-1704] - When load empty snapshot, NULL Pointer Exception occurs
-* [KYLIN-1723] - GTAggregateScanner$Dump.flush() must not write the WHOLE metrics buffer
-* [KYLIN-1738] - MRJob Id is not saved to kylin jobs if MR job is killed
-* [KYLIN-1742] - kylin.sh should always set KYLIN_HOME to an absolute path
-* [KYLIN-1755] - TopN Measure IndexOutOfBoundsException
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1762] - Query threw NPE with 3 or more join conditions
-* [KYLIN-1769] - There is no response when click "Property" button at Cube Designer
-* [KYLIN-1777] - Streaming cube build shouldn't check working segment
-* [KYLIN-1780] - Potential issue in SnapshotTable.equals()
-* [KYLIN-1781] - kylin.properties encoding error while contain chinese prop key or value
-* [KYLIN-1783] - Can't add override property at cube design 'Configuration Overwrites' step.
-* [KYLIN-1785] - NoSuchElementException when Mandatory Dimensions contains all Dimensions
-* [KYLIN-1787] - Properly deal with limit clause in CubeHBaseEndpointRPC (SELECT * problem)
-* [KYLIN-1788] - Allow arbitrary number of mandatory dimensions in one aggregation group
-* [KYLIN-1789] - Couldn't use View as Lookup when join type is "inner"
-* [KYLIN-1795] - bin/sample.sh doesn't work when configured hive client is beeline
-* [KYLIN-1800] - IllegalArgumentException: Too many digits for NumberDictionary: -0.009999999999877218. Expect 19 digits before decimal point at max.
-* [KYLIN-1803] - ExtendedColumn Measure Encoding with Non-ascii Characters
-* [KYLIN-1811] - Error step may be skipped sometimes when resume a cube job
-* [KYLIN-1816] - More than one base KylinConfig exist in spring JVM
-* [KYLIN-1817] - No result from JDBC with Date filter in prepareStatement
-* [KYLIN-1838] - Fix sample cube definition
-* [KYLIN-1848] - Can't sort cubes by any field in Web UI
-* [KYLIN-1862] - "table not found" in "Build Dimension Dictionary" step
-* [KYLIN-1879] - RestAPI /api/jobs always returns 0 for exec_start_time and exec_end_time fields
-* [KYLIN-1882] - it report can't find the intermediate table in '#4 Step Name: Build Dimension Dictionary' when use hive view as lookup table
-* [KYLIN-1896] - JDBC support mybatis
-* [KYLIN-1905] - Wrong Default Date in Cube Build Web UI
-* [KYLIN-1909] - Wrong access control to rest get cubes
-* [KYLIN-1911] - NPE when extended column has NULL value
-* [KYLIN-1912] - Create Intermediate Flat Hive Table failed when using beeline
-* [KYLIN-1913] - query log printed abnormally if the query contains "\r" (not "\r\n")
-* [KYLIN-1918] - java.lang.UnsupportedOperationException when unload hive table
-
-__Improvement__
-
-* [KYLIN-1319] - Find a better way to check hadoop job status
-* [KYLIN-1379] - More stable and functional precise count distinct implementation after KYLIN-1186
-* [KYLIN-1656] - Improve performance of MRv2 engine by making each mapper handles a configured number of records
-* [KYLIN-1657] - Add new configuration kylin.job.mapreduce.min.reducer.number
-* [KYLIN-1669] - Deprecate the "Capacity" field from DataModel
-* [KYLIN-1677] - Distribute source data by certain columns when creating flat table
-* [KYLIN-1705] - Global (and more scalable) dictionary
-* [KYLIN-1706] - Allow cube to override MR job configuration by properties
-* [KYLIN-1714] - Make job/source/storage engines configurable from kylin.properties
-* [KYLIN-1717] - Make job engine scheduler configurable
-* [KYLIN-1718] - Grow ByteBuffer Dynamically in Cube Building and Query
-* [KYLIN-1719] - Add config in scan request to control compress the query result or not
-* [KYLIN-1724] - Support Amazon EMR
-* [KYLIN-1725] - Use KylinConfig inside coprocessor
-* [KYLIN-1728] - Introduce dictionary metadata
-* [KYLIN-1731] - allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
-* [KYLIN-1747] - Calculate all 0 (except mandatory) cuboids
-* [KYLIN-1749] - Allow mandatory only cuboid
-* [KYLIN-1751] - Make kylin log configurable
-* [KYLIN-1766] - CubeTupleConverter.translateResult() is slow due to date conversion
-* [KYLIN-1775] - Add Cube Migrate Support for Global Dictionary
-* [KYLIN-1782] - API redesign for CubeDesc
-* [KYLIN-1786] - Frontend work for KYLIN-1313 (extended columns as measure)
-* [KYLIN-1792] - behaviours for non-aggregated queries
-* [KYLIN-1805] - It's easily got stuck when deleting HTables during running the StorageCleanupJob
-* [KYLIN-1815] - Cleanup package size
-* [KYLIN-1818] - change kafka dependency to provided
-* [KYLIN-1821] - Reformat all of the java files and enable checkstyle to enforce code formatting
-* [KYLIN-1823] - refactor kylin-server packaging
-* [KYLIN-1846] - minimize dependencies of JDBC driver
-* [KYLIN-1884] - Reload metadata automatically after migrating cube
-* [KYLIN-1894] - GlobalDictionary may corrupt when server suddenly crash
-* [KYLIN-1744] - Separate concepts of source offset and date range on cube segments
-* [KYLIN-1654] - Upgrade httpclient dependency
-* [KYLIN-1774] - Update Kylin's tomcat version to 7.0.69
-* [KYLIN-1861] - Hive may fail to create flat table with "GC overhead error"
-
-## v1.5.2.1 - 2016-06-07
-_Tag:_ [kylin-1.5.2.1](https://github.com/apache/kylin/tree/kylin-1.5.2.1)
-
-This is a hot-fix release on top of v1.5.2; no new features are introduced. Please upgrade to this version.
-
-__Bug__
-
-* [KYLIN-1758] - createLookupHiveViewMaterializationStep will create intermediate table for fact table
-* [KYLIN-1739] - kylin_job_conf_inmem.xml can impact non-inmem MR job
-
-
-## v1.5.2 - 2016-05-26
-_Tag:_ [kylin-1.5.2](https://github.com/apache/kylin/tree/kylin-1.5.2)
-
-This version is backward compatible with v1.5.1, but after upgrading from v1.5.1 to v1.5.2 you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__New Feature__
-
-* [KYLIN-1016] - Count distinct on any dimension should work even not a predefined measure
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1441] - Display time column as partition column
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__Improvement__
-
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1313] - Enable deriving dimensions on non PK/FK
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1340] - Tools to extract all cube/hybrid/project related metadata to facilitate diagnosing/debugging/sharing
-* [KYLIN-1381] - change RealizationCapacity from three profiles to specific numbers
-* [KYLIN-1391] - quicker and better response to v2 storage engine's rpc timeout exception
-* [KYLIN-1418] - Memory hungry cube should select LAYER and INMEM cubing smartly
-* [KYLIN-1432] - For GUI, to add one option "yyyy-MM-dd HH:MM:ss" for Partition Date Column
-* [KYLIN-1453] - cuboid sharding based on specific column
-* [KYLIN-1487] - attach a hyperlink to introduce new aggregation group
-* [KYLIN-1526] - Move query cache back to query controller level
-* [KYLIN-1542] - Hfile owner is not hbase
-* [KYLIN-1544] - Make hbase encoding and block size configurable just like hbase compression
-* [KYLIN-1561] - Refactor storage engine(v2) to be extension friendly
-* [KYLIN-1566] - Add and use a separate kylin_job_conf.xml for in-mem cubing
-* [KYLIN-1567] - Front-end work for KYLIN-1557
-* [KYLIN-1578] - Coprocessor thread voluntarily stop itself when it reaches timeout
-* [KYLIN-1579] - IT preparation classes like BuildCubeWithEngine should exit with status code upon build exception
-* [KYLIN-1580] - Use 1 byte instead of 8 bytes as column indicator in fact distinct MR job
-* [KYLIN-1584] - Specify region cut size in cubedesc and leave the RealizationCapacity in model as a hint
-* [KYLIN-1585] - make MAX_HBASE_FUZZY_KEYS in GTScanRangePlanner configurable
-* [KYLIN-1587] - show cube level configuration overwrites properties in CubeDesigner
-* [KYLIN-1591] - enabling different block size setting for small column families
-* [KYLIN-1599] - Add "isShardBy" flag in rowkey panel
-* [KYLIN-1601] - Need not to shrink scan cache when hbase rows can be large
-* [KYLIN-1602] - User could dump hbase usage for diagnosis
-* [KYLIN-1614] - Bring more information in diagnosis tool
-* [KYLIN-1621] - Use deflate level 1 to enable compression "on the fly"
-* [KYLIN-1623] - Make the hll precision for data sampling configurable
-* [KYLIN-1624] - HyperLogLogPlusCounter will become inaccurate when there're billions of entries
-* [KYLIN-1625] - GC log overwrites old one after restart Kylin service
-* [KYLIN-1627] - add backdoor toggle to dump binary cube storage response for further analysis
-* [KYLIN-1731] - allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
-
-__Bug__
-
-* [KYLIN-989] - column width is too narrow for timestamp field
-* [KYLIN-1197] - cube data not updated after purge
-* [KYLIN-1305] - Can not get more than one system admin email in config
-* [KYLIN-1551] - Should check and ensure TopN measure has two parameters specified
-* [KYLIN-1563] - Unsafe check of initiated in HybridInstance#init()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-* [KYLIN-1574] - Unclosed ResultSet in QueryService#getMetadata()
-* [KYLIN-1581] - NPE in Job engine when execute MR job
-* [KYLIN-1593] - Agg group info will be blank when trying to edit cube
-* [KYLIN-1595] - columns in metric could also be in filter/groupby
-* [KYLIN-1596] - UT fail, due to String encoding CharsetEncoder mismatch
-* [KYLIN-1598] - cannot run complete UT at windows dev machine
-* [KYLIN-1604] - Concurrent write issue on hdfs when deploy coprocessor
-* [KYLIN-1612] - Cube is ready but insight tables not result
-* [KYLIN-1615] - UT 'HiveCmdBuilderTest' fail on 'testBeeline'
-* [KYLIN-1619] - Can't find any realization caused by Top-N measure
-* [KYLIN-1622] - sql not executed and report topN error
-* [KYLIN-1631] - Web UI of TopN, "group by" column couldn't be a dimension column
-* [KYLIN-1634] - Unclosed OutputStream in SSHClient#scpFileToLocal()
-* [KYLIN-1637] - Sample cube build error
-* [KYLIN-1638] - Unclosed HBaseAdmin in ToolUtil#getHBaseMetaStoreId()
-* [KYLIN-1639] - Wrong logging of JobID in MapReduceExecutable.java
-* [KYLIN-1643] - Kylin's hll counter count "NULL" as a value
-* [KYLIN-1647] - Purge a cube, and then build again, the start date is not updated
-* [KYLIN-1650] - java.io.IOException: Filesystem closed - in Cube Build Step 2 (MapR)
-* [KYLIN-1655] - function name 'getKylinPropertiesAsInputSteam' misspelt
-* [KYLIN-1660] - Streaming/kafka config not match with table name
-* [KYLIN-1662] - tableName got truncated during request mapping for /tables/tableName
-* [KYLIN-1666] - Should check project selection before add a stream table
-* [KYLIN-1667] - Streaming table name should allow enter "DB.TABLE" format
-* [KYLIN-1673] - make sure metadata in 1.5.2 compatible with 1.5.1
-* [KYLIN-1678] - MetaData clean just cleans FINISHED and DISCARD jobs, but the correct job status is SUCCEED
-* [KYLIN-1685] - error happens while execute a sql contains '?' using Statement
-* [KYLIN-1688] - Illegal char on result dataset table
-* [KYLIN-1721] - KylinConfigExt lost base properties when store into file
-* [KYLIN-1722] - IntegerDimEnc serialization exception inside coprocessor
-
-## v1.5.1 - 2016-04-13
-_Tag:_ [kylin-1.5.1](https://github.com/apache/kylin/tree/kylin-1.5.1)
-
-This version is backward compatible with v1.5.0, but after upgrading from v1.5.0 to v1.5.1 you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1546] - Tool to dump information for diagnosis
-
-__New Feature__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1378] - Add UI for TopN measure
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1501] - Run some classes at the beginning of kylin server startup
-* [KYLIN-1503] - Print version information with kylin.sh
-* [KYLIN-1531] - Add smoke test scripts
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1540] - REST API for deleting segment
-* [KYLIN-1541] - IntegerDimEnc, custom dimension encoding for integers
-* [KYLIN-1546] - Tool to dump information for diagnosis
-* [KYLIN-1550] - Persist some recent bad query
-
-__Improvement__
-
-* [KYLIN-1490] - Use InstallShield 2015 to generate ODBC Driver setup files
-* [KYLIN-1498] - cube desc signature not calculated correctly
-* [KYLIN-1500] - streaming_fillgap cause out of memory
-* [KYLIN-1502] - When cube is not empty, only signature consistent cube desc updates are allowed
-* [KYLIN-1504] - Use NavigableSet to store rowkey and use prefix filter to check resource path prefix instead of String comparison on tomcat side
-* [KYLIN-1505] - Combine guava filters with Predicates.and
-* [KYLIN-1543] - GTFilterScanner performance tuning
-* [KYLIN-1557] - Enhance the check on aggregation group dimension number
-
-__Bug__
-
-* [KYLIN-1373] - need to encode export query url to get right result in query page
-* [KYLIN-1434] - Kylin Job Monitor API: /kylin/api/jobs is too slow in large kylin deployment
-* [KYLIN-1472] - Export csv get error when there is a plus sign in the sql
-* [KYLIN-1486] - java.lang.IllegalArgumentException: Too many digits for NumberDictionary
-* [KYLIN-1491] - Should return base cuboid as valid cuboid if no aggregation group matches
-* [KYLIN-1493] - make ExecutableManager.getInstance thread safe
-* [KYLIN-1497] - Make three <class>.getInstance thread safe
-* [KYLIN-1507] - Couldn't find hive dependency jar on some platform like CDH
-* [KYLIN-1513] - Time partitioning doesn't work across multiple days
-* [KYLIN-1514] - MD5 validation of Tomcat does not work when package tar
-* [KYLIN-1521] - Couldn't refresh a cube segment whose start time is before 1970-01-01
-* [KYLIN-1522] - HLLC is incorrect when result is feed from cache
-* [KYLIN-1524] - Get "java.lang.Double cannot be cast to java.lang.Long" error when Top-N metrics data type is BigInt
-* [KYLIN-1527] - Columns with all NULL values can't be queried
-* [KYLIN-1537] - Failed to create flat hive table, when name is too long
-* [KYLIN-1538] - DoubleDeltaSerializer cause obvious error after deserialize and serialize
-* [KYLIN-1553] - Cannot find rowkey column "COL_NAME" in cube CubeDesc
-* [KYLIN-1564] - Unclosed table in BuildCubeWithEngine#checkHFilesInHBase()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-
-## v1.5.0 - 2016-03-12
-_Tag:_ [kylin-1.5.0](https://github.com/apache/kylin/tree/kylin-1.5.0)
-
-__This version is not backward compatible.__ The cube and metadata formats have been refactored to deliver several-fold performance improvements. We recommend this version, but do not suggest upgrading directly from a previous deployment; a clean, new deployment of this version is strongly recommended. If you have to upgrade from a previous deployment, an upgrade guide will be provided by the community later.
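-
-If you do start from a clean deployment, it is prudent to snapshot the old metadata first using the backup/restore script added in `bin` (see KYLIN-868 below). A minimal sketch; the restore path is a placeholder reported by the backup step:
-
-```sh
-# Dump the current Kylin metadata into a local meta_backups folder
-$KYLIN_HOME/bin/metastore.sh backup
-
-# Later, restore from the directory reported by the backup step (placeholder path)
-$KYLIN_HOME/bin/metastore.sh restore meta_backups/meta_2016_03_12_00_00_00
-```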
-
-__Highlights__
-
-* [KYLIN-875] - A pluggable architecture that allows alternative cube engines / storage engines / data sources.
-* [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster based on a comparison of hundreds of jobs.
-* [KYLIN-942] - A better storage engine that makes queries roughly 2 times faster (especially slow queries), based on a comparison of tens of thousands of SQLs.
-* [KYLIN-738] - EXPERIMENTAL streaming cubing support: sources from Kafka and builds cubes in-mem at minute intervals.
-* [KYLIN-242] - Redesigned aggregation groups, making support of 20+ dimensions easy.
-* [KYLIN-976] - Custom aggregation types (UDFs, in other words).
-* [KYLIN-943] - TopN aggregation type.
-* [KYLIN-1065] - ODBC driver compatible with Tableau 9.1, MS Excel and MS PowerBI.
-* [KYLIN-1219] - Kylin supports SSO with Spring SAML.
-
-__New Feature__
-
-* [KYLIN-528] - Build job flow for Inverted Index building
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-596] - Support Excel and Power BI
-* [KYLIN-599] - Near real-time support
-* [KYLIN-607] - More efficient cube building
-* [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
-* [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
-* [KYLIN-728] - IGTStore implementation which use disk when memory runs short
-* [KYLIN-738] - StreamingOLAP
-* [KYLIN-749] - support timestamp type in II and cube
-* [KYLIN-774] - Automatically merge cube segments
-* [KYLIN-868] - add a metadata backup/restore script in bin folder
-* [KYLIN-886] - Data Retention for streaming data
-* [KYLIN-906] - cube retention
-* [KYLIN-943] - Approximate TopN supported by Cube
-* [KYLIN-986] - Generalize Streaming scripts and put them into code repository
-* [KYLIN-1219] - Kylin support SSO with Spring SAML
-* [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model
-* [KYLIN-1458] - Checking the consistency of cube segment host with the environment after cube migration
-
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-* [KYLIN-1483] - Command tool to visualize all cuboids in a cube/segment
-
-__Improvement__
-
-* [KYLIN-225] - Support edit "cost" of cube
-* [KYLIN-410] - table schema not expand when clicking the database text
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-633] - Support Timestamp for cube partition
-* [KYLIN-649] - move the cache layer from service tier back to storage tier
-* [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
-* [KYLIN-663] - Push time condition down to ii endpoint
-* [KYLIN-668] - Out of memory in mapper when building cube in mem
-* [KYLIN-671] - Implement fine grained cache for cube and ii
-* [KYLIN-674] - IIEndpoint return metrics as well
-* [KYLIN-675] - cube&model designer refactor
-* [KYLIN-678] - optimize RowKeyColumnIO
-* [KYLIN-697] - Reorganize all test cases to unit test and integration tests
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS
-* [KYLIN-708] - replace BitSet for AggrKey
-* [KYLIN-712] - some enhancement after code review
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
-* [KYLIN-719] - bundle statistics info in endpoint response
-* [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
-* [KYLIN-721] - streaming cli support third-party streammessage parser
-* [KYLIN-726] - add remote cli port configuration for KylinConfig
-* [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
-* [KYLIN-734] - Push cache layer to each storage engine
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
-* [KYLIN-760] - Improve the hashing performance in Sampling cuboid size
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-773] - performance is slow to list jobs
-* [KYLIN-783] - update hdp version in test cases to 2.2.4
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-* [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
-* [KYLIN-816] - Allow gap in cube segments, for streaming case
-* [KYLIN-822] - list cube overview in one page
-* [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-844] - add backdoor toggles to control query behavior
-* [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-870] - Speed up hbase segments info by caching
-* [KYLIN-871] - growing dictionary for streaming case
-* [KYLIN-874] - script for fill streaming gap automatically
-* [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
-* [KYLIN-879] - add a tool to collect orphan hbases
-* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-902] - move streaming related parameters into StreamingConfig
-* [KYLIN-909] - Adapt GTStore to hbase endpoint
-* [KYLIN-919] - more friendly UI for 0.8
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-927] - Real time cubes merging skipping gaps
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-* [KYLIN-942] - Cube parallel scan on Hbase
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-960] - Split storage module to core-storage and storage-hbase
-* [KYLIN-973] - add a tool to analyse streaming output logs
-* [KYLIN-984] - Behavior change in streaming data consuming
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1018] - make TimedJsonStreamParser default parser
-* [KYLIN-1019] - Remove v1 cube model classes from code repository
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1036] - Code Clean, remove code which never used at front end
-* [KYLIN-1041] - ADD Streaming UI
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1068] - Optimize the memory footprint for TopN counter
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1096] - Deprecate minicluster
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
-* [KYLIN-1135] - Pscan use share thread pool
-* [KYLIN-1136] - Distinguish fast build mode and complete build mode
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1190] - Make memory budget per query configurable
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
-* [KYLIN-1237] - Revisit on cube size estimation
-* [KYLIN-1239] - attribute each htable with team contact and owner name
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
-* [KYLIN-1266] - Tune release package size
-* [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
-* [KYLIN-1268] - Fix 2 kylin logs
-* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
-* [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
-* [KYLIN-1283] - Replace GTScanRequest's SerDer from Kryo to manual
-* [KYLIN-1287] - UI update for streaming build action
-* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
-* [KYLIN-1301] - fix segment pruning failure
-* [KYLIN-1308] - query storage v2 enable parallel cube visiting
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1318] - enable gc log for kylin server instance
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1327] - Tool for batch updating host information of htables
-* [KYLIN-1333] - Kylin Entity Permission Control
-* [KYLIN-1334] - allow truncating string for fixed length dimensions
-* [KYLIN-1341] - Display JSON of Data Model in the dialog
-* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-* [KYLIN-1424] - Should support multiple selection in picking up dimension/measure column step in data model wizard
-* [KYLIN-1438] - auto generate aggregation group
-* [KYLIN-1474] - expose list, remove and cat in metastore.sh
-* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
-
-* [KYLIN-242] - Redesign aggregation group
-* [KYLIN-770] - optimize memory usage for GTSimpleMemStore GTAggregationScanner
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-980] - FactDistinctColumnsJob to support high cardinality columns
-* [KYLIN-1079] - Manage large number of entries in metadata store
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1222] - restore testing v1 query engine in case need it as a fallback for v2
-* [KYLIN-1232] - Refine ODBC Connection UI
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1366] - Bind metadata version with release version
-* [KYLIN-1389] - Formatting ODBC Drive C++ code
-* [KYLIN-1405] - Aggregation group validation
-* [KYLIN-1465] - Beautify kylin log to convenience both production trouble shooting and CI debuging
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-521] - No verification even if join condition is invalid
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-635] - IN clause within CASE when is not working
-* [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-665] - buffer error while in mem cubing
-* [KYLIN-688] - possible memory leak for segmentIterator
-* [KYLIN-731] - Parallel stream build will throw OOM
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-748] - II returned result not correct when decimal omits precision and scale
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-766] - round BigDecimal according to the DataType scale
-* [KYLIN-769] - empty segment build fail due to no dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-778] - can't build cube after package to binary
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted
-* [KYLIN-801] - fix remaining issues on query cache and storage cache
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
-* [KYLIN-817] - Support Extract() on timestamp column
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
-* [KYLIN-834] - optimize StreamingUtil binary search perf
-* [KYLIN-837] - fix submit build type when refresh cube
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-905] - Boolean type not supported
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-951] - Drop RowBlock concept from GridTable general API
-* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Don't support aggregation AVG while executing SQL
-* [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
-* [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
-* [KYLIN-993] - implement substr support in kylin
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-* [KYLIN-1027] - HBase Token not added after KYLIN-1007
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
-* [KYLIN-1115] - Clean up ODBC driver code
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1127] - Refactor CacheService
-* [KYLIN-1137] - TopN measure need support dictionary merge
-* [KYLIN-1138] - Bad CubeDesc signature cause segment be delete when enable a cube
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1151] - Menu items should be aligned when create new model
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.3 to 1.4
-* [KYLIN-1171] - KylinConfig truncate bug
-* [KYLIN-1179] - Cannot use String as partition column
-* [KYLIN-1180] - Some NPE in Dictionary
-* [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
-* [KYLIN-1182] - DataModelDesc needs to be updated from v1.x to v2.0
-* [KYLIN-1192] - Cannot edit data model desc without name change
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-* [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
-* [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
-* [KYLIN-1225] - Only 15 cubes listed in the /models page
-* [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1269] - Can not get last error message in email
-* [KYLIN-1271] - Create streaming table layer will disappear if click on outside
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1282] - Comparison filter on Date/Time column not work for query
-* [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
-* [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
-* [KYLIN-1306] - Null strings are not applied during fast cubing
-* [KYLIN-1314] - Display issue for aggregation groups
-* [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
-* [KYLIN-1337] - Sort cube name doesn't work well
-* [KYLIN-1346] - IllegalStateException happens in SparkCubing
-* [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
-* [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
-* [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
-* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
-* [KYLIN-1357] - Cloned cube has build time information
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
-* [KYLIN-1387] - Streaming cubing doesn't generate cuboids files on HDFS, cause cube merge failure
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
-* [KYLIN-1413] - Row key column's sequence is wrong after saving the cube
-* [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
-* [KYLIN-1417] - TimedJsonStreamParser is case sensitive for message's property name
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1420] - Query returns empty result on partition column's boundary condition
-* [KYLIN-1421] - Cube "source record" is always zero for streaming
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1430] - Not add "STREAMING_" prefix when import a streaming table
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1344] - Bitmap measure defined after TopN measure can cause merge to fail
-* [KYLIN-1386] - Duplicated projects appear in connection dialog after clicking CONNECT button multiple times
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-* [KYLIN-1473] - Cannot have comments in the end of New Query textbox
-
-__Task__
-
-* [KYLIN-529] - Migrate ODBC source code to Apache Git
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-762] - remove quartz dependency
-* [KYLIN-763] - remove author name
-* [KYLIN-820] - support streaming cube of exact timestamp range
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-1112] - Reorganize InvertedIndex source codes into plug-in architecture
-
-* [KYLIN-808] - streaming cubing support split by data timestamp
-* [KYLIN-1427] - Enable partition date column to support date and hour as separate columns for increment cube build
-
-__Test__
-
-* [KYLIN-677] - benchmark for Endpoint without dictionary
-* [KYLIN-826] - create new test case for streaming building & queries
-
-
-## v1.3.0 - 2016-03-14
-_Tag:_ [kylin-1.3.0](https://github.com/apache/kylin/tree/kylin-1.3.0)
-
-__New Feature__
-
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-
-__Improvement__
-
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1132] - make filtering input easier in creating cube
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1240] - Fix link and typo in README
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1333] - Kylin Entity Permission Control 
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-
-__Bug__
-
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1078] - Cannot have comments in the end of New Query textbox
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1110] - can not see project options after clearing browser cookie and cache
-* [KYLIN-1159] - problem about kylin web UI
-* [KYLIN-1214] - Remove "Back to My Cubes" link in non-edit mode
-* [KYLIN-1215] - minor, update website member's info on community page
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1254] - cube model will be overridden while creating a new cube with the same name
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1342] - Typo in doc
-* [KYLIN-1354] - Couldn't edit a cube if it has no "partition date" set
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-
-__Test__
-
-* [KYLIN-1335] - Disable PrintResult in KylinQueryTest
-
-
-## v1.2 - 2015-12-15
-_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
-
-__New Feature__
-
-* [KYLIN-596] - Support Excel and Power BI
-
-__Improvement__
-
-* [KYLIN-389] - Can't edit cube name for existing cubes
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after be migrated
-* [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
-* [KYLIN-1190] - Make memory budget per query configurable
-
-__Bug__
-
-* [KYLIN-693] - Couldn't change a cube's name after it be created
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-966] - When user creates a cube, entering a name which already exists makes Kylin throw an exception on the last step
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1070] - changing case in table name in model desc
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1120] - MapReduce job read local meta issue
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
-* [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-
-__Task__
-
-* [KYLIN-1170] - Update website and status files to TLP
-
-
-## v1.1.1-incubating - 2015-11-04
-_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
-
-__Improvement__
-
-* [KYLIN-999] - License check and cleanup for release
-
-## v1.1-incubating - 2015-10-25
-_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
-
-__New Feature__
-
-* [KYLIN-222] - Web UI to Display CubeInstance Information
-* [KYLIN-906] - cube retention
-* [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
-
-__Bug__
-
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
-* [KYLIN-847] - "select * from fact" does not work on 0.7 branch
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
-* [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
-* [KYLIN-961] - Can't get cube source record count.
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Don't support aggregation AVG while executing SQL
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
-* [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million 
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-
-__Improvement__
-
-* [KYLIN-343] - Enable timeout on query 
-* [KYLIN-367] - automatically backup metadata everyday
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-965] - Allow user to configure the region split size for cube
-* [KYLIN-971] - kylin display timezone on UI
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
-* [KYLIN-999] - License check and cleanup for release
-* [KYLIN-1013] - Make hbase client configurations like timeout configurable
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
-* [KYLIN-1035] - Validate [Project] before create Cube on UI
-* [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-
-
-## v1.0-incubating - 2015-09-06
-_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
-
-__New Feature__
-
-* [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-626] - JDBC error for float and double values
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-757] - Cache wasn't flushed in cluster mode
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
-* [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
-* [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log 
-* [KYLIN-940] - NPE when close the null resource
-* [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
-
-__Improvement__
-
-* [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-792] - kylin performance insight [dashboard]
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-842] - Add version and commit id into binary package
-* [KYLIN-844] - add backdoor toggles to control query behavior 
-* [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-867] - Hybrid model for multiple realizations/cubes
-* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-
-__Task__
-
-* [KYLIN-884] - Restructure docs and website
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-954] - Release v1.0 (formerly v0.7.3)
-* [KYLIN-863] - create empty segment when there is no data in one single streaming batch
-* [KYLIN-908] - Help community developer to setup develop/debug environment
-* [KYLIN-931] - Port KYLIN-921 to 0.8 branch
-
-## v0.7.2-incubating - 2015-07-21
-_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
-
-__Main Changes:__  
-Critical bug fixes after the v0.7.1 release; please go with this version directly for new cases, and upgrade to it for existing deployments.
-
-__Bug__  
-
-* [KYLIN-514] - Error message is not helpful to user when doing something in JSON Editor window
-* [KYLIN-598] - Kylin detecting hive table delim failure
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-765] - When a cube job is failed, still be possible to submit a new job
-* [KYLIN-814] - Duplicate columns error for subqueries on fact table
-* [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optic)
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
-* [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
-* [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
-* [KYLIN-840] - HBase table compress not enabled even LZO is installed
-* [KYLIN-848] - Couldn't resume or discard a cube job
-* [KYLIN-849] - Couldn't query metrics on lookup table PK
-* [KYLIN-865] - Cube has been built but couldn't query; In log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-888] - "Jobs" page only shows 15 job at max, the "Load more" button was disappeared
-
-__Improvement__
-
-* [KYLIN-159] - Metadata migrate tool 
-* [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
-* [KYLIN-207] - Support SQL pagination
-* [KYLIN-209] - Merge tail small MR jobs into one
-* [KYLIN-210] - Split heavy MR job to more small jobs
-* [KYLIN-221] - Convert cleanup and GC to job 
-* [KYLIN-284] - add log for all Rest API Request
-* [KYLIN-488] - Increase HDFS block size 1GB
-* [KYLIN-600] - measure return type update
-* [KYLIN-611] - Allow Implicit Joins
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-773] - performance is slow list jobs
-* [KYLIN-839] - Optimize Snapshot table memory usage 
-
-__New Feature__
-
-* [KYLIN-211] - Bitmap Inverted Index
-* [KYLIN-285] - Enhance alert program for whole system
-* [KYLIN-467] - Validataion Rule: Check duplicate rows in lookup table
-* [KYLIN-471] - Support "Copy" on grid result
-
-__Task__
-
-* [KYLIN-7] - Enable maven checkstyle plugin
-* [KYLIN-885] - Release v0.7.2
-* [KYLIN-812] - Upgrade to Calcite 0.9.2
-
-## v0.7.1-incubating (First Apache Release) - 2015-06-10  
-_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
-
-Apache Kylin v0.7.1-incubating rolled out on June 10, 2015. This is also the first Apache release after joining the incubator. 
-
-__Main Changes:__
-
-* Package renamed from com.kylinolap to org.apache.kylin
-* Code cleaned up to apply Apache License policy
-* Easy install and setup with bunch of scripts and automation
-* Job engine refactored into a generic job manager for all jobs, with improved efficiency
-* Support for Hive databases other than 'default'
-* JDBC driver available for clients to interact with the Kylin server
-* Binary package available for download 
-
-__New Feature__
-
-* [KYLIN-327] - Binary distribution 
-* [KYLIN-368] - Move MailService to Common module
-* [KYLIN-540] - Data model upgrade for legacy cube descs
-* [KYLIN-576] - Refactor expansion rate expression
-
-__Task__
-
-* [KYLIN-361] - Rename package name with Apache Kylin
-* [KYLIN-531] - Rename package name to org.apache.kylin
-* [KYLIN-533] - Job Engine Refactoring
-* [KYLIN-585] - Simplify deployment
-* [KYLIN-586] - Add Apache License header in each source file
-* [KYLIN-587] - Remove hard copy of javascript libraries
-* [KYLIN-624] - Add dimension and metric info into DataModel
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-669] - Release v0.7.1 as first apache release
-* [KYLIN-670] - Update pom with "incubating" in version number
-* [KYLIN-737] - Generate and sign release package for review and vote
-* [KYLIN-795] - Release after success vote
-
-__Bug__
-
-* [KYLIN-132] - Job framework
-* [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
-* [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
-* [KYLIN-230] - Error when create HTable
-* [KYLIN-255] - Error when a aggregated function appear twice in select clause
-* [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
-* [KYLIN-399] - refreshed segment not correctly published to cube
-* [KYLIN-412] - No exception or message when sync up table which can't access
-* [KYLIN-421] - Hive table metadata issue
-* [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
-* [KYLIN-508] - Too high cardinality is not suitable for dictionary!
-* [KYLIN-509] - Order by on fact table not works correctly
-* [KYLIN-517] - Always delete the last one of Add Lookup page buttom even if deleting the first join condition
-* [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
-* [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
-* [KYLIN-556] - error tip enhance when cube detail return empty
-* [KYLIN-570] - Need not to call API before sending login request
-* [KYLIN-571] - Dimensions lost when creating cube though Joson Editor
-* [KYLIN-572] - HTable size is wrong
-* [KYLIN-581] - unable to build cube
-* [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
-* [KYLIN-588] - Error when run package.sh
-* [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
-* [KYLIN-594] - Making changes in build and packaging with respect to apache release process
-* [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
-* [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
-* [KYLIN-614] - find hive dependency shell fine is unable to set the hive dependency correctly
-* [KYLIN-615] - Unable add measures in Kylin web UI
-* [KYLIN-619] - Cube build fails with hive+tez
-* [KYLIN-620] - Wrong duration number
-* [KYLIN-621] - SecurityException when running MR job
-* [KYLIN-627] - Hive tables' partition column was not sync into Kylin
-* [KYLIN-628] - Couldn't build a new created cube
-* [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
-* [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
-* [KYLIN-638] - kylin.sh stop not working
-* [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
-* [KYLIN-640] - sum of float not working
-* [KYLIN-642] - Couldn't refresh cube segment
-* [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
-* [KYLIN-644] - join table as null error when build the cube
-* [KYLIN-652] - Lookup table alias will be set to null
-* [KYLIN-657] - JDBC Driver not register into DriverManager
-* [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
-* [KYLIN-666] - Select float type column got class cast exception
-* [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
-* [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
-* [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
-* [KYLIN-691] - update tomcat download address in download-tomcat.sh
-* [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
-* [KYLIN-703] - UT failed due to unknown host issue
-* [KYLIN-711] - UT failure in REST module
-* [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
-* [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
-
-__Improvement__
-
-* [KYLIN-168] - Installation fails if multiple ZK
-* [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
-* [KYLIN-204] - Kylin web not works properly in IE
-* [KYLIN-217] - Enhance coprocessor with endpoints 
-* [KYLIN-251] - job engine refactoring
-* [KYLIN-261] - derived column validate when create cube
-* [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
-* [KYLIN-324] - Refactor metadata to support InvertedIndex
-* [KYLIN-407] - Validation: There's should no Hive table column using "binary" data type
-* [KYLIN-445] - Rename cube_desc/cube folder
-* [KYLIN-452] - Automatically create local cluster for running tests
-* [KYLIN-498] - Merge metadata tables 
-* [KYLIN-532] - Refactor data model in kylin front end
-* [KYLIN-539] - use hbase command to launch tomcat
-* [KYLIN-542] - add project property feature for cube
-* [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
-* [KYLIN-563] - Wrap kylin start and stop with a script 
-* [KYLIN-567] - More flexible validation of new segments
-* [KYLIN-569] - Support increment+merge job
-* [KYLIN-578] - add more generic configuration for ssh
-* [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
-* [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
-* [KYLIN-634] - Script to import sample data and cube metadata
-* [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
-* [KYLIN-637] - add start&end date for hbase info in cubeDesigner
-* [KYLIN-714] - Add Apache RAT to pom.xml
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-758] - Updating port forwarding issue Hadoop Installation on Hortonworks Sandbox.
-* [KYLIN-779] - [UI] jump to cube list after create cube
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-
-__Wish__
-
-* [KYLIN-608] - Distinct count for ii storage
-
diff --git a/website/_docs21/tutorial/Qlik.cn.md b/website/_docs21/tutorial/Qlik.cn.md
deleted file mode 100644
index 0423b9d..0000000
--- a/website/_docs21/tutorial/Qlik.cn.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-layout: docs21-cn
-title:  Integration with Qlik Sense
-categories: tutorial
-permalink: /cn/docs21/tutorial/Qlik.html
-since: v2.2
----
-
-Qlik Sense is a new generation of self-service data visualization tool. It is a complete business analytics platform that lets developers and analysts quickly build and deploy powerful analytic applications. In recent years it has been one of the fastest-growing BI products worldwide. It can integrate with the Hadoop Database (Hive and Impala), and now it can also integrate with Apache Kylin. This article will guide you step by step through connecting Apache Kylin with Qlik Sense. 
-
-### Install the Kylin ODBC Driver
-
-For installation information, refer to [Kylin ODBC Driver](http://kylin.apache.org/cn/docs21/tutorial/odbc.html).
-
-### Install Qlik Sense
-
-For the installation of Qlik Sense, please visit [Qlik Sense Desktop download](https://www.qlik.com/us/try-or-buy/download-qlik-sense).
-
-### Connect with Qlik Sense
-
-After configuring your local DSN and installing Qlik Sense successfully, follow the steps below to connect Apache Kylin with Qlik Sense:
-
-- Open **Qlik Sense Desktop**.
-
-
-- Enter your Qlik username and password to log in; the following dialog will then pop up. Click **Create new app**.
-
-![](/images/tutorial/2.1/Qlik/welcome_to_qlik_desktop.png)
-
-- Specify a name for the new app. 
-
-![](/images/tutorial/2.1/Qlik/create_new_application.png)
-
-- There are two options in the app overview; select **Script Editor** at the bottom.
-
-![](/images/tutorial/2.1/Qlik/script_editor.png)
-
-- The **Data Load Editor** window shows up. Click **Create new connection** at the upper right of the page and choose **ODBC**.
-
-![Create New Data Connection](/images/tutorial/2.1/Qlik/create_data_connection.png)
-
-- Select the **DSN** you created, ignore the account information and click **Create**.
-
-![ODBC Connection](/images/tutorial/2.1/Qlik/odbc_connection.png)
-
-### Configure Direct Query mode
-Change the default "TimeFormat", "DateFormat" and "TimestampFormat" in the script to:
-
-`SET TimeFormat='h:mm:ss';`
-`SET DateFormat='YYYY-MM-DD';`
-`SET TimestampFormat='YYYY-MM-DD h:mm:ss[.fff]';`
-
-Given that Cubes in a typical Kylin environment can reach petabyte scale, we recommend using Qlik Sense's Direct Query mode instead of importing the data into Qlik Sense.
-
-You can enable Direct Query mode by typing `Direct Query` at the beginning of the connection script.
-
-The screenshot below shows a Direct Query script that connects to the *kylin_sales_cube* in the *Learn_kylin* project.
-
-![Script](/images/tutorial/2.1/Qlik/script_run_result.png) 
-
-Qlik Sense will generate the SQL for your reports based on this script.
-
-We recommend mapping the dimensions and measures defined on the Kylin Cube to the dimensions and measures in the script.
-
-You can also call Apache Kylin built-in functions through native expressions, for example:
-
-`NATIVE('extract(month from PART_DT)') ` 
-
-The complete script is provided below for reference.
-
-Please make sure to change the DSN referenced in the `LIB CONNECT TO 'kylin';` part of the script accordingly. 
-
-```SQL
-SET ThousandSep=',';
-SET DecimalSep='.';
-SET MoneyThousandSep=',';
-SET MoneyDecimalSep='.';
-SET MoneyFormat='$#,##0.00;-$#,##0.00';
-SET TimeFormat='h:mm:ss';
-SET DateFormat='YYYY/MM/DD';
-SET TimestampFormat='YYYY/MM/DD h:mm:ss[.fff]';
-SET FirstWeekDay=6;
-SET BrokenWeeks=1;
-SET ReferenceDay=0;
-SET FirstMonthOfYear=1;
-SET CollationLocale='en-US';
-SET CreateSearchIndexOnReload=1;
-SET MonthNames='Jan;Feb;Mar;Apr;May;Jun;Jul;Aug;Sep;Oct;Nov;Dec';
-SET LongMonthNames='January;February;March;April;May;June;July;August;September;October;November;December';
-SET DayNames='Mon;Tue;Wed;Thu;Fri;Sat;Sun';
-SET LongDayNames='Monday;Tuesday;Wednesday;Thursday;Friday;Saturday;Sunday';
-
-LIB CONNECT TO 'kylin';
-
-
-DIRECT QUERY
-DIMENSION 
-  TRANS_ID,
-  YEAR_BEG_DT,
-  MONTH_BEG_DT,
-  WEEK_BEG_DT,
-  PART_DT,
-  LSTG_FORMAT_NAME,
-  OPS_USER_ID,
-  OPS_REGION,
-  NATIVE('extract(month from PART_DT)') AS PART_MONTH,
-   NATIVE('extract(year from PART_DT)') AS PART_YEAR,
-  META_CATEG_NAME,
-  CATEG_LVL2_NAME,
-  CATEG_LVL3_NAME,
-  ACCOUNT_BUYER_LEVEL,
-  NAME
-MEASURE
-	ITEM_COUNT,
-    PRICE,
-    SELLER_ID
-FROM KYLIN_SALES 
-join KYLIN_CATEGORY_GROUPINGS  
-on( SITE_ID=LSTG_SITE_ID 
-and KYLIN_SALES.LEAF_CATEG_ID=KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID)
-join KYLIN_CAL_DT
-on (KYLIN_CAL_DT.CAL_DT=KYLIN_SALES.PART_DT)
-join KYLIN_ACCOUNT 
-on (KYLIN_ACCOUNT.ACCOUNT_ID=KYLIN_SALES.BUYER_ID)
-JOIN KYLIN_COUNTRY
-on (KYLIN_COUNTRY.COUNTRY=KYLIN_ACCOUNT.ACCOUNT_COUNTRY)
-```
-
-Click **Load Data** at the upper right of the window; Qlik Sense will generate probe queries based on the script to check its syntax.
-
-![Load Data](/images/tutorial/2.1/Qlik/load_data.png)
-
-### Create a report
-
-Click **App Overview** at the top left.
-
-![Open App Overview](/images/tutorial/2.1/Qlik/go_to_app_overview.png)
-
-Click **Create new sheet**.
-
-![Create new sheet](/images/tutorial/2.1/Qlik/create_new_report.png)
-
-Select a chart type, then add dimensions and measures to the chart as needed.
-
-![Select the required charts, dimension and measure](/images/tutorial/2.1/Qlik/add_dimension.png)
-
-The chart returns results, which means the connection to Apache Kylin succeeded.
-
-Now you can use Qlik Sense to analyze data in Apache Kylin.
-
-![View data in Qlik Sense](/images/tutorial/2.1/Qlik/report.png)
-
-Please note that if you want your reports to hit the Cube, the measures you define in Qlik Sense must match those defined on the Cube. For example, to hit the *kylin_sales_cube* in the Learn_kylin project we use `sum(price)` in this example.
diff --git a/website/_docs21/tutorial/Qlik.md b/website/_docs21/tutorial/Qlik.md
deleted file mode 100644
index ae5041e..0000000
--- a/website/_docs21/tutorial/Qlik.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-layout: docs21
-title: Qlik Sense
-categories: tutorial
-permalink: /docs21/tutorial/Qlik.html
----
-
-Qlik Sense delivers intuitive platform solutions for self-service data visualization, guided analytics applications, embedded analytics, and reporting. It is a new player in the Business Intelligence (BI) tools world, with rapid growth since 2013. It has connectors for the Hadoop Database (Hive and Impala), and now it can be integrated with Apache Kylin. This article will guide you through connecting Apache Kylin with Qlik Sense.  
-
-### Install Kylin ODBC Driver
-
-For the installation information, please refer to [Kylin ODBC Driver](http://kylin.apache.org/docs21/tutorial/odbc.html).
-
-### Install Qlik Sense
-
-For the installation of Qlik Sense, please visit [Qlik Sense Desktop download](https://www.qlik.com/us/try-or-buy/download-qlik-sense).
-
-### Connection with Qlik Sense
-
-After configuring your Local DSN and installing Qlik Sense successfully, you may go through the following steps to connect Apache Kylin with Qlik Sense.
-
-- Open **Qlik Sense Desktop**.
-
-
-
-- Input your Qlik account to log in, then the following dialog will pop up. Click **Create New Application**.
-
-![Create New Application](../../images/tutorial/2.1/Qlik/welcome_to_qlik_desktop.png)
-
-- Specify a name for the new app. 
-
-
-![Specify a unique name](../../images/tutorial/2.1/Qlik/create_new_application.png)
-
-- There are two choices in the Application View. Please select the bottom **Script Editor**.
-
-
-![Select Script Editor](../../images/tutorial/2.1/Qlik/script_editor.png)
-
-- The Data Load Editor window shows up. Click **Create New Connection** and choose **ODBC**.
-
-
-![Create New Data Connection](../../images/tutorial/2.1/Qlik/create_data_connection.png)
-
-- Select the **DSN** you have created, ignore the account information and then click **Create**. 
-
-
-![ODBC Connection](../../images/tutorial/2.1/Qlik/odbc_connection.png)
-
-### Configure Direct Query mode
-Change the default "TimeFormat", "DateFormat" and "TimestampFormat" settings in the script to:
-
-`SET TimeFormat='h:mm:ss';`
-`SET DateFormat='YYYY-MM-DD';`
-`SET TimestampFormat='YYYY-MM-DD h:mm:ss[.fff]';`
-
-
-Given that Cubes in a usual Apache Kylin environment can reach petabyte scale, we recommend using Direct Query mode in Qlik Sense rather than importing the data into Qlik Sense.
-
-You can enable Direct Query mode by typing `Direct Query` in front of your query script in the Script editor.
-
-Below is a screenshot of such a Direct Query script against *kylin_sales_cube* in the *Learn_kylin* project. 
-
-![Script](../../images/tutorial/2.1/Qlik/script_run_result.png)
-
-Once you have defined such a script, Qlik Sense can generate SQL for your report based on it.
-
-It is recommended that you define dimensions and measures corresponding to the dimensions and measures in the Kylin Cube.  
-
-You may also be able to utilize Apache Kylin built-in functions by creating a Native expression, for example: 
-
-`NATIVE('extract(month from PART_DT)') ` 
-
-The whole script has been posted for your reference. 
-
-Make sure to update `LIB CONNECT TO 'kylin';` to the DSN you created. 
-
-```SQL
-SET ThousandSep=',';
-SET DecimalSep='.';
-SET MoneyThousandSep=',';
-SET MoneyDecimalSep='.';
-SET MoneyFormat='$#,##0.00;-$#,##0.00';
-SET TimeFormat='h:mm:ss';
-SET DateFormat='YYYY/MM/DD';
-SET TimestampFormat='YYYY/MM/DD h:mm:ss[.fff]';
-SET FirstWeekDay=6;
-SET BrokenWeeks=1;
-SET ReferenceDay=0;
-SET FirstMonthOfYear=1;
-SET CollationLocale='en-US';
-SET CreateSearchIndexOnReload=1;
-SET MonthNames='Jan;Feb;Mar;Apr;May;Jun;Jul;Aug;Sep;Oct;Nov;Dec';
-SET LongMonthNames='January;February;March;April;May;June;July;August;September;October;November;December';
-SET DayNames='Mon;Tue;Wed;Thu;Fri;Sat;Sun';
-SET LongDayNames='Monday;Tuesday;Wednesday;Thursday;Friday;Saturday;Sunday';
-
-LIB CONNECT TO 'kylin';
-
-
-DIRECT QUERY
-DIMENSION 
-  TRANS_ID,
-  YEAR_BEG_DT,
-  MONTH_BEG_DT,
-  WEEK_BEG_DT,
-  PART_DT,
-  LSTG_FORMAT_NAME,
-  OPS_USER_ID,
-  OPS_REGION,
-  NATIVE('extract(month from PART_DT)') AS PART_MONTH,
-   NATIVE('extract(year from PART_DT)') AS PART_YEAR,
-  META_CATEG_NAME,
-  CATEG_LVL2_NAME,
-  CATEG_LVL3_NAME,
-  ACCOUNT_BUYER_LEVEL,
-  NAME
-MEASURE
-	ITEM_COUNT,
-    PRICE,
-    SELLER_ID
-FROM KYLIN_SALES 
-join KYLIN_CATEGORY_GROUPINGS  
-on( SITE_ID=LSTG_SITE_ID 
-and KYLIN_SALES.LEAF_CATEG_ID=KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID)
-join KYLIN_CAL_DT
-on (KYLIN_CAL_DT.CAL_DT=KYLIN_SALES.PART_DT)
-join KYLIN_ACCOUNT 
-on (KYLIN_ACCOUNT.ACCOUNT_ID=KYLIN_SALES.BUYER_ID)
-JOIN KYLIN_COUNTRY
-on (KYLIN_COUNTRY.COUNTRY=KYLIN_ACCOUNT.ACCOUNT_COUNTRY)
-```
-
-Click **Load Data** in the upper right of the window; Qlik Sense will send out an inspection query based on the script to test the connection.
-
-![Load Data](../../images/tutorial/2.1/Qlik/load_data.png)
-
-### Create a new report
-
-On the top left menu open **App Overview**.
-
-![Open App Overview](../../images/tutorial/2.1/Qlik/go_to_app_overview.png)
-
- Click **Create new sheet** on this page.
-
-![Create new sheet](../../images/tutorial/2.1/Qlik/create_new_report.png)
-
-Select the charts you need, then add dimensions and measures based on your requirements. 
-
-![Select the required charts, dimension and measure](../../images/tutorial/2.1/Qlik/add_dimension.png)
-
-You will get your worksheet and the connection is complete. Your Apache Kylin data now shows in Qlik Sense.
-
-![View data in Qlik Sense](../../images/tutorial/2.1/Qlik/report.png)
-
-Please note that if you want the report to hit the Cube, you need to define the measures exactly as they are defined in the Cube. For the case of *Kylin_sales_cube* in the Learn_kylin project, we use `sum(price)` as an example. 
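-
-As a hedged illustration (the column names come from the bundled sample schema), a report that aggregates `PRICE` with `SUM` produces SQL of roughly this shape, which Kylin can route to the cube:
-
-```SQL
--- Hypothetical aggregate query of the shape Qlik Sense generates in
--- Direct Query mode; it matches the SUM(PRICE) measure of the sample cube.
-SELECT PART_DT, SUM(PRICE)
-FROM KYLIN_SALES
-GROUP BY PART_DT
-```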
diff --git a/website/_docs21/tutorial/acl.cn.md b/website/_docs21/tutorial/acl.cn.md
deleted file mode 100644
index 53cf83c..0000000
--- a/website/_docs21/tutorial/acl.cn.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin Cube Permission Grant Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/acl.html
-version: v1.2
-since: v0.7.1
----
-
-> Since v2.2.0, the Cube ACL feature has been removed; please use the [Project level ACL](/docs21/tutorial/project_level_acl.html) for permission management.
-
-In the `Cubes` page, double-click a cube row to see its details. Here we focus on the `Access` tab.
-Click the `+Grant` button to grant permission.
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-There are four different kinds of permission for a cube. Move your mouse over the `?` icon to see the details.
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-There are also two types of grantee: `User` and `Role`. A `Role` is a group of users who share the same permissions.
-
-### 1. Grant User Permission
-* Select the `User` type, enter the username of the user you want to grant and select the related permission.
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a user's permission. Click the `Revoke` button to delete a user with permission.
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. Grant Role Permission
-* Select the `Role` type, choose the group of users you want to grant by clicking the drop-down button, and select a permission.
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a group's permission. Click the `Revoke` button to delete a group with permission.
diff --git a/website/_docs21/tutorial/acl.md b/website/_docs21/tutorial/acl.md
deleted file mode 100644
index 6b7d3c6..0000000
--- a/website/_docs21/tutorial/acl.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-layout: docs21
-title: Cube Permission (v2.1.x)
-categories: tutorial
-permalink: /docs21/tutorial/acl.html
-since: v0.7.1
----
-
-```
-Notes:
-Cube ACL has been removed since v2.2.0; please use [Project level ACL](/docs21/tutorial/project_level_acl.html) to manage ACLs.
-```
-
-In `Cubes` page, double click the cube row to see the detail information. Here we focus on the `Access` tab.
-Click the `+Grant` button to grant permission. 
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see detail information. 
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-There are also two types of grantee to which a permission can be granted: `User` and `Role`. `Role` means a group of users who have the same role.
-
-### 1. Grant User Permission
-* Select `User` type, enter the username of the user you want to grant and select the related permission. 
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a user's permission. To delete a user with permission, just click the `Revoke` button.
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. Grant Role Permission
-* Select `Role` type, choose the group of users that you want to grant by clicking the drop-down button, and select a permission.
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a group's permission. To delete a group with permission, just click the `Revoke` button.
diff --git a/website/_docs21/tutorial/create_cube.cn.md b/website/_docs21/tutorial/create_cube.cn.md
deleted file mode 100644
index e1f076d..0000000
--- a/website/_docs21/tutorial/create_cube.cn.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin Cube Creation Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/create_cube.html
-version: v1.2
-since: v0.7.1
----
-  
-  
-### I. Create a Project
-1. Go to the `Query` page from the top menu bar, then click `Manage Projects`.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. Click the `+ Project` button to add a new project.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
-
-3. Fill in the following form and click the `submit` button to send the request.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. After success, a notification shows at the bottom.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. Sync up a Table
-1. Click `Tables` in the top menu bar, then click the `+ Sync` button to load Hive table metadata.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/4 %2Btable.png)
-
-2. Enter the table names and click the `Sync` button to send the request.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-### III. Create a Cube
-First, click `Cubes` in the top menu bar, then click the `+Cube` button to enter the cube designer page.
-
-![](/images/Kylin-Cube-Creation-Tutorial/6 %2Bcube.png)
-
-**Step 1. Cube Info**
-
-Fill in the basic cube information. Click `Next` to go to the next step.
-
-You can use letters, numbers and '_' to name your cube (spaces are not allowed in the name).
-
-![](/images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-
-**Step 2. Dimensions**
-
-1. Set up the fact table.
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
-
-2. Click the `+Dimension` button to add a new dimension.
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-%2Bdim.png)
-
-3. Different types of dimensions can be selected and added to a cube. We list some of them here for your reference.
-
-    * Dimensions from the fact table.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
-
-    * Dimensions from a lookup table.
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
-
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
-   
-    * Dimensions from a lookup table with a hierarchy.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
-
-    * Dimensions from a lookup table with derived dimensions.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
-
-4. Dimensions can be edited after they are saved.
-   ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
-
-**Step 3. Measures**
-
-1. Click the `+Measure` button to add a new measure.
-   ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-%2Bmeas.png)
-
-2. There are 5 different types of measure according to the expression: `SUM`, `MAX`, `MIN`, `COUNT` and `COUNT_DISTINCT`. Please choose the return type carefully, as it is related to the error rate of `COUNT(DISTINCT)`.
-   * SUM
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
-
-   * MIN
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
-
-   * MAX
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
-
-   * COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
-
-   * DISTINCT_COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
-
-**Step 4. Filter**
-
-This step is optional. You can add condition filters in `SQL` format.
-
-![](/images/Kylin-Cube-Creation-Tutorial/10 filter.png)
-
-**Step 5. Refresh Setting**
-
-This step is designed for incremental cube builds.
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
-
-Select the partition type, partition column and start date.
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
-
-**Step 6. Advanced Setting**
-
-![](/images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
-
-**Step 7. Overview & Save**
-
-You can review your cube and go back to previous steps to make modifications. Click the `Save` button to complete the cube creation.
-
-![](/images/Kylin-Cube-Creation-Tutorial/13 overview.png)
diff --git a/website/_docs21/tutorial/create_cube.md b/website/_docs21/tutorial/create_cube.md
deleted file mode 100644
index 52fa6fd..0000000
--- a/website/_docs21/tutorial/create_cube.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-layout: docs21
-title:  Cube Wizard
-categories: tutorial
-permalink: /docs21/tutorial/create_cube.html
----
-
-This tutorial will guide you through creating a cube. It requires that you have at least one sample table in Hive; if you don't, you can create some sample data first.
-  
-### I. Create a Project
-1. Go to `Query` page in top menu bar, then click `Manage Projects`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. Click the `+ Project` button to add a new project.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/2 +project.png)
-
-3. Enter a project name, e.g., "Tutorial", with a description (optional), then click the `submit` button to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. After success, the project will show in the table.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. Sync up Hive Table
-1. Click `Model` in the top bar and then click the `Data Source` tab on the left; it lists all the tables loaded into Kylin. Click the `Load Hive Table` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
-
-2. Enter the Hive table names, separated by commas, and then click `Sync` to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-3. [Optional] If you want to browse the Hive database to pick tables, click the `Load Hive Table From Tree` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
-
-4. [Optional] Expand the database node, click to select the table to load, and then click `Sync`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
-
-5. A success message will pop up. In the left `Tables` section, the newly loaded table is added. Clicking the table name will expand the columns.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
-
-6. In the background, Kylin will run a MapReduce job to calculate the approximate cardinality for the newly synced table. After the job finishes, refresh the web page and then click the table name; the cardinality will be shown in the table info.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
-
-
-### III. Create Data Model
-Before creating a cube, you need to define a data model. The data model defines the star schema. One data model can be reused by multiple cubes.
-
-1. Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button and select `New Model` from the drop-down list.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
-
-2. Enter a name for the model, with an optional description.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
-
-3. In the `Fact Table` box, select the fact table of this data model.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
-
-4. [Optional] Click the `Add Lookup Table` button to add a lookup table. Select the table name and the join type (inner or left).
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
-
-5. [Optional] Click the `New Join Condition` button, select the FK column of the fact table on the left, and select the PK column of the lookup table on the right side. Repeat this if there is more than one join column.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
-
-6. Click "OK", repeat step 4 and 5 to add more lookup tables if any. After finished, click "Next".
-
-7. The "Dimensions" page allows to select the columns that will be used as dimension in the child cubes. Click the `Columns` cell of a table, in the drop-down list select the column to the list. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
-
-8. Click "Next" go to the "Measures" page, select the columns that will be used in measure/metrics. The measure column can only from fact table. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
-
-9. Click "Next" to the "Settings" page. If the data in fact table increases by day, select the corresponding date column in the `Partition Date Column`, and select the date format, otherwise leave it as blank.
-
-10. [Optional] Select `Cube Size`, which is an indicator on the scale of the cube, by default it is `MEDIUM`.
-
-11. [Optional] If some records want to excluded from the cube, like dirty data, you can input the condition in `Filter`.
-
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
-
-12. Click `Save` and then select `Yes` to save the data model. Once created, the data model will be shown in the left `Models` list.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
-
-### IV. Create Cube
-After the data model is created, you can start creating the cube. 
-
-Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button, and in the drop-down list select `New Cube`.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
-
-
-**Step 1. Cube Info**
-
-Select the data model and enter the cube name; click `Next` to enter the next step.
-
-You can use letters, numbers and '_' to name your cube (blank space in the name is not allowed). `Notification List` is a list of email addresses which will be notified on cube job success/failure.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-    
-
-**Step 2. Dimensions**
-
-1. Click `Add Dimension`; it pops up two options: "Normal" and "Derived". "Normal" adds a normal independent dimension column, while "Derived" adds a derived dimension column. Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
-
-2. Click "Normal" and then select a dimension column, give it a meaningful name.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-normal.png)
-    
-3. [Optional] Click "Derived" and then pickup 1 more multiple columns on lookup table, give them a meaningful name.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-derived.png)
-
-4. Repeat steps 2 and 3 to add all dimension columns; you can do this in batch for "Normal" dimensions with the `Auto Generator` button. 
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
-
-5. Click "Next" after select all dimensions.
-
-**Step 3. Measures**
-
-1. Click the `+Measure` button to add a new measure.
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
-
-2. There are 6 types of measure according to the expression: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT` and `TOP_N`. Properly select the return type for `COUNT_DISTINCT` and `TOP_N`, as it will impact the cube size.
-   * SUM
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
-
-   * MIN
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
-
-   * MAX
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
-
-   * COUNT
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
-
-   * DISTINCT_COUNT
-   This measure has two implementations: 
-   a) an approximate implementation with HyperLogLog: select an acceptable error rate; a lower error rate takes more storage.
-   b) a precise implementation with bitmap (see the limitations in https://issues.apache.org/jira/browse/KYLIN-1186). 
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
-
-   Please note: distinct count is a very heavy measure; it is slower to build and query compared to other measures.
-
-   * TOP_N
-   The approximate TopN measure pre-calculates the top records in each dimension combination, which provides much better query performance than having no pre-calculation. You need to specify two parameters here: the first is the column that will be used as the metric for the top records (aggregated with SUM and then sorted in descending order); the second is the literal ID, which represents the record, like seller_id.
-
-   Properly select the return type depending on how many top records you want to inspect: top 10, top 100 or top 1000. An example query that such a measure serves is shown after this list.
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
-
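-Below is a minimal sketch of the kind of query a TopN measure accelerates, using the bundled sample tables ("top 100 sellers by revenue"); the column choices mirror the sample cube and are illustrative:
-
-```SQL
--- Hypothetical example: a TOP_N(100) measure on PRICE (SUM) with SELLER_ID
--- as the literal ID can answer this from the pre-calculated top records.
-SELECT SELLER_ID, SUM(PRICE)
-FROM KYLIN_SALES
-GROUP BY SELLER_ID
-ORDER BY SUM(PRICE) DESC
-LIMIT 100
-```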
-
-**Step 4. Refresh Setting**
-
-This step is designed for incremental cube builds. 
-
-`Auto Merge Time Ranges (days)`: merge the small segments into medium and large segments automatically. If you don't want auto merge, remove the two default ranges.
-
-`Retention Range (days)`: only keep segments whose data falls within the given number of past days; older segments will be automatically dropped from the head. 0 means this feature is disabled.
-
-`Partition Start Date`: the start date of this cube.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
-
-**Step 5. Advanced Setting**
-
-`Aggregation Groups`: by default Kylin puts all dimensions into one aggregation group; you can create multiple aggregation groups if you know your query patterns well. For the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions", read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
-
-`Rowkeys`: the rowkeys are composed of the dimension-encoded values. "Dictionary" is the default encoding method; if a dimension is not a good fit for dictionary encoding (e.g., cardinality > 10 million), select "false" and then enter a fixed length for that dimension, usually the max length of that column; if a value is longer than that size it will be truncated. Please note, without dictionary encoding the cube size might be much bigger.
-
-You can drag & drop a dimension column to adjust its position in the rowkey. Put the mandatory dimensions at the beginning, followed by the dimensions heavily involved in filters (where conditions). Put high-cardinality dimensions ahead of low-cardinality dimensions.
-
-
-**Step 6. Overview & Save**
-
-You can overview your cube and go back to previous step to modify it. Click the `Save` button to complete the cube creation.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 overview.png)
-
-Cheers! Now the cube is created; you can go ahead to build and play with it.
diff --git a/website/_docs21/tutorial/cube_build_job.cn.md b/website/_docs21/tutorial/cube_build_job.cn.md
deleted file mode 100644
index 09390fb..0000000
--- a/website/_docs21/tutorial/cube_build_job.cn.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin Cube Build and Job Monitoring Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/cube_build_job.html
-version: v1.2
-since: v0.7.1
----
-
-### Cube Build
-First of all, make sure that you have permission on the cube you want to build.
-
-1. In the `Cubes` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. A pop-up window appears after the selection.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
-
-3. Click the `END DATE` input box to select the end date of this incremental cube build.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-4. Click `Submit` to send the request.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
-
-   After the request is submitted successfully, you will see the new job in the `Jobs` page.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
-
-5. To discard this job, click the `Discard` button.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-### Job Monitoring
-In the `Jobs` page, click the job detail button to see the details shown on the right side.
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-The job details provide a step-by-step record to trace a job. You can hover over a step status icon to see its basic status and information.
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-Click the icon buttons shown in each step to see the details: `Parameters`, `Log`, `MRJob`, `EagleMonitoring`.
-
-* Parameters
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
diff --git a/website/_docs21/tutorial/cube_build_job.md b/website/_docs21/tutorial/cube_build_job.md
deleted file mode 100644
index 63a5210..0000000
--- a/website/_docs21/tutorial/cube_build_job.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-layout: docs21
-title:  Cube Build and Job Monitoring
-categories: tutorial
-permalink: /docs21/tutorial/cube_build_job.html
----
-
-### Cube Build
-First of all, make sure that you have permission on the cube you want to build.
-
-1. In the `Models` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. A window pops up after the selection; click the `END DATE` input box to select the end date of this incremental cube build.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-4. Click `Submit` to send the build request. After success, you will see the new job in the `Monitor` page.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
-
-5. The new job is in "pending" status; after a while, it will start to run and you can see the progress by refreshing the web page or clicking the refresh button.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
-
-
-6. Wait for the job to finish. In between, if you want to discard it, click the `Actions` -> `Discard` button.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-7. After the job is 100% finished, the cube's status becomes "Ready", which means it is ready to serve SQL queries. In the `Model` tab, find the cube and click the cube name to expand the section; the "HBase" tab lists the cube segments. Each segment has a start/end time; its underlying HBase table information is also listed.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
-
-If you have more source data, repeat the steps above to build it into the cube.
-
-### Job Monitoring
-In the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-The detail information of a job provides a step-by-step record to trace it. You can hover over a step status icon to see the basic status and information.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-Click the icon buttons showing in each step to see the details: `Parameters`, `Log`, `MRJob`.
-
-* Parameters
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
-
-
diff --git a/website/_docs21/tutorial/cube_build_performance.md b/website/_docs21/tutorial/cube_build_performance.md
deleted file mode 100755
index 8be4bda..0000000
--- a/website/_docs21/tutorial/cube_build_performance.md
+++ /dev/null
@@ -1,266 +0,0 @@
----
-layout: docs21
-title: Cube Build Tuning
-categories: tutorial
-permalink: /docs21/tutorial/cube_build_performance.html
----
- *This tutorial is a step-by-step example of how to optimize a cube build.* 
- 
-In this scenario we're trying to optimize a very simple Cube, with 1 fact and 1 lookup table (Date Dimension). Before doing real tuning, please get an overall understanding of the Cube build process from [Optimize Cube Build](/docs20/howto/howto_optimize_build.html)
-
-![]( /images/tutorial/2.0/cube_build_performance/01.png)
-
-The baseline is:
-
-* One measure: Balance, always calculating Max, Min and Count
-* All Dim_date (10 items) will be used as dimensions 
-* Input is a Hive CSV external table 
-* Output is a Cube in HBase without compression 
-
-With this configuration, the result is: 13 min to build a cube of 20 MB (Cube_01)
-
-### Cube_02: Reduce combinations
-To make the first improvement, use Joint and Hierarchy on Dimensions to reduce the combinations (number of cuboids).
-
-Put together all the IDs and Texts of Month, Week, Weekday and Quarter using a Joint Dimension
-
-![]( /images/tutorial/2.0/cube_build_performance/02.png)
-
-	
-Define Id_date and Year as a Hierarchy Dimension
-
-This reduces the size down to 0.72 MB and the time to 5 min
-
-Per [KYLIN-2149](https://issues.apache.org/jira/browse/KYLIN-2149), ideally these hierarchies could also be defined:
-* Id_weekday > Id_date
-* Id_Month > Id_date
-* Id_Quarter > Id_date
-* Id_week > Id_date
-
-But for now, it is impossible to use Joint and Hierarchy together on one dimension.
-
-
-### Cube_03: Compress output
-To make the next improvement, compress the HBase Cube with Snappy:
-
-![alt text](/images/tutorial/2.0/cube_build_performance/03.png)
-
-Another option is Gzip:
-
-![alt text](/images/tutorial/2.0/cube_build_performance/04.png)
-
-
-The results of compressing the output are:
-
-![alt text](/images/tutorial/2.0/cube_build_performance/05.png)
-
-The difference between Snappy and Gzip in time is less than 1%, but in size it is 18%
-
-
-### Cube_04: Compress Hive table
-The time distribution is like this:
-
-![]( /images/tutorial/2.0/cube_build_performance/06.png)
-
-
-Grouping the detailed times by concept:
-
-![]( /images/tutorial/2.0/cube_build_performance/07.png)
-
-67% is used to build/process the flat table and about 30% to build the cube
-
-A lot of time is used in the first steps.
-
-This time distribution is typical of a cube with few measures and few dimensions (or a very optimized one)
-
-
-Try using the ORC format and compression (Snappy) on the Hive input table:
-
-![]( /images/tutorial/2.0/cube_build_performance/08.png)
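-
-As a hedged sketch (assuming the source table is reloaded via CTAS; the target table name is illustrative), converting the input table to Snappy-compressed ORC can look like this:
-
-{% highlight Groff markup %}
--- Illustrative only: store a copy of the fact table as ORC + Snappy.
-CREATE TABLE FACT_POSICIONES_ORC
-STORED AS ORC
-TBLPROPERTIES ('orc.compress'='SNAPPY')
-AS SELECT * FROM FACT_POSICIONES;
-{% endhighlight %}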
-
-
-The time of the first three steps (flat table) has been cut in half.
-
-Other columnar formats can be tested:
-
-![]( /images/tutorial/2.0/cube_build_performance/19.png)
-
-
-* ORC
-* ORC compressed with Snappy
-
-But the results are worse than when using a sequence file.
-
-See comments about this here: [Shaofengshi in MailList](http://apache-kylin.74782.x6.nabble.com/Kylin-Performance-td6713.html#a6767)
-
-The second step is to redistribute the flat Hive table:
-
-![]( /images/tutorial/2.0/cube_build_performance/20.png)
-
-This is a simple row count; two approximations can be made:
-* If it doesn’t need to be accurate, the rows of the fact table can be counted → this can be performed in parallel with step 1 (and 99% of the time it will be accurate)
-
-![]( /images/tutorial/2.0/cube_build_performance/21.png)
-
-
-* In future versions (KYLIN-2165, v2.0), this step will be implemented using Hive table statistics.
-
-
-
-### Cube_05: Partition Hive table (fail)
-The distribution of rows is:
-
-Table | Rows
---- | --- 
-Fact Table | 3,900,000 
-Dim Date | 2,100 
-
-And the query (the simplified version) to build the flat table is:
-{% highlight Groff markup %}
-```sql
-SELECT
- DIM_DATE.X
-,DIM_DATE.Y
-,FACT_POSICIONES.BALANCE
-FROM FACT_POSICIONES INNER JOIN DIM_DATE 
-	ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA
-WHERE (ID_DATE >= '2016-12-08' AND ID_DATE < '2016-12-23')
-```
-{% endhighlight %}
-
-The problem here is that Hive is only using one mapper to create the flat table. We need to change this behavior. The solution is to partition the DIM and FACT tables on the same columns.
-
-* Option 1: Use id_date as a partition column on the Hive table. This has a big problem: the Hive metastore is meant for a few hundred partitions, not thousands (in [HIVE-9452](https://issues.apache.org/jira/browse/HIVE-9452) there is an idea to solve this, but it isn't finished yet)
-* Option 2: Generate a new column for this purpose, like Monthslot.
-
-![]( /images/tutorial/2.0/cube_build_performance/09.png)
-
-
-Add the same column to the dim and fact tables
-
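-As a hedged sketch (the table names and the 'yyyyMM' slot format are assumptions; the actual DDL depends on how the tables are loaded), the shared partition key can be created like this:
-
-{% highlight Groff markup %}
--- Hypothetical sketch: give both tables the same MONTHSLOT partition key,
--- e.g. '201612', populated from the date column via dynamic partitioning.
-SET hive.exec.dynamic.partition=true;
-SET hive.exec.dynamic.partition.mode=nonstrict;
-
-CREATE TABLE FACT_POSICIONES_P (ID_FECHA DATE, BALANCE DOUBLE)  -- plus the remaining columns
-PARTITIONED BY (MONTHSLOT STRING);
-
-INSERT OVERWRITE TABLE FACT_POSICIONES_P PARTITION (MONTHSLOT)
-SELECT ID_FECHA, BALANCE, DATE_FORMAT(ID_FECHA, 'yyyyMM')
-FROM FACT_POSICIONES;
-{% endhighlight %}
-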
-Now, update the data model with this new join condition
-
-![]( /images/tutorial/2.0/cube_build_performance/10.png)
-
-	
-The new query to generate the flat table will be similar to:
-{% highlight Groff markup %}
-```sql
-SELECT *
-	FROM FACT_POSICIONES INNER JOIN DIM_DATE 
-		ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA AND FACT_POSICIONES.MONTHSLOT = DIM_DATE.MONTHSLOT
-```
-{% endhighlight %}
-
-Rebuild the new cube with this data model
-
-As a result, the performance has worsened :( . After several attempts, no solution has been found
-
-![]( /images/tutorial/2.0/cube_build_performance/11.png)
-
-
-The problem is that the partitions were not used to generate several mappers
-
-![]( /images/tutorial/2.0/cube_build_performance/12.png)
-
-	
-(I checked this issue with ShaoFeng Shi. He thinks the problem is that there are too few rows and we are not working with a real Hadoop cluster. See this [tech note](http://kylin.apache.org/docs16/howto/howto_optimize_build.html)).
-	
-
-### Summary of results
-
-![]( /images/tutorial/2.0/cube_build_performance/13.png)
-
-
-The tuning process has been:
-* Compress the Hive input tables
-* Compress the HBase output
-* Apply cardinality-reduction techniques (Joint, Derived, Hierarchy and Mandatory)
-* Customize the dimension encoder for each dimension and choose the best order of dimensions in the row key
-
-
-
-Now, there are three types of cubes:
-* Cubes with low cardinality in their dimensions (like cube 4: most of the time is spent in the flat table steps)
-* Cubes with high cardinality in their dimensions (like cube 6: most of the time is spent on the Build Cube step, and the flat table steps take less than 10%)
-* The third type, ultra-high cardinality (UHC), which is outside the scope of this article
-
-
-### Cube 6: Cube with high cardinality Dimensions
-
-![]( /images/tutorial/2.0/cube_build_performance/22.png)
-
-In this case, **72%** of the time is used to build the cube
-
-This step is a MapReduce task; you can see the YARN logs of these steps via ![alt text](/images/tutorial/2.0/cube_build_performance/23.png) > ![alt text](/images/tutorial/2.0/cube_build_performance/24.png) 
-
-How can the performance of MapReduce be improved? The easy way is to increase the number of mappers and reducers (= increase parallelism).
-
-
-![]( /images/tutorial/2.0/cube_build_performance/25.png)
-
-
-**NOTE:** YARN / MapReduce has a lot of parameters to configure and adapt to your system. The focus here is only on a small subset. 
-
-(In my system I can assign 12 – 14 GB and 8 cores to YARN Resources):
-
-* yarn.nodemanager.resource.memory-mb = 15 GB
-* yarn.scheduler.maximum-allocation-mb = 8 GB
-* yarn.nodemanager.resource.cpu-vcores = 8 cores
-With this config, our maximum theoretical degree of parallelism is 8. However, this has a problem: “Timed out after 3600 secs”
-
-![]( /images/tutorial/2.0/cube_build_performance/26.png)
-
-
-The parameter mapreduce.task.timeout (1 hour by default) defines the max time that the Application Master (AM) can go without an ACK from a YARN container. Once this time passes, the AM kills the container and retries the same 4 times (with the same result)
-
-Where is the problem? The problem is that 4 mappers started, but each mapper needed more than 4 GB to finish
-
-* Solution 1: add more RAM to YARN 
-* Solution 2: increase the number of vCores used in the mapper step to reduce the RAM used
-* Solution 3: play with the max RAM for YARN per node (yarn.nodemanager.resource.memory-mb) and experiment with the minimum RAM per container (yarn.scheduler.minimum-allocation-mb). If you increase the minimum RAM per container, YARN will reduce the number of mappers     
-
-![]( /images/tutorial/2.0/cube_build_performance/27.png)
-
-
-In the last two cases the result is the same: a reduced level of parallelism ==> 
-* Now only 3 mappers start at the same time; the fourth must wait for a free slot
-* The first three mappers spread the RAM among themselves, and as a result they have enough RAM to finish the task
-
-During a normal “Build Cube” step you will see similar messages in the YARN log:
-
-![]( /images/tutorial/2.0/cube_build_performance/28.png)
-
-
-If you don’t see these periodically, perhaps you have a memory bottleneck.
-
-
-
-### Cube 7: Improve cube response time
-We can try using different aggregation groups to improve the query performance of some very important dimensions or dimensions with high cardinality.
-
-In our case we define 3 aggregation groups: 
-1. The “normal” cube
-2. A cube with the Date dimension and Currency (as mandatory)
-3. A cube with the Date dimension and Carteras_Desc (as mandatory)
-
-![]( /images/tutorial/2.0/cube_build_performance/29.png)
-
-
-![]( /images/tutorial/2.0/cube_build_performance/30.png)
-
-
-![]( /images/tutorial/2.0/cube_build_performance/31.png)
-
-
-
-Comparing without / with the aggregation groups:
-
-![]( /images/tutorial/2.0/cube_build_performance/32.png)
-
-
-Now it takes 3% more time to build the cube and 0.6% more space, but queries by Currency or Carteras_Desc will be much faster. An illustrative query of this kind is sketched below.
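-
-As a hedged example (CURRENCY and ID_DATE stand in for the actual dimension columns, which are not spelled out in this tutorial), a query shaped like this can be served by the small "Date + Currency" aggregation group:
-
-{% highlight Groff markup %}
--- Hypothetical query; BALANCE is aggregated with MAX/MIN/COUNT in this cube.
-SELECT ID_DATE, CURRENCY, MAX(BALANCE), COUNT(*)
-FROM FACT_POSICIONES
-GROUP BY ID_DATE, CURRENCY
-{% endhighlight %}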
-
-
-
-
diff --git a/website/_docs21/tutorial/cube_spark.md b/website/_docs21/tutorial/cube_spark.md
deleted file mode 100644
index 5400309..0000000
--- a/website/_docs21/tutorial/cube_spark.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-layout: docs21
-title:  Build Cube with Spark
-categories: tutorial
-permalink: /docs21/tutorial/cube_spark.html
----
-Kylin v2.0 introduces the Spark cube engine, which uses Apache Spark to replace MapReduce in the build cube step; you can check [this blog](/blog/2017/02/23/by-layer-spark-cubing/) for an overall picture. This document uses the sample cube to demonstrate how to try the new engine.
-
-
-## Preparation
-To finish this tutorial, you need a Hadoop environment which has Kylin v2.1.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM; the Hadoop components as well as Hive/HBase have already been started. 
-
-## Install Kylin v2.1.0 or above
-
-Download Kylin v2.1.0 for HBase 1.x from Kylin's download page, and then uncompress the tarball into the */usr/local/* folder:
-
-{% highlight Groff markup %}
-
-wget http://www-us.apache.org/dist/kylin/apache-kylin-2.1.0/apache-kylin-2.1.0-bin-hbase1x.tar.gz -P /tmp
-
-tar -zxvf /tmp/apache-kylin-2.1.0-bin-hbase1x.tar.gz -C /usr/local/
-
-export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin-hbase1x
-{% endhighlight %}
-
-## Prepare "kylin.env.hadoop-conf-dir"
-
-To run Spark on YARN, you need to specify the **HADOOP_CONF_DIR** environment variable, which is the directory that contains the (client side) configuration files for Hadoop. In many Hadoop distributions the directory is "/etc/hadoop/conf"; but Kylin needs to access not only HDFS, YARN and Hive, but also HBase, so the default directory might not have all the necessary files. In this case, you need to create a new directory and then copy or link those client files (core-site.xml, hdfs-site.xml, yarn-site [...]
-
-{% highlight Groff markup %}
-
-mkdir $KYLIN_HOME/hadoop-conf
-ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
-ln -s /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml 
-ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
-ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
-cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
-vi $KYLIN_HOME/hadoop-conf/hive-site.xml (change "hive.execution.engine" value from "tez" to "mr")
-
-{% endhighlight %}
-
-Now, let Kylin know this directory with property "kylin.env.hadoop-conf-dir" in kylin.properties:
-
-{% highlight Groff markup %}
-kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.1.0-bin-hbase1x/hadoop-conf
-{% endhighlight %}
-
-If this property isn't set, Kylin will use the directory that "hive-site.xml" is located in; since that folder may have no "hbase-site.xml", you will get HBase/ZK connection errors in Spark.
-
-## Check Spark configuration
-
-Kylin embeds a Spark binary (v2.1.0) in $KYLIN_HOME/spark; all the Spark configurations can be managed in $KYLIN_HOME/conf/kylin.properties with the prefix *"kylin.engine.spark-conf."*. These properties will be extracted and applied when submitting a Spark job; e.g., if you configure "kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf spark.executor.memory=4G" as a parameter when executing "spark-submit".
-
-Before you run Spark cubing, we suggest taking a look at these configurations and customizing them according to your cluster. Below are the default configurations, which are also the minimal config for a sandbox (1 executor with 1 GB memory); usually in a normal cluster, you need many more executors, each with at least 4 GB memory and 2 cores:
-
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.master=yarn
-kylin.engine.spark-conf.spark.submit.deployMode=cluster
-kylin.engine.spark-conf.spark.yarn.queue=default
-kylin.engine.spark-conf.spark.executor.memory=1G
-kylin.engine.spark-conf.spark.executor.cores=2
-kylin.engine.spark-conf.spark.executor.instances=1
-kylin.engine.spark-conf.spark.eventLog.enabled=true
-kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
-kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
-
-#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
-
-## uncomment for HDP
-#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
-#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
-#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
-
-{% endhighlight %}
-
-To run on the Hortonworks platform, you need to specify "hdp.version" in the Java options for the YARN containers, so please uncomment the last three lines in kylin.properties. 
-
-Besides, in order to avoid repeatedly uploading Spark jars to YARN, you can manually upload them once and then configure the jar's HDFS location; please note the HDFS location needs to be a fully qualified name.
-
-{% highlight Groff markup %}
-jar cv0f spark-libs.jar -C $KYLIN_HOME/spark/jars/ .
-hadoop fs -mkdir -p /kylin/spark/
-hadoop fs -put spark-libs.jar /kylin/spark/
-{% endhighlight %}
-
-After doing that, the config in kylin.properties will be:
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.yarn.archive=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-libs.jar
-kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
-kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
-kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
-{% endhighlight %}
-
-All the "kylin.engine.spark-conf.*" parameters can be overwritten at the Cube or Project level, which gives more flexibility to the user.
-
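-For example, to give one heavy cube bigger executors than the instance-level default, you could add overrides like these (illustrative values, following the sizing advice above) in that cube's "Configuration Overwrites" page:
-
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.executor.memory=4G
-kylin.engine.spark-conf.spark.executor.instances=40
-{% endhighlight %}
-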
-## Create and modify sample cube
-
-Run sample.sh to create the sample cube, and then start the Kylin server:
-
-{% highlight Groff markup %}
-
-$KYLIN_HOME/bin/sample.sh
-$KYLIN_HOME/bin/kylin.sh start
-
-{% endhighlight %}
-
-After Kylin is started, access the Kylin web UI and edit the "kylin_sales" cube; on the "Advanced Setting" page, change the "Cube Engine" from "MapReduce" to "Spark":
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
-
-Click "Next" to the "Configuration Overwrites" page, click "+Property" to add property "kylin.engine.spark.rdd-partition-cut-mb" with value "500" (reasons below):
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
-
-The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a "TOPN(100)". Their size estimation can be inaccurate when the source data is small: the estimated size is much larger than the real size, which causes many more RDD partitions to be split and slows down the build. Here 500 is a more reasonable number for it. Click "Next" and "Save" to save the cube.
-
-
-## Build Cube with Spark
-
-Click "Build", select current date as the build end date. Kylin generates a build job in the "Monitor" page, in which the 7th step is the Spark cubing. The job engine starts to execute the steps in sequence. 
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
-
-When Kylin executes this step, you can monitor the status in the YARN resource manager. Clicking the "Application Master" link will open the Spark web UI, which shows the progress of each stage and detailed information.
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
-
-
-After all steps are successfully executed, the Cube becomes "Ready" and you can query it as usual.
-
-## Troubleshooting
-
-When you get an error, you should check "logs/kylin.log" first. It contains the full Spark command that Kylin executes, e.g.:
-
-{% highlight Groff markup %}
-2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/usr/local/apache-kylin-2.1.0-bin-hbase1x/hadoop-conf && /usr/local/apache-kylin-2.1.0-bin-hbase1x/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-his [...]
-
-{% endhighlight %}
-
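-A quick way to locate that command in a long log is to grep for it (the path assumes the install location used in this tutorial):
-
-{% highlight Groff markup %}
-grep "spark-submit" $KYLIN_HOME/logs/kylin.log
-{% endhighlight %}
-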
-You can copy the command and execute it manually in a shell, then tune the parameters quickly; during the execution, you can access the YARN resource manager to check more details. If the job has already finished, you can check the history info in the Spark history server. 
-
-By default Kylin outputs the history to "hdfs:///kylin/spark-history"; you need to start a Spark history server on that directory, or change it to use your existing Spark history server's event directory in conf/kylin.properties with the parameters "kylin.engine.spark-conf.spark.eventLog.dir" and "kylin.engine.spark-conf.spark.history.fs.logDirectory".
-
-The following command will start a Spark history server instance on Kylin's output directory; before running it, make sure you have stopped the existing Spark history server in the sandbox:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
-{% endhighlight %}
-
-In a web browser, access "http://sandbox:18080"; it shows the job history:
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
-
-Click a specific job and you will see the detailed runtime information; that is very helpful for troubleshooting and performance tuning.
-
-## Go further
-
-If you're a Kylin administrator but new to Spark, we suggest you go through the [Spark documents](https://spark.apache.org/docs/2.1.0/), and don't forget to update the configurations accordingly. You can enable Spark [Dynamic Resource Allocation](https://spark.apache.org/docs/2.1.0/job-scheduling.html#dynamic-resource-allocation) so that it can auto-scale/shrink for different workloads. Spark's performance relies on the cluster's memory and CPU resources, while Kylin's Cube build is a heavy task whe [...]
-
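-As a starting point for dynamic allocation, the overrides would look like the sketch below (standard Spark 2.1 properties behind Kylin's prefix; the min/max values are only examples, and the external shuffle service must be enabled on your YARN NodeManagers):
-
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.shuffle.service.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
-kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=20
-{% endhighlight %}
-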
-If you have any questions, comments, or bug fixes, you are welcome to discuss them on dev@kylin.apache.org.
diff --git a/website/_docs21/tutorial/cube_streaming.md b/website/_docs21/tutorial/cube_streaming.md
deleted file mode 100644
index dd5eba2..0000000
--- a/website/_docs21/tutorial/cube_streaming.md
+++ /dev/null
@@ -1,219 +0,0 @@
----
-layout: docs21
-title:  Scalable Cubing from Kafka
-categories: tutorial
-permalink: /docs21/tutorial/cube_streaming.html
----
-Kylin v1.6 released the scalable streaming cubing function; it leverages Hadoop to consume data from Kafka and build the cube. You can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This doc is a step-by-step tutorial illustrating how to create and build a sample cube.
-
-## Preparation
-To finish this tutorial, you need a Hadoop environment which has Kylin v1.6.0 or above installed, and a Kafka (v0.10.0 or above) running; previous Kylin versions have a couple of issues, so please upgrade your Kylin instance first.
-
-In this tutorial, we will use Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) as the environment.
-
-## Install Kafka 0.10.0.0 and Kylin
-Don't use HDP 2.2.4's built-in Kafka as it is too old; stop it first if it is running.
-{% highlight Groff markup %}
-curl -s https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
-
-cd /usr/local/kafka_2.10-0.10.0.0/
-
-bin/kafka-server-start.sh config/server.properties &
-
-{% endhighlight %}
-
-Download Kylin v1.6 from the download page, and expand the tar ball into the /usr/local/ folder.
-
-## Create sample Kafka topic and populate data
-
-Create a sample topic "kylin_streaming_topic", with 3 partitions:
-
-{% highlight Groff markup %}
-
-bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylin_streaming_topic
-Created topic "kylin_streaming_topic".
-{% endhighlight %}
-
-Put sample data into this topic; Kylin has a utility class which can do this:
-
-{% highlight Groff markup %}
-export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
-export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin
-
-cd $KYLIN_HOME
-./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylin_streaming_topic --broker localhost:9092
-{% endhighlight %}
-
-This tool will send 100 records to Kafka every second. Please keep it running during this tutorial. You can now check the sample messages with kafka-console-consumer.sh:
-
-{% highlight Groff markup %}
-cd $KAFKA_HOME
-bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylin_streaming_topic --from-beginning
-{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
-{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
-
- {% endhighlight %}
-
-## Define a table from streaming
-Start the Kylin server with "$KYLIN_HOME/bin/kylin.sh start", log in to the Kylin web GUI at http://sandbox:7070/kylin/, and select an existing project or create a new one; click "Model" -> "Data Source", then click the icon "Add Streaming Table";
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
-
-In the pop-up dialogue, enter a sample record which you got from the kafka-console-consumer and click the ">>" button; Kylin parses the JSON message and lists all the properties;
-
-You need to give a logical table name for this streaming data source; the name will be used in SQL queries later. Here enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
-
-You need to select a timestamp field which will be used to identify the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here check "order_time". You can deselect those properties which are not needed for the cube; here let's keep all fields.
-
-Notice that Kylin supports structured (or "embedded") messages from v1.6; it will convert them into a flat table structure, by default using "_" as the separator of the structured properties.
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
-
-
-Click "Next". On this page, provide the Kafka cluster information; Enter "kylin_streaming_topic" as "Topic" name; The cluster has 1 broker, whose host name is "sandbox", port is "9092", click "Save".
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
-
-In "Advanced setting" section, the "timeout" and "buffer size" are the configurations for connecting with Kafka, keep them. 
-
-In "Parser Setting", by default Kylin assumes your message is JSON format, and each record's timestamp column (specified by "tsColName") is a bigint (epoch time) value; in this case, you just need set the "tsColumn" to "order_time"; 
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
-
-In a real case, if the timestamp value is a string-valued timestamp like "Jul 20, 2016 9:59:17 AM", you need to specify the parser class with "tsParser" and the time pattern with "tsPattern", like this:
-
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
-
-Click "Submit" to save the configurations. Now a "Streaming" table is created.
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
-
-## Define data model
-With the table defined in the previous step, we can now create the data model. The step is almost the same as creating a normal data model, but it has two requirements:
-
-* A streaming Cube doesn't support joins with lookup tables; when defining the data model, only select the fact table, no lookup tables;
-* A streaming Cube must be partitioned; if you're going to build the Cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column. If at the hour level, select "HOUR_START".
-
-Here we pick 13 dimension columns and 2 measure columns:
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
-Save the data model.
-
-## Create Cube
-
-The streaming Cube is almost the same as a normal cube, but a couple of points need your attention:
-
-* The partition time column should be a dimension of the Cube. In Streaming OLAP the time is always a query condition, and Kylin will leverage this to narrow down the scanned partitions.
-* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest to use "mintue\_start", "hour\_start" or other, depends on how you will inspect the data.
-* Define "year\_start", "quarter\_start", "month\_start", "day\_start", "hour\_start", "minute\_start" as a hierarchy to reduce the combinations to calculate.
-* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 hours, 1 day, and then 7 days; This will help to control the cube segment number.
-* In the "rowkeys" section, drag&drop the "minute\_start" to the head position, as for streaming queries, the time condition is always appeared; putting it to head will help to narrow down the scan range.
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
-
-Save the cube.
-
-## Run a build
-
-You can trigger the build from the web GUI, by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
-{% endhighlight %}
-
-Please note the API endpoint is different from that of a normal cube (this URL ends with "build2").
-
-Here 0 means starting from the last position, and 9223372036854775807 (Long.MAX_VALUE) means going to the end position of the Kafka topic. If it is the first build (no previous segment), Kylin will seek to the beginning of the topic as the start position. 
-
-In the "Monitor" page, a new job is generated; Wait it 100% finished.
-
-## Click the "Insight" tab, compose a SQL to run, e.g:
-
- {% highlight Groff markup %}
-select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
- {% endhighlight %}
-
-The result looks like below.
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
-
-
-## Automate the build
-
-Once the first build and query succeed, you can schedule incremental builds at a certain frequency. Kylin will record the offsets of each build; when it receives a build request, it will start from the last end position and seek the latest offsets from Kafka. With the REST API you can trigger it with any scheduling tool, like Linux cron:
-
-  {% highlight Groff markup %}
-crontab -e
-*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
- {% endhighlight %}
-
-Now you can sit down and watch the cube be automatically built from streaming. And when the cube segments accumulate to a bigger time range, Kylin will automatically merge them into a bigger segment.
-
-## Troubleshooting
-
- * You may encounter the following error when running "kylin.sh":
-{% highlight Groff markup %}
-Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
-	at java.lang.Class.getDeclaredMethods0(Native Method)
-	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
-	at java.lang.Class.getMethod0(Class.java:2856)
-	at java.lang.Class.getMethod(Class.java:1668)
-	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
-	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
-Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
-	at java.security.AccessController.doPrivileged(Native Method)
-	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
-	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
-	... 6 more
-{% endhighlight %}
-
-The reason is that Kylin wasn't able to find the proper Kafka client jars; make sure you have properly set the "KAFKA_HOME" environment variable.
-
- * Get "killed by admin" error in the "Build Cube" step
-
- Within a Sandbox VM, YARN may not allocate the requested memory resources to the MR job, as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml" and change the following two parameters like this:
-
- {% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>1072</value>
-        <description></description>
-    </property>
-
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx800m</value>
-        <description></description>
-    </property>
- {% endhighlight %}
-
- * If there are already a bunch of historical messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
-{% endhighlight %}
-
- * If a build job fails and you discard it, there will be a hole (or gap) left in the Cube. Since Kylin builds from the last position each time, you can't expect the hole to be filled by normal builds. Kylin provides an API to check and fill the holes. 
-
-Check holes:
- {% highlight Groff markup %}
-curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
-
-If the result is an empty array, there is no hole; otherwise, trigger Kylin to fill them:
- {% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
-
diff --git a/website/_docs21/tutorial/flink.md b/website/_docs21/tutorial/flink.md
deleted file mode 100644
index fdd0d87..0000000
--- a/website/_docs21/tutorial/flink.md
+++ /dev/null
@@ -1,249 +0,0 @@
----
-layout: docs21
-title:  Apache Flink
-categories: tutorial
-permalink: /docs21/tutorial/flink.html
----
-
-
-### Introduction
-
-This document describes how to use Kylin as a data source in Apache Flink.
-
-There were several attempts to do this in Scala with JDBC, but none of them worked: 
-
-* [attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)  
-* [attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)  
-* [attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)  
-* [attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
-
-We will try to use createInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode and access Kylin via JDBC. But it isn’t implemented in Scala, only in Java [MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html). This doc will go step by step, solving these problems.
-
-### Pre-requisites
-
-* You need an instance of Kylin, with a Cube; the [Sample Cube](kylin_sample.html) will be good enough.
-* [Scala](http://www.scala-lang.org/) and [Apache Flink](http://flink.apache.org/) Installed
-* [IntelliJ](https://www.jetbrains.com/idea/) Installed and configured for Scala/Flink (see [Flink IDE setup guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html) )
-
-### Used software:
-
-* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
-* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
-* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
-* [Scala](downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
-
-### Starting point:
-
-This can be our initial skeleton: 
-
-{% highlight Groff markup %}
-import org.apache.flink.api.scala._
-val env = ExecutionEnvironment.getExecutionEnvironment
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .finish()
-  val dataset =env.createInput(inputFormat)
-{% endhighlight %}
-
-The first error is: ![alt text](/images/Flink-Tutorial/02.png)
-
-Add to Scala: 
-{% highlight Groff markup %}
-import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
-{% endhighlight %}
-
-Next error is  ![alt text](/images/Flink-Tutorial/03.png)
-
-We can solve dependencies [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); Add this to your pom.xml:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-jdbc</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-## Solve dependencies of row 
-
-Similar to the previous point, we need to solve the dependencies of the Row class [(mvn repository: Table) ](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
-
-  ![](/images/Flink-Tutorial/03b.png)
-
-
-* In pom.xml add:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-table_2.10</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-* In Scala: 
-{% highlight Groff markup %}
-import org.apache.flink.api.table.Row
-{% endhighlight %}
-
-## Solve the RowTypeInfo property (and its new dependencies)
-
-This is the new error to solve:
-
-  ![](/images/Flink-Tutorial/04.png)
-
-
-* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69) (and mandatory) added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750)  Manual [JDBCInputFormat](https://ci.apa [...]
-
-   Add the new Property: **setRowTypeInfo**
-   
-{% highlight Groff markup %}
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .setRowTypeInfo(DB_ROWTYPE)
-  .finish()
-{% endhighlight %}
-
-* How can we configure this property in Scala? [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala) has an incorrect solution.
-   
-   We can check the types using the intellisense: ![alt text](/images/Flink-Tutorial/05.png)
-   
-   Then we will need to add more dependencies; add to Scala:
-
-{% highlight Groff markup %}
-import org.apache.flink.api.table.typeutils.RowTypeInfo
-import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
-{% endhighlight %}
-
-   Create an Array or Seq of TypeInformation[ ]:
-
-  ![](/images/Flink-Tutorial/06.png)
-
-
-   Solution:
-   
-{% highlight Groff markup %}
-   var stringColum: TypeInformation[String] = createTypeInformation[String]
-   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
-{% endhighlight %}
-
-## Solve ClassNotFoundException
-
-  ![](/images/Flink-Tutorial/07.png)
-
-We need to find the kylin-jdbc-x.x.x.jar and then expose it to Flink:
-
-1. Find the Kylin JDBC jar
-
-   From Kylin [Download](http://kylin.apache.org/download/) choose **Binary** and the **correct version of Kylin and HBase**
-   
-   Download & Unpack: in ./lib: 
-   
-  ![](/images/Flink-Tutorial/08.png)
-
-
-2. Make this JAR accessible to Flink
-
-   If you execute Flink as a service, you need to put this JAR on your Java classpath using your .bashrc (a sketch follows below).
-
-  ![](/images/Flink-Tutorial/09.png)
-
-
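-  A minimal sketch of the .bashrc entry (the jar path and version are illustrative; use the jar you found in ./lib):
-
-{% highlight Groff markup %}
-# in ~/.bashrc
-export CLASSPATH=$CLASSPATH:/usr/local/apache-kylin-1.5.2-bin/lib/kylin-jdbc-1.5.2.jar
-{% endhighlight %}
-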
-  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
-  
-  Check the permissions for this file (it must be accessible to you):
-
-  ![](/images/Flink-Tutorial/11.png)
-
- 
-  If you are executing from the IDE, you need to add your classpath manually:
-  
-  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt text](/images/Flink-Tutorial/13.png) > ![alt text](/images/Flink-Tutorial/14.png) > ![alt text](/images/Flink-Tutorial/15.png)
-  
-  The result will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
-  
-## Solve "Couldn’t access resultSet" error
-
-  ![](/images/Flink-Tutorial/17.png)
-
-
-It is related to [Flink 4108](https://issues.apache.org/jira/browse/FLINK-4108) [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415), and Timo Walther [made a PR](https://github.com/apache/flink/pull/2619).
-
-If you are running Flink <= 1.2, you will need to apply this patch and do a clean install (see the sketch below).
-
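-A sketch of applying the fix and rebuilding from source (this assumes a Maven build of Flink; the .patch URL is GitHub's standard format for the PR mentioned above):
-
-{% highlight Groff markup %}
-git clone https://github.com/apache/flink.git && cd flink
-curl -L https://github.com/apache/flink/pull/2619.patch | git apply
-mvn clean install -DskipTests
-{% endhighlight %}
-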
-## Solve the casting error
-
-  ![](/images/Flink-Tutorial/18.png)
-
-In the error message you have both the problem and the solution…. nice ;)  ¡¡
-
-## The result
-
-The output should be similar to this, printing the result of the query to standard output:
-
-  ![](/images/Flink-Tutorial/19.png)
-
-
-## Now, more complex
-
-Try a multi-column and multi-type query:
-
-{% highlight Groff markup %}
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales 
-group by part_dt 
-order by part_dt
-{% endhighlight %}
-
-This needs changes in DB_ROWTYPE:
-
-  ![](/images/Flink-Tutorial/20.png)
-
-
-And import the Java libraries to work with Java data types ![alt text](/images/Flink-Tutorial/21.png)
-
-The new result will be: 
-
-  ![](/images/Flink-Tutorial/23.png)
-
-
-## Error:  Reused Connection
-
-
-  ![](/images/Flink-Tutorial/24.png)
-
-Check that your HBase and Kylin are working. You can also use the Kylin UI for this.
-
-
-## Error:  java.lang.AbstractMethodError:  ….Avatica Connection
-
-See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
-
-It is a problem with the kylin-jdbc-1.x.x JAR; you need Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
-
-  ![](/images/Flink-Tutorial/25.png)
-
-
-
-## Error: can't expand macros compiled by previous versions of scala
-
-This is a problem with Scala versions; check your actual version with "scala -version" and choose the correct POM.
-
-Perhaps you will need IntelliJ > File > Invalidate Caches > Invalidate and Restart.
-
-I added a POM for Scala 2.11.
-
-
-## Final Words
-
-Now you can read Kylin’s data from Apache Flink, great!
-
-[Full Code Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
-
-All integration problems are solved, and it has been tested with different types of data (Long, BigDecimal and Dates). The patch was committed on 15 Oct and will then be part of Flink 1.2.
diff --git a/website/_docs21/tutorial/hue.md b/website/_docs21/tutorial/hue.md
deleted file mode 100755
index 1c4b7de..0000000
--- a/website/_docs21/tutorial/hue.md
+++ /dev/null
@@ -1,246 +0,0 @@
----
-layout: docs21
-title: Hue
-categories: tutorial
-permalink: /docs21/tutorial/hue.html
----
-### Introduction
- [Hue-2745](https://issues.cloudera.org/browse/HUE-2745) (Hue v3.10) added JDBC support for engines like Phoenix, Kylin, Redshift, Solr Parallel SQL, etc.
-
-However, there isn’t any manual for using it with Kylin.
-
-### Pre-requisites
-Build a sample Kylin cube with [Quick Start with Sample Cube](http://kylin.apache.org/docs23/tutorial/kylin_sample.html); that will be enough.
-
-You can check: 
-
-  ![](/images/tutorial/2.0/hue/01.png)
-
-
-### Used Software:
-* [Hue](http://gethue.com/) v3.10.0
-* [Apache Kylin](http://kylin.apache.org/) v1.5.2
-
-
-### Install Hue
-If you have Hue installed, you can skip this step.
-
-To install Hue on Ubuntu 16.04 LTS: the [official instructions](http://gethue.com/how-to-build-hue-on-ubuntu-14-04-trusty/) didn’t work, but [this one](https://github.com/cloudera/hue/blob/master/tools/docker/hue-base/Dockerfile) works fine.
-
-There isn’t any binary package, thus the [pre-requisites](https://github.com/cloudera/hue#development-prerequisites) must be installed and Hue compiled with the command *make*:
-
-{% highlight Groff markup %}
-    sudo apt-get install --fix-missing -q -y \
-    git \
-    ant \
-    gcc \
-    g++ \
-    libkrb5-dev \
-    libmysqlclient-dev \
-    libssl-dev \
-    libsasl2-dev \
-    libsasl2-modules-gssapi-mit \
-    libsqlite3-dev \
-    libtidy-0.99-0 \
-    libxml2-dev \
-    libxslt-dev \
-    libffi-dev \
-    make \
-    maven \
-    libldap2-dev \
-    python-dev \
-    python-setuptools \
-    libgmp3-dev \
-    libz-dev
-{% endhighlight %}
-
-Download and Compile:
-
-{% highlight Groff markup %}
-    git clone https://github.com/cloudera/hue.git
-    cd hue
-    make apps
-{% endhighlight %}
-
-Start and connect to Hue:
-
-{% highlight Groff markup %}
-    build/env/bin/hue runserver_plus localhost:8888
-{% endhighlight %}
-* runserver_plus: like runserver, but with a [debugger](http://django-extensions.readthedocs.io/en/latest/runserver_plus.html#usage)
-* localhost:8888: local IP and port; usually Hue uses port 8888
-
-The output must be similar to:
-
-  ![](/images/tutorial/2.0/hue/02.png)
-
-
-Connect using your browser: http://localhost:8888
-
-  ![](/images/tutorial/2.0/hue/03.png)
-
-
-Important: the first time you connect to Hue, you set the login/pass for the admin user.
-
-We will use Hue / Hue as login/pass.
-
-
-**Issue 1:** Could not create home directory
-
-  ![](/images/tutorial/2.0/hue/04.png)
-
-
-   It is a permission problem with your current user; you can use sudo to start Hue.
-
-**Issue 2:** Could not connect to … 
-
-  ![](/images/tutorial/2.0/hue/05.png)
-
-   If Hue’s code was downloaded from Git, the Hive connection is active but not configured → skip this message.
-
-**Issue 3:** Address already in use
-
-  ![](/images/tutorial/2.0/hue/06.png)
-
-   The port is in use, or you already have a Hue process running.
-
-  You can use *ps -ef | grep hue* to find the PID and kill it.
-
-
-### Configure Hue for Apache Kylin
-The purpose is to add a snippet in a notebook with Kylin queries.
-
-References:
-* [Custom SQL Databases](http://gethue.com/custom-sql-query-editors/)	
-* [Manual: Kylin JDBC Driver](http://kylin.apache.org/docs/howto/howto_jdbc.html)
-* [GitHub: Kylin JDBC Driver](https://github.com/apache/kylin/tree/3b2ebd243cfe233ea7b1a80285f4c2110500bbe5/jdbc)
-
-Register JDBC Driver
-
-1. Find the JAR for the JDBC connector.
-
- From Kylin [Download](http://kylin.apache.org/download/)
-Choose **Binary** and the **correct version of Kylin and HBase**
-
- Download & Unpack:  in ./lib: 
-
-  ![](/images/tutorial/2.0/hue/07.png)
-
-
-2. Place this JAR on the Java classpath using .bashrc (a sketch follows below).
-
-  ![](/images/tutorial/2.0/hue/08.png)
-
-
-  Check the actual value: ![alt text](/images/tutorial/2.0/hue/09.png)
-
-  Check the permissions for this file (it must be accessible to you):
-
-  ![](/images/tutorial/2.0/hue/10.png)
-
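-  A minimal sketch of the .bashrc entry (the jar path and version are illustrative; use the jar you unpacked in ./lib):
-
-{% highlight Groff markup %}
-# in ~/.bashrc
-export CLASSPATH=$CLASSPATH:/usr/local/apache-kylin-1.5.2-bin/lib/kylin-jdbc-1.5.2.jar
-{% endhighlight %}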
-
-3. Add this new interface to Hue.ini
-
-  Where is hue.ini?
-
- * If the code is downloaded from Git:  *UnzipPath/desktop/conf/pseudo-distributed.ini*
-
-   (I shared my *INI* file in GitHub).
-
- * If you are using Cloudera: you must use Advanced Configuration Snippet
-
- * Other: find your actual *hue.ini*
-
- Add these lines under *[[interpreters]]*:
-{% highlight Groff markup %}
-    [[[kylin]]]
-    name=kylin JDBC
-    interface=jdbc
-    options='{"url": "jdbc:kylin://172.17.0.2:7070/learn_kylin","driver": "org.apache.kylin.jdbc.Driver", "user": "ADMIN", "password": "KYLIN"}'
-{% endhighlight %}
-
-4. Start Hue and connect just like in "Start and connect to Hue" above.
-
-TIP: one JDBC source is needed for each project; see the sketch below.
-
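-For instance, a second interpreter entry pointing at another project would look like this (the project name "my_project" is hypothetical; everything else follows the entry above):
-
-{% highlight Groff markup %}
-    [[[kylin_my_project]]]
-    name=kylin my_project JDBC
-    interface=jdbc
-    options='{"url": "jdbc:kylin://172.17.0.2:7070/my_project","driver": "org.apache.kylin.jdbc.Driver", "user": "ADMIN", "password": "KYLIN"}'
-{% endhighlight %}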
-
-To register without a password, you can use this other format:
-{% highlight Groff markup %}
-    options='{"url": "jdbc:kylin://172.17.0.2:7070/learn_kylin","driver": "org.apache.kylin.jdbc.Driver"}'
-{% endhighlight %}
-
-And when you open the Notebook, Hue prompts this:
-
-  ![](/images/tutorial/2.0/hue/11.png)
-
-
-
-**Issue 1:** Hue can’t Start
-
-If you see this when you connect to Hue  ( http://localhost:8888 ):
-
-  ![](/images/tutorial/2.0/hue/12.png)
-
-
-Go to the last line ![alt text](/images/tutorial/2.0/hue/13.png) 
-
-And launch Python Interpreter (see console icon on the right):
-
-  ![](/images/tutorial/2.0/hue/14.png)
-
-In this case: I forgot to close the quote (") after learn_kylin.
-
-**Issue 2:** Password Prompting
-
-In Hue 3.11 there is a bug [Hue 4716](https://issues.cloudera.org/browse/HUE-4716)
-
-In Hue 3.10 with Kylin, I don’t have any problem   :)
-
-
-## Test query example
-Add Kylin JDBC as a source in the notebook:
-
- ![alt text](/images/tutorial/2.0/hue/15.png) > ![alt text](/images/tutorial/2.0/hue/16.png)  > ![alt text](/images/tutorial/2.0/hue/17.png)  > ![alt text](/images/tutorial/2.0/hue/18.png) 
-
-
-Write a query, like this:
-{% highlight Groff markup %}
-    select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
-{% endhighlight %}
-
-And Execute with: ![alt text](/images/tutorial/2.0/hue/19.png) 
-
-  ![](/images/tutorial/2.0/hue/20.png)
-
-
- **Congratulations!!!** You are connected to Kylin from Hue.
-
-
-**Issue 1:**  No suitable driver found for jdbc:kylin
-
-  ![](/images/tutorial/2.0/hue/21.png)
-
-There is a bug, unsolved since 27 Aug 2016 (in both 3.10 and 3.11), but the fix is very easy:
-
-[Link](https://github.com/cloudera/hue/pull/369): 
-You only need to change 3 lines in  *<HuePath>/desktop/libs/librdbms/src/librdbms/jdbc.py*
-
-
-## Limits
-In Hue 3.10 and 3.11
-* Auto-complete doesn’t work on JDBC interfaces
-* Max 1000 records. There is a limitation on JDBC interfaces, because Hue does not support result pagination [Hue 3419](https://issues.cloudera.org/browse/HUE-3419). 
-
-
-### Future Work
-
-**Dashboards**
-There is an amazing feature of Hue: [Search Dashboards](http://gethue.com/search-dashboards/) / [Dynamic Dashboards](http://gethue.com/hadoop-search-dynamic-search-dashboards-with-solr/). You can ‘play’ with this [Demo On-line](http://demo.gethue.com/search/admin/collections). But this only works with Solr.
-
-There is a JIRA to solve this: [Hue 3228](https://issues.cloudera.org/browse/HUE-3228), which is on the roadmap for 4.1. Check the Hue [MailList](https://groups.google.com/a/cloudera.org/forum/#!topic/hue-user/B6FWBeoqK7I) about adding Dashboards to JDBC connections.
-
-**Chart & Dynamic Filter**
-Nowadays these aren’t compatible; you can only work with the Grid.
-
-**DB Query**
- DB Query does not yet support JDBC.
diff --git a/website/_docs21/tutorial/kylin_client_tool.cn.md b/website/_docs21/tutorial/kylin_client_tool.cn.md
deleted file mode 100644
index 1668bd3..0000000
--- a/website/_docs21/tutorial/kylin_client_tool.cn.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin Python Client Library
-categories: tutorial
-permalink: /cn/docs21/tutorial/kylin_client_tool.html
----
-
-Apache Kylin Python Client Library is a Python-based client for Apache Kylin. It contains two usable components. For more details about this library, see the [Github Repository](https://github.com/Kyligence/kylinpy).
-
-* Apache Kylin command line tool
-* Apache Kylin dialect for SQLAlchemy
-
-## Installation
-Make sure your Python interpreter version is 2.7+ or 3.4+. The easiest way to install the Apache Kylin Python Client Library is with the pip command:
-```
-    pip install --upgrade kylinpy
-```
-
-## Kylinpy CLI
-After installing kylinpy, you can run kylinpy in a terminal right away:
-
-```
-    $ kylinpy
-    Usage: kylinpy [OPTIONS] COMMAND [ARGS]...
-
-    Options:
-      -h, --host TEXT       Kylin host name  [required]
-      -P, --port INTEGER    Kylin port, default: 7070
-      -u, --username TEXT   Kylin username  [required]
-      -p, --password TEXT   Kylin password  [required]
-      --project TEXT        Kylin project  [required]
-      --prefix TEXT         Kylin RESTful prefix of url, default: /kylin/api
-      --debug / --no-debug  show debug infomation
-      --api1 / --api2       API version; default is "api1"; "api1" is for Apache Kylin
-      --help                Show this message and exit.
-
-    Commands:
-      auth           get user auth info
-      cube_columns   list cube columns
-      cube_desc      show cube description
-      cube_names     list cube names
-      model_desc     show model description
-      projects       list all projects
-      query          sql query
-      table_columns  list table columns
-      table_names    list all table names
-```
-
-## Examples for the Kylinpy CLI
-
-1. Access Apache Kylin (get auth info)
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug auth
-```
-
-2. List all dimension columns of the selected cube
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_columns --name kylin_sales_cube
-```
-
-3. Show the description of the selected cube
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_desc --name kylin_sales_cube
-```
-
-4. List all cube names
-```
-kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_names
-```
-
-5. Show the SQL definition of the selected cube
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_sql --name kylin_sales_cube
-```
-
-6. List all projects in Kylin
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug projects
-```
-
-7. List all columns of the selected table
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug table_columns --name KYLIN_SALES
-```
-
-8. List all table names
-```
-kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 table_names
-```
-
-9. Show the description of the selected model
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug model_desc --name kylin_sales_model
-```
-
-## Apache Kylin dialect for SQLAlchemy
-
-Any application that uses SQLAlchemy can access Kylin through this `dialect`; if you have already installed kylinpy, the SQLAlchemy dialect is ready to use. Please use the following DSN template to connect to Kylin:
-
-```
-kylin://<username>:<password>@<hostname>:<port>/<project>?version=<v1|v2>&prefix=</kylin/api>
-```
-
-## Examples for SQLAlchemy
-Test the Apache Kylin connection:
-
-```
-    $ python
-    >>> import sqlalchemy as sa
-    >>> kylin_engine = sa.create_engine('kylin://username:password@hostname:7070/learn_kylin?version=v1')
-    >>> results = kylin_engine.execute('SELECT count(*) FROM KYLIN_SALES')
-    >>> [e for e in results]
-    [(4953,)]
-    >>> kylin_engine.table_names()
-    [u'KYLIN_ACCOUNT',
-     u'KYLIN_CAL_DT',
-     u'KYLIN_CATEGORY_GROUPINGS',
-     u'KYLIN_COUNTRY',
-     u'KYLIN_SALES',
-     u'KYLIN_STREAMING_TABLE']
-```
diff --git a/website/_docs21/tutorial/kylin_client_tool.md b/website/_docs21/tutorial/kylin_client_tool.md
deleted file mode 100644
index 8b610f5..0000000
--- a/website/_docs21/tutorial/kylin_client_tool.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-layout: docs21
-title:  Kylin Python Client Library
-categories: tutorial
-permalink: /docs21/tutorial/kylin_client_tool.html
----
-
-Apache Kylin Python Client Library is a Python-based Apache Kylin client. There are two components in Apache Kylin Python Client Library:
-
-* Apache Kylin command line tools
-* Apache Kylin dialect for SQLAlchemy
-
-You can get more detail from this [Github Repository](https://github.com/Kyligence/kylinpy).
-
-## Installation
-Make sure your Python version is 2.7+ or 3.4+. The easiest way to install the Apache Kylin Python Client Library is to use "pip":
-
-```
-    pip install --upgrade kylinpy
-```
-
-## Kylinpy CLI
-After installing the Apache Kylin Python Client Library, you may run kylinpy in a terminal:
-
-```
-    $ kylinpy
-    Usage: kylinpy [OPTIONS] COMMAND [ARGS]...
-
-    Options:
-      -h, --host TEXT       Kylin host name  [required]
-      -P, --port INTEGER    Kylin port, default: 7070
-      -u, --username TEXT   Kylin username  [required]
-      -p, --password TEXT   Kylin password  [required]
-      --project TEXT        Kylin project  [required]
-      --prefix TEXT         Kylin RESTful prefix of url, default: "/kylin/api"
-      --debug / --no-debug  show debug infomation
-      --api2 / --api1       API version; default is "api1"; "api1" is for Apache Kylin;
-      --help                Show this message and exit.
-
-    Commands:
-      auth           get user auth info
-      cube_columns   list cube columns
-      cube_desc      show cube description
-      cube_names     list cube names
-      model_desc     show model description
-      projects       list all projects
-      query          sql query
-      table_columns  list table columns
-      table_names    list all table names
-```
-
-## Examples for Kylinpy CLI
-
-1. To get all user info from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug auth
-```
-
-2. To get all cube columns from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_columns --name kylin_sales_cube
-```
-
-3. To get cube description of selected cube from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_desc --name kylin_sales_cube
-```
-
-4. To get all cube names from Apache Kylin with debug mode
-```
-kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_names
-```
-
-5. To get cube SQL of selected cube from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_sql --name kylin_sales_cube
-```
-
-6. To list all projects from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug projects
-```
-
-7. To list all columns of the selected table from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug table_columns --name KYLIN_SALES
-```
-
-8. To get all table names from Kylin
-```
-kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 table_names
-```
-
-9. To get the model description of the selected model from Apache Kylin with debug mode
-```
-kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug model_desc --name kylin_sales_model
-```
-
-## Kylin dialect for SQLAlchemy
-
-Any application that uses SQLAlchemy can now query Apache Kylin with this Apache Kylin dialect installed. It is part of the Apache Kylin Python Client Library, so if you already installed this library in the previous step, you are ready to go. You may use the template below to build a DSN to connect to Apache Kylin:
-
-```
-kylin://<username>:<password>@<hostname>:<port>/<project>?version=<v1|v2>&prefix=</kylin/api>
-```
-
-## Examples for SQLAlchemy
-
-Test the connection with Apache Kylin:
-
-{% highlight Groff markup %}
-    $ python
-    >>> import sqlalchemy as sa
-    >>> kylin_engine = sa.create_engine('kylin://username:password@hostname:7070/learn_kylin?version=v1')
-    >>> results = kylin_engine.execute('SELECT count(*) FROM KYLIN_SALES')
-    >>> [e for e in results]
-    [(4953,)]
-    >>> kylin_engine.table_names()
-    [u'KYLIN_ACCOUNT',
-     u'KYLIN_CAL_DT',
-     u'KYLIN_CATEGORY_GROUPINGS',
-     u'KYLIN_COUNTRY',
-     u'KYLIN_SALES',
-     u'KYLIN_STREAMING_TABLE']
-{% endhighlight %}
diff --git a/website/_docs21/tutorial/kylin_sample.md b/website/_docs21/tutorial/kylin_sample.md
deleted file mode 100644
index 592231f..0000000
--- a/website/_docs21/tutorial/kylin_sample.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-layout: docs21
-title:  Quick Start with Sample Cube
-categories: tutorial
-permalink: /docs21/tutorial/kylin_sample.html
----
-
-Kylin provides a script for you to create a sample Cube; the script will also create five sample hive tables:
-
-1. Run ${KYLIN_HOME}/bin/sample.sh, then restart the Kylin server to flush the caches (see the commands after this list);
-2. Log on to the Kylin web UI with the default user ADMIN/KYLIN, and select project "learn_kylin" in the project dropdown list (upper left corner);
-3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick a date later than 2014-01-01 (to cover all 10000 sample records);
-4. Check the build progress in the "Monitor" tab, until it reaches 100%;
-5. Execute SQLs in the "Insight" tab, for example:
-	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
-6. You can verify the query result and compare the response time with Hive.
-
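-The commands from step 1, spelled out (stop/start is one way to restart; adjust to how you run Kylin):
-
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/sample.sh
-${KYLIN_HOME}/bin/kylin.sh stop
-${KYLIN_HOME}/bin/kylin.sh start
-{% endhighlight %}
-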
-   
-## Quick Start with Streaming Sample Cube
-
-Kylin also provides a script for a streaming sample cube. The script will create a Kafka topic and constantly send random messages to the generated topic.
-
-1. Export KAFKA_HOME first, and start Kylin (see the consolidated commands after this list).
-2. Run ${KYLIN_HOME}/bin/sample.sh; it will generate the table DEFAULT.KYLIN_STREAMING_TABLE, the model kylin_streaming_model and the cube kylin_streaming_cube in the learn_kylin project.
-3. Run ${KYLIN_HOME}/bin/sample-streaming.sh; it will create the Kafka topic kylin_streaming_topic on the localhost:9092 broker, and send 100 random messages to kylin_streaming_topic per second.
-4. Follow the standard cube build process and trigger a build of the cube kylin_streaming_cube.  
-5. Check the build progress in the "Monitor" tab, until at least one job reaches 100%.
-6. Execute SQLs in the "Insight" tab, for example:
-         select count(*), HOUR_START from kylin_streaming_table group by HOUR_START
-7. Verify the query result.
- 
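-The setup commands from steps 1-3, consolidated (the KAFKA_HOME path is illustrative; point it at your Kafka install):
-
-{% highlight Groff markup %}
-export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
-${KYLIN_HOME}/bin/kylin.sh start
-${KYLIN_HOME}/bin/sample.sh
-${KYLIN_HOME}/bin/sample-streaming.sh
-{% endhighlight %}
-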
-## What's next
-
-You can create another cube with the sample tables by following the tutorials.
diff --git a/website/_docs21/tutorial/microstrategy.md b/website/_docs21/tutorial/microstrategy.md
deleted file mode 100644
index c20604d..0000000
--- a/website/_docs21/tutorial/microstrategy.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-layout: docs21
-title:  MicroStrategy
-categories: tutorial
-permalink: /docs21/tutorial/microstrategy.html
----
-
-### Install ODBC Driver
-
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.6__ 64-bit or above. If you already have an ODBC driver installed in your system, please uninstall it first.
-
-The Kylin ODBC driver needs to be installed on the machine or virtual environment where your MicroStrategy Intelligence Server is installed. 
-
-### Create Local DSN
-
-Open the Windows ODBC Data Source Administrator (64-bit) and create a system DSN that points to your Kylin instance. 
-
-![](/images/tutorial/2.1/MicroStrategy/0.png)
-
-### Setting Database Instance
-
-Connect to Kylin using the ODBC driver: open your MicroStrategy Developer and connect to the project source where you are going to connect the Kylin data source, using a user account with administrative privileges. 
-
-Once logged in, go to `Administration` -> `Configuration manager` -> `Database Instance`, and create a new database instance with the system DSN that you created in the previous step. Under database connection type, please choose Generic DBMS.
-
-![](/images/tutorial/2.1/MicroStrategy/2.png)
-
-![](/images/tutorial/2.1/MicroStrategy/1.png)
-
-Depending on your business scenario, you may need to create a new project and set the Kylin database instance as its primary database instance; or, for an existing project, set the Kylin database instance as one of its primary or non-primary database instances. You can achieve this by right-clicking on your project and going to `project configuration` -> `database instance`. 
-
-### Import Logical Table
-
-Open up your project and go to `schema` -> `warehouse catalog` to import the tables you need. 
-
-![](/images/tutorial/2.1/MicroStrategy/4.png)
-
-### Building Schema and Public Objects
-
-Create Attribute, Fact and Metric objects:
-
-![](/images/tutorial/2.1/MicroStrategy/5.png)
-
-![](/images/tutorial/2.1/MicroStrategy/6.png)
-
-![](/images/tutorial/2.1/MicroStrategy/7.png)
-
-![](/images/tutorial/2.1/MicroStrategy/8.png)
-
-### Create a Simple Report
-
-Now you can start creating reports with Kylin as data source.
-
-![](/images/tutorial/2.1/MicroStrategy/9.png)
-
-![](/images/tutorial/2.1/MicroStrategy/10.png)
-
-### Best Practice for Connecting MicroStrategy to Kylin Data Source
-
-1. Kylin does not work with multiple SQL passes at the moment, so it is recommended to set your report's intermediate table type to derived; you can change this setting at the report level using `Data` -> `VLDB property` -> `Tables` -> `Intermediate Table Type`.
-
-2. Avoid using the functionality below in MicroStrategy, as it will generate multiple SQL passes that cannot be bypassed by VLDB properties:
-
-   * Creation of datamarts
-
-   * Querying partitioned tables
-
-   * Reports with custom groups
-
-3. Dimensions named with Kylin keywords will cause the SQL to error out. You may find the Kylin keywords at the link below; it is recommended to avoid naming columns with Kylin keywords, especially when you use MicroStrategy as the front-end BI tool, since as far as we know there is no setting in MicroStrategy that can escape the keywords: [https://calcite.apache.org/docs/reference.html#keywords](https://calcite.apache.org/docs/reference.html#keywords)
-
-4. If the underlying Kylin data model has a left join from the fact table to a lookup table, then in order for MicroStrategy to generate the same left join in SQL, please follow the MicroStrategy TN below to modify the VLDB property:
-
-   [https://community.microstrategy.com/s/article/ka1440000009GrQAAU/KB17514-Using-the-Preserve-all-final-pass-result-elements-VLDB](https://community.microstrategy.com/s/article/ka1440000009GrQAAU/KB17514-Using-the-Preserve-all-final-pass-result-elements-VLDB)
-
-5. By default, MicroStrategy generates SQL queries with date filters in a format like 'mm/dd/yyyy'. This format might be different from Kylin's date format; if so, the query will error out. You may follow the steps below to make MicroStrategy generate SQL with the same date format as Kylin:  
-
-   1. Go to `Instance` -> `Administration` -> `Configuration Manager` -> `Database Instance`. 
-   2. Then right-click on the database and choose VLDB properties. 
-   3. On the top menu choose `Tools` -> `show Advanced Settings`.
-   4. Go to `select/insert` -> `date format`.
-   5. Change the date format to follow the date format in Kylin, for example 'yyyy-mm-dd'.
-   6. Restart the MicroStrategy Intelligence Server so that the change can take effect. 
\ No newline at end of file
diff --git a/website/_docs21/tutorial/odbc.cn.md b/website/_docs21/tutorial/odbc.cn.md
deleted file mode 100644
index e76b4d2..0000000
--- a/website/_docs21/tutorial/odbc.cn.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin ODBC Driver Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/odbc.html
-version: v1.2
-since: v0.7.1
----
-
-> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both 32-bit and 64-bit versions of the driver are available.
-> 
-> Tested operating systems: Windows 7, Windows Server 2008 R2
-> 
-> Tested applications: Tableau 8.0.4 and Tableau 8.1.3
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable
-   * For 32-bit Windows or 32-bit Tableau Desktop, download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64-bit Windows or 64-bit Tableau Desktop, download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-2. The ODBC driver internally gets results from a REST server; make sure you have access to one.
-
-## Installation
-1. Uninstall any existing Kylin ODBC driver first, if you have installed one before.
-2. Download the driver installer from [download](../../download/) and run it.
-   * For 32-bit Tableau Desktop, please install KylinODBCDriver (x86).exe
-   * For 64-bit Tableau Desktop, please install KylinODBCDriver (x64).exe
-
-3. Both drivers are already installed on Tableau Server; you should probably be able to publish there without issues.
-
-## Bug Report
-If you find any problem, please report the bug to the Apache Kylin JIRA, or send an email to the dev mailing list.
diff --git a/website/_docs21/tutorial/odbc.md b/website/_docs21/tutorial/odbc.md
deleted file mode 100644
index c4ed8a9..0000000
--- a/website/_docs21/tutorial/odbc.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-layout: docs21
-title:  Kylin ODBC Driver
-categories: tutorial
-permalink: /docs21/tutorial/odbc.html
-since: v0.7.1
----
-
-> We provide Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both 32-bit version or 64-bit version driver are available.
-> 
-> Tested Operation System: Windows 7, Windows Server 2008 R2
-> 
-> Tested Application: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable 
-   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-
-2. ODBC driver internally gets results from a REST server, make sure you have access to one
-
-## Installation
-1. Uninstall the existing Kylin ODBC driver first, if you already installed it before
-2. Download ODBC Driver from [download](../../download/).
-   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
-   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
-
-3. Both drivers are already installed on Tableau Server; you should probably be able to publish there without issues
-
-## DSN configuration
-1. Open the ODBC Data Source Administrator to configure a DSN.
-	* For 32 bit driver, please use the 32bit version in C:\Windows\SysWOW64\odbcad32.exe
-	* For 64 bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrator Tools
-![]( /images/Kylin-ODBC-DSN/1.png)
-
-2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed as an option, Click "Finish" to continue.
-![]( /images/Kylin-ODBC-DSN/2.png)
-
-3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is started.
-![]( /images/Kylin-ODBC-DSN/3.png)
-
-4. Click "Done", and you will see your new DSN listed in the "System Data Sources", you can use this DSN afterwards.
-![]( /images/Kylin-ODBC-DSN/4.png)
-
-## Bug Report
-Please open an Apache Kylin JIRA to report bugs, or send an email to the dev mailing list.
diff --git a/website/_docs21/tutorial/powerbi.cn.md b/website/_docs21/tutorial/powerbi.cn.md
deleted file mode 100644
index cdedca3..0000000
--- a/website/_docs21/tutorial/powerbi.cn.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-layout: docs21-cn
-title:  MS Excel and Power BI Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/powerbi.html
-version: v1.2
-since: v1.2
----
-
-Microsoft Excel is one of the most popular data processing tools on the Windows platform today, with support for many data processing functions; with Power Query, it can read data from an ODBC data source and load it into spreadsheets.
-
-Microsoft Power BI is a professional business intelligence and analysis tool from Microsoft, providing users with simple yet rich data visualization and analysis functions.
-
-> The current version of Apache Kylin does not support queries on raw data; some queries may therefore fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better display of query results.
-
-
-> Power BI and Excel do not support the "connect live" mode; please take care to add WHERE conditions when querying very large data sets, to avoid pulling too much data from the server to the client, or even query failures in some cases.
-
-### Install ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html). Please make sure to download and install Kylin ODBC Driver __v1.2__. If you installed an earlier version, please uninstall it first and then install the new one. 
-
-### Connect Excel to Kylin
-1. Download and install Power Query from the Microsoft website; after installation you will see the Power Query fast tab in Excel. Click the `From other sources` dropdown button and select the `From ODBC` item.
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2. In the pop-up `From ODBC` data connection wizard, enter the connection string of the Apache Kylin server; you can also enter the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the query result will be loaded into the Excel spreadsheet immediately.
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> To simplify the connection string, it is recommended to create a DSN for Apache Kylin, which shortens the connection string to DSN=[YOUR_DSN_NAME]. For DSN creation, please refer to: [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
-
- 
-3. If you choose not to enter a SQL statement, Power Query will list all database tables, and you can load the data of a whole table as needed. However, Apache Kylin does not yet support queries on raw data, so loading some tables may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4. After a short wait, the data is successfully loaded into Excel.
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  When the data is updated on the server side, the data in Excel needs to be synced: right-click the data source in the right panel and select `Refresh`, and the latest data will be updated into the spreadsheet.
-
-6.  To improve performance, you can open the `Query Options` settings in Power Query and enable `Fast data load`; this speeds up data loading but may make the UI temporarily unresponsive
-
-### Power BI
-1.  Launch the Power BI Desktop application you have installed, click the `Get data` button, and select the ODBC data source.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  In the pop-up `From ODBC` connection wizard, enter the database connection string of the Apache Kylin server; you may also enter the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the query result will be loaded into Power BI
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you choose not to enter a SQL statement, Power BI will list all tables in the project, and you can load an entire table as needed. However, since Apache Kylin does not yet support queries on raw data, loading some tables may be limited
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can go ahead with visual analysis in Power BI:
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  Click the `Refresh` button on the toolbar to reload the data and update the charts
-
diff --git a/website/_docs21/tutorial/powerbi.md b/website/_docs21/tutorial/powerbi.md
deleted file mode 100644
index 7c3dbfd..0000000
--- a/website/_docs21/tutorial/powerbi.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs21
-title:  MS Excel and Power BI
-categories: tutorial
-permalink: /docs21/tutorial/powerbi.html
-since: v1.2
----
-
-Microsoft Excel is one of the most popular data tools on the Windows platform, and offers plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets. 
-
-Microsoft Power BI is a business intelligence tool providing rich functionality and a good experience for data visualization and processing to users.
-
-> Apache Kylin doesn't support queries on raw data yet; some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended to get a better look of the query result.
-
-> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet. Please be careful when you query a huge dataset: it may pull too much data into your client, which can take a while or even fail in the end.
-
-### Install ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.2__. If an earlier ODBC driver is already installed on your system, please uninstall it first. 
-
-### Kylin and Excel
-1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` fast tab, click the `From Other Sources` dropdown list, and select the `ODBC` item.
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2.  You’ll see the `From ODBC` dialog; type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally, you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> Tips: In order to simplify the Database Connection String, a DSN is recommended, which can shorten the Connection String to `DSN=[YOUR_DSN_NAME]`. For details about DSN, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
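-
-> As a rough template, a driver-based connection string looks like the sketch below; attribute names such as `SERVER`, `PORT` and `PROJECT` vary with the Kylin ODBC driver version, so check the driver documentation:
-
-```
-DRIVER={KylinODBCDriver};SERVER=kylinhost;PORT=7070;PROJECT=learn_kylin;UID=ADMIN;PWD=KYLIN
-```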
- 
-3. If you didn’t input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4.  Wait a moment, and the data is now loaded in Excel.
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you’ll see the latest data.
-
-6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this will make your UI unresponsive for a while. 
-
-### Power BI
-1.  Run Power BI Desktop, click the `Get Data` button, then select `ODBC` as the data source type.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  Same as with Excel: type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you didn’t input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can start to enjoy analyzing with Power BI.
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` fast tab.
-
diff --git a/website/_docs21/tutorial/project_level_acl.md b/website/_docs21/tutorial/project_level_acl.md
deleted file mode 100644
index a25c898..0000000
--- a/website/_docs21/tutorial/project_level_acl.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-layout: docs21
-title: Project Level ACL
-categories: tutorial
-permalink: /docs21/tutorial/project_level_acl.html
-since: v2.1.0
----
-
-Whether a user can access a project and use certain functionality within it is determined by project-level access control. There are four access permission roles at the project level in Apache Kylin: *ADMIN*, *MANAGEMENT*, *OPERATION* and *QUERY*. Each role defines a list of operations a user may perform in Apache Kylin.
-
-- *QUERY*: designed to be used by analysts who only need access permission to query tables/cubes in the project.
-- *OPERATION*: designed to be used by the operations team in a corporation/organization who needs permission to maintain Cubes. OPERATION access permission includes QUERY.
-- *MANAGEMENT*: designed to be used by a modeler or designer who fully understands the business meaning of the data/model and will be in charge of model and Cube design. MANAGEMENT access permission includes OPERATION and QUERY.
-- *ADMIN*: Designed to fully manage the project. ADMIN access permission includes MANAGEMENT, OPERATION and QUERY.
-
-Access permissions are independent between different projects.
-
-### How Access Permission is Determined
-
-Once a project-level access permission has been set for a user, access permissions on data sources, models and Cubes are inherited based on the access permission role defined at the project level. For the detailed functionality each access permission role can access, see the table below.
-
-|                                          | System Admin | Project Admin | Management | Operation | Query |
-| ---------------------------------------- | ------------ | ------------- | ---------- | --------- | ----- |
-| Create/delete project                    | Yes          | No            | No         | No        | No    |
-| Edit project                             | Yes          | Yes           | No         | No        | No    |
-| Add/edit/delete project access permission | Yes          | Yes           | No         | No        | No    |
-| Check model page                         | Yes          | Yes           | Yes        | Yes       | Yes   |
-| Check data source page                   | Yes          | Yes           | Yes        | No        | No    |
-| Load, unload table, reload table         | Yes          | Yes           | No         | No        | No    |
-| View model in read only mode             | Yes          | Yes           | Yes        | Yes       | Yes   |
-| Add, edit, clone, drop model             | Yes          | Yes           | Yes        | No        | No    |
-| Check cube detail definition             | Yes          | Yes           | Yes        | Yes       | Yes   |
-| Add, disable/enable, clone cube, edit, drop cube, purge cube | Yes          | Yes           | Yes        | No        | No    |
-| Build, refresh, merge cube               | Yes          | Yes           | Yes        | Yes       | No    |
-| Edit, view cube json                     | Yes          | Yes           | Yes        | No        | No    |
-| Check insight page                       | Yes          | Yes           | Yes        | Yes       | Yes   |
-| View table in insight page               | Yes          | Yes           | Yes        | Yes       | Yes   |
-| Check monitor page                       | Yes          | Yes           | Yes        | Yes       | No    |
-| Check system page                        | Yes          | No            | No         | No        | No    |
-| Reload metadata, disable cache, set config, diagnosis | Yes          | No            | No         | No        | No    |
-
-
-Additionally, when Query Pushdown is enabled, QUERY access permission on a project allows users to issue pushdown queries on all tables in the project even when no cube can serve them. This is not possible if a user has not been granted QUERY permission at the project level.
-
-### Manage Access Permission at Project-level
-
-1. Click the small gear-shaped icon in the top-left corner of the Model page. You will be redirected to the project page.
-
-   ![](/images/Project-level-acl/ACL-1.png)
-
-2. On the project page, expand a project and choose Access.
-3. Click `Grant` to grant permission to a user.
-
-	![](/images/Project-level-acl/ACL-2.png)
-
-4. Fill in the name of the user or role, choose a permission, and then click `Grant` to grant it.
-
-5. You can also revoke and update permission on this page.
-
-   ![](/images/Project-level-acl/ACL-3.png)
-
-   Please note that in order to grant permission to the default users (MODELER and ANALYST), these users need to log in at least once. 
\ No newline at end of file
diff --git a/website/_docs21/tutorial/query_pushdown.cn.md b/website/_docs21/tutorial/query_pushdown.cn.md
deleted file mode 100644
index 34aeccb..0000000
--- a/website/_docs21/tutorial/query_pushdown.cn.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-layout: docs21-cn
-title:  Query Pushdown
-categories: tutorial
-permalink: /cn/docs21/tutorial/query_pushdown.html
-since: v2.1
----
-
-### Kylin Supports Query Pushdown
-
-For SQL queries that no cube can answer, Kylin supports pushing them down through JDBC to a backup query engine such as Hive, SparkSQL or Impala. The following uses Hive as an example; since Kylin itself already uses Hive as a data source, it is also the easiest Query Pushdown engine to use and configure.
-
-### Query Pushdown Configuration
-
-1. In the config file `kylin.properties`, uncomment the Query Pushdown configuration item `kylin.query.pushdown.runner-class-name` and set it to `org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl`
-
-
-2. Add the following configuration items to `kylin.properties`. If they are not set, the default values are used. Don't forget to replace "hiveserver" and "10000" with the host and port where Hive runs in your environment.
-
-    - *kylin.query.pushdown.jdbc.url*: the Hive JDBC URL.
-
-    - *kylin.query.pushdown.jdbc.driver*: the Hive JDBC driver class name.
-
-    - *kylin.query.pushdown.jdbc.username*: the user name for the database behind Hive JDBC.
-
-    - *kylin.query.pushdown.jdbc.password*: the password for the database behind Hive JDBC.
-
-    - *kylin.query.pushdown.jdbc.pool-max-total*: the maximum number of connections in the Hive JDBC connection pool; default is 8.
-
-    - *kylin.query.pushdown.jdbc.pool-max-idle*: the maximum number of idle connections in the Hive JDBC connection pool; default is 8.
-
-    - *kylin.query.pushdown.jdbc.pool-min-idle*: the minimum number of idle connections in the Hive JDBC connection pool; default is 0.
-
-Below is a sample configuration; remember to change the host "hiveserver" and port "10000" to your cluster settings.
-
-{% highlight Groff markup %}
-kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
-kylin.query.pushdown.jdbc.url=jdbc:hive2://hiveserver:10000/default
-kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
-kylin.query.pushdown.jdbc.username=hive
-kylin.query.pushdown.jdbc.password=
-kylin.query.pushdown.jdbc.pool-max-total=8
-kylin.query.pushdown.jdbc.pool-max-idle=8
-kylin.query.pushdown.jdbc.pool-min-idle=0
-{% endhighlight %}
-
-3. Restart Kylin
-
-### Doing Query Pushdown
-
-After Query Pushdown is enabled, you can run flexible queries against the synced tables without having to build a cube for each query.
-
-   ![](/images/tutorial/2.1/push_down/push_down_1.png)
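-
-For example, a detail (non-aggregate) query like the one below on the sample table `kylin_sales` cannot be served by a cube and would be pushed down to Hive (shown only as an illustration):
-
-{% highlight Groff markup %}
-SELECT trans_id, part_dt, lstg_format_name, price
-FROM kylin_sales
-WHERE lstg_format_name = 'FP-GTC'
-{% endhighlight %}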
-
-When a user submits a query and query pushdown takes effect, a corresponding record appears in the log.
-
-   ![](/images/tutorial/2.1/push_down/push_down_2.png)
diff --git a/website/_docs21/tutorial/query_pushdown.md b/website/_docs21/tutorial/query_pushdown.md
deleted file mode 100644
index ded6a7e..0000000
--- a/website/_docs21/tutorial/query_pushdown.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-layout: docs21
-title:  Enable Query Pushdown
-categories: tutorial
-permalink: /docs21/tutorial/query_pushdown.html
-since: v2.1
----
-
-### Introduction
-
-If a query cannot be answered by any cube, Kylin supports pushing such queries down to backup query engines like Hive, SparkSQL or Impala through JDBC. In the following, Hive is used as an example, as it is already one of Kylin's data sources and is convenient to configure. 
-
-
-### Query Pushdown config
-
-1. In Kylin's installation directory, uncomment the configuration item `kylin.query.pushdown.runner-class-name` in the config file `kylin.properties`, and set it to `org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl`
-
-
-2. Add configuration items below in config file `kylin.properties`. 
-
-   - *kylin.query.pushdown.jdbc.url*: the Hive JDBC URL.
-
-   - *kylin.query.pushdown.jdbc.driver*: the Hive JDBC driver class name.
-
-   - *kylin.query.pushdown.jdbc.username*: the Hive JDBC user name.
-
-   - *kylin.query.pushdown.jdbc.password*: the Hive JDBC password.
-
-   - *kylin.query.pushdown.jdbc.pool-max-total*: the maximum number of connections in the Hive JDBC connection pool, default value is 8
-
-   - *kylin.query.pushdown.jdbc.pool-max-idle*: the maximum number of idle connections in the Hive JDBC connection pool, default value is 8
-
-   - *kylin.query.pushdown.jdbc.pool-min-idle*: the minimum number of idle connections in the Hive JDBC connection pool, default value is 0
-
-Here is a sample configuration; remember to change the host "hiveserver" and port "10000" to your cluster configuration.
-
-{% highlight Groff markup %}
-kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
-kylin.query.pushdown.jdbc.url=jdbc:hive2://hiveserver:10000/default
-kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
-kylin.query.pushdown.jdbc.username=hive
-kylin.query.pushdown.jdbc.password=
-kylin.query.pushdown.jdbc.pool-max-total=8
-kylin.query.pushdown.jdbc.pool-max-idle=8
-kylin.query.pushdown.jdbc.pool-min-idle=0
-
-{% endhighlight %}
-
-
-3. Restart Kylin
-
-### Do Query Pushdown
-
-After Query Pushdown is configured, users can run flexible queries against the imported tables even when no cube is available.
-
-   ![](/images/tutorial/2.1/push_down/push_down_1.png)
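-
-For instance, a detail (non-aggregate) query such as the following on the sample table `kylin_sales` has no cube to serve it and would be pushed down to Hive (for illustration only):
-
-{% highlight Groff markup %}
-SELECT trans_id, part_dt, lstg_format_name, price
-FROM kylin_sales
-WHERE lstg_format_name = 'FP-GTC'
-{% endhighlight %}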
-
-If a query is answered by the backup engine, `Is Query Push-Down` is set to `true` in the log.
-
-   ![](/images/tutorial/2.1/push_down/push_down_2.png)
\ No newline at end of file
diff --git a/website/_docs21/tutorial/squirrel.md b/website/_docs21/tutorial/squirrel.md
deleted file mode 100644
index d3ba148..0000000
--- a/website/_docs21/tutorial/squirrel.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-layout: docs21
-title:  SQuirreL
-categories: tutorial
-permalink: /docs21/tutorial/squirrel.html
----
-
-### Introduction
-
-[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform universal SQL client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
-
-### Used Software
-
-* [Kylin v1.6.0](/download/) & ODBC 1.6
-* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
-
-## Pre-requisites
-
-* Find the Kylin JDBC driver jar
-  From the Kylin download page, choose the binary for the **correct version of Kylin and HBase**,
-	then download and unpack it; the driver jar is in **./lib**: 
-  ![](/images/SQuirreL-Tutorial/01.png)
-
-
-* You need a running Kylin instance with a Cube; the [Sample Cube](kylin_sample.html) is enough.
-
-  ![](/images/SQuirreL-Tutorial/02.png)
-
-
-* [Download and install SquirreL](http://www.squirrelsql.org/#installation)
-
-## Add Kylin JDBC Driver
-
-On the left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) >![alt text](/images/SQuirreL-Tutorial/04.png)  > ![alt text](/images/SQuirreL-Tutorial/05.png)  > ![alt text](/images/SQuirreL-Tutorial/06.png)
-
-And locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
-
-Configure these parameters:
-
-* Enter a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
-* Example URL ![alt text](/images/SQuirreL-Tutorial/09.png)
-
-  jdbc:kylin://172.17.0.2:7070/learn_kylin
-* Enter the class name: ![alt text](/images/SQuirreL-Tutorial/10.png)
-	Tip: if auto-complete does not work, type: org.apache.kylin.jdbc.Driver 
-	
-Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
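-
-Under the hood, SQuirreL simply uses this driver through the standard JDBC API. For reference, here is a minimal Java sketch of the same connection; it reuses the example URL and the default ADMIN/KYLIN credentials, so adjust host, port and project to your setup:
-
-```
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.Statement;
-
-public class KylinJdbcExample {
-    public static void main(String[] args) throws Exception {
-        // Register the Kylin JDBC driver; the jar from ./lib must be on the classpath
-        Class.forName("org.apache.kylin.jdbc.Driver");
-        // URL pattern: jdbc:kylin://<host>:<port>/<project>
-        try (Connection conn = DriverManager.getConnection(
-                "jdbc:kylin://172.17.0.2:7070/learn_kylin", "ADMIN", "KYLIN");
-             Statement stmt = conn.createStatement();
-             ResultSet rs = stmt.executeQuery(
-                 "select part_dt, sum(price) as total_sold "
-                 + "from kylin_sales group by part_dt order by part_dt")) {
-            while (rs.next()) {
-                System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
-            }
-        }
-    }
-}
-```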
-
-## Add Aliases
-
-On the left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  > ![alt text](/images/SQuirreL-Tutorial/13.png) : (default login/password: ADMIN / KYLIN)
-
-  ![](/images/SQuirreL-Tutorial/14.png)
-
-
-And the connection is launched automatically:
-
-  ![](/images/SQuirreL-Tutorial/15.png)
-
-
-## Connect and Execute
-
-The startup window when connected:
-
-  ![](/images/SQuirreL-Tutorial/16.png)
-
-
-Choose the SQL tab and write a query (we use Kylin’s example cube):
-
-  ![](/images/SQuirreL-Tutorial/17.png)
-
-
-```
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales group by part_dt 
-order by part_dt
-```
-
-Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
-
-  ![](/images/SQuirreL-Tutorial/19.png)
-
-
-And it works!
-
-## Tips:
-
-SQuirreL isn’t the most stable SQL client, but it is very flexible and surfaces a lot of information; it can be used for PoCs and for checking connectivity issues.
-
-List of tables: 
-
-  ![](/images/SQuirreL-Tutorial/21.png)
-
-
-List of columns of a table:
-
-  ![](/images/SQuirreL-Tutorial/22.png)
-
-
-List of columns of a query:
-
-  ![](/images/SQuirreL-Tutorial/23.png)
-
-
-Export the result of queries:
-
-  ![](/images/SQuirreL-Tutorial/24.png)
-
-
- Info about query execution time:
-
-  ![](/images/SQuirreL-Tutorial/25.png)
diff --git a/website/_docs21/tutorial/tableau.cn.md b/website/_docs21/tutorial/tableau.cn.md
deleted file mode 100644
index 113b3c2..0000000
--- a/website/_docs21/tutorial/tableau.cn.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-layout: docs21-cn
-title:  Tableau Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/tableau.html
-version: v1.2
-since: v0.7.1
----
-
-> The Kylin ODBC driver has some limitations with Tableau; please read these instructions carefully before trying it.
-> * Only the "managed" analysis path is supported; the Kylin engine will raise errors for unexpected dimensions or measures
-> * Please always select the fact table first, then add lookup tables with the correct join conditions (the join types already defined in the cube)
-> * Do not try to join between multiple fact tables or multiple lookup tables;
-> * You can try to use a high-cardinality dimension such as seller id in a Tableau filter, but the engine will only return a limited number of seller ids in the filter for now.
-> 
-> For more details or any questions, please contact the Kylin team: `kylinolap@gmail.com`
-
-
-### For Tableau 9.x Users
-Please refer to the [Tableau 9 Tutorial](./tableau_91.html) for a more detailed guide.
-
-### Step 1. Install the Kylin ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to the Kylin Server
-> We recommend Connect Using Driver rather than Using DSN.
-
-Connect Using Driver: select "Other Database(ODBC)" in the left panel and "KylinODBCDriver" in the pop-up window.
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![](/images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects you have permission to access. For details on permissions, refer to the [Kylin Cube Permission Grant Tutorial](https://github.com/KylinOLAP/Kylin/wiki/Kylin-Cube-Permission-Grant-Tutorial). Then select the project you want to connect to from the drop-down list.
-
-![](/images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![](/images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using a Single Table or Multiple Tables
-> Limitations
->    * The fact table must be selected first
->    * Selecting from lookup tables only is not supported
->    * The join conditions must match the cube definition
-
-**Select the Fact Table**
-
-Select `Multiple Tables`.
-
-![](/images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select Lookup Tables**
-
-Click `Add Table...` to add a lookup table.
-
-![](/images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully.
-
-![](/images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the lookup tables have been added properly. Give the connection a name for use in Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Use Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option.
-
-![](/images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add Additional Lookup Tables**
-
-Click `Data` in the top menu bar and select `Edit Tables...` to update the lookup table information.
-
-![](/images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using Custom SQL
-Using custom SQL is similar to using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and then follow the same instructions as above.
-
-![](/images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished building a dashboard with Tableau, you can publish it to Tableau Server.
-Click `Server` in the top menu bar and select `Publish Workbook...`.
-
-![](/images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in to your Tableau Server and prepare to publish.
-
-![](/images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you are using Connect Using Driver instead of a DSN connection, you will also need to embed your password. Click the `Authentication` button at the bottom left, select `Embedded Password`, then click `Publish` and you will see the result.
-
-![](/images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide table names in Tableau
-
-    * Tableau groups displayed columns by source table name, but users may want to organize columns in a different arrangement. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![](/images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs21/tutorial/tableau.md b/website/_docs21/tutorial/tableau.md
deleted file mode 100644
index ab643f3..0000000
--- a/website/_docs21/tutorial/tableau.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-layout: docs21
-title:  Tableau 8
-categories: tutorial
-permalink: /docs21/tutorial/tableau.html
----
-
-> There are some limitations of Kylin ODBC driver with Tableau, please read carefully this instruction before you try it.
-> 
-> * Only the "managed" analysis path is supported; the Kylin engine will raise an exception for unexpected dimensions or metrics
-> * Please always select the fact table first, then add lookup tables with the correct join conditions (the join types defined in the cube)
-> * Do not try to join between fact tables or lookup tables;
-> * You can try to use a high-cardinality dimension like seller id as a Tableau filter, but the engine will only return a limited number of seller ids in Tableau's filter for now.
-
-### For Tableau 9.x Users
-Please refer to the [Tableau 9.x Tutorial](./tableau_91.html) for a detailed guide.
-
-### Step 1. Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to Kylin Server
-> We recommend using Connect Using Driver instead of Using DSN.
-
-Connect Using Driver: Select "Other Database(ODBC)" in the left panel and choose KylinODBCDriver in the pop-up window. 
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects that you have permission to access. See details about permissions in the [Kylin Cube Permission Grant Tutorial](./acl.html). Then choose the project you want to connect to in the drop-down list. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using a Single Table or Multiple Tables
-> Limitation
-> 
->    * The FACT table must be selected first
->    * Selecting from a lookup table only is not supported
->    * The join conditions must match the cube definition
-
-**Select Fact Table**
-
-Select `Multiple Tables`.
-
-![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select Look-up Table**
-
-Click `Add Table...` to add a look-up table. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Using Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add additional look-up Tables**
-
-Click `Data` in the top menu bar, select `Edit Tables...` to update the look-up table information.
-
-![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using Customized SQL
-Using customized SQL resembles using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and follow the same instructions as above.
-
-![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
-Click `Server` in the top menu bar, select `Publish Workbook...`. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in to your Tableau Server and prepare to publish. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you're using Connect Using Driver instead of a DSN connection, you'll also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
-
-![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide table names in Tableau
-
-    * Tableau will display columns grouped by source table name, but users may want to organize columns in a different structure. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs21/tutorial/tableau_91.cn.md b/website/_docs21/tutorial/tableau_91.cn.md
deleted file mode 100644
index ed5f855..0000000
--- a/website/_docs21/tutorial/tableau_91.cn.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-layout: docs21-cn
-title:  Tableau 9 Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/tableau_91.html
-version: v1.2
-since: v1.2
----
-
-Tableau 9 has been out for a while, and many users in the community have been hoping for Apache Kylin to support this version. With an updated Kylin ODBC driver, you can now use Tableau 9 to interact with the Kylin service.
-
-
-### For Tableau 8.x Users
-Please refer to the [Tableau Tutorial](./tableau.html) for a more detailed guide.
-
-### Install ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html). Please make sure to download and install Kylin ODBC Driver __v1.5__. If you have an earlier version installed, please uninstall it first.
-
-### Connect to Kylin Server
-To create a new data connection in Tableau 9.1, click `Other Database(ODBC)` in the left panel and select `KylinODBCDriver` in the pop-up window
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Enter your server address, port, project, username and password, then click `Connect` to get the list of projects you have permission to access. For details on permissions, refer to the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping the Data Model
-In the list on the left, select the database `defaultCatalog` and click the "Search" button; all queryable tables will be listed. Drag tables to the region on the right to add them as data sources, and set up the join relationships between the tables
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two types of data source connections in Tableau 9.1; choose the `Live` option to make sure the 'Connect Live' mode is used
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use custom SQL, click `New Custom SQL` on the left and enter the SQL statement in the pop-up dialog to add it as a data source.
-![](/images/tutorial/odbc/tableau_91/5.png)
-
-### Visualization
-Now you can go ahead with visual analysis in Tableau:
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-To publish to a Tableau Server, click the `Server` menu and select `Publish Workbook`
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- Refer to the [Tableau Tutorial](./tableau.html) for more information
-- You can also check out the tutorial shared by community user Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs21/tutorial/tableau_91.md b/website/_docs21/tutorial/tableau_91.md
deleted file mode 100644
index cf4fcf6..0000000
--- a/website/_docs21/tutorial/tableau_91.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-layout: docs21
-title:  Tableau 9
-categories: tutorial
-permalink: /docs21/tutorial/tableau_91.html
----
-
-Tableau 9.x has been out for a while, and many users have been asking about its support in Apache Kylin. With the updated Kylin ODBC driver, users can now interact with the Kylin service through Tableau 9.x.
-
-
-### For Tableau 8.x Users
-Please refer to the [Kylin and Tableau Tutorial](./tableau.html) for a detailed guide.
-
-### Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.5__. If an earlier ODBC driver is already installed on your system, please uninstall it first. 
-
-### Connect to Kylin Server
-Connect Using Driver: Start Tableau 9.1 desktop, click `Other Database(ODBC)` in the left panel and choose KylinODBCDriver in the pop-up window. 
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Provide your server location, credentials and project. Click the `Connect` button to get the list of projects that you have permission to access; see details at the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping Data Model
-In the left panel, select `defaultCatalog` as the database and click the `Search` button in the table search box; all tables will be listed. Drag and drop tables to the right region to make them data sources. Make sure the JOINs are configured correctly.
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two types of `Connection`; choose the `Live` option to make sure Connect Live mode is used.
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use customized SQL, click `New Custom SQL` in the left panel and type the SQL statement in the pop-up dialog.
-![](/images/tutorial/odbc/tableau_91/5.png)
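-
-For example, an aggregate query against the sample cube works well as a custom SQL data source (illustrative; it assumes the `learn_kylin` sample project is loaded):
-
-```
-select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers
-from kylin_sales
-group by part_dt
-```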
-
-### Visualization
-Now you can start to enjoy analyzing with Tableau 9.1.
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-If you want to publish a local dashboard to a Tableau Server, just expand the `Server` menu and select `Publish Workbook`.
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- You can refer to [Kylin and Tableau Tutorial](./tableau.html) for more detail.
-- Here is a good tutorial written by Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs21/tutorial/web.cn.md b/website/_docs21/tutorial/web.cn.md
deleted file mode 100644
index a9bd1a5..0000000
--- a/website/_docs21/tutorial/web.cn.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-layout: docs21-cn
-title:  Kylin Web Interface Tutorial
-categories: tutorial
-permalink: /cn/docs21/tutorial/web.html
-version: v1.2
----
-
-> **Supported Browsers**
-> 
-> Windows: Google Chrome, FireFox
-> 
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![]( /images/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Hive Tables Available in Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, because so far it is a pre-built OLAP (MOLAP) system. To make tables available in Kylin, use the "Sync" function to conveniently sync them up from Hive.
-
-![]( /images/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-computed datasets derived from star-schema Hive tables. This web page is where users explore and manage all cubes. Go to the `Cubes` page from the menu bar, and all cubes available in the system will be listed.
-
-![]( /images/Kylin-Web-Tutorial/3 cubes.png)
-
-Explore more details about a cube
-
-* Form view:
-
-   ![]( /images/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL view (the Hive query that reads data to generate the cube):
-
-   ![]( /images/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Visualization (shows the star schema behind this cube):
-
-   ![]( /images/Kylin-Web-Tutorial/6 visualization.png)
-
-* Access (grant user/role permissions; in the beta version, granting permissions is only open to administrators):
-
-   ![]( /images/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and Run SQL on the Web
-Kylin's web interface provides a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set with the pivot analysis and visualization described in section 5.
-
-> **Query Limits**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. To avoid huge network traffic from the server to the client, the scan range threshold is set to 1,000,000 in the beta version.
-> 
-> 3. In the beta version, SQL that cannot find its data in a cube will not be redirected to Hive
-
-Go to the "Query" page from the menu bar:
-
-![]( /images/Kylin-Web-Tutorial/8 query.png)
-
-* Source tables:
-
-   Browse the currently available tables (with the same structure and metadata as in Hive):
-  
-   ![]( /images/Kylin-Web-Tutorial/9 query-table.png)
-
-* New query:
-
-   You can write and run your query and explore the results; a sample query is provided below for your reference:
-
-   ![]( /images/Kylin-Web-Tutorial/10 query-result.png)
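-
-   For example, an aggregate query on the sample tables like the following (illustrative; it assumes the sample cube has been built):
-
-   ```
-   select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers
-   from kylin_sales
-   group by part_dt
-   ```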
-
-* Saved queries:
-
-   Associated with your user account, you can retrieve saved queries from different browsers and even different machines.
-   Click "Save" in the result area, and a pop-up will ask for a name and description to save the current query:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to explore all saved queries; you can directly resubmit one to run it, or delete it:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query-2.png)
-
-* Query history:
-
-   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history will be lost if you clear the browser cache. Click the "Query History" tab, and you can directly resubmit any entry to run it again.
-
-## 5. Pivot Analysis and Visualization
-Kylin's web interface provides a simple pivot and visualization analysis tool for users to explore their query results:
-
-* General information:
-
-   When a query runs successfully, it presents a success indicator and the name of the cube that was hit.
-   It also shows how long the query ran in the backend engine (excluding the network traffic from the Kylin server to the browser):
-
-   ![]( /images/Kylin-Web-Tutorial/12 general.png)
-
-* Query results:
-
-   It is easy to sort on a column.
-
-   ![]( /images/Kylin-Web-Tutorial/13 results.png)
-
-* Export to CSV file
-
-   Click the "Export" button to save the current results as a CSV file.
-
-* Pivot table:
-
-   Drag and drop one or more columns into the header, and the results will be grouped by those columns' values:
-
-   ![]( /images/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   Also, the result set can be conveniently displayed in different charts under "Visualization":
-
-   Note: the line chart is only available when at least one dimension from the Hive table has a real "Date" data type column.
-
-   * Bar chart:
-
-   ![]( /images/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie chart:
-
-   ![]( /images/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line chart:
-
-   ![]( /images/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/_docs21/tutorial/web.md b/website/_docs21/tutorial/web.md
deleted file mode 100644
index a0d7fd1..0000000
--- a/website/_docs21/tutorial/web.md
+++ /dev/null
@@ -1,123 +0,0 @@
----
-layout: docs21
-title:  Web Interface
-categories: tutorial
-permalink: /docs21/tutorial/web.html
----
-
-> **Supported Browsers**
-> Windows: Google Chrome, FireFox
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Sync Hive Table into Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, Kylin does not let users query all Hive tables, since it is a pre-built OLAP (MOLAP) system so far. To enable a table in Kylin, simply use the "Sync" function to sync up tables from Hive.
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-calculated datasets built from star-schema tables. Here is the web interface for users to explore and manage all cubes. Go to the `Model` menu, and it will list all cubes available in the system:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/3 cubes.png)
-
-To explore more details about a cube:
-
-* Form View:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL View (Hive Query to read data to generate the cube):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Access (Grant user/role privileges; the grant operation is only open to Admin):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and Execute SQL on web
-Kylin's web interface offers a simple query tool for users to run SQL against existing cubes, verify results, and explore the result set using the pivot analysis and visualization described in section 5
-
-> **Query Limit**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. SQL will not be redirected to Hive
-
-Go to "Insight" menu:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/8 query.png)
-
-* Source Tables:
-
-   Browse the currently available tables (same structure and metadata as in Hive):
-  
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/9 query-table.png)
-
-* New Query:
-
-   You can write and execute your query and explore the result; a sample query is shown below for reference.
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/10 query-result.png)
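-
-   For example, an aggregate query on the sample tables like this one (illustrative; it assumes the sample cube has been built) can be answered by a cube:
-
-   ```
-   select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers
-   from kylin_sales
-   group by part_dt
-   ```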
-
-* Saved Queries (only works after LDAP security is enabled):
-
-   Associated with your user account, you can retrieve saved queries from different browsers and even different machines.
-   Click "Save" in the result area, and a pop-up will ask for a name and description to save the current query:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to browse all your saved queries; you can directly resubmit or remove them.
-
-* Query History:
-
-   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history will be lost if you clean up the browser's cache. Click the "Query History" tab, and you can directly resubmit any of them to execute again.
-
-## 5. Pivot Analysis and Visualization
-There's a simple pivot and visualization analysis tool in Kylin's web interface for users to explore their query results:
-
-* General Information:
-
-   When the query executes successfully, it presents a success indicator and the name of the cube that was hit. 
-   It also shows how long this query ran in the backend engine (not covering the network traffic from the Kylin server to the browser):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/12 general.png)
-
-* Query Result:
-
-   It's easy to sort on a column.
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/13 results.png)
-
-* Export to CSV File
-
-   Click the "Export" button to save the current result as a CSV file.
-
-* Pivot Table:
-
-   Drag and drop one or more columns into the header, and the result will be grouped by those columns' values:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   Also, the result set can easily be shown with different charts in "Visualization":
-
-   Note: the line chart is only available when there is at least one dimension column with a real "Date" data type from the Hive table.
-
-   * Bar Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line Chart
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/_docs23/index.cn.md b/website/_docs23/index.cn.md
index c88ef59..04f4691 100644
--- a/website/_docs23/index.cn.md
+++ b/website/_docs23/index.cn.md
@@ -12,8 +12,6 @@ permalink: /cn/docs23/index.html
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documentation for prior versions: 
-* [v2.3.x document](/cn/docs23/)
-* [v2.1.x and v2.2.x document](/cn/docs21/)
 * [Archived](/archive/)
 
 Installation 
diff --git a/website/_docs23/index.md b/website/_docs23/index.md
index f0dd562..c007f8c 100644
--- a/website/_docs23/index.md
+++ b/website/_docs23/index.md
@@ -12,8 +12,6 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 Document of prior versions: 
-* [v2.3.x document](/docs23)
-* [v2.1.x and v2.2.x document](/docs21/)
 * [Archived](/archive/)
 
 
diff --git a/website/_docs24/index.cn.md b/website/_docs24/index.cn.md
index 3258526..6c67394 100644
--- a/website/_docs24/index.cn.md
+++ b/website/_docs24/index.cn.md
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documentation for prior versions: 
 * [v2.3.x document](/cn/docs23/)
-* [v2.1.x and v2.2.x document](/cn/docs21/)
 * [Archived](/archive/)
 
 Installation 
diff --git a/website/_docs24/index.md b/website/_docs24/index.md
index e1a646c..126bf00 100644
--- a/website/_docs24/index.md
+++ b/website/_docs24/index.md
@@ -13,7 +13,6 @@ Apache Kylin™ is an open source Distributed Analytics Engine designed to provi
 
 This is the document for the latest released version (v2.4). Document of prior versions: 
 * [v2.3.x document](/docs23)
-* [v2.1.x and v2.2.x document](/docs21/)
 * [Archived](/archive/)
 
 Installation & Setup
diff --git a/website/_docs30/index.cn.md b/website/_docs30/index.cn.md
index eb28ac3..0178cc9 100644
--- a/website/_docs30/index.cn.md
+++ b/website/_docs30/index.cn.md
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 Documentation for prior versions: 
 * [v2.4 document](/cn/docs24/)
 * [v2.3 document](/cn/docs23/)
-* [v2.1 and v2.2 document](/cn/docs21/)
-* [v2.0 document](/cn/docs20/)
 * [Archived](/archive/)
 
 Installation
diff --git a/website/_docs30/index.md b/website/_docs30/index.md
index 6bc0c0e..79c1956 100644
--- a/website/_docs30/index.md
+++ b/website/_docs30/index.md
@@ -12,11 +12,8 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 This is the document for v3.0 with the new feature Real-time OLAP. Document of prior versions: 
-* [v2.5 and v2.6 document](/docs)
 * [v2.4 document](/docs24)
 * [v2.3 document](/docs23)
-* [v2.1 and v2.2 document](/docs21/)
-* [v2.0 document](/docs20/)
 * [Archived](/archive/)
 
 Installation & Setup
diff --git a/website/_docs31/index.cn.md b/website/_docs31/index.cn.md
index 78ff703..fbbd9f7 100644
--- a/website/_docs31/index.cn.md
+++ b/website/_docs31/index.cn.md
@@ -12,10 +12,10 @@ permalink: /cn/docs31/index.html
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documentation for other versions: 
+* [v3.0 document](/docs)
 * [v3.0-alpha document](/docs30)
 * [v2.4 document](/cn/docs24/)
 * [v2.3 document](/cn/docs23/)
-* [v2.1 and v2.2 document](/cn/docs21/)
 * [Archived](/archive/)
 
 Installation
diff --git a/website/_docs31/index.md b/website/_docs31/index.md
index f1a92c9..2b89f8b 100644
--- a/website/_docs31/index.md
+++ b/website/_docs31/index.md
@@ -12,10 +12,10 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 This is the document for the latest released version (v2.5 & v2.6). Document of other versions: 
+* [v3.0 document](/docs)
 * [v3.0-alpha document](/docs30)
 * [v2.4 document](/docs24)
 * [v2.3 document](/docs23)
-* [v2.1 and v2.2 document](/docs21/)
 * [Archived](/archive/)
 
 Installation & Setup
diff --git a/website/archive/docs21.tar.gz b/website/archive/docs21.tar.gz
new file mode 100644
index 0000000..de60eaa
Binary files /dev/null and b/website/archive/docs21.tar.gz differ
diff --git a/website/download/index.md b/website/download/index.md
index 77b919e..b6dd484 100644
--- a/website/download/index.md
+++ b/website/download/index.md
@@ -7,7 +7,7 @@ permalink: /download/index.html
 You can verify the download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/kylin/KEYS).
 
 #### v3.0.1
-- Kylin 3.0.0 is a release of Kylin's next generation after 2.x, with the new real-time OLAP feature, Kylin can query streaming data with sub-second latency. Kylin 3.0.1 is a major release after 3.0.0, with 24 bug fixes and enhancement. To learn about real-time OLAP, please visit [the tech blog](/blog/2019/04/12/rt-streaming-design/) and [the tutorial](/docs/tutorial/realtime_olap.html) for real-time OLAP.
+- Kylin 3.0.0 is a major release of Kylin's next generation after 2.x; with the new real-time OLAP feature, Kylin can query streaming data with sub-second latency. Kylin 3.0.1 is a release after 3.0.0, with 24 bug fixes and enhancements. To learn about real-time OLAP, please visit [the tech blog](/blog/2019/04/12/rt-streaming-design/) and [the tutorial](/docs/tutorial/realtime_olap.html) for real-time OLAP.
 - [Release notes](/docs/release_notes.html), [installation guide](/docs/install/index.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
 - Source download: [apache-kylin-3.0.1-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-source-release.zip.sha256)\]
 - Binary for Hadoop 2 download:
@@ -19,7 +19,7 @@ You can verify the download by following these [procedures](https://www.apache.o
   - for Cloudera CDH 6.0/6.1 (check [KYLIN-3564](https://issues.apache.org/jira/browse/KYLIN-3564) first) - [apache-kylin-3.0.1-bin-cdh60.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-bin-cdh60.tar.gz) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-bin-cdh60.tar.gz.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-3.0.1/apache-kylin-3.0.1-bin-cdh60.tar.gz.sha256)\]
 
 #### v2.6.5
-- This is a major release after 2.6.4, with 32 bug fixes and enhancement. Check the release notes.
+- This is a release after 2.6.4, with 32 bug fixes and enhancements. Check the release notes.
 - [Release notes](/docs/release_notes.html), [installation guide](/docs/install/index.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
 - Source download: [apache-kylin-2.6.5-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.6.5/apache-kylin-2.6.5-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.6.5/apache-kylin-2.6.5-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.6.5/apache-kylin-2.6.5-source-release.zip.sha256)\]
 - Binary for Hadoop 2 download: