Posted to commits@kylin.apache.org by sh...@apache.org on 2020/06/27 11:31:06 UTC

[kylin] 03/03: archive docs30alpha

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit dec502a0e77e723de9d91f1317e587466fc89be2
Author: shaofengshi <sh...@apache.org>
AuthorDate: Sat Jun 27 19:30:38 2020 +0800

    archive docs30alpha
---
 website/_docs30/gettingstarted/best_practices.md   |   42 -
 website/_docs30/gettingstarted/concepts.md         |   64 -
 website/_docs30/gettingstarted/events.md           |   43 -
 website/_docs30/gettingstarted/faq.cn.md           |  239 --
 website/_docs30/gettingstarted/faq.md              |  341 ---
 website/_docs30/gettingstarted/terminology.md      |   25 -
 website/_docs30/howto/howto_backup_metadata.cn.md  |  131 -
 website/_docs30/howto/howto_backup_metadata.md     |  132 -
 .../howto/howto_build_cube_with_restapi.cn.md      |   54 -
 .../_docs30/howto/howto_build_cube_with_restapi.md |   53 -
 website/_docs30/howto/howto_cleanup_storage.cn.md  |   26 -
 website/_docs30/howto/howto_cleanup_storage.md     |   27 -
 .../_docs30/howto/howto_enable_zookeeper_acl.md    |   20 -
 .../howto/howto_install_ranger_kylin_plugin.md     |    8 -
 website/_docs30/howto/howto_jdbc.cn.md             |   92 -
 website/_docs30/howto/howto_jdbc.md                |   92 -
 website/_docs30/howto/howto_ldap_and_sso.md        |  130 -
 website/_docs30/howto/howto_optimize_build.cn.md   |  166 --
 website/_docs30/howto/howto_optimize_build.md      |  190 --
 website/_docs30/howto/howto_optimize_cubes.cn.md   |  212 --
 website/_docs30/howto/howto_optimize_cubes.md      |  212 --
 website/_docs30/howto/howto_update_coprocessor.md  |   14 -
 website/_docs30/howto/howto_upgrade.md             |  112 -
 website/_docs30/howto/howto_use_beeline.md         |   14 -
 website/_docs30/howto/howto_use_cli.cn.md          |  154 --
 website/_docs30/howto/howto_use_cli.md             |  161 --
 .../howto/howto_use_distributed_scheduler.md       |   16 -
 .../_docs30/howto/howto_use_health_check_cli.md    |  118 -
 website/_docs30/howto/howto_use_mr_hive_dict.md    |  205 --
 website/_docs30/howto/howto_use_restapi.cn.md      | 1495 ----------
 website/_docs30/howto/howto_use_restapi.md         | 1496 ----------
 website/_docs30/howto/howto_use_restapi_in_js.md   |   46 -
 website/_docs30/index.cn.md                        |   69 -
 website/_docs30/index.md                           |   72 -
 website/_docs30/install/advance_settings.cn.md     |  191 --
 website/_docs30/install/advance_settings.md        |  190 --
 website/_docs30/install/configuration.cn.md        |  803 ------
 website/_docs30/install/configuration.md           |  795 ------
 website/_docs30/install/index.cn.md                |  128 -
 website/_docs30/install/index.md                   |  127 -
 website/_docs30/install/kylin_aws_emr.cn.md        |  191 --
 website/_docs30/install/kylin_aws_emr.md           |  194 --
 website/_docs30/install/kylin_cluster.cn.md        |   53 -
 website/_docs30/install/kylin_cluster.md           |   55 -
 website/_docs30/install/kylin_docker.cn.md         |  143 -
 website/_docs30/install/kylin_docker.md            |  143 -
 website/_docs30/release_notes.md                   | 2909 --------------------
 website/_docs30/tutorial/Qlik.cn.md                |  153 -
 website/_docs30/tutorial/Qlik.md                   |  156 --
 website/_docs30/tutorial/acl.cn.md                 |   35 -
 website/_docs30/tutorial/acl.md                    |   37 -
 website/_docs30/tutorial/create_cube.cn.md         |  224 --
 website/_docs30/tutorial/create_cube.md            |  217 --
 website/_docs30/tutorial/cube_build_job.cn.md      |   69 -
 website/_docs30/tutorial/cube_build_job.md         |   67 -
 .../_docs30/tutorial/cube_build_performance.cn.md  |  266 --
 website/_docs30/tutorial/cube_build_performance.md |  266 --
 website/_docs30/tutorial/cube_spark.cn.md          |  204 --
 website/_docs30/tutorial/cube_spark.md             |  200 --
 website/_docs30/tutorial/cube_streaming.cn.md      |  219 --
 website/_docs30/tutorial/cube_streaming.md         |  219 --
 website/_docs30/tutorial/flink.md                  |  249 --
 website/_docs30/tutorial/hue.md                    |  246 --
 website/_docs30/tutorial/hybrid.cn.md              |   47 -
 website/_docs30/tutorial/hybrid.md                 |   46 -
 website/_docs30/tutorial/jdbc.cn.md                |   92 -
 website/_docs30/tutorial/jdbc.md                   |   92 -
 website/_docs30/tutorial/kylin_client_tool.cn.md   |  127 -
 website/_docs30/tutorial/kylin_client_tool.md      |  137 -
 website/_docs30/tutorial/kylin_sample.cn.md        |   42 -
 website/_docs30/tutorial/kylin_sample.md           |   42 -
 website/_docs30/tutorial/microstrategy.md          |   84 -
 website/_docs30/tutorial/mysql_metastore.cn.md     |   84 -
 website/_docs30/tutorial/mysql_metastore.md        |   85 -
 website/_docs30/tutorial/odbc.cn.md                |   34 -
 website/_docs30/tutorial/odbc.md                   |   49 -
 website/_docs30/tutorial/powerbi.cn.md             |   56 -
 website/_docs30/tutorial/powerbi.md                |   54 -
 website/_docs30/tutorial/project_level_acl.cn.md   |   86 -
 website/_docs30/tutorial/project_level_acl.md      |   85 -
 website/_docs30/tutorial/query_pushdown.cn.md      |   50 -
 website/_docs30/tutorial/query_pushdown.md         |   61 -
 website/_docs30/tutorial/real_time_olap.md         |  242 --
 .../_docs30/tutorial/setup_jdbc_datasource.cn.md   |   93 -
 website/_docs30/tutorial/setup_jdbc_datasource.md  |   93 -
 website/_docs30/tutorial/setup_systemcube.cn.md    |  438 ---
 website/_docs30/tutorial/setup_systemcube.md       |  438 ---
 website/_docs30/tutorial/spark.cn.md               |   90 -
 website/_docs30/tutorial/spark.md                  |   90 -
 website/_docs30/tutorial/sql_reference.cn.md       |  384 ---
 website/_docs30/tutorial/sql_reference.md          |  388 ---
 website/_docs30/tutorial/squirrel.cn.md            |  112 -
 website/_docs30/tutorial/squirrel.md               |  112 -
 website/_docs30/tutorial/superset.cn.md            |   35 -
 website/_docs30/tutorial/superset.md               |   36 -
 website/_docs30/tutorial/tableau.cn.md             |  112 -
 website/_docs30/tutorial/tableau.md                |  113 -
 website/_docs30/tutorial/tableau_91.cn.md          |   47 -
 website/_docs30/tutorial/tableau_91.md             |   46 -
 website/_docs30/tutorial/use_cube_planner.cn.md    |  127 -
 website/_docs30/tutorial/use_cube_planner.md       |  130 -
 website/_docs30/tutorial/use_dashboard.cn.md       |   99 -
 website/_docs30/tutorial/use_dashboard.md          |   99 -
 website/_docs30/tutorial/web.cn.md                 |  109 -
 website/_docs30/tutorial/web.md                    |  106 -
 105 files changed, 19912 deletions(-)

diff --git a/website/_docs30/gettingstarted/best_practices.md b/website/_docs30/gettingstarted/best_practices.md
deleted file mode 100644
index 09dcdb7..0000000
--- a/website/_docs30/gettingstarted/best_practices.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-layout: docs30
-title:  "Community Best Practices"
-categories: gettingstarted
-permalink: /docs30/gettingstarted/best_practices.html
-since: v1.3.x
----
-
-A list of articles about Kylin best practices contributed by the community. Some of them are from the Chinese community. Many thanks!
-
-* [Apache Kylin Practice at Baidu Maps](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map) (in Chinese)
-
-* [Apache Kylin: A Powerful OLAP Tool for the Big Data Era](http://www.bitstech.net/2016/01/04/kylin-olap/) (NetEase case, in Chinese)
-
-* [Apache Kylin Practice at Yunhai](http://www.csdn.net/article/2015-11-27/2826343) (JD.com case, in Chinese)
-
-* [Integrating Kylin, Mondrian and Saiku](http://tech.youzan.com/kylin-mondrian-saiku/) (Youzan case, in Chinese)
-
-* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
-
-* [Kylin and Mondrian Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
-
-* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-
-**Here are online tutorials for self-studying Kylin:**
-
-- Free Kylin tutorial (registration needed), from the core developers, in Chinese:
-[https://www.chinahadoop.cn/search?q=kylin](https://www.chinahadoop.cn/search?q=kylin)
-
-- A paid Kylin tutorial on Udemy, in English:
-[https://www.udemy.com/apache-kylin-implementing-olap-on-the-hadoop-platform](https://www.udemy.com/apache-kylin-implementing-olap-on-the-hadoop-platform)
-
-- Tutorial of Kylin on Tableau, in Spanish: 
-[https://www.youtube.com/watch?v=k6fBw8yA1NI](https://www.youtube.com/watch?v=k6fBw8yA1NI)
-
-Besides, there are also videos of Kylin talks at conferences; you can search for them on YouTube:
-[https://www.youtube.com/results?search_query=Apache+Kylin](https://www.youtube.com/results?search_query=Apache+Kylin)
diff --git a/website/_docs30/gettingstarted/concepts.md b/website/_docs30/gettingstarted/concepts.md
deleted file mode 100644
index 6cbb7ee..0000000
--- a/website/_docs30/gettingstarted/concepts.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: docs30
-title:  "Technical Concepts"
-categories: gettingstarted
-permalink: /docs30/gettingstarted/concepts.html
-since: v1.2
----
- 
-Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
-For domain terminology, please refer to: [Terminology](terminology.html)
-
-## CUBE
-* __Table__ - This is the definition of the Hive tables that are the source of cubes; tables must be synced before building cubes.
-![](/images/docs/concepts/DataSource.png)
-
-* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and the filter conditions.
-![](/images/docs/concepts/DataModel.png)
-
-* __Cube Descriptor__ - This describes the definition and settings of a cube instance: which data model to use, what dimensions and measures it has, how to partition into segments, how to handle auto-merge, etc.
-![](/images/docs/concepts/CubeDesc.png)
-
-* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor, and consisting of one or more cube segments according to the partition settings.
-![](/images/docs/concepts/CubeInstance.png)
-
-* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments covering different date periods.
-![](/images/docs/concepts/Partition.png)
-
-* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. Once data changes in a given period, we can refresh the related segments to avoid rebuilding the whole cube.
-![](/images/docs/concepts/CubeSegment.png)
-
-* __Aggregation Group__ - Each aggregation group is a subset of dimensions, and cuboids are built from the combinations inside it. It aims at pruning cuboids for optimization.
-![](/images/docs/concepts/AggregationGroup.png)
-
-## DIMENSION & MEASURE
-* __Mandatory__ - This dimension type is used for cuboid pruning: if a dimension is specified as “mandatory”, combinations without this dimension are pruned.
-* __Hierarchy__ - This dimension type is used for cuboid pruning: if dimensions A, B, C form a “hierarchy” relation, only combinations with A, AB or ABC are retained. 
-* __Derived__ - On lookup tables, some dimensions can be derived from the primary key, so there is a specific mapping between them and the foreign key of the fact table. These dimensions are DERIVED and don't participate in cuboid generation.
-![](/images/docs/concepts/Dimension.png)
-
-* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is hard to calculate, so an approximate algorithm, [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), is introduced to keep the error rate at a low level. 
-* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int and bigint columns are supported.
-* __Top N__ - With this measure type, users can easily get, for example, the top N sellers/buyers, etc. 
-![](/images/docs/concepts/Measure.png)
-
-## CUBE ACTIONS
-* __BUILD__ - Given an interval of the partition column, this action builds a new cube segment.
-* __REFRESH__ - This action rebuilds a cube segment within a certain partition period; it is used when the source table has changed.
-* __MERGE__ - This action merges multiple continuous cube segments into a single one. This can be automated with the auto-merge settings in the cube descriptor.
-* __PURGE__ - Clears the segments under a cube instance. This only updates metadata and won't delete the cube data from HBase.
-![](/images/docs/concepts/CubeAction.png)
-
-## JOB STATUS
-* __NEW__ - This denotes a job has just been created.
-* __PENDING__ - This denotes a job is held by the job scheduler and waiting for resources.
-* __RUNNING__ - This denotes a job is in progress.
-* __FINISHED__ - This denotes a job has finished successfully.
-* __ERROR__ - This denotes a job has aborted with errors.
-* __DISCARDED__ - This denotes a job has been cancelled by the end user.
-![](/images/docs/concepts/Job.png)
-
-## JOB ACTION
-* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
-* __DISCARD__ - No matter what status a job is in, users can end it and release resources with the DISCARD action (see the sketch below).
-![](/images/docs/concepts/JobAction.png)
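-
-These job actions can also be triggered through Kylin's REST API (documented in the "Use RESTful API" page). As an illustration only, with placeholder host, credentials and job id:
-
-{% highlight Groff markup %}
-# Resume a job in ERROR status, or discard it to release resources.
-curl -X PUT -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
-     http://localhost:7070/kylin/api/jobs/your-job-id/resume
-curl -X PUT -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
-     http://localhost:7070/kylin/api/jobs/your-job-id/cancel
-{% endhighlight %}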
diff --git a/website/_docs30/gettingstarted/events.md b/website/_docs30/gettingstarted/events.md
deleted file mode 100644
index 8c09cb3..0000000
--- a/website/_docs30/gettingstarted/events.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-layout: docs30
-title:  "Events and Conferences"
-categories: gettingstarted
-permalink: /docs30/gettingstarted/events.html
----
-
-__Conferences__
-
-* [Accelerate big data analytics with Apache Kylin](https://berlinbuzzwords.de/19/session/accelerate-big-data-analytics-apache-kylin) by Shaofeng Shi at Big Data conference Berlin Buzzwords 2019, Berlin June 18, 2019
-* [Refactor your data warehouse with mobile analytics products](https://conferences.oreilly.com/strata/strata-ny/public/schedule/speaker/313314) by Zhi Zhu and Luke Han at Strata Data Conference New York, New York September 11–13, 2018
-* [Apache Kylin on HBase: Extreme OLAP engine for big data](https://www.slideshare.net/ShiShaoFeng1/apache-kylin-on-hbase-extreme-olap-engine-for-big-data) by Shaofeng Shi at [HBaseCon Asia 2018](https://hbase.apache.org/hbaseconasia-2018/)
-* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
-* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
-* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015  [...]
-* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
-* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
-* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
-* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
-* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
-* [Apache Kylin - Large-Scale Online Analytical Processing Platform on Hadoop](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
-* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
-
-__Meetup__
-
-* [Apache Kylin Meetup @Beijing](https://www.huodongxing.com/event/2516174942311), China; 1:00 PM - 5:00 PM, Saturday, 2019-11-16
-* [Apache Kylin Meetup @Berlin](https://www.meetup.com/Apache-Kylin-Meetup-Berlin/events/264945114) ([Slides](https://www.slideshare.net/ssuser931288/presentations)), Berlin, Germany; 7:00 PM - 8:30 PM, Thursday, 2019-10-24
-* [Apache Kylin Meetup @Shenzhen](https://www.huodongxing.com/event/3506680147611), China; 12:30 PM - 5:00 PM, Saturday, 2019-09-07
-* [Apache Kylin Meetup @California](https://www.meetup.com/Apache-Kylin/events/263433976), San Jose, US; 6:30 PM - 8:30 PM, Wednesday, 2019-08-07
-* [Apache Kylin Meetup @Chengdu](https://www.huodongxing.com/event/4489409598500), China; 1:00 PM - 5:00 PM, Saturday, 2019-05-25
-* [Apache Kylin Meetup @Beijing](https://www.huodongxing.com/event/7484371439700), China; 1:00 PM - 5:30 PM, Saturday, 2019-04-13
-* [Apache Kylin Meetup @Shanghai](http://www.huodongxing.com/event/4476570217900) ([Slides](https://kyligence.io/zh/resource/case-study-zh/)), China; 1:00 PM - 4:30 PM, Saturday, 2019-02-23 
-* [Apache Kylin for Extreme OLAP and Big Data @eBay South Campus](https://www.eventbrite.com/e/thursday-nov-29-meetup-apache-kylin-for-extreme-olap-and-big-data-tickets-52275347973?aff=estw), San Jose, CA, US; 6:30 PM - 8:30 PM, Thursday, 2018-11-29 
-* [Apache Kylin Meetup @Hangzhou](http://www.huodongxing.com/event/7461326621900), China; 1:30 PM - 5:00 PM, Saturday, 2018-10-26
-* [CDAP in Cloud, Extreme OLAP w Apache Kylin, Twitter Reviews & DataStax](https://www.meetup.com/BigDataApps/events/253429041/) @ Google Cloud, US; 6:00 PM - 8:00 PM, 2018-8-29
-* [Apache Kylin Meetup @Beijing Meituan&Dianping](http://www.huodongxing.com/event/7452131278400), China; 1:30 PM - 5:00 PM, Saturday, 2018-8-11
-* [Apache Kylin Meetup @Shanghai Bestpay](http://www.huodongxing.com/event/2449364807100?td=4222685755750), China; 1:30 PM - 5:00 PM, Saturday, 2018-7-28
-* [Apache Kylin Meetup @Shenzhen](http://cn.mikecrm.com/rjqPLom), China; 1:00 PM - 5:00 PM, Saturday, 2018-6-23
-* [Apache Kylin & Alluxio Meetup @Shanghai](http://huiyi.csdn.net/activity/product/goods_list?project_id=3746), in Shanghai, China, 1:00 PM - 5:30 PM, Sunday, 2018-1-21
-* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
-
-[__Propose a talk__](http://kyligence-apache-kylin.mikecrm.com/SJFewHC)
-
diff --git a/website/_docs30/gettingstarted/faq.cn.md b/website/_docs30/gettingstarted/faq.cn.md
deleted file mode 100644
index aed4b25..0000000
--- a/website/_docs30/gettingstarted/faq.cn.md
+++ /dev/null
@@ -1,239 +0,0 @@
----
-layout: docs30-cn
-title:  FAQ
-categories: gettingstarted
-permalink: /cn/docs30/gettingstarted/faq.html
-since: v0.6.x
----
-
-### If you encounter problems when using Kylin
-
-1. Use search engines (Google/Baidu), the [Kylin mailing list archives](http://apache-kylin.74782.x6.nabble.com/) and the [Kylin JIRA list](https://issues.apache.org/jira/projects/KYLIN/issues) to look for a solution.
-2. Browse Kylin's official website, especially the [Docs](http://kylin.apache.org/docs/) and [FAQ](http://kylin.apache.org/docs/gettingstarted/faq.html) pages.
-3. Ask the community for help: after subscribing to the Apache Kylin mailing lists, you can send an email from your personal mailbox to the list; every subscriber will see it and may reply with their insights.
-   Apache Kylin has three main mailing lists: dev, user and issues. The dev list discusses Kylin development and new releases, the user list discusses problems users run into, and the issues list tracks updates from Kylin's project management tool (JIRA). For how to subscribe, please refer to the [Apache Kylin Mailing Lists](https://kylin.apache.org/cn/community/) page.
-   Since Apache Kylin is an open source community where all users and committers contribute voluntarily, discussions and help come with no SLA (Service Level Agreement). To make discussions efficient and questions well-formed, please describe the error in detail: how to reproduce it, the installed Kylin version and the Hadoop distribution version, and ideally attach the relevant error logs. Also, since the user base is global, please write the email in English, or at least use an English subject line. The article [How To Ask Questions The Smart Way](http://catb.org/~esr/faqs/smart-questions.html) is recommended reading.
-
-### Is Kylin a generic SQL engine for big data?
-No. Kylin is an OLAP engine with a SQL interface; SQL queries need to match the pre-defined OLAP model.
-
-### What is a typical scenario for using Apache Kylin?
-Kylin is the best option when you have a huge table (e.g., more than 100 million rows) joined with lookup tables, queries need to be answered in dashboards, interactive reports or BI (Business Intelligence) tools, and there are dozens or hundreds of concurrent users.
-
-### How large a dataset can Kylin support? How is the performance?
-Kylin delivers sub-second queries on TB- to PB-scale datasets. This has been verified by users such as eBay, Meituan and Toutiao. Take Meituan's case as an example (as of 2018-08): 973 cubes, 3.8 million queries per day, 8.9 trillion rows of raw data, a total cube size of 971 TB (the raw data is bigger), 50% of queries finished in < 0.5 s and 90% in < 1.2 s.
-
-### What is the expansion rate of a Cube (compared with the raw data)?
-The expansion rate of a Cube depends on multiple factors, such as the number of dimensions/measures, the cardinality of the dimensions, the number of cuboids, the compression algorithm, etc. Users can optimize the cube size in many ways.
-
-### How does Kylin compare with other SQL engines (e.g., Hive, Presto, SparkSQL, Impala)?
-These SQL engines answer queries in different ways; Kylin is not a replacement for them but a query accelerator. Many users run Kylin together with other SQL engines. For frequently queried patterns, building a Cube can greatly improve performance and offload cluster workload.
-
-### How many Hadoop nodes are needed to run Kylin?
-
-Kylin can run on a Hadoop cluster from a few nodes to thousands of nodes, depending on how much data you have. The architecture scales horizontally.
-Because most of the computation happens in Hadoop (MapReduce/Spark/HBase), usually you only need to install Kylin on a few nodes.
-
-
-### How many dimensions can a Cube support?
-
-The maximum number of physical dimensions (excluding derived dimensions) in a cube is 63, but cubes with more than 30 dimensions are not recommended, as they suffer from the curse of dimensionality.
-
-
-### "select \*" queries report errors
-A Cube only contains aggregated data, so all queries should be aggregate queries (with "group by"). You can group by all dimensions to get results as close as possible to the detailed data, but the result is still not the raw data.
-To be connectable from some BI tools, Kylin tries to answer "select \*" queries, but please be aware that the result may not be what you expect. Make sure every query sent to Kylin is an aggregate query.
-
-
-### How to query raw data from a Cube
-
-A Cube is not the right option for querying raw data; but if you really have this need, here are some workarounds:
-1. Set the table's primary key (pk) as a dimension in the Cube, and then use "group by pk" in the query.
-2. Configure Kylin to push the query down to another SQL engine such as Hive; but note that query performance may suffer.
-
-
-### What is an ultra-high-cardinality (UHC) dimension?
-
-UHC stands for Ultra High Cardinality. Cardinality is the number of distinct values of a dimension. Usually a dimension's cardinality ranges from tens to millions; if it exceeds a million, we call it a UHC dimension, for example user ID, phone number, etc.
-
-Kylin supports UHC dimensions, but pay extra attention to them in cube design: they may make the Cube very large and queries slow.
-
-
-### How to specify which Cube answers a query
-
-You cannot specify the Cube used to answer a query; Cubes are transparent to end users. If you have multiple Cubes for the same data model, it is recommended to put them in different projects.
-
-
-### Is there a REST API for creating projects, models and Cubes?
-
-Yes, but they are internal APIs that may change across versions. By default, Kylin expects users to create new projects, models and Cubes on the web GUI.
-
-### How to define a snowflake model (with two fact tables)
-
-In a snowflake model there is still only one fact table, but you can define a dimension table that joins with another dimension table.
-
-If a query joins two fact tables, you can also define one fact table as a dimension table, and skip the dimension table snapshot for this huge table.
-
-### Where is the Cube stored? Can Cube data be read directly from HBase?
-
-Cube data is stored in HBase: each cube segment is an HBase table; dimension values are composed into the row key, and measure values are serialized into columns. To improve storage efficiency, both dimensions and measures are encoded into bytes, and Kylin decodes the bytes back to the original values after fetching them from HBase.
-Without Kylin's metadata, the HBase tables are not readable.
-
-
-### Best practices for cube design
-Please refer to: [Design cube in Apache Kylin](https://www.slideshare.net/YangLi43/design-cube-in-apache-kylin)
-
-### How to encrypt Cube data
-
-You can encrypt Cube data on the HBase side; please refer to [Transparent Encryption of Data At Rest](https://hbase.apache.org/book.html#hbase.encryption.server).
-
-
-### How to schedule Cube builds automatically
-
-Kylin has no built-in scheduler. You can trigger periodic cube builds through the REST API from an external scheduler, such as the Linux `crontab` command or Apache Airflow; see the sketch below.
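-
-For illustration only, here is a minimal build-trigger script using the `/kylin/api/cubes/{cube}/rebuild` endpoint from Kylin's REST API documentation; the host, credentials, cube name and time window are placeholders:
-```sh
-#!/bin/bash
-# Trigger an incremental build covering the last 24 hours.
-# startTime/endTime are epoch milliseconds; ADMIN:KYLIN is the default account.
-END=$(( $(date +%s) * 1000 ))
-START=$(( END - 24 * 3600 * 1000 ))
-curl -X PUT \
-  -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
-  -H "Content-Type: application/json" \
-  -d "{\"startTime\": $START, \"endTime\": $END, \"buildType\": \"BUILD\"}" \
-  http://localhost:7070/kylin/api/cubes/your_cube/rebuild
-```
-A crontab entry such as `0 1 * * * /path/to/build_cube.sh` would then run this daily at 1 AM.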
-
-
-### Does Kylin support Hadoop 3 and HBase 2.0?
-
-Starting from v2.5.0, Kylin provides binary packages for Hadoop 3 and HBase 2.
-
-
-### The Cube is ready, but its table does not appear on the Insight page
-
-Make sure the `kylin.server.cluster-servers` property in `$KYLIN_HOME/conf/kylin.properties` is correctly configured on every Kylin node. Kylin nodes notify each other to flush caches through this configuration, so please also make sure the network among them is healthy.
-
-### How to handle "java.lang.NoClassDefFoundError" errors?
-
-Kylin does not ship these Hadoop jars, because they should already exist on the Hadoop node. So Kylin tries to find them via `hbase classpath` and `hive -e set`, and appends their paths to `HBASE_CLASSPATH` (Kylin runs the `hbase` script on startup, which reads `HBASE_CLASSPATH`).
-Due to Hadoop's complexity, some jars may not be found. In that case, please check and modify the `find-\*-dependency.sh` and `kylin.sh` scripts under `$KYLIN_HOME/bin/` to fit your environment. Also, in some Hadoop distributions (such as AWS EMR 5.0) the `hbase` script does not keep the original `HBASE_CLASSPATH` value, which may cause the "NoClassDefFoundError". To fix this, find the `hbase` script under `$HBASE_HOME/bin/`, search for `HBASE_CLASSPATH` in it, and check whether it looks like this:
-```sh
-export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
-```
-If so, modify it to keep the original value, like this:
-```sh
-export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
-```
-
-
-### How to add dimensions and measures to a Cube?
-
-Once a Cube is built, its structure cannot be modified. You can clone the Cube, add the dimensions and measures in the new Cube, and, once the new Cube is built, disable or delete the old one so that queries are answered by the new Cube.
-
-If you can accept that the new dimensions are absent from historical data, you can build the new Cube starting from the end time of the old Cube, and create a hybrid model over the old and new Cubes.
-
-
-### The query result does not match the result in Hive
-
-Possible reasons:
-1. The source data in Hive changed after being loaded into Kylin.
-2. The Cube's time range differs from that in Hive.
-3. Another Cube answered the query.
-4. The model contains INNER JOINs, but the query does not include all the tables' join conditions.
-5. The Cube uses approximate measures such as HyperLogLog or TopN.
-6. In Kylin v2.3 and earlier, Kylin may lose data when fetching from Hive; see KYLIN-3388.
-
-### What if the source data changes after the Cube is built?
-
-You need to refresh the Cube; if the Cube is partitioned, you can refresh only certain segments.
-
-### The build fails at "Load HFile to HBase Table" with "bulk load aborted with some files not yet loaded"
-
-The root cause is that Kylin has no permission to execute HBase CompleteBulkLoad. Please check whether the user that starts Kylin has permission to access HBase.
-
-### `sample.sh` fails to create the `/tmp/kylin` folder on HDFS
-
-Run `bash -v $KYLIN_HOME/bin/find-hadoop-conf-dir.sh`, check the error message, and then troubleshoot according to it.
-
-### In Chrome, the page reports "net::ERR_CONTENT_DECODING_FAILED"
-
-Edit `$KYLIN_HOME/tomcat/conf/server.xml`, find `compress=on` and change it to `off`.
-
-
-### How to build a specific Cube in a designated YARN queue
-
-You can override the following parameters at the Cube level:
-```properties
-kylin.engine.mr.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
-kylin.source.hive.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
-kylin.engine.spark-conf.spark.yarn.queue=YOUR_QUEUE_NAME
-```
-
-### The build reports "Too high cardinality is not suitable for dictionary"
-
-Kylin uses dictionary encoding to encode/decode dimension values; usually a dimension's cardinality is below a million, so dictionary encoding works well. Since a dictionary has to be persisted and loaded into memory, a dimension with very high cardinality would take a huge amount of memory, so Kylin adds a check on this. If you see this error, first identify which dimensions are ultra-high-cardinality and re-evaluate the cube design (e.g., does the UHC column really need to be a dimension?). If you must keep the UHC column as a dimension, there are the following workarounds:
-1. Change the encoding, e.g., to `fixed_length` or `integer`.
-2. Increase `kylin.dictionary.max.cardinality` in `$KYLIN_HOME/conf/kylin.properties`.
-
-
-### SUM() returns a negative value even though all values in the column are greater than 0
-
-If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's data type as the data type of "SUM()", while the aggregated value on this column may exceed the integer range; in that case the query may return a negative value.
-
-The workaround is as follows:
-Change the column's data type to BIGINT in Hive, and then sync the table to Kylin (the corresponding Cube does not need to be refreshed); a sketch is shown below.
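-
-For illustration, assuming a hypothetical table `sales` with an INT column `price`, the Hive-side change could look like this:
-```sh
-# Widen the column type in Hive (table and column names are placeholders),
-# then reload the table in Kylin's web GUI; no cube rebuild is needed.
-hive -e "ALTER TABLE sales CHANGE COLUMN price price BIGINT;"
-```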
-
-
-### Why does Kylin extract distinct columns from the fact table before building cuboids?
-Kylin uses dictionaries to encode the values in each column, which greatly reduces the Cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values of each column.
-
-
-### How to add users or change the default password
-
-Kylin's web security is implemented with the Spring Security framework, and `kylinSecurity.xml` is the main configuration file:
-```
-${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-```
-The password hashes of the pre-defined users can be found in the "sandbox,testing" profile. To change a default password, you need to generate a new hash and update it there; see this code snippet: [Spring BCryptPasswordEncoder generate different password for same input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-
-We recommend integrating Kylin with LDAP authentication to manage multiple users.
-
-### NPM errors when building the Kylin code (users in mainland China, please pay special attention)
-
-You can set a proxy for NPM with:
-```sh
-npm config set proxy http://YOUR_PROXY_IP
-```
-Please update your local NPM repository to use a Chinese NPM mirror, such as the [Taobao NPM mirror](http://npm.taobao.org).
-
-
-### BuildCubeWithEngineTest fails with "failed to connect to hbase"
-You may hit this error the first time you run the HBase client. Please check the error trace to see whether a folder such as "/hadoop/hbase/local/jars" is inaccessible; if the folder does not exist, create it.
-
-### The JDBC driver returns a different date/time than the REST API
-Please refer to: [JDBC query result Date column get wrong value](http://apache-kylin.74782.x6.nabble.com/JDBC-query-result-Date-column-get-wrong-value-td5370.html)
-
-
-### How to change the default password of the ADMIN user
-
-By default, Kylin uses a simple, configuration-based user registry; the default administrator ADMIN with password KYLIN is hard-coded in `kylinSecurity.xml`. To change the password, first get the encrypted value of the new password (with BCrypt), and then set it in `kylinSecurity.xml`. Here is an example with the password 'ABCDE':
-```sh
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-2.3.0.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:spring-security-core-4.2.3.RELEASE.jar:commons-codec-1.7.jar:commons-logging-1.1.3.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer BCrypt ABCDE
-```
-The encrypted password is:
-```
-$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu
-```
-Then set the encrypted password in `kylinSecurity.xml`:
-```
-vi $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-```
-Replace the old password with the new one:
-```
-<bean class="org.springframework.security.core.userdetails.User" id="adminUser">
-    <constructor-arg value="ADMIN"/>
-    <constructor-arg value="$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu"/>
-    <constructor-arg ref="adminAuthorities"/>
-</bean>
-```
-Restart Kylin for the change to take effect. If you run multiple Kylin servers as a cluster, do the same on every node.
-
-### The files in the working directory on HDFS exceed 300 GB; can they be deleted manually?
-
-The data in the HDFS working directory includes intermediate files (which are removed by garbage cleanup) and Cuboid data (which is not). The Cuboid data is kept for later segment merges, so if you are sure those segments will not be merged later, you can move the Cuboid data to another path or even delete it; a sketch of inspecting and cleaning the directory follows below.
-
-Also, note that the "resources" or "jdbc-resources" sub-directories under the HDFS working directory hold large metadata files such as dictionaries and dimension-table snapshots; these files must not be deleted.
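-
-For illustration, assuming the default working directory `/kylin/kylin_metadata`, you can check its size and run the cleanup tool shipped with Kylin (see the "How to cleanup storage" doc) like this:
-```sh
-# See which sub-directories take the most space.
-hdfs dfs -du -h /kylin/kylin_metadata
-# List the garbage that would be cleaned, then actually delete it.
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-```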
-
-### How to escape keywords in "like" clauses?
-"%" and "_" are reserved keywords in the "like" clause: "%" matches any number of characters, and "_" matches a single character. To match a keyword such as "_", escape it with another character placed before it. Below is an example using "/" as the escape character; the query tries to match "xiao_":
-"select username from gg_user where username like '%xiao/_%' escape '/'; "
\ No newline at end of file
diff --git a/website/_docs30/gettingstarted/faq.md b/website/_docs30/gettingstarted/faq.md
deleted file mode 100644
index feced26..0000000
--- a/website/_docs30/gettingstarted/faq.md
+++ /dev/null
@@ -1,341 +0,0 @@
----
-layout: docs30
-title:  "FAQ"
-categories: gettingstarted
-permalink: /docs30/gettingstarted/faq.html
-since: v0.6.x
----
-
-**Here are some tips for you when encountering problems with Kylin:**
- 1. Use search engines (Google / Baidu), Kylin's [Mailing List Archives](http://apache-kylin.74782.x6.nabble.com/), and the Kylin project on the Apache [JIRA](https://issues.apache.org/jira/projects/KYLIN/issues) to seek a solution. 
 - 2. Browse Kylin's official website, especially the [Docs page](http://kylin.apache.org/docs/) and the [FAQ page](http://kylin.apache.org/docs/gettingstarted/faq.html). 
 - 3. Send an email to the Apache Kylin dev or user mailing list: dev@kylin.apache.org, user@kylin.apache.org; before sending, please make sure you have subscribed to the mailing list by dropping an email to dev-subscribe@kylin.apache.org or user-subscribe@kylin.apache.org. Your email should include: the version numbers of Kylin and the other components in your environment, the log of the error message, and the SQL statement (if you got a query error).
-There is an article about [how to ask a question in a smart way](http://catb.org/~esr/faqs/smart-questions.html).
-
-#### Is Kylin a generic SQL engine for big data?
-
-  * No, Kylin is an OLAP engine with a SQL interface. SQL queries need to match the pre-defined OLAP model.
-
-#### What's a typical scenario to use Apache Kylin?
-
-  * Kylin can be the best option if you have a huge table (e.g., >100 million rows) joined with lookup tables, queries need to finish at second-level latency (dashboards, interactive reports, business intelligence, etc.), and there are dozens or hundreds of concurrent users.
-
-#### How large a dataset can Kylin support? How about the performance?
-
-  * Kylin supports second-level query latency on TB- to PB-scale datasets. This has been verified by users like eBay, Meituan and Toutiao. Take Meituan's case as an example (till 2018-08): 973 cubes, 3.8 million queries per day, raw data of 8.9 trillion rows, total cube size 971 TB (the original data is bigger), 50% of queries finished in < 0.5 seconds and 90% in < 1.2 seconds.
-
-#### Who are using Apache Kylin?
-
-  * You can find a list in Kylin's [powered by page](/community/poweredby.html). If you want to be added, please email to dev@kylin.apache.org with your use case.
-
-#### What's the expansion rate of Cube (compared with raw data)?
-
-  * It depends on a couple of factors, for example, dimension/measure number, dimension cardinality, cuboid number, compression algorithm, etc. You can optimize the cube expansion in many ways to control the size.
-
-#### How to compare Kylin with other SQL engines like Hive, Presto, Spark SQL, Impala?
-
-  * They answer a query in different ways. Kylin is not a replacement for them, but a supplement (a query accelerator). Many users run Kylin together with other SQL engines. For high-frequency query patterns, building Cubes can greatly improve the performance and also offload cluster workloads. For less-queried patterns or ad-hoc queries, other MPP engines are more flexible.
-  
-#### How to compare Kylin with Druid?
-
-  * Druid is more suitable for real-time analysis, while Kylin is more focused on OLAP cases. Druid has good integration with Kafka for real-time streaming; Kylin fetches data from Hive or Kafka in batches. The real-time capability of Kylin is still under development.
-
-  * Many internet service providers host both Druid and Kylin, serving different purposes (real-time and historical).
-
-  * Some other highlights of Kylin: support for star & snowflake schemas; ANSI-SQL support; JDBC/ODBC for BI integrations. Kylin also has a Web GUI with LDAP/SSO user authentication.
-
-  * For more information, please do a search or check this [mail thread](https://mail-archives.apache.org/mod_mbox/kylin-dev/201503.mbox/%3CCAKmQrOY0fjZLUU0MGo5aajZ2uLb3T0qJknHQd+Wv1oxd5PKixQ@mail.gmail.com%3E).
-
-#### How to quick start with Kylin?
-
-  * To get a quick start, you can run Kylin in a Hadoop sandbox VM or in the cloud; for example, start a small AWS EMR or Azure HDInsight cluster and then install Kylin on one of the nodes.
-
-#### How many nodes of the Hadoop are needed to run Kylin?
-
-  * Kylin can run on a Hadoop cluster from only a couple of nodes to thousands of nodes, depending on how much data you have. The architecture is horizontally scalable.
-
-  * Because most of the computation is happening in Hadoop (MapReduce/Spark/HBase), usually you just need to install Kylin in a couple of nodes.
-
-#### How many dimensions can be in a cube?
-
-  * The max physical dimension number (excluding derived columns in lookup tables) in a cube is 63; if you can normalize some dimensions to lookup tables, then with derived dimensions you can create a cube with more than 100 dimensions.
-
-  * But a cube with more than 30 physical dimensions is not recommended; you may not even be able to save it in Kylin unless you optimize the aggregation groups. Please search "curse of dimensionality".
-
-#### Why do I get an error when running a "select \*" query?
-
-  * The cube only has aggregated data, so all your queries should be aggregate queries ("GROUP BY"). You can use a SQL statement that groups by all dimensions to get something close to the detailed result, but that is still not the raw data.
-
-  * In order to be connectable from some BI tools, Kylin tries to answer "select \*" queries, but please be aware the result might not be what you expect. Please make sure each query sent to Kylin is aggregated.
-
-#### How can I query raw data from a cube?
-
-  * Cube is not the right option for raw data.
-
-But if you really need it, there are some workarounds: 1) add the primary key as a dimension, then "group by pk" will return the raw data; 2) configure Kylin to push down the query to another SQL engine like Hive, though the performance is not assured. A sketch of sending such a query over the REST API follows below.
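-
-For illustration only, here is how the first workaround could be issued through Kylin's query REST API (`POST /kylin/api/query`, described in the "Use RESTful API" doc); the host, credentials, project and table/column names below are placeholders:
-
-{% highlight Groff markup %}
-# Group by the primary key so the cube returns row-level detail.
-curl -X POST \
-     -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
-     -H "Content-Type: application/json" \
-     -d '{"sql": "select pk, count(*) from my_fact_table group by pk", "project": "my_project", "limit": 100}' \
-     http://localhost:7070/kylin/api/query
-{% endhighlight %}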
-
-#### What is the UHC dimension?
-
-  * UHC means Ultra High Cardinality. Cardinality means the number of distinct values of a dimension. Usually, a dimension's cardinality is from tens to millions; if it is above a million, we call it a UHC dimension, for example, user id, cell number, etc.
-
-  * Kylin supports UHC dimensions, but you need to pay attention to them, especially the encoding and the cuboid combinations; they may make your Cube very large and queries slow.
-
-#### Can I specify a cube to answer my SQL statements?
-
-  * No, you can't; the Cube is transparent to the end user. If you have multiple Cubes for the same data model, separating them into different projects is a good idea.
-
-#### Is there a REST API to create the project/model/cube?
-
-  * Yes, but they are private APIs, inclined to change over versions (without notification). By design, Kylin expects the user to create a new project/model/cube in Kylin's web GUI.
-
-#### How to define a snowflake model (with two fact tables)?
-
-  * In the snowflake model, there is still only one fact table, but you can define a lookup table that joins with another lookup table.
-  * If the query pattern between your two "fact" tables is fixed, e.g., factA always left-joins with factB, you can define factB as a lookup table and skip the snapshot for this huge lookup table.
-
-#### Where does the cube locate, can I directly read cube from HBase without going through Kylin API?
-
-  * Cube is stored in HBase. Each cube segment is an HBase table. The dimension values will be composed as the row key. The measures will be serialized in columns. To improve the storage efficiency, both dimension and measure values will be encoded to bytes. Kylin will decode the bytes to origin values after fetching from HBase. Without Kylin's metadata, the HBase tables are not readable.
-
-#### What's the best practice to design a cube?
-
-  * Please check: [https://www.slideshare.net/YangLi43/design-cube-in-apache-kylin](https://www.slideshare.net/YangLi43/design-cube-in-apache-kylin)
-
-#### How to encrypt cube data?
-
-  * You can enable encryption at HBase side. Refer https://hbase.apache.org/book.html#hbase.encryption.server for more details.
-
-#### How to schedule the cube build at a fixed frequency, in an automatic way?
-
-  * Kylin doesn't have a built-in scheduler for this. You can trigger builds through the REST API from external scheduler services, like a Linux cron job, Apache Airflow, etc.
-
-#### How to export/import cube/project across different Kylin environments?
-
-  * Please check: [http://kylin.apache.org/docs/howto/howto_use_cli.html](http://kylin.apache.org/docs/howto/howto_use_cli.html)
-
-#### How to view a Kylin cube's HBase table without encoding?
-
-  * To view the original data, please use SQL to query Kylin. Kylin will convert the SQL query to HBase access and then decode the data. You can use the REST API or the JDBC/ODBC drivers to connect with Kylin.
-  
-#### Does Kylin support Hadoop 3 and HBase 2.0?
-
-  * From v2.5.0, Kylin will provide a binary package for Hadoop 3 and HBase 2.
-
-#### The Cube is ready, but why does the table not appear in the "Insight" tab?
-
-  * Make sure the "kylin.server.cluster-servers" property in `conf/kylin.properties` is configured with EVERY Kylin node, including all job and query nodes. Kylin nodes notify each other to flush caches with this configuration. And please ensure the network among them is healthy.
-
-#### What should I do if I encounter a "java.lang.NoClassDefFoundError" error?
-
-  * Kylin doesn't ship those Hadoop jars, because they should already exist in the Hadoop node. So Kylin will try to find them and add them to Kylin's classpath. Due to Hadoop's complexity, there might be cases where a jar isn't found. In this case please look at "bin/find-\*-dependency.sh" and "bin/kylin.sh", and modify them to fit your environment; a quick diagnostic is sketched below.
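-
-  * As an illustration (not part of Kylin's scripts), you can quickly check whether the jar containing a missing class is visible on the classpath Kylin derives from `hbase classpath`:
-
-{% highlight Groff markup %}
-# Print the classpath one entry per line and look for the jar that should
-# provide the missing class, e.g. hive-hcatalog for HCatInputFormat.
-hbase classpath | tr ':' '\n' | grep -i hcatalog
-{% endhighlight %}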
-
-#### How to query Kylin in Python?
-
-  * Please check: [https://github.com/Kyligence/kylinpy](https://github.com/Kyligence/kylinpy)
-
-#### How to add dimension/measure to a cube?
-
-  * Once a cube is built, its structure can't be modified. To add a dimension/measure, you need to clone a new cube and then add them in it.
-
-When the new cube is built, please disable or drop the old one.
-
-If you can accept the absence of the new dimensions for historical data, you can build the new cube starting from the end time of the old cube, and then create a hybrid model over the old and new cubes.
-
-#### How to solve the data security problem of the Tableau connection client?
-  
-  * Kylin's ACL control can solve this problem. In Kylin, different analysts can be authorized to work on different projects. When you create a Kylin ODBC DSN, you can map different connections to different analyst accounts.
-
-#### The query result is not exactly matched with that in Hive, what's the possible reason?
-
-  * Possible reasons:
-a) Source data changed in Hive after being built into the cube;
-b) The cube's time range is not the same as in Hive;
-c) Another cube answered your query;
-d) The data model has inner joins, but the query doesn't join all the tables;
-e) The cube has some approximate measures like HyperLogLog or TopN;
-f) In v2.3 and before, Kylin may lose data when fetching from Hive; see KYLIN-3388.
-
-#### What to do if the source data changed after being built into the cube?
-
-  * You need to refresh the cube. If the cube is partitioned, you can refresh certain segments.
-
-#### What is the possible reason for getting the error ‘bulk load aborted with some files not yet loaded’ in the ‘Load HFile to HBase Table’ step?
-
-  * Kylin doesn't have permission to execute HBase CompleteBulkLoad. Check whether the current user (that runs the Kylin service) has permission to access HBase.
-
-#### Why can't `bin/sample.sh` create the `/tmp/kylin` folder on HDFS?
-
-  * Run `./bin/find-hadoop-conf-dir.sh -v`, check the error message, then check the environment according to the information reported.
-
-#### In Chrome, web console shows net::ERR_CONTENT_DECODING_FAILED, what should I do?
-
-  * Edit $KYLIN_HOME/tomcat/conf/server.xml, find the "compress=on", change it to off.
-
-#### How to configure one cube to be built using a chosen YARN queue?
-
-  * Set the YARN queue in Cube’s Configuration Overwrites page, then it will affect only one cube. Here are the three parameters:
-
-  {% highlight Groff markup %}
-kylin.engine.mr.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
-kylin.source.hive.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
-kylin.engine.spark-conf.spark.yarn.queue=YOUR_QUEUE_NAME
-  {% endhighlight %}
-
-#### How to add a new JDBC data source dialect?
-
-  * It is easy to add a new type of JDBC data source. You can follow these steps:
-
-1) Add the dialect in source-hive/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
-
-2) Implement a new IJdbcMetadata if the metadata fetching of the database you want to add is different from the others, and then register it in JdbcMetadataFactory
-
-3) You may need to customize the SQL for creating/dropping tables in JdbcExplorer for the database you want to add.
-
-#### How to ask a question?
-
-  * Check the Kylin documents first; a Google search can also help. Sometimes the question has already been answered, so you don't need to ask again. If there is no match, please send your question to the Apache Kylin user mailing list: user@kylin.apache.org; you need to drop an email to user-subscribe@kylin.apache.org to subscribe if you haven't done so. In the email, please provide your Kylin and Hadoop versions, the specific error logs (as much as possible), and the steps to reproduce.  
-
-#### "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
-
-  * Kylin needs many dependency jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but it doesn't ship them. It will seek these jars from your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The found jars' paths will be appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this). But in some Hadoop distributions (like AWS EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPATH` value, which may cause the "NoClassDefFoundError".
-
-  * To fix this, find the hbase shell script (in the hbase/bin folder), search for *HBASE_CLASSPATH*, and check whether it overwrites the value like:
-
-  {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
-  {% endhighlight %}
-
-  * If true, change it to keep the origin value like:
-
-   {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
-  {% endhighlight %}
-
-#### Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
-
-  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the de [...]
-
-
-#### How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
-
-  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
-
-  {% highlight Groff markup %}
-  I was able to deploy Kylin with following option in POM.
-  <hadoop2.version>2.5.0</hadoop2.version>
-  <yarn.version>2.5.0</yarn.version>
-  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
-  <zookeeper.version>3.4.5</zookeeper.version>
-  <hive.version>0.13.1</hive.version>
-  My Cluster is running on Cloudera Distribution CDH 5.2.0.
-  {% endhighlight %}
-
-
-#### SUM(field) returns a negative result while all the numbers in this field are > 0
-  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need rebuilding). Keep in mind to always declare an integer column as BIGINT in Hive if it will be summed to big values.
-
-#### Why does Kylin need to extract the distinct columns from the fact table before building the cube?
-  * Kylin uses dictionaries to encode the values in each column, which greatly reduces the cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values for each column.
-
-#### Why does Kylin calculate the Hive table cardinality?
-  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer to build and the slower to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided with best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
-
-#### How to add new user or change the default password?
-  * Kylin web's security is implemented with Spring security framework, where the kylinSecurity.xml is the main configuration file:
-
-   {% highlight Groff markup %}
-   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-   {% endhighlight %}
-
-  * The password hash for pre-defined test users can be found in the profile "sandbox,testing" part; To change the default password, you need generate a new hash and then update it here, please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
-
-#### Using a sub-query for unsupported SQL
-
-{% highlight Groff markup %}
-Original SQL:
-select fact.slr_sgmt,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from ih_daily_fact fact
-inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-group by fact.slr_sgmt
-{% endhighlight %}
-
-{% highlight Groff markup %}
-Using sub-query
-select a.slr_sgmt,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from (
-    select fact.slr_sgmt as slr_sgmt,
-    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
-    sum(gmv) as gmv36,
-    sum(gmv) as gmv35
-    from ih_daily_fact fact
-    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
-) a
-group by a.slr_sgmt
-{% endhighlight %}
-
-#### NPM errors when building Kylin (users in mainland China, please pay special attention to this issue)
-
-  * Please add proxy for your NPM:  
-  `npm config set proxy http://YOUR_PROXY_IP`
-
-  * Please update your local NPM repository to use a mirror of npmjs.org, like the Taobao NPM mirror:  
-  [http://npm.taobao.org](http://npm.taobao.org)
-
-#### Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
-  * You may get this error when running the HBase client for the first time. Please check the error trace to see whether there is an error saying a folder like "/hadoop/hbase/local/jars" couldn't be accessed; if that folder doesn't exist, create it.
-
-#### The Kylin JDBC driver returns a different Date/time than the REST API; it seems to apply the timezone when parsing the date.
-  * Please check the [post in mailing list](http://apache-kylin.74782.x6.nabble.com/JDBC-query-result-Date-column-get-wrong-value-td5370.html)
-
-
-#### How to update the default password for 'ADMIN'?
-  * By default, Kylin uses a simple, configuration-based user registry; the default administrator 'ADMIN' with password 'KYLIN' is hard-coded in `kylinSecurity.xml`. To modify the password, you first need to get the new password's encrypted value (with BCrypt), and then set it in `kylinSecurity.xml`. Here is a sample with the password 'ABCDE':
-  
-{% highlight Groff markup %}
-
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-
-java -classpath kylin-server-base-2.3.0.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:spring-security-core-4.2.3.RELEASE.jar:commons-codec-1.7.jar:commons-logging-1.1.3.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer BCrypt ABCDE
-
-BCrypt encrypted password is:
-$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu
-
-{% endhighlight %}
-
-  * Then you can set it into `kylinSecurity.xml`
-
-{% highlight Groff markup %}
-
-vi ./tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-
-{% endhighlight %}
-
-  * Replace the origin encrypted password with the new one: 
-{% highlight Groff markup %}
-
-        <bean class="org.springframework.security.core.userdetails.User" id="adminUser">
-            <constructor-arg value="ADMIN"/>
-            <constructor-arg
-                    value="$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu"/>
-            <constructor-arg ref="adminAuthorities"/>
-        </bean>
-        
-{% endhighlight %}
-
-  * Restart Kylin for the change to take effect. If you have multiple Kylin servers as a cluster, do the same on each instance. 
-
-#### What kind of data is left in 'kylin.env.hdfs-working-dir'? We often execute the kylin cleanup storage command, but now our working dir folder is about 300 GB in size; can we delete old data manually?
-
-  * The data in 'hdfs-working-dir' ('hdfs:///kylin/kylin_metadata/' by default) includes intermediate files (which will be GCed) and Cuboid data (which won't be GCed). The Cuboid data is kept for further segment merges, as Kylin couldn't merge from HBase. If you're sure those segments won't be merged, you can move them to other paths or even delete them.
-
-  * Please pay attention to the "resources" or "jdbc-resources" sub-folders under '/kylin/kylin_metadata/', which persist big metadata files like dictionaries and lookup tables' snapshots. They shouldn't be manually moved or deleted.
-
-#### How to escape keywords in fuzzy match (like) queries?
-"%" and "_" are keywords in the "like" clause; "%" matches any characters, and "_" matches a single character. When you want to match a keyword such as "_", you need to escape it with another character ahead of it. Below is a sample using "/" to escape; the query is to match "xiao_":
-"select username from gg_user where username like '%xiao/_%' escape '/'; "  
\ No newline at end of file
diff --git a/website/_docs30/gettingstarted/terminology.md b/website/_docs30/gettingstarted/terminology.md
deleted file mode 100644
index aa6b0f1..0000000
--- a/website/_docs30/gettingstarted/terminology.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: docs30
-title:  "Terminology"
-categories: gettingstarted
-permalink: /docs30/gettingstarted/terminology.html
-since: v0.5.x
----
- 
-
-Here are some domain terms we use in Apache Kylin, listed for your reference.   
-They cover the basics of Apache Kylin and will also help you understand related concepts, terms and theory of Data Warehousing and Business Intelligence for analytics. 
-
-* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
-* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
-* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
-* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
-* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
-* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
-* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
-* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
-* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
-* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
-
-
-
diff --git a/website/_docs30/howto/howto_backup_metadata.cn.md b/website/_docs30/howto/howto_backup_metadata.cn.md
deleted file mode 100644
index b562e03..0000000
--- a/website/_docs30/howto/howto_backup_metadata.cn.md
+++ /dev/null
@@ -1,131 +0,0 @@
----
-layout: docs30-cn
-title:  Backup Metadata
-categories: howto
-permalink: /cn/docs30/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it rather than a normal file system. If you check the Kylin configuration file (kylin.properties), you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved in an HTable called "kylin_metadata". You can scan this HTable in hbase shell to check it out.
-
-## Metadata directory
-
-Kylin uses `resource root path + resource name + resource suffix` as the key (the rowkey in HBase) to store metadata. You can refer to the following table when using the `./bin/metastore.sh` command.
- 
-| Resource root path  | resource name         | resource suffix
-| --------------------| :---------------------| :--------------|
-| /cube               | /cube name            | .json |
-| /cube_desc          | /cube name            | .json |
-| /cube_statistics    | /cube name/uuid       | .seq |
-| /model_desc         | /model name           | .json |
-| /dict               | /DATABASE.TABLE/COLUMN/uuid | .dict |
-| /project            | /project name         | .json |
-| /table_snapshot     | /DATABASE.TABLE/uuid  | .snapshot |
-| /table              | /DATABASE.TABLE--project name | .json |
-| /table_exd          | /DATABASE.TABLE--project name | .json |
-| /execute            | /job id               |  |
-| /execute_output     | /job id-step index    |  |
-| /kafka              | /DATABASE.TABLE       | .json |
-| /streaming          | /DATABASE.TABLE       | .json |
-| /user               | /user name            |  |
-
-## View metadata
-
-Kylin stores metadata in binary format in HBase. If you want to view some metadata, you can run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh list /path/to/store/metadata
-{% endhighlight %}
-
-to list all entity metadata stored under the specified path, and then run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh cat /path/to/store/entity/metadata.
-{% endhighlight %}
-
-to view the metadata of a single entity.
-
-## Backup metadata with the binary package
-
-Sometimes you need to back up Kylin's metadata store from HBase to your disk file system. In such a case, assuming you're on the Hadoop CLI (or sandbox) where Kylin is deployed, go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second.
-
-In addition, you can run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh fetch /path/to/store/metadata
-{% endhighlight %}
-
-to dump metadata selectively. For example, run `./bin/metastore.sh fetch /cube_desc/` to get all cube desc metadata, or run `./bin/metastore.sh fetch /cube_desc/kylin_sales_cube.json` to get a single cube desc.
-
-## Restore metadata with the binary package
-
-In case you find your metadata store messed up and want to restore a previous backup:
-
-Firstly, reset the metadata store (this cleans everything in Kylin's metadata store in HBase, so make sure you have a backup first):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Restore metadata selectively (recommended)
-If only a couple of metadata files have changed, the administrator can pick just those files to restore instead of overwriting all the metadata. Compared with a full restore, this approach is more efficient and safer, so it is recommended.
-
-Create a new empty directory, then create subdirectories in it according to the locations of the metadata files to restore; for example, to restore a cube instance, you should create a "cube" subdirectory:
-
-{% highlight Groff markup %}
-mkdir /path/to/restore_new
-mkdir /path/to/restore_new/cube
-{% endhighlight %}
-
-Copy the metadata files to be restored into this new directory:
-
-{% highlight Groff markup %}
-cp meta_backups/meta_2016_06_10_20_24_50/cube/kylin_sales_cube.json /path/to/restore_new/cube/
-{% endhighlight %}
-
-At this point, you can modify/fix the metadata manually.
-
-Restore from this directory:
-
-{% highlight Groff markup %}
-cd $KYLIN_HOME
-./bin/metastore.sh restore /path/to/restore_new
-{% endhighlight %}
-
-Only the files in this folder will be uploaded to the Kylin metastore. Similarly, after the restore is finished, click the "Reload Metadata" button on the Web UI to refresh the cache.
-
-## Backup/restore metadata in a development environment
-
-When developing/debugging Kylin, a typical environment is a dev machine with an IDE plus a backend sandbox. Usually you write code and run test cases on the dev machine, and it is troublesome to always put a binary package into the sandbox just to check the metadata. A helper class called SandboxMetastoreCLI can help you download/upload metadata locally on your dev machine.
-
-## Cleanup unused resources from the metadata store
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find and clean them up from the metadata store:
-
-Firstly, run a check; this is safe as it doesn't change anything. With the "--jobThreshold 30" option (30 is the default; you can change it to any number) you can set how many days of metadata resources to keep:
-{% highlight Groff markup %}
-./bin/metastore.sh clean --jobThreshold 30
-{% endhighlight %}
-
-The resources that would be dropped will be listed.
-
-Next, add the "--delete true" option to clean up those resources; before this, make sure you have backed up the metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true --jobThreshold 30
-{% endhighlight %}
diff --git a/website/_docs30/howto/howto_backup_metadata.md b/website/_docs30/howto/howto_backup_metadata.md
deleted file mode 100644
index 2b364ac..0000000
--- a/website/_docs30/howto/howto_backup_metadata.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-layout: docs30
-title:  Backup Metadata
-categories: howto
-permalink: /docs30/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties) you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved in an HTable called `kylin_metadata`. You can scan the HTable in hbase shell to check it out.
-
-## Metadata directory
-
-The Kylin metastore uses `resource root path + resource name + resource suffix` as the key (the rowkey in HBase) to store metadata. You can refer to the following table when using `./bin/metastore.sh`.
- 
-| Resource root path  | resource name         | resource suffix
-| --------------------| :---------------------| :--------------|
-| /cube               | /cube name            | .json |
-| /cube_desc          | /cube name            | .json |
-| /cube_statistics    | /cube name/uuid       | .seq |
-| /model_desc         | /model name           | .json |
-| /dict               | /DATABASE.TABLE/COLUMN/uuid | .dict |
-| /project            | /project name         | .json |
-| /table_snapshot     | /DATABASE.TABLE/uuid  | .snapshot |
-| /table              | /DATABASE.TABLE--project name | .json |
-| /table_exd          | /DATABASE.TABLE--project name | .json |
-| /execute            | /job id               |  |
-| /execute_output     | /job id-step index    |  |
-| /kafka              | /DATABASE.TABLE       | .json |
-| /streaming          | /DATABASE.TABLE       | .json |
-| /user               | /user name            |  |
-
-## View metadata
-
-Kylin stores metadata in binary format in HBase. If you want to view some metadata, you can run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh list /path/to/store/metadata
-{% endhighlight %}
-
-to list the entities stored under the specified directory, and then run: 
-
-{% highlight Groff markup %}
-./bin/metastore.sh cat /path/to/store/entity/metadata.
-{% endhighlight %}
-
-to view one entity's metadata.
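-
-For example, a quick sanity check (a sketch, assuming the bundled sample cube `kylin_sales_cube` has been created) might look like:
-
-{% highlight Groff markup %}
-# list all cube instances, then print one of them
-./bin/metastore.sh list /cube
-./bin/metastore.sh cat /cube/kylin_sales_cube.json
-{% endhighlight %}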
-
-## Backup metadata with binary package
-
-Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
-In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
-
-In addition, you can run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh fetch /path/to/store/metadata
-{% endhighlight %}
-
-to dump metadata selectively. For example, run `./bin/metastore.sh fetch /cube_desc/` to get all cube desc metadata, or run `./bin/metastore.sh fetch /cube_desc/kylin_sales_cube.json` to get a single cube desc.
-
-## Restore metadata with binary package
-
-In case you find your metadata store messed up and you want to restore a previous backup:
-
-Firstly, reset the metadata store (this will clean everything in the Kylin metadata store in HBase, so make sure to back up first):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Restore metadata selectively (Recommended)
-If only a couple of metadata files have changed, the administrator can pick just these files to restore, without having to overwrite all the metadata. Compared to a full restore, this approach is more efficient and safer, so it is recommended.
-
-Create a new empty directory, and then create subdirectories in it according to the locations of the metadata files to restore; for example, to restore a cube instance, you should create a "cube" subdirectory:
-
-{% highlight Groff markup %}
-mkdir /path/to/restore_new
-mkdir /path/to/restore_new/cube
-{% endhighlight %}
-
-Copy the metadata file to be restored to this new directory:
-
-{% highlight Groff markup %}
-cp meta_backups/meta_2016_06_10_20_24_50/cube/kylin_sales_cube.json /path/to/restore_new/cube/
-{% endhighlight %}
-
-At this point, you can modify/fix the metadata manually.
-
-Restore from this directory:
-
-{% highlight Groff markup %}
-cd $KYLIN_HOME
-./bin/metastore.sh restore /path/to/restore_new
-{% endhighlight %}
-
-Only the files in the folder will be uploaded to the Kylin metastore. Similarly, after the recovery is finished, click the "Reload Metadata" button on the Web UI to flush the cache.
-
-## Backup/restore metadata in development env 
-
-When developing/debugging Kylin, typically you have a dev machine with an IDE and a backend sandbox. Usually you'll write code and run test cases on the dev machine. It would be troublesome if you always had to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow its usage information and run it in your IDE.
-
-## Cleanup unused resources from metadata store
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take space; you can run a command to find and clean them up from the metadata store:
-
-Firstly, run a check; this is safe as it will not change anything. You can set the number of days of metadata resources to keep by adding the "--jobThreshold 30" option (30 is the default; you can change it to any number):
-{% highlight Groff markup %}
-./bin/metastore.sh clean --jobThreshold 30
-{% endhighlight %}
-
-The resources that would be dropped will be listed.
-
-Next, add the "--delete true" parameter to clean up those resources; before this, make sure you have made a backup of the metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true --jobThreshold 30
-{% endhighlight %}
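-
-If you want the check to run regularly, one option (a sketch; adjust the install path and schedule to your environment) is a cron entry that only lists the candidates, leaving the actual deletion as a manual, post-backup step:
-{% highlight Groff markup %}
-# every Monday at 02:00, log the cleanup candidates
-0 2 * * 1 /usr/local/kylin/bin/metastore.sh clean --jobThreshold 30 >> /tmp/kylin_meta_clean.log 2>&1
-{% endhighlight %}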
diff --git a/website/_docs30/howto/howto_build_cube_with_restapi.cn.md b/website/_docs30/howto/howto_build_cube_with_restapi.cn.md
deleted file mode 100644
index b7ff7ff..0000000
--- a/website/_docs30/howto/howto_build_cube_with_restapi.cn.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs30-cn
-title:  Build Cube with API
-categories: howto
-permalink: /cn/docs30/howto/howto_build_cube_with_restapi.html
----
-
-### 1. Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add an `Authorization` header to the first request for authentication.
-*   Or you can make a specific request: `POST http://localhost:7070/kylin/api/user/authentication`.
-*   Once authenticated, the client can carry the cookie in subsequent requests.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2. Get details of the cube
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   The client can find the cube's segment date ranges in the returned cube details.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 3.	Then submit a build job of the cube
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For details of the PUT request body, please refer to the [Build Cube API](howto_use_restapi.html#build-cube).
-    *   `startTime` and `endTime` should be UTC timestamps.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` builds a new segment, `REFRESH` refreshes an existing segment, and `MERGE` merges multiple existing segments into one bigger segment.
-*   This method returns a newly created job instance, whose uuid is the unique job id used to track the job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
-
-### 4.	Track job status
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   The returned `job_status` represents the current status of the job.
-
-### 5.	If the job got errors, you can resume it
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
diff --git a/website/_docs30/howto/howto_build_cube_with_restapi.md b/website/_docs30/howto/howto_build_cube_with_restapi.md
deleted file mode 100644
index 9e1f7a6..0000000
--- a/website/_docs30/howto/howto_build_cube_with_restapi.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs30
-title:  Build Cube with API
-categories: howto
-permalink: /docs30/howto/howto_build_cube_with_restapi.html
----
-
-### 1.	Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add an `Authorization` header to the first request for authentication.
-*   Or you can make a specific request: `POST http://localhost:7070/kylin/api/user/authentication`
-*   Once authenticated, the client can make subsequent requests with cookies.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-    
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2.	Get details of the cube. 
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   The client can find the cube segment date ranges in the returned cube detail.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 3.	Then submit a build job of the cube. 
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For put request body detail please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
-    *   `startTime` and `endTime` should be UTC timestamps.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` is for merging multiple existing segments into one bigger segment.
-*   This method will return a newly created job instance, whose uuid is the unique id of the job used to track job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
-
-### 4.	Track job status. 
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   Returned `job_status` represents current status of job.
-
-### 5.	If the job got errors, you can resume it. 
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
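-
-Putting the steps together, a minimal end-to-end sketch with curl (assuming a local instance with the default ADMIN/KYLIN account and the example cube above) could be:
-{% highlight Groff markup %}
-# 1. authenticate once and keep the session cookie
-curl -c cookie.txt -X POST -u ADMIN:KYLIN http://localhost:7070/kylin/api/user/authentication
-
-# 2. submit a build for a time range (UTC epoch milliseconds)
-curl -b cookie.txt -X PUT -H "Content-Type: application/json;charset=UTF-8" \
-  -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
-  http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-# 3. poll the job with the uuid returned in step 2
-curl -b cookie.txt http://localhost:7070/kylin/api/jobs/<job_uuid>
-{% endhighlight %}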
diff --git a/website/_docs30/howto/howto_cleanup_storage.cn.md b/website/_docs30/howto/howto_cleanup_storage.cn.md
deleted file mode 100644
index c05ea63..0000000
--- a/website/_docs30/howto/howto_cleanup_storage.cn.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-layout: docs30-cn
-title:  Cleanup Storage
-categories: howto
-permalink: /cn/docs30/howto/howto_cleanup_storage.html
----
-
-Kylin generates intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left in HBase and never be queried again. Although Kylin does some automated garbage collection, it may not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this step doesn't delete anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
-{% endhighlight %}
-2. You can spot check one or two resources to verify they are no longer referenced; then add the "--delete true" option to clean them up.
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-{% endhighlight %}
-On finish, the intermediate Hive tables, intermediate HDFS files and the HTables in HBase will be dropped.
-3. If you want to delete all resources, add the "--force true" option:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --force true --delete true
-{% endhighlight %}
-On finish, all intermediate Hive tables, all intermediate HDFS files and the HTables in HBase will be dropped.
diff --git a/website/_docs30/howto/howto_cleanup_storage.md b/website/_docs30/howto/howto_cleanup_storage.md
deleted file mode 100644
index 68db6d8..0000000
--- a/website/_docs30/howto/howto_cleanup_storage.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-layout: docs30
-title:  Cleanup Storage
-categories: howto
-permalink: /docs30/howto/howto_cleanup_storage.html
----
-
-Kylin generates intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left in HBase and will never be queried again. Although Kylin does some automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this step will not remove anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
-{% endhighlight %}
-2. You can spot check one or two resources to verify they are no longer referenced; then add the "--delete true" option to start the cleanup:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
-{% endhighlight %}
-On finish, the intermediate Hive tables, HDFS files and HTables will be dropped.
-3. If you want to delete all resources, add the "--force true" option to start the cleanup:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --force true --delete true
-{% endhighlight %}
-On finish, all the intermediate Hive tables, HDFS files and HTables will be dropped.
diff --git a/website/_docs30/howto/howto_enable_zookeeper_acl.md b/website/_docs30/howto/howto_enable_zookeeper_acl.md
deleted file mode 100644
index 8aece8b..0000000
--- a/website/_docs30/howto/howto_enable_zookeeper_acl.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-layout: docs30
-title:  Enable Zookeeper ACL
-categories: howto
-permalink: /docs30/howto/howto_enable_zookeeper_acl.html
----
-
-Edit $KYLIN_HOME/conf/kylin.properties to add the following configuration items:
-
-* Add "kylin.env.zookeeper.zk-auth". This is the configuration item where you specify the ZooKeeper authentication information. Its format is "scheme:id". The scheme values ZooKeeper supports are "world", "auth", "digest", "ip" and "super". The "id" is the authentication information for that scheme. For example:
-
-    `kylin.env.zookeeper.zk-auth=digest:ADMIN:KYLIN`
-
-    The scheme equals to "digest". The id equals to "ADMIN:KYLIN", which expresses the "username:password".
-
-* Add "kylin.env.zookeeper.zk-acl". It is the configuration item you can set access permission. Its formats is "scheme:id:permissions". The value of permissions that the zookeeper supports is "READ", "WRITE", "CREATE", "DELETE" or "ADMIN". For example, we configure that everyone has all the permissions:
-
-    `kylin.env.zookeeper.zk-acl=world:anyone:rwcda`
-
-    The scheme equals to "world". The id equals to "anyone" and the permissions equals to "rwcda".
diff --git a/website/_docs30/howto/howto_install_ranger_kylin_plugin.md b/website/_docs30/howto/howto_install_ranger_kylin_plugin.md
deleted file mode 100644
index 3405282..0000000
--- a/website/_docs30/howto/howto_install_ranger_kylin_plugin.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-layout: docs30
-title:  Install Ranger Plugin
-categories: howto
-permalink: /docs30/howto/howto_install_ranger_kylin_plugin.html
----
-
-Please refer to [https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin).
diff --git a/website/_docs30/howto/howto_jdbc.cn.md b/website/_docs30/howto/howto_jdbc.cn.md
deleted file mode 100644
index 51602c5..0000000
--- a/website/_docs30/howto/howto_jdbc.cn.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs30-cn
-title:  Kylin JDBC Driver
-categories: howto
-permalink: /cn/docs30/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Based on the Apache Kylin authentication RESTful service. Supported parameters:
-* user : username
-* password : password
-* ssl : true or false. Default is false; if true, all service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" is true, "port" should be the Kylin server's HTTPS port.
-* If "port" is not specified, the driver uses the default ports: HTTP 80, HTTPS 443.
-* "kylin_project_name" must be specified, and the user needs to ensure it exists on the Kylin server.
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported PreparedStatement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata listing methods:
-List catalogs, schemas, tables and columns with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
diff --git a/website/_docs30/howto/howto_jdbc.md b/website/_docs30/howto/howto_jdbc.md
deleted file mode 100644
index 8734c31..0000000
--- a/website/_docs30/howto/howto_jdbc.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs30
-title:  JDBC Driver
-categories: howto
-permalink: /docs30/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Based on the Apache Kylin authentication RESTful service. Supported parameters:
-* user : username 
-* password : password
-* ssl: true/false. Default is false; if true, all the service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
-* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
-* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported prepared statement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata listing methods:
-List catalogs, schemas, tables and columns with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
diff --git a/website/_docs30/howto/howto_ldap_and_sso.md b/website/_docs30/howto/howto_ldap_and_sso.md
deleted file mode 100644
index 8f2efcb..0000000
--- a/website/_docs30/howto/howto_ldap_and_sso.md
+++ /dev/null
@@ -1,130 +0,0 @@
----
-layout: docs30
-title: Secure with LDAP and SSO
-categories: howto
-permalink: /docs30/howto/howto_ldap_and_sso.html
----
-
-## Enable LDAP authentication
-
-Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, like the LDAP server URL, username/password, and search patterns.
-
-#### Configure LDAP server info
-
-Firstly, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value:
-
-```
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-```
-
-Configure them in conf/kylin.properties:
-
-```
-kylin.security.ldap.connection-server=ldap://<your_ldap_host>:<port>
-kylin.security.ldap.connection-username=<your_user_name>
-kylin.security.ldap.connection-password=<your_password_encrypted>
-```
-
-Secondly, provide the user search patterns; these depend on your LDAP design. Here is just a sample:
-
-```
-kylin.security.ldap.user-search-base=OU=UserAccounts,DC=mycompany,DC=com
-kylin.security.ldap.user-search-pattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
-kylin.security.ldap.user-group-search-base=OU=Group,DC=mycompany,DC=com
-```
-
-If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in kylin.security.ldap.service-.*; otherwise, leave them empty.
-
-#### Configure the administrator group
-
-To map an LDAP group to the admin group in Kylin, set "kylin.security.acl.admin-role" to the LDAP group name (keep the original case); the users in this group will be global admins in Kylin.
-
-For example, in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators; here you need to set it as:
-
-```
-kylin.security.acl.admin-role=KYLIN-ADMIN-GROUP
-```
-
-
-*Attention: when upgrading from a version earlier than Kylin 2.3 to 2.3 or later, please remove the "ROLE_" prefix from this setting (it was only required before 2.3) and keep the group name in its original case. The kylin.security.acl.default-role property is deprecated.*
-
-#### Enable LDAP
-
-Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
-
-## Enable SSO authentication
-
-From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it, as the SSO server may only do authentication; Kylin needs to search LDAP to get the user's detailed information.
-
-### Generate IDP metadata xml
-Contact your IDP (identity provider) and ask it to generate the SSO metadata file; usually you need to provide three pieces of info:
-
-  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
-  3. Public certificate of the Kylin server; the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
-
-Assume kylin.crt is the public certificate file and kylin.key is the private key file; firstly create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-This puts the keys into "samlKeystore.jks" with the alias "kylin".
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher-level crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore file need to be deployed in the Kylin web app's classpath at $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes:
-
-  1. Name the IDP file sso_metadata.xml and copy it to Kylin's classpath;
-  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
-  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note, Kylin assumes there is an "email" attribute in the SAML message representing the login user, and the name before @ will be used to search LDAP. 
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
-
diff --git a/website/_docs30/howto/howto_optimize_build.cn.md b/website/_docs30/howto/howto_optimize_build.cn.md
deleted file mode 100644
index b027a0e..0000000
--- a/website/_docs30/howto/howto_optimize_build.cn.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-layout: docs30-cn
-title:  Optimize Cube Build
-categories: howto
-permalink: /cn/docs30/howto/howto_optimize_build.html
----
-
-Kylin decomposes a cube build task into several steps that execute in sequence, including Hive operations, MapReduce jobs and other types of jobs. If you have many cube build jobs to run every day, you surely want to reduce the time spent in them. Below are some practices, organized in the order of the build steps.
-
-## Create Intermediate Flat Hive Table
-
-This step extracts data from the source Hive tables (with all joined tables) and inserts it into an intermediate flat table. If the cube is partitioned, Kylin adds a time condition so that only data within the range is fetched. You can check the related Hive command in this step's log, e.g.:
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-```
-
-While the Hive command runs, Kylin applies the configuration in `conf/kylin_hive_conf.xml`, for instance, using fewer replications and enabling Hive's mapper side join. If needed, you can add other configurations suited to your cluster.
-
-If the cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, filtering on it lets Hive smartly skip the non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the cube's partition column. This is almost required for very large tables; otherwise Hive has to scan all files each time in this step, costing terribly long time.
-
-If your Hive enables file merge, you can disable it in `conf/kylin_hive_conf.xml`, as Kylin has its own way to merge files (in the next step):
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-## Redistribute intermediate table
-
-After the previous step, Hive generates the data files in an HDFS folder: some files are large, while some are small or even empty. This imbalanced file distribution would make subsequent MR jobs imbalanced as well: some mappers finish quickly while others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-The command to redistribute the table:
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-```
-
-Firstly, Kylin gets the row count of this intermediate table; then, based on the row count, it calculates the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this example, there are 160 million rows and 160 reducers, and each reducer writes one file. In the subsequent MR steps over this table, Hadoop will start as many mappers as there are files to process (usually 1 million rows are smaller than an HDFS block). If your daily data scale isn't that large, or your Hadoop cluster has enough resources, you may want more concurrency; then set `kylin.job.mapreduce.mapper.input.rows` in `conf/kylin.properties` to a smaller value, e.g.:
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-Secondly, Kylin runs an *"INSERT OVERWRITE TABLE ... DISTRIBUTE BY ..."* HiveQL to distribute the rows among the specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distribute the rows among the reducers, yielding files of very similar size; the distribute clause is "DISTRIBUTE BY RAND()".
-
-If your cube has specified a high-cardinality column, such as "USER_ID", as the "shard by" dimension (in the cube's "Advanced Setting" page), Kylin asks Hive to redistribute the data by that column's value, so rows with the same value in that column go into the same file. This is much better than random distribution, because the data is not only redistributed but also pre-categorized at no additional cost, which benefits the subsequent cube build. In typical scenarios, this optimization can cut the build time by 40%. In this case, the distribute clause is "DISTRIBUTE BY USER_ID".
-
-Please note: 1) The "shard by" column should be a high-cardinality dimension column, and it should appear in many cuboids (not just a few). Using it to distribute properly gives an even distribution in every time range; otherwise it causes data skew, which reduces the build speed. Typical good candidates are "USER_ID", "SELLER_ID", "PRODUCT", "CELL_NUMBER" and so on, whose cardinality should be more than one thousand (far more than the number of reducers). 2) The "shard by" column also benefits the cube's storage, but that is beyond the scope of this document.
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs an MR job to extract the distinct values of the dimension columns that use dictionary encoding.
-
-Actually this step does more: it collects cube statistics using HyperLogLog counters to estimate each cuboid's row count. If you find the mappers working incredibly slowly, it usually indicates the cube design is too complex; please refer to
-[optimize cube design](howto_optimize_cubes.html) to make the cube thinner. If the reducers get OutOfMemory errors, it indicates the cuboid combinations really explode, or the default YARN memory allocation cannot meet the demand. If this step cannot finish in a reasonable time by any measure, you can give up the job and consider redesigning the cube, as continuing will cost even more time.
-
-You can accelerate this step by reducing the sampling percentage (kylin.job.cubing.inmem.sampling.percent), but it may not help much and it affects the accuracy of the cube statistics, so we don't recommend it.
-
-## Build Dimension Dictionary
-
-With the distinct values extracted in the previous step, Kylin builds dictionaries in memory. Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For such UHC columns, please use another encoding, such as "fixed_length", "integer" and so on.
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-## Build Base Cuboid
-
-This step builds the base cuboid from the intermediate Hive table; it is the first round of MR in the "by-layer" cubing algorithm. The mapper number equals the reducer number of step 2; the reducer number is estimated from the cube statistics: by default, one reducer per 500MB of output. If you observe that the reducer number is small, you can set `kylin.job.mapreduce.default.reduce.input.mb` in kylin.properties to a smaller value to get more resources, e.g.:
-
-`kylin.job.mapreduce.default.reduce.input.mb=200`
-
-## Build N-Dimension Cuboid
-
-These steps are the "by-layer" cubing process; each step takes the previous step's output as input, then removes one dimension to aggregate into a child cuboid. For example, from cuboid ABCD, removing A gets BCD and removing B gets ACD.
-
-Some cuboids can be aggregated from more than one parent cuboid; in that case, Kylin selects the smallest parent. For example, AB can be generated from ABC (id: 1110) or ABD (id: 1101); ABD is selected because its id is smaller than ABC's. Based on this, if D's cardinality is small, the aggregation is cost-efficient. So when designing the cube's rowkey sequence, remember to put low-cardinality dimensions at the tail. This not only benefits the cube build, but also the cube query, since post-aggregation follows the same rule.
-
-Usually building from the N-D level to the (N/2)-D level is slow, because it is the cuboid explosion process: the N-D level has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1) cuboids, and so on. After the (N/2)-D step, the build gradually gets faster.
-
-## Build Cube
-
-This step builds the cube with a new algorithm: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but needs more memory than usual. The configuration file `conf/kylin_job_conf_inmem.xml` is made for this step. By default it requests 3GB of memory for each mapper. If your cluster has enough memory, you can allocate more in that file, so the mapper will use as much memory as possible to cache data and gain better performance, e.g.:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note: Kylin automatically selects the best algorithm based on the data distribution (obtained from the cube statistics); the steps of the algorithm that is not selected are skipped. You don't need to select the algorithm explicitly.
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region count from the cube statistics, by default one region per 5GB. The more regions, the more reducers are used. If you observe that the reducer number is small and performance is poor, you can set the following parameters in `conf/kylin.properties` to smaller values, e.g.:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you're not sure how large a region should be, contact your HBase administrator.
-
-## Load HFile to HBase Table
-
-This step uses the HBase API to load the HFiles to the region servers; it is lightweight and fast.
-
-## Update Cube Info
-
-After loading data into HBase, Kylin marks the corresponding cube segment as ready in the metadata.
-
-## Cleanup
-
-Drop the intermediate wide table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step hits an error, don't worry; the garbage can be collected later when Kylin runs the [StorageCleanupJob](howto_cleanup_storage.html).
-
-## Summary
-There are many other ways to improve Kylin's performance; if you have practices to share, you're welcome to discuss them at [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
diff --git a/website/_docs30/howto/howto_optimize_build.md b/website/_docs30/howto/howto_optimize_build.md
deleted file mode 100644
index 3029033..0000000
--- a/website/_docs30/howto/howto_optimize_build.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-layout: docs30
-title:  Optimize Cube Build
-categories: howto
-permalink: /docs30/howto/howto_optimize_build.html
----
-
-Kylin decomposes a Cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types of jobs. When you have many Cubes to build daily, you definitely want to speed up this process. Here are some practices that you probably want to know, organized in the same order as the steps.
-
-
-
-## Create Intermediate Flat Hive Table
-
-This step extracts data from source Hive tables (with all tables joined) and inserts them into an intermediate flat table. If Cube is partitioned, Kylin will add a time condition so that only the data in the range would be fetched. You can check the related Hive command in the log of this step, e.g: 
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-
-```
-
-Kylin applies the configuration in conf/kylin\_hive\_conf.xml while the Hive commands are running, for instance, using fewer replications and enabling Hive's mapper side join. If needed, you can add other configurations which are good for your cluster.
-
-If the Cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, then filtering on it will let Hive smartly skip the non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the Cube's partition column. This is almost required for very large tables; otherwise Hive has to scan all files each time in this step, costing terribly long time.
-
-If your Hive enables file merge, you can disable it in "conf/kylin\_hive\_conf.xml", as Kylin has its own way to merge files (in the next step): 
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-
-## Redistribute intermediate table
-
-After the previous step, Hive generates the data files in an HDFS folder: some files are large, while some are small or even empty. The imbalanced file distribution would make subsequent MR jobs imbalanced as well: some mappers finish quickly yet some others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-
-The command to redistribute the table:
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-
-```
-
-
-
-Firstly, Kylin gets the row count of this intermediate table; then, based on the row count, it calculates the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this sample, there are 160 million rows and 160 reducers, and each reducer writes 1 file. In the following MR steps over this table, Hadoop will start as many mappers as there are files to process (usually 1 million rows' data is smaller than an HDFS block size). If your daily data scale isn't that large, or your Hadoop cluster has enough resources, you may want more concurrency; then you can set `kylin.job.mapreduce.mapper.input.rows` in conf/kylin.properties to a smaller value, e.g.:
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-
-Secondly, Kylin runs a *"INSERT OVERWIRTE TABLE .... DISTRIBUTE BY "* HiveQL to distribute the rows among a specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distributes the rows among reducers, then get files very closed in size. The distribute clause is "DISTRIBUTE BY RAND()".
-
-If your Cube has specified a "shard by" dimension (in Cube's "Advanced setting" page), which is a high cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute data by that column's value. Then for the rows that have the same value as this column has, they will go to the same file. This is much better than "by random",  because the data will be not only redistributed but also pre-categorized without additional cost, thus benefiting the subsequent Cube build process. Unde [...]
-
-**Please note:** 1) The "shard by" column should be a high cardinality dimension column, and it appears in many cuboids (not just appears in seldom cuboids). Utilize it to distribute properly can get equidistribution in every time range; otherwise it will cause data incline, which will reduce the building speed. Typical good cases are: "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER", so forth, whose cardinality is higher than one thousand (should be much more than the reducer numbers [...]
-
-
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs an MR job to fetch the distinct values of the dimensions that use dictionary encoding. 
-
-Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each Cuboid. If you find that the mappers work incredibly slowly, it usually indicates that the Cube design is too complex; please check [optimize cube design](howto_optimize_cubes.html) to make the Cube thinner. If the reducers get OutOfMemory errors, it indicates that the Cuboid combinations do explode, or the default YARN memory allocation cannot meet demands. If this step cannot finish in a reasonable time by any means, you can give up the job and consider redesigning the Cube, as continuing will cost even more time.
-
-You can reduce the sampling percentage (kylin.job.cubing.inmem.sampling.percent in kylin.properties) to get this step accelerated, but this may not help much and it impacts the accuracy of the Cube statistics, thus we don't recommend it.  
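-
-If you do decide to trade statistics accuracy for speed here, the knob (a sketch; 100 is the assumed default, meaning full sampling) looks like this in conf/kylin.properties:
-
-```
-# sample only half of the rows when collecting cube statistics
-kylin.job.cubing.inmem.sampling.percent=50
-```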
-
-
-
-## Build Dimension Dictionary
-
-With the distinct values fetched in the previous step, Kylin will build dictionaries in memory. Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For such a UHC column, please use another encoding method, such as "fixed_length", "integer" and so on.
-
-
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-
-
-## Build Base Cuboid 
-
-This step builds the base cuboid from the intermediate table; it is the first round MR of the "by-layer" cubing algorithm. The mapper number equals the reducer number of step 2; the reducer number is estimated with the cube statistics: by default, 1 reducer is used for every 500MB of output. If you observe that the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g.: `kylin.job.mapreduce.default.reduce.input.mb=200`
-
-
-## Build N-Dimension Cuboid 
-
-These steps are the "by-layer" cubing process, each step uses the output of previous step as the input, and then cut off one dimension to aggregate to get one child cuboid. For example, from cuboid ABCD, cut off A get BCD, cut off B get ACD etc. 
-
-Some cuboid can be aggregated from more than 1 parent cubiods, in this case, Kylin will select the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) and ABD (id: 1101), so ABD will be used as its id is smaller than ABC. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low cardinality dimensions to the tail position. This not only benefit the Cube build, but al [...]
-
-Usually from the N-D to (N/2)-D the building is slow, because it is the cuboid explosion process: N-D has 1 Cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1) cuboids, etc. After (N/2)-D step, the building gets faster gradually.
-
-
-
-## Build Cube
-
-This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but it requests more memory than normal. The "conf/kylin\_job\_conf\_inmem.xml" is made for this step. By default it requests 3GB memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will use as much memory as possible to hold the data and gain better performance, e.g.:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note, Kylin will automatically select the best algorithm based on the data distribution (obtained from the Cube statistics). The steps of the algorithm that is not selected will be skipped. You don't need to select the algorithm explicitly.
-
-
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the Cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number with the Cube statistics, by default 1 region per 5GB. The more regions, the more reducers will be utilized. If you observe that the reducer number is small and performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you're not sure what size a region should be, contact your HBase administrator. 
-
-
-## Load HFile to HBase Table
-
-This step uses the HBase API to load the HFiles to the region servers; it is lightweight and fast.
-
-
-
-## Update Cube Info
-
-After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
-
-
-
-## Cleanup
-
-Drop the intermediate table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step gets an error, no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](howto_cleanup_storage.html).
-
-
-## Summary
-There are also many other methods to boost the performance. If you have practices to share, you're welcome to discuss them at [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
diff --git a/website/_docs30/howto/howto_optimize_cubes.cn.md b/website/_docs30/howto/howto_optimize_cubes.cn.md
deleted file mode 100644
index ea3800a..0000000
--- a/website/_docs30/howto/howto_optimize_cubes.cn.md
+++ /dev/null
@@ -1,212 +0,0 @@
----
-layout: docs30-cn
-title:  Optimize Cube Design
-categories: howto
-permalink: /cn/docs30/howto/howto_optimize_cubes.html
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when doing drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case.
-
-If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case for scenario A, where the PK on the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-A derived column is used when one or more dimensions (they must be dimensions on a lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
-
-For example, suppose we have a lookup table joined with the fact table by "where DimA = DimX". Note that in Kylin, if you choose the FK as a dimension, the corresponding PK becomes queryable automatically, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace it with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB and DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
-At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB is expected to answer the query. However, DimB appears in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first; we'll get an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both sets of values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs30/howto/howto_optimize_cubes.md b/website/_docs30/howto/howto_optimize_cubes.md
deleted file mode 100644
index 9bcd47a..0000000
--- a/website/_docs30/howto/howto_optimize_cubes.md
+++ /dev/null
@@ -1,212 +0,0 @@
----
-layout: docs30
-title:  Optimize Cube Design
-categories: howto
-permalink: /docs30/howto/howto_optimize_cubes.html
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case.
-
-If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case of scenario A, where the PK on the lookup table is accidentally part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK, which is called the "host column").
-
-For example, suppose we have a lookup table that is joined with the fact table by "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK automatically becomes queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB and DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA(the dimension representing FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
-At runtime, a query like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB" expects a cuboid containing DimB to answer it. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to group by DimA (its host column) first, which yields an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs30/howto/howto_update_coprocessor.md b/website/_docs30/howto/howto_update_coprocessor.md
deleted file mode 100644
index 9121fec..0000000
--- a/website/_docs30/howto/howto_update_coprocessor.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs30
-title:  Update Coprocessor
-categories: howto
-permalink: /docs30/howto/howto_update_coprocessor.html
----
-
-Kylin leverages an HBase coprocessor to optimize query performance. After a new version is released, the RPC protocol may have changed, so users need to redeploy the coprocessor to the HTables.
-
-There's a CLI tool to update HBase Coprocessor:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI default all
-{% endhighlight %}
diff --git a/website/_docs30/howto/howto_upgrade.md b/website/_docs30/howto/howto_upgrade.md
deleted file mode 100644
index d5cd273..0000000
--- a/website/_docs30/howto/howto_upgrade.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-layout: docs30
-title:  Upgrade From Old Versions
-categories: howto
-permalink: /docs30/howto/howto_upgrade.html
-since: v1.5.1
----
-
-Running as a Hadoop client, Apache Kylin persists its metadata and Cube data in Hadoop (HBase and HDFS), so the upgrade is relatively easy and users do not need to worry about data loss. The upgrade can be performed in the following steps (see the sketch after this list):
-
-* Download the new Apache Kylin binary package for your Hadoop version from Kylin download page.
-* Unpack the new Kylin package into a new folder, e.g., /usr/local/kylin/apache-kylin-2.1.0/ (directly overwriting the old instance is not recommended).
-* Merge the old configuration files (`$KYLIN_HOME/conf/*`) into the new ones. It is not recommended to overwrite the new configuration files, although that works in most cases. If you have modified tomcat configuration ($KYLIN_HOME/tomcat/conf/), do the same for it.
-* Stop the current Kylin instance with `bin/kylin.sh stop`
-* Set the `KYLIN_HOME` env variable to the new installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
-* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`. After it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
-* [Upgrade coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
-* Verify your SQL queries can be performed successfully.
-
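-A minimal sketch of the common steps, assuming the example paths above (adjust versions and paths to your environment):
-
-```
-# stop the old instance using the old KYLIN_HOME
-$KYLIN_HOME/bin/kylin.sh stop
-
-# point KYLIN_HOME at the new installation folder
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.1.0
-
-# start the new instance
-$KYLIN_HOME/bin/kylin.sh start
-```
-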
-Below are version-specific guides:
-
-## Upgrade from 2.4 to 2.5.0
-
-* Kylin 2.5 needs Java 8; please upgrade Java if you're running with Java 7.
-* Kylin metadata is compatible between 2.4 and 2.5. No migration is needed.
-* Spark engine will move more steps from MR to Spark, you may see performance difference for the same cube after the upgrade.
-* The property `kylin.source.jdbc.sqoop-home` needs to be the location of the Sqoop installation, not its "bin" subfolder; please modify it if you're using an RDBMS as the data source.
-* The Cube Planner is enabled by default now; new cubes will be optimized by it on their first build. The system cube and dashboard still need manual enablement.
-
-## Upgrade from v2.1.0 to v2.2.0
-
-Kylin v2.2.0 cube metadata is compatible with v2.1.0, but you need to be aware of the following changes:
-
-* Cube ACL is removed, use Project Level ACL instead. You need to manually configure Project Permissions to migrate your existing Cube Permissions. Please refer to [Project Level ACL](/docs21/tutorial/project_level_acl.html).
-* Update HBase coprocessor. The HBase tables for existing cubes need be updated to the latest coprocessor. Follow [this guide](/docs21/howto/howto_update_coprocessor.html) to update.
-
-
-## Upgrade from v2.0.0 to v2.1.0
-
-Kylin v2.1.0 cube metadata is compatible with v2.0.0, but you need to be aware of the following changes.
-
-1) In previous versions, Kylin used two additional HBase tables, "kylin_metadata_user" and "kylin_metadata_acl", to persist the user and ACL info. From 2.1, Kylin consolidates all the info into one table: "kylin_metadata". This makes backup/restore and maintenance easier. When you start Kylin 2.1.0, it will detect whether migration is needed; if so, it will print the command to do the migration:
-
-```
-ERROR: Legacy ACL metadata detected. Please migrate ACL metadata first. Run command 'bin/kylin.sh org.apache.kylin.tool.AclTableMigrationCLI MIGRATE'.
-```
-
-After the migration finished, you can delete the legacy "kylin_metadata_user" and "kylin_metadata_acl" tables from HBase.
-
-2) From v2.1, Kylin hides the default settings in "conf/kylin.properties"; you only need to uncomment or add the customized properties in it.
-
-3) Spark is upgraded from v1.6.3 to v2.1.1, if you customized Spark configurations in kylin.properties, please upgrade them as well by referring to [Spark documentation](https://spark.apache.org/docs/2.1.0/).
-
-4) If you are running Kylin with two clusters (compute/query separated), you need to copy the big metadata files (which are persisted in HDFS instead of HBase) from the Hadoop cluster to the HBase cluster.
-
-```
-hadoop distcp hdfs://compute-cluster:8020/kylin/kylin_metadata/resources hdfs://query-cluster:8020/kylin/kylin_metadata/resources
-```
-
-
-## Upgrade from v1.6.0 to v2.0.0
-
-Kylin v2.0.0 can read v1.6.0 metadata directly. Please follow the common upgrade steps above.
-
-Configuration names in `kylin.properties` have changed since v2.0.0. While the old property names still work, it is recommended to use the new property names as they follow [the naming convention](/development/coding_naming_convention.html) and are easier to understand. There is [a mapping from the old properties to the new properties](https://github.com/apache/kylin/blob/2.0.x/core-common/src/main/resources/kylin-backward-compatibility.properties).
-
-## Upgrade from v1.5.4 to v1.6.0
-
-Kylin v1.5.4 and v1.6.0 are compatible in metadata. Please follow the common upgrade steps above.
-
-## Upgrade from v1.5.3 to v1.5.4
-Kylin v1.5.3 and v1.5.4 are compatible in metadata. Please follow the common upgrade steps above.
-
-## Upgrade from 1.5.2 to v1.5.3
-Kylin v1.5.3 metadata is compatible with v1.5.2; your cubes don't need to be rebuilt, but as usual, some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update.
-
-#### 2. Update conf/kylin_hive_conf.xml
-From 1.5.3, Kylin doesn't need Hive to merge small files anymore. For users who copied conf/ from a previous version, please remove the "merge" related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
-
-
-## Upgrade from 1.5.1 to v1.5.2
-Kylin v1.5.2 metadata is compatible with v1.5.1; your cubes don't need to be upgraded, but some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update.
-
-#### 2. Update conf/kylin.properties
-In v1.5.2 several properties are deprecated, and several new ones are added:
-
-Deprecated:
-
-* kylin.hbase.region.cut.small=5
-* kylin.hbase.region.cut.medium=10
-* kylin.hbase.region.cut.large=50
-
-New:
-
-* kylin.hbase.region.cut=5
-* kylin.hbase.hfile.size.gb=2
-
-These new parameters determine how to split HBase regions; to use different sizes you can overwrite these params at the Cube level (see the sketch below).
-
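-For example, a hypothetical Cube-level overwrite (set via "Configuration Overwrites" in the Cube designer; the values are illustrative):
-
-```
-kylin.hbase.region.cut=10
-kylin.hbase.hfile.size.gb=5
-```
-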
-When copying from an old kylin.properties file, it is suggested to remove the deprecated ones and add the new ones.
-
-#### 3. Add conf/kylin\_job\_conf\_inmem.xml
-A new job conf file named "kylin\_job\_conf\_inmem.xml" is added in the "conf" folder. Kylin 1.5 introduced the "fast cubing" algorithm, which aims to leverage more memory for in-memory aggregation; Kylin will use this new conf file for submitting the in-memory cube build job, which requests different memory than a normal job. Please update it properly according to your cluster capacity.
-
-Besides, if you have used separate config files for cubes of different capacities, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different YARN job queue), you can customize at the cube level; check [KYLIN-1706](https://issues.apache.org/jira [...]
-
diff --git a/website/_docs30/howto/howto_use_beeline.md b/website/_docs30/howto/howto_use_beeline.md
deleted file mode 100644
index 4ff1d93..0000000
--- a/website/_docs30/howto/howto_use_beeline.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs30
-title:  Use Beeline for Hive
-categories: howto
-permalink: /docs30/howto/howto_use_beeline.html
----
-
-Beeline (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors to replace the Hive CLI. By default Kylin uses the Hive CLI to synchronize Hive tables, create the flat intermediate tables, etc. With simple configuration changes you can make Kylin use Beeline instead.
-
-Edit $KYLIN_HOME/conf/kylin.properties by:
-
-  1. change kylin.hive.client=cli to kylin.hive.client=beeline
-  2. add "kylin.hive.beeline.params", where you can specify Beeline command parameters, such as the username (-n) and JDBC URL (-u). There's a sample kylin.hive.beeline.params included in the default kylin.properties, but it is commented out. You can modify the sample based on your real environment (see the sketch below).
-
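-A minimal sketch, assuming HiveServer2 runs at localhost:10000 and the user is "root" (both are placeholders):
-
-{% highlight Groff markup %}
-kylin.hive.client=beeline
-kylin.hive.beeline.params=-n root -u 'jdbc:hive2://localhost:10000'
-{% endhighlight %}
-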
diff --git a/website/_docs30/howto/howto_use_cli.cn.md b/website/_docs30/howto/howto_use_cli.cn.md
deleted file mode 100644
index 3afbbfa..0000000
--- a/website/_docs30/howto/howto_use_cli.cn.md
+++ /dev/null
@@ -1,154 +0,0 @@
----
-layout: docs30-cn
-title:  "Use Utility CLIs"
-categories: howto
-permalink: /cn/docs30/howto/howto_use_cli.html
----
-Kylin provides some handy utility classes. This document introduces the following classes: KylinConfigCLI.java, CubeMetaExtractor.java, CubeMetaIngester.java, CubeMigrationCLI.java and CubeMigrationCheckCLI.java. Before using these tools, switch to the KYLIN_HOME directory first.
-
-## KylinConfigCLI.java
-
-### Function
-The KylinConfigCLI tool outputs the value of the Kylin property you enter.
-
-### How to use
-Only one parameter may follow the class name: conf_name, the name of the property whose value you want to know.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI <conf_name>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI kylin.server.mode
-{% endhighlight %}
-Result:
-{% highlight Groff markup %}
-all
-{% endhighlight %}
-If you do not know the exact property name, you can use the following command; all properties starting with the given prefix will then be listed:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI <prefix>.
-{% endhighlight %}
-For example:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI kylin.job.
-{% endhighlight %}
-Result:
-{% highlight Groff markup %}
-max-concurrent-jobs=10
-retry=3
-sampling-percentage=100
-{% endhighlight %}
-
-## CubeMetaExtractor.java
-
-### Function
-CubeMetaExtractor.java is used to extract Cube-related information for debugging / distribution purposes.
-
-### How to use
-At least two parameters should follow the class name.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -<conf_name> <conf_value> -destDir <your_dest_dir>
-{% endhighlight %}
-For example:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube querycube -destDir /root/newconfigdir1
-{% endhighlight %}
-Result:
-After the command executes successfully, the cube / project / hybrid you want to extract will be dumped into the destDir you specified.
-
-All supported parameters are listed below:
-
-| Parameter                                             | Description                                                                                         |
-| ----------------------------------------------------- | :-------------------------------------------------------------------------------------------------- |
-| allProjects                                           | Specify realizations in all projects to extract                                                     |
-| compress <compress>                                   | Specify whether to compress the output with zip. Default true.                                      | 
-| cube <cube>                                           | Specify which Cube to extract                                                                       |
-| destDir <destDir>                                     | (Required) Specify the dest dir to save the related information                                     |
-| hybrid <hybrid>                                       | Specify which hybrid to extract                                                                     |
-| includeJobs <includeJobs>                             | Set this to true if want to extract job info/outputs too. Default false                             |
-| includeSegmentDetails <includeSegmentDetails>         | Set this to true if want to extract segment details too, such as dict, tablesnapshot. Default false |
-| includeSegments <includeSegments>                     | Set this to true if want extract the segments info. Default true                                    |
-| onlyOutput <onlyOutput>                               | When include jobs, only extract output of job. Default true                                         |
-| packagetype <packagetype>                             | Specify the package type                                                                            |
-| project <project>                                     | Specify realizations in which project to extract                                                    |
-| submodule <submodule>                                 | Specify whether this is a submodule of other CLI tool. Default false.                               |
-
-## CubeMetaIngester.java
-
-### Function
-CubeMetaIngester.java ingests an extracted cube into another metadata store. Currently it only supports ingesting cubes.
-
-### How to use
-At least two parameters should follow the class name. Please make sure the cube you want to ingest does not already exist in the target project. Note: the zip file must contain only one directory after decompression.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project <target_project> -srcPath <your_src_dir>
-{% endhighlight %}
-For example:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /root/newconfigdir1/cubes.zip
-{% endhighlight %}
-Result:
-After the command executes successfully, the cube you want to ingest will appear in the target project.
-
-All supported parameters are listed below:
-
-| Parameter                         | Description                                                                                                                                                                                        |
-| --------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| forceIngest <forceIngest>         | Skip the target cube, model and table check and ingest by force. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false.                      |
-| overwriteTables <overwriteTables> | If table meta conflicts, overwrite the one in metadata store with the one in srcPath. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false. |
-| project <project>                 | (Required) Specify the target project for the new cubes.                                                                                                                                          |
-| srcPath <srcPath>                 | (Required) Specify the path to the extracted Cube metadata zip file.                                                                                                                               |
-
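-A sketch of a typical extract-then-ingest workflow between two metadata stores (the cube name and paths are illustrative):
-
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube kylin_sales_cube -destDir /tmp/cubes
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /tmp/cubes/cubes.zip
-{% endhighlight %}
-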
-##  CubeMigrationCLI.java
-
-### Function
-CubeMigrationCLI.java is used to migrate cubes, for example, from a testing environment to a production environment. Note that the different environments must share the same Hadoop cluster, including HDFS, HBase and Hive. This CLI does not support data migration across Hadoop clusters.
-
-### How to use
-The first eight parameters are required and their order cannot be changed.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI <srcKylinConfigUri> <dstKylinConfigUri> <cubeName> <projectName> <copyAclOrNot> <purgeOrNot> <overwriteIfExists> <realExecute> <migrateSegmentOrNot>
-{% endhighlight %}
-For example:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI kylin-qa:7070 kylin-prod:7070 kylin_sales_cube learn_kylin true false false true false
-{% endhighlight %}
-After the command executes successfully, please reload the metadata; the migrated cube will appear in the target project.
-
-All supported parameters are listed below:
- If the data model of the cube you want to migrate does not exist in the target environment, the model will be migrated too.
- If you set `overwriteIfExists` to false and the cube already exists in the target environment, a prompt that the cube exists will appear when you run the command.
- If you set `migrateSegmentOrNot` to true, please make sure the HDFS directory of the Kylin metadata exists and the Cube status is READY.
-
-| Parameter           | Description                                                                                |
-| ------------------- | :----------------------------------------------------------------------------------------- |
-| srcKylinConfigUri   | The URL of the source environment's Kylin configuration. It can be `host:7070`, or an absolute file path to the `kylin.properties`.                                                     |
-| dstKylinConfigUri   | The URL of the target environment's Kylin configuration.                                                 |
-| cubeName            | the name of Cube to be migrated.(Make sure it exist)                                       |
-| projectName         | The target project in the target environment.(Make sure it exist)                          |
-| copyAclOrNot        | `true` or `false`: whether copy Cube ACL to target environment.                                |
-| purgeOrNot          | `true` or `false`: whether purge the Cube from src server after the migration.                 |
-| overwriteIfExists   | `true` or `false`: overwrite cube if it already exists in the target environment.                             |
-| realExecute         | `true` or `false`: if false, just print the operations to take, if true, do the real migration.               |
-| migrateSegmentOrNot | (Optional) true or false: whether copy segment data to target environment. Default true.   |
-
-## CubeMigrationCheckCLI.java
-
-### Function
-CubeMigrationCheckCLI.java checks, after a Cube migration, whether the "KYLIN_HOST" property is consistent with the MetadataUrlPrefix of the HTables corresponding to all Cube segments in the destination. It is called by CubeMigrationCLI.java and is usually not used separately.
-
-### How to use
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix <conf_value> -dstCfgUri <dstCfgUri_value> -cube <cube_name>
-{% endhighlight %}
-For example:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri kylin-prod:7070 -cube querycube
-{% endhighlight %}
-All supported parameters are listed below:
-
-| Parameter           | Description                                                                   |
-| ------------------- | :---------------------------------------------------------------------------- |
-| fix                 | Fix the inconsistent Cube segments' HOST, default false                       |
-| dstCfgUri           | The KylinConfig of the Cube’s new home                                       |
-| cube                | The name of Cube migrated                                                     |
diff --git a/website/_docs30/howto/howto_use_cli.md b/website/_docs30/howto/howto_use_cli.md
deleted file mode 100644
index 1b5deb0..0000000
--- a/website/_docs30/howto/howto_use_cli.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-layout: docs30
-title:  Use Utility CLIs
-categories: howto
-permalink: /docs30/howto/howto_use_cli.html
----
-Kylin has some client utility tools. This document introduces the following classes: KylinConfigCLI.java, CubeMetaExtractor.java, CubeMetaIngester.java, CubeMigrationCLI.java and CubeMigrationCheckCLI.java. Before using these tools, you have to switch to the KYLIN_HOME directory. 
-
-## KylinConfigCLI.java
-
-### Function
-KylinConfigCLI.java outputs the value of Kylin properties. 
-
-### How to use 
-After the class name, you can only pass one parameter, `conf_name`, the name of the property whose value you want to know.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI <conf_name>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI kylin.server.mode
-{% endhighlight %}
-Result:
-{% highlight Groff markup %}
-all
-{% endhighlight %}
-
-If you do not know the full parameter name, you can use the following command; all parameters with the given prefix will then be listed:
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI <prefix>.
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.KylinConfigCLI kylin.job.
-{% endhighlight %}
-Result:
-{% highlight Groff markup %}
-max-concurrent-jobs=10
-retry=3
-sampling-percentage=100
-{% endhighlight %}
-
-## CubeMetaExtractor.java
-
-### Function
-CubeMetaExtractor.java is used to extract Cube-related info for debugging / distribution purposes.
-
-### How to use
-At least two parameters should be provided.
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -<conf_name> <conf_value> -destDir <your_dest_dir>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube kylin_sales_cube -destDir /tmp/kylin_sales_cube
-{% endhighlight %}
-Result:
-After the command is executed, the cube, project or hybrid you want to extract will be dumped in the specified path.
-
-All supported parameters are listed below:  
-
-| Parameter                                             | Description                                                                                         |
-| ----------------------------------------------------- | :-------------------------------------------------------------------------------------------------- |
-| allProjects                                           | Specify realizations in all projects to extract                                                     |
-| compress <compress>                                   | Specify whether to compress the output with zip. Default true.                                      | 
-| cube <cube>                                           | Specify which Cube to extract                                                                       |
-| destDir <destDir>                                     | (Required) Specify the dest dir to save the related information                                     |
-| hybrid <hybrid>                                       | Specify which hybrid to extract                                                                     |
-| includeJobs <includeJobs>                             | Set this to true if want to extract job info/outputs too. Default false                             |
-| includeSegmentDetails <includeSegmentDetails>         | Set this to true if want to extract segment details too, such as dict, tablesnapshot. Default false |
-| includeSegments <includeSegments>                     | Set this to true if want extract the segments info. Default true                                    |
-| onlyOutput <onlyOutput>                               | When include jobs, only extract output of job. Default true                                         |
-| packagetype <packagetype>                             | Specify the package type                                                                            |
-| project <project>                                     | Specify realizations in which project to extract                                                     |
-
-## CubeMetaIngester.java
-
-### Function
-CubeMetaIngester.java is used to ingest extracted cube metadata into another metadata store. It only supports ingesting cubes for now.
-
-### How to use
-At least two parameters should be specified. Please make sure the cube you want to ingest does not exist in the target project. 
-
-Note: The zip file must contain only one directory after it has been decompressed.
-
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project <target_project> -srcPath <your_src_dir>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /tmp/newconfigdir1/cubes.zip
-{% endhighlight %}
-Result:
-After the command is successfully executed, the cube you want to ingest will appear in the target project.
-
-All supported parameters are listed below:
-
-| Parameter                         | Description                                                                                                                                                                                        |
-| --------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| forceIngest <forceIngest>         | Skip the target Cube, model and table check and ingest by force. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false.                      |
-| overwriteTables <overwriteTables> | If table meta conflicts, overwrite the one in metadata store with the one in srcPath. Use in caution because it might break existing cubes! Suggest to backup metadata store first. Default false. |
-| project <project>                 | (Required) Specify the target project for the new cubes.                                                                                                                                           |
-| srcPath <srcPath>                 | (Required) Specify the path to the extracted Cube metadata zip file.                                                                                                                               |
-
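-A sketch of a typical extract-then-ingest workflow between two metadata stores (the cube name and paths are illustrative):
-
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaExtractor -cube kylin_sales_cube -destDir /tmp/cubes
-./bin/kylin.sh org.apache.kylin.tool.CubeMetaIngester -project querytest -srcPath /tmp/cubes/cubes.zip
-{% endhighlight %}
-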
-## CubeMigrationCLI.java
-
-### Function
-CubeMigrationCLI.java can migrate a cube from one Kylin environment to another, for example, promoting a well-tested cube from the testing environment to the production environment. Note that the different Kylin environments must share the same Hadoop cluster, including HDFS, HBase and Hive. 
-
-Please note, this tool will migrate the Kylin metadata, rename the Kylin HDFS folders and update the HBase tables' metadata. It doesn't migrate data across Hadoop clusters. 
-
-### How to use
-
-
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI <srcKylinConfigUri> <dstKylinConfigUri> <cubeName> <projectName> <copyAclOrNot> <purgeOrNot> <overwriteIfExists> <realExecute> <migrateSegmentOrNot>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCLI kylin-qa:7070 kylin-prod:7070 kylin_sales_cube learn_kylin true false false true false
-{% endhighlight %}
-After the command is successfully executed, please reload Kylin metadata, the cube you want to migrate will appear in the target environment.
-
-All supported parameters are listed below:
- If the data model of the cube you want to migrate does not exist in the target environment, this tool will also migrate the model.
- If you set `overwriteIfExists` to `false` and the cube already exists in the target environment, the tool will stop.
- If you set `migrateSegmentOrNot` to `true`, please make sure the cube has `READY` segments; they will be migrated to the target environment together.
-
-| Parameter           | Description                                                                                |
-| ------------------- | :----------------------------------------------------------------------------------------- |
-| srcKylinConfigUri   | The URL of the source environment's Kylin configuration. It can be `host:7070`, or an absolute file path to the `kylin.properties`.                                                      |
-| dstKylinConfigUri   | The URL of the target environment's Kylin configuration.                                                     |
-| cubeName            | The name of the cube to be migrated.                                        |
-| projectName         | The target project in the target environment. If it doesn't exist, create it before run this command.                          |
-| copyAclOrNot        | `true` or `false`: whether copy the cube ACL to target environment.                                |
-| purgeOrNot          | `true` or `false`: whether to purge the cube from the source environment after it has been migrated to the target environment.                 |
-| overwriteIfExists   | `true` or `false`: whether to overwrite if it already exists in the target environment.                             |
-| realExecute         | `true` or `false`: If false, just print the operations to take (dry-run mode); if true, do the real migration.               |
-| migrateSegmentOrNot | (Optional) `true` or `false`: whether copy segment info to the target environment. Default true.   |
-
-## CubeMigrationCheckCLI.java
-
-### Function
-CubeMigrationCheckCLI.java checks, after a Cube migration, whether the "KYLIN_HOST" property is consistent with the destination's MetadataUrlPrefix for the HTables corresponding to all of the Cube's segments. It is called by CubeMigrationCLI.java and is usually not used separately. 
-
-### How to use
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix <conf_value> -dstCfgUri <dstCfgUri_value> -cube <cube_name>
-{% endhighlight %}
-For example: 
-{% highlight Groff markup %}
-./bin/kylin.sh org.apache.kylin.tool.CubeMigrationCheckCLI -fix true -dstCfgUri kylin-prod:7070 -cube querycube
-{% endhighlight %}
-All supported parameters are listed below:
-
-| Parameter           | Description                                                                   |
-| ------------------- | :---------------------------------------------------------------------------- |
-| fix                 | Fix the inconsistent Cube segments' HOST, default false                       |
-| dstCfgUri           | The KylinConfig of the Cube’s new home                                       |
-| cube                | The cube name.                                                     |
\ No newline at end of file
diff --git a/website/_docs30/howto/howto_use_distributed_scheduler.md b/website/_docs30/howto/howto_use_distributed_scheduler.md
deleted file mode 100644
index a24eb25..0000000
--- a/website/_docs30/howto/howto_use_distributed_scheduler.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: docs30
-title:  Use distributed job scheduler
-categories: howto
-permalink: /docs30/howto/howto_use_distributed_scheduler.html
----
-
-Since Kylin 2.0, Kylin supports a distributed job scheduler,
-which is more extensible, available and reliable than the default job scheduler.
-To enable the distributed job scheduler, you need to set or update three configs in kylin.properties (a concrete sketch follows the list):
-
-```
-1. kylin.job.scheduler.default=2
-2. kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-3. add all job servers and query servers to the kylin.server.cluster-servers
-```
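-
-For example, a minimal sketch (host names and ports are placeholders for your own servers):
-
-```
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
-```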
diff --git a/website/_docs30/howto/howto_use_health_check_cli.md b/website/_docs30/howto/howto_use_health_check_cli.md
deleted file mode 100644
index c40a32f..0000000
--- a/website/_docs30/howto/howto_use_health_check_cli.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-layout: docs30
-title:  Kylin Health Check (NEW)
-categories: howto
-permalink: /docs30/howto/howto_use_health_check_cli.html
----
-
-## Get started
-In Kylin 3.0, we added a health check job which helps detect whether your Kylin instance is in a good state. This helps reduce manual work for Kylin administrators. If you have hundreds of cubes and thousands of building jobs every day, this feature helps you quickly find failed jobs, segments that lost files or HBase tables, and cubes with too high an expansion rate. 
-
-Use this feature by adding the following to *kylin.properties* (this sample assumes a 126.com mailbox):
-{% highlight Groff markup %}
-kylin.job.notification-enabled=true
-kylin.job.notification-mail-enable-starttls=true
-kylin.job.notification-mail-host=smtp.126.com
-kylin.job.notification-mail-username=hahaha@126.com
-kylin.job.notification-mail-password=hahaha
-kylin.job.notification-mail-sender=hahaha@126.com
-kylin.job.notification-admin-emails=hahaha@kyligence.io,hahaha@126.com
-{% endhighlight %} 
-After starting the Kylin process, execute the following command; the result will be sent to you by email. In a production environment, it should be scheduled by crontab or similar (see the sketch below).
-{% highlight Groff markup %}
-sh bin/kylin.sh org.apache.kylin.tool.KylinHealthCheckJob
-{% endhighlight %} 
-You will then receive the report email in your mailbox.
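-
-A hypothetical crontab entry that runs the check daily at 01:00 (the KYLIN_HOME path is a placeholder):
-{% highlight Groff markup %}
-0 1 * * * cd /usr/local/kylin && sh bin/kylin.sh org.apache.kylin.tool.KylinHealthCheckJob
-{% endhighlight %}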
-
-## Detail of health check step
-
-### Checking metadata
-This part records the path of every entry that the Kylin process failed to load from the metadata store (ResourceStore). Such failures may be a signal of the health state of Kylin's metadata store.
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-Error loading CubeDesc at ${PATH} ...
-Error loading DataModelDesc at ${PATH} ...
-{% endhighlight %}
-
-### Fix missing HDFS path of segments
-This part visits all segments and checks whether each segment's files exist in HDFS.
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-Project: ${PROJECT} cube: ${CUBE} segment: ${SEGMENT} cube id data: ${SEGMENT_PATH} don't exist and need to rebuild it
-{% endhighlight %}
-
-### Checking HBase Table of segments
-This part checks whether the HTable belonging to each segment exists and is in the Enabled state; you may need to rebuild or re-enable segments if problems are found (see the sketch below).
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-HBase table: {TABLE_NAME} not exist for segment: {SEGMENT}, project: {PROJECT}
-{% endhighlight %}
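-
-To inspect a reported table manually, a sketch using the HBase shell (the table name is illustrative):
-{% highlight Groff markup %}
-echo "is_enabled 'KYLIN_ABC123'" | hbase shell
-{% endhighlight %}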
-
-### Checking holes of Cubes
-This part checks for segment holes in each cube; if any are found, the missing segments need to be rebuilt.
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-{COUNT_HOLE} holes in cube: {CUBE_NAME}, project: {PROJECT_NAME}
-{% endhighlight %}
-
-### Checking too many segments of Cubes
-This part checks for cubes which have too many segments; such segments need to be merged.
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-Too many segments: {COUNT_OF_SEGMENT} for cube: {CUBE_NAME}, project: {PROJECT_NAME}, please merge the segments
-{% endhighlight %}
-
-The threshold is decided by `kylin.tool.health-check.warning-segment-num`; the default value is `-1`, which means this check is skipped.
-
-### Checking out-of-date Cubes
-This part looks for cubes which have not been built for a long time; perhaps you don't really need them anymore.
-
-If any error is found, it will be reported via email as follows.
-{% highlight Groff markup %}
-Ready Cube: {CUBE_NAME} in project: {PROJECT_NAME} is not built more then {DAYS} days, maybe it can be disabled
-Disabled Cube: {CUBE_NAME} in project: {PROJECT_NAME} is not built more then {DAYS} days, maybe it can be deleted
-{% endhighlight %}
-
-The threshold is decided by `kylin.tool.health-check.stale-cube-threshold-days`; the default value is `100`.
-
-### Check data expansion rate
-This part checks for cubes with a high expansion rate; you may consider optimizing them.
-
-If any is found, it will be reported via stdout as follows.
-{% highlight Groff markup %}
-Cube: {CUBE_NAME} in project: {PROJECT_NAME} with too large expansion rate: {RATE}, cube data size: {SIZE}G
-{% endhighlight %}
-
-The expansion rate warning threshold is decided by `kylin.tool.health-check.warning-cube-expansion-rate`.
-The cube-size warning threshold is decided by `kylin.tool.health-check.expansion-check.min-cube-size-gb`.
-
-### Check cube configuration
-
-This part checks whether each cube has auto-merge and retention configured.
-
-If any is missing, it will be reported via stdout as follows.
-{% highlight Groff markup %}
-Cube: {CUBE_NAME} in project: {PROJECT_NAME} with no auto merge params
-Cube: {CUBE_NAME} in project: {PROJECT_NAME} with no retention params
-{% endhighlight %} 
-
-### Cleanup stopped job
-
-Jobs in ERROR or STOPPED state which have not been repaired in time will be reported, if any are found.
-
-{% highlight Groff markup %}
-Should discard job: {}, which in ERROR/STOPPED state for {} days
-{% endhighlight %} 
-
-The duration is set by `kylin.tool.health-check.stale-job-threshold-days`; the default is `30`.
-
-
-----
-
-For the details of the health check, please see the code at *org.apache.kylin.rest.job.KylinHealthCheckJob* in the GitHub repo.
-If you have more suggestions or want to add more check rules, please submit a PR to the master branch.
diff --git a/website/_docs30/howto/howto_use_mr_hive_dict.md b/website/_docs30/howto/howto_use_mr_hive_dict.md
deleted file mode 100644
index 8f6df9f..0000000
--- a/website/_docs30/howto/howto_use_mr_hive_dict.md
+++ /dev/null
@@ -1,205 +0,0 @@
----
-layout: docs30
-title:  Use Hive to build global dictionary
-categories: howto
-permalink: /docs30/howto/howto_use_hive_mr_dict.html
----
-
-## Global Dictionary in Hive
-The count distinct (bitmap) measure is very important for many scenarios, such as PageView statistics; Kylin has supported count distinct since 1.5.3.
-Apache Kylin implements the precise count distinct measure based on bitmaps, and uses a global dictionary to encode string values into integers.
-Currently the global dictionary has to be built in a single process/JVM, which may take a lot of time and memory for UHC (ultra-high-cardinality) columns. With this feature (KYLIN-3841), we use Hive, a distributed SQL engine, to build the global dictionary.
-
-This will help to:
-1. Reduce memory pressure on the Kylin process; MapReduce (or whatever engine Hive uses) builds the dictionary instead of the Kylin process itself.
-2. Make building the base cuboid quicker, because string values have already been encoded in a previous step.
-3. Make the global dictionary reusable.
-4. Make the global dictionary readable and bijective; you may use the global dictionary outside Kylin, which can be useful in many scenarios.
-
-### Step by step Analysis
-If enabled, this feature adds three steps to cube building; let us walk through what Kylin does in each of them.
-
-1. Global Dict Mr/Hive extract dict_val from Data
-
-    - Create a Hive table to store the global dictionary if it does not exist; the table name should be *CubeName_Suffix*. This table has two normal columns and one partition column; the normal columns are `dict_key` and `dict_val`, holding the original value and the encoded integer respectively.
-    - Create a temporary table with "__group_by" as its suffix, which is used to store the distinct values of each column. This table has one normal column and one partition column; the normal column is `dict_key`, which stores the original value.
-    - Insert the distinct values of each column into the temporary table created above, using a Hive query like "select colA from flatTable group by colA".
-
-    When this step finishes, you get a temporary table containing the distinct values, with one partition per count-distinct column.
-
-2. Global Dict Mr/Hive build dict_val
-
-    - Find all fresh distinct values which never existed in any older segment, via a *LEFT JOIN* between the global dictionary table and the temporary table.
-    - Append all fresh distinct values to the tail of the global dictionary table via *UNION*. Thanks to the `row_number` function in Hive, the appended values are encoded as integers incrementally.
-
-    When this step finishes, the distinct values of all count-distinct columns are correctly encoded in the global dictionary table.
-
-3. Global Dict Mr/Hive replace dict_val to Data
-
-    - Use a *LEFT JOIN* to replace the original string values with the encoded integers on the flat table, which is used to build cuboids later.
-
-    When this step finishes, all string values belonging to count-distinct columns have been replaced with encoded integers in the flat Hive table.
-
-----
-
-## How to use
-
-If you have count distinct (bitmap) measures and the data type of those columns is String, you may need the Hive global dictionary. Say the column names are PV_ID and USER_ID, and the table name is USER_ACTION; you may then add the cube-level configuration `kylin.dictionary.mr-hive.columns=USER_ACTION_PV_ID,USER_ACTION_USER_ID` to enable this feature (see the sketch after the configuration list below).
-
-Please don't use the Hive global dictionary on integer-type columns; be aware that the values will be replaced with encoded integers in the flat Hive table. If you have a sum/max/min measure on the same column, you will get wrong results for those measures.
-
-Also be aware that this feature conflicts with the shrunken global dictionary (KYLIN-3491), because they fix the same problem in different ways.
-
-### Configuration
-
-- `kylin.dictionary.mr-hive.columns` is used to specify which columns need the Hive-MR dict, in the form *TABLE1_COLUMN1,TABLE2_COLUMN2*. It is better configured at the cube level; the default value is empty.
-- `kylin.dictionary.mr-hive.database` is used to specify which database the Hive-MR dict table is located in; the default value is *default*.
-- `kylin.hive.union.style` Sometimes the SQL used to build the global dict table may have problems with the union syntax; you may refer to the Hive documentation for more detail. The default value is *UNION*; lower versions of Hive should change it to *UNION ALL*.
-- `kylin.dictionary.mr-hive.table.suffix` is used to specify the suffix of the global dict table; the default value is *_global_dict*.
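-
-Putting these together, a sketch of a cube-level setup for the USER_ACTION example above (the values are illustrative):
-
-{% highlight Groff markup %}
-kylin.dictionary.mr-hive.columns=USER_ACTION_PV_ID,USER_ACTION_USER_ID
-kylin.dictionary.mr-hive.database=default
-kylin.dictionary.mr-hive.table.suffix=_global_dict
-{% endhighlight %}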
-
-----
-
-## Screenshot
-
-#### Add count_distinct(bitmap) measure
-
-![add_count_distinct_bitmap](/images/Hive-Global-Dictionary/cube-level-config.png)
-
-#### Set hive-dict-column in cube-level config
-
-![set-hive-dict-column](/images/Hive-Global-Dictionary/set-hive-dict-column.png)
-
-#### Three added steps of the cubing job
-
-![three-added-steps](/images/Hive-Global-Dictionary/three-added-steps.png)
-
-#### Hive Global Dictionary Table
-
-![hive-global-dict-table](/images/Hive-Global-Dictionary/hive-global-dict-table.png)
-
-#### SQL in the newly added steps
-
-- Global Dict Mr/Hive extract dict_val from Data
-
-    {% highlight Groff markup %}
-    CREATE TABLE IF NOT EXISTS lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL
-    ( dict_key STRING COMMENT '',
-    dict_val INT COMMENT ''
-    )
-    COMMENT ''
-    PARTITIONED BY (dict_column string)
-    STORED AS TEXTFILE;
-    DROP TABLE IF EXISTS kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by;
-    CREATE TABLE IF NOT EXISTS kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by
-    (
-     dict_key STRING COMMENT ''
-    )
-    COMMENT ''
-    PARTITIONED BY (dict_column string)
-    STORED AS SEQUENCEFILE
-    ;
-    INSERT OVERWRITE TABLE kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by
-    PARTITION (dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME')
-    SELECT
-    KYLIN_SALES_LSTG_FORMAT_NAME
-    FROM kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195
-    GROUP BY KYLIN_SALES_LSTG_FORMAT_NAME
-    ;
-    INSERT OVERWRITE TABLE kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by
-    PARTITION (dict_column = 'KYLIN_SALES_OPS_REGION')
-    SELECT
-    KYLIN_SALES_OPS_REGION
-    FROM kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195
-    GROUP BY KYLIN_SALES_OPS_REGION ;
-    {% endhighlight %}
-
-- Global Dict Mr/Hive build dict_val
-
-    {% highlight Groff markup %}
-    INSERT OVERWRITE TABLE lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL
-    PARTITION (dict_column = 'KYLIN_SALES_OPS_REGION')
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL
-    WHERE dict_column = 'KYLIN_SALES_OPS_REGION'
-    UNION ALL
-    SELECT a.dict_key as dict_key, (row_number() over(order by a.dict_key asc)) + (0) as dict_val
-    FROM
-    (
-     SELECT dict_key FROM default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by WHERE dict_column = 'KYLIN_SALES_OPS_REGION' AND dict_key is not null
-    ) a
-    LEFT JOIN
-    (
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL WHERE dict_column = 'KYLIN_SALES_OPS_REGION'
-    ) b
-    ON a.dict_key = b.dict_key
-    WHERE b.dict_val is null;
-
-    INSERT OVERWRITE TABLE lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL
-    PARTITION (dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME')
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL
-    WHERE dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME'
-    UNION ALL
-    SELECT a.dict_key as dict_key, (row_number() over(order by a.dict_key asc)) + (0) as dict_val
-    FROM
-    (
-     SELECT dict_key FROM default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195__group_by WHERE dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME' AND dict_key is not null
-    ) a
-    LEFT JOIN
-    (
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL WHERE dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME'
-    ) b
-    ON a.dict_key = b.dict_key
-    WHERE b.dict_val is null;
-{% endhighlight %}
-
-- Global Dict Mr/Hive replace dict_val to Data
-
-{% highlight Groff markup %}
-    INSERT OVERWRITE TABLE default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195
-    SELECT
-    a.KYLIN_SALES_TRANS_ID
-    ,a.KYLIN_SALES_PART_DT
-    ,a.KYLIN_SALES_LEAF_CATEG_ID
-    ,a.KYLIN_SALES_LSTG_SITE_ID
-    ,a.KYLIN_SALES_SELLER_ID
-    ,a.KYLIN_SALES_BUYER_ID
-    ,a.BUYER_ACCOUNT_ACCOUNT_COUNTRY
-    ,a.SELLER_ACCOUNT_ACCOUNT_COUNTRY
-    ,a.KYLIN_SALES_PRICE
-    ,a.KYLIN_SALES_ITEM_COUNT
-    ,a.KYLIN_SALES_LSTG_FORMAT_NAME
-    ,b. dict_val
-    FROM default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195 a
-    LEFT OUTER JOIN
-    (
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL WHERE dict_column = 'KYLIN_SALES_OPS_REGION'
-    ) b
-     ON a.KYLIN_SALES_OPS_REGION = b.dict_key;
-    INSERT OVERWRITE TABLE default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195
-    SELECT
-    a.KYLIN_SALES_TRANS_ID
-    ,a.KYLIN_SALES_PART_DT
-    ,a.KYLIN_SALES_LEAF_CATEG_ID
-    ,a.KYLIN_SALES_LSTG_SITE_ID
-    ,a.KYLIN_SALES_SELLER_ID
-    ,a.KYLIN_SALES_BUYER_ID
-    ,a.BUYER_ACCOUNT_ACCOUNT_COUNTRY
-    ,a.SELLER_ACCOUNT_ACCOUNT_COUNTRY
-    ,a.KYLIN_SALES_PRICE
-    ,a.KYLIN_SALES_ITEM_COUNT
-    ,b. dict_val
-    ,a.KYLIN_SALES_OPS_REGION
-    FROM default.kylin_intermediate_kylin_sale_hive_dict_921b0a15_d7cd_a2e6_6852_4ce44158f195 a
-    LEFT OUTER JOIN
-    (
-    SELECT dict_key, dict_val FROM lacus.KYLIN_SALE_HIVE_DICT_HIVE_GLOBAL WHERE dict_column = 'KYLIN_SALES_LSTG_FORMAT_NAME'
-    ) b
-     ON a.KYLIN_SALES_LSTG_FORMAT_NAME = b.dict_key;
-{% endhighlight %}
-
-### Reference Link
-
-- https://issues.apache.org/jira/browse/KYLIN-3491
-- https://issues.apache.org/jira/browse/KYLIN-3841
-- https://issues.apache.org/jira/browse/KYLIN-3905
-- https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Union
-- http://kylin.apache.org/blog/2016/08/01/count-distinct-in-kylin/
\ No newline at end of file
diff --git a/website/_docs30/howto/howto_use_restapi.cn.md b/website/_docs30/howto/howto_use_restapi.cn.md
deleted file mode 100644
index 1361b2f..0000000
--- a/website/_docs30/howto/howto_use_restapi.cn.md
+++ /dev/null
@@ -1,1495 +0,0 @@
----
-layout: docs30-cn
-title:  RESTful API
-categories: howto
-permalink: /cn/docs30/howto/howto_use_restapi.html
-since: v0.7.1
----
-
-This page lists the major RESTful APIs provided by Kylin.
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [Create cube](#create-cube)
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Enable cube](#enable-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Delete segment](#delete-segment)
-* MODEL
-   * [Create model](#create-model)
-   * [Get modelDescData](#get-modeldescdata)
-   * [Delete model](#delete-model)
-* JOB
-   * [Resume job](#resume-job)
-   * [Pause job](#pause-job)
-   * [Drop job](#drop-job)
-   * [Discard job](#discard-job)
-   * [Get job status](#get-job-status)
-   * [Get job step output](#get-job-step-output)
-   * [Get job list](#get-job-list)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-* Streaming
-   * [Initiate cube start position](#initiate-cube-start-position)
-   * [Build stream cube](#build-stream-cube)
-   * [Check segment holes](#check-segment-holes)
-   * [Fill segment holes](#fill-segment-holes)
-
-## Authentication
-`POST /kylin/api/user/authentication`
-
-#### Request Header
-Authorization data encoded by basic auth is needed in the header, for example:
-Authorization: Basic {data}
-You can generate {data} by using the Python one-liner below (works with both Python 2 and 3):
-```
-python -c "import base64; print(base64.standard_b64encode(b'$UserName:$Password').decode())"
-```
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-#### Curl Example
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If login succeeds, the JSESSIONID will be saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
-```
-
-Alternatively, you can provide the username/password with the "user" option in each curl call; please note this risks leaking the password into the shell history:
-
-
-```
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
-```
-
-***
-
-## Query
-`POST /kylin/api/query`
-
-#### Request Body
-* sql - `required` `string` The text of the SQL statement.
-* offset - `optional` `int` Query offset. If an offset is set in the SQL, this parameter will be ignored.
-* limit - `optional` `int` Query limit. If a limit is set in the SQL, this parameter will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; default is "false". Set it to "false" for production use.
-* project - `optional` `string` Project to perform the query. Default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Curl Example
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
-```
-
-#### Response Body
-* columnMetas - Column metadata information of the result set.
-* results - Data set of the result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of rows affected by this SQL statement.
-* isException - Whether this response is an exception.
-* exceptionMessage - Message content of the exception.
-* duration - Time cost of this query.
-* partial - Whether the response is a partial result or not. Decided by `acceptPartial` of the request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-
-## List queryable tables
-`GET /kylin/api/tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to list tables from
-
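-#### Curl Example
-
-For illustration, a call against a local instance (host, port and project name are placeholders):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://localhost:7070/kylin/api/tables_and_columns?project=learn_kylin"
-```
-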
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## Create cube
-`POST /kylin/api/cubes`
-
-#### Request Body
-* cubeDescData - `required` `string` The cube descriptor JSON, serialized as a string
-* cubeName - `required` `string` Name of the cube to create
-* projectName - `required` `string` Project to which the cube belongs
-
-#### Request Sample
-```
-{
-"cubeDescData":"{\"uuid\": \"0ef9b7a8-3929-4dff-b59d-2100aadc8dbf\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_cube\",\"is_draft\": false,\"model_name\": \"kylin_sales_model\",\"description\": \"\",\"null_string\": null,\"dimensions\": [{\"name\": \"TRANS_ID\",\"table\": \"KYLIN_SALES\",\"column\": \"TRANS_ID\",\"derived\": null},{\"name\": \"YEAR_BEG_DT\",\"table\": \"KYLIN_CAL_DT\",\"column\": null,\"derived\": [\"YEAR_BEG_DT\"]},{\"name\": \"MONTH_BEG_DT\ [...]
-"cubeName":"kylin_test_cube",
-"project":"learn_kylin"
-}
-```
-
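-#### Curl Example
-
-An illustrative sketch: here the request body above is assumed to be saved in a local file named cube_request.json (a hypothetical name).
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d @cube_request.json http://localhost:7070/kylin/api/cubes
-```
-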
-#### Response Sample
-```
-{
-"uuid": "7b3faf69-eca8-cc5f-25f9-49b0f0b5d404",
-"cubeName": "kylin_test_cube",
-"cubeDescData":"{\"uuid\": \"0ef9b7a8-3929-4dff-b59d-2100aadc8dbf\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_cube\",\"is_draft\": false,\"model_name\": \"kylin_sales_model\",\"description\": \"\",\"null_string\": null,\"dimensions\": [{\"name\": \"TRANS_ID\",\"table\": \"KYLIN_SALES\",\"column\": \"TRANS_ID\",\"derived\": null},{\"name\": \"YEAR_BEG_DT\",\"table\": \"KYLIN_CAL_DT\",\"column\": null,\"derived\": [\"YEAR_BEG_DT\"]},{\"name\": \"MONTH_BEG_DT\ [...]
-"streamingData": null,
-"kafkaData": null,
-"successful": true,
-"message": null,
-"project": "learn_kylin",
-"streamingCube": null
-}
-```
-
-## List cubes
-`GET /kylin/api/cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int` Cubes per page.
-* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
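-#### Curl Example
-
-For illustration, fetching the first 15 cubes of a project (paging values are arbitrary):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://localhost:7070/kylin/api/cubes?offset=0&limit=15&projectName=learn_kylin"
-```
-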
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /kylin/api/cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /kylin/api/cube_desc/{cubeName}`
-Get the descriptor of the specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /kylin/api/model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it is the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /kylin/api/cubes/{cubeName}/build`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-01-01
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
-#### Curl Example
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
-```
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /kylin/api/cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /kylin/api/cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /kylin/api/cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-
-## Delete Segment
-`DELETE /kylin/api/cubes/{cubeName}/segs/{segmentName}`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-* segmentName - `required` `string` Segment name.
-
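-#### Curl Example
-
-A sketch with a placeholder cube and segment name:
-
-```
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/cubes/kylin_sales_cube/segs/19700101000000_20140731160000
-```
-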
-***
-
-## Create Model
-`POST /kylin/api/models`
-
-#### Request Body
-* modelDescData - `required` `string` The model descriptor JSON, serialized as a string
-* modelName - `required` `string` Name of the model to create
-* projectName - `required` `string` Project to which the model belongs
-
-#### Request Sample
-```
-{
-"modelDescData": "{\"uuid\": \"0928468a-9fab-4185-9a14-6f2e7c74823f\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_model\",\"owner\": null,\"is_draft\": false,\"description\": \"\",\"fact_table\": \"DEFAULT.KYLIN_SALES\",\"lookups\": [{\"table\": \"DEFAULT.KYLIN_CAL_DT\",\"kind\": \"LOOKUP\",\"alias\": \"KYLIN_CAL_DT\",\"join\": {\"type\": \"inner\",\"primary_key\": [\"KYLIN_CAL_DT.CAL_DT\"],\"foreign_key\": [\"KYLIN_SALES.PART_DT\"]}},{\"table\": \"DEFAULT.KY [...]
-"modelName": "kylin_test_model",
-"project": "learn_kylin"
-}
-```
-
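-#### Curl Example
-
-An illustrative sketch, assuming the request body above is saved in a local file named model_request.json (a hypothetical name):
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d @model_request.json http://localhost:7070/kylin/api/models
-```
-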
-#### Response Sample
-```sh
-{
-"uuid": "2613d739-14c1-38ac-2e37-f36e46fd9976",
-"modelName": "kylin_test_model",
-"modelDescData": "{\"uuid\": \"0928468a-9fab-4185-9a14-6f2e7c74823f\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_model\",\"owner\": null,\"is_draft\": false,\"description\": \"\",\"fact_table\": \"DEFAULT.KYLIN_SALES\",\"lookups\": [{\"table\": \"DEFAULT.KYLIN_CAL_DT\",\"kind\": \"LOOKUP\",\"alias\": \"KYLIN_CAL_DT\",\"join\": {\"type\": \"inner\",\"primary_key\": [\"KYLIN_CAL_DT.CAL_DT\"],\"foreign_key\": [\"KYLIN_SALES.PART_DT\"]}},{\"table\": \"DEFAULT.KY [...]
-"successful": true,
-"message": null,
-"project": "learn_kylin",
-"ccInCheck": null,
-"seekingExprAdvice": false
-}
-```
-
-## Get ModelDescData
-`GET /kylin/api/models`
-
-#### Request Parameters
-* modelName - `optional` `string` Model name.
-* projectName - `optional` `string` Project Name.
-* limit - `optional` `integer` Models per page
-* offset - `optional` `integer` Offset used by pagination
-
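-#### Curl Example
-
-For illustration, listing the models of a project (paging values are arbitrary):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://localhost:7070/kylin/api/models?projectName=learn_kylin&offset=0&limit=10"
-```
-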
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
-        "last_modified": 1568862496000,
-        "version": "3.0.0.20500",
-        "name": "kylin_sales_model",
-        "owner": null,
-        "is_draft": false,
-        "description": "",
-        "fact_table": "DEFAULT.KYLIN_SALES",
-        "lookups": [
-            {
-                "table": "DEFAULT.KYLIN_CAL_DT",
-                "kind": "LOOKUP",
-                "alias": "KYLIN_CAL_DT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "KYLIN_CAL_DT.CAL_DT"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.PART_DT"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
-                "kind": "LOOKUP",
-                "alias": "KYLIN_CATEGORY_GROUPINGS",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
-                        "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.LEAF_CATEG_ID",
-                        "KYLIN_SALES.LSTG_SITE_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_ACCOUNT",
-                "kind": "LOOKUP",
-                "alias": "BUYER_ACCOUNT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "BUYER_ACCOUNT.ACCOUNT_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.BUYER_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_ACCOUNT",
-                "kind": "LOOKUP",
-                "alias": "SELLER_ACCOUNT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "SELLER_ACCOUNT.ACCOUNT_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.SELLER_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_COUNTRY",
-                "kind": "LOOKUP",
-                "alias": "BUYER_COUNTRY",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "BUYER_COUNTRY.COUNTRY"
-                    ],
-                    "foreign_key": [
-                        "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_COUNTRY",
-                "kind": "LOOKUP",
-                "alias": "SELLER_COUNTRY",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "SELLER_COUNTRY.COUNTRY"
-                    ],
-                    "foreign_key": [
-                        "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
-                    ]
-                }
-            }
-        ],
-        "dimensions": [
-            {
-                "table": "KYLIN_SALES",
-                "columns": [
-                    "TRANS_ID",
-                    "SELLER_ID",
-                    "BUYER_ID",
-                    "PART_DT",
-                    "LEAF_CATEG_ID",
-                    "LSTG_FORMAT_NAME",
-                    "LSTG_SITE_ID",
-                    "OPS_USER_ID",
-                    "OPS_REGION"
-                ]
-            },
-            {
-                "table": "KYLIN_CAL_DT",
-                "columns": [
-                    "CAL_DT",
-                    "WEEK_BEG_DT",
-                    "MONTH_BEG_DT",
-                    "YEAR_BEG_DT"
-                ]
-            },
-            {
-                "table": "KYLIN_CATEGORY_GROUPINGS",
-                "columns": [
-                    "USER_DEFINED_FIELD1",
-                    "USER_DEFINED_FIELD3",
-                    "META_CATEG_NAME",
-                    "CATEG_LVL2_NAME",
-                    "CATEG_LVL3_NAME",
-                    "LEAF_CATEG_ID",
-                    "SITE_ID"
-                ]
-            },
-            {
-                "table": "BUYER_ACCOUNT",
-                "columns": [
-                    "ACCOUNT_ID",
-                    "ACCOUNT_BUYER_LEVEL",
-                    "ACCOUNT_SELLER_LEVEL",
-                    "ACCOUNT_COUNTRY",
-                    "ACCOUNT_CONTACT"
-                ]
-            },
-            {
-                "table": "SELLER_ACCOUNT",
-                "columns": [
-                    "ACCOUNT_ID",
-                    "ACCOUNT_BUYER_LEVEL",
-                    "ACCOUNT_SELLER_LEVEL",
-                    "ACCOUNT_COUNTRY",
-                    "ACCOUNT_CONTACT"
-                ]
-            },
-            {
-                "table": "BUYER_COUNTRY",
-                "columns": [
-                    "COUNTRY",
-                    "NAME"
-                ]
-            },
-            {
-                "table": "SELLER_COUNTRY",
-                "columns": [
-                    "COUNTRY",
-                    "NAME"
-                ]
-            }
-        ],
-        "metrics": [
-            "KYLIN_SALES.PRICE",
-            "KYLIN_SALES.ITEM_COUNT"
-        ],
-        "filter_condition": "",
-        "partition_desc": {
-            "partition_date_column": "KYLIN_SALES.PART_DT",
-            "partition_time_column": null,
-            "partition_date_start": 1325376000000,
-            "partition_date_format": "yyyy-MM-dd",
-            "partition_time_format": "HH:mm:ss",
-            "partition_type": "APPEND",
-            "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-        },
-        "capacity": "MEDIUM"
-    }
-]
-```
-
-## Delete Model
-`DELETE /kylin/api/models/{modelName}`
-
-#### Path variable
-* modelName - `required` `string` Model name.
-
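-#### Curl Example
-
-A sketch with a placeholder model name:
-
-```
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/models/kylin_test_model
-```
-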
-***
-
-## Resume Job
-`PUT /kylin/api/jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
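-#### Curl Example
-
-A sketch with a placeholder job id:
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/resume
-```
-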
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-## Pause Job
-`PUT /kylin/api/jobs/{jobId}/pause`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Discard Job
-`PUT /kylin/api/jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Drop Job
-`DELETE /kylin/api/jobs/{jobId}/drop`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Get Job Status
-`GET /kylin/api/jobs/{jobId}`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume Job")
-
-## Get job step output
-`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id; for example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
-
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-## Get job list
-`GET /kylin/api/jobs`
-
-#### Request Variables
-* cubeName - `optional` `string` Cube name.
-* projectName - `required` `string` Project name.
-* status - `optional` `int` Job status, e.g. (NEW: 0, PENDING: 1, RUNNING: 2, STOPPED: 32, FINISHED: 4, ERROR: 8, DISCARDED: 16)
-* offset - `required` `int` Offset used by pagination.
-* limit - `required` `int` Jobs per page.
-* timeFilter - `required` `int`, e.g. (LAST ONE DAY: 0, LAST ONE WEEK: 1, LAST ONE MONTH: 2, LAST ONE YEAR: 3, ALL: 4)
-
-For example, to get the job list in project 'learn_kylin' for cube 'kylin_sales_cube' in the last week:
-
-```
-GET: /kylin/api/jobs?cubeName=kylin_sales_cube&limit=15&offset=0&projectName=learn_kylin&timeFilter=1
-```
-
-#### Response Sample
-```
-[
-  { 
-    "uuid": "9eb7bccf-4448-4578-9c29-552658b5a2ca", 
-    "last_modified": 1490957579843, 
-    "version": "2.0.0", 
-    "name": "Sample_Cube - 19700101000000_20150101000000 - BUILD - GMT+08:00 2017-03-31 18:36:08", 
-    "type": "BUILD", 
-    "duration": 936, 
-    "related_cube": "Sample_Cube", 
-    "related_segment": "53a5d7f7-7e06-4ea1-b3ee-b7f30343c723", 
-    "exec_start_time": 1490956581743, 
-    "exec_end_time": 1490957518131, 
-    "mr_waiting": 0, 
-    "steps": [
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-00", 
-        "name": "Create Intermediate Flat Hive Table", 
-        "sequence_id": 0, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 1490957508721, 
-        "exec_end_time": 1490957518102, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": { "endTime": "1490957518102", "startTime": "1490957508721" }, 
-        "run_async": false 
-      }, 
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-01", 
-        "name": "Redistribute Flat Hive Table", 
-        "sequence_id": 1, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 0, 
-        "exec_end_time": 0, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": {}, 
-        "run_async": false 
-      }
-    ],
-    "submitter": "ADMIN", 
-    "job_status": "FINISHED", 
-    "progress": 100.0 
-  }
-]
-```
-***
-
-## Get Hive Table
-`GET /kylin/api/tables/{project}/{tableName}`
-
-#### Path Parameters
-* project - `required` `string` Project name
-* tableName - `required` `string` Table name to find
-
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Tables
-`GET /kylin/api/tables`
-
-#### Request Parameters
-* project - `required` `string` List all tables in this project.
-* ext - `optional` `boolean` Set to true to get extended info of the tables.
-
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /kylin/api/tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` Comma-separated names of the tables to load from Hive.
-* project - `required` `string` The project into which the tables will be loaded.
-
-#### Request Body
-* calculate - `optional` `boolean` Whether to collect column cardinality statistics after loading.
-
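-#### Curl Example
-
-An illustrative call, assuming the body is a JSON object carrying the calculate flag (table and project names are placeholders):
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"calculate": true}' http://localhost:7070/kylin/api/tables/DEFAULT.SAMPLE_07,DEFAULT.SAMPLE_08/learn_kylin
-```
-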
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`PUT /kylin/api/cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
-
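-#### Curl Example
-
-For illustration, announcing an update of one cube's cache (cube name is a placeholder):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/cache/CUBE/kylin_sales_cube/update
-```
-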
-***
-
-## Initiate cube start position
-Set the streaming cube's start position to the current latest offsets. This avoids building from the earliest position of the Kafka topic (if you have set a long retention time).
-
-`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Response Sample
-```sh
-{
-    "result": "success", 
-    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
-}
-```
-
-## Build stream cube
-`PUT /kylin/api/cubes/{cubeName}/build2`
-
-This API is specific to streaming cube builds.
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Request Body
-
-* sourceOffsetStart - `required` `long` The start offset; 0 means continuing from the previous position.
-* sourceOffsetEnd - `required` `long` The end offset; 9223372036854775807 means up to the end of the current stream data.
-* buildType - `required` `string` Build type: "BUILD", "MERGE" or "REFRESH".
-
-#### Request Sample
-
-```sh
-{  
-   "sourceOffsetStart": 0, 
-   "sourceOffsetEnd": 9223372036854775807, 
-   "buildType": "BUILD"
-}
-```
-
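-#### Curl Example
-
-A sketch sending the request body above (cube name is a placeholder):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_streaming_cube/build2
-```
-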
-#### Response Sample
-```sh
-{
-    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
-    "last_modified": 1480402541240, 
-    "version": "1.6.0", 
-    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
-    "type": "BUILD", 
-    "duration": 0, 
-    "related_cube": "embedded_cube_clone", 
-    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
-    "exec_start_time": 0, 
-    "exec_end_time": 0, 
-    "mr_waiting": 0, 
- ...
-}
-```
-
-## Check segment holes
-`GET /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-## Fill segment holes
-`PUT /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
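-#### Curl Example
-
-For illustration, check for holes first, then fill them (cube name is a placeholder):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/cubes/kylin_sales_cube/holes
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://localhost:7070/kylin/api/cubes/kylin_sales_cube/holes
-```
-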
-
-
-## Use RESTful API in JavaScript
-
-The key points of calling the Kylin RESTful API from a web page are:
-
-1. Add basic access authorization info to the HTTP headers.
-
-2. Use the proper request type and data syntax.
-
-Kylin security is based on basic access authorization; to use the API from your JavaScript, you need to add the authorization info to the HTTP headers, for example:
-
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js).
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
-
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
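-
-On modern browsers, the built-in `btoa` function can serve the same purpose without any plugin, e.g. `btoa('NT_USERNAME' + ':' + 'NT_PASSWORD')`.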
diff --git a/website/_docs30/howto/howto_use_restapi.md b/website/_docs30/howto/howto_use_restapi.md
deleted file mode 100644
index 8bb46b3..0000000
--- a/website/_docs30/howto/howto_use_restapi.md
+++ /dev/null
@@ -1,1496 +0,0 @@
----
-layout: docs30
-title:  Use RESTful API
-categories: howto
-permalink: /docs30/howto/howto_use_restapi.html
-since: v0.7.1
----
-
-This page lists the major RESTful APIs provided by Kylin.
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [Create cube](#create-cube)
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Enable cube](#enable-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Delete segment](#delete-segment)
-* MODEL
-   * [Create model](#create-model)
-   * [Get modelDescData](#get-modeldescdata)
-   * [Delete model](#delete-model)
-* JOB
-   * [Resume job](#resume-job)
-   * [Pause job](#pause-job)
-   * [Drop job](#drop-job)
-   * [Discard job](#discard-job)
-   * [Get job status](#get-job-status)
-   * [Get job step output](#get-job-step-output)
-   * [Get job list](#get-job-list)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-* Streaming
-   * [Initiate cube start position](#initiate-cube-start-position)
-   * [Build stream cube](#build-stream-cube)
-   * [Check segment holes](#check-segment-holes)
-   * [Fill segment holes](#fill-segment-holes)
-
-## Authentication
-`POST /kylin/api/user/authentication`
-
-#### Request Header
-Authorization data encoded with basic auth is needed in the header, such as:
-Authorization:Basic {data}
-You can generate {data} with the Python one-liner below (compatible with both Python 2 and 3):
-```
-python -c "import base64; print(base64.standard_b64encode(b'$UserName:$Password').decode())"
-```
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-#### Curl Example
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If the login succeeds, the JSESSIONID will be saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
-```
-
-Alternatively, you can provide the username/password with the "--user" option in each curl call; please note this risks leaking the password into the shell history:
-
-```
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
-```
-
-***
-
-## Query
-`POST /kylin/api/query`
-
-#### Request Body
-* sql - `required` `string` The text of the SQL statement.
-* offset - `optional` `int` Query offset. If an offset is set in the SQL, this parameter will be ignored.
-* limit - `optional` `int` Query limit. If a limit is set in the SQL, this parameter will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; default is "false". Set it to "false" for production use.
-* project - `optional` `string` Project to perform the query. Default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Curl Example
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
-```
-
-#### Response Body
-* columnMetas - Column metadata information of the result set.
-* results - Data set of the result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of rows affected by this SQL statement.
-* isException - Whether this response is an exception.
-* exceptionMessage - Message content of the exception.
-* duration - Time cost of this query.
-* partial - Whether the response is a partial result or not. Decided by `acceptPartial` of the request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-
-## List queryable tables
-`GET /kylin/api/tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to list tables from
-
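-#### Curl Example
-
-For illustration, a call against a local instance (host, port and project name are placeholders):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://localhost:7070/kylin/api/tables_and_columns?project=learn_kylin"
-```
-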
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## Create cube
-`POST /kylin/api/cubes`
-
-#### Request Body
-* cubeDescData - `required` `string` The cube descriptor JSON, serialized as a string
-* cubeName - `required` `string` Name of the cube to create
-* projectName - `required` `string` Project to which the cube belongs
-
-#### Request Sample
-```
-{
-"cubeDescData":"{\"uuid\": \"0ef9b7a8-3929-4dff-b59d-2100aadc8dbf\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_cube\",\"is_draft\": false,\"model_name\": \"kylin_sales_model\",\"description\": \"\",\"null_string\": null,\"dimensions\": [{\"name\": \"TRANS_ID\",\"table\": \"KYLIN_SALES\",\"column\": \"TRANS_ID\",\"derived\": null},{\"name\": \"YEAR_BEG_DT\",\"table\": \"KYLIN_CAL_DT\",\"column\": null,\"derived\": [\"YEAR_BEG_DT\"]},{\"name\": \"MONTH_BEG_DT\ [...]
-"cubeName":"kylin_test_cube",
-"project":"learn_kylin"
-}
-```
-
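-#### Curl Example
-
-An illustrative sketch: here the request body above is assumed to be saved in a local file named cube_request.json (a hypothetical name).
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d @cube_request.json http://localhost:7070/kylin/api/cubes
-```
-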
-#### Response Sample
-```
-{
-"uuid": "7b3faf69-eca8-cc5f-25f9-49b0f0b5d404",
-"cubeName": "kylin_test_cube",
-"cubeDescData":"{\"uuid\": \"0ef9b7a8-3929-4dff-b59d-2100aadc8dbf\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_cube\",\"is_draft\": false,\"model_name\": \"kylin_sales_model\",\"description\": \"\",\"null_string\": null,\"dimensions\": [{\"name\": \"TRANS_ID\",\"table\": \"KYLIN_SALES\",\"column\": \"TRANS_ID\",\"derived\": null},{\"name\": \"YEAR_BEG_DT\",\"table\": \"KYLIN_CAL_DT\",\"column\": null,\"derived\": [\"YEAR_BEG_DT\"]},{\"name\": \"MONTH_BEG_DT\ [...]
-"streamingData": null,
-"kafkaData": null,
-"successful": true,
-"message": null,
-"project": "learn_kylin",
-"streamingCube": null
-}
-```
-
-## List cubes
-`GET /kylin/api/cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int` Cubes per page.
-* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
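-#### Curl Example
-
-For illustration, fetching the first 15 cubes of a project (paging values are arbitrary):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" "http://localhost:7070/kylin/api/cubes?offset=0&limit=15&projectName=learn_kylin"
-```
-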
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /kylin/api/cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /kylin/api/cube_desc/{cubeName}`
-Get the descriptor for the specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /kylin/api/model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it is the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /kylin/api/cubes/{cubeName}/build`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-01-01
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
-#### Curl Example
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime": 1423526400000, "endTime": 1423612800000, "buildType": "BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
-```
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /kylin/api/cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
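-For example, a minimal curl sketch (host and credentials are placeholders):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/{cubeName}/enable
-```
-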
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /kylin/api/cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /kylin/api/cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-
-## Delete Segment
-`DELETE /kylin/api/cubes/{cubeName}/segs/{segmentName}`
-
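-#### Path variable
-* cubeName - `required` `string` Cube name.
-* segmentName - `required` `string` Segment name, e.g. '19700101000000_20140731160000'.
-
-For example, a minimal curl sketch (placeholders as in the examples above):
-
-```
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/{cubeName}/segs/{segmentName}
-```
-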
-***
-
-## Create Model
-`POST /kylin/api/models`
-
-#### Request Body
-* modelDescData - `required` `string` modelDescData to create
-* modelName - `required` `string` modelName to create
-* project - `required` `string` Project name to which the model belongs (note the request sample below uses the key "project")
-
-#### Request Sample
-```
-{
-"modelDescData": "{\"uuid\": \"0928468a-9fab-4185-9a14-6f2e7c74823f\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_model\",\"owner\": null,\"is_draft\": false,\"description\": \"\",\"fact_table\": \"DEFAULT.KYLIN_SALES\",\"lookups\": [{\"table\": \"DEFAULT.KYLIN_CAL_DT\",\"kind\": \"LOOKUP\",\"alias\": \"KYLIN_CAL_DT\",\"join\": {\"type\": \"inner\",\"primary_key\": [\"KYLIN_CAL_DT.CAL_DT\"],\"foreign_key\": [\"KYLIN_SALES.PART_DT\"]}},{\"table\": \"DEFAULT.KY [...]
-"modelName": "kylin_test_model",
-"project": "learn_kylin"
-}
-```
- 
-#### Response Sample
-```
-{
-"uuid": "2613d739-14c1-38ac-2e37-f36e46fd9976",
-"modelName": "kylin_test_model",
-"modelDescData": "{\"uuid\": \"0928468a-9fab-4185-9a14-6f2e7c74823f\",\"last_modified\": 0,\"version\": \"3.0.0.20500\",\"name\": \"kylin_test_model\",\"owner\": null,\"is_draft\": false,\"description\": \"\",\"fact_table\": \"DEFAULT.KYLIN_SALES\",\"lookups\": [{\"table\": \"DEFAULT.KYLIN_CAL_DT\",\"kind\": \"LOOKUP\",\"alias\": \"KYLIN_CAL_DT\",\"join\": {\"type\": \"inner\",\"primary_key\": [\"KYLIN_CAL_DT.CAL_DT\"],\"foreign_key\": [\"KYLIN_SALES.PART_DT\"]}},{\"table\": \"DEFAULT.KY [...]
-"successful": true,
-"message": null,
-"project": "learn_kylin",
-"ccInCheck": null,
-"seekingExprAdvice": false
-}
-```
-
-## Get ModelDescData
-`GET /kylin/api/models`
-
-#### Request Parameters
-* modelName - `optional` `string` Model name.
-* projectName - `optional` `string` Project name.
-* limit - `optional` `integer` Models per page.
-* offset - `optional` `integer` Offset used by pagination.
-
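-For example, a sketch of listing the models of project 'learn_kylin' (values are illustrative):
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" 'http://<host>:<port>/kylin/api/models?projectName=learn_kylin&offset=0&limit=10'
-```
-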
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
-        "last_modified": 1568862496000,
-        "version": "3.0.0.20500",
-        "name": "kylin_sales_model",
-        "owner": null,
-        "is_draft": false,
-        "description": "",
-        "fact_table": "DEFAULT.KYLIN_SALES",
-        "lookups": [
-            {
-                "table": "DEFAULT.KYLIN_CAL_DT",
-                "kind": "LOOKUP",
-                "alias": "KYLIN_CAL_DT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "KYLIN_CAL_DT.CAL_DT"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.PART_DT"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
-                "kind": "LOOKUP",
-                "alias": "KYLIN_CATEGORY_GROUPINGS",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
-                        "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.LEAF_CATEG_ID",
-                        "KYLIN_SALES.LSTG_SITE_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_ACCOUNT",
-                "kind": "LOOKUP",
-                "alias": "BUYER_ACCOUNT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "BUYER_ACCOUNT.ACCOUNT_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.BUYER_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_ACCOUNT",
-                "kind": "LOOKUP",
-                "alias": "SELLER_ACCOUNT",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "SELLER_ACCOUNT.ACCOUNT_ID"
-                    ],
-                    "foreign_key": [
-                        "KYLIN_SALES.SELLER_ID"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_COUNTRY",
-                "kind": "LOOKUP",
-                "alias": "BUYER_COUNTRY",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "BUYER_COUNTRY.COUNTRY"
-                    ],
-                    "foreign_key": [
-                        "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
-                    ]
-                }
-            },
-            {
-                "table": "DEFAULT.KYLIN_COUNTRY",
-                "kind": "LOOKUP",
-                "alias": "SELLER_COUNTRY",
-                "join": {
-                    "type": "inner",
-                    "primary_key": [
-                        "SELLER_COUNTRY.COUNTRY"
-                    ],
-                    "foreign_key": [
-                        "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
-                    ]
-                }
-            }
-        ],
-        "dimensions": [
-            {
-                "table": "KYLIN_SALES",
-                "columns": [
-                    "TRANS_ID",
-                    "SELLER_ID",
-                    "BUYER_ID",
-                    "PART_DT",
-                    "LEAF_CATEG_ID",
-                    "LSTG_FORMAT_NAME",
-                    "LSTG_SITE_ID",
-                    "OPS_USER_ID",
-                    "OPS_REGION"
-                ]
-            },
-            {
-                "table": "KYLIN_CAL_DT",
-                "columns": [
-                    "CAL_DT",
-                    "WEEK_BEG_DT",
-                    "MONTH_BEG_DT",
-                    "YEAR_BEG_DT"
-                ]
-            },
-            {
-                "table": "KYLIN_CATEGORY_GROUPINGS",
-                "columns": [
-                    "USER_DEFINED_FIELD1",
-                    "USER_DEFINED_FIELD3",
-                    "META_CATEG_NAME",
-                    "CATEG_LVL2_NAME",
-                    "CATEG_LVL3_NAME",
-                    "LEAF_CATEG_ID",
-                    "SITE_ID"
-                ]
-            },
-            {
-                "table": "BUYER_ACCOUNT",
-                "columns": [
-                    "ACCOUNT_ID",
-                    "ACCOUNT_BUYER_LEVEL",
-                    "ACCOUNT_SELLER_LEVEL",
-                    "ACCOUNT_COUNTRY",
-                    "ACCOUNT_CONTACT"
-                ]
-            },
-            {
-                "table": "SELLER_ACCOUNT",
-                "columns": [
-                    "ACCOUNT_ID",
-                    "ACCOUNT_BUYER_LEVEL",
-                    "ACCOUNT_SELLER_LEVEL",
-                    "ACCOUNT_COUNTRY",
-                    "ACCOUNT_CONTACT"
-                ]
-            },
-            {
-                "table": "BUYER_COUNTRY",
-                "columns": [
-                    "COUNTRY",
-                    "NAME"
-                ]
-            },
-            {
-                "table": "SELLER_COUNTRY",
-                "columns": [
-                    "COUNTRY",
-                    "NAME"
-                ]
-            }
-        ],
-        "metrics": [
-            "KYLIN_SALES.PRICE",
-            "KYLIN_SALES.ITEM_COUNT"
-        ],
-        "filter_condition": "",
-        "partition_desc": {
-            "partition_date_column": "KYLIN_SALES.PART_DT",
-            "partition_time_column": null,
-            "partition_date_start": 1325376000000,
-            "partition_date_format": "yyyy-MM-dd",
-            "partition_time_format": "HH:mm:ss",
-            "partition_type": "APPEND",
-            "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-        },
-        "capacity": "MEDIUM"
-    }
-]
-```
-
-## Delete Model
-`DELETE /kylin/api/models/{modelName}`
-
-#### Path variable
-* modelName - `required` `string` Model name you want delete.
-
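-For example, a minimal curl sketch (placeholders as above):
-
-```
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/models/{modelName}
-```
-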
-***
-
-## Resume Job
-`PUT /kylin/api/jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-## Pause Job
-`PUT /kylin/api/jobs/{jobId}/pause`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Discard Job
-`PUT /kylin/api/jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Drop Job
-`DELETE /kylin/api/jobs/{jobId}/drop`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
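-The job-control endpoints above follow the same pattern; for example, a sketch of discarding and then dropping a job (the job id is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/cancel
-curl -X DELETE -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/drop
-```
-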
-## Get Job Status
-`GET /kylin/api/jobs/{jobId}`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume Job")
-
-## Get job step output
-`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id. For example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
-
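-For example, a sketch of fetching the output of that 3rd step:
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/jobs/fb479e54-837f-49a2-b457-651fc50be110/steps/fb479e54-837f-49a2-b457-651fc50be110-3/output
-```
-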
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-## Get job list
-`GET /kylin/api/jobs`
-
-#### Request Variables
-* cubeName - `optional` `string` Cube name.
-* projectName - `required` `string` Project name.
-* status - `optional` `int` Job status, e.g. (NEW: 0, PENDING: 1, RUNNING: 2, STOPPED: 32, FINISHED: 4, ERROR: 8, DISCARDED: 16)
-* offset - `required` `int` Offset used by pagination.
-* limit - `required` `int` Jobs per page.
-* timeFilter - `required` `int` Time filter, e.g. (LAST ONE DAY: 0, LAST ONE WEEK: 1, LAST ONE MONTH: 2, LAST ONE YEAR: 3, ALL: 4)
-
-For example, to get the job list in project 'learn_kylin' for cube 'kylin_sales_cube' in the last week: 
-
-```
-GET: /kylin/api/jobs?cubeName=kylin_sales_cube&limit=15&offset=0&projectName=learn_kylin&timeFilter=1
-```
-
-
-#### Response Sample
-```
-[
-  { 
-    "uuid": "9eb7bccf-4448-4578-9c29-552658b5a2ca", 
-    "last_modified": 1490957579843, 
-    "version": "2.0.0", 
-    "name": "Sample_Cube - 19700101000000_20150101000000 - BUILD - GMT+08:00 2017-03-31 18:36:08", 
-    "type": "BUILD", 
-    "duration": 936, 
-    "related_cube": "Sample_Cube", 
-    "related_segment": "53a5d7f7-7e06-4ea1-b3ee-b7f30343c723", 
-    "exec_start_time": 1490956581743, 
-    "exec_end_time": 1490957518131, 
-    "mr_waiting": 0, 
-    "steps": [
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-00", 
-        "name": "Create Intermediate Flat Hive Table", 
-        "sequence_id": 0, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 1490957508721, 
-        "exec_end_time": 1490957518102, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": { "endTime": "1490957518102", "startTime": "1490957508721" }, 
-        "run_async": false 
-      }, 
-      { 
-        "interruptCmd": null, 
-        "id": "9eb7bccf-4448-4578-9c29-552658b5a2ca-01", 
-        "name": "Redistribute Flat Hive Table", 
-        "sequence_id": 1, 
-        "exec_cmd": null, 
-        "interrupt_cmd": null, 
-        "exec_start_time": 0, 
-        "exec_end_time": 0, 
-        "exec_wait_time": 0, 
-        "step_status": "DISCARDED", 
-        "cmd_type": "SHELL_CMD_HADOOP", 
-        "info": {}, 
-        "run_async": false 
-      }
-    ],
-    "submitter": "ADMIN", 
-    "job_status": "FINISHED", 
-    "progress": 100.0 
-  }
-]
-```
-***
-
-## Get Hive Table
-`GET /kylin/api/tables/{project}/{tableName}`
-
-#### Path Parameters
-* project - `required` `string` Project name.
-* tableName - `required` `string` Table name to find.
-
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Tables
-`GET /kylin/api/tables`
-
-#### Request Parameters
-* project - `required` `string` List all tables in this project.
-* ext - `optional` `boolean` Set to true to get extended info of the tables.
-
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /kylin/api/tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` Table names you want to load from Hive, separated by comma.
-* project - `required` `string` The project which the tables will be loaded into.
-
-#### Request Body
-* calculate - `optional` `boolean` 
-
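-For example, a sketch of loading two sample tables into project 'learn_kylin' (table and project names are illustrative):
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"calculate": true}' http://<host>:<port>/kylin/api/tables/DEFAULT.SAMPLE_07,DEFAULT.SAMPLE_08/learn_kylin
-```
-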
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`PUT /kylin/api/cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
-
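-For example, a sketch of notifying a Kylin instance that a cube was updated (the cube name is illustrative):
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cache/CUBE/kylin_sales_cube/update
-```
-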
-***
-
-## Initiate cube start position
-Set the streaming cube's start position to the current latest offsets; this can avoid building from the earliest position of the Kafka topic (if you have set a long retention time). 
-
-`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
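-For example, a minimal curl sketch:
-
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/{cubeName}/init_start_offsets
-```
-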
-#### Response Sample
-```sh
-{
-    "result": "success", 
-    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
-}
-```
-
-## Build stream cube
-`PUT /kylin/api/cubes/{cubeName}/build2`
-
-This API is specific to streaming cube building.
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Request Body
-
-* sourceOffsetStart - `required` `long` The start offset; 0 means continuing from the previous position
-* sourceOffsetEnd  - `required` `long` The end offset; 9223372036854775807 represents the end position of the current stream data
-* buildType - `required` Build type: "BUILD", "MERGE" or "REFRESH"
-
-#### Request Sample
-
-```sh
-{  
-   "sourceOffsetStart": 0, 
-   "sourceOffsetEnd": 9223372036854775807, 
-   "buildType": "BUILD"
-}
-```
-
-#### Response Sample
-```sh
-{
-    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
-    "last_modified": 1480402541240, 
-    "version": "1.6.0", 
-    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
-    "type": "BUILD", 
-    "duration": 0, 
-    "related_cube": "embedded_cube_clone", 
-    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
-    "exec_start_time": 0, 
-    "exec_end_time": 0, 
-    "mr_waiting": 0, 
- ...
-}
-```
-
-## Check segment holes
-`GET /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-## Fill segment holes
-`PUT /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
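-
-For example, a sketch of checking for holes and then filling them:
-
-```
-curl -X GET -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/{cubeName}/holes
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" http://<host>:<port>/kylin/api/cubes/{cubeName}/holes
-```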
-
-
-
-## Use RESTful API in Javascript
-
-The key points of calling the Kylin RESTful API from a web page are:
-
-1. Add basic access authentication info in the HTTP headers.
-
-2. Use the proper request type and data syntax.
-
-Kylin security is based on basic access authentication; if you want to use the API in your JavaScript, you need to add the authorization info to the HTTP headers. For example:
-
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js).
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
-
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
diff --git a/website/_docs30/howto/howto_use_restapi_in_js.md b/website/_docs30/howto/howto_use_restapi_in_js.md
deleted file mode 100644
index 76bc898..0000000
--- a/website/_docs30/howto/howto_use_restapi_in_js.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs30
-title:  Use RESTful API in Javascript
-categories: howto
-permalink: /docs30/howto/howto_use_restapi_in_js.html
----
-Kylin security is based on basic access authentication; if you want to use the API in your JavaScript, you need to add the authorization info to the HTTP headers.
-
-## Example on Query API.
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-## Key points
-1. Add basic access authentication info in the HTTP headers.
-2. Use the right request type and data syntax.
-
-## Basic access authentication
-For an explanation of basic access authentication, refer to the [Wikipedia page](http://en.wikipedia.org/wiki/Basic_access_authentication).
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js).
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
- 
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
diff --git a/website/_docs30/index.cn.md b/website/_docs30/index.cn.md
deleted file mode 100644
index 53db54d..0000000
--- a/website/_docs30/index.cn.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-layout: docs30-cn
-title: Overview
-categories: docs
-permalink: /cn/docs30/index.html
----
-
-Welcome to Apache Kylin™
-------------  
-> Analytical Data Warehouse for Big Data
-
-Apache Kylin™ is an open source, distributed analytical data warehouse that provides a SQL query interface and multi-dimensional analysis (OLAP) on Hadoop for extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
-
-Documents of prior versions: 
-* [v2.4 document](/cn/docs24/)
-* [v2.3 document](/cn/docs23/)
-* [Archived](/archive/)
-
-Installation
-------------  
-1. [Installation Guide](install/index.html)
-2. [Kylin Configuration](install/configuration.html)
-3. [Deploy in cluster mode](install/kylin_cluster.html)
-4. [Advanced settings](install/advance_settings.html)
-5. [Run Kylin with Docker](install/kylin_docker.html)
-6. [Install Kylin on AWS EMR](install/kylin_aws_emr.html)
-
-Tutorial
-------------  
-1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
-2. [Web Interface](tutorial/web.html)
-3. [Cube Wizard](tutorial/create_cube.html)
-4. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
-5. [SQL Reference](tutorial/sql_reference.html)
-6. [Build Cube with Kafka Streaming](tutorial/cube_streaming.html)
-7. [Build Cube with Spark](tutorial/cube_spark.html)
-8. [Cube Build Tuning](tutorial/cube_build_performance.html)
-9. [Query Pushdown](tutorial/query_pushdown.html)
-10. [Setup System Cube](tutorial/setup_systemcube.html)
-11. [Use Cube Planner](tutorial/use_cube_planner.html)
-12. [Use Dashboard](tutorial/use_dashboard.html)
-13. [Setup JDBC Data Source](tutorial/setup_jdbc_datasource.html)
-
-
-Tool Integration
-------------  
-1. [ODBC driver](tutorial/odbc.html)
-2. [JDBC driver](howto/howto_jdbc.html)
-3. [RESTful API list](howto/howto_use_restapi.html)
-4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
-5. [MS Excel and PowerBI tutorial](tutorial/powerbi.html)
-6. [Tableau 8](tutorial/tableau.html)
-7. [Tableau 9](tutorial/tableau_91.html)
-8. [SQuirreL](tutorial/squirrel.html)
-9. [Qlik Sense integration](tutorial/Qlik.html)
-10. [Apache Superset](tutorial/superset.html)
-11. [Redash](/blog/2018/05/08/redash-kylin-plugin-strikingly/)
-
-
-Help
-------------  
-1. [Backup Kylin metadata](howto/howto_backup_metadata.html)
-2. [Cleanup storage](howto/howto_cleanup_storage.html)
-
-
-
-
-
-
diff --git a/website/_docs30/index.md b/website/_docs30/index.md
deleted file mode 100644
index 04c06b6..0000000
--- a/website/_docs30/index.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-layout: docs30
-title: Overview
-categories: docs
-permalink: /docs30/index.html
----
-
-
-Welcome to Apache Kylin™: Analytical Data Warehouse for Big Data
-------------  
-
-Apache Kylin™ is an open source distributed analytical data warehouse designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets.
-
-This is the document for v3.0 with the new feature Real-time OLAP. Documents of prior versions: 
-* [v2.4 document](/docs24)
-* [v2.3 document](/docs23)
-* [Archived](/archive/)
-
-Installation & Setup
-------------  
-1. [Installation Guide](install/index.html)
-2. [Configurations](install/configuration.html)
-3. [Deploy in cluster mode](install/kylin_cluster.html)
-4. [Advanced settings](install/advance_settings.html)
-5. [Run Kylin with Docker](install/kylin_docker.html)
-6. [Install Kylin on AWS EMR](install/kylin_aws_emr.html)
-
-Tutorial
-------------  
-1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
-2. [Web Interface](tutorial/web.html)
-3. [Cube Wizard](tutorial/create_cube.html)
-4. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
-5. [SQL reference](tutorial/sql_reference.html)
-6. [Build Cube with Streaming Data (Near real-time)](tutorial/cube_streaming.html)
-7. [Real-time OLAP (NEW!)](tutorial/realtime_olap.html)
-8. [Build Cube with Spark Engine](tutorial/cube_spark.html)
-9. [Cube Build Tuning](tutorial/cube_build_performance.html)
-10. [Enable Query Pushdown](tutorial/query_pushdown.html)
-11. [Setup System Cube](tutorial/setup_systemcube.html)
-12. [Optimize with Cube Planner](tutorial/use_cube_planner.html)
-13. [Use System Dashboard](tutorial/use_dashboard.html)
-14. [Setup JDBC Data Source](tutorial/setup_jdbc_datasource.html)
-
-
-Connectivity and APIs
-------------  
-1. [ODBC driver](tutorial/odbc.html)
-2. [JDBC driver](howto/howto_jdbc.html)
-3. [RESTful API list](howto/howto_use_restapi.html)
-4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
-5. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
-6. [Connect from Tableau 8](tutorial/tableau.html)
-7. [Connect from Tableau 9](tutorial/tableau_91.html)
-8. [Connect from MicroStrategy](tutorial/microstrategy.html)
-9. [Connect from SQuirreL](tutorial/squirrel.html)
-10. [Connect from Apache Flink](tutorial/flink.html)
-11. [Connect from Apache Spark](tutorial/spark.html)
-12. [Connect from Hue](tutorial/hue.html)
-13. [Connect from Qlik Sense](tutorial/Qlik.html)
-14. [Connect from Apache Superset](tutorial/superset.html)
-15. [Connect from Redash](/blog/2018/05/08/redash-kylin-plugin-strikingly/)
-
-
-Operations
-------------  
-1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
-2. [Cleanup storage](howto/howto_cleanup_storage.html)
-3. [Upgrade from old version](howto/howto_upgrade.html)
-
-
-
diff --git a/website/_docs30/install/advance_settings.cn.md b/website/_docs30/install/advance_settings.cn.md
deleted file mode 100644
index c5e1451..0000000
--- a/website/_docs30/install/advance_settings.cn.md
+++ /dev/null
@@ -1,191 +0,0 @@
----
-layout: docs30-cn
-title: "Advanced Settings"
-categories: install
-permalink: /cn/docs30/install/advance_settings.html
----
-
-## Overwrite the default kylin.properties at Cube level
-There are many parameters in `conf/kylin.properties` that control/impact Kylin's behaviors; most of them are global configs, such as security or job related ones, while some are Cube related; these Cube related parameters can be customized at each Cube level. The corresponding GUI page is the "Configuration Overwrites" step of the Cube wizard, as shown below.
-
-![]( /images/install/overwrite_config_v2.png)
-
-Two examples:
-
- * `kylin.cube.algorithm`: defines the Cubing algorithm that the job engine selects; the default value is "auto", which means the engine dynamically picks an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly.   
-
- * `kylin.storage.hbase.region-cut-gb`: defines the size of a region when creating the HBase table. The default is "5" (GB) per region. This may be too big for a small or medium cube, so you can set a smaller value to get more regions and better query performance.
-
-## Overwrite the default Hadoop job conf values at Cube level
-`conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` manage the default configurations for Hadoop jobs. If you want to customize the configs per cube, you can do it in a similar way as above, but with the prefix `kylin.engine.mr.config-override.`; these configs are parsed out and applied when submitting jobs. Two examples:
-
- * To let jobs get more memory from Yarn, you can define: `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
- * To let a cube's jobs use a different Yarn resource queue, you can define: `kylin.engine.mr.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just an example; change it to your queue name)
-
-## Overwrite the default Hive job conf values at Cube level
-
-`conf/kylin_hive_conf.xml` manages the default configurations when running Hive jobs (e.g. creating the flat hive table). If you want to customize the configs per cube, you can do it in a similar way as above, but with another prefix `kylin.source.hive.config-override.`; these configs are parsed out and applied when running the "hive -e" or "beeline" commands. See the example below:
-
- * To let hive use a different Yarn resource queue, you can define: `kylin.source.hive.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just an example; change it to your queue name)
-
-## Overwrite the default Spark conf values at Cube level
-
- The configurations for Spark are managed in `conf/kylin.properties` with the prefix `kylin.engine.spark-conf.`. For example, if you want to run Spark with the job queue "myQueue", setting "kylin.engine.spark-conf.spark.yarn.queue=myQueue" lets Spark get "spark.yarn.queue=myQueue" when submitting applications. The parameters can be configured at Cube level, which will override the default values in `conf/kylin.properties`. 
-
-## Enable compression
-
-By default, Kylin does not enable compression. This is not a recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable algorithm will reduce the storage overhead, while an unsupported algorithm will break the Kylin job build. Kylin can use three types of compression: HBase table compression, Hive output compression and MR jobs output compression. 
-
-* HBase table compression
-The compression setting is defined in `kylin.properties` by `kylin.hbase.default.compression.codec`, with the default value *none*. Valid values include *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure your HBase cluster supports the selected algorithm. Especially for snappy, lzo and lz4: not all Hadoop distributions include them. 
-
-* Hive output compression
-The compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages Hive's default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_hive_conf.xml`. Take snappy compression as an example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-* MR jobs output compression
-The compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the MR default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take snappy compression as an example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-Compression settings only take effect after restarting the Kylin server instance.
-
-## Allocate more memory to a Kylin instance
-
-Open `bin/setenv.sh`, which has two sample settings for the `KYLIN_JVM_SETTINGS` environment variable; the default setting is small (4GB at max). You can comment it out and un-comment the next line to allocate 16GB:
-
-{% highlight Groff markup %}
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
-{% endhighlight %}
-
-## Enable multiple job engines
-Since 2.0, Kylin supports running multiple job engines together; compared with the default single job engine setup, multiple engines ensure high availability of job building.
-
-To use multiple job engines, you can set the role of multiple Kylin nodes to `job` or `all`. To avoid competition among them, the distributed job lock needs to be enabled; configure in `kylin.properties`:
-
-```
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock
-```
-And remember to register the addresses of all job and query nodes in `kylin.server.cluster-servers`.
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
-
-
-## Enable email notification
-
-Kylin can send email notifications when a job completes or fails; edit `conf/kylin.properties` and set the following parameters to enable it:
-{% highlight Groff markup %}
-mail.enabled=true
-mail.host=your-smtp-server
-mail.username=your-smtp-account
-mail.password=your-smtp-pwd
-mail.sender=your-sender-address
-kylin.job.admin.dls=adminstrator-address
-{% endhighlight %}
-
-Restart the Kylin server to take effect. Set `mail.enabled` to `false` to disable it.
-
-Administrators will receive notifications for all jobs. Modelers and analysts need to fill in their email addresses in the "Notification List" on the first page of the cube wizard; they will then receive notifications about that cube.
-
-
-## Support MySQL as Kylin metadata storage (beta)
-
-Kylin supports MySQL as the metadata storage; to enable this feature, you need to perform the following steps:
-
-* Install a MySQL server, e.g. v5.1.17;
-* Download and copy the MySQL JDBC connector "mysql-connector-java-<version>.jar" to the $KYLIN_HOME/ext directory (create it if it does not exist)
-* Create a database in MySQL dedicated to Kylin metadata, e.g. kylin_metadata;
-* Edit `conf/kylin.properties` and configure the following parameters:
-
-{% highlight Groff markup %}
-kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
-kylin.metadata.jdbc.dialect=mysql
-kylin.metadata.jdbc.json-always-small-cell=true
-kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
-kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
-kylin.metadata.jdbc.max-cell-size=1mb
-{% endhighlight %}
-
-More JDBC connection settings can be added to the `kylin.metadata.url` config item; among them `url`, `username` and `password` are required. The rest use default values if not configured:
-
-{% highlight Groff markup %}
-url: JDBC connection URL
-username: JDBC user name
-password: JDBC password; if encryption is chosen, put the encrypted password here
-driverClassName: JDBC driver class name, default value is com.mysql.jdbc.Driver
-maxActive: maximum number of database connections, default value is 5
-maxIdle: maximum number of idle connections, default value is 5
-maxWait: maximum milliseconds to wait for a connection, default value is 1000
-removeAbandoned: whether to automatically reclaim timed-out connections, default value is true
-removeAbandonedTimeout: timeout in seconds, default is 300
-passwordEncrypted: whether the JDBC password is encrypted, default is false
-{% endhighlight %}
-
-You can encrypt the JDBC connection password:
-{% highlight Groff markup %}
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-{% endhighlight %}
-
-* Start Kylin
-
-*Note: this feature is still in testing; please use it with caution*
-
-## Use SparkSql to create the intermediate Hive table
-
-**Note: a problem occurs when connecting to the thriftserver for a second build; see [https://issues.apache.org/jira/browse/SPARK-21067](https://issues.apache.org/jira/browse/SPARK-21067) for details**
-
-Kylin can use SparkSql to create the intermediate Hive table; before enabling it: 
-
-- Make sure the following parameters exist in hive-site.xml:
-
-{% highlight Groff markup %}
-
-<property>
-  <name>hive.security.authorization.sqlstd.confwhitelist</name>
-  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
-</property>
-
-<property>
-  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
-  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
-</property>
-    
-{% endhighlight %}
-- Change `hive.execution.engine` to mr (optional); if you want to use tez, make sure the tez related dependencies are imported
-- Copy hive-site.xml to $SPARK_HOME/conf
-- Make sure the HADOOP_CONF_DIR environment variable is set
-- Start the thriftserver with `sbin/start-thriftserver.sh --master spark://sparkmasterip:sparkmasterport`; the port is usually 7077
-- Edit `conf/kylin.properties` and set the following parameters:
-{% highlight Groff markup %}
-kylin.source.hive.enable-sparksql-for-table-ops=true
-kylin.source.hive.sparksql-beeline-shell=/path/to/spark-client/bin/beeline
-kylin.source.hive.sparksql-beeline-params=-n root -u 'jdbc:hive2://thriftserverip:thriftserverport'
-{% endhighlight %}
-
-Restart Kylin to take effect. Set `kylin.source.hive.enable-sparksql-for-table-ops` to `false` to disable it
diff --git a/website/_docs30/install/advance_settings.md b/website/_docs30/install/advance_settings.md
deleted file mode 100644
index 8398de8..0000000
--- a/website/_docs30/install/advance_settings.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-layout: docs30
-title:  "Advanced Settings"
-categories: install
-permalink: /docs30/install/advance_settings.html
----
-
-## Overwrite default kylin.properties at Cube level
-In `conf/kylin.properties` there are many parameters which control/impact Kylin's behaviors. Most parameters are global configs, such as security or job related ones, while some are Cube related. These Cube related parameters can be customized at each Cube level, so you can control the behaviors more flexibly. The GUI for this is the "Configuration Overwrites" step of the Cube wizard, as in the screenshot below.
-
-![]( /images/install/overwrite_config_v2.png)
-
-Here are two examples: 
-
- * `kylin.cube.algorithm`: it defines the Cubing algorithm that the job engine will select; its default value is "auto", which means the engine will dynamically pick an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly.   
-
- * `kylin.storage.hbase.region-cut-gb`: it defines how big a region is when creating the HBase table. The default value is "5" (GB) per region. It might be too big for a small or medium cube, so you can give it a smaller value to get more regions created and gain better query performance.
-
-## Overwrite default Hadoop job conf at Cube level
-The `conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` manage the default configurations for Hadoop jobs. If you need to customize the configs per cube, you can achieve that in a similar way as above, but need to add the prefix `kylin.engine.mr.config-override.`; these configs will be parsed out and then applied when submitting jobs. See two examples below:
-
- * If you want a cube's jobs to get more memory from Yarn, you can define: `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
 * If you want a cube's jobs to go to a different Yarn resource queue, you can define: `kylin.engine.mr.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just a sample, change to your queue name)
-
-## Overwrite default Hive job conf at Cube level
-
-The `conf/kylin_hive_conf.xml` manages the default configurations when running Hive jobs (like creating the intermediate flat hive table). If you need to customize the configs per cube, you can achieve that in a similar way as above, but need to use another prefix `kylin.source.hive.config-override.`; these configs will be parsed out and then applied when running the "hive -e" or "beeline" commands. See the example below:
-
- * If you want hive to go to a different Yarn resource queue, you can define: `kylin.source.hive.config-override.mapreduce.job.queuename=myQueue` ("myQueue" is just a sample, change to your queue name)
-
-## Overwrite default Spark conf at Cube level
-
- The configurations for Spark are managed in `conf/kylin.properties` with the prefix `kylin.engine.spark-conf.`. For example, if you want to use the job queue "myQueue" to run Spark, setting "kylin.engine.spark-conf.spark.yarn.queue=myQueue" will let Spark get "spark.yarn.queue=myQueue" fed in when submitting applications. The parameters can be configured at Cube level, which will override the default values in `conf/kylin.properties`. 
-
-## Enable compression
-
-By default, Kylin does not enable compression. This is not the recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable compression algorithm will reduce the storage overhead, while an unsupported algorithm will break the Kylin job build. There are three kinds of compression used in Kylin: HBase table compression, Hive output compression and MR jobs output compression. 
-
-* HBase table compression
-The compression settings define in `kyiln.properties` by `kylin.hbase.default.compression.codec`, default value is *none*. The valid value includes *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure the selected algorithm is supported on your HBase cluster. Especially for snappy, lzo and lz4, not all Hadoop distributions include these. 
-
-* Hive output compression
-The compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages the Hive default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_hive_conf.xml`. Take snappy compression as an example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-* MR jobs output compression
-The compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the MR default configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take snappy compression as an example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-Compression settings only take effect after restarting the Kylin server instance.
-
-## Allocate more memory to Kylin instance
-
-Open `bin/setenv.sh`, which has two sample settings for the `KYLIN_JVM_SETTINGS` environment variable. The default setting is small (4GB at max); you can comment it out and then un-comment the next line to allocate 16GB:
-
-{% highlight Groff markup %}
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
-{% endhighlight %}
-
-## Enable multiple job engines (HA)
-Since Kylin 2.0, Kylin supports multiple job engines running together, which is more extensible, available and reliable than the default job scheduler.
-
-To enable the distributed job scheduler, you need to set or update these configs in kylin.properties:
-
-```
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock
-```
-Please add all job servers and query servers to `kylin.server.cluster-servers`; an example follows.
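-
-For example, assuming two Kylin nodes named kylin-node1 and kylin-node2 (illustrative hostnames) both running on the default port 7070, the properties could look like:
-
-{% highlight Groff markup %}
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock
-kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
-{% endhighlight %}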
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
-
-
-## Enable email notification
-
-Kylin can send email notifications when a job completes or fails. To enable this, edit `conf/kylin.properties` and set the following parameters (the older `mail.*` property names were replaced by the `kylin.job.notification-*` names listed in the configuration reference):
-{% highlight Groff markup %}
-kylin.job.notification-enabled=true
-kylin.job.notification-mail-host=your-smtp-server
-kylin.job.notification-mail-username=your-smtp-account
-kylin.job.notification-mail-password=your-smtp-pwd
-kylin.job.notification-mail-sender=your-sender-address
-kylin.job.notification-admin-emails=administrator-address
-{% endhighlight %}
-
-Restart the Kylin server for the change to take effect. To disable, set `kylin.job.notification-enabled` back to `false`.
-
-Administrators will get notifications for all jobs. Modelers and analysts need to enter their email addresses into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.
-
-
-## Enable MySQL as Kylin metadata storage (beta)
-
-Kylin can use MySQL as the metadata storage for scenarios where HBase is not the best option. To enable this, perform the following steps:
-
-* Install a MySQL server, e.g. v5.1.17;
-* Create a new MySQL database for Kylin metadata, for example "kylin_metadata";
-* Download and copy MySQL JDBC connector "mysql-connector-java-<version>.jar" to $KYLIN_HOME/ext (if the folder does not exist, create it yourself);
-* Edit `conf/kylin.properties`, set the following parameters:
-{% highlight Groff markup %}
-kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password},driverClassName=com.mysql.jdbc.Driver
-kylin.metadata.jdbc.dialect=mysql
-kylin.metadata.jdbc.json-always-small-cell=true
-kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
-kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
-kylin.metadata.jdbc.max-cell-size=1mb
-{% endhighlight %}
-In "kylin.metadata.url" more configuration items can be added; The `url`, `username`, and `password` are required items. If not configured, the default configuration items will be used:
-{% highlight Groff markup %}
-url: the JDBC connection URL
-username: the JDBC username
-password: the JDBC password; if encryption is selected, put the encrypted password here
-driverClassName: the JDBC driver class name; the default value is com.mysql.jdbc.Driver
-maxActive: the maximum number of database connections; the default value is 5
-maxIdle: the maximum number of idle connections; the default value is 5
-maxWait: the maximum number of milliseconds to wait for a connection; the default value is 1000
-removeAbandoned: whether to automatically reclaim timed-out connections; the default value is true
-removeAbandonedTimeout: the number of seconds before a connection times out; the default is 300
-passwordEncrypted: whether the JDBC password is encrypted; the default is false
-{% endhighlight %}
-
- * You can encrypt your password:
-{% highlight Groff markup %}
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-<version>.jar:kylin-core-common-<version>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-{% endhighlight %}
-{% endhighlight %}
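-
-For example, assuming the command above printed the ciphertext "AbC123==" (an illustrative value), you would put it into the metadata URL and mark it as encrypted via the `passwordEncrypted` item described above:
-
-{% highlight Groff markup %}
-kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username=kylin,password=AbC123==,passwordEncrypted=true,driverClassName=com.mysql.jdbc.Driver
-{% endhighlight %}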
-
-* Start Kylin
-
-**Note: The feature is in beta now.**
-
-## Use SparkSql to create intermediate flat Hive table
-
-**Note: There is an issue when connecting to the thriftserver again; for details, please check [https://issues.apache.org/jira/browse/SPARK-21067](https://issues.apache.org/jira/browse/SPARK-21067)**
-
-Kylin can use SparkSql to create the intermediate flat Hive table. Before enabling this:
-
-- Make sure the following parameters exist in hive-site.xml:
-
-{% highlight Groff markup %}
-
-<property>
-  <name>hive.security.authorization.sqlstd.confwhitelist</name>
-  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
-</property>
-
-<property>
-  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
-  <value>mapred.*|hive.*|mapreduce.*|spark.*</value>
-</property>
-    
-{% endhighlight %}
-- Change `hive.execution.engine` to mr (optional); if you want to use tez, please make sure the tez dependency has been added
-- Copy the hive-site.xml to $SPARK_HOME/conf
-- Make sure the environment variable HADOOP_CONF_DIR has been set
-- Use `sbin/start-thriftserver.sh --master spark://sparkmasterip:sparkmasterport` to start the thriftserver; the port is usually 7077
-- Edit `conf/kylin.properties`, set the following parameters:
-{% highlight Groff markup %}
-kylin.source.hive.enable-sparksql-for-table-ops=true
-kylin.source.hive.sparksql-beeline-shell=/path/to/spark-client/bin/beeline
-kylin.source.hive.sparksql-beeline-params=-n root -u 'jdbc:hive2://thriftserverip:thriftserverport'
-{% endhighlight %}
-
-Restart the Kylin server for the change to take effect. To disable, set `kylin.source.hive.enable-sparksql-for-table-ops` back to `false`.
-
-
diff --git a/website/_docs30/install/configuration.cn.md b/website/_docs30/install/configuration.cn.md
deleted file mode 100644
index 72a1c0b..0000000
--- a/website/_docs30/install/configuration.cn.md
+++ /dev/null
@@ -1,803 +0,0 @@
----
-layout: docs30-cn
-title:  "Kylin Configuration"
-categories: install
-permalink: /cn/docs30/install/configuration.html
----
-
-
-
-- [Configuration Files and Overriding](#kylin-config)
-    - [Kylin Configuration Files](#kylin-config)
-	- [Configuration Overriding](#config-override)
-		- [Project-level Configuration Overriding](#project-config-override)
-		- [Cube-level Configuration Overriding](#cube-config-override)
-		- [MapReduce Configuration Overriding](#mr-config-override)
-		- [Hive Configuration Overriding](#hive-config-override)
-		- [Spark Configuration Overriding](#spark-config-override)
-- [Deployment Configuration](#kylin-deploy)
-    - [Deploy Kylin](#deploy-config)
-	- [Allocate More Memory for Kylin](#kylin-jvm-settings)
-	- [Job Engine HA](#job-engine-ha)
-	- [Job Engine Safemode](#job-engine-safemode)
-	- [Read/Write Separation](#rw-deploy)
-	- [RESTful Webservice](#rest-config)
-- [Metastore Configuration](#kylin_metastore)
-    - [Metadata-related](#metadata)
-    - [MySQL Metastore Configuration (Beta)](#mysql-metastore)
-- [Build Configuration](#kylin-build)
-    - [Hive Client and SparkSQL](#hive-client-and-sparksql)
-    - [JDBC Datasource Configuration](#jdbc-datasource)
-    - [Data Type Precision](#precision-config)
-    - [Cube Design](#cube-config)
-    - [Cube Size Estimation](#cube-estimate)
-	- [Cube Algorithm](#cube-algorithm)
-	- [Auto Merge Segments](#auto-merge)
-	- [Lookup Table Snapshot](#snapshot)
-	- [Cube Build](#cube-build)
-	- [Dictionary-related](#dict-config)
-	- [Deal with Ultra-High-Cardinality Columns](#uhc-config)
-	- [Spark as Build Engine](#spark-cubing)
-	- [Submit Spark Jobs via Livy](#livy-submit-spark-job)
-	- [Spark Dynamic Allocation](#dynamic-allocation)
-	- [Job-related](#job-config)
-	- [Enable Email Notification](#email-notification)
-	- [Enable Cube Planner](#cube-planner)
-    - [HBase Storage](#hbase-config)
-    - [Enable Compression](#compress-config)
-    - [Real-time OLAP](#realtime-olap)
-- [Storage Clean-up Configuration](#storage-clean-up-configuration)
-    - [Storage-clean-up-related](#storage-clean-up-config)
-- [Query Configuration](#kylin-query)
-    - [Query-related](#query-config)
-    - [Fuzzy Query](#fuzzy)
-	- [Query Cache](#cache-config)
-	- [Query Limits](#query-limit)
-	- [Bad Query](#bad-query)
-	- [Query Pushdown](#query-pushdown)
-	- [Query Rewriting](#convert-sql)
-	- [Collect Query Metrics to JMX](#jmx-metrics)
-	- [Collect Query Metrics to dropwizard](#dropwizard-metrics)
-- [Security Configuration](#kylin-security)
-	- [Integrate LDAP for SSO](#ldap-sso)
-	- [Integrate with Apache Ranger](#ranger)
-	- [Enable ZooKeeper ACL](#zookeeper-acl)
-- [Use Memcached as Distributed Query Cache](#distributed-cache)
-
-
-### Configuration Files and Overriding {#kylin-config}
-
-This section introduces Kylin's configuration files and how to override configurations.
-
-
-
-### Kylin Configuration Files	 {#kylin-config-file}
-
-Kylin automatically reads the Hadoop configuration (`core-site.xml`), Hive configuration (`hive-site.xml`) and HBase configuration (`hbase-site.xml`) from the environment. In addition, Kylin's own configuration files live in the `$KYLIN_HOME/conf/` directory:
-
-- `kylin_hive_conf.xml`: contains the configuration for Hive jobs.
-- `kylin_job_conf.xml` & `kylin_job_conf_inmem.xml`: contain the configuration for MapReduce jobs. When running an **In-mem Cubing** job, request more memory for the mapper in `kylin_job_conf_inmem.xml`.
-- `kylin-kafka-consumer.xml`: contains the configuration for Kafka jobs.
-- `kylin-server-log4j.properties`: contains the log configuration for the Kylin server.
-- `kylin-tools-log4j.properties`: contains the log configuration for the Kylin command line.
-- `setenv.sh`: a shell script for setting environment variables; the memory of the Kylin JVM can be adjusted via `KYLIN_JVM_SETTINGS`, and other environment variables such as `KAFKA_HOME` can be set here as well.
-- `kylin.properties`: the global configuration file used by Kylin.
-
-
-
-### Configuration Overriding	{#config-override}
-
-Some of the configuration items in `$KYLIN_HOME/conf/` can be overridden in the Web UI. There are two levels of overriding: **project level** and **Cube level**. The priority order is: Cube-level overrides > project-level overrides > global configuration files.
-
-
-
-### Project-level Configuration Overriding	{#project-config-override}
-
-In the Web UI, click **Manage Project**, select a project, then click **Edit** -> **Project Config** -> **+ Property** to override configurations at the project level, as shown in the figure below:
-![](/images/install/override_config_project.png)
-
-
-
-### Cube-level Configuration Overriding		{#cube-config-override}
-
-In the **Configuration Overwrites** step of the **Cube Designer**, configurations can be added to override them at the Cube level, as shown in the figure below:
-![](/images/install/override_config_cube.png)
-
-The following parameters can be overridden at the Cube level:
-
-- `kylin.cube.size-estimate*`
-- `kylin.cube.algorithm*`
-- `kylin.cube.aggrgroup*`
-- `kylin.metadata.dimension-encoding-max-length`
-- `kylin.cube.max-building-segments`
-- `kylin.cube.is-automerge-enabled`
-- `kylin.job.allow-empty-segment`
-- `kylin.job.sampling-percentage`
-- `kylin.source.hive.redistribute-flat-table`
-- `kylin.engine.spark*`
-- `kylin.query.skip-empty-segments`
-
-
-
-### MapReduce Configuration Overriding	{#mr-config-override}
-
-Kylin supports overriding the parameters in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml` at the project and Cube level, as key-value pairs in the following format:
-`kylin.engine.mr.config-override.<key> = <value>`
- * If jobs should get more memory from YARN, set:
- `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
- * If a Cube's build jobs should use a different YARN resource queue, set:
- `kylin.engine.mr.config-override.mapreduce.job.queuename={queueName}`
-
-
-### Hive Configuration Overriding  {#hive-config-override}
-
-Kylin supports overriding the parameters in `kylin_hive_conf.xml` at the project and Cube level, as key-value pairs in the following format:
-`kylin.source.hive.config-override.<key> = <value>`
-If Hive should use a different YARN resource queue, set:
-`kylin.source.hive.config-override.mapreduce.job.queuename={queueName}`
-
-
-
-### Spark Configuration Overriding   {#spark-config-override}
-
-Kylin supports overriding the Spark parameters in `kylin.properties` at the project and Cube level, as key-value pairs in the following format:
-`kylin.engine.spark-conf.<key> = <value>`
-If Spark should use a different YARN resource queue, set:
-`kylin.engine.spark-conf.spark.yarn.queue={queueName}`
-
-
-
-### Deployment Configuration {#kylin-deploy}
-
-This section introduces configuration related to deploying Kylin.
-
-
-
-### Deploy Kylin  {#deploy-config}
-
-- `kylin.env.hdfs-working-dir`: specifies the HDFS path used by the Kylin service. The default value is `/kylin`. Make sure the user who starts the Kylin instance has permission to read and write this directory.
-- `kylin.env`: specifies the purpose of the Kylin deployment. Optional values include `DEV`, `QA` and `PROD`. The default value is `DEV`; some developer functions are enabled in DEV mode.
-- `kylin.env.zookeeper-base-path`: specifies the ZooKeeper path used by the Kylin service. The default value is `/kylin`.
-- `kylin.env.zookeeper-connect-string`: specifies the ZooKeeper connection string; if empty, HBase's ZooKeeper is used.
-- `kylin.env.hadoop-conf-dir`: specifies the Hadoop configuration file directory; if not specified, `HADOOP_CONF_DIR` is taken from the environment.
-- `kylin.server.mode`: specifies the run mode of the Kylin instance. Optional values include `all`, `job` and `query`; the default is `all`. In job mode the instance only schedules build jobs and serves no queries; in query mode it only serves SQL queries and schedules no build jobs; in all mode it does both.
-- `kylin.server.cluster-name`: specifies the cluster name.
-
-
-
-### Allocate More Memory for Kylin		{#kylin-jvm-settings}
-
-`$KYLIN_HOME/conf/setenv.sh` contains two sample settings for `KYLIN_JVM_SETTINGS`.
-The default setting uses relatively little memory. You can comment it out and uncomment the other line to allocate more memory for the Kylin instance. The default configuration is:
-
-```shell
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=512M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-```
-
-
-
-### Job Engine HA  {#job-engine-ha}
-
-- `kylin.job.scheduler.default=2`: enables the distributed job scheduler.
-- `kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock`: enables the distributed job lock.
-
-> Tip: For more information, see the **Job Engine HA** section in [Deploy in Cluster Mode](/cn/docs/install/kylin_cluster.html).
-
-
-### Job Engine Safemode   {#job-engine-safemode}
-
-Safemode only takes effect with the default scheduler.
-
-- `kylin.job.scheduler.safemode=TRUE`: enables safemode; newly submitted jobs will not be executed.
-- `kylin.job.scheduler.safemode.runable-projects=project1,project2`: a list of projects (multiple allowed) whose jobs may still run in safemode.
-
-### Read/Write Separation   {#rw-deploy}
-
-- `kylin.storage.hbase.cluster-fs`: specifies the HDFS file system of the HBase cluster.
-- `kylin.storage.hbase.cluster-hdfs-config-file`: points to the HDFS configuration file of the HBase cluster.
-
-> Tip: For more information, see [Deploy Apache Kylin with Standalone HBase Cluster](http://kylin.apache.org/blog/2016/06/10/standalone-hbase-cluster/).
-
-
-
-### RESTful Webservice  {#rest-config}
-
-- `kylin.web.timezone`: specifies the time zone used by Kylin's REST service. The default value is GMT+8.
-- `kylin.web.cross-domain-enabled`: whether cross-domain access is supported. The default value is TRUE.
-- `kylin.web.export-allow-admin`: whether administrator users may export information. The default value is TRUE.
-- `kylin.web.export-allow-other`: whether other users may export information. The default value is TRUE.
-- `kylin.web.dashboard-enabled`: whether to enable the Dashboard. The default value is FALSE.
-
-
-
-### Metastore Configuration {#kylin_metastore}
-
-This section introduces configuration related to the Kylin Metastore.
-
-
-
-### Metadata-related {#metadata}
-
-- `kylin.metadata.url`: specifies the metadata store path. The default value is kylin_metadata@hbase.
-- `kylin.metadata.sync-retries`: specifies the number of metadata sync retries. The default value is 3.
-- `kylin.metadata.sync-error-handler`: the default value is `DefaultSyncErrorHandler`.
-- `kylin.metadata.check-copy-on-write`: whether to clear the metadata cache. The default value is `FALSE`.
-- `kylin.metadata.hbase-client-scanner-timeout-period`: the total timeout between an HBase client issuing a scan RPC call and receiving the response. The default value is 10000 (ms).
-- `kylin.metadata.hbase-rpc-timeout`: specifies the timeout for HBase RPC operations. The default value is 5000 (ms).
-- `kylin.metadata.hbase-client-retries-number`: specifies the number of HBase retries. The default value is 1.
-- `kylin.metadata.resource-store-provider.jdbc`: specifies the class used by JDBC. The default value is `org.apache.kylin.common.persistence.JDBCResourceStore`.
-
-
-
-### MySQL Metastore Configuration (Beta) {#mysql-metastore}
-
-> **Note**: This feature is still being tested; use it with caution.
-
-- `kylin.metadata.url`: specifies the metadata path.
-- `kylin.metadata.jdbc.dialect`: specifies the JDBC dialect.
-- `kylin.metadata.jdbc.json-always-small-cell`: the default value is TRUE.
-- `kylin.metadata.jdbc.small-cell-meta-size-warning-threshold`: the default value is 100 (MB).
-- `kylin.metadata.jdbc.small-cell-meta-size-error-threshold`: the default value is 1 (GB).
-- `kylin.metadata.jdbc.max-cell-size`: the default value is 1 (MB).
-- `kylin.metadata.resource-store-provider.jdbc`: specifies the class used by JDBC. The default value is org.apache.kylin.common.persistence.JDBCResourceStore.
-
-> Tip: For more information, see [MySQL-based Metastore Configuration](/cn/docs/tutorial/mysql_metastore.html).
-
-
-
-### Build Configuration {#kylin-build}
-
-This section introduces configuration related to Kylin data modeling and build.
-
-
-
-### Hive Client and SparkSQL {#hive-client-and-sparksql}
-
-- `kylin.source.hive.client`: specifies the Hive command-line type. Optional values include cli and beeline. The default value is cli.
-- `kylin.source.hive.beeline-shell`: specifies the absolute path of the Beeline shell. The default value is beeline.
-- `kylin.source.hive.beeline-params`: when Beeline is used as the Hive client tool, configure this parameter to pass additional options to Beeline.
-- `kylin.source.hive.enable-sparksql-for-table-ops`: the default value is FALSE; set it to TRUE when using SparkSQL.
-- `kylin.source.hive.sparksql-beeline-shell`: when SparkSQL Beeline is used as the Hive client tool, set this parameter to /path/to/spark-client/bin/beeline.
-- `kylin.source.hive.sparksql-beeline-params`: when SparkSQL Beeline is used as the Hive client tool, configure this parameter to pass additional options.
-
-
-
-
-### JDBC Datasource Configuration  {#jdbc-datasource}
-
-- `kylin.source.default`: the type of data source used via JDBC.
-- `kylin.source.jdbc.connection-url`: the JDBC connection string.
-- `kylin.source.jdbc.driver`: the JDBC driver class name.
-- `kylin.source.jdbc.dialect`: the JDBC dialect. The default value is default.
-- `kylin.source.jdbc.user`: the JDBC connection username.
-- `kylin.source.jdbc.pass`: the JDBC connection password.
-- `kylin.source.jdbc.sqoop-home`: the Sqoop installation path.
-- `kylin.source.jdbc.sqoop-mapper-num`: specifies how many slices the data should be split into; Sqoop runs one mapper per slice. The default value is 4.
-- `kylin.source.jdbc.field-delimiter`: specifies the field delimiter. The default value is \
-
-> Tip: For more information, see [Setup JDBC Data Source](/cn/docs/tutorial/setup_jdbc_datasource.html).
-
-
-
-
-### Data Type Precision {#precision-config}
-
-- `kylin.source.hive.default-varchar-precision`: specifies the maximum length of varchar fields. The default value is 256.
-- `kylin.source.hive.default-char-precision`: specifies the maximum length of char fields. The default value is 255.
-- `kylin.source.hive.default-decimal-precision`: specifies the precision of decimal fields. The default value is 19.
-- `kylin.source.hive.default-decimal-scale`: specifies the scale of decimal fields. The default value is 4.
-
-
-
-### Cube Design {#cube-config}
-
-- `kylin.cube.ignore-signature-inconsistency`: the signature in the Cube desc protects the Cube from being changed into a corrupt state. The default value is FALSE.
-- `kylin.cube.aggrgroup.max-combination`: specifies the maximum number of cuboid combinations in an aggregation group. The default value is 32768.
-- `kylin.cube.aggrgroup.is-mandatory-only-valid`: whether to allow a Cube that contains only the Base Cuboid. The default value is FALSE; set it to TRUE when using Spark Cubing.
-- `kylin.cube.rowkey.max-size`: specifies the maximum number of columns that can be set as Rowkeys. The default value is 63, and it cannot exceed 63.
-- `kylin.cube.allow-appear-in-multiple-projects`: whether a Cube may appear in multiple projects.
-- `kylin.cube.gtscanrequest-serialization-level`: the default value is 1.
-- `kylin.metadata.dimension-encoding-max-length`: specifies the maximum length of a dimension used in Rowkeys with fix_length encoding. The default value is 256.
-- `kylin.web.hide-measures`: hides measures that may not be needed. The default value is RAW.
-
-
-
-
-### Cube Size Estimation {#cube-estimate}
-
-Both Kylin and HBase use compression when writing to disk, so Kylin multiplies the raw size by a ratio to estimate the Cube size.
-
-- `kylin.cube.size-estimate-ratio`: ordinary Cubes. The default value is 0.25.
-- `kylin.cube.size-estimate-memhungry-ratio`: deprecated. The default value is 0.05.
-- `kylin.cube.size-estimate-countdistinct-ratio`: size estimate for Cubes with precise count-distinct measures. The default value is 0.5.
-- `kylin.cube.size-estimate-topn-ratio`: size estimate for Cubes with TopN measures. The default value is 0.5.
-
-
-
-### Cube Algorithm {#cube-algorithm}
-
-- `kylin.cube.algorithm`: specifies the Cube build algorithm. Optional values include `auto`, `layer` and `inmem`. The default value is auto, meaning Kylin dynamically picks an algorithm (layer or inmem) based on sampled data; users who know Kylin, their data and their cluster well can set their preferred algorithm directly.
-- `kylin.cube.algorithm.layer-or-inmem-threshold`: the default value is 7.
-- `kylin.cube.algorithm.inmem-split-limit`: the default value is 500.
-- `kylin.cube.algorithm.inmem-concurrent-threads`: the default value is 1.
-- `kylin.job.sampling-percentage`: specifies the data sampling percentage. The default value is 100.
-
-
-
-### Auto Merge Segments {#auto-merge}
-
-- `kylin.cube.is-automerge-enabled`: whether to enable auto-merge. The default value is TRUE. When set to FALSE, auto-merge is turned off; even if auto-merge thresholds are enabled in the Cube settings, no merge job will be triggered.
- 
-
-
-### Lookup Table Snapshot   {#snapshot}
-
-- `kylin.snapshot.max-mb`: the upper limit of a lookup table snapshot's size. The default value is 300 (MB).
-- `kylin.snapshot.max-cache-entry`: the maximum number of snapshots kept in the cache. The default value is 500.
-- `kylin.snapshot.ext.shard-mb`: the HBase shard size for storing lookup table snapshots. The default value is 500 (MB).
-- `kylin.snapshot.ext.local.cache.path`: the local cache path. The default value is lookup_cache.
-- `kylin.snapshot.ext.local.cache.max-size-gb`: the local lookup table snapshot cache size. The default value is 200 (GB).
-
-
-
-### Cube Build {#cube-build}
-
-- `kylin.storage.default`: specifies the default storage engine. The default value is 2, i.e. HBase.
-- `kylin.source.hive.keep-flat-table`: whether to keep the Hive intermediate table after the build. The default value is FALSE.
-- `kylin.source.hive.database-for-flat-table`: specifies the Hive database that stores the Hive intermediate tables. The default value is default; make sure the user who starts the Kylin instance has permission to operate on that database.
-- `kylin.source.hive.flat-table-storage-format`: specifies the storage format of the Hive intermediate tables. The default value is SEQUENCEFILE.
-- `kylin.source.hive.flat-table-field-delimiter`: specifies the delimiter of the Hive intermediate tables. The default value is \u001F.
-- `kylin.source.hive.intermediate-table-prefix`: specifies the table name prefix of the Hive intermediate tables. The default value is kylin\_intermediate\_.
-- `kylin.source.hive.redistribute-flat-table`: whether to redistribute the Hive flat table. The default value is TRUE.
-- `kylin.source.hive.redistribute-column-count`: the number of columns to redistribute by. The default value is 3.
-- `kylin.source.hive.table-dir-create-first`: the default value is FALSE.
-- `kylin.storage.partition.aggr-spill-enabled`: the default value is TRUE.
-- `kylin.engine.mr.lib-dir`: specifies the path of the jar packages used by MapReduce jobs.
-- `kylin.engine.mr.reduce-input-mb`: before a MapReduce job starts, the total amount of data the reducers will receive is estimated from the input and divided by this value to derive the reducer count. The default value is 500 (MB).
-- `kylin.engine.mr.reduce-count-ratio`: used to estimate the reducer count. The default value is 1.0.
-- `kylin.engine.mr.min-reducer-number`: the minimum number of reducers in a MapReduce job. The default value is 1.
-- `kylin.engine.mr.max-reducer-number`: the maximum number of reducers in a MapReduce job. The default value is 500.
-- `kylin.engine.mr.mapper-input-rows`: the number of rows each mapper handles. The default value is 1000000; lowering it starts more mappers.
-- `kylin.engine.mr.max-cuboid-stats-calculator-number`: the number of threads used to calculate Cube statistics. The default value is 1.
-- `kylin.engine.mr.build-dict-in-reducer`: whether to build dictionaries in the reduce phase of the build step **Extract Fact Table Distinct Columns**. The default value is TRUE.
-- `kylin.engine.mr.yarn-check-interval-seconds`: how often the build engine checks the status of Hadoop jobs. The default value is 10 (s).
-- `kylin.engine.mr.use-local-classpath`: whether to use the local MapReduce application classpath. The default value is TRUE.
-
-
-
-### Dictionary-related  {#dict-config}
-
-- `kylin.dictionary.use-forest-trie`: the default value is TRUE.
-- `kylin.dictionary.forest-trie-max-mb`: the default value is 500.
-- `kylin.dictionary.max-cache-entry`: the default value is 3000.
-- `kylin.dictionary.growing-enabled`: the default value is FALSE.
-- `kylin.dictionary.append-entry-size`: the default value is 10000000.
-- `kylin.dictionary.append-max-versions`: the default value is 3.
-- `kylin.dictionary.append-version-ttl`: the default value is 259200000.
-- `kylin.dictionary.resuable`: whether to reuse dictionaries. The default value is FALSE.
-- `kylin.dictionary.shrunken-from-global-enabled`: whether to shrink the global dictionary. The default value is TRUE.
-
-
-
-### Deal with Ultra-High-Cardinality Columns {#uhc-config}
-
-By default, the build step **Extract Fact Table Distinct Columns** assigns one reducer per column. For ultra-high-cardinality (UHC) columns, the reducer count can be increased with the following parameters:
-
-- `kylin.engine.mr.build-uhc-dict-in-additional-step`: the default value is FALSE; set it to TRUE.
-- `kylin.engine.mr.uhc-reducer-count`: the default value is 1; it can be set to 5, i.e. 5 reducers for each UHC column.
-
-
-
-### Spark as Build Engine  {#spark-cubing}
-
-- `kylin.engine.spark-conf.spark.master`: specifies the Spark master. The default value is yarn.
-- `kylin.engine.spark-conf.spark.submit.deployMode`: specifies the deploy mode of Spark on YARN. The default value is cluster.
-- `kylin.engine.spark-conf.spark.yarn.queue`: specifies the Spark resource queue. The default value is default.
-- `kylin.engine.spark-conf.spark.driver.memory`: specifies the Spark driver memory. The default value is 2G.
-- `kylin.engine.spark-conf.spark.executor.memory`: specifies the Spark executor memory. The default value is 4G.
-- `kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead`: specifies the Spark executor off-heap memory. The default value is 1024 (MB).
-- `kylin.engine.spark-conf.spark.executor.cores`: specifies the number of cores available to each Spark executor. The default value is 1.
-- `kylin.engine.spark-conf.spark.network.timeout`: specifies the Spark network timeout, 600.
-- `kylin.engine.spark-conf.spark.executor.instances`: specifies the number of Spark executors per application. The default value is 1.
-- `kylin.engine.spark-conf.spark.eventLog.enabled`: whether to record Spark events. The default value is TRUE.
-- `kylin.engine.spark-conf.spark.hadoop.dfs.replication`: the HDFS replication factor. The default value is 2.
-- `kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress`: whether to compress the output. The default value is TRUE.
-- `kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec`: the compression codec used for the output. The default value is org.apache.hadoop.io.compress.DefaultCodec.
-- `kylin.engine.spark.rdd-partition-cut-mb`: Kylin uses this size to split partitions. The default value is 10 (MB); it can be overridden at the Cube level with a larger value to reduce the number of partitions.
-- `kylin.engine.spark.min-partition`: the minimum number of partitions. The default value is 1.
-- `kylin.engine.spark.max-partition`: the maximum number of partitions. The default value is 5000.
-- `kylin.engine.spark.storage-level`: the RDD partition cache level. The default value is MEMORY_AND_DISK_SER.
-- `kylin.engine.spark-conf-mergedict.spark.executor.memory`: requests more memory for merging dictionaries. The default value is 6G.
-- `kylin.engine.spark-conf-mergedict.spark.memory.fraction`: the fraction of memory reserved for the system. The default value is 0.2.
-
-> Tip: For more information, see [Build Cube with Spark](/cn/docs/tutorial/cube_spark.html).
-
-
-
-### Submit Spark Jobs via Livy {#livy-submit-spark-job}
-
-- `kylin.engine.livy-conf.livy-enabled`: whether to submit Spark jobs via Livy. The default value is *FALSE*.
-- `kylin.engine.livy-conf.livy-url`: specifies the Livy URL, e.g. *http://127.0.0.1:8998*.
-- `kylin.engine.livy-conf.livy-key.*`: specifies name-key configurations for Livy, e.g. *kylin.engine.livy-conf.livy-key.name=kylin-livy-1*.
-- `kylin.engine.livy-conf.livy-arr.*`: specifies array-type configurations for Livy, comma separated, e.g. *kylin.engine.livy-conf.livy-arr.jars=hdfs://your_self_path/hbase-common-1.4.8.jar,hdfs://your_self_path/hbase-server-1.4.8.jar,hdfs://your_self_path/hbase-client-1.4.8.jar*.
-- `kylin.engine.livy-conf.livy-map.*`: specifies Spark configurations, e.g. *kylin.engine.livy-conf.livy-map.spark.executor.instances=10*.
-
-> Tip: For more information, see the [Apache Livy REST API](http://livy.incubator.apache.org/docs/latest/rest-api.html). A sample configuration follows.
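-
-As a sketch, a minimal Livy setup could combine the parameters above like this (the URL and values are taken from the examples above and are placeholders for your environment):
-
-```properties
-kylin.engine.livy-conf.livy-enabled=true
-kylin.engine.livy-conf.livy-url=http://127.0.0.1:8998
-kylin.engine.livy-conf.livy-key.name=kylin-livy-1
-kylin.engine.livy-conf.livy-map.spark.executor.instances=10
-```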
-
-
-### Spark Dynamic Allocation {#dynamic-allocation}
-
-- `kylin.engine.spark-conf.spark.shuffle.service.enabled`: whether to enable the shuffle service.
-- `kylin.engine.spark-conf.spark.dynamicAllocation.enabled`: whether to enable Spark dynamic resource allocation.
-- `kylin.engine.spark-conf.spark.dynamicAllocation.initialExecutors`: the initial number of executors requested when starting again after all executors have been removed.
-- `kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors`: the minimum number of executors to retain.
-- `kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors`: the maximum number of executors to request.
-- `kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout`: an executor idle for longer than this is removed unless it holds cached data. The default value is 60 (s).
-
-> Tip: For more information, see [Dynamic Resource Allocation](http://spark.apache.org/docs/1.6.2/job-scheduling.html#dynamic-resource-allocation). A sample configuration follows.
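-
-For example, to let Spark scale executors between 1 and 20 (the bounds are illustrative, not recommendations), the parameters above could be combined as:
-
-```properties
-# The external shuffle service must also be enabled on the YARN NodeManagers
-kylin.engine.spark-conf.spark.shuffle.service.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
-kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=20
-kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout=60
-```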
-
-
-
-### Job-related {#job-config}
-
-- `kylin.job.log-dir`: the default value is /tmp/kylin/logs.
-- `kylin.job.allow-empty-segment`: whether to tolerate an empty data source. The default value is TRUE.
-- `kylin.job.max-concurrent-jobs`: the maximum number of concurrent build jobs. The default value is 10.
-- `kylin.job.retry`: the number of retries after a build job fails. The default value is 0.
-- `kylin.job.retry-interval`: the interval between retries, in milliseconds. The default value is 30000.
-- `kylin.job.scheduler.priority-considered`: whether to consider job priority. The default value is FALSE.
-- `kylin.job.scheduler.priority-bar-fetch-from-queue`: specifies the interval for fetching jobs from the priority queue. The default value is 20 (s).
-- `kylin.job.scheduler.poll-interval-second`: the interval for fetching jobs from the queue. The default value is 30 (s).
-- `kylin.job.error-record-threshold`: specifies the threshold at which a job reports errors. The default value is 0.
-- `kylin.job.cube-auto-ready-enabled`: whether to enable the Cube automatically after a build. The default value is TRUE.
-- `kylin.cube.max-building-segments`: the maximum number of concurrent builds for the same Cube. The default value is 10.
-
-
-
-### Enable Email Notification		{#email-notification}
-
-- `kylin.job.notification-enabled`: whether to send email notifications when a job succeeds or fails. The default value is FALSE.
-- `kylin.job.notification-mail-enable-starttls`: whether to enable STARTTLS. The default value is FALSE.
-- `kylin.job.notification-mail-host`: specifies the SMTP server address.
-- `kylin.job.notification-mail-port`: specifies the SMTP server port. The default value is 25.
-- `kylin.job.notification-mail-username`: specifies the mail login username.
-- `kylin.job.notification-mail-password`: specifies the mail username's password.
-- `kylin.job.notification-mail-sender`: specifies the sender's email address.
-- `kylin.job.notification-admin-emails`: specifies the administrator email addresses for notifications.
-
-
-
-### Enable Cube Planner {#cube-planner}
-
-- `kylin.cube.cubeplanner.enabled`: whether to enable Cube Planner. The default value is TRUE.
-- `kylin.server.query-metrics2-enabled`: the default value is TRUE.
-- `kylin.metrics.reporter-query-enabled`: the default value is TRUE.
-- `kylin.metrics.reporter-job-enabled`: the default value is TRUE.
-- `kylin.metrics.monitor-enabled`: the default value is TRUE.
-- `kylin.cube.cubeplanner.enabled-for-existing-cube`: whether to enable Cube Planner for existing Cubes. The default value is TRUE.
-- `kylin.cube.cubeplanner.algorithm-threshold-greedy`: the default value is 8.
-- `kylin.cube.cubeplanner.expansion-threshold`: the default value is 15.0.
-- `kylin.cube.cubeplanner.recommend-cache-max-size`: the default value is 200.
-- `kylin.cube.cubeplanner.query-uncertainty-ratio`: the default value is 0.1.
-- `kylin.cube.cubeplanner.bpus-min-benefit-ratio`: the default value is 0.01.
-- `kylin.cube.cubeplanner.algorithm-threshold-genetic`: the default value is 23.
-
-
-> Tip: For more information, see [Use Cube Planner](/cn/docs/tutorial/use_cube_planner.html).
-
-
-
-### HBase Storage   {#hbase-config}
-
-- `kylin.storage.hbase.table-name-prefix`: the default value is KYLIN\_.
-- `kylin.storage.hbase.namespace`: specifies the default HBase namespace. The default value is default.
-- `kylin.storage.hbase.coprocessor-local-jar`: points to the jar package of the HBase coprocessor.
-- `kylin.storage.hbase.coprocessor-mem-gb`: sets the HBase coprocessor memory. The default value is 3.0 (GB).
-- `kylin.storage.hbase.run-local-coprocessor`: whether to run the local HBase coprocessor. The default value is FALSE.
-- `kylin.storage.hbase.coprocessor-timeout-seconds`: sets the timeout. The default value is 0.
-- `kylin.storage.hbase.region-cut-gb`: the size of a single region. The default value is 5.0.
-- `kylin.storage.hbase.min-region-count`: specifies the minimum number of regions. The default value is 1.
-- `kylin.storage.hbase.max-region-count`: specifies the maximum number of regions. The default value is 500.
-- `kylin.storage.hbase.hfile-size-gb`: specifies the HFile size. The default value is 2.0 (GB).
-- `kylin.storage.hbase.max-scan-result-bytes`: specifies the maximum size of a scan result. The default value is 5242880 (bytes), i.e. 5 (MB).
-- `kylin.storage.hbase.compression-codec`: whether to compress. The default value is none, i.e. no compression.
-- `kylin.storage.hbase.rowkey-encoding`: specifies the encoding of the Rowkey. The default value is FAST_DIFF.
-- `kylin.storage.hbase.block-size-bytes`: the default value is 1048576.
-- `kylin.storage.hbase.small-family-block-size-bytes`: specifies the block size. The default value is 65536 (bytes), i.e. 64 (KB).
-- `kylin.storage.hbase.owner-tag`: specifies the owner of the Kylin deployment. The default value is whoami@kylin.apache.org.
-- `kylin.storage.hbase.endpoint-compress-result`: whether to return compressed results. The default value is TRUE.
-- `kylin.storage.hbase.max-hconnection-threads`: specifies the maximum number of connection threads. The default value is 2048.
-- `kylin.storage.hbase.core-hconnection-threads`: specifies the number of core connection threads. The default value is 2048.
-- `kylin.storage.hbase.hconnection-threads-alive-seconds`: specifies the thread keep-alive time. The default value is 60.
-- `kylin.storage.hbase.replication-scope`: specifies the cluster replication scope. The default value is 0.
-- `kylin.storage.hbase.scan-cache-rows`: specifies the number of rows cached per scan. The default value is 1024.
-
-
-
-### Enable Compression		{#compress-config}
-
-Kylin does not enable compression by default. An unsupported compression algorithm will break build jobs, but a suitable one reduces storage and network overhead and improves overall efficiency.
-Kylin can use three kinds of compression: HBase table compression, Hive output compression and MapReduce job output compression.
-
-> **Note**: Compression settings only take effect after restarting the Kylin instance.
-
-* HBase table compression
-
-This is configured via `kylin.storage.hbase.compression-codec` in `kylin.properties`. Optional values include `none`, `snappy`, `lzo`, `gzip` and `lz4`. The default value is none, i.e. data is not compressed.
-
-> **Note**: Before changing the compression algorithm, make sure your HBase cluster supports the selected algorithm.
-
-
-* Hive output compression
-
-This is configured via `kylin_hive_conf.xml`. The default configuration is empty, meaning Hive's own defaults are used. To override, add (or replace) the following properties in `kylin_hive_conf.xml`, taking SNAPPY compression as an example:
-
-```xml
-<property>
-	<name>mapreduce.map.output.compress.codec</name>
-	<value>org.apache.hadoop.io.compress.SnappyCodec</value>
-	<description></description>
-</property>
-<property>
-	<name>mapreduce.output.fileoutputformat.compress.codec</name>
-	<value>org.apache.hadoop.io.compress.SnappyCodec</value>
-	<description></description>
-</property>
-```
-
-* MapReduce job output compression
-
-This is configured via `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default value is empty, meaning MapReduce's own defaults are used. To override, add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`, taking SNAPPY compression as an example:
-
-```xml
-<property>
-	<name>mapreduce.map.output.compress.codec</name>
-	<value>org.apache.hadoop.io.compress.SnappyCodec</value>
-	<description></description>
-</property>
-<property>
-	<name>mapreduce.output.fileoutputformat.compress.codec</name>
-	<value>org.apache.hadoop.io.compress.SnappyCodec</value>
-	<description></description>
-</property>
-```
-
-
-
-### Real-time OLAP    {#realtime-olap}
-- `kylin.stream.job.dfs.block.size`: specifies the HDFS block size used by the streaming Base Cuboid build job. The default value is *16M*.
-- `kylin.stream.index.path`: specifies the location of the local segment cache. The default value is *stream_index*.
-- `kylin.stream.cube-num-of-consumer-tasks`: specifies the number of replica sets sharing one topic partition, which affects how many partitions are assigned to different replica sets. The default value is *3*.
-- `kylin.stream.cube.window`: specifies the duration of each segment, in seconds. The default value is *3600*.
-- `kylin.stream.cube.duration`: specifies how long a segment waits before changing from active to IMMUTABLE, in seconds. The default value is *7200*.
-- `kylin.stream.cube.duration.max`: the maximum time a segment can stay active, in seconds. The default value is *43200*.
-- `kylin.stream.checkpoint.file.max.num`: specifies the maximum number of checkpoint files per Cube. The default value is *5*.
-- `kylin.stream.index.checkpoint.intervals`: specifies the interval between two checkpoints. The default value is *300*.
-- `kylin.stream.index.maxrows`: specifies the maximum number of events cached in heap/memory. The default value is *50000*.
-- `kylin.stream.immutable.segments.max.num`: specifies the maximum number of IMMUTABLE segments per Cube in the current receiver; beyond this, consumption of the topic is paused. The default value is *100*.
-- `kylin.stream.consume.offsets.latest`: whether to consume from the latest offset. The default value is *true*.
-- `kylin.stream.node`: specifies the coordinator/receiver node, in the form host:port. The default value is *null*.
-- `kylin.stream.metadata.store.type`: specifies the location of the metadata store. The default value is *zk*.
-- `kylin.stream.segment.retention.policy`: specifies how the local segment cache is handled when a segment becomes IMMUTABLE. Optional values include `purge` and `fullBuild`. `purge` means the locally cached segment data is deleted when the segment becomes IMMUTABLE; `fullBuild` means it is uploaded to HDFS instead. The default value is *fullBuild*.
-- `kylin.stream.assigner`: specifies the implementation class used to assign topic partitions to replica sets; the class implements `org.apache.kylin.stream.coordinator.assign.Assigner`. The default value is *DefaultAssigner*.
-- `kylin.stream.coordinator.client.timeout.millsecond`: specifies the timeout for connecting to the coordinator client. The default value is *5000*.
-- `kylin.stream.receiver.client.timeout.millsecond`: specifies the timeout for connecting to the receiver client. The default value is *5000*.
-- `kylin.stream.receiver.http.max.threads`: specifies the maximum number of threads for connecting to the receiver. The default value is *200*.
-- `kylin.stream.receiver.http.min.threads`: specifies the minimum number of threads for connecting to the receiver. The default value is *10*.
-- `kylin.stream.receiver.query-core-threads`: specifies the number of query threads in the current receiver. The default value is *50*.
-- `kylin.stream.receiver.query-max-threads`: specifies the maximum number of query threads in the current receiver. The default value is *200*.
-- `kylin.stream.receiver.use-threads-per-query`: specifies the number of threads used per query. The default value is *8*.
-- `kylin.stream.build.additional.cuboids`: whether to build cuboids other than the Base Cuboid, i.e. the aggregations of the mandatory dimensions selected on the Cube's Advanced Setting page. The default value is *false*, i.e. only the Base Cuboid is built.
-- `kylin.stream.segment-max-fragments`: specifies the maximum number of fragments kept per segment. The default value is *50*.
-- `kylin.stream.segment-min-fragments`: specifies the minimum number of fragments kept per segment. The default value is *15*.
-- `kylin.stream.max-fragment-size-mb`: specifies the maximum size of each fragment file. The default value is *300*.
-- `kylin.stream.fragments-auto-merge-enable`: whether to enable automatic merging of fragment files. The default value is *true*.
-
-> Tip: For more information, see [Real-time OLAP](http://kylin.apache.org/docs30/tutorial/real_time_olap.html). A sample configuration follows.
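-
-As a rough sketch, a streaming receiver's kylin.properties could carry entries like the following (the hostname and port are placeholders; pick the retention policy that fits your needs):
-
-```properties
-# Identity of this coordinator/receiver node, in host:port form (placeholder values)
-kylin.stream.node=receiver-host:9090
-# Keep the streaming cluster metadata in ZooKeeper
-kylin.stream.metadata.store.type=zk
-# Upload locally cached segment data to HDFS once a segment turns IMMUTABLE
-kylin.stream.segment.retention.policy=fullBuild
-```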
-
-
-
-### Storage Clean-up Configuration  {#storage-clean-up-configuration}
-
-This section introduces configuration related to storage clean-up.
-
-
-
-### Storage-clean-up-related {#storage-clean-up-config}
-
-- `kylin.storage.clean-after-delete-operation`: whether to clean segment data in HBase and HDFS. The default value is FALSE.
-
-
-
-### Query Configuration    {#kylin-query}
-
-This section introduces configuration related to Kylin queries.
-
-
-
-### Query-related   {#query-config}
-
-- `kylin.query.skip-empty-segments`: whether queries skip segments with no data. The default value is TRUE.
-- `kylin.query.large-query-threshold`: specifies the maximum number of rows returned. The default value is 1000000.
-- `kylin.query.security-enabled`: whether to check the ACL when querying. The default value is TRUE.
-- `kylin.query.security.table-acl-enabled`: whether to check the table ACL when querying. The default value is TRUE.
-- `kylin.query.calcite.extras-props.conformance`: whether to parse strictly. The default value is LENIENT.
-- `kylin.query.calcite.extras-props.caseSensitive`: whether identifiers are case sensitive. The default value is TRUE.
-- `kylin.query.calcite.extras-props.unquotedCasing`: whether to convert the case of query statements. Optional values include `UNCHANGED`, `TO_UPPER` and `TO_LOWER`. The default value is `TO_UPPER`, i.e. all uppercase.
-- `kylin.query.calcite.extras-props.quoting`: the quoting style. Optional values include `DOUBLE_QUOTE`, `BACK_TICK` and `BRACKET`. The default value is `DOUBLE_QUOTE`.
-- `kylin.query.statement-cache-max-num`: the maximum number of cached PreparedStatements. The default value is 50000.
-- `kylin.query.statement-cache-max-num-per-key`: the maximum number of cached PreparedStatements per key. The default value is 50.
-- `kylin.query.enable-dict-enumerator`: whether to enable the dictionary enumerator. The default value is FALSE.
-- `kylin.query.enable-dynamic-column`: whether to enable dynamic columns. The default value is FALSE; when set to TRUE, the number of non-NULL rows of a column can be queried.
-
-
-
-### Fuzzy Query {#fuzzy}
-
-- `kylin.storage.hbase.max-fuzzykey-scan`: sets the threshold of fuzzy keys to scan; beyond this value fuzzy keys are no longer scanned. The default value is 200.
-- `kylin.storage.hbase.max-fuzzykey-scan-split`: splits a large fuzzy key set to reduce the number of fuzzy keys per scan. The default value is 1.
-- `kylin.storage.hbase.max-visit-scanrange`: the default value is 1000000.
-
-
-
-### Query Cache {#cache-config}
-
-- `kylin.query.cache-enabled`: whether to enable the cache. The default value is TRUE.
-- `kylin.query.cache-threshold-duration`: queries whose latency exceeds this threshold are saved into the cache. The default value is 2000 (ms).
-- `kylin.query.cache-threshold-scan-count`: queries that scan more rows than this threshold are saved into the cache. The default value is 10240 (rows).
-- `kylin.query.cache-threshold-scan-bytes`: queries that scan more bytes than this threshold are saved into the cache. The default value is 1048576 (bytes).
-
-
-
-### Query Limits {#query-limit}
-
-- `kylin.query.timeout-seconds`: sets the query timeout. The default value is 0, i.e. no limit; values smaller than 60 are forced up to 60 seconds.
-- `kylin.query.timeout-seconds-coefficient`: sets the coefficient of the query timeout seconds. The default value is 0.5.
-- `kylin.query.max-scan-bytes`: sets the upper limit of bytes scanned by a query. The default value is 0, i.e. no limit.
-- `kylin.storage.partition.max-scan-bytes`: sets the maximum number of bytes scanned by a query. The default value is 3221225472 (bytes), i.e. 3GB.
-- `kylin.query.max-return-rows`: specifies the upper limit of rows returned by a query. The default value is 5000000.
-
-
-
-### Bad Query		{#bad-query}
-
-`kylin.query.timeout-seconds` is either 0 or greater than 60, and `kylin.query.timeout-seconds-coefficient` is capped at the maximum double value. The product of the two is the interval of the bad-query check; if the product is 0, the interval is set to 60 seconds, and its maximum is the maximum int value. For example, a timeout of 300 seconds with a coefficient of 0.5 means bad queries are checked every 150 seconds.
-
-- `kylin.query.badquery-stacktrace-depth`: sets the depth of the stack trace. The default value is 10.
-- `kylin.query.badquery-history-number`: sets the number of historical bad queries to show. The default value is 50.
-- `kylin.query.badquery-alerting-seconds`: the default value is 90. If a query runs longer than this, its information (duration, project, thread, user, query id) is logged first; whether the recent query is also kept depends on another parameter. A stack trace is then logged, with a depth controlled by another parameter, to ease later analysis.
-- `kylin.query.badquery-persistent-enabled`: the default value is true; recent bad queries are kept, and this cannot be overridden at the Cube level.
-
-
-
-
-### Query Pushdown		{#query-pushdown}
-
-- `kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl`: uncomment this configuration to enable query pushdown.
-- `kylin.query.pushdown.jdbc.url`: the JDBC URL.
-- `kylin.query.pushdown.jdbc.driver`: the JDBC driver class name. The default value is `org.apache.hive.jdbc.HiveDriver`.
-- `kylin.query.pushdown.jdbc.username`: the username of the JDBC database. The default value is `hive`.
-- `kylin.query.pushdown.jdbc.password`: the password of the JDBC database.
-- `kylin.query.pushdown.jdbc.pool-max-total`: the maximum number of connections in the JDBC connection pool. The default value is 8.
-- `kylin.query.pushdown.jdbc.pool-max-idle`: the maximum number of idle connections in the JDBC connection pool. The default value is 8.
-- `kylin.query.pushdown.jdbc.pool-min-idle`: the default value is 0.
-- `kylin.query.pushdown.update-enabled`: whether to enable update in query pushdown. The default value is FALSE.
-- `kylin.query.pushdown.cache-enabled`: whether to cache pushdown queries to improve the performance of identical queries. The default value is FALSE.
-
-> Tip: For more information, see [Query Pushdown](/cn/docs/tutorial/query_pushdown.html). A sample configuration follows.
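-
-As a sketch, pushing unmatched queries down to Hive via JDBC could be configured as follows (the connection details are placeholders for your environment):
-
-```properties
-kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
-kylin.query.pushdown.jdbc.url=jdbc:hive2://localhost:10000/default
-kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
-kylin.query.pushdown.jdbc.username=hive
-kylin.query.pushdown.jdbc.password=
-```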
-
-
-
-### Query Rewriting {#convert-sql}
-
-- `kylin.query.force-limit`: forces a LIMIT clause onto "select *" statements to shorten the data return time. The default value is -1; setting it to a positive integer, e.g. 1000, applies that value to the LIMIT clause, so a query is finally converted into, e.g., select * from fact_table limit 1000.
-- `kylin.storage.limit-push-down-enabled`: the default value is *TRUE*; set it to *FALSE* to turn off limit push-down in the storage layer.
-- `kylin.query.flat-filter-max-children`: specifies the maximum number of filters when flattening a filter. The default value is 500000.
-
-
-
-### Collect Query Metrics to JMX {#jmx-metrics}
-
-- `kylin.server.query-metrics-enabled`: the default value is FALSE; set it to TRUE to collect query metrics to JMX.
-
-> Tip: For more information, see [JMX](https://www.oracle.com/technetwork/java/javase/tech/javamanagement-140525.html).
-
-
-
-### Collect Query Metrics to dropwizard {#dropwizard-metrics}
-
-- `kylin.server.query-metrics2-enabled`: the default value is FALSE; set it to TRUE to collect query metrics to dropwizard.
-
-> Tip: For more information, see [dropwizard](https://metrics.dropwizard.io/4.0.0/).
-
-
-
-### Security Configuration {#kylin-security}
-
-This section introduces configuration related to Kylin security.
-
-
-
-### Integrate LDAP for SSO	{#ldap-sso}
-
-- `kylin.security.profile`: the authentication method. Optional values include `ldap`, `testing` and `saml`. Set it to `ldap` when integrating LDAP for single sign-on.
-- `kylin.security.ldap.connection-server`: the LDAP server, e.g. ldap://ldap_server:389.
-- `kylin.security.ldap.connection-username`: the LDAP username.
-- `kylin.security.ldap.connection-password`: the LDAP password.
-- `kylin.security.ldap.user-search-base`: defines the scope of users synchronized to Kylin.
-- `kylin.security.ldap.user-search-pattern`: defines the username pattern matched at login.
-- `kylin.security.ldap.user-group-search-base`: defines the scope of user groups synchronized to Kylin.
-- `kylin.security.ldap.user-group-search-filter`: defines the types of users synchronized to Kylin.
-- `kylin.security.ldap.service-search-base`: required when service accounts need access to Kylin.
-- `kylin.security.ldap.service-search-pattern`: required when service accounts need access to Kylin.
-- `kylin.security.ldap.service-group-search-base`: required when service accounts need access to Kylin.
-- `kylin.security.acl.admin-role`: maps an LDAP group to the admin role (the group name is case sensitive).
-- `kylin.server.auth-user-cache.expire-seconds`: the LDAP user cache expiry. The default value is 300 (s).
-- `kylin.server.auth-user-cache.max-entries`: the maximum number of cached LDAP users. The default value is 100.
-
-
-
-### Integrate with Apache Ranger {#ranger}
-
-- `kylin.server.external-acl-provider=org.apache.ranger.authorization.kylin.authorizer.RangerKylinAuthorizer`
-
-> Tip: For more information, see [how to integrate the Kylin plugin in Ranger's installation documentation](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin).
-
-
-
-### Enable ZooKeeper ACL {#zookeeper-acl}
-
-- `kylin.env.zookeeper-acl-enabled`: enables ZooKeeper ACLs to prevent unauthorized users from accessing znodes and to reduce the risk of resulting bad operations. The default value is `FALSE`.
-- `kylin.env.zookeeper.zk-auth`: uses username:password as the ACL identity. The default value is `digest:ADMIN:KYLIN`.
-- `kylin.env.zookeeper.zk-acl`: uses a single ID as the ACL identity. The default value is `world:anyone:rwcda`, where `anyone` means any user.
-
-### Use Memcached as Kylin Query Cache {#distributed-cache}
-
-Since v2.6.0, Kylin can use Memcached as the query cache, together with a series of cache enhancements ([KYLIN-2895](https://issues.apache.org/jira/browse/KYLIN-2895)). To enable this feature, perform the following steps:
-
-1. Install Memcached (latest stable version v1.5.12) on one or more nodes; if resources allow, you can install Memcached on every node where Kylin is installed.
-
-2. Modify the content of applicationContext.xml in the $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes directory as follows:
-
-Comment out the following code:
-{% highlight Groff markup %}
-<bean id="ehcache"
-      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
-      p:configLocation="classpath:ehcache-test.xml" p:shared="true"/>
-
-<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"
-      p:cacheManager-ref="ehcache"/>
-{% endhighlight %}
-Uncomment the following code:
-{% highlight Groff markup %}
-<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
-      p:configLocation="classpath:ehcache-test.xml" p:shared="true"/>
-
-<bean id="remoteCacheManager" class="org.apache.kylin.cache.cachemanager.MemcachedCacheManager" />
-<bean id="localCacheManager" class="org.apache.kylin.cache.cachemanager.InstrumentedEhCacheCacheManager"
-      p:cacheManager-ref="ehcache"/>
-<bean id="cacheManager" class="org.apache.kylin.cache.cachemanager.RemoteLocalFailOverCacheManager" />
-
-<bean id="memcachedCacheConfig" class="org.apache.kylin.cache.memcached.MemcachedCacheConfig">
-    <property name="timeout" value="500" />
-    <property name="hosts" value="${kylin.cache.memcached.hosts}" />
-</bean>
-{% endhighlight %}
-The value of `${kylin.cache.memcached.hosts}` in applicationContext.xml is the value of `kylin.cache.memcached.hosts` specified in conf/kylin.properties.
-
-3. Add the following parameters to `conf/kylin.properties`:
-{% highlight Groff markup %}
-kylin.query.cache-enabled=true
-kylin.query.lazy-query-enabled=true
-kylin.query.cache-signature-enabled=true
-kylin.query.segment-cache-enabled=true
-kylin.cache.memcached.hosts=memcached1:11211,memcached2:11211,memcached3:11211
-{% endhighlight %}
-
-- `kylin.query.cache-enabled`: the master switch of the query cache. The default value is `true`.
-- `kylin.query.lazy-query-enabled`: whether to wait for and reuse the result of a previous query when an identical query is repeated within a short time. The default value is `false`.
-- `kylin.query.cache-signature-enabled`: whether to use signature checks to decide the validity of a cache entry. The signature is computed dynamically (at the time the entry is recorded) from the state of the cubes/hybrids in the project, their last build times, and so on. The default value is `false`; setting it to `true` is highly recommended.
-- `kylin.query.segment-cache-enabled`: whether to cache the data returned from the storage engine (HBase) at the segment level. The default value is `false`; it only takes effect when set to `true` and the Memcached distributed cache is enabled. It can improve the cache hit rate for frequently built cubes (e.g. streaming cubes) and thus the performance.
-- `kylin.cache.memcached.hosts`: the Memcached hostnames and ports.
\ No newline at end of file
diff --git a/website/_docs30/install/configuration.md b/website/_docs30/install/configuration.md
deleted file mode 100644
index d153540..0000000
--- a/website/_docs30/install/configuration.md
+++ /dev/null
@@ -1,795 +0,0 @@
----
-layout: docs30
-title:  "Kylin Configuration"
-categories: install
-permalink: /docs30/install/configuration.html
----
-
-
-- [Configuration Files and Overriding](#kylin-config)
-    - [Kylin Configuration Files](#kylin-config)
-	- [Configuration Overriding](#config-override)
-		- [Project-level Configuration Overriding](#project-config-override)
-		- [Cube-level Configuration Overriding](#cube-config-override)
-		- [MapReduce Configuration Overriding](#mr-config-override)
-		- [Hive Configuration Overriding](#hive-config-override)
-        - [Spark Configuration Overriding](#spark-config-override)
-- [Deployment configuration](#kylin-deploy)
-    - [Deploy Kylin](#deploy-config)
-	- [Allocate More Memory for Kylin](#kylin-jvm-settings)
-	- [Job Engine HA](#job-engine-ha)
-	- [Job Engine Safemode](#job-engine-safemode)
-	- [Read/Write Separation](#rw-deploy)
-	- [RESTful Webservice](#rest-config)
-- [Metastore Configuration](#kylin_metastore)
-    - [Metadata-related](#metadata)
-    - [MySQL Metastore Configuration (Beta)](#mysql-metastore)
-- [Modeling Configuration](#kylin-build)
-    - [Hive Client and SparkSQL](#hive-client-and-sparksql)
-    - [JDBC Datasource Configuration](#jdbc-datasource)
-    - [Data Type Precision](#precision-config)
-    - [Cube Design](#cube-config)
-    - [Cube Size Estimation](#cube-estimate)
-	- [Cube Algorithm](#cube-algorithm)
-	- [Auto Merge Segments](#auto-merge)
-	- [Lookup Table Snapshot](#snapshot)
-	- [Build Cube](#cube-build)
-	- [Dictionary-related](#dict-config)
-	- [Deal with Ultra-High-Cardinality Columns](#uhc-config)
-	- [Spark as Build Engine](#spark-cubing)
-	- [Submit Spark jobs via Livy](#livy-submit-spark-job)
-	- [Spark Dynamic Allocation](#dynamic-allocation)
-	- [Job-related](#job-config)
-	- [Enable Email Notification](#email-notification)
-	- [Enable Cube Planner](#cube-planner)
-    - [HBase Storage](#hbase-config)
-    - [Enable Compression](#compress-config)
-    - [Real-time OLAP](#realtime-olap)
-- [Storage Clean up Configuration](#storage-clean-up-configuration)
-    - [Storage-clean-up-related](#storage-clean-up-config)
-- [Query Configuration](#kylin-query)
-    - [Query-related](#query-config)
-    - [Fuzzy Query](#fuzzy)
-	- [Query Cache](#cache-config)
-	- [Query Limits](#query-limit)
-	- [Bad Query](#bad-query)
-	- [Query Pushdown](#query-pushdown)
-	- [Query rewriting](#convert-sql)
-	- [Collect Query Metrics to JMX](#jmx-metrics)
-	- [Collect Query Metrics to dropwizard](#dropwizard-metrics)
-- [Security Configuration](#kylin-security)
-	- [Integrated LDAP for SSO](#ldap-sso)
-	- [Integrate with Apache Ranger](#ranger)
-	- [Enable ZooKeeper ACL](#zookeeper-acl)
-- [Distributed query cache with Memcached](#distributed-cache)
-
-
-
-### Configuration Files and Overriding {#kylin-config}
-
-This section introduces Kylin's configuration files and how to perform Configuration Overriding.
-
-
-
-### Kylin Configuration Files	 {#kylin-config-file}
-
-Kylin automatically reads the Hadoop configuration (`core-site.xml`), Hive configuration (`hive-site.xml`) and HBase configuration (`hbase-site.xml`) from the environment. In addition, Kylin's own configuration files are in the `$KYLIN_HOME/conf/` directory:
-
-- `kylin_hive_conf.xml`: This file contains the configuration for Hive jobs.
-- `kylin_job_conf.xml` & `kylin_job_conf_inmem.xml`: These files contain the configuration for MapReduce jobs. When performing the *In-mem Cubing* job, users need to request more memory for the mapper in `kylin_job_conf_inmem.xml`
-- `kylin-kafka-consumer.xml`: This file contains the configuration for the Kafka job.
-- `kylin-server-log4j.properties`: This file contains the log configuration for the Kylin server.
-- `kylin-tools-log4j.properties`: This file contains the log configuration for the Kylin command line.
-- `setenv.sh` : This file is a shell script for setting environment variables. Users can adjust the size of the Kylin JVM stack with `KYLIN_JVM_SETTINGS` and set other environment variables such as `KAFKA_HOME`.
-- `kylin.properties`: This file contains Kylin global configuration.
-
-
-
-### Configuration Overriding {#config-override}
-
-Some configuration items in `$KYLIN_HOME/conf/` can be overridden in the Web UI. Configuration overriding has two scopes: *project level* and *Cube level*. The priority order is: Cube-level configurations > project-level configurations > configuration files.
-
-
-
-### Project-level Configuration Overriding {#project-config-override}
-
-Click *Manage Project* in the web UI, select a project, and click *Edit* -> *Project Config* -> *+ Property* to add configuration properties that override the values in the configuration files, as shown in the figure below:
-![](/images/install/override_config_project.png)
-
-
-
-### Cube-level Configuration Overriding		{#cube-config-override}
-
-In the *Configuration Overwrites* step of the *Cube Designer*, users can rewrite property values to override those set at the project level and in the configuration files, as shown in the figure below:
-![](/images/install/override_config_cube.png)
-
-The following configurations can be overridden at the Cube level:
-
-- `kylin.cube.size-estimate*`
-- `kylin.cube.algorithm*`
-- `kylin.cube.aggrgroup*`
-- `kylin.metadata.dimension-encoding-max-length`
-- `kylin.cube.max-building-segments`
-- `kylin.cube.is-automerge-enabled`
-- `kylin.job.allow-empty-segment`
-- `kylin.job.sampling-percentage`
-- `kylin.source.hive.redistribute-flat-table`
-- `kylin.engine.spark*`
-- `kylin.query.skip-empty-segments`
-
-
-
-### MapReduce Configuration Overriding {#mr-config-override}
-
-Kylin supports overriding configuration properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml` at the project and cube level, in the form of key-value pairs, in the following format:
-`kylin.engine.mr.config-override.<key> = <value>`
-* If users want jobs to get more memory from YARN, they can set: `kylin.engine.mr.config-override.mapreduce.map.java.opts=-Xmx7g` and `kylin.engine.mr.config-override.mapreduce.map.memory.mb=8192`
-* If users want a cube's build jobs to use a different YARN resource queue, they can set: 
-`kylin.engine.mr.config-override.mapreduce.job.queuename={queueName}` 
-
-
-
-### Hive Configuration Overriding {#hive-config-override}
-
-Kylin supports overriding configuration properties in `kylin_hive_conf.xml` at the project and cube level, in the form of key-value pairs, in the following format:
-`kylin.source.hive.config-override.<key> = <value>`
-If users want Hive to use a different YARN resource queue, they can set: 
-`kylin.source.hive.config-override.mapreduce.job.queuename={queueName}` 
-
-
-
-### Spark Configuration Overriding {#spark-config-override}
-
-Kylin supports overriding configuration properties in `kylin.properties` at the project and cube level, in the form of key-value pairs, in the following format:
-`kylin.engine.spark-conf.<key> = <value>`
-If users want Spark to use a different YARN resource queue, they can set: 
-`kylin.engine.spark-conf.spark.yarn.queue={queueName}`
-
-
-
-### Deployment configuration {#kylin-deploy}
-
-This section introduces Kylin Deployment related configuration.
-
-
-
-### Deploy Kylin {#deploy-config}
-
-- `kylin.env.hdfs-working-dir`: specifies the HDFS path used by Kylin service. The default value is `/kylin`. Make sure that the user who starts the Kylin instance has permission to read and write to this directory.
-- `kylin.env`: specifies the purpose of the Kylin deployment. Optional values include `DEV`, `QA` and `PROD`. The default value is *DEV*. Some developer functions will be enabled in *DEV* mode.
-- `kylin.env.zookeeper-base-path`: specifies the ZooKeeper path used by the Kylin service. The default value is `/kylin`
-- `kylin.env.zookeeper-connect-string`: specifies the ZooKeeper connection string. If it is empty, use HBase's ZooKeeper
-- `kylin.env.hadoop-conf-dir`: specifies the Hadoop configuration file directory. If not specified, get `HADOOP_CONF_DIR` in the environment.
-- `kylin.server.mode`: Optional values include `all`, `job` and `query`, among them *all* is the default one. *job* mode means the instance schedules Cube job only; *query* mode means the instance serves SQL queries only; *all* mode means the instance handles both of them.
-- `kylin.server.cluster-name`: specifies the cluster name. A minimal example of these properties follows.
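-
-As a minimal sketch, a dedicated production job node could be configured like this (values are illustrative):
-
-```properties
-kylin.env=PROD
-kylin.env.hdfs-working-dir=/kylin
-kylin.server.mode=job
-```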
-
-
-
-### Allocate More Memory for Kylin {#kylin-jvm-settings}
-
-There are two sample settings for `KYLIN_JVM_SETTINGS` in `$KYLIN_HOME/conf/setenv.sh`.
-The default setting uses relatively little memory. You can comment it out and then uncomment the next line to allocate more memory for Kylin. The default configuration is:
-
-```shell
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=512M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-```
-
-
-
-### Job Engine HA  {#job-engine-ha}
-
-- `kylin.job.scheduler.default=2`: to enable the distributed job scheduler.
-- `kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock`: to enable distributed job lock
-
-> Note: For more information, please refer to the **Enable Job Engine HA** section in [Deploy in Cluster Mode](/docs/install/kylin_cluster.html) 
-
-
-### Job Engine Safemode {#job-engine-safemode}
-
-Safemode only takes effect with the default scheduler.
-
-- `kylin.job.scheduler.safemode=TRUE`: enables job scheduler safemode; in safemode, newly submitted jobs will not be executed.
-- `kylin.job.scheduler.safemode.runable-projects=project1,project2`: provides a list of projects that are exempt from safemode.
-
-
-### Read/Write Separation   {#rw-deploy}
-
-- `kylin.storage.hbase.cluster-fs`: specifies the HDFS file system of the HBase cluster
-- `kylin.storage.hbase.cluster-hdfs-config-file`: specifies HDFS configuration file pointing to the HBase cluster
-
-> Note: For more information, please refer to [Deploy Apache Kylin with Standalone HBase Cluster](http://kylin.apache.org/blog/2016/06/10/standalone-hbase-cluster/)
-
-
-
-### RESTful Webservice {#rest-config}
-
-- `kylin.web.timezone`: specifies the time zone used by Kylin's REST service. The default value is GMT+8.
-- `kylin.web.cross-domain-enabled`: whether cross-domain access is supported. The default value is TRUE
-- `kylin.web.export-allow-admin`: whether administrator users may export information. The default value is TRUE
-- `kylin.web.export-allow-other`: whether other users may export information. The default value is TRUE
-- `kylin.web.dashboard-enabled`: whether to enable Dashboard. The default value is FALSE
-
-
-
-### Metastore Configuration {#kylin_metastore}
-
-This section introduces Kylin Metastore related configuration.
-
-
-
-### Metadata-related {#metadata}
-
-- `kylin.metadata.url`: specifies the Metadata path. The default value is *kylin_metadata@hbase*
-- `kylin.metadata.dimension-encoding-max-length`: specifies the maximum length when the dimension is used as Rowkeys with fix_length encoding. The default value is 256.
-- `kylin.metadata.sync-retries`: specifies the number of Metadata sync retries. The default value is 3.
-- `kylin.metadata.sync-error-handler`: The default value is *DefaultSyncErrorHandler*
-- `kylin.metadata.check-copy-on-write`: whether to clear the metadata cache. The default value is *FALSE*
-- `kylin.metadata.hbase-client-scanner-timeout-period`: specifies the total timeout between an HBase client issuing a scan RPC call and receiving the response. The default value is 10000 (ms).
-- `kylin.metadata.hbase-rpc-timeout`: specifies the timeout for HBase to perform RPC operations. The default value is 5000 (ms).
-- `kylin.metadata.hbase-client-retries-number`: specifies the number of HBase retries. The default value is 1 (times).
-- `kylin.metadata.resource-store-provider.jdbc`: specifies the class used by JDBC. The default value is *org.apache.kylin.common.persistence.JDBCResourceStore*
-
-
-
-### MySQL Metastore Configuration (Beta) {#mysql-metastore}
-
-> *Note*: This feature is still being tested and it is recommended to use it with caution.
-
-- `kylin.metadata.url`: specifies the metadata path
-- `kylin.metadata.jdbc.dialect`: specifies JDBC dialect
-- `kylin.metadata.jdbc.json-always-small-cell`: The default value is TRUE
-- `kylin.metadata.jdbc.small-cell-meta-size-warning-threshold`: The default value is 100 (MB)
-- `kylin.metadata.jdbc.small-cell-meta-size-error-threshold`: The default value is 1 (GB)
-- `kylin.metadata.jdbc.max-cell-size`: The default value is 1 (MB)
-- `kylin.metadata.resource-store-provider.jdbc`: specifies the class used by JDBC. The default value is org.apache.kylin.common.persistence.JDBCResourceStore
-
-> Note: For more information, please refer to [MySQL-based Metastore Configuration](/docs/tutorial/mysql_metastore.html)
-
-
-
-### Modeling Configuration {#kylin-build}
-
-This section introduces Kylin data modeling and build related configuration.
-
-
-
-### Hive Client and SparkSQL {#hive-client-and-sparksql}
-
-- `kylin.source.hive.client`: specifies the Hive command line type. Optional values include *cli* or *beeline*. The default value is *cli*.
-- `kylin.source.hive.beeline-shell`: specifies the absolute path of the Beeline shell. The default is beeline
-- `kylin.source.hive.beeline-params`: when using Beeline as the client tool for Hive, users need to configure this parameter to provide more information to Beeline
-- `kylin.source.hive.enable-sparksql-for-table-ops`: the default value is *FALSE*, which needs to be set to *TRUE* when using SparkSQL
-- `kylin.source.hive.sparksql-beeline-shell`: when using SparkSQL Beeline as the client tool for Hive, users need to configure this parameter as /path/to/spark-client/bin/beeline
-- `kylin.source.hive.sparksql-beeline-params`: when using SparkSQL Beeline as the client tool for Hive, users need to configure this parameter to provide more information to SparkSQL
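-
-For instance, switching the client to Beeline could look like the sketch below (the JDBC URL and user are placeholders for your own HiveServer2):
-
-```properties
-kylin.source.hive.client=beeline
-# Flags passed through to Beeline; the URL and user below are illustrative
-kylin.source.hive.beeline-params=-n hadoop -u 'jdbc:hive2://localhost:10000'
-```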
-
-
-
-
-### JDBC Datasource Configuration {#jdbc-datasource}
-
-- `kylin.source.default`: specifies the type of data source used by JDBC
-- `kylin.source.jdbc.connection-url`: specifies JDBC connection string
-- `kylin.source.jdbc.driver`: specifies JDBC driver class name
-- `kylin.source.jdbc.dialect`: specifies JDBC dialect. The default value is default
-- `kylin.source.jdbc.user`: specifies JDBC connection username
-- `kylin.source.jdbc.pass`: specifies JDBC connection password
-- `kylin.source.jdbc.sqoop-home`: specifies Sqoop installation path
-- `kylin.source.jdbc.sqoop-mapper-num`: specifies how many slices should be split. Sqoop will run a mapper for each slice. The default value is 4.
-- `kylin.source.jdbc.field-delimiter`: specifies the field separator. The default value is `|`
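-
-As an illustration, a MySQL source could be wired up as below (host, database, credentials and the Sqoop path are placeholders; the source id 8 for JDBC follows the linked tutorial):
-
-```properties
-kylin.source.default=8
-kylin.source.jdbc.connection-url=jdbc:mysql://hostname:3306/sales_db
-kylin.source.jdbc.driver=com.mysql.jdbc.Driver
-kylin.source.jdbc.dialect=mysql
-kylin.source.jdbc.user=your_user
-kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
-```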
-
-> Note: For more information, please refer to [Building a JDBC Data Source](/docs/tutorial/setup_jdbc_datasource.html).
-
-
-
-
-### Data Type Precision {#precision-config}
-
-- `kylin.source.hive.default-varchar-precision`: specifies the maximum length of the *varchar* field. The default value is 256.
-- `kylin.source.hive.default-char-precision`: specifies the maximum length of the *char* field. The default value is 255.
-- `kylin.source.hive.default-decimal-precision`: specifies the precision of the *decimal* field. The default value is 19.
-- `kylin.source.hive.default-decimal-scale`: specifies the scale of the *decimal* field. The default value is 4.
-
-
-
-### Cube Design {#cube-config}
-
-- `kylin.cube.ignore-signature-inconsistency`: the signature in Cube desc ensures that the Cube is not changed to a corrupt state. The default value is *FALSE*
-- `kylin.cube.aggrgroup.max-combination`: specifies the max combination number of aggregation groups. The default value is 32768.
-- `kylin.cube.aggrgroup.is-mandatory-only-valid`: whether to allow Cube contains only Base Cuboid. The default value is *FALSE*, set to *TRUE* when using Spark Cubing
-- `kylin.cube.rowkey.max-size`: specifies the maximum number of columns that can be set as Rowkeys. The default value is 63, and it cannot be more than 63.
-- `kylin.cube.allow-appear-in-multiple-projects`: whether to allow a cube to appear in multiple projects
-- `kylin.cube.gtscanrequest-serialization-level`: the default value is 1
-- `kylin.web.hide-measures`: hides some measures that may not be needed, the default value is RAW.   
-
-
-
-### Cube Size Estimation {#cube-estimate}
-
-Both Kylin and HBase use compression when writing to disk, so Kylin multiplies the original data size by this ratio to estimate the size of the cube.
-
-- `kylin.cube.size-estimate-ratio`: normal cube, default value is 0.25
-- `kylin.cube.size-estimate-memhungry-ratio`: Deprecated, default is 0.05
-- `kylin.cube.size-estimate-countdistinct-ratio`: cube size estimation with the count distinct measure, default value is 0.5
-- `kylin.cube.size-estimate-topn-ratio`: Cube Size Estimation with TopN metric, default value is 0.5
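-
-A worked example of how these ratios are applied (the raw size is an assumed figure): if the raw size of a segment is estimated at 100 GB, a normal cube is reported as roughly 100 GB * 0.25 = 25 GB, while a cube dominated by count distinct measures would be estimated at 100 GB * 0.5 = 50 GB.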
-
-
-
-### Cube Algorithm {#cube-algorithm}
-
-- `kylin.cube.algorithm`: specifies the algorithm used to build the Cube. Optional values include `auto`, `layer` and `inmem`. The default value is `auto`, that is, Kylin will dynamically select an algorithm (layer or inmem) based on collected data; if you know Kylin, your data and your cluster well, you can set the algorithm directly.
-- `kylin.cube.algorithm.layer-or-inmem-threshold`: the default value is 7
-- `kylin.cube.algorithm.inmem-split-limit`: the default value is 500
-- `kylin.cube.algorithm.inmem-concurrent-threads`: the default value is 1
-- `kylin.job.sampling-percentage`: specifies the data sampling percentage. The default value is 100.
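-
-For example, if you already know the data is too large for in-memory cubing, the auto selection can be bypassed (a sketch; `layer` is one of the documented values):
-
-```properties
-# Pin the layered algorithm instead of letting Kylin choose
-kylin.cube.algorithm=layer
-```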
-
-
-
-### Auto Merge Segments {#auto-merge}
-
-- `kylin.cube.is-automerge-enabled`: whether to enable auto-merge. The default value is *TRUE*. When this parameter is set to *FALSE*, the auto-merge function will be turned off, even if it is enabled in Cube Design.
-
-
-
-### Lookup Table Snapshot {#snapshot}
-
-- `kylin.snapshot.max-mb`: specifies the max size of the snapshot. The default value is 300 (MB)
-- `kylin.snapshot.max-cache-entry`: the maximum number of snapshots that can be stored in the cache. The default value is 500.
-- `kylin.snapshot.ext.shard-mb`: specifies the size of the HBase shard. The default value is 500 (MB).
-- `kylin.snapshot.ext.local.cache.path`: specifies the local cache path, default value is lookup_cache
-- `kylin.snapshot.ext.local.cache.max-size-gb`: specifies the local snapshot cache size, default is 200 (GB)
-
-
-
-### Build Cube {#cube-build}
-
-- `kylin.storage.default`: specifies the default storage engine. The default value is 2, which means HBase.
-- `kylin.source.hive.keep-flat-table`: whether to keep the Hive intermediate table after the build job is complete. The default value is *FALSE*
-- `kylin.source.hive.database-for-flat-table`: specifies the name of the Hive database that stores the Hive intermediate table. The default is *default*. Make sure that the user who started the Kylin instance has permission to operate the database.
-- `kylin.source.hive.flat-table-storage-format`: specifies the storage format of the Hive intermediate table. The default value is *SEQUENCEFILE*
-- `kylin.source.hive.flat-table-field-delimiter`: specifies the delimiter of the Hive intermediate table. The default value is *\u001F*
-- `kylin.source.hive.intermediate-table-prefix`: specifies the table name prefix of the Hive intermediate table. The default value is *kylin\_intermediate\_*
-- `kylin.source.hive.redistribute-flat-table`: whether to redistribute the Hive flat table. The default value is *TRUE*
-- `kylin.source.hive.redistribute-column-count`: number of redistributed columns. The default value is *3*
-- `kylin.source.hive.table-dir-create-first`: the default value is *FALSE*
-- `kylin.storage.partition.aggr-spill-enabled`: the default value is *TRUE*
-- `kylin.engine.mr.lib-dir`: specifies the path to the jar package used by the MapReduce job
-- `kylin.engine.mr.reduce-input-mb`: used to estimate the number of Reducers. The default value is 500(MB).
-- `kylin.engine.mr.reduce-count-ratio`: used to estimate the number of Reducers. The default value is 1.0
-- `kylin.engine.mr.min-reducer-number`: specifies the minimum number of Reducers in the MapReduce job. The default is 1
-- `kylin.engine.mr.max-reducer-number`: specifies the maximum number of Reducers in the MapReduce job. The default is 500.
-- `kylin.engine.mr.mapper-input-rows`: specifies the number of rows each Mapper handles. The default value is 1000000. Lowering this value starts more Mappers.
-- `kylin.engine.mr.max-cuboid-stats-calculator-number`: specifies the number of threads used to calculate Cube statistics. The default value is 1
-- `kylin.engine.mr.build-dict-in-reducer`: whether to build the dictionary in the Reduce phase of the build job *Extract Fact Table Distinct Columns*. The default value is `TRUE`
-- `kylin.engine.mr.yarn-check-interval-seconds`: how often the build engine checks the status of the Hadoop job. The default value is 10 (s)
-- `kylin.engine.mr.use-local-classpath`: whether to use local mapreduce application classpath. The default value is TRUE
-
-
-
-### Dictionary-related {#dict-config}
-
-- `kylin.dictionary.use-forest-trie`: The default value is TRUE
-- `kylin.dictionary.forest-trie-max-mb`: The default value is 500
-- `kylin.dictionary.max-cache-entry`: The default value is 3000
-- `kylin.dictionary.growing-enabled`: The default value is FALSE
-- `kylin.dictionary.append-entry-size`: The default value is 10000000
-- `kylin.dictionary.append-max-versions`: The default value is 3
-- `kylin.dictionary.append-version-ttl`: The default value is 259200000
-- `kylin.dictionary.resuable`: whether to reuse the dictionary. The default value is FALSE
-- `kylin.dictionary.shrunken-from-global-enabled`: whether to reduce the size of global dictionary. The default value is *TRUE*
-
-
-
-### Deal with Ultra-High-Cardinality Columns {#uhc-config}
-
-- `kylin.engine.mr.build-uhc-dict-in-additional-step`: the default value is *FALSE*; set to *TRUE* to build dictionaries for ultra-high-cardinality columns in an additional step
-- `kylin.engine.mr.uhc-reducer-count`: the default value is 1; it can be set to 5 to allocate 5 Reducers for each ultra-high-cardinality column.
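-
-Putting the two settings together, a sketch that builds UHC dictionaries in their own step with 5 Reducers per UHC column:
-
-```properties
-kylin.engine.mr.build-uhc-dict-in-additional-step=TRUE
-kylin.engine.mr.uhc-reducer-count=5
-```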
-
-
-
-### Spark as Build Engine {#spark-cubing}
-
-- `kylin.engine.spark-conf.spark.master`: specifies the Spark operation mode. The default value is *yarn*
-- `kylin.engine.spark-conf.spark.submit.deployMode`: specifies the deployment mode of Spark on YARN. The default value is *cluster*
-- `kylin.engine.spark-conf.spark.yarn.queue`: specifies the Spark resource queue. The default value is *default*
-- `kylin.engine.spark-conf.spark.driver.memory`: specifies the Spark Driver memory. The default value is 2G.
-- `kylin.engine.spark-conf.spark.executor.memory`: specifies the Spark Executor memory. The default value is 4G.
-- `kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead`: specifies the amount of Spark Executor memory overhead (off-heap). The default value is 1024 (MB).
-- `kylin.engine.spark-conf.spark.executor.cores`: specifies the number of cores available to a single Spark Executor. The default value is 1
-- `kylin.engine.spark-conf.spark.network.timeout`: specifies the Spark network timeout period. The default value is 600 (s)
-- `kylin.engine.spark-conf.spark.executor.instances`: specifies the number of Spark Executors owned by an Application. The default value is 1
-- `kylin.engine.spark-conf.spark.eventLog.enabled`: whether to record the Spark event. The default value is *TRUE*
-- `kylin.engine.spark-conf.spark.hadoop.dfs.replication`: replication number of HDFS, default is 2
-- `kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress`: whether to compress the output. The default value is *TRUE*
-- `kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec`: specifies Output compression, default is *org.apache.hadoop.io.compress.DefaultCodec*
-- `kylin.engine.spark.rdd-partition-cut-mb`: Kylin uses the size of this parameter to split the partition. The default value is 10 (MB)
-- `kylin.engine.spark.min-partition`: specifies the minimum number of partitions. The default value is 1
-- `kylin.engine.spark.max-partition`: specifies maximum number of partitions, default is 5000
-- `kylin.engine.spark.storage-level`: specifies RDD partition data cache level, default value is *MEMORY_AND_DISK_SER*
-- `kylin.engine.spark-conf-mergedict.spark.executor.memory`: specifies the Spark Executor memory requested for merging dictionaries. The default value is 6G.
-- `kylin.engine.spark-conf-mergedict.spark.memory.fraction`: specifies the percentage of memory reserved for the system. The default value is 0.2
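-
-As an illustrative (not prescriptive) tuning sketch for a mid-sized YARN queue, the executor-related knobs above might be combined like this; the numbers are examples only and should be sized to your cluster:
-
-```properties
-kylin.engine.spark-conf.spark.yarn.queue=default
-kylin.engine.spark-conf.spark.executor.memory=4G
-kylin.engine.spark-conf.spark.executor.cores=2
-kylin.engine.spark-conf.spark.executor.instances=8
-kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=1024
-```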
-
-> Note: For more information, please refer to [Building Cubes with Spark](/docs/tutorial/cube_spark.html).
-
-
-
-### Submit Spark jobs via Livy {#livy-submit-spark-job}
-
-- `kylin.engine.livy-conf.livy-enabled`: whether to enable Livy as the Spark job submission service. The default value is *FALSE*
-- `kylin.engine.livy-conf.livy-url`: specifies the URL of Livy. Such as *http://127.0.0.1:8998*
-- `kylin.engine.livy-conf.livy-key.*`: specifies the name-key configuration of Livy. Such as *kylin.engine.livy-conf.livy-key.name=kylin-livy-1*
-- `kylin.engine.livy-conf.livy-arr.*`: specifies the array type configuration of Livy. Separated by commas. Such as *kylin.engine.livy-conf.livy-arr.jars=hdfs://your_self_path/hbase-common-1.4.8.jar,hdfs://your_self_path/hbase-server-1.4.8.jar,hdfs://your_self_path/hbase-client-1.4.8.jar*
-- `kylin.engine.livy-conf.livy-map.*`: specifies the Spark configuration properties. Such as *kylin.engine.livy-conf.livy-map.spark.executor.instances=10*
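-
-A minimal enablement sketch, reusing the example values given above (the URL is a placeholder for your own Livy endpoint):
-
-```properties
-kylin.engine.livy-conf.livy-enabled=true
-kylin.engine.livy-conf.livy-url=http://127.0.0.1:8998
-kylin.engine.livy-conf.livy-key.name=kylin-livy-1
-kylin.engine.livy-conf.livy-map.spark.executor.instances=10
-```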
-
-> Note: For more information, please refer to [Apache Livy Rest API](http://livy.incubator.apache.org/docs/latest/rest-api.html).
-
-
-### Spark Dynamic Allocation {#dynamic-allocation}
-
-- `kylin.engine.spark-conf.spark.shuffle.service.enabled`: whether to enable shuffle service
-- `kylin.engine.spark-conf.spark.dynamicAllocation.enabled`: whether to enable Spark Dynamic Allocation
-- `kylin.engine.spark-conf.spark.dynamicAllocation.initialExecutors`: specifies the initial number of Executors
-- `kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors`: specifies the minimum number of Executors retained
-- `kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors`: specifies the maximum number of Executors applied for
-- `kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout`: specifies the threshold of Executor being removed after being idle. The default value is 60(s)
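-
-A sketch of dynamic allocation bounds (the executor counts are illustrative; the shuffle service must be enabled for dynamic allocation to work):
-
-```properties
-kylin.engine.spark-conf.spark.shuffle.service.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
-kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
-kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=20
-kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout=60
-```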
-
-> Note: For more information, please refer to the official documentation: [Dynamic Resource Allocation](http://spark.apache.org/docs/1.6.2/job-scheduling.html#dynamic-resource-allocation).
-
-
-
-### Job-related {#job-config}
-
-- `kylin.job.log-dir`: the default value is */tmp/kylin/logs*
-- `kylin.job.allow-empty-segment`: whether tolerant data source is empty. The default value is *TRUE*
-- `kylin.job.max-concurrent-jobs`: specifies maximum build concurrency, default is 10
-- `kylin.job.retry`: specifies retry times after the job is failed. The default value is 0
-- `kylin.job.retry-interval`: specifies retry interval in milliseconds. The default value is 30000
-- `kylin.job.scheduler.priority-considered`: whether to consider the job priority. The default value is FALSE
-- `kylin.job.scheduler.priority-bar-fetch-from-queue`: specifies the time interval for getting jobs from the priority queue. The default value is 20(s)
-- `kylin.job.scheduler.poll-interval-second`: The time interval for getting the job from the queue. The default value is 30(s)
-- `kylin.job.error-record-threshold`: specifies the threshold for the job to throw an error message. The default value is 0
-- `kylin.job.cube-auto-ready-enabled`: whether to enable Cube automatically after the build is complete. The default value is *TRUE*
-- `kylin.cube.max-building-segments`: specifies the maximum number of building job for the one Cube. The default value is 10
-
-
-
-### Enable Email Notification {#email-notification}
-
-- `kylin.job.notification-enabled`: whether to send an email notification when a job succeeds or fails. The default value is *FALSE*
-- `kylin.job.notification-mail-enable-starttls`: whether to enable STARTTLS. The default value is *FALSE*
-- `kylin.job.notification-mail-host`: specifies the SMTP server address for the mail
-- `kylin.job.notification-mail-port`: specifies the SMTP server port for the mail. The default value is 25
-- `kylin.job.notification-mail-username`: specifies the login user name for the mail
-- `kylin.job.notification-mail-password`: specifies the login password for the mail
-- `kylin.job.notification-mail-sender`: specifies the sender email address
-- `kylin.job.notification-admin-emails`: specifies the administrator's mailbox for email notifications
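-
-An illustrative SMTP setup (the host and accounts are placeholders):
-
-```properties
-kylin.job.notification-enabled=true
-kylin.job.notification-mail-host=smtp.example.com
-kylin.job.notification-mail-port=25
-kylin.job.notification-mail-username=kylin@example.com
-kylin.job.notification-mail-password=your_password
-kylin.job.notification-mail-sender=kylin@example.com
-kylin.job.notification-admin-emails=admin@example.com
-```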
-
-
-
-### Enable Cube Planner {#cube-planner}
-
-- `kylin.cube.cubeplanner.enabled`: whether to enable Cube Planner. The default value is *TRUE*
-- `kylin.server.query-metrics2-enabled`: the default value is *TRUE*
-- `kylin.metrics.reporter-query-enabled`: the default value is *TRUE*
-- `kylin.metrics.reporter-job-enabled`: the default value is *TRUE*
-- `kylin.metrics.monitor-enabled`: the default value is *TRUE*
-- `kylin.cube.cubeplanner.enabled-for-existing-cube`: whether to enable Cube Planner for existing Cubes. The default value is *TRUE*
-- `kylin.cube.cubeplanner.algorithm-threshold-greedy`: the default value is 8
-- `kylin.cube.cubeplanner.expansion-threshold`: the default value is 15.0
-- `kylin.cube.cubeplanner.recommend-cache-max-size`: the default value is 200
-- `kylin.cube.cubeplanner.query-uncertainty-ratio`: the default value is 0.1
-- `kylin.cube.cubeplanner.bpus-min-benefit-ratio`: the default value is 0.01
-- `kylin.cube.cubeplanner.algorithm-threshold-genetic`: the default value is 23
-
-> Note: For more information, please refer to [Using Cube Planner](/docs/tutorial/use_cube_planner.html).
-
-
-
-### HBase Storage {#hbase-config}
-
-- `kylin.storage.hbase.table-name-prefix`: specifies the prefix of HTable. The default value is *KYLIN\_*
-- `kylin.storage.hbase.namespace`: specifies the default namespace of HBase Storage. The default value is *default*
-- `kylin.storage.hbase.coprocessor-local-jar`: specifies jar package related to HBase coprocessor
-- `kylin.storage.hbase.coprocessor-mem-gb`: specifies the HBase coprocessor memory. The default value is 3.0(GB).
-- `kylin.storage.hbase.run-local-coprocessor`: whether to run the local HBase coprocessor. The default value is *FALSE*
-- `kylin.storage.hbase.coprocessor-timeout-seconds`: specifies the timeout period. The default value is 0
-- `kylin.storage.hbase.region-cut-gb`: specifies the size of a single Region, default is 5.0
-- `kylin.storage.hbase.min-region-count`: specifies the minimum number of regions. The default value is 1
-- `kylin.storage.hbase.max-region-count`: specifies the maximum number of Regions. The default value is 500
-- `kylin.storage.hbase.hfile-size-gb`: specifies the HFile size. The default value is 2.0(GB)
-- `kylin.storage.hbase.max-scan-result-bytes`: specifies the maximum value of the scan return. The default value is 5242880 (byte), which is 5 (MB).
-- `kylin.storage.hbase.compression-codec`: specifies the compression codec. The default value is *none*, that is, compression is not enabled
-- `kylin.storage.hbase.rowkey-encoding`: specifies the encoding method of Rowkey. The default value is *FAST_DIFF*
-- `kylin.storage.hbase.block-size-bytes`: the default value is 1048576
-- `kylin.storage.hbase.small-family-block-size-bytes`: specifies the block size. The default value is 65536 (byte), which is 64 (KB).
-- `kylin.storage.hbase.owner-tag`: specifies the owner of the Kylin platform. The default value is whoami@kylin.apache.org
-- `kylin.storage.hbase.endpoint-compress-result`: whether to return the compression result. The default value is TRUE
-- `kylin.storage.hbase.max-hconnection-threads`: specifies the maximum number of connection threads. The default value is 2048.
-- `kylin.storage.hbase.core-hconnection-threads`: specifies the number of core connection threads. The default value is 2048.
-- `kylin.storage.hbase.hconnection-threads-alive-seconds`: specifies the thread lifetime. The default value is 60.
-- `kylin.storage.hbase.replication-scope`: specifies the cluster replication range. The default value is 0
-- `kylin.storage.hbase.scan-cache-rows`: specifies the number of scan cache lines. The default value is 1024.
-
-
-
-### Enable Compression {#compress-config}
-
-Kylin does not enable compression by default. An unsupported compression algorithm can hinder Kylin's build jobs, while a suitable compression algorithm can reduce storage and network overhead and improve overall system efficiency.
-Kylin can use three types of compression: HBase table compression, Hive output compression, and MapReduce job output compression.
-
-> *Note*: The compression settings will not take effect until the Kylin instance is restarted.
-
-* HBase table compression
-
-This compression is configured by `kylin.storage.hbase.compression-codec` in `kylin.properties`. Optional values include `none`, `snappy`, `lzo`, `gzip` and `lz4`. The default value is none, which means no data is compressed.
-
-> *Note*: Before modifying the compression algorithm, make sure your HBase cluster supports the selected compression algorithm.
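-
-For example, enabling snappy for newly built HTables is a one-line change (assuming the cluster supports snappy):
-
-```properties
-kylin.storage.hbase.compression-codec=snappy
-```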
-
-
-* Hive output compression
-
-This compression is configured by `kylin_hive_conf.xml`. The default configuration is empty, which means that the default configuration of Hive is used directly. If you want to override the configuration, add (or replace) the following properties in `kylin_hive_conf.xml`. Take SNAPPY compression as an example:
-
-```xml
-<property>
-  <name>mapreduce.map.output.compress.codec</name>
-  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-  <description></description>
-</property>
-<property>
-  <name>mapreduce.output.fileoutputformat.compress.codec</name>
-  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-  <description></description>
-</property>
-```
-
-* MapReduce job output compression
-
-This compression is configured via `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default is empty, which uses the default configuration of MapReduce. If you want to override the configuration, add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take SNAPPY compression as an example:
-
-```xml
-<property>
-  <name>mapreduce.map.output.compress.codec</name>
-  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-  <description></description>
-</property>
-<property>
-  <name>mapreduce.output.fileoutputformat.compress.codec</name>
-  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-  <description></description>
-</property>
-```
-
-
-
-### Real-time OLAP    {#realtime-olap}
-- `kylin.stream.job.dfs.block.size`: specifies the HDFS block size used by the streaming base cuboid job. The default value is *16M*.
-- `kylin.stream.index.path`: specifies the path to store local segment cache. The default value is *stream_index*.
-- `kylin.stream.cube-num-of-consumer-tasks`: specifies the number of replica sets that share the whole topic partition. It affects how many partitions will be assigned to different replica sets. The default value is *3*.
-- `kylin.stream.cube.window`: specifies the length of duration of each segment, value in seconds. The default value is *3600*.
-- `kylin.stream.cube.duration`: specifies the wait time that a segment's status changes from active to IMMUTABLE, value in seconds. The default value is *7200*.
-- `kylin.stream.cube.duration.max`: specifies the maximum duration that segment can keep active, value in seconds. The default value is *43200*.
-- `kylin.stream.checkpoint.file.max.num`: specifies the maximum number of checkpoint file for each cube. The default value is *5*.
-- `kylin.stream.index.checkpoint.intervals`: specifies the time interval between setting two checkpoints. The default value is *300*.
-- `kylin.stream.index.maxrows`: specifies the maximum number of incoming events cached in heap memory. The default value is *50000*.
-- `kylin.stream.immutable.segments.max.num`: specifies the maximum number of IMMUTABLE segments in each Cube of the current streaming receiver; if exceeded, consumption of the current topic will be paused. The default value is *100*.
-- `kylin.stream.consume.offsets.latest`: whether to consume from the latest offset. The default value is *true*.
-- `kylin.stream.node`: specifies the node of coordinator/receiver. Such as host:port. The default value is *null*.
-- `kylin.stream.metadata.store.type`: specifies the position of metadata store. The default value is *zk*.
-- `kylin.stream.segment.retention.policy`: specifies the strategy for processing the local segment cache when a segment becomes IMMUTABLE. Optional values include `purge` and `fullBuild`. `purge` means the segment will be dropped when it becomes IMMUTABLE; `fullBuild` means it will be uploaded to HDFS. The default value is *fullBuild*.
-- `kylin.stream.assigner`: specifies the implementation class which used to assign the topic partition to different replica sets. The class should be the implementation class of `org.apache.kylin.stream.coordinator.assign.Assigner`. The default value is *DefaultAssigner*.
-- `kylin.stream.coordinator.client.timeout.millsecond`: specifies the connection timeout of the coordinator client. The default value is *5000*.
-- `kylin.stream.receiver.client.timeout.millsecond`: specifies the connection timeout of the receiver client. The default value is *5000*.
-- `kylin.stream.receiver.http.max.threads`: specifies the maximum connection threads of the receiver. The default value is *200*.
-- `kylin.stream.receiver.http.min.threads`: specifies the minimum connection threads of the receiver. The default value is *10*.
-- `kylin.stream.receiver.query-core-threads`: specifies the number of query threads used by the current streaming receiver. The default value is *50*.
-- `kylin.stream.receiver.query-max-threads`: specifies the maximum number of query threads used by the current streaming receiver. The default value is *200*.
-- `kylin.stream.receiver.use-threads-per-query`: specifies the number of threads each query uses. The default value is *8*.
-- `kylin.stream.build.additional.cuboids`: whether to build additional Cuboids. The additional Cuboids are the aggregations of the Mandatory Dimensions chosen on the Cube Advanced Setting page. The default value is *false*; only the Base Cuboid is built by default.
-- `kylin.stream.segment-max-fragments`: specifies the maximum number of fragments that each segment keep. The default value is *50*.
-- `kylin.stream.segment-min-fragments`: specifies the minimum number of fragments that each segment keep. The default value is *15*.
-- `kylin.stream.max-fragment-size-mb`: specifies the maximum size of each fragment. The default value is *300*.
-- `kylin.stream.fragments-auto-merge-enable`: whether to enable fragments auto merge. The default value is *true*.
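-
-As a small sketch, a receiver that consumes from the latest offsets and keeps full segment data on HDFS relies on the documented defaults, written out explicitly:
-
-```properties
-# These are the documented defaults for the three properties
-kylin.stream.consume.offsets.latest=true
-kylin.stream.segment.retention.policy=fullBuild
-kylin.stream.index.path=stream_index
-```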
-
-> Note: For more information, please refer to the [Real-time OLAP](http://kylin.apache.org/docs30/tutorial/real_time_olap.html).
-
-### Storage Clean up Configuration    {#storage-clean-up-configuration}
-
-This section introduces Kylin storage clean up related configuration.
-
-
-
-### Storage-clean-up-related {#storage-clean-up-config}
-
-- `kylin.storage.clean-after-delete-operation`: whether to clean segment data in HBase and HDFS. The default value is FALSE.
-
-
-
-### Query Configuration    {#kylin-query}
-
-This section introduces Kylin query related configuration.
-
-
-
-### Query-related {#query-config}
-
-- `kylin.query.skip-empty-segments`: whether to skip empty segments when querying. The default value is *TRUE*
-- `kylin.query.large-query-threshold`: specifies the maximum number of rows returned. The default value is 1000000.
-- `kylin.query.security-enabled`: whether to check the ACL when querying. The default value is *TRUE*
-- `kylin.query.security.table-acl-enabled`: whether to check the ACL of the corresponding table when querying. The default value is *TRUE*
-- `kylin.query.calcite.extras-props.conformance`: specifies how strictly SQL is parsed. The default value is *LENIENT*
-- `kylin.query.calcite.extras-props.caseSensitive`: whether identifiers are case sensitive. The default value is *TRUE*
-- `kylin.query.calcite.extras-props.unquotedCasing`: optional values include `UNCHANGED`, `TO_UPPER` and `TO_LOWER`. The default value is *TO_UPPER*, that is, all uppercase
-- `kylin.query.calcite.extras-props.quoting`: specifies the quoting style. Optional values include `DOUBLE_QUOTE`, `BACK_TICK` and `BRACKET`. The default value is *DOUBLE_QUOTE*
-- `kylin.query.statement-cache-max-num`: specifies the maximum number of cached PreparedStatements. The default value is 50000
-- `kylin.query.statement-cache-max-num-per-key`: specifies the maximum number of PreparedStatements per key cache. The default value is 50.
-- `kylin.query.enable-dict-enumerator`: whether to enable the dictionary enumerator. The default value is *FALSE*
-- `kylin.query.enable-dynamic-column`: whether to enable dynamic columns. The default value is *FALSE*; set to *TRUE* to support querying the number of non-NULL rows of a column
-
-
-
-### Fuzzy Query {#fuzzy}
-
-- `kylin.storage.hbase.max-fuzzykey-scan`: specifies the threshold for the scanned fuzzy key. If the value is exceeded, the fuzzy key will not be scanned. The default value is 200.
-- `kylin.storage.hbase.max-fuzzykey-scan-split`: split the large fuzzy key set to reduce the number of fuzzy keys per scan. The default value is 1
-- `kylin.storage.hbase.max-visit-scanrange`: the default value is 1000000
-
-
-
-### Query Cache {#cache-config}
-
-- `kylin.query.cache-enabled`: whether to enable caching. The default value is TRUE
-- `kylin.query.cache-threshold-duration`: queries whose duration exceeds this threshold are saved in the cache. The default value is 2000 (ms).
-- `kylin.query.cache-threshold-scan-count`: queries whose scanned row count exceeds this threshold are saved in the cache. The default value is 10240 (rows).
-- `kylin.query.cache-threshold-scan-bytes`: queries whose scanned bytes exceed this threshold are saved in the cache. The default value is 1048576 (bytes).
-
-
-
-### Query Limits {#query-limit}
-
-- `kylin.query.timeout-seconds`: specifies the query timeout in seconds. The default value is 0, that is, no timeout limit on queries. If the value is less than 60, it will be set to 60 seconds.
-- `kylin.query.timeout-seconds-coefficient`: specifies the coefficient of the query timeout seconds. The default value is 0.5.
-- `kylin.query.max-scan-bytes`: specifies the maximum bytes scanned by the query. The default value is 0, that is, there is no limit.
-- `kylin.storage.partition.max-scan-bytes`: specifies the maximum number of bytes for the query scan. The default value is 3221225472 (bytes), which is 3GB.
-- `kylin.query.max-return-rows`: specifies the maximum number of rows returned by the query. The default value is 5000000.
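-
-For example, a deployment that caps each query at 5 minutes while keeping the default 3 GB per-partition scan limit could set (the timeout value is illustrative):
-
-```properties
-kylin.query.timeout-seconds=300
-kylin.storage.partition.max-scan-bytes=3221225472
-```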
-
-
-
-### Bad Query {#bad-query}
-
-The value of `kylin.query.timeout-seconds` must be greater than 60 or equal to 0; the maximum value of `kylin.query.timeout-seconds-coefficient` is the upper limit of double. The product of the two properties is the interval for detecting bad queries; if it equals 0, it is set to 60 seconds, and its maximum value is the upper limit of int.
-
-- `kylin.query.badquery-stacktrace-depth`: specifies the depth of stack trace. The default value is 10.
-- `kylin.query.badquery-history-number`: specifies the showing number of bad query history. The default value is 50.
-- `kylin.query.badquery-alerting-seconds`: the default value is 90. If a query runs longer than this value, Kylin first logs the query (duration, project, thread, user, query id); whether the recent query is saved depends on `kylin.query.badquery-persistent-enabled`. It then records the stack trace, whose depth is controlled by `kylin.query.badquery-stacktrace-depth`, for later analysis
-- `kylin.query.badquery-persistent-enabled`: the default value is true; it saves recent bad queries and cannot be overridden at the Cube level
-
-
-### Query Pushdown		{#query-pushdown}
-
-- `kylin.query.pushdown.runner-class-name`: set to *org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl* to enable query pushdown
-- `kylin.query.pushdown.jdbc.url`: specifies JDBC URL
-- `kylin.query.pushdown.jdbc.driver`: specifies JDBC driver class name. The default value is *org.apache.hive.jdbc.HiveDriver*
-- `kylin.query.pushdown.jdbc.username`: specifies the Username of the JDBC database. The default value is *hive*
-- `kylin.query.pushdown.jdbc.password`: specifies the JDBC password for the database
-- `kylin.query.pushdown.jdbc.pool-max-total`: specifies the maximum number of connections to the JDBC connection pool. The default value is 8.
-- `kylin.query.pushdown.jdbc.pool-max-idle`: specifies the maximum number of idle connections for the JDBC connection pool. The default value is 8.
-- `kylin.query.pushdown.jdbc.pool-min-idle`: the default value is 0
-- `kylin.query.pushdown.update-enabled`: specifies whether to enable update in Query Pushdown. The default value is *FALSE*
-- `kylin.query.pushdown.cache-enabled`: whether to enable the cache of the pushdown query to improve the query efficiency of the same query. The default value is *FALSE*
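-
-A sketch of pushdown to HiveServer2, combining the defaults above (the JDBC URL and credentials are placeholders):
-
-```properties
-kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
-kylin.query.pushdown.jdbc.url=jdbc:hive2://localhost:10000/default
-kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
-kylin.query.pushdown.jdbc.username=hive
-kylin.query.pushdown.jdbc.password=
-```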
-
-> Note: For more information, please refer to [Query Pushdown](/docs/tutorial/query_pushdown.html)
-
-
-
-### Query rewriting {#convert-sql}
-
-- `kylin.query.force-limit`: shortens query duration by forcing a LIMIT clause onto `select *` statements. The default value is *-1*; when set to a positive integer, such as 1000, that value is applied to the LIMIT clause, so the query is eventually converted to `select * from fact_table limit 1000`
-- `kylin.storage.limit-push-down-enabled`: the default value is *TRUE*, set to *FALSE* to close the limit-pushdown of storage layer
-- `kylin.query.flat-filter-max-children`: specifies the maximum number of filters when flatting filter. The default value is 500000
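-
-To make the force-limit behavior concrete (the limit value is the example used above):
-
-```properties
-# With this setting, "select * from fact_table" is executed as
-# "select * from fact_table limit 1000"
-kylin.query.force-limit=1000
-```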
-
-
-
-### Collect Query Metrics to JMX {#jmx-metrics}
-
-- `kylin.server.query-metrics-enabled`: the default value is *FALSE*, set to *TRUE* to collect query metrics to JMX
-
-> Note: For more information, please refer to [JMX](https://www.oracle.com/technetwork/java/javase/tech/javamanagement-140525.html)
-
-
-
-### Collect Query Metrics to dropwizard {#dropwizard-metrics}
-
-- `kylin.server.query-metrics2-enabled`: the default value is *FALSE*, set to *TRUE* to collect query metrics into dropwizard
-
-> Note: For more information, please refer to [dropwizard](https://metrics.dropwizard.io/4.0.0/)
-
-
-
-### Security Configuration {#kylin-security}
-
-This section introduces Kylin security-related configuration.
-
-
-
-### Integrated LDAP for SSO {#ldap-sso}
-
-- `kylin.security.profile`: specifies the way of security authentication, optional values include `ldap`, `testing`, `saml`, it should be set to `ldap` when integrating LDAP for SSO
-- `kylin.security.ldap.connection-server`: specifies LDAP server, such as *ldap://ldap_server:389*
-- `kylin.security.ldap.connection-username`: specifies LDAP username
-- `kylin.security.ldap.connection-password`: specifies LDAP password
-- `kylin.security.ldap.user-search-base`: specifies the scope of users synced to Kylin
-- `kylin.security.ldap.user-search-pattern`: specifies the username for the login verification match
-- `kylin.security.ldap.user-group-search-base`: specifies the scope of the user group synchronized to Kylin
-- `kylin.security.ldap.user-group-search-filter`: specifies the type of user synced to Kylin
-- `kylin.security.ldap.service-search-base`: needs to be specified when a service account is required to access Kylin
-- `kylin.security.ldap.service-search-pattern`: needs to be specified when a service account is required to access Kylin
-- `kylin.security.ldap.service-group-search-base`: needs to be specified when a service account is required to access Kylin
-- `kylin.security.acl.admin-role`: map an LDAP group to an admin role (group name case sensitive)
-- `kylin.server.auth-user-cache.expire-seconds`: specifies LDAP user information cache time, default is 300(s)
-- `kylin.server.auth-user-cache.max-entries`: specifies maximum number of LDAP users, default is 100
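-
-An illustrative LDAP wiring; the server address, bind DN, search base, filter and group name all depend on your directory layout and are placeholders here:
-
-```properties
-kylin.security.profile=ldap
-kylin.security.ldap.connection-server=ldap://ldap_server:389
-kylin.security.ldap.connection-username=cn=manager,dc=example,dc=com
-kylin.security.ldap.connection-password=your_password
-kylin.security.ldap.user-search-base=ou=people,dc=example,dc=com
-kylin.security.ldap.user-search-pattern=(&(cn={0}))
-# Maps an LDAP group (placeholder name, case sensitive) to the admin role
-kylin.security.acl.admin-role=kylin-admin-group
-```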
-
-
-
-### Integrate with Apache Ranger {#ranger}
-
-- `kylin.server.external-acl-provider=org.apache.ranger.authorization.kylin.authorizer.RangerKylinAuthorizer`
-
-> Note: For more information, please refer to [How to integrate the Kylin plugin in the installation documentation for Ranger](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin)
-
-
-
-### Enable ZooKeeper ACL {#zookeeper-acl}
-
-- `kylin.env.zookeeper-acl-enabled`: enables the ZooKeeper ACL to prevent unauthorized users from accessing Znodes and to reduce the risk of bad operations resulting from this. The default value is *FALSE*
-- `kylin.env.zookeeper.zk-auth`: uses username:password as the ACL identifier. The default value is *digest:ADMIN:KYLIN*
-- `kylin.env.zookeeper.zk-acl`: uses a single ID as the ACL identifier. The default value is *world:anyone:rwcda*, where *anyone* means anyone
-
-### Distributed query cache with Memcached {#distributed-cache}
-
-From v2.6.0, Kylin can use Memcached as the distributed cache, and also there are some improvements on the cache policy ([KYLIN-2895](https://issues.apache.org/jira/browse/KYLIN-2895)). To enable these new features, you need to do the following steps:
-
-1. Install Memcached (latest v1.5.12) on one or multiple nodes; you can install it on all Kylin nodes if resources allow;
-
-2. Modify the applicationContext.xml under the $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes directory, commenting out the following code:
-{% highlight Groff markup %}
-<bean id="ehcache"
-      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
-      p:configLocation="classpath:ehcache-test.xml" p:shared="true"/>
-
-<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"
-      p:cacheManager-ref="ehcache"/>
-{% endhighlight %}
-Uncomment the following code:
-{% highlight Groff markup %}
-<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
-      p:configLocation="classpath:ehcache-test.xml" p:shared="true"/>
-
-<bean id="remoteCacheManager" class="org.apache.kylin.cache.cachemanager.MemcachedCacheManager" />
-<bean id="localCacheManager" class="org.apache.kylin.cache.cachemanager.InstrumentedEhCacheCacheManager"
-      p:cacheManager-ref="ehcache"/>
-<bean id="cacheManager" class="org.apache.kylin.cache.cachemanager.RemoteLocalFailOverCacheManager" />
-
-<bean id="memcachedCacheConfig" class="org.apache.kylin.cache.memcached.MemcachedCacheConfig">
-    <property name="timeout" value="500" />
-    <property name="hosts" value="${kylin.cache.memcached.hosts}" />
-</bean>
-{% endhighlight %}
-The value of `${kylin.cache.memcached.hosts}` in applicationContext.xml is from the value of `kylin.cache.memcached.hosts` in conf/kylin.properties. 
-
-3. Add the following parameters to `conf/kylin.properties`:
-{% highlight Groff markup %}
-kylin.query.cache-enabled=true
-kylin.query.lazy-query-enabled=true
-kylin.query.cache-signature-enabled=true
-kylin.query.segment-cache-enabled=true
-kylin.cache.memcached.hosts=memcached1:11211,memcached2:11211,memcached3:11211
-{% endhighlight %}
-
-- `kylin.query.cache-enabled` controls the on/off of the query cache; its default value is `true`.
-- `kylin.query.lazy-query-enabled`: whether to lazily answer queries that are sent repeatedly within a short time (hold the query until the previous one returns, then reuse the result). The default value is `false`. 
-- `kylin.query.cache-signature-enabled`: whether to use the signature of a query to determine the cache's validity. The signature is calculated from the project's cube/hybrid list, their last build times and other information (at the moment the cache is persisted). Its default value is `false`; setting it to `true` is highly recommended.
-- `kylin.query.segment-cache-enabled`: whether to cache segment-level returned data (from HBase storage) in Memcached. This feature is mainly for cubes that are built very frequently (e.g., a streaming cube whose last build time changes every couple of minutes, so the whole-SQL-statement-level cache is very likely to be cleaned; in this case, the by-segment cache can reduce I/O). It only works when Memcached is configured; the default value is `false`.
-- `kylin.cache.memcached.hosts`: a list of Memcached nodes and ports, separated by commas.
\ No newline at end of file
diff --git a/website/_docs30/install/index.cn.md b/website/_docs30/install/index.cn.md
deleted file mode 100644
index 8cdb228..0000000
--- a/website/_docs30/install/index.cn.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-layout: docs30-cn
-title:  "Installation Guide"
-categories: install
-permalink: /cn/docs30/install/index.html
----
-
-### Software Requirements
-
-* Hadoop: 2.7+, 3.1+ (since v2.5)
-* Hive: 0.13 - 1.2.1+
-* HBase: 1.1+, 2.0 (since v2.5)
-* Spark (optional) 2.3.0+
-* Kafka (optional) 1.0.0+ (since v2.5)
-* JDK: 1.8+ (since v2.5)
-* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
-
-Tests passed on Hortonworks HDP 2.2-2.6 and 3.0, Cloudera CDH 5.7-5.11 and 6.0, AWS EMR 5.7-5.10, Azure HDInsight 3.5-3.6.
-
-We recommend you to try out Kylin or develop with an integrated sandbox, such as the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and make sure it has at least 10 GB of memory. When configuring the sandbox, we recommend the Bridged Adapter model instead of the NAT model.
-
-
-
-### Hardware Requirements
-
-The minimum configuration of a server running Kylin is 4 core CPU, 16 GB RAM and 100 GB disk. For high-load scenarios, a 24-core CPU, 64 GB RAM or higher is recommended.
-
-### Hadoop Environment
-
-Kylin relies on a Hadoop cluster to process large data sets. You need to prepare a Hadoop cluster with HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and other services for Kylin to run.
-Kylin can be launched on any node of the Hadoop cluster. For convenience, you can run Kylin on the master node. For better stability, we recommend deploying Kylin on a clean Hadoop client node, where the Hive, HBase, HDFS and other command lines are installed and the client configurations (such as `core-site.xml`, `hive-site.xml`, `hbase-site.xml` and others) are reasonably configured and can be automatically synchronized with the other nodes.
-
-The Linux account running Kylin must have access to the Hadoop cluster, including the permission to create/write HDFS folders, Hive tables, HBase tables, and submit MapReduce jobs. 
-
-
-
-### Kylin Installation
-
-1. Download a binary package suitable for your Hadoop version from the [Apache Kylin Download Site](https://kylin.apache.org/download/). For example, Kylin 2.5.0 for HBase 1.x can be downloaded with the following commands:
-
-```shell
-cd /usr/local/
-wget http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz
-```
-
-2. Unzip the tarball and set the environment variable `$KYLIN_HOME` to point to the Kylin folder.
-
-```shell
-tar -zxvf apache-kylin-2.5.0-bin-hbase1x.tar.gz
-cd apache-kylin-2.5.0-bin-hbase1x
-export KYLIN_HOME=`pwd`
-```
-
-From v2.6.1, Kylin no longer includes the Spark binary; you need to download Spark separately, and then set the `SPARK_HOME` environment variable to the Spark installation directory: 
-
-```shell
-export SPARK_HOME=/path/to/spark
-```
-
-or download it with the script:
-
-```shell
-$KYLIN_HOME/bin/download-spark.sh
-```
-
-### Kylin tarball structure
-* `bin`: shell scripts to start/stop Kylin, backup/restore Kylin metadata, and utilities such as port checks and fetching Hive/HBase dependencies;
-* `conf`: XML configuration files for Hadoop jobs; the function of these files is described on the [configuration page](/docs/install/configuration.html)
-* `lib`: jar files for external applications, such as the Hadoop job jar, JDBC driver, HBase coprocessor, etc.
-* `meta_backups`: default backup folder after running `bin/metastore.sh backup`;
-* `sample_cube`: files to create the sample Cube and its tables.
-* `spark`: the bundled spark.
-* `tomcat`: the bundled tomcat, used to start the Kylin service.
-* `tool`: jar files for running command line tools.
-
-
-### Checking the operating environment
-
-Kylin runs on a Hadoop cluster and has certain requirements for the version, access rights and CLASSPATH of each component. To avoid various environment problems, you can run the `$KYLIN_HOME/bin/check-env.sh` script to check your environment; if there is any problem, the script will print a detailed error message. If there is no error message, your environment is suitable for Kylin to run.
-
-
-
-### Start Kylin
-
-Run the `$KYLIN_HOME/bin/kylin.sh start` script to start Kylin. The console output is as follows:
-
-```
-Retrieving hadoop conf dir...
-KYLIN_HOME is set to /usr/local/apache-kylin-2.5.0-bin-hbase1x
-......
-A new Kylin instance is started by root. To stop it, run 'kylin.sh stop'
-Check the log at /usr/local/apache-kylin-2.5.0-bin-hbase1x/logs/kylin.log
-Web UI is at http://<hostname>:7070/kylin
-```
-
-
-
-### Using Kylin
-
-Once Kylin is started, you can access it via the browser at `http://<hostname>:7070/kylin`,
-where `<hostname>` is the machine name, IP address or domain name, and the default port is 7070.
-The initial username and password are `ADMIN/KYLIN`.
-After the server starts, you can view the runtime log in `$KYLIN_HOME/logs/kylin.log`.
-
-
-### Stop Kylin
-
-Run the `$KYLIN_HOME/bin/kylin.sh stop` script to stop Kylin. The console output is as follows:
-
-```
-Retrieving hadoop conf dir...
-KYLIN_HOME is set to /usr/local/apache-kylin-2.5.0-bin-hbase1x
-Stopping Kylin: 25964
-Stopping in progress. Will check after 2 secs again...
-Kylin with pid 25964 has been stopped.
-```
-
-You can run `ps -ef | grep kylin` to check whether the Kylin process has stopped.
-
-### HDFS folder structure
-Kylin generates files on HDFS. The root folder is "/kylin/", and the metadata table name of the Kylin cluster is used as the second-level folder name, by default "kylin_metadata" (can be customized in `conf/kylin.properties`).
-
-Usually, there are these kinds of subfolders under `/kylin/kylin_metadata`: `cardinality`, `coprocessor`, `kylin-job_id`, `resources`, `jdbc-resources`. 
-1. `cardinality`: when Kylin loads a Hive table, it starts an MR job to calculate the cardinality of each column, and the output is staged in this folder. It can be cleaned safely.
-2. `coprocessor`: the folder where Kylin stores the HBase coprocessor jar; please do not delete it.
-3. `kylin-job_id`: the data storage folder of the cube build process; please do not delete it. If a cleanup is needed, follow the [storage cleanup guide](/docs/howto/howto_cleanup_storage.html). 
-4. `resources`: Kylin stores metadata in HBase by default, but files that are too large (such as dictionaries or snapshots) are transferred to this HDFS folder; please do not delete it. If a cleanup is needed, follow [cleanup resources from metadata](/docs/howto/howto_backup_metadata.html) 
-5. `jdbc-resources`: same nature as above, only appears when MySQL is used as the metadata storage.
diff --git a/website/_docs30/install/index.md b/website/_docs30/install/index.md
deleted file mode 100644
index d968ce1..0000000
--- a/website/_docs30/install/index.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-layout: docs30
-title:  "Installation Guide"
-categories: install
-permalink: /docs30/install/index.html
----
-
-### Software Requirements
-
-* Hadoop: 2.7+, 3.1+ (since v2.5)
-* Hive: 0.13 - 1.2.1+
-* HBase: 1.1+, 2.0 (since v2.5)
-* Spark (optional) 2.3.0+
-* Kafka (optional) 1.0.0+ (since v2.5)
-* JDK: 1.8+ (since v2.5)
-* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
-
-Tests passed on Hortonworks HDP 2.2-2.6 and 3.0, Cloudera CDH 5.7-5.11 and 6.0, AWS EMR 5.7-5.10, Azure HDInsight 3.5-3.6.
-
-We recommend you to try out Kylin or develop it using the integrated sandbox, such as [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and make sure it has at least 10 GB of memory. When configuring a sandbox, we recommend that you use the Bridged Adapter model instead of the NAT model.
-
-
-
-### Hardware Requirements
-
-The minimum configuration of a server running Kylin is 4 core CPU, 16 GB RAM and 100 GB disk. For high-load scenarios, a 24-core CPU, 64 GB RAM or higher is recommended.
-
-
-
-### Hadoop Environment
-
-Kylin relies on Hadoop clusters to handle large data sets. You need to prepare a Hadoop cluster with HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and other services for Kylin to run.
-Kylin can be launched on any node in a Hadoop cluster. For convenience, you can run Kylin on the master node. For better stability, it is recommended to deploy Kylin on a clean Hadoop client node with Hive, HBase, HDFS and other command lines installed and client configuration (such as `core-site.xml`, `hive-site.xml`, `hbase-site.xml` and others) are also reasonably configured and can be automatically synchronized with other nodes.
-
-Linux accounts running Kylin must have access to the Hadoop cluster, including the permission to create/write HDFS folders, Hive tables, HBase tables, and submit MapReduce tasks.
-
-
-
-### Kylin Installation
-
-1. Download a binary package for your Hadoop version from the [Apache Kylin Download Site](https://kylin.apache.org/download/). For example, Kylin 2.5.0 for HBase 1.x can be downloaded from the following command line:
-
-```shell
-cd /usr/local/
-wget http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz
-```
-
-2. Unzip the tarball and configure the environment variable `$KYLIN_HOME` to the Kylin folder.
-
-```shell
-tar -zxvf apache-kylin-2.5.0-bin-hbase1x.tar.gz
-cd apache-kylin-2.5.0-bin-hbase1x
-export KYLIN_HOME=`pwd`
-```
-
-From v2.6.1, Kylin will not ship the Spark binary anymore; you need to install Spark separately, and then point the `SPARK_HOME` environment variable to it: 
-
-```shell
-export SPARK_HOME=/path/to/spark
-```
-
-or run the script to download it:
-
-```shell
-$KYLIN_HOME/bin/download-spark.sh
-```
-
-### Kylin tarball structure
-* `bin`: shell scripts to start/stop the Kylin service, backup/restore metadata, as well as some utility scripts.
-* `conf`: XML configuration files. The function of these xml files can be found in the [configuration page](/docs/install/configuration.html)
-* `lib`: Kylin jar files for external use, like the Hadoop job jar, JDBC driver, HBase coprocessor jar, etc.
-* `meta_backups`: default backup folder when running `bin/metastore.sh backup`;
-* `sample_cube`: files to create the sample cube and its tables.
-* `spark`: the default spark binary shipped with Kylin.
-* `tomcat`: the tomcat web server that runs the Kylin application. 
-* `tool`: the jar file for running utility CLIs. 
-
-### Checking the operating environment
-
-Kylin runs on a Hadoop cluster and has certain requirements for the version, access rights, and CLASSPATH of each component. To avoid various environmental problems, you can run the script `$KYLIN_HOME/bin/check-env.sh` to test your environment; if there are any problems, the script will print a detailed error message. If there is no error message, your environment is suitable for Kylin to run.
-
-
-### Start Kylin
-
-Run the script, `$KYLIN_HOME/bin/kylin.sh start` , to start Kylin. The interface output is as follows:
-
-```
-Retrieving hadoop conf dir...
-KYLIN_HOME is set to /usr/local/apache-kylin-2.5.0-bin-hbase1x
-......
-A new Kylin instance is started by root. To stop it, run 'kylin.sh stop'
-Check the log at /usr/local/apache-kylin-2.5.0-bin-hbase1x/logs/kylin.log
-Web UI is at http://<hostname>:7070/kylin
-```
-
-### Using Kylin
-
-Once Kylin is launched, you can access it via the browser at `http://<hostname>:7070/kylin`,
-where `<hostname>` is the IP address or domain name, and the default port is 7070.
-The initial username and password are `ADMIN/KYLIN`.
-After the server is started, you can view the runtime log, `$KYLIN_HOME/logs/kylin.log`.
-
-
-### Stop Kylin
-
-Run the `$KYLIN_HOME/bin/kylin.sh stop` script to stop Kylin. The console output is as follows:
-
-```
-Retrieving hadoop conf dir...
-KYLIN_HOME is set to /usr/local/apache-kylin-2.5.0-bin-hbase1x
-Stopping Kylin: 25964
-Stopping in progress. Will check after 2 secs again...
-Kylin with pid 25964 has been stopped.
-```
-
-You can run `ps -ef | grep kylin` to see if the Kylin process has stopped.
-
-
-### HDFS folder structure
-Kylin will generate files on HDFS. The root folder is "/kylin/", but will have the second level folder for each Kylin cluster, named with the metadata table name, by default it is "kylin_metadata" (can be customized in `conf/kylin.properties`).
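-
-For example, the folder name comes from the prefix of the metadata URL; the default below is what yields `/kylin/kylin_metadata` (a sketch, assuming the HBase metastore):
-
-```properties
-kylin.metadata.url=kylin_metadata@hbase
-```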
-
-Usually, there are at least these four kinds of directories under `/kylin/kylin_metadata`: `cardinality`, `coprocessor`, `kylin-job_id`, `resources`. 
-1. `cardinality`: the output folder of the cardinality calculation job when Kylin loads a Hive table. It can be cleaned when there is no job running;
-2. `coprocessor`: the folder that Kylin puts HBase coprocessor jar file. Please do not delete it. 
-3. `kylin-job_id`: the cubing job's output folder. Please keep them; if need a cleanup, follow the [storage cleanup guide](/docs/howto/howto_cleanup_storage.html). 
-4. `resources`: the metadata entries that too big to persisted in HBase (e.g, a dictionary or table snapshot); Please do not delete it; if need a cleanup, follow the [cleanup resources from metadata](/docs/howto/howto_backup_metadata.html) 
-5. `jdbc-resources`: similar to `resources`, only appears when using MySQL as the metadata storage.
diff --git a/website/_docs30/install/kylin_aws_emr.cn.md b/website/_docs30/install/kylin_aws_emr.cn.md
deleted file mode 100644
index 65ba9ba..0000000
--- a/website/_docs30/install/kylin_aws_emr.cn.md
+++ /dev/null
@@ -1,191 +0,0 @@
----
-layout: docs30-cn
-title:  "Install Kylin on AWS EMR"
-categories: install
-permalink: /cn/docs30/install/kylin_aws_emr.html
----
-
-This document introduces how to run Kylin on EMR.
-
-
-
-### Recommended Version
-* AWS EMR 5.7 (for EMR 5.8 and above, please refer to [KYLIN-3129](https://issues.apache.org/jira/browse/KYLIN-3129))
-* Apache Kylin v2.2.0 or above for HBase 1.x
-
-
-
-### Start EMR cluster
-
-Launch an EMR cluster with the AWS web console, command line or API. Select **HBase** in the applications, as Kylin needs the HBase service. 
-
-You can choose HDFS or S3 as the storage for HBase, depending on whether you need the Cube data to be persisted after shutting down the cluster. EMR HDFS uses the local disk of the EC2 instances; the data is erased when the cluster is stopped, so Kylin metadata and Cube data would be lost.
-
-If you use S3 as HBase's storage, you need to customize the configuration for `hbase.rpc.timeout`, because the bulk load to S3 is a copy operation; when the data size is large, the HBase region server needs to wait much longer to finish than on HDFS.
-
-```
-[  {
-    "Classification": "hbase-site",
-    "Properties": {
-      "hbase.rpc.timeout": "3600000",
-      "hbase.rootdir": "s3://yourbucket/EMRROOT"
-    }
-  },
-  {
-    "Classification": "hbase",
-    "Properties": {
-      "hbase.emr.storageMode": "s3"
-    }
-  }
-]
-```
-
-
-
-### Install Kylin
-
-When the EMR cluster is in the "Waiting" status, you can SSH into the master node, download Kylin and then uncompress the tarball:
-
-```sh
-sudo mkdir /usr/local/kylin
-sudo chown hadoop /usr/local/kylin
-cd /usr/local/kylin
-wget http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz
-tar -zxvf apache-kylin-2.5.0-bin-hbase1x.tar.gz
-```
-
-### Configure Kylin
-
-Before starting Kylin, you need to do a couple of configurations:
-
-- Copy the `hbase.zookeeper.quorum` property from `/etc/hbase/conf/hbase-site.xml` to `$KYLIN_HOME/conf/kylin_job_conf.xml`, for example:
-
-
-```xml
-<property>
-  <name>hbase.zookeeper.quorum</name>
-  <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
-</property>
-```
-
-- Use HDFS as `kylin.env.hdfs-working-dir` (recommended)
-
-EMR recommends **using HDFS for intermediate data storage while the cluster is running, and Amazon S3 only to input the initial data and output the final results**. Kylin's `hdfs-working-dir` stores the intermediate data of Cube builds, cuboid data files and some metadata files (such as `/dictionary` and `/table snapshots`, which are not convenient to store in HBase), so it is best to configure HDFS for it. 
-
-If using HDFS as the Kylin working directory, you don't need to change anything, as EMR's default file system is HDFS:
-
-```properties
-kylin.env.hdfs-working-dir=/kylin
-```
-
-Before you shutdown/restart the cluster, you must backup the data under the `/kylin` path on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lose data and be unable to recover the cluster later.
-
-- Use S3 as `kylin.env.hdfs-working-dir`
-
-If you want to use S3 as storage (assuming HBase is also on S3), you need to configure the following parameters:
-
-```properties
-kylin.env.hdfs-working-dir=s3://yourbucket/kylin
-kylin.storage.hbase.cluster-fs=s3://yourbucket
-kylin.source.hive.redistribute-flat-table=false
-```
-
-The intermediate files and the HFiles will all be written to S3. The build performance will be slower than on HDFS.
-To understand the difference between S3 and HDFS well, please refer to the following two articles from AWS:
-
-[Input and Output Errors](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
-[Are you having trouble loading data to or from Amazon S3 into Hive](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-hive.html#emr-troubleshoot-error-hive-3)
-
-
-- Hadoop configurations
-
-According to [emr-troubleshoot-errors-io](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html), some Hadoop configurations need to be applied for better performance and data consistency on S3. 
-
-```xml
-<property>
-  <name>io.file.buffer.size</name>
-  <value>65536</value>
-</property>
-<property>
-  <name>mapred.map.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapred.reduce.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.map.speculative</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.reduce.speculative</name>
-  <value>false</value>
-</property>
-
-```
-
-
-- Create the working directory folder if it does not exist
-
-```sh
-hadoop fs -mkdir /kylin 
-```
-
-or
-
-```sh
-hadoop fs -mkdir s3://yourbucket/kylin
-```
-
-### Start Kylin
-
-Starting is the same as on ordinary Hadoop:
-
-```sh
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.2.0-bin
-$KYLIN_HOME/bin/sample.sh
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-Don't forget to enable access to port 7070 in the security group of the EMR master - "ElasticMapReduce-master", or connect to the master node via SSH; then you can access Kylin's Web GUI at `http://<master-dns>:7070/kylin`.
-
-Build the sample Cube, and run queries when the Cube is ready. You can browse S3 to see whether the data is safely persisted.
-
-
-
-### Spark Configuration
-
-EMR's Spark version is likely inconsistent with the version Kylin was compiled against, so you usually cannot directly use the Spark packaged by EMR for Kylin's jobs. You need to set the "SPARK_HOME" environment variable to Kylin's Spark subdirectory (KYLIN_HOME/spark) before starting Kylin. In addition, to access files on S3 or EMRFS from Spark, you need to copy EMR's extension classes from the EMR directories into Kylin's Spark.
-
-```sh
-export SPARK_HOME=$KYLIN_HOME/spark
-
-cp /usr/lib/hadoop-lzo/lib/*.jar $KYLIN_HOME/spark/jars/
-cp /usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-*.jar $KYLIN_HOME/spark/jars/
-cp /usr/lib/hadoop/hadoop-common*-amzn-*.jar $KYLIN_HOME/spark/jars/
-
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-You can also refer to EMR Spark's spark-defaults to set Kylin's Spark configuration, for a better fit to the cluster resources.
-
-
-
-### Shut down the EMR cluster
-
-Before shutting down the EMR cluster, we recommend you backup the Kylin metadata and upload it to S3.
-
-In order to not lose data that has not been written to Amazon S3 when shutting down the Amazon EMR cluster, the MemStore cache needs to be flushed to Amazon S3 to write new store files. You can run the shell script provided on the EMR cluster to do this. 
-
-```sh
-bash /usr/lib/hbase/bin/disable_all_tables.sh
-```
-
-To restart a cluster with the same HBase data, specify the same Amazon S3 location as the previous cluster in the AWS Management Console or with the `hbase.rootdir` configuration property. For more EMR HBase information, refer to [HBase on Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hbase-s3.html)
-
-	
-## Deploy Kylin on a dedicated EC2 
-
-It is recommended to run Kylin on a dedicated client node (not the master, core or task node). Launch a separate EC2 instance in the same VPC and subnet as your EMR, copy the Hadoop clients from the master node to it, and then install Kylin on it. This improves the stability of Kylin itself as well as of the services on the master node. 
-	
\ No newline at end of file
diff --git a/website/_docs30/install/kylin_aws_emr.md b/website/_docs30/install/kylin_aws_emr.md
deleted file mode 100644
index bcf9ac9..0000000
--- a/website/_docs30/install/kylin_aws_emr.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-layout: docs30
-title:  "Install Kylin on AWS EMR"
-categories: install
-permalink: /docs30/install/kylin_aws_emr.html
----
-
-This document introduces how to run Kylin on EMR.
-
-
-
-### Recommended Version
-
-* AWS EMR 5.7 (for EMR 5.8 and above, please refer to [KYLIN-3129](https://issues.apache.org/jira/browse/KYLIN-3129))
-* Apache Kylin v2.2.0 or above for HBase 1.x
-
-
-
-### Start EMR cluster
-
-Launch an EMR cluster with the AWS web console, command line or API. Select *HBase* among the applications, as Kylin needs the HBase service.
-
-You can select "HDFS" or "S3" as the storage for HBase, depending on whether you need the Cube data to be persisted after shutting down the cluster. EMR HDFS uses the local disks of the EC2 instances, which are erased when the cluster is stopped, so Kylin metadata and Cube data could be lost.
-
-If you use S3 as HBase's storage, you need to customize the `hbase.rpc.timeout` configuration, because the bulk load to S3 is a copy operation; when the data size is huge, the HBase region server needs to wait much longer for it to finish than on HDFS.
-
-```
-[  {
-    "Classification": "hbase-site",
-    "Properties": {
-      "hbase.rpc.timeout": "3600000",
-      "hbase.rootdir": "s3://yourbucket/EMRROOT"
-    }
-  },
-  {
-    "Classification": "hbase",
-    "Properties": {
-      "hbase.emr.storageMode": "s3"
-    }
-  }
-]
-```
-
-
-
-### Install Kylin
-
-When the EMR cluster is in the "Waiting" status, you can SSH into its master node, download Kylin, and then uncompress the tarball:
-
-```sh
-sudo mkdir /usr/local/kylin
-sudo chown hadoop /usr/local/kylin
-cd /usr/local/kylin
-wget http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz
-tar -zxvf apache-kylin-2.5.0-bin-hbase1x.tar.gz
-```
-
-
-
-### Configure Kylin
-
-Before starting Kylin, you need to do a couple of configurations:
-
-- Copy `hbase.zookeeper.quorum` property from `/etc/hbase/conf/hbase-site.xml` to `$KYLIN_HOME/conf/kylin_job_conf.xml`, like this:
-
-
-```xml
-<property>
-  <name>hbase.zookeeper.quorum</name>
-  <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
-</property>
-```
-
-- Use HDFS as `kylin.env.hdfs-working-dir` (Recommended)
-
-EMR recommends *using HDFS for intermediate data storage while the cluster is running, and Amazon S3 only for the initial input data and the final results*. Kylin's 'hdfs-working-dir' holds the intermediate data for Cube building, the cuboid files, and some metadata files (like dictionaries and table snapshots, which don't fit well in HBase); so it is best to configure HDFS for this.
-
-If using HDFS as the Kylin working directory, just leave the configuration unchanged, as EMR's default file system is HDFS:
-
-```properties
-kylin.env.hdfs-working-dir=/kylin
-```
-
-Before you shut down or restart the cluster, you must back up the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lose data and be unable to recover the cluster later.
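-
-For example, a minimal sketch (assuming the `s3-dist-cp` tool that ships with EMR is on the master node's PATH, and `s3://yourbucket` is a writable bucket):
-
-```sh
-# back up Kylin's HDFS working directory to S3 before stopping the cluster
-s3-dist-cp --src=hdfs:///kylin --dest=s3://yourbucket/kylin_backup
-```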
-
-- Use S3 as `kylin.env.hdfs-working-dir`
-
-If you want to use S3 as storage (assuming HBase is also on S3), you need to configure the following parameters:
-
-```properties
-kylin.env.hdfs-working-dir=s3://yourbucket/kylin
-kylin.storage.hbase.cluster-fs=s3://yourbucket
-kylin.source.hive.redistribute-flat-table=false
-```
-
-The intermediate files and the HFiles will all be written to S3, and the build performance will be slower than on HDFS. Make sure you have a good understanding of the differences between S3 and HDFS. Read the following articles from AWS:
-
-[Input and Output Errors](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
-[Are you having trouble loading data to or from Amazon S3 into Hive](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-hive.html#emr-troubleshoot-error-hive-3)
-
-
-- Hadoop configurations
-
-Some Hadoop configurations need to be applied for better performance and data consistency on S3, according to [emr-troubleshoot-errors-io](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
-
-```xml
-<property>
-  <name>io.file.buffer.size</name>
-  <value>65536</value>
-</property>
-<property>
-  <name>mapred.map.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapred.reduce.tasks.speculative.execution</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.map.speculative</name>
-  <value>false</value>
-</property>
-<property>
-  <name>mapreduce.reduce.speculative</name>
-  <value>false</value>
-</property>
-
-```
-
-
-- Create the working-dir folder if it doesn't exist
-
-```sh
-hadoop fs -mkdir /kylin 
-```
-
-or
-
-```sh
-hadoop fs -mkdir s3://yourbucket/kylin
-```
-
-### Start Kylin
-
-The startup is the same as on a normal Hadoop cluster:
-
-```sh
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.2.0-bin
-$KYLIN_HOME/bin/sample.sh
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-Don't forget to enable port 7070 access in the security group for the EMR master ("ElasticMapReduce-master"), or use an SSH tunnel to the master node; then you can access the Kylin Web GUI at http://\<master\-dns\>:7070/kylin
-
-Build the sample Cube, and then run queries when the Cube is ready. You can browse S3 to see whether the data is safely persisted.
-
-
-
-### Spark Configuration
-
-EMR's Spark version may be incompatible with the one Kylin was built with, so usually you cannot directly use EMR's Spark. You need to set the "SPARK_HOME" environment variable to Kylin's Spark folder (KYLIN_HOME/spark) before starting Kylin. To access files on S3 or EMRFS, you also need to copy EMR's implementation jars into that Spark.
-
-```sh
-export SPARK_HOME=$KYLIN_HOME/spark
-
-cp /usr/lib/hadoop-lzo/lib/*.jar $KYLIN_HOME/spark/jars/
-cp /usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-*.jar $KYLIN_HOME/spark/jars/
-cp /usr/lib/hadoop/hadoop-common*-amzn-*.jar $KYLIN_HOME/spark/jars/
-
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-You can also copy EMR's spark-defaults configuration into Kylin's Spark for better utilization of the cluster resources.
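-
-For example, a minimal sketch (assuming EMR keeps its Spark configuration in `/etc/spark/conf`, its usual location):
-
-```sh
-# reuse EMR's tuned spark-defaults for the Spark bundled with Kylin
-cp /etc/spark/conf/spark-defaults.conf $KYLIN_HOME/spark/conf/
-```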
-
-
-
-### Shut down EMR Cluster
-
-Before you shut down the EMR cluster, we suggest you take a backup of the Kylin metadata and upload it to S3.
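-
-For example, a minimal sketch (assuming the `aws` CLI is configured and `s3://yourbucket` is your backup bucket; `meta_backups` is Kylin's default backup folder):
-
-```sh
-# dump Kylin metadata locally, then copy the dump to S3
-$KYLIN_HOME/bin/metastore.sh backup
-aws s3 cp $KYLIN_HOME/meta_backups/ s3://yourbucket/kylin_meta_backups/ --recursive
-```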
-
-To shut down an Amazon EMR cluster without losing data that hasn't been written to Amazon S3, the MemStore cache needs to be flushed to Amazon S3 as new store files. To do this, you can run a shell script provided on the EMR cluster.
-
-```sh
-bash /usr/lib/hbase/bin/disable_all_tables.sh
-```
-
-To restart a cluster with the same HBase data, specify the same Amazon S3 location as the previous cluster either in the AWS Management Console or using the "hbase.rootdir" configuration property. For more information about EMR HBase, refer to [HBase on Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hbase-s3.html)
-
-
-	
-## Deploy Kylin in a dedicated EC2 
-
-Running Kylin on a dedicated client node (not a master, core or task node) is recommended. You can start a separate EC2 instance within the same VPC and subnet as your EMR cluster, copy the Hadoop clients from the master node to it, and then install Kylin on it. This improves the stability of the services on the master node as well as of Kylin itself.
-	
\ No newline at end of file
diff --git a/website/_docs30/install/kylin_cluster.cn.md b/website/_docs30/install/kylin_cluster.cn.md
deleted file mode 100644
index 1b44094..0000000
--- a/website/_docs30/install/kylin_cluster.cn.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs30-cn
-title:  "集群模式部署"
-categories: install
-permalink: /cn/docs30/install/kylin_cluster.html
----
-
-Kylin instances are stateless services, and the runtime state information is stored in the HBase metastore. For load balancing, you can start multiple Kylin instances that share one metastore, so that the nodes share the query load and back each other up, improving service availability. The following figure depicts a typical scenario of a Kylin cluster-mode deployment:
-![](/images/install/kylin_server_modes.png)
-
-
-
-### Kylin Cluster-Mode Deployment
-
-If you need to form a cluster with multiple Kylin nodes, make sure they use the same Hadoop cluster and HBase cluster. Then do the following in each node's configuration file `$KYLIN_HOME/conf/kylin.properties`:
-
-1. Configure the same `kylin.metadata.url` value, i.e., configure all Kylin nodes to use the same HBase metastore.
-2. Configure the Kylin node list `kylin.server.cluster-servers`, including all nodes (the current node included); when an event changes, the node receiving the change needs to notify all the other nodes (the current node included).
-3. Configure the running mode `kylin.server.mode` of the Kylin node; the value can be one of `all`, `job` and `query`, with `all` as the default.
-`job` mode means the service is only used for job scheduling, not for queries; `query` mode means the service is only used for queries, not for scheduling build jobs; `all` mode means the service is used for both job scheduling and SQL queries.
-
-> **Note:** By default only **one instance** is used for scheduling build jobs (i.e., `kylin.server.mode` is set to `all` or `job`).
-
-
-
-### Job Engine High Availability
-
-Since v2.0, Kylin supports running multiple job engines together; compared with the default single-engine configuration, multiple engines ensure high availability of job building.
-
-With multiple job engines, you can configure the role of several Kylin nodes as `job` or `all`. To avoid races between them, the distributed job lock needs to be enabled; please configure the following in `kylin.properties`:
-
-```properties
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock
-```
-Also remember to register the addresses of all job and query nodes in `kylin.server.cluster-servers`.
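-
-For example, a sketch with hypothetical hostnames (list every Kylin instance, the local one included):
-
-```properties
-kylin.server.cluster-servers=host1:7070,host2:7070,host3:7070
-```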
-
-
-
-### Installing a Load Balancer
-
-To send query requests to the cluster instead of a single node, you can deploy a load balancer such as [Nginx](http://nginx.org/en/), [F5](https://www.f5.com/) or [cloudlb](https://rubygems.org/gems/cloudlb/), so that clients communicate with the load balancer instead of with a specific Kylin instance.
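-
-For example, a minimal Nginx sketch (hostnames are hypothetical; health checks, session stickiness and timeouts are omitted):
-
-```
-upstream kylin {
-    server host1:7070;
-    server host2:7070;
-}
-server {
-    listen 80;
-    location /kylin {
-        proxy_pass http://kylin;
-    }
-}
-```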
-
-
-
-### Read/Write Separation Deployment
-
-For better stability and optimal performance, a read/write separation deployment is recommended, deploying Kylin over two clusters as follows:
-
-* A Hadoop cluster used for **Cube building**; this can be a large cluster shared with other applications;
-* An HBase cluster used for **SQL queries**; usually this cluster is dedicated to Kylin. It does not need as many nodes as the Hadoop cluster, and its HBase configuration can be optimized for the read-only nature of Kylin Cubes.
-
-This deployment strategy is the best deployment solution for production environments. For how to perform a read/write separation deployment, please refer to [Deploy Apache Kylin with Standalone HBase Cluster](/blog/2016/06/10/standalone-hbase-cluster/).
\ No newline at end of file
diff --git a/website/_docs30/install/kylin_cluster.md b/website/_docs30/install/kylin_cluster.md
deleted file mode 100644
index 98fb55b..0000000
--- a/website/_docs30/install/kylin_cluster.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-layout: docs30
-title:  "Deploy in Cluster Mode"
-categories: install
-permalink: /docs30/install/kylin_cluster.html
----
-
-
-Kylin instances are stateless services, and runtime state information is stored in the HBase metastore. For load balancing purposes, you can enable multiple Kylin instances that share a metastore, so that the nodes share the query load and back each other up, improving service availability. The following figure depicts a typical scenario for Kylin cluster mode deployment:
-![](/images/install/kylin_server_modes.png)
-
-
-
-### Kylin Node Configuration
-
-If you need to cluster multiple Kylin nodes, make sure they use the same Hadoop cluster and HBase cluster. Then do the following steps in each node's configuration file `$KYLIN_HOME/conf/kylin.properties`:
-
-1. Configure the same `kylin.metadata.url` value to configure all Kylin nodes to use the same HBase metastore.
-2. Configure the Kylin node list `kylin.server.cluster-servers`, including all nodes (the current node is also included). When the event changes, the node receiving the change needs to notify all other nodes (the current node is also included).
-3. Configure the running mode `kylin.server.mode` of the Kylin node. Optional values include `all`, `job`, `query`. The default value is *all*.
-The *job* mode means that the service is only used for job scheduling, not for queries; the *query* mode means that the service is only used for queries, not for scheduling jobs; the *all* mode means the service is used for both job scheduling and queries.
-
-> *Note*: By default, only *one instance* can be used for job scheduling (i.e., `kylin.server.mode` is set to `all` or `job`).
-
-
-
-### Enable Job Engine HA
-
-Since v2.0, Kylin supports multiple job engines running together, which is more extensible, available, and reliable than the default single job scheduler.
-
-To enable the distributed job scheduler, you need to set or update the configs in the `kylin.properties`:
-
-```properties
-kylin.job.scheduler.default=2
-kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperJobLock
-```
-
-Please add all job servers and query servers to `kylin.server.cluster-servers`.
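-
-For example, a sketch with hypothetical hostnames (list every Kylin instance, the local one included):
-
-```properties
-kylin.server.cluster-servers=host1:7070,host2:7070,host3:7070
-```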
-
-
-
-### Installing a load balancer
-
-To send query requests to a cluster instead of a single node, you can deploy a load balancer such as [Nginx](http://nginx.org/en/), [F5](https://www.f5.com/) or [cloudlb](https://rubygems.org/gems/cloudlb/), etc., so that clients communicate with the load balancer instead of with a specific Kylin instance.
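-
-For example, a minimal Nginx sketch (hostnames are hypothetical; health checks, session stickiness and timeouts are omitted):
-
-```
-upstream kylin {
-    server host1:7070;
-    server host2:7070;
-}
-server {
-    listen 80;
-    location /kylin {
-        proxy_pass http://kylin;
-    }
-}
-```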
-
-
-
-### Read and write separation deployment
-
-For better stability and optimal performance, it is recommended to perform a read-write separation deployment, deploying Kylin on two clusters as follows:
-
-* A Hadoop cluster used for *Cube building*, which can be a large cluster shared with other applications;
-* An HBase cluster used for *SQL queries*. Usually this cluster is dedicated to Kylin; it does not need as many nodes as the Hadoop cluster, and its HBase configuration can be optimized for the read-only nature of Kylin Cubes.
-
-This deployment strategy is the best deployment solution for the production environment. For how to perform a read/write separation deployment, please refer to [Deploy Apache Kylin with Standalone HBase Cluster](/blog/2016/06/10/standalone-hbase-cluster/).
\ No newline at end of file
diff --git a/website/_docs30/install/kylin_docker.cn.md b/website/_docs30/install/kylin_docker.cn.md
deleted file mode 100644
index b4e1d1f..0000000
--- a/website/_docs30/install/kylin_docker.cn.md
+++ /dev/null
@@ -1,143 +0,0 @@
----
-layout: docs30-cn
-title:  "Run Kylin with Docker"
-categories: install
-permalink: /cn/docs30/install/kylin_docker.html
-since: v3.0.0-alpha2
----
-
-To make it easy for users to try Kylin, and for developers to verify and debug after modifying the source code, we provide a docker image for Kylin. In this image, every service Kylin relies on is properly installed and deployed, including:
-
-- Jdk 1.8
-- Hadoop 2.7.0
-- Hive 1.2.1
-- Hbase 1.1.2
-- Spark 2.3.1
-- Zookeeper 3.4.6
-- Kafka 1.1.1
-- Mysql
-- Maven 3.6.1
-
-## Quickly try Kylin
-
-We have pushed the user-facing Kylin image to the docker hub, so users do not need to build the image locally; just execute the following command to pull it from the docker hub:
-
-{% highlight Groff markup %}
-docker pull apachekylin/apache-kylin-standalone:3.0.0-alpha2
-{% endhighlight %}
-
-After the pull succeeds, execute the following command to start the container:
-
-{% highlight Groff markup %}
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 60010:60010 \
-apachekylin/apache-kylin-standalone:3.0.0-alpha2
-{% endhighlight %}
-
-The following services are automatically started when the container starts:
-
-- NameNode, DataNode
-- ResourceManager, NodeManager
-- HBase
-- Kafka
-- Kylin
-
-It also automatically runs `$KYLIN_HOME/bin/sample.sh`, creates the kylin_streaming_topic topic in Kafka, and continuously sends data to that topic, so that users can experience building a Cube in both batch and streaming ways, and querying it, right after starting the container.
-
-After the container starts, we can enter it via the `docker exec` command (a sketch follows the list below). Of course, since the specified ports in the container are mapped to local ports, we can open the pages of each service directly in the local browser, such as:
-
-- Kylin Web UI: [http://127.0.0.1:7070/kylin/login](http://127.0.0.1:7070/kylin/login)
-- HDFS NameNode Web UI: [http://127.0.0.1:50070](http://127.0.0.1:50070/)
-- YARN ResourceManager Web UI: [http://127.0.0.1:8088](http://127.0.0.1:8088/)
-- HBase Web UI: [http://127.0.0.1:60010](http://127.0.0.1:60010/)
-
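-For example, a minimal sketch (`<container_id>` is a placeholder for the id shown by `docker ps`):
-
-{% highlight Groff markup %}
-docker ps                            # find the running container's id
-docker exec -it <container_id> bash  # open a shell inside the container
-{% endhighlight %}
-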
-In the container, the relevant environment variables are as follows:
-
-{% highlight Groff markup %}
-JAVA_HOME=/home/admin/jdk1.8.0_141
-HADOOP_HOME=/home/admin/hadoop-2.7.0
-KAFKA_HOME=/home/admin/kafka_2.11-1.1.1
-SPARK_HOME=/home/admin/spark-2.3.1-bin-hadoop2.6
-HBASE_HOME=/home/admin/hbase-1.1.2
-HIVE_HOME=/home/admin/apache-hive-1.2.1-bin
-KYLIN_HOME=/home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x
-{% endhighlight %}
-
-## Build the image to verify source code modifications
-
-The image can also be used when developers have modified the source code and want to package, deploy, and verify it. First, go to the docker directory under the source code root, and execute the script below to build the image and copy the source code into it:
-
-{% highlight Groff markup %}
-#!/usr/bin/env bash
-
-echo "start build kylin image base on current source code"
-
-rm -rf ./kylin
-mkdir -p ./kylin
-
-echo "start copy kylin source code"
-
-for file in `ls ../../kylin/`
-do
-    if [ docker != $file ]
-    then
-        cp -r ../../kylin/$file ./kylin/
-    fi
-done
-
-echo "finish copy kylin source code"
-
-docker build -t apache-kylin-standalone .
-{% endhighlight %}
-
-Since various packages need to be downloaded over the network and deployed, the whole build process can take tens of minutes, depending on the network.
-
-When the image build is complete, execute the following command to start the container:
-
-{% highlight Groff markup %}
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 60010:60010 \
-apache-kylin-standalone
-{% endhighlight %}
-
-After the container starts, enter it with the `docker exec` command. The source code is stored in the container's `/home/admin/kylin_sourcecode` directory; execute the following commands to package it:
-
-{% highlight Groff markup %}
-cd /home/admin/kylin_sourcecode
-build/script/package.sh
-{% endhighlight %}
-
-After packaging, a binary package ending in `.tar.gz` will be generated in the `/home/admin/kylin_sourcecode/dist` directory, such as `apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz`. We can use this package to deploy and start the Kylin service, for example:
-
-{% highlight Groff markup %}
-cp /home/admin/kylin_sourcecode/dist/apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz /home/admin
-tar -zxvf /home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz
-/home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x/kylin.sh start
-{% endhighlight %}
-
-We can likewise open the pages of HDFS, YARN, HBase, Kylin and the other services in the local browser.
-
-## Container resource recommendation
-
-To let Kylin build Cubes smoothly, the memory configured for the YARN NodeManager in this image is 6G; adding the memory used by the other services, please make sure the container gets no less than 8G of memory, to avoid errors caused by insufficient memory.
-
-For how to set resources for the container, please refer to:
-
-- Mac users: <https://docs.docker.com/docker-for-mac/#advanced>
-- Linux users: <https://docs.docker.com/config/containers/resource_constraints/#memory>
-
----
-
-For the old docker image, please check the github project [kylin-docker](https://github.com/Kyligence/kylin-docker/).
\ No newline at end of file
diff --git a/website/_docs30/install/kylin_docker.md b/website/_docs30/install/kylin_docker.md
deleted file mode 100644
index bd102d1..0000000
--- a/website/_docs30/install/kylin_docker.md
+++ /dev/null
@@ -1,143 +0,0 @@
----
-layout: docs30
-title:  "Run Kylin with Docker"
-categories: install
-permalink: /docs30/install/kylin_docker.html
-since: v3.0.0-alpha2
----
-
-To allow users to easily try Kylin, and to help developers verify and debug after modifying the source code, we provide Kylin's docker image. In this image, each service that Kylin relies on is properly installed and deployed, including:
-
-- Jdk 1.8
-- Hadoop 2.7.0
-- Hive 1.2.1
-- Hbase 1.1.2
-- Spark 2.3.1
-- Zookeeper 3.4.6
-- Kafka 1.1.1
-- Mysql
-- Maven 3.6.1
-
-## Quickly try Kylin
-
-We have pushed the user-facing Kylin image to the docker hub. Users do not need to build the image locally; just execute the following command to pull the image from the docker hub:
-
-{% highlight Groff markup %}
-docker pull apachekylin/apache-kylin-standalone:3.0.0-alpha2
-{% endhighlight %}
-
-After the pull is successful, execute the following command to start the container: 
-
-{% highlight Groff markup %}
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 60010:60010 \
-apachekylin/apache-kylin-standalone:3.0.0-alpha2
-{% endhighlight %}
-
-The following services are automatically started when the container starts: 
-
-- NameNode, DataNode
-- ResourceManager, NodeManager
-- HBase
-- Kafka
-- Kylin
-
-The container also automatically runs `$KYLIN_HOME/bin/sample.sh`, creates a kylin_streaming_topic topic in Kafka, and continuously sends data to this topic, so that users can experience building the cube in both batch and streaming ways, and querying it, right after starting the container.
-
-After the container is started, we can enter the container through the `docker exec` command (a sketch follows the list below). Of course, since we have mapped the specified ports in the container to local ports, we can open the pages of each service directly in the local browser, such as:
-
-- Kylin Web UI: [http://127.0.0.1:7070/kylin/login](http://127.0.0.1:7070/kylin/login)
-- HDFS NameNode Web UI: [http://127.0.0.1:50070](http://127.0.0.1:50070/)
-- YARN ResourceManager Web UI: [http://127.0.0.1:8088](http://127.0.0.1:8088/)
-- HBase Web UI: [http://127.0.0.1:60010](http://127.0.0.1:60010/)
-
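-For example, a minimal sketch (`<container_id>` is a placeholder for the id shown by `docker ps`):
-
-{% highlight Groff markup %}
-docker ps                            # find the running container's id
-docker exec -it <container_id> bash  # open a shell inside the container
-{% endhighlight %}
-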
-In the container, the relevant environment variables are as follows: 
-
-{% highlight Groff markup %}
-JAVA_HOME=/home/admin/jdk1.8.0_141
-HADOOP_HOME=/home/admin/hadoop-2.7.0
-KAFKA_HOME=/home/admin/kafka_2.11-1.1.1
-SPARK_HOME=/home/admin/spark-2.3.1-bin-hadoop2.6
-HBASE_HOME=/home/admin/hbase-1.1.2
-HIVE_HOME=/home/admin/apache-hive-1.2.1-bin
-KYLIN_HOME=/home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x
-{% endhighlight %}
-
-## Build image to verify source code modifications
-
-The docker image can also be used when developers have modified the source code and want to package, deploy, and verify it. First, go to the docker directory under the root directory of the source code and execute the script below to build the image and copy the source code into the image:
-
-{% highlight Groff markup %}
-#!/usr/bin/env bash
-
-echo "start build kylin image base on current source code"
-
-rm -rf ./kylin
-mkdir -p ./kylin
-
-echo "start copy kylin source code"
-
-for file in `ls ../../kylin/`
-do
-    if [ docker != $file ]
-    then
-        cp -r ../../kylin/$file ./kylin/
-    fi
-done
-
-echo "finish copy kylin source code"
-
-docker build -t apache-kylin-standalone .
-{% endhighlight %}
-
-Since various binary packages need to be downloaded and deployed over the network, the entire build process can take tens of minutes, depending on the network.
-
-When the image build is complete, execute the following command to start the container: 
-
-{% highlight Groff markup %}
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 60010:60010 \
-apache-kylin-standalone
-{% endhighlight %}
-
-After the container starts, execute the `docker exec` command to enter it. The source code is stored in the container directory `/home/admin/kylin_sourcecode`; execute the following commands to package the source code:
-
-{% highlight Groff markup %}
-cd /home/admin/kylin_sourcecode
-build/script/package.sh
-{% endhighlight %}
-
-After the packaging is complete, a binary package ending in `.tar.gz` will be generated in the `/home/admin/kylin_sourcecode/dist` directory, such as `apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz`. We can use this binary package to deploy and launch the Kylin service, for example:
-
-{% highlight Groff markup %}
-cp /home/admin/kylin_sourcecode/dist/apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz /home/admin
-tar -zxvf /home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x.tar.gz
-/home/admin/apache-kylin-3.0.0-alpha2-bin-hbase1x/kylin.sh start
-{% endhighlight %}
-
-We can also open the pages of services such as HDFS, YARN, HBase, and Kylin in the local browser.
-
-## Container resource recommendation
-
-To allow Kylin to build the cube smoothly, the memory configured for the YARN NodeManager in this image is 6G; adding the memory occupied by each service, please ensure that the container has no less than 8G of memory, to avoid errors due to insufficient memory.
-
-For the resource setting method for the container, please refer to:
-
-- Mac user: <https://docs.docker.com/docker-for-mac/#advanced>
-- Linux user: <https://docs.docker.com/config/containers/resource_constraints/#memory>
-
----
-
-For old docker image, please check the github page [kylin-docker](https://github.com/Kyligence/kylin-docker/).
\ No newline at end of file
diff --git a/website/_docs30/release_notes.md b/website/_docs30/release_notes.md
deleted file mode 100644
index 6fe72ca..0000000
--- a/website/_docs30/release_notes.md
+++ /dev/null
@@ -1,2909 +0,0 @@
----
-layout: docs30
-title:  Release Notes
-categories: gettingstarted
-permalink: /docs30/release_notes.html
----
-
-To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/);
-the source code package, binary package and installation guide are available there.
-
-For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
-
-or send it to the Apache Kylin mailing lists:
-
-* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
-* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
-
-## v3.0.0 - 2019-12-20
-_Tag:_ [kylin-3.0.0](https://github.com/apache/kylin/tree/kylin-3.0.0)
-This is the GA release of Kylin's next generation after 2.x, with the new real-time OLAP feature.
-
-__New Feature__
-
-* [KYLIN-4098] - Add cube auto merge api
-* [KYLIN-3883] - Kylin supports column count aggregation
-
-__Improvement__
-
-* [KYLIN-565] - Unsupported SQL Functions
-* [KYLIN-1772] - Highlight segment at HBase tab page of cube admin view when the segment is not healthy.
-* [KYLIN-1850] - Show Kylin Version on GUI
-* [KYLIN-2431] - StorageCleanupJob will remove intermediate tables created by other kylin instances
-* [KYLIN-3756] - Support check-port-availability script for mac os x
-* [KYLIN-3865] - Centralize the zookeeper related info
-* [KYLIN-3906] - ExecutableManager is spelled as ExecutableManger
-* [KYLIN-3907] - Sort the cube list by create time in descending order.
-* [KYLIN-3917] - Add max segment merge span to cleanup intermediate data of cube building
-* [KYLIN-4010] - Auto adjust offset according to query server's timezone for time derived column
-* [KYLIN-4096] - Make cube metadata validator rules configurable
-* [KYLIN-4097] - Throw exception when too many dict slice eviction in AppendTrieDictionary
-* [KYLIN-4163] - CreateFlatHiveTableStep has not yarn app url when hive job running
-* [KYLIN-4167] - Refactor streaming coordinator
-* [KYLIN-4175] - Support secondary hbase storage config for hbase cluster migration
-* [KYLIN-4178] - Job scheduler support safe mode
-* [KYLIN-4180] - Prevent abnormal CPU usage by limiting flat filters length
-* [KYLIN-4187] - Building dimension dictionary using spark
-* [KYLIN-4193] - More user-friendly page for loading streaming tables
-* [KYLIN-4198] - “bin/system-cube.sh cron” will overwrite user's crontab
-* [KYLIN-4201] - Allow users to delete unused receivers from streaming page
-* [KYLIN-4208] - RT OLAP kylin.stream.node configure optimization support all receiver can have the same config
-* [KYLIN-4257] - Build historical data by layer in real time Lambda cube
-* [KYLIN-4258] - Real-time OLAP may return incorrect result for some case
-* [KYLIN-4273] - Make cube planner works for real-time streaming job
-* [KYLIN-4283] - FileNotFound error in "Garbage Collection" step should not break cube building.
-
-__Bug Fix__
-
-* [KYLIN-1716] - leave executing query page action stop bug
-* [KYLIN-3730] - TableMetadataManager.reloadSourceTableQuietly is wrong
-* [KYLIN-3741] - when the sql result is empty and limit is 0 , should not have "load more" bar
-* [KYLIN-3842] - kylinProperties.js Unable to get the public configuration of the first line in the front end
-* [KYLIN-3881] - Calcite isolating expression with its condition may throw 'Division Undefined' exception
-* [KYLIN-3887] - Query with decimal sum measure of double complied failed after KYLIN-3703
-* [KYLIN-3933] - Currently replica set related operation need refresh current front-end page
-* [KYLIN-4135] - Real time streaming segment build task discard but can't be rebuilt
-* [KYLIN-4147] - User has project's admin permission but doesn't have permission to see the Storage/Planner/streaming tab in Model page
-* [KYLIN-4162] - After dropping the build task on the monitor page, subsequent segments cannot be constructed.
-* [KYLIN-4165] - RT OLAP building job on "Save Cube Dictionaries" step concurrency error
-* [KYLIN-4169] - Too many logs while DataModelManager init, cause the first RESTful API hang for a long time
-* [KYLIN-4172] - Can't rename field when map streaming schema to table
-* [KYLIN-4176] - Filter the intermediate tables when loading table metadata from tree
-* [KYLIN-4183] - Clicking 'Submit' button is unresponsive, when the segment is not selected.
-* [KYLIN-4190] - hiveproducer write() function throws an exception because the hive metrics table location path prefix is different from the default fs when hdfs uses router-based federation
-* [KYLIN-4194] - Throw KylinConfigCannotInitException at STEP "Extract Fact Table Distinct Columns" with spark
-* [KYLIN-4203] - Disable a real time cube and then enable it ,this cube may can't submit build job anymore
-* [KYLIN-4229] - String index out of range -1
-* [KYLIN-4242] - Usage instructions in 'PasswordPlaceholderConfigurer' doesn't work
-* [KYLIN-4244] - ClassNotFoundException while use org.apache.kylin.engine.mr.common.CubeStatsReader in bash
-* [KYLIN-4246] - Wrong results from real-time streaming when an optional field is used as a dimension
-* [KYLIN-4248] - When adding a user, the prompt message is incorrect when the user name is empty.
-* [KYLIN-4254] - The result exporting from Insight with CSV format is empty, when sql contains Chinese
-* [KYLIN-4262] - pid in GC filename inconsistent with real pid
-* [KYLIN-4265] - SQL tab of cube failed when filter is not empty
-
-## v3.0.0-beta - 2019-10-25
-_Tag:_ [kylin-3.0.0-beta](https://github.com/apache/kylin/tree/kylin-3.0.0-beta)
-This is the beta release of Kylin's next generation after 2.x, with the new real-time OLAP feature.
-
-__New Feature__
-
-* [KYLIN-4114] - Provided a self-contained docker image for Kylin
-* [KYLIN-4122] - Add kylin user and group manage modules
-
-__Improvement__
-
-* [KYLIN-3519] - Upgrade Jacoco version to 0.8.2
-* [KYLIN-3628] - Query with lookup table always use latest snapshot
-* [KYLIN-3901] - Use multi threads to speed up the storage cleanup job
-* [KYLIN-4010] - Auto adjust offset according to query server's timezone for time derived column
-* [KYLIN-4055] - cube query and ad-hoc query return different meta info
-* [KYLIN-4067] - Speed up response of kylin cube page
-* [KYLIN-4091] - support fast mode and simple mode for running CI
-* [KYLIN-4092] - Support setting seperate jvm params for kylin backgroud tools
-* [KYLIN-4093] - Slow query pages should be open to all users of the project
-* [KYLIN-4095] - Add RESOURCE_PATH_PREFIX option in ResourceTool
-* [KYLIN-4099] - Using no blocking RDD unpersist in spark cubing job
-* [KYLIN-4100] - Add overall job number statistics in monitor page
-* [KYLIN-4101] - set hive and spark job name when building cube
-* [KYLIN-4108] - Show slow query hit cube in slow query page
-* [KYLIN-4112] - Add hdfs kerberos token delegation in Spark to support HBase and MR using different HDFS clusters
-* [KYLIN-4121] - Cleanup hive view intermediate tables after job be finished
-* [KYLIN-4127] - Remove never called classes
-* [KYLIN-4128] - Remove never called methods
-* [KYLIN-4129] - Remove useless code
-* [KYLIN-4130] - Coordinator->StreamingBuildJobStatusChecker thread always hold a old CubeManager
-* [KYLIN-4133] - support override configuration in kafka job
-* [KYLIN-4137] - Accelerate metadata reloading
-* [KYLIN-4139] -  Compatible old user security xml config when user upgrate new kylin version
-* [KYLIN-4140] - Add the time filter for current day jobs and make default values for web configurable
-* [KYLIN-4141] - Build Global Dictionary in no time
-* [KYLIN-4149] - Allow user to edit streaming v2 table's  kafka cluster address and topic name
-* [KYLIN-4150] - Improve docker for kylin instructions
-* [KYLIN-4160] - Auto redirect to host:port/kylin when user only enters host:port in browser
-* [KYLIN-4167] - Refactor streaming coordinator
-* [KYLIN-4180] - Prevent abnormal CPU usage by limiting flat filters length
-
-__Bug Fix__
-
- * [KYLIN-1856] - Kylin shows old error in job step output after resume - specifically in #4 Step Name: Build Dimension Dictionary
- * [KYLIN-2820] - Query can't read window function's result from subquery
- * [KYLIN-3121] - NPE while executing a query with two left outer joins and floating point expressions on nullable fields
 * [KYLIN-3845] - Kylin build error if the Kafka data source lacks selected dimensions or metrics in the kylin stream build.
- * [KYLIN-4034] - The table should not display in Insight page when the user has no access to the table
- * [KYLIN-4039] - ZookeeperDistributedLock may not release lock when unlock operation was interrupted
- * [KYLIN-4049] - Refresh segment job will always delete old segment storage
- * [KYLIN-4057] - autoMerge job can not stop
- * [KYLIN-4066] - No planner for not ROLE_ADMIN user on WebSite
- * [KYLIN-4072] - CDH 6.x find-hbase-dependency.sh return with "base-common lib not found"
- * [KYLIN-4085] - Segment parallel building may cause segment not found
- * [KYLIN-4089] - Integration test failed with JDBCMetastore
- * [KYLIN-4103] - Make the user string in the granting operation of a project case insensitive
- * [KYLIN-4106] - Illegal partition for SelfDefineSortableKey when “Extract Fact Table Distinct Columns”
- * [KYLIN-4107] - StorageCleanupJob fails to delete Hive tables with "Argument list too long" error
- * [KYLIN-4111] - drop table failed with no valid privileges after KYLIN-3857
- * [KYLIN-4115] - Always load KafkaConsumerProperties
- * [KYLIN-4117] - Intersect_count() return wrong result when column type is time
- * [KYLIN-4120] - Failed to query "select * from {lookup}" if a lookup table joined in two different models
- * [KYLIN-4126] - cube name validate code cause the wrong judge of streaming type
- * [KYLIN-4135] - Real time streaming segment build task discarded but can't be rebuilt
- * [KYLIN-4143] - truncate spark executable job output
- * [KYLIN-4148] - Executing 'bin/kylin-port-replace-util.sh' to change the port will cause the configuration of 'kylin.metadata.url' to be lost
 * [KYLIN-4153] - Failed to read big resource /dict/xxxx at "Build Dimension Dictionary" Step
- * [KYLIN-4154] - Metadata inconsistency between multi Kylin server caused by Broadcaster closing
- * [KYLIN-4155] - Cube status can not change immediately when executed disable or enable button in web
- * [KYLIN-4157] - When using PrepareStatement query, functions within WHERE will cause InternalErrorException
- * [KYLIN-4158] - Query failed for GroupBy an expression of column with limit in SQL
- * [KYLIN-4159] - The first step of build cube job will fail and throw "Column 'xx' in where clause isambiguous" in jdbc datasource.
- * [KYLIN-4162] - After dropping the build task on the monitor page, subsequent segments cannot be constructed.
- * [KYLIN-4173] - cube list search can not work
-
-__Test__
-
-* [KYLIN-3878] - NPE to run sonar analysis
-
-## v3.0.0-alpha2 - 2019-07-31
-_Tag:_ [kylin-3.0.0-alpha2](https://github.com/apache/kylin/tree/kylin-3.0.0-alpha2)
-This is the alpha2 release of Kylin's next generation after 2.x, with the new real-time OLAP feature.
-
-__New Feature__
-
-* [KYLIN-3843] - List kylin instances with their server mode on web
-
-__Improvement__
-
-* [KYLIN-3628] - Query with lookup table always use latest snapshot
-* [KYLIN-3812] - optimize the child CompareTupleFilter in a CompareTupleFilter
-* [KYLIN-3813] - don't do push down when both of the children of CompareTupleFilter are CompareTupleFilter with column included
-* [KYLIN-3841] - Build Global Dict by MR/Hive
-* [KYLIN-3912] - Support cube level mapreduce queue config for BeelineHiveClient
-* [KYLIN-3918] - Add project name in cube and job pages
-* [KYLIN-3925] - Add reduce step for FilterRecommendCuboidDataJob & UpdateOldCuboidShardJob to avoid generating small hdfs files
-* [KYLIN-3932] - KafkaConfigOverride to take effect
-* [KYLIN-3958] - MrHive-Dict support build by livy
-* [KYLIN-3960] - Only update user when login in LDAP environment
-* [KYLIN-3997] - Add a health check job of Kylin
-* [KYLIN-4001] - Allow user-specified time format using real-time
-* [KYLIN-4012] - optimize cache in TrieDictionary/TrieDictionaryForest
-* [KYLIN-4013] - Only show the cubes under one model
-* [KYLIN-4026] - Avoid too many file append operations in HiveProducer of hive metrics reporter
-* [KYLIN-4028] - Speed up startup progress using cached dependency
-* [KYLIN-4031] - RestClient will throw exception with message contains clear-text password
-* [KYLIN-4033] - Can not access Kerberized Cluster with DebugTomcat
-* [KYLIN-4035] - Calculate column cardinality by using spark engine
-* [KYLIN-4041] - CONCAT NULL not working properly
-* [KYLIN-4062] - Too many "if else" clause in PushDownRunnerJdbcImpl#toSqlType
-* [KYLIN-4081] - Use absolute path instead of relative path for local segment cache
-* [KYLIN-4084] - Reset kylin.stream.node in kylin-port-replace-util.sh
-* [KYLIN-4086] - Support connect Kylin with Tableau by JDBC
-
-__Bug Fix__
-
-* [KYLIN-3935] - ZKUtil acquire the wrong Zookeeper Path on windows
-* [KYLIN-3942] - Real-time OLAP doesn't support multi-level json events
-* [KYLIN-3946] - No cube for AVG measure after include count column
-* [KYLIN-3959] - Realtime OLAP query result should not be cached
-* [KYLIN-3981] - Auto Merge Job failed to execute on windows
-* [KYLIN-4005] - Saving Cube of a aggregation Groups(40 Dimensions, Max Dimension Combination:5) may cause kylin server OOM
-* [KYLIN-4017] - Build engine get zk(zookeeper) lock failed when building job, it causes the whole build engine doesn't work.
-* [KYLIN-4027] - Kylin-jdbc module has tcp resource leak
-* [KYLIN-4037] - Can't Cleanup Data in Hbase's HDFS Storage When Deploy Apache Kylin with Standalone HBase Cluster
-* [KYLIN-4039] - ZookeeperDistributedLock may not release lock when unlock operation was interrupted
-* [KYLIN-4044] - CuratorScheduler may throw NPE when init service Cache
-* [KYLIN-4046] - Refine JDBC Source(source.default=8)
-* [KYLIN-4064] - parameter 'engineType' is not working when running integration test
-* [KYLIN-4072] - CDH 6.x find-hbase-dependency.sh return with "base-common lib not found"
-* [KYLIN-4074] - Exception in thread "Memcached IO over {MemcachedConnection to ..." java.lang.NullPointerException
-
-## v3.0.0-alpha - 2019-04-12
-_Tag:_ [kylin-3.0.0-alpha](https://github.com/apache/kylin/tree/kylin-3.0.0-alpha)
-This is the alpha release of Kylin's next generation after 2.x, with the new real-time OLAP feature.
-
-__New Feature__
-
-* [KYLIN-3654] - Kylin Real-time Streaming
-* [KYLIN-3795] - Submit Spark jobs via Apache Livy
-* [KYLIN-3820] - Add a curator-based scheduler
-
-__Improvement__
-
-* [KYLIN-3716] - FastThreadLocal replaces ThreadLocal
-* [KYLIN-3744] - Add javadoc and unittest for Kylin New Streaming Solution
-* [KYLIN-3759] - Streaming ClassNotFoundExeception when lambda is enable in MR job
-* [KYLIN-3786] - Add integration test for real-time streaming
-* [KYLIN-3791] - Map return by Maps.transformValues is a immutable view
-* [KYLIN-3797] - Too many or filters may break Kylin server when flatting filter
-* [KYLIN-3814] - Add pause interval for job retry
-* [KYLIN-3821] - Expose real-time streaming data consuming lag info
-* [KYLIN-3834] - Add monitor for curator-based scheduler
-* [KYLIN-3839] - Storage clean up after refreshing or deleting a segment
-* [KYLIN-3864] - Provide a function to judge whether the os type is Mac os x or not
-* [KYLIN-3867] - Enable JDBC to use key store & trust store for https connection
-* [KYLIN-3901] - Use multi threads to speed up the storage cleanup job
-* [KYLIN-3905] - Enable shrunken dictionary default
-* [KYLIN-3908] - KylinClient's HttpRequest.releaseConnection is not needed in retrieveMetaData & executeKylinQuery
-* [KYLIN-3929] - Check satisfaction before execute cubeplanner algorithm
-* [KYLIN-3690] - New streaming backend implementation
-* [KYLIN-3691] - New streaming ui implementation
-* [KYLIN-3692] - New streaming ui implementation
-* [KYLIN-3745] - Real-time segment state changed from active to immutable is not sequently
-* [KYLIN-3747] - Use FQDN to register a streaming receiver instead of ip
-* [KYLIN-3768] - Save streaming metadata a standard kylin path in zookeeper
-
-__Bug Fix__
-
-* [KYLIN-3787] - NPE throws when dimension value has null when query real-time data
-* [KYLIN-3789] - Stream receiver admin page issue fix
-* [KYLIN-3800] - Real-time streaming count distinct result wrong
-* [KYLIN-3817] - Duration in Cube building is a negative number
-* [KYLIN-3818] - After Cube disabled, auto-merge cube job still running
-* [KYLIN-3830] - Wrong result when 'SELECT SUM(dim1)' without set a relative metric of dim1.
-* [KYLIN-3866] - Whether to set mapreduce.application.classpath is determined by the user
-* [KYLIN-3880] - DataType is incompatible in Kylin HBase coprocessor
-* [KYLIN-3888] - TableNotDisabledException when running "Convert Lookup Table to HFile"
-* [KYLIN-3898] - Cube level properties are ineffective in the some build steps
-* [KYLIN-3902] - NoRealizationFoundException due to creating a wrong JoinDesc
-* [KYLIN-3909] - Spark cubing job failed for MappeableRunContainer is not registered
-* [KYLIN-3911] - Check if HBase table is enabled before disabling table in DeployCoprocessorCLI
-* [KYLIN-3916] - Fix cube build action issue after streaming migrate
-* [KYLIN-3922] - Fail to update coprocessor when run DeployCoprocessorCLI
-* [KYLIN-3923] - UT GeneralColumnDataTest fail
-
-## v2.6.4 - 2019-10-12
-_Tag:_ [kylin-2.6.4](https://github.com/apache/kylin/tree/kylin-2.6.4)
-This is a bugfix release after 2.6.3, with 10 enhancements and 17 bug fixes.
-
-__Improvement__
-
-* [KYLIN-3628] - Query with lookup table always use latest snapshot
-* [KYLIN-3797] - Too many or filters may break Kylin server when flatting filter
-* [KYLIN-4013] - Only show the cubes under one model
-* [KYLIN-4047] - Use push-down query when division dynamic column cube query is not supported
-* [KYLIN-4055] - cube query and ad-hoc query return different meta info
-* [KYLIN-4093] - Slow query pages should be open to all users of the project
-* [KYLIN-4099] - Using no blocking RDD unpersist in spark cubing job
-* [KYLIN-4121] - Cleanup hive view intermediate tables after job be finished
-* [KYLIN-4140] - Add the time filter for current day jobs and make default values for web configurable
-* [KYLIN-4160] - Auto redirect to host:port/kylin when user only enters host:port in browser
-
-__Bug Fix__
-
-* [KYLIN-1856] - Kylin shows old error in job step output after resume - specifically in #4 Step Name: Build Dimension Dictionary
-* [KYLIN-4034] - The table should not display in Insight page when the user has no access to the table
-* [KYLIN-4037] - Can't Cleanup Data in Hbase's HDFS Storage When Deploy Apache Kylin with Standalone HBase Cluster
-* [KYLIN-4046] - Refine JDBC Source(source.default=8)
-* [KYLIN-4057] - autoMerge job can not stop
-* [KYLIN-4066] - No planner for not ROLE_ADMIN user on WebSite
-* [KYLIN-4074] - Exception in thread "Memcached IO over {MemcachedConnection to ..." java.lang.NullPointerException
-* [KYLIN-4103] - Make the user string in the granting operation of a project case insensitive
-* [KYLIN-4106] - Illegal partition for SelfDefineSortableKey when “Extract Fact Table Distinct Columns”
-* [KYLIN-4111] - drop table failed with no valid privileges after KYLIN-3857
-* [KYLIN-4115] - Always load KafkaConsumerProperties
-* [KYLIN-4131] - Broadcaster memory leak
-* [KYLIN-4152] - Should Disable Before Deleting HBase Table using HBaseAdmin
-* [KYLIN-4153] - Failed to read big resource /dict/xxxx at "Build Dimension Dictionary" Step
-* [KYLIN-4157] - When using PrepareStatement query, functions within WHERE will cause InternalErrorException
-* [KYLIN-4158] - Query failed for GroupBy an expression of column with limit in SQL
-* [KYLIN-4159] - The first step of build cube job will fail and throw "Column 'xx' in where clause is ambiguous" in jdbc datasource.
-
-## v2.6.3 - 2019-07-06
... 10676 lines suppressed ...