Posted to commits@kylin.apache.org by sh...@apache.org on 2018/09/18 23:59:16 UTC

[kylin] branch document updated (31e3d99 -> 20f7e1b)

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a change to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git.


    from 31e3d99  update howto_release
     new ed7896c  Update 2.5.0 doc
     new 20f7e1b  Update document for v2.5

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 website/_data/docs-cn.yml                          |    1 +
 website/_data/docs.yml                             |    1 +
 website/_data/{docs-cn.yml => docs24-cn.yml}       |    0
 website/_data/{docs.yml => docs24.yml}             |    0
 website/_dev/howto_hbase_branches.cn.md            |    9 +-
 website/_dev/howto_hbase_branches.md               |   13 +-
 website/_dev/howto_release.cn.md                   |    9 +-
 website/_dev/howto_release.md                      |    9 +-
 website/_docs/gettingstarted/events.md             |    1 +
 website/_docs/gettingstarted/faq.md                |  188 ++-
 website/_docs/howto/howto_upgrade.md               |    7 +
 website/_docs/howto/howto_use_restapi.cn.md        |    4 +-
 website/_docs/howto/howto_use_restapi.md           |    4 +-
 website/_docs/index.cn.md                          |    8 +-
 website/_docs/index.md                             |    8 +-
 website/_docs/install/advance_settings.cn.md       |   44 +
 website/_docs/install/advance_settings.md          |   41 +
 website/_docs/install/configuration.cn.md          |   42 +-
 website/_docs/install/configuration.md             |   42 +-
 website/_docs/release_notes.md                     |  109 +-
 website/_docs/tutorial/hybrid.cn.md                |   47 +
 website/_docs/tutorial/hybrid.md                   |   46 +
 website/_docs/tutorial/setup_jdbc_datasource.cn.md |    6 +-
 website/_docs/tutorial/setup_jdbc_datasource.md    |    4 +-
 website/_docs/tutorial/use_cube_planner.cn.md      |    2 +-
 website/_docs/tutorial/use_cube_planner.md         |    2 +-
 website/_docs16/gettingstarted/events.md           |   24 -
 website/_docs16/gettingstarted/faq.md              |  119 --
 website/_docs16/howto/howto_cleanup_storage.md     |   22 -
 website/_docs16/howto/howto_ldap_and_sso.md        |  128 --
 website/_docs16/howto/howto_optimize_build.md      |  190 ---
 website/_docs16/howto/howto_update_coprocessor.md  |   14 -
 website/_docs16/howto/howto_upgrade.md             |   66 -
 website/_docs16/howto/howto_use_beeline.md         |   14 -
 website/_docs16/howto/howto_use_restapi.md         | 1113 ----------------
 website/_docs16/index.cn.md                        |   26 -
 website/_docs16/index.md                           |   57 -
 website/_docs16/install/advance_settings.md        |   98 --
 website/_docs16/install/hadoop_evn.md              |   40 -
 website/_docs16/install/index.cn.md                |   46 -
 website/_docs16/install/index.md                   |   35 -
 website/_docs16/install/kylin_cluster.md           |   32 -
 website/_docs16/install/manual_install_guide.cn.md |   48 -
 website/_docs16/release_notes.md                   | 1333 --------------------
 website/_docs16/tutorial/acl.cn.md                 |   35 -
 website/_docs16/tutorial/create_cube.cn.md         |  129 --
 website/_docs16/tutorial/create_cube.md            |  198 ---
 website/_docs16/tutorial/cube_build_job.cn.md      |   66 -
 website/_docs16/tutorial/cube_build_job.md         |   67 -
 website/_docs16/tutorial/flink.md                  |  249 ----
 website/_docs16/tutorial/kylin_sample.md           |   21 -
 website/_docs16/tutorial/powerbi.cn.md             |   56 -
 website/_docs16/tutorial/squirrel.md               |  112 --
 website/_docs16/tutorial/tableau.cn.md             |  116 --
 website/_docs16/tutorial/web.cn.md                 |  134 --
 website/_docs16/tutorial/web.md                    |  123 --
 website/_docs20/howto/howto_use_restapi.md         |    4 +-
 website/_docs21/howto/howto_use_restapi.md         |    4 +-
 website/_docs23/howto/howto_use_restapi.cn.md      |    4 +-
 website/_docs23/howto/howto_use_restapi.md         |    4 +-
 .../gettingstarted/best_practices.md               |    4 +-
 .../gettingstarted/concepts.md                     |    4 +-
 .../{_docs => _docs24}/gettingstarted/events.md    |    2 +-
 website/{_docs => _docs24}/gettingstarted/faq.md   |    2 +-
 .../gettingstarted/terminology.md                  |    4 +-
 .../howto/howto_backup_metadata.cn.md              |    2 +-
 .../howto/howto_backup_metadata.md                 |    4 +-
 .../howto/howto_build_cube_with_restapi.cn.md      |    2 +-
 .../howto/howto_build_cube_with_restapi.md         |    6 +-
 .../howto/howto_cleanup_storage.cn.md              |    2 +-
 .../howto/howto_cleanup_storage.md                 |    2 +-
 .../howto/howto_enable_zookeeper_acl.md            |    2 +-
 .../howto/howto_install_ranger_kylin_plugin.md     |    2 +-
 website/{_docs => _docs24}/howto/howto_jdbc.cn.md  |    2 +-
 website/{_docs16 => _docs24}/howto/howto_jdbc.md   |    6 +-
 .../{_docs => _docs24}/howto/howto_ldap_and_sso.md |    2 +-
 .../howto/howto_optimize_build.cn.md               |    4 +-
 .../howto/howto_optimize_build.md                  |    4 +-
 .../howto/howto_optimize_cubes.cn.md               |    2 +-
 .../howto/howto_optimize_cubes.md                  |    4 +-
 .../howto/howto_update_coprocessor.md              |    2 +-
 website/{_docs => _docs24}/howto/howto_upgrade.md  |    2 +-
 .../{_docs => _docs24}/howto/howto_use_beeline.md  |    2 +-
 .../howto/howto_use_distributed_scheduler.md       |    4 +-
 .../howto/howto_use_restapi.cn.md                  |    6 +-
 .../{_docs => _docs24}/howto/howto_use_restapi.md  |    6 +-
 .../howto/howto_use_restapi_in_js.md               |    4 +-
 website/{_docs => _docs24}/index.cn.md             |    4 +-
 website/{_docs => _docs24}/index.md                |    2 +-
 .../install/advance_settings.cn.md                 |    2 +-
 .../{_docs => _docs24}/install/advance_settings.md |    2 +-
 .../{_docs => _docs24}/install/configuration.cn.md |    2 +-
 .../{_docs => _docs24}/install/configuration.md    |    2 +-
 website/{_docs => _docs24}/install/hadoop_evn.md   |    2 +-
 website/{_docs => _docs24}/install/index.cn.md     |    2 +-
 website/{_docs => _docs24}/install/index.md        |    2 +-
 .../{_docs => _docs24}/install/kylin_aws_emr.cn.md |    2 +-
 .../{_docs => _docs24}/install/kylin_aws_emr.md    |    2 +-
 .../{_docs => _docs24}/install/kylin_cluster.cn.md |   10 +-
 .../{_docs => _docs24}/install/kylin_cluster.md    |    6 +-
 .../{_docs => _docs24}/install/kylin_docker.cn.md  |    2 +-
 .../{_docs16 => _docs24}/install/kylin_docker.md   |    4 +-
 .../install/manual_install_guide.cn.md             |    2 +-
 website/{_docs => _docs24}/release_notes.md        |    8 +-
 website/{_docs => _docs24}/tutorial/Qlik.cn.md     |    4 +-
 website/{_docs => _docs24}/tutorial/Qlik.md        |    4 +-
 website/{_docs => _docs24}/tutorial/acl.cn.md      |    4 +-
 website/{_docs16 => _docs24}/tutorial/acl.md       |   11 +-
 .../{_docs => _docs24}/tutorial/create_cube.cn.md  |    2 +-
 website/{_docs => _docs24}/tutorial/create_cube.md |    2 +-
 .../tutorial/cube_build_job.cn.md                  |    2 +-
 .../{_docs => _docs24}/tutorial/cube_build_job.md  |    2 +-
 .../tutorial/cube_build_performance.cn.md          |    2 +-
 .../tutorial/cube_build_performance.md             |    2 +-
 .../{_docs => _docs24}/tutorial/cube_spark.cn.md   |    2 +-
 website/{_docs => _docs24}/tutorial/cube_spark.md  |    2 +-
 .../tutorial/cube_streaming.cn.md                  |    4 +-
 .../tutorial/cube_streaming.md                     |   22 +-
 website/{_docs => _docs24}/tutorial/flink.md       |    2 +-
 website/{_docs23 => _docs24}/tutorial/hue.md       |    8 +-
 website/{_docs => _docs24}/tutorial/jdbc.cn.md     |    2 +-
 website/{_docs => _docs24}/tutorial/jdbc.md        |    2 +-
 .../tutorial/kylin_client_tool.cn.md               |    2 +-
 .../tutorial/kylin_client_tool.md                  |    2 +-
 .../{_docs => _docs24}/tutorial/kylin_sample.cn.md |    2 +-
 .../{_docs => _docs24}/tutorial/kylin_sample.md    |    2 +-
 .../{_docs => _docs24}/tutorial/microstrategy.md   |    2 +-
 website/{_docs16 => _docs24}/tutorial/odbc.cn.md   |    6 +-
 website/{_docs16 => _docs24}/tutorial/odbc.md      |    4 +-
 website/{_docs => _docs24}/tutorial/powerbi.cn.md  |    2 +-
 website/{_docs16 => _docs24}/tutorial/powerbi.md   |    4 +-
 .../tutorial/project_level_acl.cn.md               |    2 +-
 .../tutorial/project_level_acl.md                  |    2 +-
 .../tutorial/query_pushdown.cn.md                  |    2 +-
 .../{_docs => _docs24}/tutorial/query_pushdown.md  |    2 +-
 .../tutorial/setup_jdbc_datasource.cn.md           |    2 +-
 .../tutorial/setup_jdbc_datasource.md              |    2 +-
 .../tutorial/setup_systemcube.cn.md                |    2 +-
 .../tutorial/setup_systemcube.md                   |    2 +-
 website/{_docs => _docs24}/tutorial/spark.cn.md    |    2 +-
 website/{_docs => _docs24}/tutorial/spark.md       |    2 +-
 website/{_docs => _docs24}/tutorial/squirrel.cn.md |    2 +-
 website/{_docs => _docs24}/tutorial/squirrel.md    |    2 +-
 website/{_docs => _docs24}/tutorial/superset.cn.md |    2 +-
 website/{_docs => _docs24}/tutorial/superset.md    |    2 +-
 website/{_docs => _docs24}/tutorial/tableau.cn.md  |    2 +-
 website/{_docs16 => _docs24}/tutorial/tableau.md   |    4 +-
 .../{_docs16 => _docs24}/tutorial/tableau_91.cn.md |   10 +-
 .../{_docs16 => _docs24}/tutorial/tableau_91.md    |    8 +-
 .../tutorial/use_cube_planner.cn.md                |    2 +-
 .../tutorial/use_cube_planner.md                   |    2 +-
 .../tutorial/use_dashboard.cn.md                   |    2 +-
 .../{_docs => _docs24}/tutorial/use_dashboard.md   |    2 +-
 website/{_docs => _docs24}/tutorial/web.cn.md      |    2 +-
 website/{_docs => _docs24}/tutorial/web.md         |    2 +-
 .../blog/2017-07-21-Improving-Spark-Cubing.md      |    2 +-
 website/archive/docs16.tar.gz                      |  Bin 0 -> 91609 bytes
 website/assets/css/docs.css                        |   15 +
 website/download/index.cn.md                       |   12 +
 website/download/index.md                          |   19 +-
 .../Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png   |  Bin 0 -> 1084 bytes
 .../2 hybrid-name.png                              |  Bin 0 -> 30995 bytes
 .../3 hybrid-created.png                           |  Bin 0 -> 5057 bytes
 .../4 edit-hybrid.png                              |  Bin 0 -> 31492 bytes
 .../5 sql-statement.png                            |  Bin 0 -> 37483 bytes
 165 files changed, 790 insertions(+), 4945 deletions(-)
 copy website/_data/{docs-cn.yml => docs24-cn.yml} (100%)
 copy website/_data/{docs.yml => docs24.yml} (100%)
 create mode 100644 website/_docs/tutorial/hybrid.cn.md
 create mode 100644 website/_docs/tutorial/hybrid.md
 delete mode 100644 website/_docs16/gettingstarted/events.md
 delete mode 100644 website/_docs16/gettingstarted/faq.md
 delete mode 100644 website/_docs16/howto/howto_cleanup_storage.md
 delete mode 100644 website/_docs16/howto/howto_ldap_and_sso.md
 delete mode 100644 website/_docs16/howto/howto_optimize_build.md
 delete mode 100644 website/_docs16/howto/howto_update_coprocessor.md
 delete mode 100644 website/_docs16/howto/howto_upgrade.md
 delete mode 100644 website/_docs16/howto/howto_use_beeline.md
 delete mode 100644 website/_docs16/howto/howto_use_restapi.md
 delete mode 100644 website/_docs16/index.cn.md
 delete mode 100644 website/_docs16/index.md
 delete mode 100644 website/_docs16/install/advance_settings.md
 delete mode 100644 website/_docs16/install/hadoop_evn.md
 delete mode 100644 website/_docs16/install/index.cn.md
 delete mode 100644 website/_docs16/install/index.md
 delete mode 100644 website/_docs16/install/kylin_cluster.md
 delete mode 100644 website/_docs16/install/manual_install_guide.cn.md
 delete mode 100644 website/_docs16/release_notes.md
 delete mode 100644 website/_docs16/tutorial/acl.cn.md
 delete mode 100644 website/_docs16/tutorial/create_cube.cn.md
 delete mode 100644 website/_docs16/tutorial/create_cube.md
 delete mode 100644 website/_docs16/tutorial/cube_build_job.cn.md
 delete mode 100644 website/_docs16/tutorial/cube_build_job.md
 delete mode 100644 website/_docs16/tutorial/flink.md
 delete mode 100644 website/_docs16/tutorial/kylin_sample.md
 delete mode 100644 website/_docs16/tutorial/powerbi.cn.md
 delete mode 100644 website/_docs16/tutorial/squirrel.md
 delete mode 100644 website/_docs16/tutorial/tableau.cn.md
 delete mode 100644 website/_docs16/tutorial/web.cn.md
 delete mode 100644 website/_docs16/tutorial/web.md
 rename website/{_docs16 => _docs24}/gettingstarted/best_practices.md (95%)
 rename website/{_docs16 => _docs24}/gettingstarted/concepts.md (98%)
 copy website/{_docs => _docs24}/gettingstarted/events.md (99%)
 copy website/{_docs => _docs24}/gettingstarted/faq.md (98%)
 rename website/{_docs16 => _docs24}/gettingstarted/terminology.md (97%)
 copy website/{_docs => _docs24}/howto/howto_backup_metadata.cn.md (98%)
 rename website/{_docs16 => _docs24}/howto/howto_backup_metadata.md (97%)
 copy website/{_docs => _docs24}/howto/howto_build_cube_with_restapi.cn.md (97%)
 rename website/{_docs16 => _docs24}/howto/howto_build_cube_with_restapi.md (95%)
 copy website/{_docs => _docs24}/howto/howto_cleanup_storage.cn.md (94%)
 copy website/{_docs => _docs24}/howto/howto_cleanup_storage.md (95%)
 copy website/{_docs => _docs24}/howto/howto_enable_zookeeper_acl.md (95%)
 copy website/{_docs => _docs24}/howto/howto_install_ranger_kylin_plugin.md (77%)
 copy website/{_docs => _docs24}/howto/howto_jdbc.cn.md (98%)
 rename website/{_docs16 => _docs24}/howto/howto_jdbc.md (97%)
 copy website/{_docs => _docs24}/howto/howto_ldap_and_sso.md (99%)
 copy website/{_docs23 => _docs24}/howto/howto_optimize_build.cn.md (99%)
 copy website/{_docs23 => _docs24}/howto/howto_optimize_build.md (99%)
 copy website/{_docs => _docs24}/howto/howto_optimize_cubes.cn.md (98%)
 rename website/{_docs16 => _docs24}/howto/howto_optimize_cubes.md (98%)
 copy website/{_docs => _docs24}/howto/howto_update_coprocessor.md (88%)
 copy website/{_docs => _docs24}/howto/howto_upgrade.md (99%)
 copy website/{_docs => _docs24}/howto/howto_use_beeline.md (94%)
 rename website/{_docs16 => _docs24}/howto/howto_use_distributed_scheduler.md (86%)
 copy website/{_docs => _docs24}/howto/howto_use_restapi.cn.md (99%)
 copy website/{_docs => _docs24}/howto/howto_use_restapi.md (99%)
 rename website/{_docs16 => _docs24}/howto/howto_use_restapi_in_js.md (95%)
 copy website/{_docs => _docs24}/index.cn.md (92%)
 copy website/{_docs => _docs24}/index.md (98%)
 copy website/{_docs => _docs24}/install/advance_settings.cn.md (99%)
 copy website/{_docs => _docs24}/install/advance_settings.md (99%)
 copy website/{_docs => _docs24}/install/configuration.cn.md (99%)
 copy website/{_docs => _docs24}/install/configuration.md (99%)
 copy website/{_docs => _docs24}/install/hadoop_evn.md (96%)
 copy website/{_docs => _docs24}/install/index.cn.md (98%)
 copy website/{_docs => _docs24}/install/index.md (99%)
 copy website/{_docs => _docs24}/install/kylin_aws_emr.cn.md (99%)
 copy website/{_docs => _docs24}/install/kylin_aws_emr.md (99%)
 copy website/{_docs => _docs24}/install/kylin_cluster.cn.md (75%)
 copy website/{_docs => _docs24}/install/kylin_cluster.md (93%)
 copy website/{_docs => _docs24}/install/kylin_docker.cn.md (85%)
 rename website/{_docs16 => _docs24}/install/kylin_docker.md (82%)
 copy website/{_docs => _docs24}/install/manual_install_guide.cn.md (92%)
 copy website/{_docs => _docs24}/release_notes.md (99%)
 copy website/{_docs => _docs24}/tutorial/Qlik.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/Qlik.md (98%)
 copy website/{_docs => _docs24}/tutorial/acl.cn.md (92%)
 rename website/{_docs16 => _docs24}/tutorial/acl.md (88%)
 copy website/{_docs => _docs24}/tutorial/create_cube.cn.md (99%)
 copy website/{_docs => _docs24}/tutorial/create_cube.md (99%)
 copy website/{_docs => _docs24}/tutorial/cube_build_job.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/cube_build_job.md (98%)
 copy website/{_docs => _docs24}/tutorial/cube_build_performance.cn.md (99%)
 copy website/{_docs => _docs24}/tutorial/cube_build_performance.md (99%)
 mode change 100755 => 100644
 copy website/{_docs => _docs24}/tutorial/cube_spark.cn.md (99%)
 copy website/{_docs => _docs24}/tutorial/cube_spark.md (99%)
 copy website/{_docs23 => _docs24}/tutorial/cube_streaming.cn.md (99%)
 rename website/{_docs16 => _docs24}/tutorial/cube_streaming.md (94%)
 copy website/{_docs => _docs24}/tutorial/flink.md (99%)
 copy website/{_docs23 => _docs24}/tutorial/hue.md (97%)
 mode change 100755 => 100644
 copy website/{_docs => _docs24}/tutorial/jdbc.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/jdbc.md (98%)
 copy website/{_docs => _docs24}/tutorial/kylin_client_tool.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/kylin_client_tool.md (98%)
 copy website/{_docs => _docs24}/tutorial/kylin_sample.cn.md (97%)
 copy website/{_docs => _docs24}/tutorial/kylin_sample.md (97%)
 copy website/{_docs => _docs24}/tutorial/microstrategy.md (98%)
 rename website/{_docs16 => _docs24}/tutorial/odbc.cn.md (93%)
 rename website/{_docs16 => _docs24}/tutorial/odbc.md (97%)
 copy website/{_docs => _docs24}/tutorial/powerbi.cn.md (98%)
 rename website/{_docs16 => _docs24}/tutorial/powerbi.md (98%)
 copy website/{_docs => _docs24}/tutorial/project_level_acl.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/project_level_acl.md (98%)
 copy website/{_docs => _docs24}/tutorial/query_pushdown.cn.md (97%)
 copy website/{_docs => _docs24}/tutorial/query_pushdown.md (97%)
 copy website/{_docs => _docs24}/tutorial/setup_jdbc_datasource.cn.md (97%)
 copy website/{_docs => _docs24}/tutorial/setup_jdbc_datasource.md (98%)
 copy website/{_docs => _docs24}/tutorial/setup_systemcube.cn.md (99%)
 copy website/{_docs => _docs24}/tutorial/setup_systemcube.md (99%)
 copy website/{_docs => _docs24}/tutorial/spark.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/spark.md (98%)
 copy website/{_docs => _docs24}/tutorial/squirrel.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/squirrel.md (98%)
 copy website/{_docs => _docs24}/tutorial/superset.cn.md (97%)
 copy website/{_docs => _docs24}/tutorial/superset.md (97%)
 copy website/{_docs => _docs24}/tutorial/tableau.cn.md (98%)
 rename website/{_docs16 => _docs24}/tutorial/tableau.md (98%)
 rename website/{_docs16 => _docs24}/tutorial/tableau_91.cn.md (92%)
 rename website/{_docs16 => _docs24}/tutorial/tableau_91.md (93%)
 copy website/{_docs => _docs24}/tutorial/use_cube_planner.cn.md (99%)
 copy website/{_docs => _docs24}/tutorial/use_cube_planner.md (99%)
 copy website/{_docs => _docs24}/tutorial/use_dashboard.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/use_dashboard.md (98%)
 copy website/{_docs => _docs24}/tutorial/web.cn.md (98%)
 copy website/{_docs => _docs24}/tutorial/web.md (98%)
 create mode 100644 website/archive/docs16.tar.gz
 create mode 100644 website/images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png
 create mode 100644 website/images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/2 hybrid-name.png
 create mode 100644 website/images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 hybrid-created.png
 create mode 100644 website/images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png
 create mode 100644 website/images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png
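
To examine the pushed range locally, the revision ids above can be passed straight to git; a minimal sketch, assuming a fresh clone of the repository URL above:

```
# Clone the repository and inspect the two new revisions on the document branch
git clone https://gitbox.apache.org/repos/asf/kylin.git
cd kylin
git log --oneline 31e3d99..20f7e1b   # the two commits described below
git diff --stat 31e3d99 20f7e1b      # reproduces the summary of changes above
```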


[kylin] 01/02: Update 2.5.0 doc

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit ed7896c70e2fc8d229b38cee57ef51b0279cb494
Author: GinaZhai <na...@kyligence.io>
AuthorDate: Fri Sep 14 17:34:21 2018 +0800

    Update 2.5.0 doc
    
    Signed-off-by: shaofengshi <sh...@apache.org>
---
 website/_data/docs-cn.yml                          |    1 +
 website/_data/docs.yml                             |    1 +
 website/_data/{docs-cn.yml => docs24-cn.yml}       |    0
 website/_data/{docs.yml => docs24.yml}             |    0
 website/_docs/howto/howto_use_restapi.cn.md        |    4 +-
 website/_docs/howto/howto_use_restapi.md           |    4 +-
 website/_docs/install/advance_settings.cn.md       |   39 +
 website/_docs/install/advance_settings.md          |   39 +
 website/_docs/install/configuration.cn.md          |   42 +-
 website/_docs/install/configuration.md             |   42 +-
 website/_docs/tutorial/hybrid.cn.md                |   47 +
 website/_docs/tutorial/hybrid.md                   |   47 +
 website/_docs/tutorial/use_cube_planner.cn.md      |    2 +-
 website/_docs/tutorial/use_cube_planner.md         |    2 +-
 website/_docs16/howto/howto_use_restapi.md         |    4 +-
 website/_docs20/howto/howto_use_restapi.md         |    4 +-
 website/_docs21/howto/howto_use_restapi.md         |    4 +-
 website/_docs23/howto/howto_use_restapi.cn.md      |    4 +-
 website/_docs23/howto/howto_use_restapi.md         |    4 +-
 website/_docs24/gettingstarted/best_practices.md   |   27 +
 website/_docs24/gettingstarted/concepts.md         |   64 +
 website/_docs24/gettingstarted/events.md           |   30 +
 website/_docs24/gettingstarted/faq.md              |  148 ++
 website/_docs24/gettingstarted/terminology.md      |   25 +
 website/_docs24/howto/howto_backup_metadata.cn.md  |   59 +
 website/_docs24/howto/howto_backup_metadata.md     |   60 +
 .../howto/howto_build_cube_with_restapi.cn.md      |   54 +
 .../_docs24/howto/howto_build_cube_with_restapi.md |   53 +
 website/_docs24/howto/howto_cleanup_storage.cn.md  |   21 +
 website/_docs24/howto/howto_cleanup_storage.md     |   22 +
 .../_docs24/howto/howto_enable_zookeeper_acl.md    |   20 +
 .../howto/howto_install_ranger_kylin_plugin.md     |    8 +
 website/_docs24/howto/howto_jdbc.cn.md             |   92 +
 website/_docs24/howto/howto_jdbc.md                |   92 +
 website/_docs24/howto/howto_ldap_and_sso.md        |  130 ++
 website/_docs24/howto/howto_optimize_build.cn.md   |  166 ++
 website/_docs24/howto/howto_optimize_build.md      |  190 ++
 website/_docs24/howto/howto_optimize_cubes.cn.md   |  212 ++
 website/_docs24/howto/howto_optimize_cubes.md      |  212 ++
 website/_docs24/howto/howto_update_coprocessor.md  |   14 +
 website/_docs24/howto/howto_upgrade.md             |  105 +
 website/_docs24/howto/howto_use_beeline.md         |   14 +
 .../howto/howto_use_distributed_scheduler.md       |   16 +
 .../howto/howto_use_restapi.cn.md                  |    6 +-
 .../{_docs => _docs24}/howto/howto_use_restapi.md  |    6 +-
 website/_docs24/howto/howto_use_restapi_in_js.md   |   46 +
 website/_docs24/index.cn.md                        |   29 +
 website/_docs24/index.md                           |   73 +
 .../install/advance_settings.cn.md                 |    2 +-
 .../{_docs => _docs24}/install/advance_settings.md |    2 +-
 .../{_docs => _docs24}/install/configuration.cn.md |    2 +-
 .../{_docs => _docs24}/install/configuration.md    |    2 +-
 website/_docs24/install/hadoop_evn.md              |   24 +
 website/_docs24/install/index.cn.md                |   79 +
 website/_docs24/install/index.md                   |   78 +
 website/_docs24/install/kylin_aws_emr.cn.md        |  180 ++
 website/_docs24/install/kylin_aws_emr.md           |  180 ++
 website/_docs24/install/kylin_cluster.cn.md        |   58 +
 website/_docs24/install/kylin_cluster.md           |   57 +
 website/_docs24/install/kylin_docker.cn.md         |   10 +
 website/_docs24/install/kylin_docker.md            |   10 +
 website/_docs24/install/manual_install_guide.cn.md |   29 +
 website/_docs24/release_notes.md                   | 2212 ++++++++++++++++++++
 website/_docs24/tutorial/Qlik.cn.md                |  153 ++
 website/_docs24/tutorial/Qlik.md                   |  156 ++
 website/_docs24/tutorial/acl.cn.md                 |   35 +
 website/_docs24/tutorial/acl.md                    |   37 +
 website/_docs24/tutorial/create_cube.cn.md         |  223 ++
 website/_docs24/tutorial/create_cube.md            |  216 ++
 website/_docs24/tutorial/cube_build_job.cn.md      |   69 +
 website/_docs24/tutorial/cube_build_job.md         |   67 +
 .../_docs24/tutorial/cube_build_performance.cn.md  |  266 +++
 website/_docs24/tutorial/cube_build_performance.md |  266 +++
 website/_docs24/tutorial/cube_spark.cn.md          |  165 ++
 website/_docs24/tutorial/cube_spark.md             |  159 ++
 website/_docs24/tutorial/cube_streaming.cn.md      |  219 ++
 website/_docs24/tutorial/cube_streaming.md         |  219 ++
 website/_docs24/tutorial/flink.md                  |  249 +++
 website/_docs24/tutorial/hue.md                    |  246 +++
 website/_docs24/tutorial/jdbc.cn.md                |   92 +
 website/_docs24/tutorial/jdbc.md                   |   92 +
 website/_docs24/tutorial/kylin_client_tool.cn.md   |  123 ++
 website/_docs24/tutorial/kylin_client_tool.md      |  135 ++
 website/_docs24/tutorial/kylin_sample.cn.md        |   34 +
 website/_docs24/tutorial/kylin_sample.md           |   34 +
 website/_docs24/tutorial/microstrategy.md          |   84 +
 website/_docs24/tutorial/odbc.cn.md                |   34 +
 website/_docs24/tutorial/odbc.md                   |   49 +
 website/_docs24/tutorial/powerbi.cn.md             |   56 +
 website/_docs24/tutorial/powerbi.md                |   54 +
 website/_docs24/tutorial/project_level_acl.cn.md   |   63 +
 website/_docs24/tutorial/project_level_acl.md      |   63 +
 website/_docs24/tutorial/query_pushdown.cn.md      |   50 +
 website/_docs24/tutorial/query_pushdown.md         |   61 +
 .../_docs24/tutorial/setup_jdbc_datasource.cn.md   |   93 +
 website/_docs24/tutorial/setup_jdbc_datasource.md  |   93 +
 website/_docs24/tutorial/setup_systemcube.cn.md    |  438 ++++
 website/_docs24/tutorial/setup_systemcube.md       |  438 ++++
 website/_docs24/tutorial/spark.cn.md               |   90 +
 website/_docs24/tutorial/spark.md                  |   90 +
 website/_docs24/tutorial/squirrel.cn.md            |  112 +
 website/_docs24/tutorial/squirrel.md               |  112 +
 website/_docs24/tutorial/superset.cn.md            |   35 +
 website/_docs24/tutorial/superset.md               |   36 +
 website/_docs24/tutorial/tableau.cn.md             |  112 +
 website/_docs24/tutorial/tableau.md                |  113 +
 website/_docs24/tutorial/tableau_91.cn.md          |   47 +
 website/_docs24/tutorial/tableau_91.md             |   46 +
 .../tutorial/use_cube_planner.cn.md                |    2 +-
 .../tutorial/use_cube_planner.md                   |    2 +-
 website/_docs24/tutorial/use_dashboard.cn.md       |   99 +
 website/_docs24/tutorial/use_dashboard.md          |   99 +
 website/_docs24/tutorial/web.cn.md                 |  109 +
 website/_docs24/tutorial/web.md                    |  106 +
 .../blog/2017-07-21-Improving-Spark-Cubing.md      |    2 +-
 website/assets/css/docs.css                        |   15 +
 .../Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png   |  Bin 0 -> 1084 bytes
 .../2 hybrid-name.png                              |  Bin 0 -> 30995 bytes
 .../3 hybrid-created.png                           |  Bin 0 -> 5057 bytes
 .../4 edit-hybrid.png                              |  Bin 0 -> 31492 bytes
 .../5 sql-statement.png                            |  Bin 0 -> 37483 bytes
 121 files changed, 11228 insertions(+), 39 deletions(-)

diff --git a/website/_data/docs-cn.yml b/website/_data/docs-cn.yml
index ecbb6e7..f2b1bb3 100644
--- a/website/_data/docs-cn.yml
+++ b/website/_data/docs-cn.yml
@@ -40,6 +40,7 @@
   - tutorial/use_cube_planner
   - tutorial/use_dashboard
   - tutorial/setup_jdbc_datasource
+  - tutorial/hybrid
 
 - title: 工具集成
   docs:
diff --git a/website/_data/docs.yml b/website/_data/docs.yml
index bd78b83..9146cfb 100644
--- a/website/_data/docs.yml
+++ b/website/_data/docs.yml
@@ -49,6 +49,7 @@
   - tutorial/use_cube_planner
   - tutorial/use_dashboard
   - tutorial/setup_jdbc_datasource
+  - tutorial/hybrid
 
 - title: Integration
   docs:
diff --git a/website/_data/docs-cn.yml b/website/_data/docs24-cn.yml
similarity index 100%
copy from website/_data/docs-cn.yml
copy to website/_data/docs24-cn.yml
diff --git a/website/_data/docs.yml b/website/_data/docs24.yml
similarity index 100%
copy from website/_data/docs.yml
copy to website/_data/docs24.yml
diff --git a/website/_docs/howto/howto_use_restapi.cn.md b/website/_docs/howto/howto_use_restapi.cn.md
index 2bbebeb..1035bb1 100644
--- a/website/_docs/howto/howto_use_restapi.cn.md
+++ b/website/_docs/howto/howto_use_restapi.cn.md
@@ -83,7 +83,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If the login succeeds, the JSESSIONID will be saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with the "user" option in each curl call; please note this risks leaking the password in the shell history:
@@ -659,7 +659,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs/howto/howto_use_restapi.md b/website/_docs/howto/howto_use_restapi.md
index 0927017..8c4b5f7 100644
--- a/website/_docs/howto/howto_use_restapi.md
+++ b/website/_docs/howto/howto_use_restapi.md
@@ -84,7 +84,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If the login succeeds, the JSESSIONID will be saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with the "user" option in each curl call; please note this risks leaking the password in the shell history:
@@ -660,7 +660,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
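
Both files receive the same fix: the old `endTime` value `1423526400` was a second-resolution timestamp, which Kylin would read as a date in January 1970 and thus an empty build range; the new value `1423612800000` is the millisecond timestamp one day after `startTime`. As a hedged illustration of producing such boundaries — assuming GNU `date`, with host, port, and cube name as placeholders:

```
# Build window 2015-02-10 00:00 UTC .. 2015-02-11 00:00 UTC, as epoch milliseconds
START_MS=$(( $(date -u -d '2015-02-10 00:00:00' +%s) * 1000 ))   # 1423526400000
END_MS=$(( $(date -u -d '2015-02-11 00:00:00' +%s) * 1000 ))     # 1423612800000

curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' \
  -d '{"startTime":'$START_MS', "endTime":'$END_MS', "buildType":"BUILD"}' \
  http://<host>:<port>/kylin/api/cubes/your_cube/build
```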
diff --git a/website/_docs/install/advance_settings.cn.md b/website/_docs/install/advance_settings.cn.md
index 6ed7c35..ce42f2c 100644
--- a/website/_docs/install/advance_settings.cn.md
+++ b/website/_docs/install/advance_settings.cn.md
@@ -111,3 +111,42 @@ kylin.job.admin.dls=adminstrator-address
 Restart the Kylin server for the change to take effect. Set `mail.enabled` to `false` to disable it.
 
 All job administrators will receive notifications. Modelers and analysts need to enter their email addresses in the "Notification List" on the first page of the cube wizard to receive notifications for that cube.
+
+
+## Support MySQL as Kylin metadata storage (Beta)
+
+Kylin supports MySQL as metadata storage; to enable this, perform the following steps:
+<ol>
+<li>Create a new database named kylin in MySQL</li>
+<li>Edit `conf/kylin.properties` and set the following parameters:</li>
+{% highlight Groff markup %}
+kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
+kylin.metadata.jdbc.dialect=mysql
+kylin.metadata.jdbc.json-always-small-cell=true
+kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
+kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
+kylin.metadata.jdbc.max-cell-size=1mb
+{% endhighlight %}
+The configuration items have the following meanings; `url`, `username`, and `password` are required. The remaining items use default values if not configured:
+{% highlight Groff markup %}
+Url: the JDBC url
+Username: the JDBC username
+Password: the JDBC password; if encryption is enabled, write the encrypted password here
+driverClassName: the JDBC driver class name, the default value is com.mysql.jdbc.Driver
+maxActive: the maximum number of database connections, the default value is 5
+maxIdle: the maximum number of idle connections, the default value is 5
+maxWait: the maximum number of milliseconds to wait for a connection, the default value is 1000
+removeAbandoned: whether to automatically reclaim timed-out connections, the default value is true
+removeAbandonedTimeout: the timeout in seconds, the default value is 300
+passwordEncrypted: whether the JDBC password is encrypted, the default value is false
+{% endhighlight %}
+<li>(Optional) Encrypt the password as follows:</li>
+{% highlight Groff markup %}
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
+{% endhighlight %}
+<li>Copy the JDBC connector jar to $KYLIN_HOME/ext (create the directory if it does not exist)</li>
+<li>Start Kylin</li>
+</ol>
+
+*Note: This feature is still in beta; use it with caution.*
\ No newline at end of file
diff --git a/website/_docs/install/advance_settings.md b/website/_docs/install/advance_settings.md
index 52bf044..e3a8307 100644
--- a/website/_docs/install/advance_settings.md
+++ b/website/_docs/install/advance_settings.md
@@ -111,3 +111,42 @@ kylin.job.admin.dls=adminstrator-address
 Restart the Kylin server for the change to take effect. To disable, set `mail.enabled` back to `false`.
 
 Administrators will get notifications for all jobs. Modelers and analysts need to enter their email addresses into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.
+
+
+## Enable MySQL as Kylin metadata storage (Beta)
+
+Kylin supports MySQL as metadata storage; to enable this, perform the following steps:
+<ol>
+<li>Create a new database named kylin in MySQL</li>
+<li>Edit `conf/kylin.properties` and set the following parameters:</li>
+{% highlight Groff markup %}
+kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
+kylin.metadata.jdbc.dialect=mysql
+kylin.metadata.jdbc.json-always-small-cell=true
+kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
+kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
+kylin.metadata.jdbc.max-cell-size=1mb
+{% endhighlight %}
+The configuration items have the following meanings; `url`, `username`, and `password` are required. The remaining items use default values if not configured:
+{% highlight Groff markup %}
+Url: the JDBC url
+Username: the JDBC username
+Password: the JDBC password; if encryption is enabled, write the encrypted password here
+driverClassName: the JDBC driver class name, the default value is com.mysql.jdbc.Driver
+maxActive: the maximum number of database connections, the default value is 5
+maxIdle: the maximum number of idle connections, the default value is 5
+maxWait: the maximum number of milliseconds to wait for a connection, the default value is 1000
+removeAbandoned: whether to automatically reclaim timed-out connections, the default value is true
+removeAbandonedTimeout: the timeout in seconds, the default value is 300
+passwordEncrypted: whether the JDBC password is encrypted, the default value is false
+{% endhighlight %}
+<li>(Optional) Encrypt the password as follows:</li>
+{% highlight Groff markup %}
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
+{% endhighlight %}
+<li>Copy the JDBC connector jar to $KYLIN_HOME/ext (create the directory if it does not exist)</li>
+<li>Start Kylin</li>
+</ol>
+
+*Note: This feature is still in beta; use it with caution.*
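
Taken together, the numbered steps above amount to a short shell session; a minimal sketch, assuming MySQL on localhost and a placeholder path for the connector jar:

```
# Step 1: create the metadata database named in kylin.metadata.url
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS kylin;"

# Step 4: make the MySQL JDBC driver visible to Kylin (jar path is a placeholder)
mkdir -p $KYLIN_HOME/ext
cp /path/to/mysql-connector-java.jar $KYLIN_HOME/ext/

# Step 5: start Kylin
$KYLIN_HOME/bin/kylin.sh start
```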
diff --git a/website/_docs/install/configuration.cn.md b/website/_docs/install/configuration.cn.md
index 6523df3..6fc219d 100644
--- a/website/_docs/install/configuration.cn.md
+++ b/website/_docs/install/configuration.cn.md
@@ -59,6 +59,7 @@ The main configuration file of Kylin.
 | kylin.env.zookeeper-acl-enabled                       | false                |                                                              | No                        |
 | kylin.env.zookeeper.zk-auth                           | digest:ADMIN:KYLIN   |                                                              | No                        |
 | kylin.env.zookeeper.zk-acl                            | world:anyone:rwcda   |                                                              | No                        |
+| kylin.metadata.dimension-encoding-max-length          | 256                  | Max length for one dimension's encoding                      | Yes                       |
 | kylin.metadata.url                                    | kylin_metadata@hbase | Kylin metadata storage                                       | No                        |
 | kylin.metadata.sync-retries                           | 3                    |                                                              | No                        |
 | kylin.metadata.sync-error-handler                     |                      |                                                              | No                        |
@@ -66,6 +67,12 @@ The main configuration file of Kylin.
 | kylin.metadata.hbase-client-scanner-timeout-period    | 10000                |                                                              | No                        |
 | kylin.metadata.hbase-rpc-timeout                      | 5000                 |                                                              | No                        |
 | kylin.metadata.hbase-client-retries-number            | 1                    |                                                              | No                        |
+| kylin.metadata.jdbc.dialect                           | mysql                | The JDBC dialect type                                        | Yes                       |
+| kylin.metadata.resource-store-provider.jdbc           | org.apache.kylin.common.persistence.JDBCResourceStore| The ResourceStore class used for JDBC metadata|              |
+| kylin.metadata.jdbc.json-always-small-cell            | true                 |                                                              | Yes                       |
+| kylin.metadata.jdbc.small-cell-meta-size-warning-threshold| 100mb            |                                                              | Yes                       |
+| kylin.metadata.jdbc.small-cell-meta-size-error-threshold| 1gb                |                                                              | Yes                       |
+| kylin.metadata.jdbc.max-cell-size                     | 1mb                  |                                                              | Yes                       |
 | kylin.dictionary.use-forest-trie                      | true                 |                                                              | No                        |
 | kylin.dictionary.forest-trie-max-mb                   | 500                  |                                                              | No                        |
 | kylin.dictionary.max-cache-entry                      | 3000                 |                                                              | No                        |
@@ -73,6 +80,8 @@ The main configuration file of Kylin.
 | kylin.dictionary.append-entry-size                    | 10000000             |                                                              | No                        |
 | kylin.dictionary.append-max-versions                  | 3                    |                                                              | No                        |
 | kylin.dictionary.append-version-ttl                   | 259200000            |                                                              | No                        |
+| kylin.dictionary.resuable                             | false                | Whether to reuse the dictionary                              | Yes                       |
+| kylin.dictionary.shrunken-from-global-enabled         | false                | Whether to shrink the global dictionary                      | Yes                       |
 | kylin.snapshot.max-cache-entry                        | 500                  |                                                              | No                        |
 | kylin.snapshot.max-mb                                 | 300                  |                                                              | No                        |
 | kylin.snapshot.ext.shard-mb                           | 500                  |                                                              | No                        |
@@ -80,16 +89,23 @@ The main configuration file of Kylin.
 | kylin.snapshot.ext.local.cache.max-size-gb            | 200                  |                                                              | No                        |
 | kylin.cube.size-estimate-ratio                        | 0.25                 |                                                              | Yes                       |
 | kylin.cube.size-estimate-memhungry-ratio              | 0.05                 | Deprecated                                                   | Yes                       |
-| kylin.cube.size-estimate-countdistinct-ratio          | 0.05                 |                                                              | Yes                       |
+| kylin.cube.size-estimate-countdistinct-ratio          | 0.5                  |                                                              | Yes                       |
+| kylin.cube.size-estimate-topn-ratio                   | 0.5                  |                                                              | Yes                       |
 | kylin.cube.algorithm                                  | auto                 | Cubing algorithm for MR engine, other options: layer, inmem  | Yes                       |
 | kylin.cube.algorithm.layer-or-inmem-threshold         | 7                    |                                                              | Yes                       |
 | kylin.cube.algorithm.inmem-split-limit                | 500                  |                                                              | Yes                       |
 | kylin.cube.algorithm.inmem-concurrent-threads         | 1                    |                                                              | Yes                       |
 | kylin.cube.ignore-signature-inconsistency             | false                |                                                              |                           |
-| kylin.cube.aggrgroup.max-combination                  | 4096                 | Max cuboid numbers in a Cube                                 | Yes                       |
+| kylin.cube.aggrgroup.max-combination                  | 32768                | Max cuboid numbers in a Cube                                 | Yes                       |
 | kylin.cube.aggrgroup.is-mandatory-only-valid          | false                | Whether to allow a Cube with only the base cuboid.           | Yes                       |
+| kylin.cube.cubeplanner.enabled                        | true                 | Whether to enable the cube planner                           | Yes                       |
+| kylin.cube.cubeplanner.enabled-for-existing-cube      | true                 | Whether to enable the cube planner for existing cubes        | Yes                       |
+| kylin.cube.cubeplanner.algorithm-threshold-greedy     | 8                    |                                                              | Yes                       |
+| kylin.cube.cubeplanner.expansion-threshold            | 15.0                 |                                                              | Yes                       |
+| kylin.cube.cubeplanner.recommend-cache-max-size       | 200                  |                                                              | No                        |
+| kylin.cube.cubeplanner.mandatory-rollup-threshold     | 1000                 |                                                              | Yes                       |
+| kylin.cube.cubeplanner.algorithm-threshold-genetic    | 23                   |                                                              | Yes                       |
 | kylin.cube.rowkey.max-size                            | 63                   | Max columns in Rowkey                                        | No                        |
-| kylin.metadata.dimension-encoding-max-length          | 256                  | Max length for one dimension's encoding                      | Yes                       |
 | kylin.cube.max-building-segments                      | 10                   | Max building segments in one Cube                            | Yes                       |
 | kylin.cube.allow-appear-in-multiple-projects          | false                | Whether to allow a Cube to appear in multiple projects       | No                        |
 | kylin.cube.gtscanrequest-serialization-level          | 1                    |                                                              |                           |
@@ -112,11 +128,13 @@ The main configuration file of Kylin.
 | kylin.job.scheduler.priority-bar-fetch-from-queue     | 20                   |                                                              | No                        |
 | kylin.job.scheduler.poll-interval-second              | 30                   |                                                              | No                        |
 | kylin.job.error-record-threshold                      | 0                    |                                                              | No                        |
+| kylin.job.cube-auto-ready-enabled                     | true                 | Whether to enable the cube automatically when build finishes | Yes                       |
 | kylin.source.hive.keep-flat-table                     | false                | Whether keep the intermediate Hive table after job finished. | No                        |
 | kylin.source.hive.database-for-flat-table             | default              | Hive database to create the intermediate table.              | No                        |
 | kylin.source.hive.flat-table-storage-format           | SEQUENCEFILE         |                                                              | No                        |
 | kylin.source.hive.flat-table-field-delimiter          | \u001F               |                                                              | No                        |
 | kylin.source.hive.redistribute-flat-table             | true                 | Whether or not to redistribute the flat table.               | Yes                       |
+| kylin.source.hive.redistribute-column-count           | 3                    | The number of columns used for redistribution                | Yes                       |
 | kylin.source.hive.client                              | cli                  |                                                              | No                        |
 | kylin.source.hive.beeline-shell                       | beeline              |                                                              | No                        |
 | kylin.source.hive.beeline-params                      |                      |                                                              | No                        |
@@ -166,6 +184,7 @@ The main configuration file of Kylin.
 | kylin.storage.hbase.max-hconnection-threads           | 2048                 |                                                              |                           |
 | kylin.storage.hbase.core-hconnection-threads          | 2048                 |                                                              |                           |
 | kylin.storage.hbase.hconnection-threads-alive-seconds | 60                   |                                                              |                           |
+| kylin.storage.hbase.replication-scope                 | 0                    | Whether to configure HBase cluster replication               | Yes                       |
 | kylin.engine.mr.lib-dir                               |                      |                                                              |                           |
 | kylin.engine.mr.reduce-input-mb                       | 500                  |                                                              |                           |
 | kylin.engine.mr.reduce-count-ratio                    | 1.0                  |                                                              |                           |
@@ -182,7 +201,12 @@ The main configuration file of Kylin.
 | kylin.engine.spark.min-partition                      | 1                    | Spark Cubing RDD min partition number                        | Yes                       |
 | kylin.engine.spark.max-partition                      | 5000                 | RDD max partition number                                     | Yes                       |
 | kylin.engine.spark.storage-level                      | MEMORY_AND_DISK_SER  | RDD persistent level.                                        | Yes                       |
-| kylin.query.skip-empty-segments                       | true                 | Whether directly skip empty segment (metadata shows size be 0) when run SQL query. | Yes                       |
+| kylin.engine.spark-conf.spark.hadoop.dfs.replication  | 2                    |                                                              |                           |
+| kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress| true|                                                      |                           |
+| kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec  | org.apache.hadoop.io.compress.DefaultCodec|        |                           |
+| kylin.engine.spark-conf-mergedict.spark.executor.memory| 6G                  |                                                              | Yes                       |
+| kylin.engine.spark-conf-mergedict.spark.memory.fraction| 0.2                 |                                                              | Yes                       |
+| kylin.query.skip-empty-segments                       | true                 | Whether to directly skip empty segments (metadata shows size 0) when running SQL queries | Yes |
 | kylin.query.force-limit                               | -1                   |                                                              |                           |
 | kylin.query.max-scan-bytes                            | 0                    |                                                              |                           |
 | kylin.query.max-return-rows                           | 5000000              |                                                              |                           |
@@ -193,6 +217,7 @@ The main configuration file of Kylin.
 | kylin.query.security-enabled                          | true                 |                                                              |                           |
 | kylin.query.cache-enabled                             | true                 |                                                              |                           |
 | kylin.query.timeout-seconds                           | 0                    |                                                              |                           |
+| kylin.query.timeout-seconds-coefficient               | 0.5                  | The coefficient to control the query timeout in seconds      | Yes                       |
 | kylin.query.pushdown.runner-class-name                |                      |                                                              |                           |
 | kylin.query.pushdown.update-enabled                   | false                |                                                              |                           |
 | kylin.query.pushdown.cache-enabled                    | false                |                                                              |                           |
@@ -204,6 +229,13 @@ The main configuration file of Kylin.
 | kylin.query.pushdown.jdbc.pool-max-idle               | 8                    |                                                              |                           |
 | kylin.query.pushdown.jdbc.pool-min-idle               | 0                    |                                                              |                           |
 | kylin.query.security.table-acl-enabled                | true                 |                                                              | No                        |
+| kylin.query.calcite.extras-props.conformance          | LENIENT              |                                                              | Yes                       |
+| kylin.query.calcite.extras-props.caseSensitive        | true                 | Whether identifiers are case sensitive                       | Yes                       |
+| kylin.query.calcite.extras-props.unquotedCasing       | TO_UPPER             | Options: UNCHANGED, TO_UPPER, TO_LOWER                       | Yes                       |
+| kylin.query.calcite.extras-props.quoting              | DOUBLE_QUOTE         | Options: DOUBLE_QUOTE, BACK_TICK, BRACKET                    | Yes                       |
+| kylin.query.statement-cache-max-num                   | 50000                | Max number of cached query statements                        | Yes                       |
+| kylin.query.statement-cache-max-num-per-key           | 50                   |                                                              | Yes                       |
+| kylin.query.enable-dict-enumerator                    | false                | Whether to enable the dictionary enumerator                  | Yes                       |
 | kylin.server.mode                                     | all                  | Kylin node mode: all\|job\|query.                            | No                        |
 | kylin.server.cluster-servers                          | localhost:7070       |                                                              | No                        |
 | kylin.server.cluster-name                             |                      |                                                              | No                        |
@@ -219,5 +251,5 @@ The main configuration file of Kylin.
 | kylin.web.cross-domain-enabled                        | true                 |                                                              | No                        |
 | kylin.web.export-allow-admin                          | true                 |                                                              | No                        |
 | kylin.web.export-allow-other                          | true                 |                                                              | No                        |
-| kylin.web.dashboard-enabled                           | false                |                                                              | No            |
+| kylin.web.dashboard-enabled                           | false                |                                                              | No                        |
 
diff --git a/website/_docs/install/configuration.md b/website/_docs/install/configuration.md
index 542b419..b2bf6ce 100644
--- a/website/_docs/install/configuration.md
+++ b/website/_docs/install/configuration.md
@@ -59,6 +59,7 @@ The main configuration file of Kylin.
 | kylin.env.zookeeper-acl-enabled                       | false                |                                                              | No                        |
 | kylin.env.zookeeper.zk-auth                           | digest:ADMIN:KYLIN   |                                                              | No                        |
 | kylin.env.zookeeper.zk-acl                            | world:anyone:rwcda   |                                                              | No                        |
+| kylin.metadata.dimension-encoding-max-length          | 256                  | Max length for one dimension's encoding                      | Yes                       |
 | kylin.metadata.url                                    | kylin_metadata@hbase | Kylin metadata storage                                       | No                        |
 | kylin.metadata.sync-retries                           | 3                    |                                                              | No                        |
 | kylin.metadata.sync-error-handler                     |                      |                                                              | No                        |
@@ -66,6 +67,12 @@ The main configuration file of Kylin.
 | kylin.metadata.hbase-client-scanner-timeout-period    | 10000                |                                                              | No                        |
 | kylin.metadata.hbase-rpc-timeout                      | 5000                 |                                                              | No                        |
 | kylin.metadata.hbase-client-retries-number            | 1                    |                                                              | No                        |
+| kylin.metadata.jdbc.dialect                           | mysql                | The JDBC dialect of the metadata store                       | Yes                       |
+| kylin.metadata.resource-store-provider.jdbc           | org.apache.kylin.common.persistence.JDBCResourceStore | The class used for the JDBC resource store |               |
+| kylin.metadata.jdbc.json-always-small-cell            | true                 |                                                              | Yes                       |
+| kylin.metadata.jdbc.small-cell-meta-size-warning-threshold| 100mb            |                                                              | Yes                       |
+| kylin.metadata.jdbc.small-cell-meta-size-error-threshold| 1gb                |                                                              | Yes                       |
+| kylin.metadata.jdbc.max-cell-size                     | 1mb                  |                                                              | Yes                       |
 | kylin.dictionary.use-forest-trie                      | true                 |                                                              | No                        |
 | kylin.dictionary.forest-trie-max-mb                   | 500                  |                                                              | No                        |
 | kylin.dictionary.max-cache-entry                      | 3000                 |                                                              | No                        |
@@ -73,6 +80,8 @@ The main configuration file of Kylin.
 | kylin.dictionary.append-entry-size                    | 10000000             |                                                              | No                        |
 | kylin.dictionary.append-max-versions                  | 3                    |                                                              | No                        |
 | kylin.dictionary.append-version-ttl                   | 259200000            |                                                              | No                        |
+| kylin.dictionary.resuable                             | false                | Whether to reuse dictionaries                                | Yes                       |
+| kylin.dictionary.shrunken-from-global-enabled         | false                | Whether to shrink the global dictionary                      | Yes                       |
 | kylin.snapshot.max-cache-entry                        | 500                  |                                                              | No                        |
 | kylin.snapshot.max-mb                                 | 300                  |                                                              | No                        |
 | kylin.snapshot.ext.shard-mb                           | 500                  |                                                              | No                        |
@@ -80,16 +89,23 @@ The main configuration file of Kylin.
 | kylin.snapshot.ext.local.cache.max-size-gb            | 200                  |                                                              | No                        |
 | kylin.cube.size-estimate-ratio                        | 0.25                 |                                                              | Yes                       |
 | kylin.cube.size-estimate-memhungry-ratio              | 0.05                 | Deprecated                                                   | Yes                       |
-| kylin.cube.size-estimate-countdistinct-ratio          | 0.05                 |                                                              | Yes                       |
+| kylin.cube.size-estimate-countdistinct-ratio          | 0.5                  |                                                              | Yes                       |
+| kylin.cube.size-estimate-topn-ratio                   | 0.5                  |                                                              | Yes                       |
 | kylin.cube.algorithm                                  | auto                 | Cubing algorithm for MR engine, other options: layer, inmem  | Yes                       |
 | kylin.cube.algorithm.layer-or-inmem-threshold         | 7                    |                                                              | Yes                       |
 | kylin.cube.algorithm.inmem-split-limit                | 500                  |                                                              | Yes                       |
 | kylin.cube.algorithm.inmem-concurrent-threads         | 1                    |                                                              | Yes                       |
 | kylin.cube.ignore-signature-inconsistency             | false                |                                                              |                           |
-| kylin.cube.aggrgroup.max-combination                  | 4096                 | Max cuboid numbers in a Cube                                 | Yes                       |
+| kylin.cube.aggrgroup.max-combination                  | 32768                | Max cuboid numbers in a Cube                                 | Yes                       |
+| kylin.cube.aggrgroup.is-mandatory-only-valid          | false                | Whether a Cube with only the base cuboid is allowed          | Yes                       |
+| kylin.cube.cubeplanner.enabled                        | true                 | Whether to enable Cube Planner                               | Yes                       |
+| kylin.cube.cubeplanner.enabled-for-existing-cube      | true                 | Whether to enable Cube Planner for existing Cubes            | Yes                       |
+| kylin.cube.cubeplanner.algorithm-threshold-greedy     | 8                    |                                                              | Yes                       |
+| kylin.cube.cubeplanner.expansion-threshold            | 15.0                 |                                                              | Yes                       |
+| kylin.cube.cubeplanner.recommend-cache-max-size       | 200                  |                                                              | No                        |
+| kylin.cube.cubeplanner.mandatory-rollup-threshold     | 1000                 |                                                              | Yes                       |
+| kylin.cube.cubeplanner.algorithm-threshold-genetic    | 23                   |                                                              | Yes                       |
 | kylin.cube.rowkey.max-size                            | 63                   | Max columns in Rowkey                                        | No                        |
-| kylin.metadata.dimension-encoding-max-length          | 256                  | Max length for one dimension's encoding                      | Yes                       |
 | kylin.cube.max-building-segments                      | 10                   | Max building segments in one Cube                            | Yes                       |
 | kylin.cube.allow-appear-in-multiple-projects          | false                | Whether a Cube is allowed to appear in multiple projects     | No                        |
 | kylin.cube.gtscanrequest-serialization-level          | 1                    |                                                              |                           |
@@ -112,11 +128,13 @@ The main configuration file of Kylin.
 | kylin.job.scheduler.priority-bar-fetch-from-queue     | 20                   |                                                              | No                        |
 | kylin.job.scheduler.poll-interval-second              | 30                   |                                                              | No                        |
 | kylin.job.error-record-threshold                      | 0                    |                                                              | No                        |
+| kylin.job.cube-auto-ready-enabled                     | true                 | Whether to enable the Cube automatically when the build finishes | Yes                   |
 | kylin.source.hive.keep-flat-table                     | false                | Whether keep the intermediate Hive table after job finished. | No                        |
 | kylin.source.hive.database-for-flat-table             | default              | Hive database to create the intermediate table.              | No                        |
 | kylin.source.hive.flat-table-storage-format           | SEQUENCEFILE         |                                                              | No                        |
 | kylin.source.hive.flat-table-field-delimiter          | \u001F               |                                                              | No                        |
 | kylin.source.hive.redistribute-flat-table             | true                 | Whether or not to redistribute the flat table.               | Yes                       |
+| kylin.source.hive.redistribute-column-count           | 3                    | The number of columns used for redistribution                | Yes                       |
 | kylin.source.hive.client                              | cli                  |                                                              | No                        |
 | kylin.source.hive.beeline-shell                       | beeline              |                                                              | No                        |
 | kylin.source.hive.beeline-params                      |                      |                                                              | No                        |
@@ -166,6 +184,7 @@ The main configuration file of Kylin.
 | kylin.storage.hbase.max-hconnection-threads           | 2048                 |                                                              |                           |
 | kylin.storage.hbase.core-hconnection-threads          | 2048                 |                                                              |                           |
 | kylin.storage.hbase.hconnection-threads-alive-seconds | 60                   |                                                              |                           |
+| kylin.storage.hbase.replication-scope                 | 0                    | Whether to configure HBase cluster replication               | Yes                       |
 | kylin.engine.mr.lib-dir                               |                      |                                                              |                           |
 | kylin.engine.mr.reduce-input-mb                       | 500                  |                                                              |                           |
 | kylin.engine.mr.reduce-count-ratio                    | 1.0                  |                                                              |                           |
@@ -182,7 +201,12 @@ The main configuration file of Kylin.
 | kylin.engine.spark.min-partition                      | 1                    | Spark Cubing RDD min partition number                        | Yes                       |
 | kylin.engine.spark.max-partition                      | 5000                 | RDD max partition number                                     | Yes                       |
 | kylin.engine.spark.storage-level                      | MEMORY_AND_DISK_SER  | RDD persistent level.                                        | Yes                       |
-| kylin.query.skip-empty-segments                       | true                 | Whether directly skip empty segment (metadata shows size be 0) when run SQL query. | Yes                       |
+| kylin.engine.spark-conf.spark.hadoop.dfs.replication  | 2                    |                                                              |                           |
+| kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress| true|                                                      |                           |
+| kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec  | org.apache.hadoop.io.compress.DefaultCodec|        |                           |
+| kylin.engine.spark-conf-mergedict.spark.executor.memory| 6G                  |                                                              | Yes                       |
+| kylin.engine.spark-conf-mergedict.spark.memory.fraction| 0.2                 |                                                              | Yes                       |
+| kylin.query.skip-empty-segments                       | true                 | Whether to skip empty segments (whose size is 0 in metadata) when running a SQL query | Yes |
 | kylin.query.force-limit                               | -1                   |                                                              |                           |
 | kylin.query.max-scan-bytes                            | 0                    |                                                              |                           |
 | kylin.query.max-return-rows                           | 5000000              |                                                              |                           |
@@ -193,6 +217,7 @@ The main configuration file of Kylin.
 | kylin.query.security-enabled                          | true                 |                                                              |                           |
 | kylin.query.cache-enabled                             | true                 |                                                              |                           |
 | kylin.query.timeout-seconds                           | 0                    |                                                              |                           |
+| kylin.query.timeout-seconds-coefficient               | 0.5                  | The coefficient to control the query timeout in seconds      | Yes                       |
 | kylin.query.pushdown.runner-class-name                |                      |                                                              |                           |
 | kylin.query.pushdown.update-enabled                   | false                |                                                              |                           |
 | kylin.query.pushdown.cache-enabled                    | false                |                                                              |                           |
@@ -204,6 +229,13 @@ The main configuration file of Kylin.
 | kylin.query.pushdown.jdbc.pool-max-idle               | 8                    |                                                              |                           |
 | kylin.query.pushdown.jdbc.pool-min-idle               | 0                    |                                                              |                           |
 | kylin.query.security.table-acl-enabled                | true                 |                                                              | No                        |
+| kylin.query.calcite.extras-props.conformance          | LENIENT              |                                                              | Yes                       |
+| kylin.query.calcite.extras-props.caseSensitive        | true                 | Whether identifiers are case sensitive                       | Yes                       |
+| kylin.query.calcite.extras-props.unquotedCasing       | TO_UPPER             | Options: UNCHANGED, TO_UPPER, TO_LOWER                       | Yes                       |
+| kylin.query.calcite.extras-props.quoting              | DOUBLE_QUOTE         | Options: DOUBLE_QUOTE, BACK_TICK, BRACKET                    | Yes                       |
+| kylin.query.statement-cache-max-num                   | 50000                | Max number of cached query statements                        | Yes                       |
+| kylin.query.statement-cache-max-num-per-key           | 50                   |                                                              | Yes                       |
+| kylin.query.enable-dict-enumerator                    | false                | Whether to enable the dictionary enumerator                  | Yes                       |
 | kylin.server.mode                                     | all                  | Kylin node mode: all\|job\|query.                            | No                        |
 | kylin.server.cluster-servers                          | localhost:7070       |                                                              | No                        |
 | kylin.server.cluster-name                             |                      |                                                              | No                        |
@@ -219,5 +251,5 @@ The main configuration file of Kylin.
 | kylin.web.cross-domain-enabled                        | true                 |                                                              | No                        |
 | kylin.web.export-allow-admin                          | true                 |                                                              | No                        |
 | kylin.web.export-allow-other                          | true                 |                                                              | No                        |
-| kylin.web.dashboard-enabled                           | false                |                                                              | No            |
+| kylin.web.dashboard-enabled                           | false                |                                                              | No                        |
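+
+For illustration only, a few of the settings above might be overridden in `kylin.properties` like this (the values are examples taken from the defaults in the table, not recommendations):
+
+{% highlight Groff markup %}
+kylin.query.calcite.extras-props.caseSensitive=true
+kylin.query.calcite.extras-props.unquotedCasing=TO_UPPER
+kylin.query.calcite.extras-props.quoting=DOUBLE_QUOTE
+kylin.cube.cubeplanner.enabled=true
+kylin.metadata.dimension-encoding-max-length=256
+{% endhighlight %}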
 
diff --git a/website/_docs/tutorial/hybrid.cn.md b/website/_docs/tutorial/hybrid.cn.md
new file mode 100644
index 0000000..a93b9c6
--- /dev/null
+++ b/website/_docs/tutorial/hybrid.cn.md
@@ -0,0 +1,47 @@
+---
+layout: docs-cn
+title:  Hybrid Model
+categories: 教程
+permalink: /cn/docs/tutorial/hybrid.html
+version: v1.2
+since: v0.7.1
+---
+
+This tutorial will guide you through creating a Hybrid model. 
+
+### I. Create a Hybrid Model
+A Hybrid model can contain multiple cubes.
+
+1. Click `Model` in the top bar, then click the `Models` tab. Click the `+New` button and select `New Hybrid` in the drop-down list.
+
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png)
+
+2. Enter a name for the Hybrid, choose the model that includes the cubes you want to query, then check the box before a cube name and click the > button to add the cube(s) to the Hybrid.
+
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/2 hybrid-name.png)
+    
+*Note: if you want to select another model, you should first remove all selected cubes.* 
+
+3. Click `Submit` and then select `Yes` to save the Hybrid model. Once created, the Hybrid model will appear in the `Hybrids` list on the left.
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 hybrid-created.png)
+
+### II. Update a Hybrid Model
+1. Click the Hybrid name, then click the `Edit` button. You can then update the Hybrid by adding or removing cube(s). 
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png)
+
+2. Click `Submit` and then select `Yes` to save the Hybrid model.
+
+Currently you can only view Hybrid details by clicking the `Edit` button.
+
+### III. Drop a Hybrid Model
+1. Place the mouse over the Hybrid name, click the `Action` button and select `Drop` in the drop-down list. A confirmation window will then pop up. 
+
+2. Click `Yes` to delete the Hybrid model. 
+
+### IV. Run a Query
+Once the Hybrid model is created, you can run queries directly. 
+
+Click `Insight` in the top bar, then enter your SQL statement.
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
+    
+Please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/) for more details.
\ No newline at end of file
diff --git a/website/_docs/tutorial/hybrid.md b/website/_docs/tutorial/hybrid.md
new file mode 100644
index 0000000..d81c196
--- /dev/null
+++ b/website/_docs/tutorial/hybrid.md
@@ -0,0 +1,47 @@
+---
+layout: docs
+title: Hybrid Model
+categories: tutorial
+permalink: /docs/tutorial/hybrid.html
+since: v0.7.1
+---
+
+This tutorial will guide you through creating a Hybrid model. 
+
+### I. Create Hybrid Model
+One Hybrid model can contain multiple cubes.
+
+1. Click `Model` in the top bar, then click the `Models` tab. Click the `+New` button and select `New Hybrid` in the drop-down list.
+
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png)
+
+2. Enter a name for the Hybrid, choose the model that includes the cubes you want to query, then check the box before a cube name and click the > button to add the cube(s) to the Hybrid.
+
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/2 hybrid-name.png)
+    
+*Note: If you want to change the model, you should first remove all the selected cubes.* 
+
+3. Click `Submit` and then select `Yes` to save the Hybrid model. Once created, the Hybrid model will be shown in the `Hybrids` list on the left.
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 hybrid-created.png)
+
+### II. Update Hybrid Model
+1. Place the mouse over the Hybrid name, click the `Action` button, and select `Edit` in the drop-down list. You can then update the Hybrid by adding (> button) or removing (< button) cubes. 
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png)
+
+2. Click `Submit` and then select `Yes` to save the Hybrid model. 
+
+Currently you can only view Hybrid details by clicking the `Edit` button.
+
+### III. Drop Hybrid Model
+1. Place the mouse over the Hybrid name, click the `Action` button, and select `Drop` in the drop-down list. A confirmation window will then pop up. 
+
+2. Click `Yes` to delete the Hybrid model. 
+
+### IV. Run Query
+After the Hybrid model is created, you can run a query directly. 
+
+Click `Insight` in the top bar, then enter the SQL statement you need.
+    ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
+
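+For example, assuming the sample `kylin_sales` table is covered by the cubes in your Hybrid (adjust the table and column names to your own model):
+
+{% highlight Groff markup %}
+select part_dt, sum(price) as total_sold
+from kylin_sales
+group by part_dt;
+{% endhighlight %}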
+
+Please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/) for more details.
\ No newline at end of file
diff --git a/website/_docs/tutorial/use_cube_planner.cn.md b/website/_docs/tutorial/use_cube_planner.cn.md
index 9fdc162..12b15a1 100644
--- a/website/_docs/tutorial/use_cube_planner.cn.md
+++ b/website/_docs/tutorial/use_cube_planner.cn.md
@@ -32,7 +32,7 @@ kylin.metrics.monitor-enabled=true
 
 ## How to use it
 
-*Note: Cube Planner optimization is not suitable for a new Cube. The Cube should have been online in production for a while (e.g. 3 months) before optimizing it, so that the Kylin platform has collected enough real queries from end users to use for optimization.*  
+*Note: Cube Planner works in two phases. Phase 1 recommends a cuboid list based on estimated cuboid sizes before the Cube is built, while phase 2 recommends a cuboid list for an existing Cube based on query statistics. Before optimization, the Cube should have been online in production for a while (e.g. 3 months), so that the Kylin platform has collected enough real queries from end users to use for optimizing the Cube.*  
 
 #### Step 1:
 
diff --git a/website/_docs/tutorial/use_cube_planner.md b/website/_docs/tutorial/use_cube_planner.md
index 457ff4b..29a0253 100644
--- a/website/_docs/tutorial/use_cube_planner.md
+++ b/website/_docs/tutorial/use_cube_planner.md
@@ -33,7 +33,7 @@ kylin.metrics.monitor-enabled=true
 
 ## How to use it
 
-*Note: Cube planner optimization is not suitable for new Cube. Cube should be online on production for a while (like 3 months) before optimizing it. So that Kylin platform collects enough real queries from end user and use them to optimize the Cube.*  
+*Note: Cube Planner works in two phases. Phase 1 can recommend a cuboid list based on estimated cuboid sizes before building the Cube, while phase 2 recommends a cuboid list for an existing Cube according to query statistics. The Cube should be online in production for a while (e.g. 3 months) before optimizing it, so that the Kylin platform collects enough real queries from end users and uses them to optimize the Cube.*  
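+
+For reference, a minimal `kylin.properties` sketch to turn both phases on; the two entries below are documented in the configuration page of this release, and the values shown are their documented defaults:
+
+{% highlight Groff markup %}
+kylin.cube.cubeplanner.enabled=true
+kylin.cube.cubeplanner.enabled-for-existing-cube=true
+{% endhighlight %}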
 
 #### Step 1:
 
diff --git a/website/_docs16/howto/howto_use_restapi.md b/website/_docs16/howto/howto_use_restapi.md
index 4e0483e..8d1a575 100644
--- a/website/_docs16/howto/howto_use_restapi.md
+++ b/website/_docs16/howto/howto_use_restapi.md
@@ -82,7 +82,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
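+
+Note that `startTime` and `endTime` are epoch timestamps in milliseconds (UTC). Assuming GNU `date` is available, matching values can be computed like this:
+
+```
+date -u -d '2015-02-10' +%s000   # 1423526400000
+date -u -d '2015-02-11' +%s000   # 1423612800000
+```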
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -658,7 +658,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs20/howto/howto_use_restapi.md b/website/_docs20/howto/howto_use_restapi.md
index 58ec55b..8a925bc 100644
--- a/website/_docs20/howto/howto_use_restapi.md
+++ b/website/_docs20/howto/howto_use_restapi.md
@@ -82,7 +82,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -658,7 +658,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs21/howto/howto_use_restapi.md b/website/_docs21/howto/howto_use_restapi.md
index 4915d10..5b877da 100644
--- a/website/_docs21/howto/howto_use_restapi.md
+++ b/website/_docs21/howto/howto_use_restapi.md
@@ -83,7 +83,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -659,7 +659,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs23/howto/howto_use_restapi.cn.md b/website/_docs23/howto/howto_use_restapi.cn.md
index dae1431..5193ed3 100644
--- a/website/_docs23/howto/howto_use_restapi.cn.md
+++ b/website/_docs23/howto/howto_use_restapi.cn.md
@@ -83,7 +83,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -659,7 +659,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs23/howto/howto_use_restapi.md b/website/_docs23/howto/howto_use_restapi.md
index f40c1f3..17d7052 100644
--- a/website/_docs23/howto/howto_use_restapi.md
+++ b/website/_docs23/howto/howto_use_restapi.md
@@ -83,7 +83,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -659,7 +659,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs24/gettingstarted/best_practices.md b/website/_docs24/gettingstarted/best_practices.md
new file mode 100644
index 0000000..3bde351
--- /dev/null
+++ b/website/_docs24/gettingstarted/best_practices.md
@@ -0,0 +1,27 @@
+---
+layout: docs
+title:  "Community Best Practices"
+categories: gettingstarted
+permalink: /docs24/gettingstarted/best_practices.html
+since: v1.3.x
+---
+
+A list of articles about Kylin best practices contributed by the community. Some of them are from the Chinese community. Many thanks!
+
+* [Apache Kylin在百度地图的实践](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
+
+* [Apache Kylin 大数据时代的OLAP利器](http://www.bitstech.net/2016/01/04/kylin-olap/) (NetEase case study)
+
+* [Apache Kylin在云海的实践](http://www.csdn.net/article/2015-11-27/2826343) (JD.com case study)
+
+* [Kylin, Mondrian, Saiku系统的整合](http://tech.youzan.com/kylin-mondrian-saiku/) (Youzan case study)
+
+* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
+
+* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
+
+* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file
diff --git a/website/_docs24/gettingstarted/concepts.md b/website/_docs24/gettingstarted/concepts.md
new file mode 100644
index 0000000..e48ef8a
--- /dev/null
+++ b/website/_docs24/gettingstarted/concepts.md
@@ -0,0 +1,64 @@
+---
+layout: docs
+title:  "Technical Concepts"
+categories: gettingstarted
+permalink: /docs24/gettingstarted/concepts.html
+since: v1.2
+---
+ 
+Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
+For domain terminology, please refer to: [Terminology](terminology.html)
+
+## CUBE
+* __Table__ - This is the definition of Hive tables as the source of cubes; tables must be synced into Kylin before building cubes.
+![](/images/docs/concepts/DataSource.png)
+
+* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and filter conditions.
+![](/images/docs/concepts/DataModel.png)
+
+* __Cube Descriptor__ - This describes the definition and settings of a cube instance: which data model to use, what dimensions and measures it has, how to partition into segments, how to handle auto-merge, etc.
+![](/images/docs/concepts/CubeDesc.png)
+
+* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor and consisting of one or more cube segments according to the partition settings.
+![](/images/docs/concepts/CubeInstance.png)
+
+* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments covering different date ranges.
+![](/images/docs/concepts/Partition.png)
+
+* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. When data changes in a given period, we can refresh the related segments to avoid rebuilding the whole cube.
+![](/images/docs/concepts/CubeSegment.png)
+
+* __Aggregation Group__ - Each aggregation group is a subset of dimensions, and cuboids are built from the combinations inside it. It aims at pruning cuboids for optimization.
+![](/images/docs/concepts/AggregationGroup.png)
+
+## DIMENSION & MEASURE
+* __Mandatory__ - This dimension type is used for cuboid pruning: if a dimension is specified as "mandatory", then combinations without this dimension are pruned.
+* __Hierarchy__ - This dimension type is used for cuboid pruning: if dimensions A, B, C form a "hierarchy" relation, then only combinations with A, AB or ABC shall remain. 
+* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK of the fact table. Such dimensions are DERIVED and don't participate in cuboid generation.
+![](/images/docs/concepts/Dimension.png)
+
+* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is hard to calculate, so an approximate algorithm, [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), is introduced to keep the error rate at a low level. 
+* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int and bigint are supported.
+* __Top N__ - With this measure type, users can easily get, for example, the top N sellers/buyers. 
+![](/images/docs/concepts/Measure.png)
+
+## CUBE ACTIONS
+* __BUILD__ - Given an interval of the partition column, this action builds a new cube segment.
+* __REFRESH__ - This action rebuilds a cube segment for a given partition period; it is used when the source data of that period has changed.
+* __MERGE__ - This action merges multiple continuous cube segments into a single one. This can be automated with the auto-merge settings in the cube descriptor.
+* __PURGE__ - Clears all segments under a cube instance. This only updates metadata and won't delete the cube data from HBase.
+![](/images/docs/concepts/CubeAction.png)
+
+## JOB STATUS
+* __NEW__ - This denotes the job has just been created.
+* __PENDING__ - This denotes the job is paused by the job scheduler, waiting for resources.
+* __RUNNING__ - This denotes the job is in progress.
+* __FINISHED__ - This denotes the job has finished successfully.
+* __ERROR__ - This denotes the job was aborted with errors.
+* __DISCARDED__ - This denotes the job was cancelled by the end user.
+![](/images/docs/concepts/Job.png)
+
+## JOB ACTION
+* __RESUME__ - Once a job is in ERROR status, this action will try to resume it from the latest successful point.
+* __DISCARD__ - Regardless of a job's status, users can end it and release the resources with the DISCARD action.
+![](/images/docs/concepts/JobAction.png)
diff --git a/website/_docs24/gettingstarted/events.md b/website/_docs24/gettingstarted/events.md
new file mode 100644
index 0000000..cf96eb2
--- /dev/null
+++ b/website/_docs24/gettingstarted/events.md
@@ -0,0 +1,30 @@
+---
+layout: docs
+title:  "Events and Conferences"
+categories: gettingstarted
+permalink: /docs24/gettingstarted/events.html
+---
+
+__Conferences__
+
+* [Apache Kylin on HBase: Extreme OLAP engine for big data](https://www.slideshare.net/ShiShaoFeng1/apache-kylin-on-hbase-extreme-olap-engine-for-big-data) by Shaofeng Shi at [HBaseCon Asia 2018](https://hbase.apache.org/hbaseconasia-2018/)
+* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
+* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
+* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015  [...]
+* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
+* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
+* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
+* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
+* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
+* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
+* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
+
+__Meetup__
+
+* [CDAP in Cloud, Extreme OLAP w Apache Kylin, Twitter Reviews & DataStax](https://www.meetup.com/BigDataApps/events/253429041/) @ Google Cloud, US; 6:00PM to 8:00PM, 2018-8-29
+* [Apache Kylin Meetup @Beijing Meituan&Dianping](http://www.huodongxing.com/event/7452131278400), China; 1:30PM - 5:00PM, Saturday, 2018-8-11
+* [Apache Kylin Meetup @Shanghai Bestpay](http://www.huodongxing.com/event/2449364807100?td=4222685755750), China; 1:30PM - 5:00PM, Saturday, 2018-7-28
+* [Apache Kylin Meetup @Shenzhen](http://cn.mikecrm.com/rjqPLom), China; 1:00PM - 5:00PM, Saturday, 2018-6-23
+* [Apache Kylin & Alluxio Meetup @Shanghai](http://huiyi.csdn.net/activity/product/goods_list?project_id=3746), in Shanghai, China, 1:00PM - 5:30PM, Sunday, 2018-1-21
+* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
+
diff --git a/website/_docs24/gettingstarted/faq.md b/website/_docs24/gettingstarted/faq.md
new file mode 100644
index 0000000..4976663
--- /dev/null
+++ b/website/_docs24/gettingstarted/faq.md
@@ -0,0 +1,148 @@
+---
+layout: docs
+title:  "FAQ"
+categories: gettingstarted
+permalink: /docs24/gettingstarted/faq.html
+since: v0.6.x
+---
+
+#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars locally, but Kylin reports errors like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
+
+  * Kylin needs many dependent jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but Kylin doesn't ship them. It will seek these jars from your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The found jars' paths will be appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this). But in some Hadoop distributions (like AWS EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPA [...]
+
+  * To fix this, find the hbase shell script (in the hbase/bin folder), search for *HBASE_CLASSPATH*, and check whether it overwrites the value like:
+
+  {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
+  {% endhighlight %}
+
+  * If so, change it to keep the original value, like:
+
+  {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
+  {% endhighlight %}
+
+#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
+
+  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)). Usually a dimension's cardinality is below a million, so the "Dict" encoding is good to use. As the dictionary needs to be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin adds a check on this. If you see this error, it is suggested to identify the UHC dimension first and then re-evaluate the de [...]
+
+
+#### 3. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
+
+  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
+
+  {% highlight Groff markup %}
+  I was able to deploy Kylin with following option in POM.
+  <hadoop2.version>2.5.0</hadoop2.version>
+  <yarn.version>2.5.0</yarn.version>
+  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
+  <zookeeper.version>3.4.5</zookeeper.version>
+  <hive.version>0.13.1</hive.version>
+  My Cluster is running on Cloudera Distribution CDH 5.2.0.
+  {% endhighlight %}
+
+
+#### 4. SUM(field) returns a negative result while all the numbers in this field are > 0
+  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need rebuilding). Keep in mind to always declare BIGINT in Hive for an integer column which  [...]
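+
+  For example, the column type can be changed in Hive like this (the table and column names are illustrative):
+
+  {% highlight Groff markup %}
+  ALTER TABLE my_fact_table CHANGE my_int_col my_int_col BIGINT;
+  {% endhighlight %}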
+
+#### 5. Why does Kylin need to extract the distinct columns from the fact table before building the cube?
+  * Kylin uses a dictionary to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values of each column.
+
+#### 6. Why does Kylin calculate the Hive table cardinality?
+  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer to build and the slower to query. Cardinality > 1,000 deserves attention, and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
+
+#### 7. How to add a new user or change the default password?
+  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
+
+   {% highlight Groff markup %}
+   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+   {% endhighlight %}
+
+  * The password hashes for the pre-defined test users can be found in the "sandbox,testing" profile part; to change the default password, you need to generate a new hash and then update it here, please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
+  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
+
+#### 8. Using a sub-query for unsupported SQL
+
+{% highlight Groff markup %}
+Original SQL:
+select fact.slr_sgmt,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from ih_daily_fact fact
+inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+group by fact.slr_sgmt
+{% endhighlight %}
+
+{% highlight Groff markup %}
+Using sub-query
+select a.slr_sgmt,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then a.gmv else 0 end) as W36,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then a.gmv else 0 end) as W35
+from (
+    select fact.slr_sgmt as slr_sgmt,
+    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
+    sum(gmv) as gmv
+    from ih_daily_fact fact
+    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
+) a
+group by a.slr_sgmt
+{% endhighlight %}
+
+#### 9. NPM errors when building Kylin (users in mainland China please pay special attention to this issue)
+
+  * Please add proxy for your NPM:  
+  `npm config set proxy http://YOUR_PROXY_IP`
+
+  * Please update your local NPM repository to use a mirror of npmjs.org, like Taobao NPM:  
+  [http://npm.taobao.org](http://npm.taobao.org)
+
+#### 10. Failed to run BuildCubeWithEngineTest, saying it failed to connect to HBase while HBase is active
+  * Users may get this error when running the HBase client for the first time; please check the error trace to see whether there is an error saying a folder like "/hadoop/hbase/local/jars" couldn't be accessed; if that folder doesn't exist, create it.
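+
+  For example, assuming the folder reported in the error trace is "/hadoop/hbase/local/jars" on the local file system, it can be created with:
+
+  {% highlight Groff markup %}
+  mkdir -p /hadoop/hbase/local/jars
+  {% endhighlight %}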
+
+#### 11. The Kylin JDBC driver returns a different Date/time than the REST API; it seems to apply the timezone when parsing the date.
+  * Please check the [post in mailing list](http://apache-kylin.74782.x6.nabble.com/JDBC-query-result-Date-column-get-wrong-value-td5370.html)
+
+
+#### 12. How to update the default password for 'ADMIN'?
+  * By default, Kylin uses a simple, configuration-based user registry; the default administrator 'ADMIN' with password 'KYLIN' is hard-coded in `kylinSecurity.xml`. To modify the password, you first need to get the new password's encrypted value (with BCrypt), and then set it in `kylinSecurity.xml`. Here is a sample with password 'ABCDE':
+  
+{% highlight Groff markup %}
+
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+
+java -classpath kylin-server-base-2.3.0.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:spring-security-core-4.2.3.RELEASE.jar:commons-codec-1.7.jar:commons-logging-1.1.3.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer BCrypt ABCDE
+
+BCrypt encrypted password is:
+$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu
+
+{% endhighlight %}
+
+Then you can set it in `kylinSecurity.xml`:
+
+{% highlight Groff markup %}
+
+vi ./tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+
+{% endhighlight %}
+
+Replace the original encrypted password with the new one: 
+{% highlight Groff markup %}
+
+        <bean class="org.springframework.security.core.userdetails.User" id="adminUser">
+            <constructor-arg value="ADMIN"/>
+            <constructor-arg
+                    value="$2a$10$A7.J.GIEOQknHmJhEeXUdOnj2wrdG4jhopBgqShTgDkJDMoKxYHVu"/>
+            <constructor-arg ref="adminAuthorities"/>
+        </bean>
+        
+{% endhighlight %}
+
+Restart Kylin for the change to take effect. If you run multiple Kylin servers as a cluster, do the same on each instance. 
+
+#### 13. What kind of data is left in 'kylin.env.hdfs-working-dir'? We often execute the Kylin storage cleanup command, but now our working-dir folder is about 300 GB in size; can we delete old data manually?
+
+The data in 'hdfs-working-dir' ('hdfs:///kylin/kylin_metadata/' by default) includes intermediate files (which will be garbage-collected) and Cuboid data (which won't). The Cuboid data is kept for future segment merges, as Kylin can't merge from HBase. If you're sure those segments won't be merged, you can move them to another path or even delete them.
+
+Please pay attention to the "resources" sub-folder under 'hdfs-working-dir', which persists some big metadata files like dictionaries and lookup tables' snapshots; they shouldn't be moved.
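+
+For example, to check the size of each sub-folder before any manual cleanup (the path below is the default; adjust it to your 'kylin.env.hdfs-working-dir'):
+
+{% highlight Groff markup %}
+hadoop fs -du -h hdfs:///kylin/kylin_metadata/
+{% endhighlight %}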
\ No newline at end of file
diff --git a/website/_docs24/gettingstarted/terminology.md b/website/_docs24/gettingstarted/terminology.md
new file mode 100644
index 0000000..dfe91e9
--- /dev/null
+++ b/website/_docs24/gettingstarted/terminology.md
@@ -0,0 +1,25 @@
+---
+layout: docs
+title:  "Terminology"
+categories: gettingstarted
+permalink: /docs24/gettingstarted/terminology.html
+since: v0.5.x
+---
+ 
+
+Here are some domain terms we are using in Apache Kylin; please check them for your reference.   
+They are basic knowledge of Apache Kylin and will also help you understand related concepts, terms, and theory of Data Warehousing and Business Intelligence for analytics. 
+
+* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
+* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
+* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
+* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
+* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
+* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
+* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
+* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
+* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
+* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
+
+
+
diff --git a/website/_docs24/howto/howto_backup_metadata.cn.md b/website/_docs24/howto/howto_backup_metadata.cn.md
new file mode 100644
index 0000000..acbf395
--- /dev/null
+++ b/website/_docs24/howto/howto_backup_metadata.cn.md
@@ -0,0 +1,59 @@
+---
+layout: docs-cn
+title:  备份元数据
+categories: 帮助
+permalink: /cn/docs24/howto/howto_backup_metadata.html
+---
+
+Kylin将它全部的元数据(包括cube描述和实例、项目、倒排索引描述和实例、任务、表和字典)组织成层级文件系统的形式。然而,Kylin使用hbase来存储元数据,而不是一个普通的文件系统。如果你查看过Kylin的配置文件(kylin.properties),你会发现这样一行:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+这表明元数据会被保存在一个叫作“kylin_metadata”的htable里。你可以在hbase shell里scan该htable来获取它。
+
+## 使用二进制包来备份Metadata Store
+
+有时你需要将Kylin的Metadata Store从hbase备份到磁盘文件系统。在这种情况下,假设你在部署Kylin的hadoop命令行(或沙盒)里,你可以到KYLIN_HOME并运行:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+来将你的元数据导出到本地目录,这个目录在KYLIN_HOME/meta_backups下,它的命名规则使用了当前时间作为参数:KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second。
+
+## 使用二进制包来恢复Metadata Store
+
+万一你发现你的元数据被搞得一团糟,想要恢复先前的备份:
+
+首先,重置Metadata Store(这个会清理Kylin在hbase的Metadata Store的所有信息,请确保先备份):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+然后上传备份的元数据到Kylin的Metadata Store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## 在开发环境备份/恢复元数据(0.7.3版本以上可用)
+
+在开发调试Kylin时,典型的环境是一台装有IDE的开发机和一个后台的沙盒,通常你会写代码并在开发机上运行测试案例,但每次都需要将二进制包放到沙盒里以检查元数据是很麻烦的。这时有一个名为SandboxMetastoreCLI的工具类可以帮助你在开发机本地下载/上传元数据。
+
+## 从Metadata Store清理无用的资源(0.7.3版本以上可用)
+随着运行时间增长,类似字典、表快照的资源变得没有用(cube segment被丢弃或者合并了),但是它们依旧占用空间,你可以运行命令来找到并清除它们:
+
+首先,运行一个检查,这是安全的因为它不会改变任何东西:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+将要被删除的资源会被列出来:
+
+接下来,增加“--delete true”参数来清理这些资源;在这之前,你应该确保已经备份metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}
diff --git a/website/_docs24/howto/howto_backup_metadata.md b/website/_docs24/howto/howto_backup_metadata.md
new file mode 100644
index 0000000..f54d9d0
--- /dev/null
+++ b/website/_docs24/howto/howto_backup_metadata.md
@@ -0,0 +1,60 @@
+---
+layout: docs
+title:  Backup Metadata
+categories: howto
+permalink: /docs24/howto/howto_backup_metadata.html
+---
+
+Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin stores it in HBase rather than a normal file system. If you check your Kylin configuration file (kylin.properties), you will find such a line:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+This indicates that the metadata will be saved as an HTable called `kylin_metadata`. You can scan the HTable in the hbase shell to check it out.
+
+## Backup Metadata Store with binary package
+
+Sometimes you need to back up Kylin's Metadata Store from HBase to your local file system.
+In such cases, assuming you're on the Hadoop CLI (or sandbox) where Kylin is deployed, you can go to KYLIN_HOME and run:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+to dump your metadata to a local folder under KYLIN_HOME/meta_backups. The folder is named after the current time, with the syntax KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
+
+## Restore Metadata Store with binary package
+
+In case your metadata store gets messed up and you want to restore a previous backup:
+
+Firstly, reset the metadata store (this will clean everything in Kylin's metadata store in HBase, so make sure you have a backup first):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+Then upload the backup metadata to Kylin's metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## Backup/restore metadata in development env (available since 0.7.3)
+
+When developing/debugging Kylin, typically you have a dev machine with an IDE and a backend sandbox. Usually you'll write code and run test cases on the dev machine, and it would be troublesome if you always had to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow its usage information and run it in your IDE.
+
+## Cleanup unused resources from Metadata Store (available since 0.7.3)
+As time goes on, some resources like dictionaries and table snapshots become useless (as their cube segments get dropped or merged), but they still take up space. You can run a command to find and clean them up from the metadata store:
+
+Firstly, run a check; this is safe as it will not change anything:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+The resources that would be dropped will be listed.
+
+Next, add the "--delete true" parameter to clean up those resources; before doing this, make sure you have backed up the metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}
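+
+If you want periodic backups, a simple cron entry invoking the backup command is usually enough, e.g. (the path and schedule below are just examples):
+
+{% highlight Groff markup %}
+# back up Kylin metadata every day at 02:00
+0 2 * * * KYLIN_HOME=/usr/local/kylin /usr/local/kylin/bin/metastore.sh backup
+{% endhighlight %}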
diff --git a/website/_docs24/howto/howto_build_cube_with_restapi.cn.md b/website/_docs24/howto/howto_build_cube_with_restapi.cn.md
new file mode 100644
index 0000000..401e12b
--- /dev/null
+++ b/website/_docs24/howto/howto_build_cube_with_restapi.cn.md
@@ -0,0 +1,54 @@
+---
+layout: docs-cn
+title:  用 API 构建 Cube
+categories: 帮助
+permalink: /cn/docs24/howto/howto_build_cube_with_restapi.html
+---
+
+### 1. 认证
+*   目前Kylin使用[basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication)。
+*   给第一个请求加上用于认证的 Authorization 头部。
+*   或者进行一个特定的请求: POST http://localhost:7070/kylin/api/user/authentication 。
+*   完成认证后, 客户端可以在接下来的请求里带上cookie。
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2. 获取Cube的详细信息
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   用户可以在返回的cube详细信息里找到cube的segment日期范围。
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 3.	然后提交cube构建任务
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   关于 put 的请求体细节请参考 Build Cube API
+    *   `startTime` 和 `endTime` 应该是utc时间。
+    *   `buildType` 可以是 `BUILD` 、 `MERGE` 或 `REFRESH`。 `BUILD` 用于构建一个新的segment, `REFRESH` 用于刷新一个已有的segment, `MERGE` 用于合并多个已有的segment生成一个较大的segment。
+*   这个方法会返回一个新建的任务实例,它的uuid是任务的唯一id,用于追踪任务状态。
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	跟踪任务状态 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   返回的 `job_status` 代表job的当前状态。
+
+### 5.	如果构建任务出现错误,可以重新开始它
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
diff --git a/website/_docs24/howto/howto_build_cube_with_restapi.md b/website/_docs24/howto/howto_build_cube_with_restapi.md
new file mode 100644
index 0000000..4fa2642
--- /dev/null
+++ b/website/_docs24/howto/howto_build_cube_with_restapi.md
@@ -0,0 +1,53 @@
+---
+layout: docs
+title:  Build Cube with API
+categories: howto
+permalink: /docs24/howto/howto_build_cube_with_restapi.html
+---
+
+### 1.	Authentication
+*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
+*   Add an `Authorization` header to the first request for authentication
+*   Or you can do a specific request by `POST http://localhost:7070/kylin/api/user/authentication`
+*   Once authenticated, the client can issue subsequent requests with cookies.
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+    
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2.	Get details of the cube.
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   Client can find cube segment date ranges in returned cube detail.
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+### 3.	Then submit a build job of the cube. 
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   For put request body detail please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
+    *   `startTime` and `endTime` should be utc timestamp.
+    *   `buildType` can be `BUILD` ,`MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment. `MERGE` is for merging multiple existing segments into one bigger segment.
+*   This method will return a newly created job instance, whose uuid is the unique id used to track the job status.
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	Track job status. 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   Returned `job_status` represents current status of job.
+
+### 5.	If the job got errors, you can resume it. 
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
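+
+Putting the steps together, a minimal end-to-end sketch with curl could look like this (host, credentials, cube name and timestamps are just examples):
+
+{% highlight Groff markup %}
+# 1. authenticate with the default ADMIN:KYLIN account and keep the session cookie
+curl -c cookies.txt --user ADMIN:KYLIN -X POST http://localhost:7070/kylin/api/user/authentication
+
+# 2. submit a BUILD job for the cube
+curl -b cookies.txt -X PUT -H "Content-Type: application/json;charset=UTF-8" \
+    -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
+    http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+# 3. poll the job status using the uuid returned in step 2
+curl -b cookies.txt http://localhost:7070/kylin/api/jobs/<job_uuid>
+{% endhighlight %}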
diff --git a/website/_docs24/howto/howto_cleanup_storage.cn.md b/website/_docs24/howto/howto_cleanup_storage.cn.md
new file mode 100644
index 0000000..aea9be4
--- /dev/null
+++ b/website/_docs24/howto/howto_cleanup_storage.cn.md
@@ -0,0 +1,21 @@
+---
+layout: docs-cn
+title:  清理存储
+categories: 帮助
+permalink: /cn/docs24/howto/howto_cleanup_storage.html
+---
+
+Kylin在构建cube期间会在HDFS上生成中间文件;除此之外,当清理/删除/合并cube时,一些HBase表可能被遗留在HBase中,而以后再也不会被查询;虽然Kylin已经开始做自动化的垃圾回收,但不一定能覆盖到所有的情况;你可以定期做离线的存储清理:
+
+步骤:
+1. 检查哪些资源可以清理,这一步不会删除任何东西:
+{% highlight Groff markup %}
+export KYLIN_HOME=/path/to/kylin_home
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
+{% endhighlight %}
+2. 你可以抽查一两个资源来检查它们是否已经没有被引用了;然后加上“--delete true”选项进行清理。
+{% highlight Groff markup %}
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
+{% endhighlight %}
+完成后,HDFS上的中间文件和HTable会被移除。
diff --git a/website/_docs24/howto/howto_cleanup_storage.md b/website/_docs24/howto/howto_cleanup_storage.md
new file mode 100644
index 0000000..4b76934
--- /dev/null
+++ b/website/_docs24/howto/howto_cleanup_storage.md
@@ -0,0 +1,22 @@
+---
+layout: docs
+title:  Cleanup Storage
+categories: howto
+permalink: /docs24/howto/howto_cleanup_storage.html
+---
+
+Kylin generates intermediate files in HDFS during cube building. Besides, when purging/dropping/merging cubes, some HBase tables may be left in HBase and never be queried again. Although Kylin does some
+automated garbage collection, it might not cover all cases; you can run an offline storage cleanup periodically:
+
+Steps:
+1. Check which resources can be cleaned up; this step will not remove anything:
+{% highlight Groff markup %}
+export KYLIN_HOME=/path/to/kylin_home
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
+{% endhighlight %}
+2. You can spot-check one or two of the listed resources to verify they are no longer referenced; then add the "--delete true" option to start the cleanup:
+{% highlight Groff markup %}
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
+{% endhighlight %}
+When it finishes, the intermediate HDFS files and HTables will be dropped.
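+
+To spot leftover HTables manually, you can also list the tables Kylin created from the HBase shell; the "KYLIN" prefix below assumes the default metadata url setting:
+
+{% highlight Groff markup %}
+echo "list 'KYLIN.*'" | hbase shell
+{% endhighlight %}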
diff --git a/website/_docs24/howto/howto_enable_zookeeper_acl.md b/website/_docs24/howto/howto_enable_zookeeper_acl.md
new file mode 100644
index 0000000..34fc1d4
--- /dev/null
+++ b/website/_docs24/howto/howto_enable_zookeeper_acl.md
@@ -0,0 +1,20 @@
+---
+layout: docs
+title:  Enable Zookeeper ACL
+categories: howto
+permalink: /docs24/howto/howto_enable_zookeeper_acl.html
+---
+
+Edit $KYLIN_HOME/conf/kylin.properties to add the following configuration items:
+
+* Add "kylin.env.zookeeper.zk-auth". It is the configuration item you can specify the zookeeper authenticated information. Its formats is "scheme:id". The value of scheme that the zookeeper supports is "world", "auth", "digest", "ip" or "super". The "id" is the authenticated information of the scheme. For example:
+
+    `kylin.env.zookeeper.zk-auth=digest:ADMIN:KYLIN`
+
+    Here the scheme is "digest" and the id is "ADMIN:KYLIN", which expresses "username:password".
+
+* Add "kylin.env.zookeeper.zk-acl". It is the configuration item you can set access permission. Its formats is "scheme:id:permissions". The value of permissions that the zookeeper supports is "READ", "WRITE", "CREATE", "DELETE" or "ADMIN". For example, we configure that everyone has all the permissions:
+
+    `kylin.env.zookeeper.zk-acl=world:anyone:rwcda`
+
+    Here the scheme is "world", the id is "anyone", and the permissions are "rwcda".
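+
+After restarting Kylin, you can verify the effective ACLs from the ZooKeeper side, for example with the ZooKeeper CLI (the znode path below is only an example; it depends on your deployment):
+
+```
+$ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181
+getAcl /kylin
+```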
diff --git a/website/_docs24/howto/howto_install_ranger_kylin_plugin.md b/website/_docs24/howto/howto_install_ranger_kylin_plugin.md
new file mode 100644
index 0000000..a8fa3a3
--- /dev/null
+++ b/website/_docs24/howto/howto_install_ranger_kylin_plugin.md
@@ -0,0 +1,8 @@
+---
+layout: docs
+title:  Install Ranger Plugin
+categories: howto
+permalink: /docs24/howto/howto_install_ranger_kylin_plugin.html
+---
+
+Please refer to [https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin).
diff --git a/website/_docs24/howto/howto_jdbc.cn.md b/website/_docs24/howto/howto_jdbc.cn.md
new file mode 100644
index 0000000..61d1420
--- /dev/null
+++ b/website/_docs24/howto/howto_jdbc.cn.md
@@ -0,0 +1,92 @@
+---
+layout: docs-cn
+title:  Kylin JDBC Driver
+categories: 帮助
+permalink: /cn/docs24/howto/howto_jdbc.html
+---
+
+### 认证
+
+###### 基于Apache Kylin认证RESTFUL服务。支持的参数:
+* user : 用户名
+* password : 密码
+* ssl: true或false。默认为false;如果为true,所有的服务调用都会使用https。
+
+### 连接url格式:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* 如果“ssl”为true,“port”应该是Kylin server的HTTPS端口。
+* 如果“port”未被指定,driver会使用默认的端口:HTTP 80,HTTPS 443。
+* 必须指定“kylin_project_name”并且用户需要确保它在Kylin server上存在。
+
+### 1. 使用Statement查询
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. 使用PreparedStatement查询
+
+###### 支持的PreparedStatement参数:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. 获取查询结果元数据
+Kylin jdbc driver支持元数据列表方法:
+通过sql模式过滤器(比如 %)列出catalog、schema、table和column。
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
diff --git a/website/_docs24/howto/howto_jdbc.md b/website/_docs24/howto/howto_jdbc.md
new file mode 100644
index 0000000..dfcfaea
--- /dev/null
+++ b/website/_docs24/howto/howto_jdbc.md
@@ -0,0 +1,92 @@
+---
+layout: docs
+title:  JDBC Driver
+categories: howto
+permalink: /docs24/howto/howto_jdbc.html
+---
+
+### Authentication
+
+###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
+* user : username 
+* password : password
+* ssl: true/false. Default is false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported prepared statement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports metadata listing methods:
+listing catalogs, schemas, tables and columns with SQL pattern filters (such as %).
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
diff --git a/website/_docs24/howto/howto_ldap_and_sso.md b/website/_docs24/howto/howto_ldap_and_sso.md
new file mode 100644
index 0000000..f0840fa
--- /dev/null
+++ b/website/_docs24/howto/howto_ldap_and_sso.md
@@ -0,0 +1,130 @@
+---
+layout: docs
+title: Secure with LDAP and SSO
+categories: howto
+permalink: /docs24/howto/howto_ldap_and_sso.html
+---
+
+## Enable LDAP authentication
+
+Kylin supports LDAP authentication for enterprise or production deployments. This is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator for the necessary information, like the LDAP server URL, username/password, and search patterns.
+
+#### Configure LDAP server info
+
+Firstly, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value:
+
+```
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
+```
+
+Configure them in conf/kylin.properties:
+
+```
+ldap.server=ldap://<your_ldap_host>:<port>
+ldap.username=<your_user_name>
+ldap.password=<your_password_encrypted>
+```
+
+Secondly, provide the user search patterns. These depend on your LDAP design; below is just a sample:
+
+```
+ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
+ldap.user.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
+ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```
+
+If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
+
+#### Configure the administrator group
+
+To map an LDAP group to the admin group in Kylin, set "kylin.security.acl.admin-role" to the LDAP group name (keep the original case); the users in this group will be global admins in Kylin.
+
+For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
+
+```
+kylin.security.acl.admin-role=KYLIN-ADMIN-GROUP
+```
+
+
+*Attention: When upgrading from a version earlier than Kylin 2.3 to 2.3 or later, please remove the "ROLE_" prefix from this setting (it was only required before 2.3) and keep the group name in its original case. The kylin.security.acl.default-role property is deprecated.*
+
+#### Enable LDAP
+
+Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
+
+## Enable SSO authentication
+
+From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
+
+Before trying this, you should have successfully enabled LDAP and managed users with it, as the SSO server may only do authentication; Kylin needs to search LDAP to get the user's detailed information.
+
+### Generate IDP metadata xml
+Contact your IDP (identity provider) and ask it to generate the SSO metadata file; usually you need to provide three pieces of info:
+
+  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
+  2. App callback endpoint, to which the SAML assertion will be posted; it needs to be: https://host-name/kylin/saml/SSO
+  3. Public certificate of the Kylin server; the SSO server will encrypt messages with it.
+
+### Generate JKS keystore for Kylin
+As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
+
+Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
+
+```
+$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
+Enter Export Password: <export_pwd>
+Verifying - Enter Export Password: <export_pwd>
+
+
+$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
+
+Enter destination keystore password:  changeit
+Re-enter new password: changeit
+```
+
+This will put the keys into "samlKeystore.jks" under the alias "kylin".
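+
+You can double-check the generated keystore with keytool before deploying it (the alias and password follow the sample above):
+
+```
+keytool -list -v -keystore samlKeystore.jks -storepass changeit -alias kylin
+```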
+
+### Enable Higher Ciphers
+
+Make sure your environment is ready to handle higher-strength crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
+
+### Deploy IDP xml file and keystore to Kylin
+
+The IDP metadata and keystore file need to be deployed on the Kylin web app's classpath at $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes:
+	
+  1. Rename the IDP file to sso_metadata.xml and copy it to Kylin's classpath;
+  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
+  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
+
+```
+<!-- Central storage of cryptographic keys -->
+<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
+	<constructor-arg value="classpath:samlKeystore.jks"/>
+	<constructor-arg type="java.lang.String" value="changeit"/>
+	<constructor-arg>
+		<map>
+			<entry key="kylin" value="changeit"/>
+		</map>
+	</constructor-arg>
+	<constructor-arg type="java.lang.String" value="kylin"/>
+</bean>
+
+```
+
+### Other configurations
+In conf/kylin.properties, add the following properties with your server information:
+
+```
+saml.metadata.entityBaseURL=https://host-name/kylin
+saml.context.scheme=https
+saml.context.serverName=host-name
+saml.context.serverPort=443
+saml.context.contextPath=/kylin
+```
+
+Please note, Kylin assumes there is an "email" attribute in the SAML message representing the login user, and the name before the @ will be used to search LDAP. 
+
+### Enable SSO
+Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
+
diff --git a/website/_docs24/howto/howto_optimize_build.cn.md b/website/_docs24/howto/howto_optimize_build.cn.md
new file mode 100644
index 0000000..6acf034
--- /dev/null
+++ b/website/_docs24/howto/howto_optimize_build.cn.md
@@ -0,0 +1,166 @@
+---
+layout: docs-cn
+title:  优化 Cube 构建
+categories: 帮助
+permalink: /cn/docs24/howto/howto_optimize_build.html
+---
+
+Kylin将Cube构建任务分解为几个依次执行的步骤,这些步骤包括Hive操作、MapReduce操作和其他类型的操作。如果你有很多Cube构建任务需要每天运行,那么你肯定想要减少其中消耗的时间。下文按照Cube构建步骤顺序提供了一些优化经验。
+
+## 创建Hive的中间平表
+
+这一步将数据从源Hive表提取出来(和所有join的表一起)并插入到一个中间平表。如果Cube是分区的,Kylin会加上一个时间条件以确保只有在时间范围内的数据才会被提取。你可以在这个步骤的log查看相关的Hive命令,比如:
+
+```
+hive -e "USE default;
+DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
+
+CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
+(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
+STORED AS SEQUENCEFILE
+LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
+
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
+AIRLINE.FLIGHTDATE
+,AIRLINE.YEAR
+,AIRLINE.QUARTER
+,...
+,AIRLINE.ARRDELAYMINUTES
+FROM AIRLINE.AIRLINE as AIRLINE
+WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
+
+```
+
+在Hive命令运行时,Kylin会用`conf/kylin_hive_conf.xml`里的配置,比如保留更少的冗余备份和启用Hive的mapper side join。需要的话可以根据集群的具体情况增加其他配置。
+
+如果cube的分区列(在这个案例中是"FLIGHTDATE")与Hive表的分区列相同,那么根据它过滤数据能让Hive聪明地跳过不匹配的分区。因此强烈建议用Hive的分区列(如果它是日期列)作为cube的分区列。这对于那些数据量很大的表来说几乎是必须的,否则Hive不得不每次在这步扫描全部文件,消耗非常长的时间。
+
+如果启用了Hive的文件合并,你可以在`conf/kylin_hive_conf.xml`里关闭它,因为Kylin有自己合并文件的方法(下一节):
+
+    <property>
+        <name>hive.merge.mapfiles</name>
+        <value>false</value>
+        <description>Disable Hive's auto merge</description>
+    </property>
+
+## 重新分发中间表
+
+在之前的一步之后,Hive在HDFS上的目录里生成了数据文件:有些是大文件,有些是小文件甚至空文件。这种不平衡的文件分布会导致之后的MR任务出现数据倾斜的问题:有些mapper完成得很快,但其他的就很慢。针对这个问题,Kylin增加了这一个步骤来“重新分发”数据,这是示例输出:
+
+```
+total input rows = 159869711
+expected input rows per mapper = 1000000
+num reducers for RedistributeFlatHiveTableStep = 160
+
+```
+
+重新分发表的命令:
+
+```
+hive -e "USE default;
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+set mapreduce.job.reduces=160;
+set hive.merge.mapredfiles=false;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
+"
+```
+
+首先,Kylin计算出中间表的行数,然后基于行数的大小算出重新分发数据需要的文件数。默认情况下,Kylin为每一百万行分配一个文件。在这个例子中,有1.6亿行和160个reducer,每个reducer会写一个文件。在接下来对这张表进行的MR步骤里,Hadoop会启动和文件相同数量的mapper来处理数据(通常一百万行数据比一个HDFS数据块要小)。如果你的日常数据量没有这么大或者Hadoop集群有足够的资源,你或许想要更多的并发数,这时可以将`conf/kylin.properties`里的`kylin.job.mapreduce.mapper.input.rows`设为小一点的数值,比如:
+
+`kylin.job.mapreduce.mapper.input.rows=500000`
+
+其次,Kylin会运行 *"INSERT OVERWRITE TABLE ... DISTRIBUTE BY "* 形式的HiveQL来分发数据到指定数量的reducer上。
+
+在很多情况下,Kylin请求Hive随机分发数据到reducer,然后得到大小相近的文件,分发的语句是"DISTRIBUTE BY RAND()"。
+
+如果你的cube指定了一个高基数的列,比如"USER_ID",作为"分片"维度(在cube的“高级设置”页面),Kylin会让Hive根据该列的值重新分发数据,那么在该列有着相同值的行将被分发到同一个文件。这比随机分发要好得多,因为不仅重新分布了数据,并且在没有额外代价的情况下对数据进行了预先分类,如此一来接下来的cube构建处理会从中受益。在典型的场景下,这样优化可以减少40%的build时长。在这个案例中分发的语句是"DISTRIBUTE BY USER_ID"。
+
+请注意: 1)“分片”列应该是高基数的维度列,并且它会出现在很多的cuboid中(不只是出现在少数的cuboid)。 使用它来合理进行分发可以在每个时间范围内的数据均匀分布,否则会造成数据倾斜,从而降低build效率。典型的正面例子是:“USER_ID”、“SELLER_ID”、“PRODUCT”、“CELL_NUMBER”等等,这些列的基数应该大于一千(远大于reducer的数量)。 2)"分片"对cube的存储同样有好处,不过这超出了本文的范围。
+
+## 提取事实表的唯一列
+
+在这一步骤Kylin运行MR任务来提取使用字典编码的维度列的唯一值。
+
+实际上这步另外还做了一些事情:通过HyperLogLog计数器收集cube的统计数据,用于估算每个cuboid的行数。如果你发现mapper运行得很慢,这通常表明cube的设计太过复杂,请参考
+[优化cube设计](howto_optimize_cubes.html)来简化cube。如果reducer出现了内存溢出错误,这表明cuboid组合真的太多了或者是YARN的内存分配满足不了需要。如果这一步从任何意义上讲不能在合理的时间内完成,你可以放弃任务并考虑重新设计cube,因为继续下去会花费更长的时间。
+
+你可以通过降低取样的比例(kylin.job.cubing.inmem.sampling.percent)来加速这个步骤,但是帮助可能不大,而且会影响cube统计数据的准确性,所以我们并不推荐。
+
+## 构建维度字典
+
+有了前一步提取的维度列唯一值,Kylin会在内存里构建字典(在下个版本将改为MapReduce任务)。通常这一步比较快,但如果唯一值集合很大,Kylin可能会报出类似“字典不支持过高基数”。对于UHC类型的列,请使用其他编码方式,比如“fixed_length”、“integer”等等。
+
+## 保存cuboid的统计数据和创建 HTable
+
+这两步是轻量级和快速的。
+
+## 构建基础cuboid
+
+这一步用Hive的中间表构建基础的cuboid,是“逐层”构建cube算法的第一轮MR计算。Mapper的数目与第二步的reducer数目相等;Reducer的数目是根据cube统计数据估算的:默认情况下每500MB输出使用一个reducer;如果观察到reducer的数量较少,你可以将kylin.properties里的“kylin.job.mapreduce.default.reduce.input.mb”设为小一点的数值以获得更多的资源,比如:
+
+`kylin.job.mapreduce.default.reduce.input.mb=200`
+
+## 构建N维cuboid
+
+这些步骤是“逐层”构建cube的过程,每一步以前一步的输出作为输入,然后去掉一个维度以聚合得到一个子cuboid。举个例子,cuboid ABCD去掉A得到BCD,去掉B得到ACD。
+
+有些cuboid可以从一个以上的父cuboid聚合得到,这种情况下,Kylin会选择最小的一个父cuboid。举例,AB可以从ABC(id:1110)和ABD(id:1101)生成,则ABD会被选中,因为它的比ABC要小。在这基础上,如果D的基数较小,聚合运算的成本就会比较低。所以,当设计rowkey序列的时候,请记得将基数较小的维度放在末尾。这样不仅有利于cube构建,而且有助于cube查询,因为预聚合也遵循相同的规则。
+
+通常来说,从N维到(N/2)维的构建比较慢,因为这是cuboid数量爆炸性增长的阶段:N维有1个cuboid,(N-1)维有N个cuboid,(N-2)维有N*(N-1)/2个cuboid,以此类推。经过(N/2)维构建的步骤,整个构建任务会逐渐变快。
+
+## 构建cube
+
+这个步骤使用一个新的算法来构建cube:“逐片”构建(也称为“内存”构建)。它会使用一轮MR来计算所有的cuboid,但是比通常情况下更耗内存。配置文件"conf/kylin_job_conf_inmem.xml"正是为这步而设。默认情况下它为每个mapper申请3GB内存。如果你的集群有充足的内存,你可以在上述配置文件中分配更多内存给mapper,这样它会用尽可能多的内存来缓存数据以获得更好的性能,比如:
+
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>6144</value>
+        <description></description>
+    </property>
+    
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx5632m</value>
+        <description></description>
+    </property>
+
+
+请注意,Kylin会根据数据分布(从cube的统计数据里获得)自动选择最优的算法,没有被选中的算法对应的步骤会被跳过。你不需要显式地选择构建算法。
+
+## 将cuboid数据转换为HFile
+
+这一步启动一个MR任务来将cuboid文件(序列文件格式)转换为HBase的HFile格式。Kylin通过cube统计数据计算HBase的region数目,默认情况下每5GB数据对应一个region。Region越多,MR使用的reducer也会越多。如果你观察到reducer数目较小且性能较差,你可以将"conf/kylin.properties"里的以下参数设小一点,比如:
+
+```
+kylin.hbase.region.cut=2
+kylin.hbase.hfile.size.gb=1
+```
+
+如果你不确定一个region应该是多大时,联系你的HBase管理员。
+
+## 将HFile导入HBase表
+
+这一步使用HBase API来将HFile导入region server,这是轻量级并快速的一步。
+
+## 更新cube信息
+
+在导入数据到HBase后,Kylin在元数据中将对应的cube segment标记为ready。
+
+## 清理资源
+
+将中间宽表从Hive删除。这一步不会阻塞任何操作,因为在前一步segment已经被标记为ready。如果这一步发生错误,不用担心,垃圾回收工作可以晚些再通过Kylin的[StorageCleanupJob](howto_cleanup_storage.html)完成。
+
+## 总结
+还有非常多其他提高Kylin性能的方法,如果你有经验可以分享,欢迎通过[dev@kylin.apache.org](mailto:dev@kylin.apache.org)讨论。
diff --git a/website/_docs24/howto/howto_optimize_build.md b/website/_docs24/howto/howto_optimize_build.md
new file mode 100644
index 0000000..af130ae
--- /dev/null
+++ b/website/_docs24/howto/howto_optimize_build.md
@@ -0,0 +1,190 @@
+---
+layout: docs
+title:  Optimize Cube Build
+categories: howto
+permalink: /docs24/howto/howto_optimize_build.html
+---
+
+Kylin decomposes a Cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types of jobs. When you have many Cubes to build daily, you definitely want to speed up this process. Here are some practices you probably want to know, organized in the same order as the build steps.
+
+
+
+## Create Intermediate Flat Hive Table
+
+This step extracts data from source Hive tables (with all tables joined) and inserts them into an intermediate flat table. If Cube is partitioned, Kylin will add a time condition so that only the data in the range would be fetched. You can check the related Hive command in the log of this step, e.g: 
+
+```
+hive -e "USE default;
+DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
+
+CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
+(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
+STORED AS SEQUENCEFILE
+LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
+
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
+AIRLINE.FLIGHTDATE
+,AIRLINE.YEAR
+,AIRLINE.QUARTER
+,...
+,AIRLINE.ARRDELAYMINUTES
+FROM AIRLINE.AIRLINE as AIRLINE
+WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
+"
+
+```
+
+Kylin applies the configuration in conf/kylin\_hive\_conf.xml while the Hive commands are running, for instance, using fewer replications and enabling Hive's mapper side join. If needed, you can add other configurations that suit your cluster.
+
+If the Cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, then filtering on it lets Hive smartly skip the non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the Cube's partition column. This is almost required for very large tables; otherwise Hive has to scan all files each time in this step, which takes a terribly long time.
+
+If your Hive enables file merge, you can disable it in "conf/kylin\_hive\_conf.xml", as Kylin has its own way to merge files (in the next step): 
+
+    <property>
+        <name>hive.merge.mapfiles</name>
+        <value>false</value>
+        <description>Disable Hive's auto merge</description>
+    </property>
+
+
+## Redistribute intermediate table
+
+After the previous step, Hive generates the data files in an HDFS folder: some files are large while some are small or even empty. The imbalanced file distribution would lead to imbalance in the subsequent MR jobs as well: some mappers finish quickly while others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
+
+```
+total input rows = 159869711
+expected input rows per mapper = 1000000
+num reducers for RedistributeFlatHiveTableStep = 160
+
+```
+
+
+Redistribute table, cmd: 
+
+```
+hive -e "USE default;
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+set mapreduce.job.reduces=160;
+set hive.merge.mapredfiles=false;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
+"
+
+```
+
+
+
+Firstly, Kylin gets the row count of the intermediate table; then, based on the row count, it calculates the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this sample, there are about 160 million rows, so 160 reducers are used and each reducer writes one file. In the following MR steps over this table, Hadoop will start as many mappers as there are files to process (usually 1 million rows are smaller than one HDFS block). If your daily data scale isn't that large, or your Hadoop cluster has enough resources, you may want more concurrency; in that case set `kylin.job.mapreduce.mapper.input.rows` in conf/kylin.properties to a smaller value, e.g.:
+
+`kylin.job.mapreduce.mapper.input.rows=500000`
+
+
+Secondly, Kylin runs a *"INSERT OVERWIRTE TABLE .... DISTRIBUTE BY "* HiveQL to distribute the rows among a specified number of reducers.
+
+In most cases, Kylin asks Hive to randomly distribute the rows among the reducers, which yields files very close in size. The distribute clause is "DISTRIBUTE BY RAND()".
+
+If your Cube has specified a "shard by" dimension (in the Cube's "Advanced Setting" page), which is a high cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute the data by that column's value. Then the rows that have the same value in this column will go to the same file. This is much better than random distribution, because the data will be not only redistributed but also pre-categorized without additional cost, thus benefiting the subsequent Cube build process. Under typical scenarios, this optimization can cut 40% off the build duration. In this case the distribute clause is "DISTRIBUTE BY USER\_ID".
+
+**Please note:** 1) The "shard by" column should be a high cardinality dimension column, and it appears in many cuboids (not just appears in seldom cuboids). Utilize it to distribute properly can get equidistribution in every time range; otherwise it will cause data incline, which will reduce the building speed. Typical good cases are: "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER", so forth, whose cardinality is higher than one thousand (should be much more than the reducer numbers [...]
+
+
+
+## Extract Fact Table Distinct Columns
+
+In this step Kylin runs an MR job to fetch the distinct values of the dimensions that use dictionary encoding.
+
+Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each Cuboid. If you find that the mappers work incredibly slowly, it usually indicates that the Cube design is too complex; please check [optimize cube design](howto_optimize_cubes.html) to make the Cube thinner. If the reducers get OutOfMemory errors, it indicates that the Cuboid combinations do explode, or that the default YARN memory allocation cannot meet demand. If this step cannot finish in a reasonable time by any means, you can give up on the job and consider redesigning the Cube, as continuing will cost even more time.
+
+You can reduce the sampling percentage (kylin.job.cubing.inmem.sampling.percent in kylin.properties) to accelerate this step, but it may not help much and it impacts the accuracy of the Cube statistics, so we don't recommend it.
+
+
+
+## Build Dimension Dictionary
+
+With the distinct values fetched in the previous step, Kylin builds the dictionaries in memory (in the next version this will be moved to MR). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For such UHC columns, please use another encoding method, such as "fixed_length" or "integer".
+
+
+
+## Save Cuboid Statistics and Create HTable
+
+These two steps are lightweight and fast.
+
+
+
+## Build Base Cuboid 
+
+This step builds the base cuboid from the intermediate table; it is the first round of MR in the "by-layer" cubing algorithm. The mapper number is equal to the reducer number of step 2; the reducer number is estimated from the cube statistics: by default one reducer per 500MB of output. If you observe that the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g.: `kylin.job.mapreduce.default.reduce.input.mb=200`.
+
+
+## Build N-Dimension Cuboid 
+
+These steps are the "by-layer" cubing process, each step uses the output of previous step as the input, and then cut off one dimension to aggregate to get one child cuboid. For example, from cuboid ABCD, cut off A get BCD, cut off B get ACD etc. 
+
+Some cuboids can be aggregated from more than one parent cuboid; in this case, Kylin will select the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) and ABD (id: 1101), so ABD will be used, as its id is smaller than ABC's. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low cardinality dimensions at the tail. This not only benefits the Cube build, but also the Cube query, since post-aggregation follows the same rule.
+
+Usually the building from N-D to (N/2)-D is slow, because it is the cuboid explosion process: N-D has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1)/2 cuboids, etc. After the (N/2)-D step, the building gets gradually faster.
+
+
+
+## Build Cube
+
+This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but it requests more memory than normal. The "conf/kylin\_job\_conf\_inmem.xml" is made for this step. By default it requests 3GB memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will use as much memory as possible to hold the data and gain better performance, e.g.:
+
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>6144</value>
+        <description></description>
+    </property>
+    
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx5632m</value>
+        <description></description>
+    </property>
+
+
+Please note, Kylin will automatically select the best algorithm based on the data distribution (obtained from the Cube statistics). The steps of the algorithm that is not selected will be skipped. You don't need to select the algorithm explicitly.
+
+
+
+## Convert Cuboid Data to HFile
+
+This step starts an MR job to convert the Cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number from the Cube statistics, by default 1 region per 5GB. The more regions there are, the more reducers will be utilized. If you observe that the reducer number is small and performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
+
+```
+kylin.hbase.region.cut=2
+kylin.hbase.hfile.size.gb=1
+```
+
+If you're not sure what size a region should be, contact your HBase administrator. 
+
+
+## Load HFile to HBase Table
+
+This step uses the HBase API to load the HFiles into the region servers; it is lightweight and fast.
+
+
+
+## Update Cube Info
+
+After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
+
+
+
+## Cleanup
+
+Drop the intermediate table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step gets an error, no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](howto_cleanup_storage.html).
+
+
+## Summary
+There are also many other methods to boost the performance. If you have practices to share, welcome to discuss in [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
diff --git a/website/_docs24/howto/howto_optimize_cubes.cn.md b/website/_docs24/howto/howto_optimize_cubes.cn.md
new file mode 100644
index 0000000..8d76f5d
--- /dev/null
+++ b/website/_docs24/howto/howto_optimize_cubes.cn.md
@@ -0,0 +1,212 @@
+---
+layout: docs-cn
+title:  优化 Cube 设计
+categories: 帮助
+permalink: /cn/docs24/howto/howto_optimize_cubes.html
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case.
+
+If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case for scenario A, where the PK on the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A* what you need is another optimization called "Derived Columns"
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table that we join with the fact table using "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB, and DimC in our cube, we can safely choose DimA, DimB, and DimC only.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA(the dimension representing FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC,AB,AC,BC,A,B,C
+
+combinations when deriving B from A:
+AC,A,C
+
+At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB would be expected to answer the query. However, DimB appears in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first, and we'll get an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine(calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs24/howto/howto_optimize_cubes.md b/website/_docs24/howto/howto_optimize_cubes.md
new file mode 100644
index 0000000..170c9d4
--- /dev/null
+++ b/website/_docs24/howto/howto_optimize_cubes.md
@@ -0,0 +1,212 @@
+---
+layout: docs
+title:  Optimize Cube Design
+categories: howto
+permalink: /docs24/howto/howto_optimize_cubes.html
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR,QUARTER,MONTH,DATE case.
+
+If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case for scenario A, where the PK on the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A* what you need is another optimization called "Derived Columns"
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table that we join with the fact table using "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB, and DimC in our cube, we can safely choose DimA, DimB, and DimC only.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA(the dimension representing FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC,AB,AC,BC,A,B,C
+
+combinations when deriving B from A:
+AC,A,C
+
+At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB would be expected to answer the query. However, DimB appears in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first, and we'll get an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine(calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs24/howto/howto_update_coprocessor.md b/website/_docs24/howto/howto_update_coprocessor.md
new file mode 100644
index 0000000..d61c8db
--- /dev/null
+++ b/website/_docs24/howto/howto_update_coprocessor.md
@@ -0,0 +1,14 @@
+---
+layout: docs
+title:  Update Coprocessor
+categories: howto
+permalink: /docs24/howto/howto_update_coprocessor.html
+---
+
+Kylin leverages an HBase coprocessor to optimize query performance. After a new version is released, the RPC protocol may have changed, so users need to redeploy the coprocessor to the HTables.
+
+There's a CLI tool to update HBase Coprocessor:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI default all
+{% endhighlight %}
diff --git a/website/_docs24/howto/howto_upgrade.md b/website/_docs24/howto/howto_upgrade.md
new file mode 100644
index 0000000..55a4ffe
--- /dev/null
+++ b/website/_docs24/howto/howto_upgrade.md
@@ -0,0 +1,105 @@
+---
+layout: docs
+title:  Upgrade From Old Versions
+categories: howto
+permalink: /docs24/howto/howto_upgrade.html
+since: v1.5.1
+---
+
+Running as a Hadoop client, Apache Kylin persists its metadata and Cube data in Hadoop (HBase and HDFS), so an upgrade is relatively easy and users do not need to worry about data loss. The upgrade can be performed in the following steps:
+
+* Download the new Apache Kylin binary package for your Hadoop version from the Kylin download page.
+* Unpack the new Kylin package to a new folder, e.g., /usr/local/kylin/apache-kylin-2.1.0/ (directly overwriting the old instance is not recommended).
+* Merge the old configuration files (`$KYLIN_HOME/conf/*`) into the new ones. It is not recommended to overwrite the new configuration files, although that works in most cases. If you have modified the Tomcat configuration ($KYLIN_HOME/tomcat/conf/), do the same for it.
+* Stop the current Kylin instance with `bin/kylin.sh stop`.
+* Set the `KYLIN_HOME` env variable to the new installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
+* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`. After it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
+* [Upgrade the coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
+* Verify that your SQL queries can be performed successfully. A condensed shell sketch of these steps follows below.
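+
+A condensed sketch of the flow (version numbers and paths are examples only; adjust them to your environment):
+
+```
+export OLD_KYLIN_HOME=/usr/local/kylin/apache-kylin-2.3.1-bin   # hypothetical old install
+cd /usr/local/kylin
+tar -zxvf apache-kylin-2.4.0-bin-hbase1x.tar.gz
+# manually merge $OLD_KYLIN_HOME/conf/* into the new conf/ folder, then:
+$OLD_KYLIN_HOME/bin/kylin.sh stop
+export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.4.0-bin
+$KYLIN_HOME/bin/kylin.sh start
+```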
+
+Below are version-specific guides:
+
+
+## Upgrade from v2.1.0 to v2.2.0
+
+Kylin v2.2.0 cube metadata is compatible with v2.1.0, but you need to be aware of the following changes:
+
+* Cube ACL is removed; use Project Level ACL instead. You need to manually configure Project Permissions to migrate your existing Cube Permissions. Please refer to [Project Level ACL](/docs21/tutorial/project_level_acl.html).
+* Update the HBase coprocessor. The HBase tables for existing cubes need to be updated to the latest coprocessor. Follow [this guide](/docs21/howto/howto_update_coprocessor.html) to update.
+
+
+## Upgrade from v2.0.0 to v2.1.0
+
+Kylin v2.1.0 cube metadata is compatible with v2.0.0, but you need to be aware of the following changes. 
+
+1) In previous versions, Kylin used two additional HBase tables, "kylin_metadata_user" and "kylin_metadata_acl", to persist the user and ACL info. From 2.1, Kylin consolidates all the info into one table: "kylin_metadata". This makes backup/restore and maintenance easier. When you start Kylin 2.1.0, it will detect whether migration is needed; if so, it will print the command to do the migration:
+
+```
+ERROR: Legacy ACL metadata detected. Please migrate ACL metadata first. Run command 'bin/kylin.sh org.apache.kylin.tool.AclTableMigrationCLI MIGRATE'.
+```
+
+After the migration is finished, you can delete the legacy "kylin_metadata_user" and "kylin_metadata_acl" tables from HBase, as shown below.
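+
+For example, from the HBase shell (assuming the default table names above):
+
+```
+hbase shell
+disable 'kylin_metadata_user'
+drop 'kylin_metadata_user'
+disable 'kylin_metadata_acl'
+drop 'kylin_metadata_acl'
+```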
+
+2) From v2.1, Kylin hides the default settings in "conf/kylin.properties"; you only need to uncomment or add the customized properties in it.
+
+3) Spark is upgraded from v1.6.3 to v2.1.1. If you customized Spark configurations in kylin.properties, please upgrade them as well by referring to the [Spark documentation](https://spark.apache.org/docs/2.1.0/).
+
+4) If you are running Kylin with two clusters (compute/query separated), you need to copy the big metadata files (which are persisted in HDFS instead of HBase) from the Hadoop cluster to the HBase cluster.
+
+```
+hadoop distcp hdfs://compute-cluster:8020/kylin/kylin_metadata/resources hdfs://query-cluster:8020/kylin/kylin_metadata/resources
+```
+
+
+## Upgrade from v1.6.0 to v2.0.0
+
+Kylin v2.0.0 can read v1.6.0 metadata directly. Please follow the common upgrade steps above.
+
+Configuration names in `kylin.properties` have changed since v2.0.0. While the old property names still work, it is recommended to use the new property names as they follow [the naming convention](/development/coding_naming_convention.html) and are easier to understand. There is [a mapping from the old properties to the new properties](https://github.com/apache/kylin/blob/2.0.x/core-common/src/main/resources/kylin-backward-compatibility.properties).
+
+## Upgrade from v1.5.4 to v1.6.0
+
+Kylin v1.5.4 and v1.6.0 are compatible in metadata. Please follow the common upgrade steps above.
+
+## Upgrade from v1.5.3 to v1.5.4
+Kylin v1.5.3 and v1.5.4 are compatible in metadata. Please follow the common upgrade steps above.
+
+## Upgrade from 1.5.2 to v1.5.3
+Kylin v1.5.3 metadata is compatible with v1.5.2; your cubes don't need to be rebuilt, but, as usual, some actions need to be performed:
+
+#### 1. Update HBase coprocessor
+The HBase tables for existing cubes need to be updated to the latest coprocessor. Follow [this guide](howto_update_coprocessor.html) to update.
+
+#### 2. Update conf/kylin_hive_conf.xml
+From 1.5.3, Kylin no longer needs Hive to merge small files. If you copied conf/ from a previous version, please remove the "merge" related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
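+
+For reference, these are the entries to delete (the values shown are placeholders; only the property names matter):
+
+```
+<property>
+  <name>hive.merge.mapfiles</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hive.merge.mapredfiles</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hive.merge.size.per.task</name>
+  <value>256000000</value>
+</property>
+```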
+
+
+## Upgrade from 1.5.1 to v1.5.2
+Kylin v1.5.2 metadata is compatible with v1.5.1; your cubes don't need to be upgraded, but some actions need to be performed:
+
+#### 1. Update HBase coprocessor
+The HBase tables for existing cubes need to be updated to the latest coprocessor. Follow [this guide](howto_update_coprocessor.html) to update.
+
+#### 2. Update conf/kylin.properties
+In v1.5.2 several properties are deprecated, and several new ones are added:
+
+Deprecated:
+
+* kylin.hbase.region.cut.small=5
+* kylin.hbase.region.cut.medium=10
+* kylin.hbase.region.cut.large=50
+
+New:
+
+* kylin.hbase.region.cut=5
+* kylin.hbase.hfile.size.gb=2
+
+These new parameters determine how to split HBase regions; to use different sizes you can overwrite these params at the Cube level. 
+
+When copying from an old kylin.properties file, we suggest removing the deprecated properties and adding the new ones.
+
+#### 3. Add conf/kylin\_job\_conf\_inmem.xml
+A new job conf file named "kylin\_job\_conf\_inmem.xml" is added in the "conf" folder. Kylin 1.5 introduced the "fast cubing" algorithm, which leverages more memory for in-mem aggregation; Kylin will use this new conf file when submitting an in-mem cube build job, which requests different memory than a normal job. Please update it according to your cluster capacity.
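+
+As a sketch, the in-mem conf typically grants mappers more memory than the normal job conf; the numbers below are placeholders to tune for your cluster:
+
+```
+<property>
+  <name>mapreduce.map.memory.mb</name>
+  <value>3072</value>
+</property>
+<property>
+  <name>mapreduce.map.java.opts</name>
+  <value>-Xmx2700m</value>
+</property>
+```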
+
+Besides, if you have used separate config files for different capacity cubes, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different YARN job queue), you can customize them at the cube level; check [KYLIN-1706](https://issues.apache.org/jira [...]
+
diff --git a/website/_docs24/howto/howto_use_beeline.md b/website/_docs24/howto/howto_use_beeline.md
new file mode 100644
index 0000000..eca2875
--- /dev/null
+++ b/website/_docs24/howto/howto_use_beeline.md
@@ -0,0 +1,14 @@
+---
+layout: docs
+title:  Use Beeline for Hive
+categories: howto
+permalink: /docs24/howto/howto_use_beeline.html
+---
+
+[Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors as a replacement for the Hive CLI. By default, Kylin uses the Hive CLI to synchronize Hive tables, create flat intermediate tables, etc. With simple configuration changes, you can make Kylin use Beeline instead.
+
+Edit $KYLIN_HOME/conf/kylin.properties by:
+
+  1. Change kylin.hive.client=cli to kylin.hive.client=beeline
+  2. Add "kylin.hive.beeline.params"; this is where you can specify Beeline command parameters, like username (-n), JDBC URL (-u), etc. There's a sample kylin.hive.beeline.params included in the default kylin.properties, but it is commented out. You can modify the sample based on your real environment, as sketched below.
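+
+A minimal sketch (the user name and HiveServer2 URL are placeholders for your environment):
+
+```
+kylin.hive.client=beeline
+kylin.hive.beeline.params=-n hadoop -u 'jdbc:hive2://localhost:10000'
+```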
+
diff --git a/website/_docs24/howto/howto_use_distributed_scheduler.md b/website/_docs24/howto/howto_use_distributed_scheduler.md
new file mode 100644
index 0000000..789e176
--- /dev/null
+++ b/website/_docs24/howto/howto_use_distributed_scheduler.md
@@ -0,0 +1,16 @@
+---
+layout: docs
+title:  Use distributed job scheduler
+categories: howto
+permalink: /docs24/howto/howto_use_distributed_scheduler.html
+---
+
+Since Kylin 2.0, Kylin supports a distributed job scheduler,
+which is more extensible, available and reliable than the default job scheduler.
+To enable it, set or update the following configs in kylin.properties (the host list is an example):
+
+```
+# 1. switch to the distributed scheduler
+kylin.job.scheduler.default=2
+# 2. use the ZooKeeper-based distributed job lock
+kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
+# 3. list ALL job servers and query servers
+kylin.server.cluster-servers=host1:7070,host2:7070
+```
diff --git a/website/_docs/howto/howto_use_restapi.cn.md b/website/_docs24/howto/howto_use_restapi.cn.md
similarity index 99%
copy from website/_docs/howto/howto_use_restapi.cn.md
copy to website/_docs24/howto/howto_use_restapi.cn.md
index 2bbebeb..8cf8c29 100644
--- a/website/_docs/howto/howto_use_restapi.cn.md
+++ b/website/_docs24/howto/howto_use_restapi.cn.md
@@ -2,7 +2,7 @@
 layout: docs-cn
 title:  RESTful API
 categories: howto
-permalink: /cn/docs/howto/howto_use_restapi.html
+permalink: /cn/docs24/howto/howto_use_restapi.html
 since: v0.7.1
 ---
 
@@ -83,7 +83,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -659,7 +659,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs/howto/howto_use_restapi.md b/website/_docs24/howto/howto_use_restapi.md
similarity index 99%
copy from website/_docs/howto/howto_use_restapi.md
copy to website/_docs24/howto/howto_use_restapi.md
index 0927017..1a6b861 100644
--- a/website/_docs/howto/howto_use_restapi.md
+++ b/website/_docs24/howto/howto_use_restapi.md
@@ -2,7 +2,7 @@
 layout: docs
 title:  Use RESTful API
 categories: howto
-permalink: /docs/howto/howto_use_restapi.html
+permalink: /docs24/howto/howto_use_restapi.html
 since: v0.7.1
 ---
 
@@ -84,7 +84,7 @@ curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H '
 If login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
 
 ```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
 ```
 
 Alternatively, you can provide the username/password with option "user" in each curl call; please note this has the risk of password leak in shell history:
@@ -660,7 +660,7 @@ Get descriptor for specified cube instance.
 
 #### Curl Example
 ```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423526400', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
 ```
 
 #### Response Sample
diff --git a/website/_docs24/howto/howto_use_restapi_in_js.md b/website/_docs24/howto/howto_use_restapi_in_js.md
new file mode 100644
index 0000000..fbe2f1f
--- /dev/null
+++ b/website/_docs24/howto/howto_use_restapi_in_js.md
@@ -0,0 +1,46 @@
+---
+layout: docs
+title:  Use RESTful API in Javascript
+categories: howto
+permalink: /docs24/howto/howto_use_restapi_in_js.html
+---
+Kylin security is based on basic access authorization. If you want to use the API in your JavaScript, you need to add the authorization info in the HTTP headers.
+
+## Example on Query API.
+```
+$.ajaxSetup({
+      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
+    });
+    var request = $.ajax({
+       url: "http://hostname/kylin/api/query",
+       type: "POST",
+       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
+       dataType: "json"
+    });
+    request.done(function( msg ) {
+       alert(msg);
+    }); 
+    request.fail(function( jqXHR, textStatus ) {
+       alert( "Request failed: " + textStatus );
+  });
+
+```
+
+## Key points
+1. Add the basic access authorization info in the HTTP headers.
+2. Use the right ajax type and data syntax.
+
+## Basic access authorization
+For what basic access authorization is, refer to the [Wikipedia page](http://en.wikipedia.org/wiki/Basic_access_authentication).
+To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
+
+```
+var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
+ 
+$.ajaxSetup({
+   headers: { 
+    'Authorization': "Basic " + authorizationCode, 
+    'Content-Type': 'application/json;charset=utf-8' 
+   }
+});
+```
diff --git a/website/_docs24/index.cn.md b/website/_docs24/index.cn.md
new file mode 100644
index 0000000..6bec14c
--- /dev/null
+++ b/website/_docs24/index.cn.md
@@ -0,0 +1,29 @@
+---
+layout: docs-cn
+title: Overview
+categories: docs
+permalink: /cn/docs24/index.html
+---
+
+Welcome to Apache Kylin™
+------------  
+> Extreme OLAP Engine for Big Data
+
+Apache Kylin™ is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop for extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
+
+Documents of prior versions: 
+* [v2.3.x document](/cn/docs23/)
+* [v2.1.x and v2.2.x document](/cn/docs21/)
+* [v2.0.x document](/cn/docs20/)
+* [v1.6.x document](/cn/docs16/)
+* [Archive](/archive/)
+
+Installation 
+------------  
+To install Apache Kylin, please refer to the [Installation Guide](/cn/docs24/install/)
+
+
+
+
+
+
diff --git a/website/_docs24/index.md b/website/_docs24/index.md
new file mode 100644
index 0000000..22a75cd
--- /dev/null
+++ b/website/_docs24/index.md
@@ -0,0 +1,73 @@
+---
+layout: docs
+title: Overview
+categories: docs
+permalink: /docs24/index.html
+---
+
+
+Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
+------------  
+
+Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets.
+
+This is the document for the latest released version (v2.4). Documents of prior versions: 
+* [v2.3.x document](/docs23)
+* [v2.1.x and v2.2.x document](/docs21/)
+* [v2.0.x document](/docs20/)
+* [v1.6.x document](/docs16/)
+* [Archive](/archive/)
+
+Installation & Setup
+------------  
+1. [Installation Guide](install/index.html)
+2. [Configurations](install/configuration.html)
+3. [Deploy in cluster mode](install/kylin_cluster.html)
+4. [Advanced settings](install/advance_settings.html)
+5. [Run Kylin with Docker](install/kylin_docker.html)
+6. [Install Kylin on AWS EMR](install/kylin_aws_emr.html)
+
+Tutorial
+------------  
+1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
+2. [Web Interface](tutorial/web.html)
+3. [Cube Wizard](tutorial/create_cube.html)
+4. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
+5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
+6. [Build Cube with Streaming Data](tutorial/cube_streaming.html)
+7. [Build Cube with Spark Engine](tutorial/cube_spark.html)
+8. [Cube Build Tuning](tutorial/cube_build_performance.html)
+9. [Enable Query Pushdown](tutorial/query_pushdown.html)
+10. [Setup System Cube](tutorial/setup_systemcube.html)
+11. [Optimize with Cube Planner](tutorial/use_cube_planner.html)
+12. [Use System Dashboard](tutorial/use_dashboard.html)
+13. [Setup JDBC Data Source](tutorial/setup_jdbc_datasource.html)
+
+
+Connectivity and APIs
+------------  
+1. [ODBC driver](tutorial/odbc.html)
+2. [JDBC driver](howto/howto_jdbc.html)
+3. [RESTful API list](howto/howto_use_restapi.html)
+4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
+5. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
+6. [Connect from Tableau 8](tutorial/tableau.html)
+7. [Connect from Tableau 9](tutorial/tableau_91.html)
+8. [Connect from MicroStrategy](tutorial/microstrategy.html)
+9. [Connect from SQuirreL](tutorial/squirrel.html)
+10. [Connect from Apache Flink](tutorial/flink.html)
+11. [Connect from Apache Spark](tutorial/spark.html)
+12. [Connect from Hue](tutorial/hue.html)
+13. [Connect from Qlik Sense](tutorial/Qlik.html)
+14. [Connect from Apache Superset](tutorial/superset.html)
+15. [Connect from Redash](/blog/2018/05/08/redash-kylin-plugin-strikingly/)
+
+
+Operations
+------------  
+1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
+2. [Cleanup storage](howto/howto_cleanup_storage.html)
+3. [Upgrade from old version](howto/howto_upgrade.html)
+
+
+
diff --git a/website/_docs/install/advance_settings.cn.md b/website/_docs24/install/advance_settings.cn.md
similarity index 99%
copy from website/_docs/install/advance_settings.cn.md
copy to website/_docs24/install/advance_settings.cn.md
index 6ed7c35..13c406e 100644
--- a/website/_docs/install/advance_settings.cn.md
+++ b/website/_docs24/install/advance_settings.cn.md
@@ -2,7 +2,7 @@
 layout: docs-cn
 title: "高级设置"
 categories: install
-permalink: /cn/docs/install/advance_settings.html
+permalink: /cn/docs24/install/advance_settings.html
 ---
 
 ## 在 Cube 级别重写默认的 kylin.properties
diff --git a/website/_docs/install/advance_settings.md b/website/_docs24/install/advance_settings.md
similarity index 99%
copy from website/_docs/install/advance_settings.md
copy to website/_docs24/install/advance_settings.md
index 52bf044..fd394b8 100644
--- a/website/_docs/install/advance_settings.md
+++ b/website/_docs24/install/advance_settings.md
@@ -2,7 +2,7 @@
 layout: docs
 title:  "Advanced Settings"
 categories: install
-permalink: /docs/install/advance_settings.html
+permalink: /docs24/install/advance_settings.html
 ---
 
 ## Overwrite default kylin.properties at Cube level
diff --git a/website/_docs/install/configuration.cn.md b/website/_docs24/install/configuration.cn.md
similarity index 99%
copy from website/_docs/install/configuration.cn.md
copy to website/_docs24/install/configuration.cn.md
index 6523df3..fdf0951 100644
--- a/website/_docs/install/configuration.cn.md
+++ b/website/_docs24/install/configuration.cn.md
@@ -2,7 +2,7 @@
 layout: docs-cn
 title:  "Kylin 配置"
 categories: install
-permalink: /cn/docs/install/configuration.html
+permalink: /cn/docs24/install/configuration.html
 ---
 
 Kylin 会自动从环境中检测 Hadoop/Hive/HBase 配置,如 "core-site.xml", "hbase-site.xml" 和其他。除此之外,Kylin 有自己的配置,在 "conf" 文件夹下。
diff --git a/website/_docs/install/configuration.md b/website/_docs24/install/configuration.md
similarity index 99%
copy from website/_docs/install/configuration.md
copy to website/_docs24/install/configuration.md
index 542b419..9a15134 100644
--- a/website/_docs/install/configuration.md
+++ b/website/_docs24/install/configuration.md
@@ -2,7 +2,7 @@
 layout: docs
 title:  "Kylin Configuration"
 categories: install
-permalink: /docs/install/configuration.html
+permalink: /docs24/install/configuration.html
 ---
 
 Kylin detects Hadoop/Hive/HBase configurations from the environments automatically, for example the "core-site.xml", the "hbase-site.xml" and others. Besides, Kylin has its own configurations, managed in the "conf" folder.
diff --git a/website/_docs24/install/hadoop_evn.md b/website/_docs24/install/hadoop_evn.md
new file mode 100644
index 0000000..845b500
--- /dev/null
+++ b/website/_docs24/install/hadoop_evn.md
@@ -0,0 +1,24 @@
+---
+layout: docs
+title:  "Hadoop Environment"
+categories: install
+permalink: /docs24/install/hadoop_env.html
+---
+
+Kylin needs to run on a Hadoop node. For better stability, we suggest you deploy it on a pure Hadoop client machine, on which command-line tools like `hive`, `hbase`, `hadoop`, `hdfs` are already installed and configured. The Linux account that runs Kylin must have permission to access the Hadoop cluster, including creating/writing HDFS folders, Hive tables and HBase tables, and submitting MR jobs. 
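+
+A few quick sanity checks you might run as that account (standard client commands; output omitted):
+
+{% highlight Groff markup %}
+hadoop version
+hdfs dfs -ls /
+hive -e "show databases;"
+echo "list" | hbase shell
+{% endhighlight %}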
+
+## Software dependencies
+
+* Hadoop: 2.7+
+* Hive: 0.13 - 1.2.1+
+* HBase: 1.1+
+* Spark 2.1
+* JDK: 1.7+
+* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
+
+Tested with Hortonworks HDP 2.2 - 2.6, Cloudera CDH 5.7 - 5.11, AWS EMR 5.7 - 5.10, Azure HDInsight 3.5 - 3.6.
+
+For trial and development purposes, we recommend you try Kylin with an all-in-one sandbox VM, like the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB of memory. To avoid permission issues in the sandbox, you can use its `root` account. We also suggest using bridged mode instead of NAT mode in the Virtual Box settings. Bridged mode will assign your sandbox an independent IP address so that you can avoid issues like [this](https://github.com/KylinOLAP/Kylin/i [...]
+
+
+ 
diff --git a/website/_docs24/install/index.cn.md b/website/_docs24/install/index.cn.md
new file mode 100644
index 0000000..de05618
--- /dev/null
+++ b/website/_docs24/install/index.cn.md
@@ -0,0 +1,79 @@
+---
+layout: docs-cn
+title:  "Installation Guide"
+categories: install
+permalink: /cn/docs24/install/index.html
+---
+
+## Software requirements
+
+* Hadoop: 2.7+
+* Hive: 0.13 - 1.2.1+
+* HBase: 1.1+
+* Spark (optional) 2.1.1+
+* Kafka (optional) 0.10.0+
+* JDK: 1.7+
+* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
+
+Tested with Hortonworks HDP 2.2 - 2.6, Cloudera CDH 5.7 - 5.11, AWS EMR 5.7 - 5.10, Azure HDInsight 3.5 - 3.6.
+
+For trial and development purposes, we recommend you try Kylin with an all-in-one sandbox VM, like the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB of memory. We suggest using bridged mode instead of NAT mode in the Virtual Box settings. 
+
+## Hardware requirements
+
+The server running Kylin needs a 4-core CPU, 16 GB memory and 100 GB disk as the minimal configuration. For high-workload scenarios, a 24-core CPU, 64 GB memory or more is recommended.
+
+
+## Hadoop Environment
+
+Kylin depends on a Hadoop cluster to process the massive data sets. You need to prepare a well-configured Hadoop cluster for Kylin to run, with services including HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and others. It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines (`hive`, `hbase`, `hadoop`, and others). 
+
+Kylin itself can be started on any node of the Hadoop cluster. For simplicity, you can run it on the master node. But for better stability, we suggest you deploy it on a pure Hadoop client node, on which the command lines `hive`, `hbase`, `hadoop`, `hdfs` are already installed, and the client configurations (core-site.xml, hive-site.xml, hbase-site.xml, etc.) are properly configured and automatically synced with the other nodes. The Linux account running Kylin must have permission to access the Hadoop cluster, including creating/writing HDFS folders, Hive tables and HBase tables, and submitting MR jobs. 
+
+## Install Kylin
+
+ * Download a Kylin binary package for your Hadoop version from a nearby Apache download site. For example, Kylin 2.3.1 for HBase 1.x from the US mirror:
+{% highlight Groff markup %}
+cd /usr/local
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.3.1/apache-kylin-2.3.1-hbase1x-bin.tar.gz
+{% endhighlight %}
+ * Uncompress the tarball, then set the KYLIN_HOME environment variable to point to the Kylin folder
+{% highlight Groff markup %}
+tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz
+cd apache-kylin-2.3.1-bin
+export KYLIN_HOME=`pwd`
+{% endhighlight %}
+ * Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, you can run the `$KYLIN_HOME/bin/check-env.sh` script; it will print detailed information if your environment has any issues. If there is no error, the environment is ready.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/check-env.sh
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+-bash-4.1#
+{% endhighlight %}
+ * Run the `$KYLIN_HOME/bin/kylin.sh start` script to start Kylin. After the server starts, you can watch `$KYLIN_HOME/logs/kylin.log` for runtime logs.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh start
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Retrieving hive dependency...
+Retrieving hbase dependency...
+Retrieving hadoop conf dir...
+Retrieving kafka dependency...
+Retrieving Spark dependency...
+...
+A new Kylin instance is started by root. To stop it, run 'kylin.sh stop'
+Check the log at /usr/local/apache-kylin-2.3.1-bin/logs/kylin.log
+Web UI is at http://<hostname>:7070/kylin
+-bash-4.1#
+{% endhighlight %}
+ * After Kylin starts, you can visit <http://hostname:7070/kylin> in your browser. The initial username and password are ADMIN/KYLIN.
+ * Run the `$KYLIN_HOME/bin/kylin.sh stop` script to stop Kylin.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh stop
+Retrieving hadoop conf dir... 
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Stopping Kylin: 7014
+Kylin with pid 7014 has been stopped.
+{% endhighlight %}
+
+
diff --git a/website/_docs24/install/index.md b/website/_docs24/install/index.md
new file mode 100644
index 0000000..49af8c8
--- /dev/null
+++ b/website/_docs24/install/index.md
@@ -0,0 +1,78 @@
+---
+layout: docs
+title:  "Installation Guide"
+categories: install
+permalink: /docs24/install/index.html
+---
+
+## Software requirements
+
+* Hadoop: 2.7+
+* Hive: 0.13 - 1.2.1+
+* HBase: 1.1+
+* Spark 2.1.1+
+* JDK: 1.7+
+* OS: Linux only, CentOS 6.5+ or Ubuntu 16.0.4+
+
+Tested with Hortonworks HDP 2.2 - 2.6, Cloudera CDH 5.7 - 5.11, AWS EMR 5.7 - 5.10, Azure HDInsight 3.5 - 3.6.
+
+For trial and development purposes, we recommend you try Kylin with an all-in-one sandbox VM, like the [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB of memory. We suggest using bridged mode instead of NAT mode in the Virtual Box settings. 
+
+## Hardware requirements
+
+The server running Kylin needs a 4-core CPU, 16 GB memory and 100 GB disk as the minimal configuration. For high-workload scenarios, a 24-core CPU, 64 GB memory or more is recommended.
+
+
+## Hadoop Environment
+
+Kylin depends on a Hadoop cluster to process the massive data sets. You need to prepare a well-configured Hadoop cluster for Kylin to run, with the common services including HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and others. It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines including `hive`, `hbase`, `hadoop`, etc. 
+
+Kylin itself can be started on any node of the Hadoop cluster. For simplicity, you can run it on the master node. But for better stability, we suggest you deploy it on a pure Hadoop client node, on which the command lines like `hive`, `hbase`, `hadoop`, `hdfs` are already installed, and the client configurations (core-site.xml, hive-site.xml, hbase-site.xml, etc.) are properly configured and will be automatically synced with other nodes. The Linux account that runs Kylin has the permiss [...]
+
+## Install Kylin
+
+ * Download a Kylin binary package for your Hadoop version from a nearby Apache download site. For example, Kylin 2.3.1 for HBase 1.x from the US mirror:
+{% highlight Groff markup %}
+cd /usr/local
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.3.1/apache-kylin-2.3.1-hbase1x-bin.tar.gz
+{% endhighlight %}
+ * Uncompress the tarball and then export KYLIN_HOME pointing to the Kylin folder
+{% highlight Groff markup %}
+tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz
+cd apache-kylin-2.3.1-bin
+export KYLIN_HOME=`pwd`
+{% endhighlight %}
+ * Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, you can run `$KYLIN_HOME/bin/check-env.sh`; it will print detailed information if your environment has any issues. If there is no error, the environment is ready.
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/check-env.sh
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+-bash-4.1#
+{% endhighlight %}
+ * Start Kylin by running `$KYLIN_HOME/bin/kylin.sh start`; after the server starts, you can watch `$KYLIN_HOME/logs/kylin.log` for runtime logs;
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh start
+Retrieving hadoop conf dir...
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Retrieving hive dependency...
+Retrieving hbase dependency...
+Retrieving hadoop conf dir...
+Retrieving kafka dependency...
+Retrieving Spark dependency...
+...
+A new Kylin instance is started by root. To stop it, run 'kylin.sh stop'
+Check the log at /usr/local/apache-kylin-2.3.1-bin/logs/kylin.log
+Web UI is at http://<hostname>:7070/kylin
+-bash-4.1#
+{% endhighlight %}
+ * After Kylin starts, you can visit <http://hostname:7070/kylin> in your web browser. The initial username/password is ADMIN/KYLIN. 
+ * To stop Kylin, run `$KYLIN_HOME/bin/kylin.sh stop`
+{% highlight Groff markup %}
+-bash-4.1# $KYLIN_HOME/bin/kylin.sh stop
+Retrieving hadoop conf dir... 
+KYLIN_HOME is set to /usr/local/apache-kylin-2.3.1-bin
+Stopping Kylin: 7014
+Kylin with pid 7014 has been stopped.
+{% endhighlight %}
+
+
diff --git a/website/_docs24/install/kylin_aws_emr.cn.md b/website/_docs24/install/kylin_aws_emr.cn.md
new file mode 100644
index 0000000..0414f74
--- /dev/null
+++ b/website/_docs24/install/kylin_aws_emr.cn.md
@@ -0,0 +1,180 @@
+---
+layout: docs-cn
+title:  "Install Kylin on AWS EMR"
+categories: install
+permalink: /cn/docs24/install/kylin_aws_emr.html
+---
+
+Many users run Hadoop on a public cloud like AWS today. Apache Kylin, compiled with the standard Hadoop/HBase API, supports most mainstream Hadoop releases; the current version, Kylin v2.2, supports AWS EMR 5.0 - 5.10. This document introduces how to run Kylin on EMR.
+
+### Recommended Version
+* AWS EMR 5.7 (for EMR 5.8 and above, please check [KYLIN-3129](https://issues.apache.org/jira/browse/KYLIN-3129))
+* Apache Kylin v2.2.0 or above for HBase 1.x
+
+### Start EMR Cluster
+
+Launch an EMR cluster with the AWS web console, command line or API. Select "**HBase**" in the applications, as Kylin needs the HBase service. 
+
+You can select "HDFS" or "S3" as the storage for HBase, depending on whether you need the Cube data to be persisted after shutting down the cluster. EMR HDFS uses the local disks of EC2 instances, which erases the data when the cluster is stopped, so Kylin metadata and Cube data could be lost.
+
+If you use "S3" as HBase's storage, you need to customize its "**hbase.rpc.timeout**" configuration, because bulk load to S3 is a copy operation; when the data size is huge, the HBase region server needs to wait much longer for it to finish than on HDFS.
+
+```
+[  {
+    "Classification": "hbase-site",
+    "Properties": {
+      "hbase.rpc.timeout": "3600000",
+      "hbase.rootdir": "s3://yourbucket/EMRROOT"
+    }
+  },
+  {
+    "Classification": "hbase",
+    "Properties": {
+      "hbase.emr.storageMode": "s3"
+    }
+  }
+]
+```
+
+### Install Kylin
+
+When the EMR cluster is in "Waiting" status, you can SSH into its master node, download Kylin and then uncompress the tarball:
+
+```
+sudo mkdir /usr/local/kylin
+sudo chown hadoop /usr/local/kylin
+cd /usr/local/kylin
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.2.0/apache-kylin-2.2.0-bin-hbase1x.tar.gz 
+tar –zxvf apache-kylin-2.2.0-bin-hbase1x.tar.gz
+```
+
+### Configure Kylin
+
+Before starting Kylin, you need to do a couple of configurations:
+
+- Copy the "hbase.zookeeper.quorum" property from /etc/hbase/conf/hbase-site.xml to $KYLIN\_HOME/conf/kylin\_job\_conf.xml, like this:
+
+
+```
+<property>
+  <name>hbase.zookeeper.quorum</name>
+  <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
+</property>
+```
+
+- Use HDFS as "kylin.env.hdfs-working-dir" (recommended)
+
+EMR recommends to **"use HDFS for intermediate data storage while the cluster is running and Amazon S3 only to input the initial data and output the final results"**. Kylin's 'hdfs-working-dir' holds the intermediate data for Cube building, cuboid files and some metadata files (such as dictionaries and table snapshots, which do not fit well in HBase); so it is best to configure HDFS for it. 
+
+If you use HDFS as the Kylin working directory, just leave the configuration unchanged, as EMR's default FS is HDFS:
+
+```
+kylin.env.hdfs-working-dir=/kylin
+```
+
+Before you shut down/restart the cluster, you must back up the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html); otherwise you may lose data and be unable to recover the cluster later.
+
+- Use S3 as "kylin.env.hdfs-working-dir" 
+
+If you want to use S3 as storage (assuming HBase is also on S3), you need to configure the following parameters:
+
+```
+kylin.env.hdfs-working-dir=s3://yourbucket/kylin
+kylin.storage.hbase.cluster-fs=s3://yourbucket
+kylin.source.hive.redistribute-flat-table=false
+```
+
+The intermediate files and the HFiles will all be written to S3. Build performance will be slower than on HDFS. Make sure you have a good understanding of the difference between S3 and HDFS. Read the following articles from AWS:
+
+[Input and Output Errors](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
+[Are you having trouble loading data to or from Amazon S3 into Hive](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-hive.html#emr-troubleshoot-error-hive-3)
+
+
+- Hadoop configurations
+
+Some Hadoop configurations need to be applied for better performance and data consistency on S3, according to [emr-troubleshoot-errors-io](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html). 
+
+```
+<property>
+  <name>io.file.buffer.size</name>
+  <value>65536</value>
+</property>
+<property>
+  <name>mapred.map.tasks.speculative.execution</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapred.reduce.tasks.speculative.execution</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapreduce.map.speculative</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapreduce.reduce.speculative</name>
+  <value>false</value>
+</property>
+
+```
+
+
+- Create the working-dir folder if it doesn't exist
+
+```
+hadoop fs -mkdir /kylin 
+```
+
+or
+
+```
+hadoop fs -mkdir s3://yourbucket/kylin
+```
+
+### Start Kylin
+
+Starting is the same as on a normal Hadoop cluster:
+
+```
+export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.2.0-bin
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+```
+
+Don't forget to enable port 7070 access in the security group for the EMR master - "ElasticMapReduce-master", or use an SSH tunnel to the master node; then you can access the Kylin Web GUI at http://\<master\-dns\>:7070/kylin.
+
+Build the sample Cube, and then run queries when the Cube is ready. You can browse S3 to see whether the data is safely persisted.
+
+### Spark Configuration
+
+EMR's Spark version is likely different from the version Kylin is built with, so you usually cannot directly use the Spark bundled with EMR for Kylin's jobs. You need to set the "SPARK_HOME" environment variable to Kylin's Spark sub-folder (KYLIN_HOME/spark) before starting Kylin. Besides, to access files on S3 or EMRFS from Spark, you need to copy EMR's extension jars from the EMR directories into Kylin's Spark.
+
+```
+export SPARK_HOME=$KYLIN_HOME/spark
+
+cp /usr/lib/hadoop-lzo/lib/*.jar $KYLIN_HOME/spark/jars/
+cp /usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-*.jar $KYLIN_HOME/spark/jars/
+cp /usr/lib/hadoop/hadoop-common*-amzn-*.jar $KYLIN_HOME/spark/jars/
+
+$KYLIN_HOME/bin/kylin.sh start
+```
+
+You can also refer to EMR Spark's spark-defaults to set Kylin's Spark configuration, for better utilization of the cluster resources.
+
+### Shut Down the EMR Cluster
+
+Before you shut down the EMR cluster, we suggest you take a backup of the Kylin metadata and upload it to S3.
+
+To shut down an Amazon EMR cluster without losing data that hasn't been written to Amazon S3, the MemStore cache needs to flush to Amazon S3 to write new store files. To do this, you can run a shell script provided on the EMR cluster. 
+
+```
+bash /usr/lib/hbase/bin/disable_all_tables.sh
+```
+
+To restart a cluster with the same HBase data, specify the same Amazon S3 location as the previous cluster, either in the AWS Management Console or using the "hbase.rootdir" configuration property. For more information about EMR HBase, refer to [HBase on Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hbase-s3.html).
+
+	
+## Deploy Kylin on a Dedicated EC2 
+
+Running Kylin on a dedicated client node (not a master, core or task node) is recommended. Start a separate EC2 instance within the same VPC and subnet as your EMR cluster, copy the Hadoop clients from the master node to it, and then install Kylin on it. This can improve the stability of Kylin itself as well as the services on the master node. 
+	
\ No newline at end of file
diff --git a/website/_docs24/install/kylin_aws_emr.md b/website/_docs24/install/kylin_aws_emr.md
new file mode 100644
index 0000000..f6d2e86
--- /dev/null
+++ b/website/_docs24/install/kylin_aws_emr.md
@@ -0,0 +1,180 @@
+---
+layout: docs
+title:  "Install Kylin on AWS EMR"
+categories: install
+permalink: /docs24/install/kylin_aws_emr.html
+---
+
+Many users run Hadoop on a public cloud like AWS today. Apache Kylin, compiled with the standard Hadoop/HBase API, supports most mainstream Hadoop releases; the current version, Kylin v2.2, supports AWS EMR 5.0 to 5.10. This document introduces how to run Kylin on EMR.
+
+### Recommended Version
+* AWS EMR 5.7 (for EMR 5.8 and above, please check [KYLIN-3129](https://issues.apache.org/jira/browse/KYLIN-3129))
+* Apache Kylin v2.2.0 or above for HBase 1.x
+
+### Start EMR cluster
+
+Launch an EMR cluster with the AWS web console, command line or API. Select "**HBase**" in the applications, as Kylin needs the HBase service. 
+
+You can select "HDFS" or "S3" as the storage for HBase, depending on whether you need the Cube data to be persisted after shutting down the cluster. EMR HDFS uses the local disks of EC2 instances, which erases the data when the cluster is stopped, so Kylin metadata and Cube data could be lost.
+
+If you use "S3" as HBase's storage, you need to customize its "**hbase.rpc.timeout**" configuration, because bulk load to S3 is a copy operation; when the data size is huge, the HBase region server needs to wait much longer for it to finish than on HDFS.
+
+```
+[  {
+    "Classification": "hbase-site",
+    "Properties": {
+      "hbase.rpc.timeout": "3600000",
+      "hbase.rootdir": "s3://yourbucket/EMRROOT"
+    }
+  },
+  {
+    "Classification": "hbase",
+    "Properties": {
+      "hbase.emr.storageMode": "s3"
+    }
+  }
+]
+```
+
+### Install Kylin
+
+When the EMR cluster is in "Waiting" status, you can SSH into its master node, download Kylin and then uncompress the tarball:
+
+```
+sudo mkdir /usr/local/kylin
+sudo chown hadoop /usr/local/kylin
+cd /usr/local/kylin
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.2.0/apache-kylin-2.2.0-bin-hbase1x.tar.gz 
+tar –zxvf apache-kylin-2.2.0-bin-hbase1x.tar.gz
+```
+
+### Configure Kylin
+
+Before starting Kylin, you need to do a couple of configurations:
+
+- Copy "hbase.zookeeper.quorum" property from /etc/hbase/conf/hbase-site.xml to $KYLIN\_HOME/conf/kylin\_job\_conf.xml, like this:
+
+
+```
+<property>
+  <name>hbase.zookeeper.quorum</name>
+  <value>ip-nn-nn-nn-nn.ap-northeast-2.compute.internal</value>
+</property>
+```
+
+- Use HDFS as "kylin.env.hdfs-working-dir" (Recommended)
+
+EMR recommends to **"use HDFS for intermediate data storage while the cluster is running and Amazon S3 only to input the initial data and output the final results"**. Kylin's 'hdfs-working-dir' holds the intermediate data for Cube building, cuboid files and also some metadata files (such as dictionaries and table snapshots, which do not fit well in HBase); so it is best to configure HDFS for this. 
+
+If you use HDFS as the Kylin working directory, just leave the configuration unchanged, as EMR's default FS is HDFS:
+
+```
+kylin.env.hdfs-working-dir=/kylin
+```
+
+Before you shut down/restart the cluster, you must back up the "/kylin" data on HDFS to S3 with [S3DistCp](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html), or you may lose data and be unable to recover the cluster later.
+
+- Use S3 as "kylin.env.hdfs-working-dir" 
+
+If you want to use S3 as storage (assuming HBase is also on S3), you need to configure the following parameters:
+
+```
+kylin.env.hdfs-working-dir=s3://yourbucket/kylin
+kylin.storage.hbase.cluster-fs=s3://yourbucket
+kylin.source.hive.redistribute-flat-table=false
+```
+
+The intermediate files and the HFiles will all be written to S3. Build performance will be slower than on HDFS. Make sure you have a good understanding of the difference between S3 and HDFS. Read the following articles from AWS:
+
+[Input and Output Errors](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html)
+[Are you having trouble loading data to or from Amazon S3 into Hive](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-hive.html#emr-troubleshoot-error-hive-3)
+
+
+- Hadoop configurations
+
+Some Hadoop configurations need to be applied for better performance and data consistency on S3, according to [emr-troubleshoot-errors-io](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-errors-io.html).
+
+```
+<property>
+  <name>io.file.buffer.size</name>
+  <value>65536</value>
+</property>
+<property>
+  <name>mapred.map.tasks.speculative.execution</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapred.reduce.tasks.speculative.execution</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapreduce.map.speculative</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapreduce.reduce.speculative</name>
+  <value>false</value>
+</property>
+
+```
+
+
+- Create the working-dir folder if it doesn't exist
+
+```
+hadoop fs -mkdir /kylin 
+```
+
+or
+
+```
+hadoop fs -mkdir s3://yourbucket/kylin
+```
+
+### Start Kylin
+
+Starting is the same as on a normal Hadoop cluster:
+
+```
+export KYLIN_HOME=/usr/local/kylin/apache-kylin-2.2.0-bin
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+```
+
+Don't forget to enable port 7070 access in the security group for the EMR master - "ElasticMapReduce-master", or use an SSH tunnel to the master node; then you can access the Kylin Web GUI at http://\<master\-dns\>:7070/kylin
+
+Build the sample Cube, and then run queries when the Cube is ready. You can browse S3 to see whether the data is safely persisted.
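+
+For example, with the AWS CLI (the bucket name is a placeholder):
+
+```
+aws s3 ls s3://yourbucket/kylin/ --recursive | head
+```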
+
+### Spark Configuration
+
+EMR's Spark version may be incompatible with Kylin, so you cannot directly use EMR's Spark. You need to set the "SPARK_HOME" environment variable to Kylin's Spark folder (KYLIN_HOME/spark) before starting Kylin. To access files on S3 or EMRFS, you also need to copy EMR's implementation jars into Kylin's Spark.
+
+```
+export SPARK_HOME=$KYLIN_HOME/spark
+
+cp /usr/lib/hadoop-lzo/lib/*.jar $KYLIN_HOME/spark/jars/
+cp /usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-*.jar $KYLIN_HOME/spark/jars/
+cp /usr/lib/hadoop/hadoop-common*-amzn-*.jar $KYLIN_HOME/spark/jars/
+
+$KYLIN_HOME/bin/kylin.sh start
+```
+
+You can also mirror EMR's spark-defaults configuration into Kylin's Spark settings for better utilization of the cluster resources, as sketched below.
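+
+For example, you might copy a few settings from EMR's /etc/spark/conf/spark-defaults.conf into kylin.properties via Kylin's "kylin.engine.spark-conf." prefix (the values below are placeholders):
+
+```
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.executor.memory=4G
+kylin.engine.spark-conf.spark.executor.instances=10
+```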
+
+### Shut down EMR Cluster
+
+Before you shut down the EMR cluster, we suggest you take a backup of the Kylin metadata and upload it to S3.
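+
+A sketch of such a backup (the S3 path is a placeholder; `metastore.sh backup` writes to $KYLIN_HOME/meta_backups by default):
+
+```
+$KYLIN_HOME/bin/metastore.sh backup
+aws s3 cp --recursive $KYLIN_HOME/meta_backups/ s3://yourbucket/kylin_meta_backup/
+```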
+
+To shut down an Amazon EMR cluster without losing data that hasn't been written to Amazon S3, the MemStore cache needs to flush to Amazon S3 to write new store files. To do this, you can run a shell script provided on the EMR cluster. 
+
+```
+bash /usr/lib/hbase/bin/disable_all_tables.sh
+```
+
+To restart a cluster with the same HBase data, specify the same Amazon S3 location as the previous cluster, either in the AWS Management Console or using the "hbase.rootdir" configuration property. For more information about EMR HBase, refer to [HBase on Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hbase-s3.html).
+
+	
+## Deploy Kylin in a dedicated EC2 
+
+Running Kylin on a dedicated client node (not a master, core or task node) is recommended. You can start a separate EC2 instance within the same VPC and subnet as your EMR cluster, copy the Hadoop clients from the master node to it, and then install Kylin on it. This can improve the stability of Kylin itself as well as the services on the master node. 
+	
\ No newline at end of file
diff --git a/website/_docs24/install/kylin_cluster.cn.md b/website/_docs24/install/kylin_cluster.cn.md
new file mode 100644
index 0000000..71efc9d
--- /dev/null
+++ b/website/_docs24/install/kylin_cluster.cn.md
@@ -0,0 +1,58 @@
+---
+layout: docs-cn
+title:  "Deploy in Cluster Mode"
+categories: install
+permalink: /cn/docs24/install/kylin_cluster.html
+---
+
+
+### Kylin Server Modes
+
+Kylin instances are stateless. The runtime state is saved in the metadata store in HBase (specified by `kylin.metadata.url` in `conf/kylin.properties`). For load-balancing considerations, it is recommended to run multiple Kylin instances sharing one metadata store, so that they share the same state on table schemas, job status, Cube status, etc.
+
+Each Kylin instance has a "kylin.server.mode" entry in `conf/kylin.properties` specifying the runtime mode; it has three options: 
+
+ *  **job** : run the job engine in this instance; the Kylin job engine manages the jobs submitted to the cluster;
+ *  **query** : run the query engine only; the Kylin query engine accepts and answers your SQL queries;
+ *  **all** : run both the job engine and the query engine in this instance. 
+
+Note that only one instance can run the job engine ("all" or "job" mode); the others must be in "query" mode. 
+
+A typical scenario is depicted in the following chart:
+
+![]( /images/install/kylin_server_modes.png)
+
+### Configure Multiple Kylin Servers
+
+When you run Kylin in a cluster with multiple Kylin server instances, please make sure you configure the following properties correctly in `conf/kylin.properties` for each instance.
+
+ *  `kylin.rest.servers`
+	List of servers in use, so that one instance can notify the other servers when there is an event change. For example: 
+
+```
+kylin.rest.servers=host1:7070,host2:7070
+```
+
+ *  `kylin.server.mode`
+
+
+By default, only one instance has `kylin.server.mode` set to "all" or "job"; the rest are set to "query".
+
+```
+kylin.server.mode=all
+```
+
+That is, by default only one node schedules and executes build jobs. If you need multiple nodes to execute build jobs concurrently, for high availability and high concurrency, please refer to "Enable multiple job engines" on the [Advanced Settings](advance_settings.html) page.
+
+### Set Up a Load Balancer
+
+To ensure high availability of the Kylin servers, you need to set up a load balancer in front of them, letting it route the incoming requests to the cluster. Clients communicate with the load balancer instead of with a specific Kylin instance. Setting up the load balancer is out of scope here; you may select an implementation like Nginx, F5 or a cloud LB service.
+	
+### Read/Write-Separated Two-Cluster Deployment
+
+Kylin can work with two clusters for better stability and performance:
+
+ * A Hadoop cluster for Cube building; this can be a large cluster shared with other applications;
+ * An HBase cluster for SQL queries; usually this is a dedicated cluster for Kylin, with fewer nodes than the Hadoop cluster. Its HBase configuration can be tuned for Kylin's read-only Cube access pattern.  
+
+This deployment strategy has been adopted and verified by many large enterprises. It is the best production deployment solution we know of so far. For how to set up this architecture, please refer to [Deploy Apache Kylin with Standalone HBase Cluster](/blog/2016/06/10/standalone-hbase-cluster/)
\ No newline at end of file
diff --git a/website/_docs24/install/kylin_cluster.md b/website/_docs24/install/kylin_cluster.md
new file mode 100644
index 0000000..22270ed
--- /dev/null
+++ b/website/_docs24/install/kylin_cluster.md
@@ -0,0 +1,57 @@
+---
+layout: docs
+title:  "Deploy in Cluster Mode"
+categories: install
+permalink: /docs24/install/kylin_cluster.html
+---
+
+
+### Kylin Server modes
+
+Kylin instances are stateless; the runtime state is saved in the metadata store in HBase (specified by `kylin.metadata.url` in `conf/kylin.properties`). For load-balancing considerations, it is recommended to run multiple Kylin instances sharing the same metadata store, so that they share the same state on table schemas, job status, Cube status, etc.
+
+Each Kylin instance has a "kylin.server.mode" entry in `conf/kylin.properties` specifying the runtime mode; it has three options: 
+
 *  **job** : run the job engine in this instance; the Kylin job engine manages the jobs submitted to the cluster;
 *  **query** : run the query engine only; the Kylin query engine accepts and answers your SQL queries;
 *  **all** : run both the job engine and the query engine in this instance. 
+
+ By default, only one instance can run the job engine ("all" or "job" mode); the others should be in "query" mode. 
+
+ If you want to run multiple job engines for high availability or to handle heavy concurrent jobs, please check "Enable multiple job engines" on the [Advanced settings](advance_settings.html) page.
+
+A typical scenario is depicted in the following chart:
+
+![]( /images/install/kylin_server_modes.png)
+
+### Configure Multiple Kylin Servers
+
+If you are running Kylin in a cluster with multiple Kylin server instances, please make sure the following properties are correctly configured in `conf/kylin.properties` for EVERY instance.
+
 *  `kylin.rest.servers`
	List of servers in use, so that one instance can notify the other servers when there is an event change. For example: 
+
+```
+kylin.rest.servers=host1:7070,host2:7070
+```
+
 *  `kylin.server.mode`
	Make sure there is only one instance whose `kylin.server.mode` is set to "all" or "job"; the others should be "query"
+
+```
+kylin.server.mode=all
+```
+
+### Setup Load Balancer 
+
+To enable Kylin service high availability, you need to set up a load balancer in front of these servers, letting it route the incoming requests to the cluster. The client side communicates with the load balancer instead of with a specific Kylin instance. The setup of the load balancer is out of scope here; you may select an implementation like Nginx, F5 or a cloud LB service; a minimal sketch follows. 
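+
+As an illustration only, a minimal Nginx configuration routing to two Kylin query servers might look like this (host names are placeholders):
+
+```
+upstream kylin_servers {
+    server host1:7070;
+    server host2:7070;
+}
+server {
+    listen 80;
+    location /kylin {
+        proxy_pass http://kylin_servers;
+    }
+}
+```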
+	
+
+### Configure Read/Write separated deployment
+
+Kylin can work with two clusters to gain better stability and performance:
+
+ * A Hadoop cluster for Cube building; This can be a shared, large cluster.
+ * An HBase cluster for SQL queries; usually this is a dedicated cluster with fewer nodes. The HBase configuration can be tuned for better read performance, as Cubes are immutable after being built.  
+
+This deployment has been adopted and verified by many large companies. It is the best production deployment solution we know of. For how to do this, please refer to [Deploy Apache Kylin with Standalone HBase Cluster](/blog/2016/06/10/standalone-hbase-cluster/)
\ No newline at end of file
diff --git a/website/_docs24/install/kylin_docker.cn.md b/website/_docs24/install/kylin_docker.cn.md
new file mode 100644
index 0000000..474694e
--- /dev/null
+++ b/website/_docs24/install/kylin_docker.cn.md
@@ -0,0 +1,10 @@
+---
+layout: docs
+title:  "Run Kylin with Docker"
+categories: install
+permalink: /cn/docs24/install/kylin_docker.html
+version: v1.5.3
+since: v1.5.2
+---
+
+Apache Kylin runs as a client of a Hadoop cluster, so it is reasonable to run it within a Docker container; please check the github project [kylin-docker](https://github.com/Kyligence/kylin-docker/).
diff --git a/website/_docs24/install/kylin_docker.md b/website/_docs24/install/kylin_docker.md
new file mode 100644
index 0000000..248e27d
--- /dev/null
+++ b/website/_docs24/install/kylin_docker.md
@@ -0,0 +1,10 @@
+---
+layout: docs
+title:  "Run Kylin with Docker"
+categories: install
+permalink: /docs24/install/kylin_docker.html
+version: v1.5.3
+since: v1.5.2
+---
+
+Apache Kylin runs as a client of a Hadoop cluster, so it is reasonable to run it within a Docker container; please check [this project](https://github.com/Kyligence/kylin-docker/) on github.
diff --git a/website/_docs24/install/manual_install_guide.cn.md b/website/_docs24/install/manual_install_guide.cn.md
new file mode 100644
index 0000000..d6db50f
--- /dev/null
+++ b/website/_docs24/install/manual_install_guide.cn.md
@@ -0,0 +1,29 @@
+---
+layout: docs-cn
+title:  "Manual Installation Guide"
+categories: install
+permalink: /cn/docs24/install/manual_install_guide.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## Introduction
+
+In most cases, our automated script (see the [Installation Guide](./index.html)) can help you launch Kylin in your Hadoop sandbox or even your Hadoop cluster. In case the deployment script fails, we wrote this document as a reference guide to fix the problem.
+
+Basically, this document explains every step of the automated script. We assume that you are already familiar with Hadoop operations on Linux.
+
+## Prerequisites
+* Copy the Kylin binary package to your local machine and extract it; we refer to that folder as $KYLIN_HOME afterwards
+`export KYLIN_HOME=/path/to/kylin`
+`cd $KYLIN_HOME`
+
+### Start Kylin
+
+Start Kylin with
+`./bin/kylin.sh start`
+
+and stop Kylin with
+`./bin/kylin.sh stop`
diff --git a/website/_docs24/release_notes.md b/website/_docs24/release_notes.md
new file mode 100644
index 0000000..40ec13b
--- /dev/null
+++ b/website/_docs24/release_notes.md
@@ -0,0 +1,2212 @@
+---
+layout: docs
+title:  Release Notes
+categories: gettingstarted
+permalink: /docs24/release_notes.html
+---
+
+To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/); 
+source code packages, binary packages, the ODBC driver and the installation guide are available there.
+
+For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
+
+or send it to an Apache Kylin mailing list:
+
+* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
+* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
+
+
+## v2.4.1 - 2018-09-09
+_Tag:_ [kylin-2.4.1](https://github.com/apache/kylin/tree/kylin-2.4.1)
+This is a bug-fix release after 2.4.0, with 22 bug fixes and enhancements. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+
+__Improvement__
+* [KYLIN-3421] - Improve job scheduler fetch performance
+* [KYLIN-3424] - Missing invoke addCubingGarbageCollectionSteps in the cleanup step for HBaseMROutput2Transition
+* [KYLIN-3422] - Support multi-path of domain for kylin connection
+* [KYLIN-3463] - Improve optimize job by avoiding creating empty output files on HDFS
+* [KYLIN-3503] - Missing java.util.logging.config.file when starting kylin instance
+* [KYLIN-3507] - Query NPE when project is not found
+
+__Bug__
+* [KYLIN-2662] - NegativeArraySizeException in "Extract Fact Table Distinct Columns"
+* [KYLIN-3025] - kylin odbc error : {fn CONVERT} for bigint type in tableau 10.4
+* [KYLIN-3255] - Cannot save cube
+* [KYLIN-3347] - QueryService Exception when using calcite function ex : {fn CURRENT_TIMESTAMP(0)}
+* [KYLIN-3391] - BadQueryDetector only detect first query
+* [KYLIN-3403] - Querying sample cube with filter "KYLIN_CAL_DT.WEEK_BEG_DT >= CAST('2001-09-09' AS DATE)" returns unexpected empty result set
+* [KYLIN-3428] - java.lang.OutOfMemoryError: Requested array size exceeds VM limit
+* [KYLIN-3438] - mapreduce.job.queuename does not work at 'Convert Cuboid Data to HFile' Step
+* [KYLIN-3451] - Cloned cube doesn't have Mandatory Cuboids copied
+* [KYLIN-3456] - Cube level's snapshot config does not work
+* [KYLIN-3460] - {fn CURRENT_DATE()} parse error
+* [KYLIN-3461] - "metastore.sh refresh-cube-signature" not updating cube signature as expected
+* [KYLIN-3476] - Fix TupleExpression verification when parsing sql
+* [KYLIN-3492] - Wrong constant value in KylinConfigBase.getDefaultVarcharPrecision
+* [KYLIN-3500] - kylin 2.4 use jdbc datasource :Unknown column 'A.A.CRT_DATE' in 'where clause'
+* [KYLIN-3505] - DataType.getType wrong usage of cache
+
+
+## v2.4.0 - 2018-06-23
+_Tag:_ [kylin-2.4.0](https://github.com/apache/kylin/tree/kylin-2.4.0)
+This is a major release after 2.3.x, with 8 new features and more than 30 bug fixes and enhancements. Check [How to upgrade](/docs24/howto/howto_upgrade.html).
+
+__New Feature__
+* [KYLIN-2484] - Spark engine to support source from Kafka
+* [KYLIN-3221] - Allow externalizing lookup table snapshot
+* [KYLIN-3283] - Support values RelNode
+* [KYLIN-3315] - Allow each project to set its own source at project level
+* [KYLIN-3343] - Support JDBC source on UI
+* [KYLIN-3358] - Support sum(case when...), sum(2*price+1), count(column) and more
+* [KYLIN-3366] - Configure automatic enabling of cubes after a build process
+* [KYLIN-3378] - Support Kafka table join with Hive tables
+
+__Improvement__
+* [KYLIN-3137] - Spark cubing without hive-site.xml
+* [KYLIN-3174] - Default scheduler enhancement
+* [KYLIN-3220] - Add manager for project ACL.
+* [KYLIN-3234] - ResourceStore should add a API that can recursively list path.
+* [KYLIN-3246] - Add manager for user.
+* [KYLIN-3248] - Add batch grant API for project ACL.
+* [KYLIN-3251] - Add a hook that can customer made test_case_data
+* [KYLIN-3266] - Improve CI coverage
+* [KYLIN-3267] - add override MR config at project/cube level only for mem-hungry build steps 
+* [KYLIN-3271] - Optimize sub-path check of ResourceTool
+* [KYLIN-3275] - Add unit test for StorageCleanupJob
+* [KYLIN-3279] - Util Class for encryption and decryption
+* [KYLIN-3284] - Refactor all OLAPRel computeSelfCost
+* [KYLIN-3289] - Refactor the storage garbage clean up code
+* [KYLIN-3294] - Remove HBaseMROutput.java, RangeKeyDistributionJob.java and other sunset classes
+* [KYLIN-3314] - Refactor code for cube planner algorithm
+* [KYLIN-3320] - CubeStatsReader cannot print stats properly for some cube 
+* [KYLIN-3328] - Upgrade the metadata of sample cube to latest
+* [KYLIN-3331] - Kylin start script hangs during retrieving hive dependencys
+* [KYLIN-3345] - Use Apache Parent POM 19
+* [KYLIN-3354] - KeywordDefaultDirtyHack cannot handle double-quoted defaultCatalog identifier
+* [KYLIN-3369] - Reduce the data size sink from Kafka topic to HDFS
+* [KYLIN-3380] - Allow to configure sqoop for jdbc source with a kylin_sqoop_conf.xml like hive
+* [KYLIN-3386] - TopN measure validate code refactor to make it more clear
+
+__Bug__
+* [KYLIN-1768] - NDCuboidMapper throws ArrayIndexOutOfBoundsException when dimension is fixed length encoded to more than 256 bytes
+* [KYLIN-1948] - IntegerDimEnc, does not encode -1 correctly
+* [KYLIN-3115] - Incompatible RowKeySplitter initialize between build and merge job
+* [KYLIN-3122] - Partition elimination algorithm seems to be inefficient and have serious issues with handling date/time ranges, can lead to very slow queries and OOM/Java heap dump conditions
+* [KYLIN-3149] - Calcite's ReduceExpressionsRule.PROJECT_INSTANCE not working as expected
+* [KYLIN-3168] - CubeHFileJob should use currentHBaseConfiguration but not new create hbase configuration
+* [KYLIN-3257] - Useless call in FuzzyValueCombination
+* [KYLIN-3277] - Override hiveconf settings when connecting to hive using jdbc
+* [KYLIN-3281] - OLAPProjectRule can't normal working with  projectRel[input=sortRel]
+* [KYLIN-3292] - The setting config dialog will cause NPE in Kylin server
+* [KYLIN-3293] - FixedLenHexDimEnc return a wrong code length leads to cut bytes error.
+* [KYLIN-3295] - Unused method SQLDigestUtil#appendTsFilterToExecute
+* [KYLIN-3296] - When merging a cube, get java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native Method)
+* [KYLIN-3311] - Segments overlap error (refactor write conflict exception)
+* [KYLIN-3324] - NegativeArraySizeException in CreateDictionaryJob$2.getDictionary()
+* [KYLIN-3336] - java.lang.NoSuchMethodException: org.apache.kylin.tool.HBaseUsageExtractor.execute([Ljava.lang.String;)
+* [KYLIN-3348] - "missing LastBuildJobID" error when building new cube segment
+* [KYLIN-3352] - Segment pruning bug, e.g. date_col > "max_date+1"
+* [KYLIN-3363] - Wrong partition condition appended in JDBC Source
+* [KYLIN-3367] - Add compatibility for new versions of HBase
+* [KYLIN-3368] - "/kylin/kylin_metadata/metadata/" has many garbage for spark cubing
+* [KYLIN-3388] - Data may become not correct if mappers fail during the redistribute step, "distribute by rand()"
+* [KYLIN-3396] - NPE throws when materialize lookup table to HBase
+* [KYLIN-3398] - Inaccurate arithmetic operation in LookupTableToHFileJob#calculateShardNum
+* [KYLIN-3400] - WipeCache and createCubeDesc causes deadlock
+* [KYLIN-3401] - The current using zip compress tool has an arbitrary file write vulnerability
+* [KYLIN-3404] - Last optimized time detail was not showing after cube optimization
+
+__Task__
+* [KYLIN-3327] - Upgrade surefire version to 2.21.0
+* [KYLIN-3372] - Upgrade jackson-databind version due to security concerns
+* [KYLIN-3415] - Remove "external" module
+
+__Sub-task__
+* [KYLIN-3359] - Support sum(expression) if possible
+* [KYLIN-3362] - Support dynamic dimension push down
+* [KYLIN-3364] - Make the behavior of BigDecimalSumAggregator consistent with hive
+* [KYLIN-3373] - Some improvements for lookup table - UI part change
+* [KYLIN-3374] - Some improvements for lookup table - metadata change
+* [KYLIN-3375] - Some improvements for lookup table - build change
+* [KYLIN-3376] - Some improvements for lookup table - query change
+* [KYLIN-3377] - Some improvements for lookup table - snapshot management
+
+## v2.3.2 - 2018-07-08
+_Tag:_ [kylin-2.3.2](https://github.com/apache/kylin/tree/kylin-2.3.2)
+This is a bug fix release after 2.3.1, with 12 bug fixes and enhancements. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+
+__Improvement__
+* [KYLIN-3345] - Use Apache Parent POM 19
+* [KYLIN-3372] - Upgrade jackson-databind version due to security concerns
+* [KYLIN-3415] - Remove "external" module
+
+__Bug__
+* [KYLIN-3115] - Incompatible RowKeySplitter initialize between build and merge job
+* [KYLIN-3336] - java.lang.NoSuchMethodException: org.apache.kylin.tool.HBaseUsageExtractor.execute([Ljava.lang.String;)
+* [KYLIN-3348] - "missing LastBuildJobID" error when building new cube segment
+* [KYLIN-3352] - Segment pruning bug, e.g. date_col > "max_date+1"
+* [KYLIN-3363] - Wrong partition condition appended in JDBC Source
+* [KYLIN-3388] - Data may become not correct if mappers fail during the redistribute step, "distribute by rand()"
+* [KYLIN-3400] - WipeCache and createCubeDesc causes deadlock
+* [KYLIN-3401] - The current using zip compress tool has an arbitrary file write vulnerability
+* [KYLIN-3404] - Last optimized time detail was not showing after cube optimization
+
+## v2.3.1 - 2018-03-28
+_Tag:_ [kylin-2.3.1](https://github.com/apache/kylin/tree/kylin-2.3.1)
+This is a bug fix release after 2.3.0, with 12 bug fixes and enhancements. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+
+__Improvement__
+* [KYLIN-3233] - CacheController cannot handle a cacheKey containing "/"
+* [KYLIN-3278] - Kylin should not distribute hive table by random at Step1
+* [KYLIN-3300] - Upgrade jackson-databind to 2.6.7.1 with security issue fixed
+* [KYLIN-3301] - Upgrade opensaml to 2.6.6 with security issue fixed
+
+__Bug__
+* [KYLIN-3270] - Fix the blocking issue in Cube optimizing job
+* [KYLIN-3276] - Fix the query cache bug with dynamic parameter
+* [KYLIN-3288] - "Sqoop To Flat Hive Table" step should specify "mapreduce.queue.name"
+* [KYLIN-3306] - Fix the rarely happened unit test exception of generic algorithm
+* [KYLIN-3287] - Dict building error when a shard-by column uses dict encoding
+* [KYLIN-3280] - The delete button should not be enabled without any segment in cube segment delete confirm dialog
+* [KYLIN-3119] - A few bugs in the function 'massageSql' of 'QueryUtil.java'
+* [KYLIN-3236] - The function 'reGenerateAdvancedDict()' has a logic error, which will cause an exception when you edit the cube.
+
+
+## v2.3.0 - 2018-03-04
+_Tag:_ [kylin-2.3.0](https://github.com/apache/kylin/tree/kylin-2.3.0)
+This is a major release after 2.2, with more than 250 bug fixes and enhancements. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+
+__New Feature__
+* [KYLIN-3125] - Support SparkSql in Cube building step "Create Intermediate Flat Hive Table"
+* [KYLIN-3052] - Support Redshift as data source
+* [KYLIN-3044] - Support SQL Server as data source
+* [KYLIN-2999] - One click migrate cube in web
+* [KYLIN-2960] - Support user/group and role authentication for LDAP
+* [KYLIN-2902] - Introduce project-level concurrent query number control
+* [KYLIN-2776] - New metric framework based on dropwizard
+* [KYLIN-2727] - Introduce cube planner able to select cost-effective cuboids to be built by cost-based algorithms
+* [KYLIN-2726] - Introduce a dashboard for showing kylin service related metrics, like query count, query latency, job count, etc
+* [KYLIN-1892] - Support volatile range for segments auto merge
+
+__Improvement__
+* [KYLIN-3265] - Add "jobSearchMode" as a condition to "/kylin/api/jobs" API
+* [KYLIN-3245] - Support fuzzy search when searching cubes
+* [KYLIN-3243] - Optimize the code and keep the code consistent in the access.html
+* [KYLIN-3239] - Refactor the ACL code about "checkPermission" and "hasPermission"
+* [KYLIN-3215] - Remove 'drop' option when job status is stopped and error
+* [KYLIN-3214] - Initialize ExternalAclProvider when starting kylin
+* [KYLIN-3209] - Optimize job partial statistics path to be consistent with the existing one
+* [KYLIN-3196] - Replace StringUtils.containsOnly with Regex
+* [KYLIN-3194] - Tolerate broken job metadata caused by executable ClassNotFoundException
+* [KYLIN-3193] - No model clone across projects
+* [KYLIN-3182] - Update Kylin help menu links
+* [KYLIN-3181] - The submit button status of refreshing cube is not suitable when the start time is equal to or later than the end time.
+* [KYLIN-3162] - Fix alignment problem of 'Save Query' pop-up box
+* [KYLIN-3159] - Remove unnecessary cube access request
+* [KYLIN-3158] - Metadata broadcast should only retry failed node
+* [KYLIN-3157] - Enhance query timeout to entire query life cycle
+* [KYLIN-3151] - Enable 'Query History' to show items filtered by different projects
+* [KYLIN-3150] - Support different compression in PercentileCounter measure
+* [KYLIN-3145] - Support Kafka JSON message whose property name includes "_"
+* [KYLIN-3144] - Adopt Collections.emptyList() for empty list values
+* [KYLIN-3129] - Fix the joda library conflicts during Kylin start on EMR 5.8+
+* [KYLIN-3128] - Configs for allowing export query results for admin/nonadmin user
+* [KYLIN-3127] - In the Insights tab, results section, make the list of Cubes hit by the query either scrollable or multiline
+* [KYLIN-3124] - Support horizontal scroll bar in 'Insight'
+* [KYLIN-3117] - Hide project config in cube level
+* [KYLIN-3114] - Enable kylin.web.query-timeout for web query request
+* [KYLIN-3113] - Editing Measure supports fuzzy search in web
+* [KYLIN-3108] - Change IT embedded Kafka broker path to /kylin/streaming_config/UUID
+* [KYLIN-3105] - Interface Scheduler's stop method should be removed
+* [KYLIN-3100] - Building empty partitioned cube with rest api supports partition_start_date
+* [KYLIN-3098] - Enable kylin.query.max-return-rows to limit the maximum row count returned to user
+* [KYLIN-3092] - Synchronize read/write operations on Managers
+* [KYLIN-3090] - Refactor to consolidate all caches and managers under KylinConfig
+* [KYLIN-3088] - Spell Error of isCubeMatch
+* [KYLIN-3086] - Ignore the intermediate tables when loading Hive source tables
+* [KYLIN-3079] - Use Docker for document build environment
+* [KYLIN-3078] - Optimize the estimated size of percentile measure
+* [KYLIN-3076] - Make kylin remember the choices we have made in the "Monitor>Jobs" page
+* [KYLIN-3074] - Change cube access to project access in ExternalAclProvider.java
+* [KYLIN-3073] - Automatically refresh the 'Saved Queries' tab page when new query saved. 
+* [KYLIN-3070] - Enable 'kylin.source.hive.flat-table-storage-format' for flat table storage format
+* [KYLIN-3067] - Provide web interface for dimension capping feature
+* [KYLIN-3065] - Add 'First' and 'Last' button in case 'Query History' is too much
+* [KYLIN-3064] - Turn off Yarn timeline-service when submit mr job
+* [KYLIN-3048] - Give warning when merge with holes, but allow user to force proceed at the same time
+* [KYLIN-3043] - No need to create materialized views for lookup tables without snapshot
+* [KYLIN-3039] - Unclosed hbaseAdmin in ITAclTableMigrationToolTest
+* [KYLIN-3036] - Allow complex column type when loading source table
+* [KYLIN-3024] - Input Validator for "Auto Merge Thresholds" text box
+* [KYLIN-3019] - The pop-up window of 'Calculate Cardinality' and 'Load Hive Table' should have the same hint
+* [KYLIN-3009] - Rest API to get Cube join SQL
+* [KYLIN-3008] - Introduce "submit-patch.py"
+* [KYLIN-3006] - Upgrade Spark to 2.1.2
+* [KYLIN-2997] - Allow change engineType even if there are segments in cube
+* [KYLIN-2996] - Show DeployCoprocessorCLI Log failed tables info
+* [KYLIN-2993] - Add special mr config for base cuboid step
+* [KYLIN-2992] - Avoid OOM in  CubeHFileJob.Reducer
+* [KYLIN-2990] - Add warning window of exist model names for other project selected
+* [KYLIN-2987] - Add 'auto.purge=true' when creating intermediate hive table or redistribute a hive table
+* [KYLIN-2985] - Cache temp json file created by each Calcite Connection
+* [KYLIN-2984] - Only allow delete FINISHED or DISCARDED job
+* [KYLIN-2982] - Avoid upgrade column in OLAPTable
+* [KYLIN-2981] - Typo in Cube refresh setting page.
+* [KYLIN-2980] - Remove getKey/Value setKey/Value from Kylin's Pair.
+* [KYLIN-2975] - Unclosed Statement in test
+* [KYLIN-2966] - push down jdbc column type id mapping
+* [KYLIN-2965] - Keep the same cost calculation logic between RealizationChooser and CubeInstance
+* [KYLIN-2947] - Changed the Pop-up box when no project selected
+* [KYLIN-2941] - Configuration setting for SSO
+* [KYLIN-2940] - List job restful throw NPE when time filter not set
+* [KYLIN-2935] - Improve the way to deploy coprocessor
+* [KYLIN-2928] - PUSH DOWN query cannot use order by function
+* [KYLIN-2921] - Refactor DataModelDesc
+* [KYLIN-2918] - Table ACL needs GUI
+* [KYLIN-2913] - Enable job retry for configurable exceptions
+* [KYLIN-2912] - Remove "hfile" folder after bulk load to HBase
+* [KYLIN-2909] - Refine Email Template for notification by freemarker
+* [KYLIN-2908] - Add one option for migration tool to indicate whether to migrate segment data
+* [KYLIN-2905] - Refine the process of submitting a job
+* [KYLIN-2884] - Add delete segment function for portal
+* [KYLIN-2881] - Improve hbase coprocessor exception handling at kylin server side 
+* [KYLIN-2875] - Cube e-mail notification Validation
+* [KYLIN-2867] - split large fuzzy Key set
+* [KYLIN-2866] - Enlarge the reducer number for hyperloglog statistics calculation at step FactDistinctColumnsJob
+* [KYLIN-2847] - Avoid doing useless work by checking query deadline
+* [KYLIN-2846] - Add a config of hbase namespace for cube storage
+* [KYLIN-2809] - Support operator "+" as string concat operator (see the example after this list)
+* [KYLIN-2801] - Make default precision and scale in DataType (for hive) configurable
+* [KYLIN-2764] - Build the dict for UHC column with MR
+* [KYLIN-2736] - Use multiple threads to calculate HyperLogLogPlusCounter in FactDistinctColumnsMapper
+* [KYLIN-2672] - Only clean necessary cache for CubeMigrationCLI
+* [KYLIN-2656] - Support Zookeeper ACL
+* [KYLIN-2649] - Tableau could send "select *" on a big table
+* [KYLIN-2645] - Upgrade Kafka version to 0.11.0.1
+* [KYLIN-2556] - Switch Findbugs to Spotbugs
+* [KYLIN-2363] - Prune cuboids by capping number of dimensions
+* [KYLIN-1925] - Do not allow cross project clone for cube
+* [KYLIN-1872] - Make query visible and interruptible, improve server's stability
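+
+KYLIN-2809 above accepts "+" as a string concatenation operator when both operands are strings. A minimal sketch, reusing the hypothetical endpoint and sample columns from the earlier v2.4.0 example:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class ConcatOperatorExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("org.apache.kylin.jdbc.Driver");
+        try (Connection conn = DriverManager.getConnection(
+                "jdbc:kylin://localhost:7070/learn_kylin", "ADMIN", "KYLIN");
+             Statement stmt = conn.createStatement();
+             // "+" concatenates here because both operands are strings (KYLIN-2809)
+             ResultSet rs = stmt.executeQuery(
+                 "SELECT lstg_format_name + '_' + ops_region, COUNT(*) "
+                     + "FROM kylin_sales GROUP BY lstg_format_name, ops_region")) {
+            while (rs.next()) {
+                System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
+            }
+        }
+    }
+}
+```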
+
+__Bug__
+* [KYLIN-3268] - Tomcat Security Vulnerability Alert. The version of the tomcat for kylin should upgrade to 7.0.85.
+* [KYLIN-3263] - AbstractExecutable's retry has problem
+* [KYLIN-3247] - REST API 'GET /api/cubes/{cubeName}/segs/{segmentName}/sql' should return a cube segment sql
+* [KYLIN-3242] - export result should use alias too
+* [KYLIN-3241] - When refresh on 'Add Cube Page', a blank page will appear.
+* [KYLIN-3228] - Should remove the related segment when deleting a job
+* [KYLIN-3227] - Automatically remove the blank at the end of lines in properties files
+* [KYLIN-3226] - When user logs in with only query permission, 'N/A' is displayed in the cube's action list.
+* [KYLIN-3224] - Data can't be shown when using kylin pushdown mode
+* [KYLIN-3223] - Query for the list of hybrid cubes results in NPE
+* [KYLIN-3222] - The function of editing 'Advanced Dictionaries' in cube is unavailable.
+* [KYLIN-3219] - Fix NPE when updating metrics during Spark CubingJob
+* [KYLIN-3216] - Remove the hard-code of spark-history path in 'check-env.sh'
+* [KYLIN-3213] - Kylin help has duplicate items
+* [KYLIN-3211] - Class IntegerDimEnc should give more exception information when the length exceeds the max or is less than the min
+* [KYLIN-3210] - The project shows '_null' in result page.
+* [KYLIN-3205] - Allow one column is used for both dimension and precisely count distinct measure
+* [KYLIN-3204] - Potentially unclosed resources in JdbcExplorer#evalQueryMetadata
+* [KYLIN-3199] - The login dialog should be closed when ldap user with no permission login correctly
+* [KYLIN-3190] - Fix wrong parameter in revoke access API
+* [KYLIN-3184] - Fix '_null' project on the query page
+* [KYLIN-3183] - Fix the bug of the 'Remove' button in 'Query History'
+* [KYLIN-3178] - Failure to delete a table ACL will cause the webpage to always show "Please wait..."
+* [KYLIN-3177] - Merged Streaming cube segment has no start/end time
+* [KYLIN-3175] - Streaming segment lost TSRange after merge
+* [KYLIN-3173] - DefaultScheduler shutdown didn't reset the 'initialized' field.
+* [KYLIN-3172] - No such file or directory error with CreateLookupHiveViewMaterializationStep 
+* [KYLIN-3167] - Datatype lost precision when using beeline
+* [KYLIN-3165] - Fix the IllegalArgumentException during segments auto merge
+* [KYLIN-3164] - HBase connection must be closed when clearing connection pool
+* [KYLIN-3143] - Wrong use of Preconditions.checkNotNull() in ManagedUser#removeAuthoritie
+* [KYLIN-3139] - Failure in map-reduce job due to undefined hdp.version variable when using HDP stack and remote HBase cluster
+* [KYLIN-3136] - Endless status when a subtask is in an illegal RUNNING state
+* [KYLIN-3135] - Fix regular expression bug in SQL comments
+* [KYLIN-3131] - After refreshing the page, the cubes can't be sorted by 'create_time'
+* [KYLIN-3130] - If we add a new cube then refresh the page, the page is blank
+* [KYLIN-3116] - Fix cardinality calculation checkbox issue when loading tables
+* [KYLIN-3112] - The job 'Pause' operation has logic bug in the kylin server.
+* [KYLIN-3111] - Close of HBaseAdmin instance should be placed in finally block
+* [KYLIN-3110] - The dashboard page has some display problems.
+* [KYLIN-3106] - DefaultScheduler.shutdown should use ExecutorService.shutdownNow instead of ExecutorService.shutdown
+* [KYLIN-3104] - When the user log out from "Monitor" page, an alert dialog will pop up warning "Failed to load query."
+* [KYLIN-3102] - Solve the problems for incomplete display of Hive Table tree.
+* [KYLIN-3101] - The "search" icon separates from the "Filter" textbox when clicking the "showSteps" button of a job in the jobList
+* [KYLIN-3097] - A few spelling errors in partials directory
+* [KYLIN-3087] - Fix the DistributedLock release bug in GlobalDictionaryBuilder
+* [KYLIN-3085] - CubeManager.updateCube() must not update the cached CubeInstance
+* [KYLIN-3084] - File not found Exception when processing union-all in TEZ mode
+* [KYLIN-3083] - potential overflow in CubeHBaseRPC#getCoprocessorTimeoutMillis
+* [KYLIN-3082] - Close of GTBuilder should be placed in finally block in InMemCubeBuilder
+* [KYLIN-3081] - Ineffective null check in CubeController#cuboidsExport
+* [KYLIN-3077] - EDW.TEST_SELLER_TYPE_DIM_TABLE is not being created by the integration test, but its presence in Hive is expected
+* [KYLIN-3069] - Add proper time zone support to the WebUI instead of GMT/PST kludge
+* [KYLIN-3063] - load-hive-conf.sh should not get the commented configuration item
+* [KYLIN-3061] - When we cancel the Topic modification for 'Kafka Setting' of streaming table, the 'Cancel' operation will make a mistake.
+* [KYLIN-3060] - The logical processing of creating or updating streaming table has a bug in server, which will cause a NullPointerException.
+* [KYLIN-3058] - We should limit the integer type ID and Port for "Kafka Setting" in "Streaming Cluster" page
+* [KYLIN-3056] - Fix 'Cannot find segment null' bug when click 'SQL' in the cube view page
+* [KYLIN-3055] - Fix NullPointerException for intersect_count
+* [KYLIN-3054] - The drop-down menu in the grid column of query results is partially cut off.
+* [KYLIN-3053] - When aggregation group verification fails, the error message about the aggregation group number does not match the actual number on the Advanced Setting page
+* [KYLIN-3049] - Filter the invalid zero value of the "Auto Merge Thresholds" parameter when you create or update a cube.
+* [KYLIN-3047] - Wrong column type when sync hive table via beeline
+* [KYLIN-3042] - In query results page, the results data table should resize when click "fullScreen" button
+* [KYLIN-3040] - Refreshing a non-partitioned cube changes the segment name to "19700101000000_2922789940817071255"
+* [KYLIN-3038] - Cannot support sum of a type-converted column in SQL
+* [KYLIN-3034] - In the models tree, the "Edit(JSON)" option is partially missing.
+* [KYLIN-3032] - Cube size shows 0 but actually it isn't empty
+* [KYLIN-3031] - KeywordDefaultDirtyHack should ignore case of default like other database does
+* [KYLIN-3030] - In the cubes table, the options of the last action column are partially missing.
+* [KYLIN-3029] - The warning window of existing cube name does not work
+* [KYLIN-3028] - Build cube error when set S3 as working-dir
+* [KYLIN-3026] - Can not see full cube names on insight page
+* [KYLIN-3020] - Improve org.apache.hadoop.util.ToolRunner to be threadsafe
+* [KYLIN-3017] - Footer covers the selection box and some options can not be selected
+* [KYLIN-3016] - StorageCleanup job doesn't clean up all the legacy files in a Read/Write separation environment
+* [KYLIN-3004] - Update validation when deleting segment
+* [KYLIN-3001] - Fix the wrong Cache key issue 
+* [KYLIN-2995] - Set SparkContext.hadoopConfiguration to HadoopUtil in Spark Cubing
+* [KYLIN-2994] - Handle NPE when load dict in DictionaryManager
+* [KYLIN-2991] - Query hit NumberFormatException if partitionDateFormat is not yyyy-MM-dd
+* [KYLIN-2989] - Close of BufferedWriter should be placed in finally block in SCCreator
+* [KYLIN-2974] - zero joint group can lead to query error
+* [KYLIN-2971] - Fix the wrong "Realization Names" in logQuery when hit cache
+* [KYLIN-2969] - Fix the wrong NumberBytesCodec cache in Number2BytesConverter 
+* [KYLIN-2968] - misspelled word in table_load.html
+* [KYLIN-2967] - Add the dependency check when deleting a project
+* [KYLIN-2962] - Dropping an error job does not delete its segment
+* [KYLIN-2959] - SAML logout issue
+* [KYLIN-2956] - building trie dictionary blocked on value of length over 4095 
+* [KYLIN-2953] - List of readable projects not correct if limit and offset are added
+* [KYLIN-2939] - Get config properties not correct in UI
+* [KYLIN-2933] - Fix compilation against the Kafka 1.0.0 release
+* [KYLIN-2930] - Selecting one column in union causes compile error
+* [KYLIN-2929] - speed up Dump file performance
+* [KYLIN-2922] - Query fails when a column is used as dimension and sum(column) at the same time
+* [KYLIN-2917] - Dup alias on OLAPTableScan
+* [KYLIN-2907] - Check if a number is a positive integer 
+* [KYLIN-2901] - Update correct cardinality for empty table
+* [KYLIN-2887] - Subquery columns not exported in OLAPContext allColumns
+* [KYLIN-2876] - Ineffective check in ExternalAclProvider
+* [KYLIN-2874] - Ineffective check in CubeDesc#getInitialCuboidScheduler
+* [KYLIN-2849] - Duplicate segment cannot be deleted, and data cannot be refreshed or merged
+* [KYLIN-2837] - Ineffective call to toUpperCase() in MetadataManager
+* [KYLIN-2836] - Lack of synchronization in CodahaleMetrics#close
+* [KYLIN-2835] - Unclosed resources in JdbcExplorer
+* [KYLIN-2794] - MultipleDictionaryValueEnumerator should output values in sorted order
+* [KYLIN-2756] - Let "LIMIT" be optional in "Inspect" page
+* [KYLIN-2470] - cube build failed when 0 bytes input for non-partition fact table
+* [KYLIN-1664] - Harden security check for '/kylin/api/admin/config' API
+
+__Task__
+* [KYLIN-3207] - Blog for Kylin Superset Integration
+* [KYLIN-3200] - Enable SonarCloud for Code Analysis
+* [KYLIN-3198] - More Chinese Howto Documents
+* [KYLIN-3195] - Kylin v2.3.0 Release
+* [KYLIN-3191] - Remove the deprecated configuration item kylin.security.acl.default-role
+* [KYLIN-3189] - Documents for kylin python client
+* [KYLIN-3080] - Kylin Qlik Sense Integration Documentation
+* [KYLIN-3068] - Rename deprecated parameter for HDFS block size in HiveColumnCardinalityJob
+* [KYLIN-3062] - Hide RAW measure
+* [KYLIN-3010] - Remove v1 Spark engine code
+* [KYLIN-2843] - Upgrade nvd3 version
+* [KYLIN-2797] - Remove MR engine V1
+* [KYLIN-2796] - Remove the legacy "statisticsenabled" codes in FactDistinctColumnsJob
+
+__Sub-task__
+* [KYLIN-3235] - add null check for SQL
+* [KYLIN-3202] - Doc directory for 2.3
+* [KYLIN-3155] - Create a document for how to use dashboard
+* [KYLIN-3154] - Create a document for cube planner
+* [KYLIN-3153] - Create a document for system cube creation
+* [KYLIN-3018] - Change maxLevel for layered cubing
+* [KYLIN-2946] - Introduce a tool for batch incremental building of system cubes
+* [KYLIN-2934] - Provide user guide for KYLIN-2656(Support Zookeeper ACL)
+* [KYLIN-2822] - Introduce sunburst chart to show cuboid tree
+* [KYLIN-2746] - Separate filter row count & aggregated row count for metrics collection returned by coprocessor
+* [KYLIN-2735] - Introduce an option to make job scheduler consider job priority
+* [KYLIN-2734] - Introduce hot cuboids export & import
+* [KYLIN-2733] - Introduce optimize job for adjusting cuboid set
+* [KYLIN-2732] - Introduce base cuboid as a new input for cubing job
+* [KYLIN-2731] - Introduce checkpoint executable
+* [KYLIN-2725] - Introduce a tool for creating system cubes relating to query & job metrics
+* [KYLIN-2723] - Introduce metrics collector for query & job metrics
+* [KYLIN-2722] - Introduce a new measure, called active reservoir, for actively pushing metrics to reporters
+
+## v2.2.0 - 2017-11-03
+
+_Tag:_ [kylin-2.2.0](https://github.com/apache/kylin/tree/kylin-2.2.0)
+This is a major release after 2.1, with more than 70 bug fixes and enhancements. Check [How to upgrade](/docs21/howto/howto_upgrade.html).
+
+__New Feature__
+* [KYLIN-2703] - Manage ACL through Apache Ranger
+* [KYLIN-2752] - Make HTable name prefix configurable
+* [KYLIN-2761] - Table Level ACL
+* [KYLIN-2775] - Streaming Cube Sample
+
+__Improvement__
+* [KYLIN-2535] - Use ResourceStore to manage ACL files
+* [KYLIN-2604] - Use global dict as the default encoding for precise distinct count in web
+* [KYLIN-2606] - Only return counter for precise count_distinct if query is exactAggregate
+* [KYLIN-2622] - Support non-global use of AppendTrieDictionary
+* [KYLIN-2623] - Move output (HBase) related code from the MR engine to the output side
+* [KYLIN-2653] - Spark Cubing read metadata from HDFS
+* [KYLIN-2717] - Move concept Table under Project
+* [KYLIN-2790] - Add an extending point to support other types of column family
+* [KYLIN-2795] - Improve REST API document, add get/list jobs (see the example after this list)
+* [KYLIN-2803] - Pushdown non "select" query
+* [KYLIN-2818] - Refactor dateRange & sourceOffset on CubeSegment
+* [KYLIN-2819] - Add "kylin.env.zookeeper-base-path" for zk path
+* [KYLIN-2823] - Trim TupleFilter after dictionary-based filter optimization
+* [KYLIN-2844] - Override "max-visit-scanrange" and "max-fuzzykey-scan" at cube level
+* [KYLIN-2854] - Remove duplicated controllers
+* [KYLIN-2856] - Log pushdown query as a kind of BadQuery
+* [KYLIN-2857] - MR configuration should be overwritten by user specified parameters when resuming MR jobs
+* [KYLIN-2858] - Add retry in cache sync
+* [KYLIN-2879] - Upgrade Spring & Spring Security to fix potential vulnerability
+* [KYLIN-2891] - Upgrade Tomcat to 7.0.82.
+* [KYLIN-2963] - Remove Beta for Spark Cubing
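+
+KYLIN-2795 above expands the REST API documentation with the get/list jobs endpoints. A hedged sketch of listing jobs over HTTP basic authentication — the host, project name, and query parameters are assumptions here; the REST API document remains the authoritative reference:
+
+```java
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Base64;
+
+public class ListJobsExample {
+    public static void main(String[] args) throws Exception {
+        // Placeholder host and project; parameter names follow the REST API document
+        URL url = new URL("http://localhost:7070/kylin/api/jobs"
+                + "?projectName=learn_kylin&limit=15&offset=0&timeFilter=1");
+        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+        conn.setRequestMethod("GET");
+        // Kylin's REST API uses HTTP basic authentication
+        String auth = Base64.getEncoder().encodeToString("ADMIN:KYLIN".getBytes("UTF-8"));
+        conn.setRequestProperty("Authorization", "Basic " + auth);
+        conn.setRequestProperty("Accept", "application/json");
+        try (BufferedReader in = new BufferedReader(
+                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
+            String line;
+            while ((line = in.readLine()) != null) {
+                System.out.println(line); // raw JSON array of job instances
+            }
+        }
+    }
+}
+```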
+
+__Bug__
+* [KYLIN-1794] - Enable job list even some job metadata parsing failed
+* [KYLIN-2600] - Incorrectly set the range start when filtering by the minimum value
+* [KYLIN-2705] - Allow removing model's "partition_date_column" on web
+* [KYLIN-2706] - Fix the bug for the comparator in SortedIteratorMergerWithLimit
+* [KYLIN-2707] - Fix NPE in JobInfoConverter
+* [KYLIN-2716] - Non-thread-safe WeakHashMap leading to high CPU
+* [KYLIN-2718] - Overflow when calculating combination amount based on static rules
+* [KYLIN-2753] - Job duration may become negative
+* [KYLIN-2766] - Kylin uses default FS to put the coprocessor jar, instead of the working dir
+* [KYLIN-2773] - Should not push down join condition when related columns are compatible but not consistent
+* [KYLIN-2781] - Make 'find-hadoop-conf-dir.sh' executable
+* [KYLIN-2786] - Miss "org.apache.kylin.source.kafka.DateTimeParser"
+* [KYLIN-2788] - HFile is not written to S3
+* [KYLIN-2789] - Cube's last build time is wrong
+* [KYLIN-2791] - Fix bug in readLong function in BytesUtil
+* [KYLIN-2798] - Can't rearrange the order of rowkey columns through web UI
+* [KYLIN-2799] - Building cube with percentile measure encounter with NullPointerException
+* [KYLIN-2800] - All dictionaries should be built based on the flat hive table
+* [KYLIN-2806] - Empty results from JDBC with Date filter in prepareStatement
+* [KYLIN-2812] - Save to wrong database when loading Kafka Topic
+* [KYLIN-2814] - HTTP connection may not be released in RestClient
+* [KYLIN-2815] - Empty results with prepareStatement but OK with KylinStatement
+* [KYLIN-2824] - Parse Boolean type in JDBC driver
+* [KYLIN-2832] - Table meta missing from system diagnosis
+* [KYLIN-2833] - Storage cleanup job could delete the intermediate hive table used by running jobs
+* [KYLIN-2834] - Bug in metadata sync, Broadcaster lost listener after cache wipe
+* [KYLIN-2838] - Should get storageType in changeHtableHost of CubeMigrationCLI
+* [KYLIN-2862] - BasicClientConnManager in RestClient doesn't work well when syncing many query servers
+* [KYLIN-2863] - Double caret bug in sample.sh for old version bash
+* [KYLIN-2865] - Wrong FS when using two clusters
+* [KYLIN-2868] - Include and exclude filters not work on ResourceTool
+* [KYLIN-2870] - Shortcut key description is wrong in Kylin Web
+* [KYLIN-2871] - Ineffective null check in SegmentRange
+* [KYLIN-2877] - Unclosed PreparedStatement in QueryService#execute()
+* [KYLIN-2906] - Check model/cube name is duplicated when creating model/cube
+* [KYLIN-2915] - Exception during query on lookup table
+* [KYLIN-2920] - Failed to get streaming config on WebUI
+* [KYLIN-2944] - HLLCSerializer, RawSerializer, PercentileSerializer returns shared object in serialize()
+* [KYLIN-2949] - Couldn't get authorities with LDAP in RedHat Linux
+
+
+__Task__
+* [KYLIN-2782] - Replace DailyRollingFileAppender with RollingFileAppender to allow log retention
+* [KYLIN-2925] - Provide document for Ranger security integration
+
+__Sub-task__
+* [KYLIN-2549] - Modify tools that related to Acl
+* [KYLIN-2728] - Introduce a new cuboid scheduler based on cuboid tree rather than static rules
+* [KYLIN-2729] - Introduce greedy algorithm for cube planner
+* [KYLIN-2730] - Introduce genetic algorithm for cube planner
+* [KYLIN-2802] - Enable cube planner phase one
+* [KYLIN-2826] - Add basic support classes for cube planner algorithms
+* [KYLIN-2961] - Provide user guide for Ranger Kylin Plugin
+
+## v2.1.0 - 2017-08-17
+
+_Tag:_ [kylin-2.1.0](https://github.com/apache/kylin/tree/kylin-2.1.0)
+This is a major release after 2.0, with more than 100 bug fixes and enhancements. Check [How to upgrade](/docs21/howto/howto_upgrade.html).
+
+__New Feature__
+
+* [KYLIN-1351] - Support RDBMS as data source
+* [KYLIN-2515] - Route unsupported query back to source
+* [KYLIN-2646] - Project level query authorization
+* [KYLIN-2665] - Add model JSON edit in web 
+
+__Improvement__
+
+* [KYLIN-2506] - Refactor Global Dictionary
+* [KYLIN-2562] - Allow configuring yarn app tracking URL pattern
+* [KYLIN-2578] - Refactor DistributedLock
+* [KYLIN-2579] - Improvement on subqueries: reorder subqueries joins with RelOptRule
+* [KYLIN-2580] - Improvement on subqueries: allow grouping by columns from subquery
+* [KYLIN-2586] - use random port for CacheServiceTest as fixed port 7777 might have been occupied
+* [KYLIN-2596] - Enable generating multiple streaming messages with one input message in streaming parser
+* [KYLIN-2597] - Deal with trivial expression in filters like x = 1 + 2
+* [KYLIN-2598] - Should not translate filter to a in-clause filter with too many elements
+* [KYLIN-2599] - select * in subquery fails due to a bug in hackSelectStar
+* [KYLIN-2602] - Add optional job threshold arg for MetadataCleanupJob
+* [KYLIN-2603] - Push 'having' filter down to storage
+* [KYLIN-2607] - Add http timeout for RestClient
+* [KYLIN-2610] - Optimize BuiltInFunctionTransformer performance
+* [KYLIN-2616] - GUI for multiple column count distinct measure
+* [KYLIN-2624] - Correct reporting of HBase errors
+* [KYLIN-2627] - ResourceStore to support simple rollback
+* [KYLIN-2628] - Remove synchronized modifier for reloadCubeLocalAt
+* [KYLIN-2633] - Upgrade Spark to 2.1
+* [KYLIN-2642] - Relax check in RowKeyColDesc to keep backward compatibility
+* [KYLIN-2667] - Ignore whitespace when caching query
+* [KYLIN-2668] - Support Calcite's properties in the JDBC URL (see the example after this list)
+* [KYLIN-2673] - Support changing the fact table when the cube is disabled
+* [KYLIN-2676] - Keep UUID in metadata constant 
+* [KYLIN-2677] - Add project configuration view page
+* [KYLIN-2689] - Only dimension columns can join when creating a model
+* [KYLIN-2691] - Support delete broken cube
+* [KYLIN-2695] - Allow override spark conf in cube
+* [KYLIN-2696] - Check SQL injection in model filter condition
+* [KYLIN-2700] - Allow override Kafka conf at cube level
+* [KYLIN-2704] - StorageCleanupJob should deal with a new metadata path
+* [KYLIN-2742] - Specify login page for Spring security 4.x
+* [KYLIN-2757] - Get cube size when using Azure Data Lake Store
+* [KYLIN-2783] - Refactor CuboidScheduler to be extensible
+* [KYLIN-2784] - Set User-Agent for ODBC/JDBC Drivers
+* [KYLIN-2793] - ODBC Driver - Bypass cert validation when connect to SSL service
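+
+KYLIN-2668 above forwards Calcite connection properties supplied by the JDBC client. A sketch under the assumption that `caseSensitive` is among the accepted properties; the exact set depends on the Calcite version bundled with Kylin:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+public class CalcitePropsExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("org.apache.kylin.jdbc.Driver");
+        Properties props = new Properties();
+        props.put("user", "ADMIN");      // placeholder credentials
+        props.put("password", "KYLIN");
+        // A Calcite connection property (KYLIN-2668); treat it as illustrative
+        props.put("caseSensitive", "false");
+        try (Connection conn = DriverManager.getConnection(
+                "jdbc:kylin://localhost:7070/learn_kylin", props)) {
+            System.out.println("Connected; case-insensitive identifiers requested.");
+        }
+    }
+}
+```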
+
+__Bug__
+
+* [KYLIN-1668] - Rowkey column shouldn't allow delete and add
+* [KYLIN-1683] - Row key could drag and drop in view state of cube - advanced settings tabpage
+* [KYLIN-2472] - Support Unicode chars in kylin.properties
+* [KYLIN-2493] - Fix BufferOverflowException in FactDistinctColumnsMapper when value exceeds 4096 bytes
+* [KYLIN-2540] - concat cascading is not supported
+* [KYLIN-2544] - Fix wrong left join type when editing lookup table
+* [KYLIN-2557] - Fix creating HBase table conflict when multiple kylin instances are starting concurrently
+* [KYLIN-2559] - Enhance check-env.sh to check 'kylin.env.hdfs-working-dir' to be mandatory
+* [KYLIN-2563] - Fix preauthorize-annotation bugs in query authorization
+* [KYLIN-2568] - 'kylin_port_replace_util.sh' should only modify the kylin port and keep other properties unchanged. 
+* [KYLIN-2571] - Return correct driver version from kylin jdbc driver
+* [KYLIN-2572] - Fix parsing 'hive_home' error in 'find-hive-dependency.sh'
+* [KYLIN-2573] - Enhance 'kylin.sh stop' to terminate kylin process finally
+* [KYLIN-2574] - RawQueryLastHacker should group by all possible dimensions
+* [KYLIN-2581] - Fix deadlock bugs in broadcast sync
+* [KYLIN-2582] - 'Server Config' should be refreshed automatically in web page 'System', after we update it successfully. 
+* [KYLIN-2588] - Query failed when two top-n measure with order by count(*) exists in one cube
+* [KYLIN-2589] - Enhance thread-safe in Authentication
+* [KYLIN-2592] - Fix distinct count measure build failed issue with spark cubing 
+* [KYLIN-2593] - Fix NPE issue when querying with Top-N by count(*)
+* [KYLIN-2594] - After reloading metadata, the project list should refresh
+* [KYLIN-2595] - Display column alias name when query with keyword 'As'
+* [KYLIN-2601] - The return type of tinyint for sum measure should be bigint
+* [KYLIN-2605] - Remove the hard-code sample data path in 'sample.sh'
+* [KYLIN-2608] - Bubble sort bug in JoinDesc
+* [KYLIN-2609] - Fix grant role access issue on project page.
+* [KYLIN-2611] - Unclosed HBaseAdmin in AclTableMigrationTool#checkTableExist
+* [KYLIN-2612] - Potential NPE accessing familyMap in AclTableMigrationTool#getAllAceInfo
+* [KYLIN-2613] - Wrong variable is used in DimensionDesc#hashCode
+* [KYLIN-2621] - Fix issue on mapping LDAP group to the admin group
+* [KYLIN-2637] - Show tips after creating project successfully
+* [KYLIN-2641] - The current selected project is incorrect after we delete a project.
+* [KYLIN-2643] - PreparedStatement should be closed in QueryServiceV2#execute()
+* [KYLIN-2644] - Fix "Add Project" after refreshing Insight page
+* [KYLIN-2647] - Should get FileSystem from HBaseConfiguration in HBaseResourceStore
+* [KYLIN-2648] - kylin.env.hdfs-working-dir should be qualified and absolute path
+* [KYLIN-2652] - Make KylinConfig threadsafe in CubeVisitService
+* [KYLIN-2655] - Fix wrong job duration issue when resuming the error or stopped job.
+* [KYLIN-2657] - Fix Cube merge NPE whose TopN dictionary not found
+* [KYLIN-2658] - Unclosed ResultSet in JdbcExplorer#loadTableMetadata()
+* [KYLIN-2660] - Show error tips if load hive error occurs and can not be connected.
+* [KYLIN-2661] - Fix Cube list page display issue when using MODELER or ANALYST
+* [KYLIN-2664] - Fix Extended column bug in web
+* [KYLIN-2670] - Fix CASE WHEN issue in orderby clause
+* [KYLIN-2674] - Should not catch OutOfMemoryError in coprocessor
+* [KYLIN-2678] - Fix minor issues in KylinConfigCLITest
+* [KYLIN-2684] - Fix Object.class not registered in Kyro issue with spark cubing
+* [KYLIN-2687] - When the model has a ready cube, should not allow user to edit model JSON in web.
+* [KYLIN-2688] - When the model has a ready cube, should not allow user to edit model JSON in web.
+* [KYLIN-2693] - Should use overrideHiveConfig for LookupHiveViewMaterialization and RedistributeFlatHiveTable
+* [KYLIN-2694] - Fix ArrayIndexOutOfBoundsException in SparkCubingByLayer
+* [KYLIN-2699] - Tomcat LinkageError for curator-client jar file conflict
+* [KYLIN-2701] - Unclosed PreparedStatement in QueryService#getPrepareOnlySqlResponse
+* [KYLIN-2702] - Ineffective null check in DataModelDesc#initComputedColumns()
+* [KYLIN-2707] - Fix NPE in JobInfoConverter
+* [KYLIN-2708] - Cube merge operations cannot execute successfully
+* [KYLIN-2711] - NPE if job output is lost
+* [KYLIN-2713] - Fix ITJdbcSourceTableLoaderTest.java and ITJdbcTableReaderTest.java missing license header
+* [KYLIN-2719] - serviceStartTime of CubeVisitService should not be an attribute which may be shared by multi-thread
+* [KYLIN-2743] - Potential corrupt TableDesc when loading an existing Hive table
+* [KYLIN-2748] - Calcite-generated code cannot be GC'd, causing OOM
+* [KYLIN-2754] - Fix Sync issue when reload existing hive table
+* [KYLIN-2758] - Query pushdown should be able to skip database prefix
+* [KYLIN-2762] - Get "Owner required" error on saving data model
+* [KYLIN-2767] - 404 error on click "System" tab
+* [KYLIN-2768] - Wrong UI for count distinct measure
+* [KYLIN-2769] - Non-partitioned cube doesn't need show start/end time
+* [KYLIN-2778] - Sample cube doesn't have ACL info
+* [KYLIN-2780] - QueryController.getMetadata and CacheController.wipeCache may be deadlock
+
+
+__Sub-task__
+
+* [KYLIN-2548] - Keep ACL information backward compatible
+
+## v2.0.0 - 2017-04-30
+
+_Tag:_ [kylin-2.0.0](https://github.com/apache/kylin/tree/kylin-2.0.0)
+This is a major release featuring **Spark Cubing** and the **Snowflake Data Model**, and it runs the **TPC-H Benchmark**. Check out [the download](/download/) and the [how to upgrade guide](/docs20/howto/howto_upgrade.html).
+
+__New Feature__
+
+* [KYLIN-744] - Spark Cube Build Engine
+* [KYLIN-2006] - Make job engine distributed and HA
+* [KYLIN-2031] - New Fix_length_Hex encoding to support hash value and better Integer encoding to support negative value
+* [KYLIN-2180] - Add project config and make config priority become "cube > project > server"
+* [KYLIN-2240] - Add a toggle to ignore all cube signature inconsistency temporally
+* [KYLIN-2317] - Hybrid Cube CLI Tools
+* [KYLIN-2331] - By layer Spark cubing
+* [KYLIN-2351] - Support Cloud DFS as kylin.env.hdfs-working-dir
+* [KYLIN-2388] - Hot load kylin config from web
+* [KYLIN-2394] - Upgrade Calcite to 1.11 and Avatica to 1.9
+* [KYLIN-2396] - Percentile pre-aggregation implementation (see the example after this list)
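+
+KYLIN-2396 pre-aggregates percentile counters at build time so that percentile queries can be answered from the cube. A minimal sketch, assuming a cube that defines a percentile measure on `price` (a hypothetical setup; some versions expose the function as `percentile_approx`):
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class PercentileExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("org.apache.kylin.jdbc.Driver");
+        try (Connection conn = DriverManager.getConnection(
+                "jdbc:kylin://localhost:7070/learn_kylin", "ADMIN", "KYLIN");
+             Statement stmt = conn.createStatement();
+             // Answered from the pre-aggregated percentile measure, not raw rows
+             ResultSet rs = stmt.executeQuery(
+                 "SELECT part_dt, percentile(price, 0.95) FROM kylin_sales GROUP BY part_dt")) {
+            while (rs.next()) {
+                System.out.println(rs.getString(1) + " p95 = " + rs.getString(2));
+            }
+        }
+    }
+}
+```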
+
+__Improvement__
+
+* [KYLIN-227] - Support "Pause" on Kylin Job
+* [KYLIN-490] - Support multiple column distinct count
+* [KYLIN-995] - Enable kylin to support joining the same lookup table more than once
+* [KYLIN-1832] - HyperLogLog codec performance improvement
+* [KYLIN-1875] - Snowflake schema support
+* [KYLIN-1971] - Cannot support columns with the same name under different tables
+* [KYLIN-2029] - lookup table support count(distinct column)
+* [KYLIN-2030] - lookup table support group by primary key when no derived dimension
+* [KYLIN-2096] - Support "select version()" SQL statement
+* [KYLIN-2131] - Load Kafka client configuration from properties files
+* [KYLIN-2133] - Check web server port availability when startup
+* [KYLIN-2135] - Enlarge FactDistinctColumns reducer number
+* [KYLIN-2136] - Enhance cubing algorithm selection
+* [KYLIN-2141] - Add include/exclude interface for ResourceTool
+* [KYLIN-2144] - move useful operation tools to org.apache.kylin.tool
+* [KYLIN-2163] - Refine kylin scripts, less verbose during start up
+* [KYLIN-2165] - Use hive table statistics data to get the total count
+* [KYLIN-2169] - Refactor AbstractExecutable to respect KylinConfig
+* [KYLIN-2170] - Mapper/Reducer cleanup() exception handling
+* [KYLIN-2175] - cubestatsreader support reading unfinished segments
+* [KYLIN-2181] - remove integer as fixed_length in test_kylin_cube_with_slr_empty desc
+* [KYLIN-2187] - Enhance TableExt metadata
+* [KYLIN-2192] - More Robust Global Dictionary
+* [KYLIN-2193] - parameterise org.apache.kylin.storage.translate.DerivedFilterTranslator#IN_THRESHOLD
+* [KYLIN-2195] - Setup naming convention for kylin properties
+* [KYLIN-2196] - Update Tomcat class loader to parallel loader
+* [KYLIN-2198] - Add a framework to allow major changes in DimensionEncoding
+* [KYLIN-2205] - Use column name as the default dimension name
+* [KYLIN-2215] - Refactor DimensionEncoding.encode(byte[]) to encode(String)
+* [KYLIN-2217] - Reducers build dictionaries locally
+* [KYLIN-2220] - Enforce same name between Cube & CubeDesc
+* [KYLIN-2222] - web ui uses rest api to decide which dim encoding is valid for different typed columns
+* [KYLIN-2227] - rename kylin-log4j.properties to kylin-tools-log4j.properties and move it to global conf folder
+* [KYLIN-2238] - Add query server scan threshold
+* [KYLIN-2244] - "kylin.job.cuboid.size.memhungry.ratio" shouldn't be applied on measures like TopN
+* [KYLIN-2246] - redesign the way to decide layer cubing reducer count
+* [KYLIN-2248] - TopN merge further optimization after KYLIN-1917
+* [KYLIN-2252] - Enhance project/model/cube name check
+* [KYLIN-2255] - Drop v1 CubeStorageQuery, Storage Engine ID=0
+* [KYLIN-2263] - Display reasonable exception message if could not find kafka dependency for streaming build
+* [KYLIN-2266] - Reduce memory usage for building global dict
+* [KYLIN-2269] - Reduce MR memory usage for global dict
+* [KYLIN-2280] - An easier way to change all the conflicting ports when starting multiple kylin instances on the same server
+* [KYLIN-2283] - Have a general purpose data generation tool
+* [KYLIN-2287] - Speed up model and cube list load in Web
+* [KYLIN-2290] - minor improvements on limit
+* [KYLIN-2294] - Refactor CI, merge with_slr and without_slr cubes
+* [KYLIN-2295] - Refactor CI, blend view cubes into the rest
+* [KYLIN-2296] - Allow cube to override kafka configuration
+* [KYLIN-2304] - Only copy latest version dict for global dict
+* [KYLIN-2306] - Tolerate Class missing when loading job list
+* [KYLIN-2307] - Make HBase 1.x the default of master
+* [KYLIN-2308] - Allow user to set more columnFamily in web
+* [KYLIN-2310] - Refactor CI, add IT for date/time encoding & extended column
+* [KYLIN-2312] - Display Server Config/Environment by order in system tab
+* [KYLIN-2314] - Add Integration Test (IT) for snowflake
+* [KYLIN-2323] - Refine Table load/unload error message
+* [KYLIN-2328] - Reduce the size of metadata uploaded to distributed cache
+* [KYLIN-2338] - refactor BitmapCounter.DataInputByteBuffer
+* [KYLIN-2349] - Serialize BitmapCounter with peekLength
+* [KYLIN-2353] - Serialize BitmapCounter with distinct count
+* [KYLIN-2358] - CuboidReducer has too many "if (aggrMask[i])" checks
+* [KYLIN-2359] - Update job build step name
+* [KYLIN-2364] - Output table name to error info in LookupTable
+* [KYLIN-2375] - Default cache size (10M) is too small
+* [KYLIN-2377] - Add kylin client query timeout
+* [KYLIN-2378] - Set job thread name with job uuid
+* [KYLIN-2379] - Add UseCMSInitiatingOccupancyOnly to KYLIN_JVM_SETTINGS
+* [KYLIN-2380] - Refactor DbUnit assertions
+* [KYLIN-2387] - A new BitmapCounter with better performance
+* [KYLIN-2389] - Improve resource utilization for DistributedScheduler
+* [KYLIN-2393] - Add "hive.auto.convert.join" and "hive.stats.autogather" to kylin_hive_conf.xml
+* [KYLIN-2400] - Simplify Dictionary interface
+* [KYLIN-2404] - Add "hive.merge.mapfiles" and "hive.merge.mapredfiles" to kylin_hive_conf.xml
+* [KYLIN-2409] - Performance tuning for in-mem cubing
+* [KYLIN-2411] - Kill MR job on pause
+* [KYLIN-2414] - Distinguish UHC columns from normal columns in KYLIN-2217
+* [KYLIN-2415] - Change back default metadata name to "kylin_metadata"
+* [KYLIN-2418] - Refactor pom.xml, drop unused parameter
+* [KYLIN-2422] - NumberDictionary support for decimal with extra 0 after "."
+* [KYLIN-2423] - Model should always include PK/FK as dimensions
+* [KYLIN-2424] - Optimize the integration test's performance
+* [KYLIN-2428] - Cleanup unnecessary shaded libraries for job/coprocessor/jdbc/server
+* [KYLIN-2436] - add a configuration knob to disable spilling of aggregation cache
+* [KYLIN-2437] - collect number of bytes scanned to query metrics
+* [KYLIN-2438] - replace scan threshold with max scan bytes
+* [KYLIN-2442] - Re-calculate expansion rate, count raw data size regardless of flat table compression
+* [KYLIN-2443] - Report coprocessor error information back to client
+* [KYLIN-2446] - Support project names filter in DeployCoprocessorCLI
+* [KYLIN-2451] - Set HBASE_RPC_TIMEOUT according to kylin.storage.hbase.coprocessor-timeout-seconds
+* [KYLIN-2489] - Upgrade zookeeper dependency to 3.4.8
+* [KYLIN-2494] - Model has no dup column on dimensions and measures
+* [KYLIN-2501] - Stream Aggregate GTRecords at Query Server
+* [KYLIN-2503] - Spark cubing step should show YARN app link
+* [KYLIN-2518] - Improve the sampling performance of FactDistinctColumns step
+* [KYLIN-2525] - Smooth upgrade to 2.0.0 from older metadata
+* [KYLIN-2527] - Speedup LookupStringTable, use HashMap instead of ConcurrentHashMap
+* [KYLIN-2528] - refine job email notification to support starttls and customized port
+* [KYLIN-2529] - Allow thread-local override of KylinConfig
+* [KYLIN-2545] - Number2BytesConverter could tolerate malformed numbers
+* [KYLIN-2560] - Fix license headers for 2.0.0 release
+
+__Bug__
+
+* [KYLIN-1603] - Building job still finished even when an MR job error happened.
+* [KYLIN-1770] - Upgrade Calcite dependency (v1.10)
+* [KYLIN-1793] - Job couldn't stop when hive commands got error with beeline
+* [KYLIN-1945] - Cuboid.translateToValidCuboid method throws exception during cube build or query execution
+* [KYLIN-2077] - Inconsistent cube desc signature for CubeDesc
+* [KYLIN-2153] - Allow user to skip the check in CubeMetaIngester
+* [KYLIN-2155] - get-properties.sh doesn't support parameters starting with "-n"
+* [KYLIN-2166] - Unclosed HBaseAdmin in StorageCleanupJob#cleanUnusedHBaseTables
+* [KYLIN-2172] - Potential NPE in ExecutableManager#updateJobOutput
+* [KYLIN-2174] - Partition column format visibility issue
+* [KYLIN-2176] - org.apache.kylin.rest.service.JobService#submitJob will leave orphan NEW segment in cube when exception is met
+* [KYLIN-2191] - Integer encoding error for width from 5 to 7
+* [KYLIN-2197] - Has only base cuboid for some cube desc
+* [KYLIN-2202] - Fix the conflict between KYLIN-1851 and KYLIN-2135
+* [KYLIN-2207] - Ineffective null check in ExtendCubeToHybridCLI#createFromCube()
+* [KYLIN-2208] - Unclosed FileReader in HiveCmdBuilder#build()
+* [KYLIN-2209] - Potential NPE in StreamingController#deserializeTableDesc()
+* [KYLIN-2211] - IDictionaryValueEnumerator should return String instead of byte[]
+* [KYLIN-2212] - 'NOT' operator in filter on derived column may get incorrect result
+* [KYLIN-2213] - UnsupportedOperationException when executing 'not like' query on cube v1
+* [KYLIN-2216] - Potential NPE in model#findTable() call
+* [KYLIN-2224] - "select * from fact inner join lookup " does not return values for look up columns
+* [KYLIN-2232] - Cannot set partition date column pattern when editing a model
+* [KYLIN-2236] - JDBC statement.setMaxRows(10) is not working
+* [KYLIN-2237] - Ensure dimensions and measures of model don't have null column
+* [KYLIN-2242] - Directly write hdfs file in reducer is dangerous
+* [KYLIN-2243] - TopN memory estimation is inaccurate in some cases
+* [KYLIN-2251] - JDBC Driver httpcore dependency conflict
+* [KYLIN-2254] - A kind of sub-query does not work
+* [KYLIN-2262] - Get "null" error when trigger a build with wrong cube name
+* [KYLIN-2268] - Potential NPE in ModelDimensionDesc#init()
+* [KYLIN-2271] - Purge cube may delete building segments
+* [KYLIN-2275] - Remove dimensions cause wrong remove in advance settings
+* [KYLIN-2277] - SELECT * query returns a "COUNT__" column, which is not expected
+* [KYLIN-2282] - Step name "Build N-Dimension Cuboid Data : N-Dimension" is inaccurate
+* [KYLIN-2284] - intersect_count function error
+* [KYLIN-2288] - Kylin treats empty string as an error measure, which is inconsistent with hive
+* [KYLIN-2292] - workaround for CALCITE-1540
+* [KYLIN-2297] - Manually edit cube segment start/end time will throw error in UI
+* [KYLIN-2298] - timer component get wrong seconds
+* [KYLIN-2300] - Show MapReduce waiting time for each build step
+* [KYLIN-2301] - ERROR when executing query with subquery in "NOT IN" clause.
+* [KYLIN-2305] - Unable to use long searchBase/Pattern for LDAP
+* [KYLIN-2313] - Cannot find a cube in a subquery case with count distinct
+* [KYLIN-2316] - Build Base Cuboid Data ERROR
+* [KYLIN-2320] - TrieDictionaryForest incorrect getSizeOfId() when empty dictionary
+* [KYLIN-2326] - ERROR: ArrayIndexOutOfBoundsException: -1
+* [KYLIN-2329] - Between 0.06 - 0.01 and 0.06 + 0.01, returns incorrect result
+* [KYLIN-2330] - CubeDesc returns redundant DerivedInfo
+* [KYLIN-2337] - Remove expensive toString in SortedIteratorMergerWithLimit
+* [KYLIN-2340] - Some subquery returns incorrect result
+* [KYLIN-2341] - sum(case .. when ..) is not supported
+* [KYLIN-2342] - When NoClassDefFoundError occurred in building cube, no error in kylin.log
+* [KYLIN-2343] - When syncing a hive table, got an error but actually the table is synced
+* [KYLIN-2347] - TPC-H query 13, too many HLLC objects exceed memory budget
+* [KYLIN-2348] - TPC-H query 20, requires multiple models in one query
+* [KYLIN-2356] - Incorrect result when filter on numeric columns
+* [KYLIN-2357] - Make ERROR_RECORD_LOG_THRESHOLD configurable
+* [KYLIN-2362] - Unify shell interpreter in scripts to avoid syntax diversity
+* [KYLIN-2367] - raw query like select * where ... returns empty columns
+* [KYLIN-2376] - Upgrade checkstyle plugin
+* [KYLIN-2382] - The column order of "select *" is not as defined in the table
+* [KYLIN-2383] - count distinct should not include NULL
+* [KYLIN-2390] - Wrong argument order for WinAggResetContextImpl()
+* [KYLIN-2391] - Unclosed FileInputStream in KylinConfig#getConfigAsString()
+* [KYLIN-2395] - Lots of warning messages about failing to scan jars in kylin.out
+* [KYLIN-2406] - TPC-H query 20, prevent NPE and give error hint
+* [KYLIN-2407] - TPC-H query 20, routing bug in lookup query and cube query
+* [KYLIN-2410] - Global dictionary does not respect the Hadoop configuration in mapper & reducer
+* [KYLIN-2416] - Max LDAP password length is 15 chars
+* [KYLIN-2419] - Rollback KYLIN-2292 workaround
+* [KYLIN-2426] - Tests will fail if env not satisfy hardcoded path in ITHDFSResourceStoreTest
+* [KYLIN-2429] - Variable initialized should be declared volatile in SparkCubingByLayer#execute()
+* [KYLIN-2430] - Unnecessary exception catching in BulkLoadJob
+* [KYLIN-2432] - Couldn't select partition column in some old browser (such as Google Chrome 18.0.1025.162)
+* [KYLIN-2433] - Handle the column that all records is null in MergeCuboidMapper
+* [KYLIN-2434] - Spark cubing does not respect config kylin.source.hive.database-for-flat-table
+* [KYLIN-2440] - Query failed if join condition columns not appear on cube
+* [KYLIN-2448] - Cloning a Model with a '-' in the name
+* [KYLIN-2449] - Rewrite should not run on OLAPAggregateRel if has no OLAPTable
+* [KYLIN-2452] - Throw NoSuchElementException when AggregationGroup size is 0
+* [KYLIN-2454] - Data generation tool will fail if column name is hive reserved keyword
+* [KYLIN-2457] - Should copy the latest dictionaries on dimension tables in a batch merge job
+* [KYLIN-2462] - PK and FK both as dimensions causes save cube failure
+* [KYLIN-2464] - Use ConcurrentMap instead of ConcurrentHashMap to avoid runtime errors
+* [KYLIN-2465] - Web page still has "Streaming cube build is not supported on UI" statements
+* [KYLIN-2474] - Build snapshot should check lookup PK uniqueness
+* [KYLIN-2481] - NoRealizationFoundException when there are similar cubes and models
+* [KYLIN-2487] - IN condition will convert to subquery join when its elements number exceeds 20
+* [KYLIN-2490] - Couldn't get cube size on Azure HDInsight
+* [KYLIN-2491] - Cube with error job can be dropped
+* [KYLIN-2502] - "Create flat table" and "redistribute table" steps don't show YARN application link
+* [KYLIN-2504] - Clone cube didn't keep the "engine_type" property
+* [KYLIN-2508] - Translate the time to UTC when setting the range for building a cube
+* [KYLIN-2510] - Unintended NPE in CubeMetaExtractor#requireProject()
+* [KYLIN-2514] - Joins in data model fail to save when they are out of order
+* [KYLIN-2516] - a table field can not be used as both dimension and measure in kylin 2.0
+* [KYLIN-2530] - Build cube failed with NoSuchObjectException, hive table not found 'default.kylin_intermediate_xxxx'
+* [KYLIN-2536] - Replace the use of org.codehaus.jackson
+* [KYLIN-2537] - HBase Read/Write separation bug introduced by KYLIN-2351
+* [KYLIN-2539] - Useless filter dimension will impact cuboid selection.
+* [KYLIN-2541] - Beeline SQL not printed in logs
+* [KYLIN-2543] - Still build dictionary for TopN group by column even using non-dict encoding
+* [KYLIN-2555] - Minor issues about ACL and granted authority
+
+__Task__
+
+* [KYLIN-1799] - Add a document to set up kylin on the spark engine
+* [KYLIN-2293] - Refactor KylinConfig to remove test related code
+* [KYLIN-2327] - Enable check-style for test code
+* [KYLIN-2344] - Package spark into Kylin binary package
+* [KYLIN-2368] - Enable Findbugs plugin
+* [KYLIN-2386] - Revert KYLIN-2349 and KYLIN-2353
+* [KYLIN-2521] - upgrade to calcite 1.12.0
+
+
+## v1.6.0 - 2016-11-26
+
+_Tag:_ [kylin-1.6.0](https://github.com/apache/kylin/tree/kylin-1.6.0)
+This is a major release with better support for using Apache Kafka as data source. Check [how to upgrade](/docs16/howto/howto_upgrade.html) for upgrade instructions.
+
+__New Feature__
+
+* [KYLIN-1726] - Scalable streaming cubing
+* [KYLIN-1919] - Support Embedded Structure when Parsing Streaming Message
+* [KYLIN-2055] - Add an encoder for Boolean type
+* [KYLIN-2067] - Add API to check and fill segment holes
+* [KYLIN-2079] - add explicit configuration knob for coprocessor timeout
+* [KYLIN-2088] - Support intersect count for calculation of retention or conversion rates (see the example after this list)
+* [KYLIN-2125] - Support using beeline to load hive table metadata
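+
+KYLIN-2088 above introduces `intersect_count`, which intersects precise count-distinct bitmaps across filter values to compute retention or conversion rates. A sketch under assumed names — `visit_log`, `user_id`, and `part_dt` are hypothetical, and the cube must define a bitmap count-distinct measure on the counted column:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class RetentionExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("org.apache.kylin.jdbc.Driver");
+        try (Connection conn = DriverManager.getConnection(
+                "jdbc:kylin://localhost:7070/my_project", "ADMIN", "KYLIN");
+             Statement stmt = conn.createStatement();
+             // intersect_count(column, filter_column, array[values...]) counts the
+             // users seen on every listed day, i.e. two-day retention here
+             ResultSet rs = stmt.executeQuery(
+                 "SELECT intersect_count(user_id, part_dt, array['2016-10-01', '2016-10-02']) "
+                     + "FROM visit_log WHERE part_dt IN ('2016-10-01', '2016-10-02')")) {
+            if (rs.next()) {
+                System.out.println("retained users = " + rs.getLong(1));
+            }
+        }
+    }
+}
+```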
+
+__Bug__
+
+* [KYLIN-1565] - Read the kv max size from HBase config
+* [KYLIN-1820] - Column autocomplete should remove the user input in model designer
+* [KYLIN-1828] - java.lang.StringIndexOutOfBoundsException in org.apache.kylin.storage.hbase.util.StorageCleanupJob
+* [KYLIN-1967] - Dictionary rounding can cause IllegalArgumentException in GTScanRangePlanner
+* [KYLIN-1978] - kylin.sh compatible issue on Ubuntu
+* [KYLIN-1990] - The SweetAlert at the front page may overflow the page if the content is too long.
+* [KYLIN-2007] - CUBOID_CACHE is not cleared when rebuilding ALL cache
+* [KYLIN-2012] - more robust approach to hive schema changes
+* [KYLIN-2024] - kylin TopN only supports the first measure
+* [KYLIN-2027] - Error "connection timed out" occurs when zookeeper's port is set in hbase.zookeeper.quorum of hbase-site.xml
+* [KYLIN-2028] - find-*-dependency script fail on Mac OS
+* [KYLIN-2035] - Auto Merge Submit Continuously
+* [KYLIN-2041] - Wrong parameter definition in Get Hive Tables REST API
+* [KYLIN-2043] - Rollback httpclient to 4.2.5 to align with Hadoop 2.6/2.7
+* [KYLIN-2044] - Unclosed DataInputByteBuffer in BitmapCounter#peekLength
+* [KYLIN-2045] - Wrong argument order in JobInstanceExtractor#executeExtract()
+* [KYLIN-2047] - Ineffective null check in MetadataManager
+* [KYLIN-2050] - Potentially ineffective call to close() in QueryCli
+* [KYLIN-2051] - Potentially ineffective call to IOUtils.closeQuietly()
+* [KYLIN-2052] - When editing a "Top N" measure, the "group by" column wasn't displayed
+* [KYLIN-2059] - Concurrent build issue in CubeManager.calculateToBeSegments()
+* [KYLIN-2069] - NPE in LookupStringTable
+* [KYLIN-2078] - Can't see generated SQL at Web UI
+* [KYLIN-2084] - Unload sample table failed
+* [KYLIN-2085] - PrepareStatement return incorrect result in some cases
+* [KYLIN-2086] - Still report error when there are more than 12 dimensions in one agg group
+* [KYLIN-2093] - Clear cache in CubeMetaIngester
+* [KYLIN-2097] - Get 'Column does not exist in row key desc" on cube has TopN measure
+* [KYLIN-2099] - Import table error of sample table KYLIN_CAL_DT
+* [KYLIN-2106] - UI bug - Advanced Settings - Rowkeys - new Integer dictionary encoding - could possibly impact also cube metadata
+* [KYLIN-2109] - Deploy coprocessor only if this server owns the table
+* [KYLIN-2110] - Ineffective comparison in BooleanDimEnc#equals()
+* [KYLIN-2114] - WEB-Global-Dictionary bug fix and improve
+* [KYLIN-2115] - some extended column query returns wrong answer
+* [KYLIN-2116] - When the hive field delimiter exists in table field values, field order is wrong
+* [KYLIN-2119] - Wrong chart value and sort when processing scientific notation
+* [KYLIN-2120] - kylin 1.5.4.1 with cdh 5.7: cube SQL fails with "Oops, failed to take action"
+* [KYLIN-2121] - Failed to pull data to PowerBI or Excel on some query
+* [KYLIN-2127] - UI bug fix for Extend Column
+* [KYLIN-2130] - QueryMetrics concurrent bug fix
+* [KYLIN-2132] - Unable to pull data from Kylin Cube ( learn_kylin cube ) to Excel or Power BI for Visualization and some dimensions are not showing up.
+* [KYLIN-2134] - Kylin will treat empty string as NULL by mistake
+* [KYLIN-2137] - Failed to run mr job when user put a kafka jar in hive's lib folder
+* [KYLIN-2138] - Unclosed ResultSet in BeelineHiveClient
+* [KYLIN-2146] - "Streaming Cluster" page should remove "Margin" inputbox
+* [KYLIN-2152] - TopN group by column does not distinguish between NULL and ""
+* [KYLIN-2154] - source table rows will be skipped if TOPN's group column contains NULL values
+* [KYLIN-2158] - Deleting a joint dimension does not work correctly
+* [KYLIN-2159] - Redistribute Hive Table step always requires row_count filename as 000000_0
+* [KYLIN-2167] - FactDistinctColumnsReducer may get wrong max/min partition col value
+* [KYLIN-2173] - push down limit leads to wrong answer when filter is loosened
+* [KYLIN-2178] - CubeDescTest is unstable
+* [KYLIN-2201] - Cube desc and aggregation group rule combination max check fail
+* [KYLIN-2226] - Build Dimension Dictionary Error
+
+__Improvement__
+
+* [KYLIN-1042] - Horizontal scalable solution for streaming cubing
+* [KYLIN-1827] - Send mail notification when runtime exception throws during build/merge cube
+* [KYLIN-1839] - Improve setting classpath before submitting MR job
+* [KYLIN-1917] - TopN counter merge performance improvement
+* [KYLIN-1962] - Split kylin.properties into two files
+* [KYLIN-1999] - Use some compression at UT/IT
+* [KYLIN-2019] - Add license checker into checkstyle rule
+* [KYLIN-2033] - Refactor broadcast of metadata change
+* [KYLIN-2042] - QueryController puts entry in Cache w/o checking QueryCacheEnabled
+* [KYLIN-2054] - TimedJsonStreamParser should support other time format
+* [KYLIN-2068] - Import hive comment when sync tables
+* [KYLIN-2070] - UI changes for allowing concurrent build/refresh/merge
+* [KYLIN-2073] - Need timestamp info for diagnosis
+* [KYLIN-2075] - TopN measure: need select "constant" + "1" as the SUM|ORDER parameter
+* [KYLIN-2076] - Improve sample cube and data
+* [KYLIN-2080] - UI: allow multiple building jobs for the same cube
+* [KYLIN-2082] - Support to change streaming configuration
+* [KYLIN-2089] - Make update HBase coprocessor concurrent
+* [KYLIN-2090] - Allow updating cube level config even the cube is ready
+* [KYLIN-2091] - Add API to init the start-point (of each parition) for streaming cube
+* [KYLIN-2095] - Hive MR job uses MR job configuration overridden by cube properties
+* [KYLIN-2098] - TopN support query UHC column without sorting by sum value
+* [KYLIN-2100] - Allow cube to override HIVE job configuration by properties
+* [KYLIN-2108] - Support usage of schema name "default" in SQL
+* [KYLIN-2111] - Only allow columns from model dimensions when adding group-by column to TOP_N
+* [KYLIN-2112] - Allow a column be a dimension as well as "group by" column in TopN measure
+* [KYLIN-2113] - Need sort by columns in SQLDigest
+* [KYLIN-2118] - Allow user to view CubeInstance JSON even when cube is ready
+* [KYLIN-2122] - Move the partition offset calculation before submitting job
+* [KYLIN-2126] - use column name as default dimension name when auto generate dimension for lookup table
+* [KYLIN-2140] - rename packaged js with different name when build
+* [KYLIN-2143] - Allow more options for Extended Columns, COUNT_DISTINCT, RAW_TABLE
+* [KYLIN-2162] - Improve the cube validation error message
+* [KYLIN-2221] - rethink on KYLIN-1684
+* [KYLIN-2083] - more RAM estimation test for MeasureAggregator and GTAggregateScanner
+* [KYLIN-2105] - add QueryId
+* [KYLIN-1321] - Add derived checkbox for lookup table columns on Auto Generate Dimensions panel
+* [KYLIN-1995] - Upgrade MapReduce properties which are deprecated
+
+__Task__
+
+* [KYLIN-2072] - Cleanup old streaming code
+* [KYLIN-2081] - UI change to support embedded streaming message
+* [KYLIN-2171] - Release 1.6.0
+
+
+## v1.5.4.1 - 2016-09-28
+_Tag:_ [kylin-1.5.4.1](https://github.com/apache/kylin/tree/kylin-1.5.4.1)
+This version fixes several major bugs introduced in 1.5.4; the metadata and HBase coprocessor are compatible with 1.5.4.
+
+__Bug__
+
+* [KYLIN-2010] - Date dictionary return wrong SQL result
+* [KYLIN-2026] - NPE occurs when build a cube without partition column
+* [KYLIN-2032] - Cube build failed when partition column isn't in dimension list
+
+## v1.5.4 - 2016-09-15
+_Tag:_ [kylin-1.5.4](https://github.com/apache/kylin/tree/kylin-1.5.4)
+This version includes bug fixes and enhancements as well as new features. It is backward compatible with v1.5.3, but after upgrading you still need to update the HBase coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
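+
+A minimal sketch of that coprocessor update step, assuming the v1.5.x binary package layout (the jar path and arguments may differ across versions; the linked guide is authoritative):
+
+```sh
+# Redeploy the Kylin HBase coprocessor to all cube HTables
+# after upgrading the Kylin binaries.
+$KYLIN_HOME/bin/kylin.sh \
+  org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI \
+  $KYLIN_HOME/lib/kylin-coprocessor-*.jar all
+```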
+
+__New Feature__
+
+* [KYLIN-1732] - Support Window Function
+* [KYLIN-1767] - UI for TopN: specify encoding and multiple "group by"
+* [KYLIN-1849] - Search cube by name in Web UI
+* [KYLIN-1908] - Collect Metrics to JMX
+* [KYLIN-1921] - Support Grouping Functions
+* [KYLIN-1964] - Add a companion tool of CubeMetaExtractor for cube importing
+
+__Bug__
+
+* [KYLIN-962] - [UI] Cube Designer can't drag rowkey normally
+* [KYLIN-1194] - Filter(CubeName) on Jobs/Monitor page works only once
+* [KYLIN-1488] - When modifying a model, saving after deleting a lookup table pops up an internal error.
+* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
+* [KYLIN-1808] - Unloading a non-existing table causes NPE
+* [KYLIN-1834] - java.lang.IllegalArgumentException: Value not exists! - in Step 4 - Build Dimension Dictionary
+* [KYLIN-1883] - Consensus Problem when running the tool, MetadataCleanupJob
+* [KYLIN-1889] - Didn't deal with the failure of renaming folder in hdfs when running the tool CubeMigrationCLI
+* [KYLIN-1929] - Error to load slow query in "Monitor" page for non-admin user
+* [KYLIN-1933] - Deploy in cluster mode, the "query" node reports "scheduler has not been started" every second
+* [KYLIN-1934] - 'Value not exist' During Cube Merging Caused by Empty Dict
+* [KYLIN-1939] - Linkage error while executing any queries
+* [KYLIN-1942] - Models are missing after change project's name
+* [KYLIN-1953] - Error handling for diagnosis
+* [KYLIN-1956] - Can't query from child cube of a hybrid cube after its status changed from disabled to enabled
+* [KYLIN-1961] - Project name is always constant instead of real project name in email notification
+* [KYLIN-1970] - System Menu UI ACL issue
+* [KYLIN-1972] - Access denied when query seek to hybrid
+* [KYLIN-1973] - java.lang.NegativeArraySizeException when Build Dimension Dictionary
+* [KYLIN-1982] - CubeMigrationCLI: associate model with project
+* [KYLIN-1986] - CubeMigrationCLI: make global dictionary unique
+* [KYLIN-1992] - Clear ThreadLocal Contexts when query failed before scanning HBase
+* [KYLIN-1996] - Keep original column order when designing cube
+* [KYLIN-1998] - Job engine lock is not release at shutdown
+* [KYLIN-2003] - error start time at query result page
+* [KYLIN-2005] - Move all storage side behavior hints to GTScanRequest
+
+__Improvement__
+
+* [KYLIN-672] - Add Env and Project Info in job email notification
+* [KYLIN-1702] - The Key of the Snapshot to the related lookup table may be not informative
+* [KYLIN-1855] - Should exclude joins whose related lookup tables have no dimensions used in the cube
+* [KYLIN-1858] - Remove all InvertedIndex(Streaming purpose) related codes and tests
+* [KYLIN-1866] - Add tip for field at 'Add Streaming' table page.
+* [KYLIN-1867] - Upgrade dependency libraries
+* [KYLIN-1874] - Make roaring bitmap version determined
+* [KYLIN-1898] - Upgrade to Avatica 1.8 or higher
+* [KYLIN-1904] - WebUI for GlobalDictionary
+* [KYLIN-1906] - Add more comments and default value for kylin.properties
+* [KYLIN-1910] - Support Separate HBase Cluster with NN HA and Kerberos Authentication
+* [KYLIN-1920] - Add view CubeInstance json function
+* [KYLIN-1922] - Improve the logic to decide whether to pre aggregate on Region server
+* [KYLIN-1923] - Add access controller to query
+* [KYLIN-1924] - Region server metrics: replace int type with long type for scanned row count
+* [KYLIN-1925] - Do not allow cross project clone for cube
+* [KYLIN-1926] - Loosen the constraint on FK-PK data type matching
+* [KYLIN-1936] - Improve enable limit logic (exactAggregation is too strict)
+* [KYLIN-1940] - Add owner for DataModel
+* [KYLIN-1941] - Show submitter for slow query
+* [KYLIN-1954] - BuildInFunctionTransformer should be executed per CubeSegmentScanner
+* [KYLIN-1963] - Delegate the loading of certain package (like slf4j) to tomcat's parent classloader
+* [KYLIN-1965] - Check duplicated measure name
+* [KYLIN-1966] - Refactor IJoinedFlatTableDesc
+* [KYLIN-1979] - Move hackNoGroupByAggregation to cube-based storage implementations
+* [KYLIN-1984] - Don't use compression in packaging configuration
+* [KYLIN-1985] - SnapshotTable should only keep the columns described in tableDesc
+* [KYLIN-1997] - Add pivot feature back in query result page
+* [KYLIN-2004] - Make the creating intermediate hive table steps configurable (two options)
+
+## v1.5.3 - 2016-07-28
+_Tag:_ [kylin-1.5.3](https://github.com/apache/kylin/tree/kylin-1.5.3)
+This version includes many bug fixes and enhancements as well as new features. It is backward compatible with v1.5.2, but after upgrading you need to update the HBase coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__New Feature__
+
+* [KYLIN-1478] - TopN measure should support non-dictionary encoding for ultra high cardinality
+* [KYLIN-1693] - Support multiple group-by columns for TOP_N measure
+* [KYLIN-1752] - Add an option to fail cube build job when source table is empty
+* [KYLIN-1756] - Allow user to run MR jobs against different Hadoop queues
+
+__Bug__
+
+* [KYLIN-1499] - Couldn't save query, error in backend
+* [KYLIN-1568] - Calculate row value buffer size instead of hard coded ROWVALUE_BUFFER_SIZE
+* [KYLIN-1645] - Exception inside coprocessor should report back to the query thread
+* [KYLIN-1646] - Column appeared twice if it was declared as both dimension and measure
+* [KYLIN-1676] - High CPU in TrieDictionary due to incorrect use of HashMap
+* [KYLIN-1679] - bin/get-properties.sh cannot get property which contains space or equals sign
+* [KYLIN-1684] - Query on table "kylin_sales" returns empty resultset after cube "kylin_sales_cube" generated by sample.sh is ready
+* [KYLIN-1694] - make multiply coefficient configurable when estimating cuboid size
+* [KYLIN-1695] - Skip cardinality calculation job when loading hive table
+* [KYLIN-1703] - The not-thread-safe ToolRunner.run() will cause concurrency issue in job engine
+* [KYLIN-1704] - When load empty snapshot, NULL Pointer Exception occurs
+* [KYLIN-1723] - GTAggregateScanner$Dump.flush() must not write the WHOLE metrics buffer
+* [KYLIN-1738] - MRJob Id is not saved to kylin jobs if MR job is killed
+* [KYLIN-1742] - kylin.sh should always set KYLIN_HOME to an absolute path
+* [KYLIN-1755] - TopN Measure IndexOutOfBoundsException
+* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
+* [KYLIN-1762] - Query threw NPE with 3 or more join conditions
+* [KYLIN-1769] - There is no response when click "Property" button at Cube Designer
+* [KYLIN-1777] - Streaming cube build shouldn't check working segment
+* [KYLIN-1780] - Potential issue in SnapshotTable.equals()
+* [KYLIN-1781] - kylin.properties encoding error when it contains Chinese property key or value
+* [KYLIN-1783] - Can't add override property at cube design 'Configuration Overwrites' step.
+* [KYLIN-1785] - NoSuchElementException when Mandatory Dimensions contains all Dimensions
+* [KYLIN-1787] - Properly deal with limit clause in CubeHBaseEndpointRPC (SELECT * problem)
+* [KYLIN-1788] - Allow arbitrary number of mandatory dimensions in one aggregation group
+* [KYLIN-1789] - Couldn't use View as Lookup when join type is "inner"
+* [KYLIN-1795] - bin/sample.sh doesn't work when configured hive client is beeline
+* [KYLIN-1800] - IllegalArgumentException: Too many digits for NumberDictionary: -0.009999999999877218. Expect 19 digits before decimal point at max.
+* [KYLIN-1803] - ExtendedColumn Measure Encoding with Non-ascii Characters
+* [KYLIN-1811] - Error step may be skipped sometimes when resuming a cube job
+* [KYLIN-1816] - More than one base KylinConfig exist in spring JVM
+* [KYLIN-1817] - No result from JDBC with Date filter in prepareStatement
+* [KYLIN-1838] - Fix sample cube definition
+* [KYLIN-1848] - Can't sort cubes by any field in Web UI
+* [KYLIN-1862] - "table not found" in "Build Dimension Dictionary" step
+* [KYLIN-1879] - RestAPI /api/jobs always returns 0 for exec_start_time and exec_end_time fields
+* [KYLIN-1882] - It reports "can't find the intermediate table" in '#4 Step Name: Build Dimension Dictionary' when using a Hive view as lookup table
+* [KYLIN-1896] - JDBC support mybatis
+* [KYLIN-1905] - Wrong Default Date in Cube Build Web UI
+* [KYLIN-1909] - Wrong access control to rest get cubes
+* [KYLIN-1911] - NPE when extended column has NULL value
+* [KYLIN-1912] - Create Intermediate Flat Hive Table failed when using beeline
+* [KYLIN-1913] - query log printed abnormally if the query contains "\r" (not "\r\n")
+* [KYLIN-1918] - java.lang.UnsupportedOperationException when unload hive table
+
+__Improvement__
+
+* [KYLIN-1319] - Find a better way to check hadoop job status
+* [KYLIN-1379] - More stable and functional precise count distinct implements after KYLIN-1186
+* [KYLIN-1656] - Improve performance of MRv2 engine by making each mapper handles a configured number of records
+* [KYLIN-1657] - Add new configuration kylin.job.mapreduce.min.reducer.number
+* [KYLIN-1669] - Deprecate the "Capacity" field from DataModel
+* [KYLIN-1677] - Distribute source data by certain columns when creating flat table
+* [KYLIN-1705] - Global (and more scalable) dictionary
+* [KYLIN-1706] - Allow cube to override MR job configuration by properties
+* [KYLIN-1714] - Make job/source/storage engines configurable from kylin.properties
+* [KYLIN-1717] - Make job engine scheduler configurable
+* [KYLIN-1718] - Grow ByteBuffer Dynamically in Cube Building and Query
+* [KYLIN-1719] - Add config in scan request to control compress the query result or not
+* [KYLIN-1724] - Support Amazon EMR
+* [KYLIN-1725] - Use KylinConfig inside coprocessor
+* [KYLIN-1728] - Introduce dictionary metadata
+* [KYLIN-1731] - Allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
+* [KYLIN-1747] - Calculate all 0 (except mandatory) cuboids
+* [KYLIN-1749] - Allow mandatory only cuboid
+* [KYLIN-1751] - Make kylin log configurable
+* [KYLIN-1766] - CubeTupleConverter.translateResult() is slow due to date conversion
+* [KYLIN-1775] - Add Cube Migrate Support for Global Dictionary
+* [KYLIN-1782] - API redesign for CubeDesc
+* [KYLIN-1786] - Frontend work for KYLIN-1313 (extended columns as measure)
+* [KYLIN-1792] - behaviours for non-aggregated queries
+* [KYLIN-1805] - It easily gets stuck when deleting HTables while running the StorageCleanupJob
+* [KYLIN-1815] - Cleanup package size
+* [KYLIN-1818] - change kafka dependency to provided
+* [KYLIN-1821] - Reformat all of the java files and enable checkstyle to enforce code formatting
+* [KYLIN-1823] - refactor kylin-server packaging
+* [KYLIN-1846] - minimize dependencies of JDBC driver
+* [KYLIN-1884] - Reload metadata automatically after migrating cube
+* [KYLIN-1894] - GlobalDictionary may corrupt when server suddenly crash
+* [KYLIN-1744] - Separate concepts of source offset and date range on cube segments
+* [KYLIN-1654] - Upgrade httpclient dependency
+* [KYLIN-1774] - Update Kylin's tomcat version to 7.0.69
+* [KYLIN-1861] - Hive may fail to create flat table with "GC overhead error"
+
+## v1.5.2.1 - 2016-06-07
+_Tag:_ [kylin-1.5.2.1](https://github.com/apache/kylin/tree/kylin-1.5.2.1)
+
+This is a hot-fix release on top of v1.5.2; no new features are introduced. Please upgrade to this version.
+
+__Bug__
+
+* [KYLIN-1758] - createLookupHiveViewMaterializationStep will create intermediate table for fact table
+* [KYLIN-1739] - kylin_job_conf_inmem.xml can impact non-inmem MR job
+
+
+## v1.5.2 - 2016-05-26
+_Tag:_ [kylin-1.5.2](https://github.com/apache/kylin/tree/kylin-1.5.2)
+
+This version is backward compatible with v1.5.1. But after upgrading to v1.5.2 from v1.5.1, you need to update the HBase coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__Highlights__
+
+* [KYLIN-1077] - Support Hive View as Lookup Table
+* [KYLIN-1515] - Make Kylin run on MapR
+* [KYLIN-1600] - Download diagnosis zip from GUI
+* [KYLIN-1672] - support kylin on cdh 5.7
+
+__New Feature__
+
+* [KYLIN-1016] - Count distinct on any dimension should work even not a predefined measure
+* [KYLIN-1077] - Support Hive View as Lookup Table
+* [KYLIN-1441] - Display time column as partition column
+* [KYLIN-1515] - Make Kylin run on MapR
+* [KYLIN-1600] - Download diagnosis zip from GUI
+* [KYLIN-1672] - support kylin on cdh 5.7
+
+__Improvement__
+
+* [KYLIN-869] - Enhance mail notification
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-1313] - Enable deriving dimensions on non PK/FK
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1340] - Tools to extract all cube/hybrid/project related metadata to facilitate diagnosing/debugging/sharing
+* [KYLIN-1381] - change RealizationCapacity from three profiles to specific numbers
+* [KYLIN-1391] - quicker and better response to v2 storage engine's rpc timeout exception
+* [KYLIN-1418] - Memory hungry cube should select LAYER and INMEM cubing smartly
+* [KYLIN-1432] - For GUI, to add one option "yyyy-MM-dd HH:MM:ss" for Partition Date Column
+* [KYLIN-1453] - cuboid sharding based on specific column
+* [KYLIN-1487] - attach a hyperlink to introduce new aggregation group
+* [KYLIN-1526] - Move query cache back to query controller level
+* [KYLIN-1542] - Hfile owner is not hbase
+* [KYLIN-1544] - Make hbase encoding and block size configurable just like hbase compression
+* [KYLIN-1561] - Refactor storage engine(v2) to be extension friendly
+* [KYLIN-1566] - Add and use a separate kylin_job_conf.xml for in-mem cubing
+* [KYLIN-1567] - Front-end work for KYLIN-1557
+* [KYLIN-1578] - Coprocessor thread voluntarily stop itself when it reaches timeout
+* [KYLIN-1579] - IT preparation classes like BuildCubeWithEngine should exit with status code upon build exception
+* [KYLIN-1580] - Use 1 byte instead of 8 bytes as column indicator in fact distinct MR job
+* [KYLIN-1584] - Specify region cut size in cubedesc and leave the RealizationCapacity in model as a hint
+* [KYLIN-1585] - make MAX_HBASE_FUZZY_KEYS in GTScanRangePlanner configurable
+* [KYLIN-1587] - show cube level configuration overwrites properties in CubeDesigner
+* [KYLIN-1591] - enabling different block size setting for small column families
+* [KYLIN-1599] - Add "isShardBy" flag in rowkey panel
+* [KYLIN-1601] - Need not to shrink scan cache when hbase rows can be large
+* [KYLIN-1602] - User could dump hbase usage for diagnosis
+* [KYLIN-1614] - Bring more information in diagnosis tool
+* [KYLIN-1621] - Use deflate level 1 to enable compression "on the fly"
+* [KYLIN-1623] - Make the hll precision for data sampling configurable
+* [KYLIN-1624] - HyperLogLogPlusCounter will become inaccurate when there're billions of entries
+* [KYLIN-1625] - GC log overwrites old one after restart Kylin service
+* [KYLIN-1627] - add backdoor toggle to dump binary cube storage response for further analysis
+* [KYLIN-1731] - Allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
+
+__Bug__
+
+* [KYLIN-989] - column width is too narrow for timestamp field
+* [KYLIN-1197] - cube data not updated after purge
+* [KYLIN-1305] - Can not get more than one system admin email in config
+* [KYLIN-1551] - Should check and ensure TopN measure has two parameters specified
+* [KYLIN-1563] - Unsafe check of initiated in HybridInstance#init()
+* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
+* [KYLIN-1574] - Unclosed ResultSet in QueryService#getMetadata()
+* [KYLIN-1581] - NPE in Job engine when execute MR job
+* [KYLIN-1593] - Agg group info will be blank when trying to edit cube
+* [KYLIN-1595] - columns in metric could also be in filter/groupby
+* [KYLIN-1596] - UT fail, due to String encoding CharsetEncoder mismatch
+* [KYLIN-1598] - cannot run complete UT at windows dev machine
+* [KYLIN-1604] - Concurrent write issue on hdfs when deploy coprocessor
+* [KYLIN-1612] - Cube is ready but Insight tables show no result
+* [KYLIN-1615] - UT 'HiveCmdBuilderTest' fail on 'testBeeline'
+* [KYLIN-1619] - Can't find any realization coursed by Top-N measure
+* [KYLIN-1622] - SQL not executed and reports TopN error
+* [KYLIN-1631] - Web UI of TopN, "group by" column couldn't be a dimension column
+* [KYLIN-1634] - Unclosed OutputStream in SSHClient#scpFileToLocal()
+* [KYLIN-1637] - Sample cube build error
+* [KYLIN-1638] - Unclosed HBaseAdmin in ToolUtil#getHBaseMetaStoreId()
+* [KYLIN-1639] - Wrong logging of JobID in MapReduceExecutable.java
+* [KYLIN-1643] - Kylin's hll counter counts "NULL" as a value
+* [KYLIN-1647] - Purge a cube, and then build again, the start date is not updated
+* [KYLIN-1650] - java.io.IOException: Filesystem closed - in Cube Build Step 2 (MapR)
+* [KYLIN-1655] - function name 'getKylinPropertiesAsInputSteam' misspelt
+* [KYLIN-1660] - Streaming/kafka config not match with table name
+* [KYLIN-1662] - tableName got truncated during request mapping for /tables/tableName
+* [KYLIN-1666] - Should check project selection before add a stream table
+* [KYLIN-1667] - Streaming table name should allow enter "DB.TABLE" format
+* [KYLIN-1673] - make sure metadata in 1.5.2 compatible with 1.5.1
+* [KYLIN-1678] - Metadata clean only cleans FINISHED and DISCARDED jobs, but the correct job status is SUCCEED
+* [KYLIN-1685] - Error happens while executing a SQL containing '?' using Statement
+* [KYLIN-1688] - Illegal char on result dataset table
+* [KYLIN-1721] - KylinConfigExt loses base properties when stored into file
+* [KYLIN-1722] - IntegerDimEnc serialization exception inside coprocessor
+
+## v1.5.1 - 2016-04-13
+_Tag:_ [kylin-1.5.1](https://github.com/apache/kylin/tree/kylin-1.5.1)
+
+This version is backward compatible with v1.5.0. But after upgrading to v1.5.1 from v1.5.0, you need to update the HBase coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__Highlights__
+
+* [KYLIN-1122] - Kylin support detail data query from fact table
+* [KYLIN-1492] - Custom dimension encoding
+* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
+* [KYLIN-1534] - Cube specific config, override global kylin.properties
+* [KYLIN-1546] - Tool to dump information for diagnosis
+
+__New Feature__
+
+* [KYLIN-1122] - Kylin support detail data query from fact table
+* [KYLIN-1378] - Add UI for TopN measure
+* [KYLIN-1492] - Custom dimension encoding
+* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
+* [KYLIN-1501] - Run some classes at the beginning of kylin server startup
+* [KYLIN-1503] - Print version information with kylin.sh
+* [KYLIN-1531] - Add smoke test scripts
+* [KYLIN-1534] - Cube specific config, override global kylin.properties
+* [KYLIN-1540] - REST API for deleting segment (example after this list)
+* [KYLIN-1541] - IntegerDimEnc, custom dimension encoding for integers
+* [KYLIN-1546] - Tool to dump information for diagnosis
+* [KYLIN-1550] - Persist some recent bad query
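+
+For KYLIN-1540, a hedged example of dropping a segment over HTTP; the host, credentials, cube name, and segment name below are placeholders:
+
+```sh
+# Delete one segment of a cube via the REST API.
+curl -u ADMIN:KYLIN -X DELETE \
+  "http://localhost:7070/kylin/api/cubes/my_cube/segs/20160101000000_20160201000000"
+```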
+
+__Improvement__
+
+* [KYLIN-1490] - Use InstallShield 2015 to generate ODBC Driver setup files
+* [KYLIN-1498] - cube desc signature not calculated correctly
+* [KYLIN-1500] - streaming_fillgap cause out of memory
+* [KYLIN-1502] - When cube is not empty, only signature consistent cube desc updates are allowed
+* [KYLIN-1504] - Use NavigableSet to store rowkey and use prefix filter to check resource path prefix instead of String comparison on tomcat side
+* [KYLIN-1505] - Combine guava filters with Predicates.and
+* [KYLIN-1543] - GTFilterScanner performance tuning
+* [KYLIN-1557] - Enhance the check on aggregation group dimension number
+
+__Bug__
+
+* [KYLIN-1373] - need to encode export query url to get right result in query page
+* [KYLIN-1434] - Kylin Job Monitor API: /kylin/api/jobs is too slow in large kylin deployment
+* [KYLIN-1472] - Export csv get error when there is a plus sign in the sql
+* [KYLIN-1486] - java.lang.IllegalArgumentException: Too many digits for NumberDictionary
+* [KYLIN-1491] - Should return base cuboid as valid cuboid if no aggregation group matches
+* [KYLIN-1493] - make ExecutableManager.getInstance thread safe
+* [KYLIN-1497] - Make three <class>.getInstance thread safe
+* [KYLIN-1507] - Couldn't find hive dependency jar on some platform like CDH
+* [KYLIN-1513] - Time partitioning doesn't work across multiple days
+* [KYLIN-1514] - MD5 validation of Tomcat does not work when package tar
+* [KYLIN-1521] - Couldn't refresh a cube segment whose start time is before 1970-01-01
+* [KYLIN-1522] - HLLC is incorrect when result is feed from cache
+* [KYLIN-1524] - Get "java.lang.Double cannot be cast to java.lang.Long" error when Top-N metrics data type is BigInt
+* [KYLIN-1527] - Columns with all NULL values can't be queried
+* [KYLIN-1537] - Failed to create flat hive table, when name is too long
+* [KYLIN-1538] - DoubleDeltaSerializer cause obvious error after deserialize and serialize
+* [KYLIN-1553] - Cannot find rowkey column "COL_NAME" in cube CubeDesc
+* [KYLIN-1564] - Unclosed table in BuildCubeWithEngine#checkHFilesInHBase()
+* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
+
+## v1.5.0 - 2016-03-12
+_Tag:_ [kylin-1.5.0](https://github.com/apache/kylin/tree/kylin-1.5.0)
+
+__This version is not backward compatible.__ The cube and metadata formats have been refactored to achieve several-fold performance improvements. We recommend this version, but do not suggest upgrading an existing deployment in place; a clean, new deployment of this version is strongly recommended. If you have to upgrade from a previous deployment, an upgrade guide will be provided by the community later.
+
+__Highlights__
+
+* [KYLIN-875] - A pluggable architecture to allow alternative cube engines / storage engines / data sources.
+* [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster based on a comparison of hundreds of jobs.
+* [KYLIN-942] - A better storage engine, making queries roughly 2 times faster (especially slow queries) based on a comparison of tens of thousands of SQL statements.
+* [KYLIN-738] - EXPERIMENTAL support for streaming cubing: sources from Kafka and builds cubes in-memory at minute-level intervals.
+* [KYLIN-242] - Redesigned aggregation groups, making support of 20+ dimensions easy.
+* [KYLIN-976] - Custom aggregation types (or UDF in other words).
+* [KYLIN-943] - TopN aggregation type.
+* [KYLIN-1065] - ODBC compatible with Tableau 9.1, MS Excel, MS PowerBI.
+* [KYLIN-1219] - Kylin supports SSO with Spring SAML.
+
+__New Feature__
+
+* [KYLIN-528] - Build job flow for Inverted Index building
+* [KYLIN-579] - Unload table from Kylin
+* [KYLIN-596] - Support Excel and Power BI
+* [KYLIN-599] - Near real-time support
+* [KYLIN-607] - More efficient cube building
+* [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
+* [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
+* [KYLIN-728] - IGTStore implementation which use disk when memory runs short
+* [KYLIN-738] - StreamingOLAP
+* [KYLIN-749] - support timestamp type in II and cube
+* [KYLIN-774] - Automatically merge cube segments
+* [KYLIN-868] - add a metadata backup/restore script in bin folder (usage sketch after this list)
+* [KYLIN-886] - Data Retention for streaming data
+* [KYLIN-906] - cube retention
+* [KYLIN-943] - Approximate TopN supported by Cube
+* [KYLIN-986] - Generalize Streaming scripts and put them into code repository
+* [KYLIN-1219] - Kylin support SSO with Spring SAML
+* [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model
+* [KYLIN-1458] - Checking the consistency of cube segment host with the environment after cube migration
+
+* [KYLIN-976] - Support Custom Aggregation Types
+* [KYLIN-1054] - Support Hive client Beeline
+* [KYLIN-1128] - Clone Cube Metadata
+* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
+* [KYLIN-1483] - Command tool to visualize all cuboids in a cube/segment
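+
+A usage sketch for the metadata backup/restore script added by KYLIN-868; the backup folder name below is illustrative, and the script's own usage output is authoritative for your version:
+
+```sh
+# Snapshot Kylin metadata to a local folder, then restore from it.
+$KYLIN_HOME/bin/metastore.sh backup
+$KYLIN_HOME/bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_2016_03_12_00_00_00
+```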
+
+__Improvement__
+
+* [KYLIN-225] - Support edit "cost" of cube
+* [KYLIN-410] - Table schema does not expand when clicking the database text
+* [KYLIN-589] - Cleanup Intermediate hive table after cube build
+* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+* [KYLIN-633] - Support Timestamp for cube partition
+* [KYLIN-649] - move the cache layer from service tier back to storage tier
+* [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
+* [KYLIN-663] - Push time condition down to ii endpoint
+* [KYLIN-668] - Out of memory in mapper when building cube in mem
+* [KYLIN-671] - Implement fine grained cache for cube and ii
+* [KYLIN-674] - IIEndpoint return metrics as well
+* [KYLIN-675] - cube&model designer refactor
+* [KYLIN-678] - optimize RowKeyColumnIO
+* [KYLIN-697] - Reorganize all test cases to unit test and integration tests
+* [KYLIN-702] - When Kylin creates the flat hive table, it generates a large number of small files in HDFS
+* [KYLIN-708] - replace BitSet for AggrKey
+* [KYLIN-712] - some enhancement after code review
+* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+* [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
+* [KYLIN-719] - bundle statistics info in endpoint response
+* [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
+* [KYLIN-721] - Streaming CLI supports third-party stream message parser
+* [KYLIN-726] - add remote cli port configuration for KylinConfig
+* [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
+* [KYLIN-734] - Push cache layer to each storage engine
+* [KYLIN-752] - Improved IN clause performance
+* [KYLIN-753] - Make the dependency on hbase-common to "provided"
+* [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
+* [KYLIN-760] - Improve the hashing performance in sampling cuboid size
+* [KYLIN-772] - Continue cube job when hive query return empty resultset
+* [KYLIN-773] - Performance is slow when listing jobs
+* [KYLIN-783] - update hdp version in test cases to 2.2.4
+* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+* [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
+* [KYLIN-816] - Allow gap in cube segments, for streaming case
+* [KYLIN-822] - list cube overview in one page
+* [KYLIN-823] - Replace FK on fact table in rowkey & aggregation group generation
+* [KYLIN-838] - improve performance of job query
+* [KYLIN-844] - add backdoor toggles to control query behavior
+* [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
+* [KYLIN-858] - add snappy compression support
+* [KYLIN-866] - Confirm with user when he selects empty segments to merge
+* [KYLIN-869] - Enhance mail notification
+* [KYLIN-870] - Speed up hbase segments info by caching
+* [KYLIN-871] - growing dictionary for streaming case
+* [KYLIN-874] - script for fill streaming gap automatically
+* [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
+* [KYLIN-879] - add a tool to collect orphan hbases
+* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
+* [KYLIN-881] - Upgrade Calcite to 1.3.0
+* [KYLIN-882] - check access to kylin.hdfs.working.dir
+* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+* [KYLIN-893] - Remove the dependency on quartz and metrics
+* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+* [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
+* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+* [KYLIN-902] - move streaming related parameters into StreamingConfig
+* [KYLIN-909] - Adapt GTStore to hbase endpoint
+* [KYLIN-919] - more friendly UI for 0.8
+* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+* [KYLIN-927] - Real time cubes merging skipping gaps
+* [KYLIN-933] - friendly UI to use data model
+* [KYLIN-938] - add friendly tip to page when rest request failed
+* [KYLIN-942] - Cube parallel scan on Hbase
+* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+* [KYLIN-957] - Support HBase in a separate cluster
+* [KYLIN-960] - Split storage module to core-storage and storage-hbase
+* [KYLIN-973] - add a tool to analyse streaming output logs
+* [KYLIN-984] - Behavior change in streaming data consuming
+* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+* [KYLIN-1014] - Support kerberos authentication while getting status from RM
+* [KYLIN-1018] - make TimedJsonStreamParser default parser
+* [KYLIN-1019] - Remove v1 cube model classes from code repository
+* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+* [KYLIN-1025] - Save cube change is very slow
+* [KYLIN-1036] - Code Clean, remove code which never used at front end
+* [KYLIN-1041] - ADD Streaming UI
+* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+* [KYLIN-1058] - Remove "right join" during model creation
+* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+* [KYLIN-1065] - ODBC driver support tableau 9.1
+* [KYLIN-1068] - Optimize the memory footprint for TopN counter
+* [KYLIN-1069] - update tip for 'Partition Column' on UI
+* [KYLIN-1074] - Load hive tables with selecting mode
+* [KYLIN-1095] - Update AdminLTE to latest version
+* [KYLIN-1096] - Deprecate minicluster
+* [KYLIN-1099] - Support dictionary of cardinality over 10 million
+* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+* [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
+* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+* [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
+* [KYLIN-1135] - Pscan use share thread pool
+* [KYLIN-1136] - Distinguish fast build mode and complete build mode
+* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+* [KYLIN-1149] - When YARN returns an incomplete job tracking URL, Kylin will fail to get job status
+* [KYLIN-1154] - Load job page is very slow when there are a lot of history jobs
+* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+* [KYLIN-1160] - Set default logger appender of log4j for JDBC
+* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
+* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+* [KYLIN-1190] - Make memory budget per query configurable
+* [KYLIN-1211] - Add 'Enable Cache' button in System page
+* [KYLIN-1234] - Cube ACL does not work
+* [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
+* [KYLIN-1237] - Revisit on cube size estimation
+* [KYLIN-1239] - attribute each htable with team contact and owner name
+* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
+* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
+* [KYLIN-1246] - get cubes API update - offset,limit not required
+* [KYLIN-1251] - add toggle event for tree label
+* [KYLIN-1259] - Change font/background color of job progress
+* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
+* [KYLIN-1266] - Tune release package size
+* [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
+* [KYLIN-1268] - Fix 2 kylin logs
+* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
+* [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
+* [KYLIN-1283] - Replace GTScanRequest's SerDer form Kryo to manual
+* [KYLIN-1287] - UI update for streaming build action
+* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
+* [KYLIN-1301] - fix segment pruning failure
+* [KYLIN-1308] - query storage v2 enable parallel cube visiting
+* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
+* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
+* [KYLIN-1318] - enable gc log for kylin server instance
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1327] - Tool for batch updating host information of htables
+* [KYLIN-1333] - Kylin Entity Permission Control
+* [KYLIN-1334] - allow truncating string for fixed length dimensions
+* [KYLIN-1341] - Display JSON of Data Model in the dialog
+* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
+* [KYLIN-1365] - Kylin ACL enhancement
+* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
+* [KYLIN-1424] - Should support multiple selection in picking up dimension/measure column step in data model wizard
+* [KYLIN-1438] - auto generate aggregation group
+* [KYLIN-1474] - expose list, remove and cat in metastore.sh
+* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
+
+* [KYLIN-242] - Redesign aggregation group
+* [KYLIN-770] - optimize memory usage for GTSimpleMemStore GTAggregationScanner
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-980] - FactDistinctColumnsJob to support high cardinality columns
+* [KYLIN-1079] - Manage large number of entries in metadata store
+* [KYLIN-1082] - Hive dependencies should be added to tmpjars
+* [KYLIN-1201] - Enhance project level ACL
+* [KYLIN-1222] - restore testing v1 query engine in case need it as a fallback for v2
+* [KYLIN-1232] - Refine ODBC Connection UI
+* [KYLIN-1343] - Upgrade calcite version to 1.6
+* [KYLIN-1366] - Bind metadata version with release version
+* [KYLIN-1389] - Formatting ODBC Driver C++ code
+* [KYLIN-1405] - Aggregation group validation
+* [KYLIN-1465] - Beautify kylin log to convenience both production troubleshooting and CI debugging
+
+__Bug__
+
+* [KYLIN-404] - Can't get cube source record size.
+* [KYLIN-457] - log4j error and dup lines in kylin.log
+* [KYLIN-521] - No verification even if join condition is invalid
+* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+* [KYLIN-635] - IN clause within CASE when is not working
+* [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
+* [KYLIN-660] - Make configurable of dictionary cardinality cap
+* [KYLIN-665] - buffer error while in mem cubing
+* [KYLIN-688] - possible memory leak for segmentIterator
+* [KYLIN-731] - Parallel stream build will throw OOM
+* [KYLIN-740] - Slowness with many IN() values
+* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+* [KYLIN-748] - II returned result not correct when decimal omits precision and scale
+* [KYLIN-751] - Max on negative double values is not working
+* [KYLIN-766] - round BigDecimal according to the DataType scale
+* [KYLIN-769] - empty segment build fail due to no dictionary
+* [KYLIN-771] - query cache is not evicted when metadata changes
+* [KYLIN-778] - can't build cube after package to binary
+* [KYLIN-780] - Upgrade Calcite to 1.0
+* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted
+* [KYLIN-801] - fix remaining issues on query cache and storage cache
+* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+* [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
+* [KYLIN-817] - Support Extract() on timestamp column
+* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+* [KYLIN-828] - Kylin still uses LDAP profile when the line "kylin.sandbox=false" is commented out in kylin.properties
+* [KYLIN-834] - optimize StreamingUtil binary search perf
+* [KYLIN-837] - fix submit build type when refresh cube
+* [KYLIN-873] - cancel button does not work when [resume][discard] job
+* [KYLIN-889] - Support more than one HDFS files of lookup table
+* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+* [KYLIN-905] - Boolean type not supported
+* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-914] - Scripts shebang should use /bin/bash
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+* [KYLIN-930] - can't see realizations under each project at project list page
+* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+* [KYLIN-936] - can not see job step log
+* [KYLIN-944] - update doc about how to consume kylin API in javascript
+* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+* [KYLIN-951] - Drop RowBlock concept from GridTable general API
+* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+* [KYLIN-967] - Dump running queries on memory shortage
+* [KYLIN-975] - Changing kylin.job.hive.database.for.intermediatetable causes job to fail
+* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+* [KYLIN-983] - Query sql offset keyword bug
+* [KYLIN-985] - Don't support aggregation AVG while executing SQL
+* [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
+* [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
+* [KYLIN-993] - implement substr support in kylin
+* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it is still restricted to less than 4 million
+* [KYLIN-1026] - Error message for git check is not correct in package.sh
+* [KYLIN-1027] - HBase Token not added after KYLIN-1007
+* [KYLIN-1033] - Error when joining two sub-queries
+* [KYLIN-1039] - Filter like (A or false) yields wrong result
+* [KYLIN-1047] - Upgrade to Calcite 1.4
+* [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
+* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
+* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+* [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
+* [KYLIN-1115] - Clean up ODBC driver code
+* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+* [KYLIN-1127] - Refactor CacheService
+* [KYLIN-1137] - TopN measure need support dictionary merge
+* [KYLIN-1138] - Bad CubeDesc signature causes segment to be deleted when enabling a cube
+* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+* [KYLIN-1151] - Menu items should be aligned when create new model
+* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+* [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.3 to 1.4
+* [KYLIN-1171] - KylinConfig truncate bug
+* [KYLIN-1179] - Cannot use String as partition column
+* [KYLIN-1180] - Some NPE in Dictionary
+* [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
+* [KYLIN-1182] - DataModelDesc needs to be updated from v1.x to v2.0
+* [KYLIN-1192] - Cannot edit data model desc without name change
+* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+* [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
+* [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
+* [KYLIN-1225] - Only 15 cubes listed in the /models page
+* [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
+* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
+* [KYLIN-1236] - redirect to home page when input invalid url
+* [KYLIN-1250] - Got NPE when discarding a job
+* [KYLIN-1260] - Job status labels are not in same style
+* [KYLIN-1269] - Can not get last error message in email
+* [KYLIN-1271] - Create streaming table layer will disappear if click on outside
+* [KYLIN-1274] - Query from JDBC is partial results by default
+* [KYLIN-1282] - Comparison filter on Date/Time column not work for query
+* [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
+* [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
+* [KYLIN-1306] - Null strings are not applied during fast cubing
+* [KYLIN-1314] - Display issue for aggregation groups
+* [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube
+* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
+* [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
+* [KYLIN-1330] - UI create model: pressing enter goes back to the previous step
+* [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
+* [KYLIN-1337] - Sort cube name doesn't work well
+* [KYLIN-1346] - IllegalStateException happens in SparkCubing
+* [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
+* [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
+* [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
+* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
+* [KYLIN-1357] - Cloned cube has build time information
+* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
+* [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
+* [KYLIN-1387] - Streaming cubing doesn't generate cuboid files on HDFS, causing cube merge failure
+* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
+* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
+* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
+* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
+* [KYLIN-1413] - Row key column's sequence is wrong after saving the cube
+* [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
+* [KYLIN-1417] - TimedJsonStreamParser is case sensitive for message's property name
+* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
+* [KYLIN-1420] - Query returns empty result on partition column's boundary condition
+* [KYLIN-1421] - Cube "source record" is always zero for streaming
+* [KYLIN-1423] - HBase size precision issue
+* [KYLIN-1430] - "STREAMING_" prefix not added when importing a streaming table
+* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
+* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
+* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
+* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
+* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
+* [KYLIN-1344] - Bitmap measure defined after TopN measure can cause merge to fail
+* [KYLIN-1386] - Duplicated projects appear in connection dialog after clicking CONNECT button multiple times
+* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
+* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
+* [KYLIN-1469] - Hive dependency jars are hard coded in test
+* [KYLIN-1473] - Cannot have comments at the end of New Query textbox
+
+__Task__
+
+* [KYLIN-529] - Migrate ODBC source code to Apache Git
+* [KYLIN-650] - Move all documents from GitHub wiki to code repository (using md files)
+* [KYLIN-762] - remove quartz dependency
+* [KYLIN-763] - remove author name
+* [KYLIN-820] - support streaming cube of exact timestamp range
+* [KYLIN-907] - Improve Kylin community development experience
+* [KYLIN-1112] - Reorganize InvertedIndex source codes into plug-in architecture
+
+* [KYLIN-808] - streaming cubing support split by data timestamp
+* [KYLIN-1427] - Enable partition date column to support date and hour as separate columns for increment cube build
+
+__Test__
+
+* [KYLIN-677] - benchmark for Endpoint without dictionary
+* [KYLIN-826] - create new test case for streaming building & queries
+
+
+## v1.3.0 - 2016-03-14
+_Tag:_ [kylin-1.3.0](https://github.com/apache/kylin/tree/kylin-1.3.0)
+
+__New Feature__
+
+* [KYLIN-579] - Unload table from Kylin
+* [KYLIN-976] - Support Custom Aggregation Types
+* [KYLIN-1054] - Support Hive client Beeline
+* [KYLIN-1128] - Clone Cube Metadata
+* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
+
+__Improvement__
+
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-1014] - Support kerberos authentication while getting status from RM
+* [KYLIN-1074] - Load hive tables with selecting mode
+* [KYLIN-1082] - Hive dependencies should be added to tmpjars
+* [KYLIN-1132] - make filtering input easier in creating cube
+* [KYLIN-1201] - Enhance project level ACL
+* [KYLIN-1211] - Add 'Enable Cache' button in System page
+* [KYLIN-1234] - Cube ACL does not work
+* [KYLIN-1240] - Fix link and typo in README
+* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
+* [KYLIN-1246] - get cubes API update - offset,limit not required
+* [KYLIN-1251] - add toggle event for tree label
+* [KYLIN-1259] - Change font/background color of job progress
+* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
+* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1333] - Kylin Entity Permission Control 
+* [KYLIN-1343] - Upgrade calcite version to 1.6
+* [KYLIN-1365] - Kylin ACL enhancement
+* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
+
+__Bug__
+
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
+* [KYLIN-1078] - Cannot have comments at the end of New Query textbox
+* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
+* [KYLIN-1110] - Cannot see project options after clearing browser cookies and cache
+* [KYLIN-1159] - problem about kylin web UI
+* [KYLIN-1214] - Remove "Back to My Cubes" link in non-edit mode
+* [KYLIN-1215] - minor, update website member's info on community page
+* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
+* [KYLIN-1236] - redirect to home page when input invalid url
+* [KYLIN-1250] - Got NPE when discarding a job
+* [KYLIN-1254] - cube model will be overridden while creating a new cube with the same name
+* [KYLIN-1260] - Job status labels are not in same style
+* [KYLIN-1274] - Query from JDBC is partial results by default
+* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
+* [KYLIN-1330] - UI create model: Press enter will go back to pre step
+* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
+* [KYLIN-1342] - Typo in doc
+* [KYLIN-1354] - Couldn't edit a cube if it has no "partition date" set
+* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
+* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
+* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
+* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
+* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
+* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
+* [KYLIN-1423] - HBase size precision issue
+* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
+* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
+* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
+* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
+* [KYLIN-1469] - Hive dependency jars are hard coded in test
+
+__Test__
+
+* [KYLIN-1335] - Disable PrintResult in KylinQueryTest
+
+
+## v1.2 - 2015-12-15
+_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
+
+__New Feature__
+
+* [KYLIN-596] - Support Excel and Power BI
+
+__Improvement__
+
+* [KYLIN-389] - Can't edit cube name for existing cubes
+* [KYLIN-702] - When Kylin creates the flat hive table, it generates a large number of small files in HDFS
+* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+* [KYLIN-1058] - Remove "right join" during model creation
+* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+* [KYLIN-1065] - ODBC driver support tableau 9.1
+* [KYLIN-1069] - update tip for 'Partition Column' on UI
+* [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
+* [KYLIN-1095] - Update AdminLTE to latest version
+* [KYLIN-1099] - Support dictionary of cardinality over 10 million
+* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
+* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
+* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+* [KYLIN-1160] - Set default logger appender of log4j for JDBC
+* [KYLIN-1161] - Rest API /api/cubes?cubeName=  is doing fuzzy match instead of exact match
+* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+* [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after be migrated
+* [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
+* [KYLIN-1190] - Make memory budget per query configurable
+
+__Bug__
+
+* [KYLIN-693] - Couldn't change a cube's name after it be created
+* [KYLIN-930] - can't see realizations under each project at project list page
+* [KYLIN-966] - When user creates a cube, if entering a name which already exists, Kylin will throw an exception on the last step
+* [KYLIN-1033] - Error when joining two sub-queries
+* [KYLIN-1039] - Filter like (A or false) yields wrong result
+* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+* [KYLIN-1070] - changing  case in table name in  model desc
+* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+* [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
+* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+* [KYLIN-1120] - MapReduce job read local meta issue
+* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+* [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
+* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+* [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
+* [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
+* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+
+__Task__
+
+* [KYLIN-1170] - Update website and status files to TLP
+
+
+## v1.1.1-incubating - 2015-11-04
+_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
+
+__Improvement__
+
+* [KYLIN-999] - License check and cleanup for release
+
+## v1.1-incubating - 2015-10-25
+_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
+
+__New Feature__
+
+* [KYLIN-222] - Web UI to Display CubeInstance Information
+* [KYLIN-906] - cube retention
+* [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
+
+__Bug__
+
+* [KYLIN-457] - log4j error and dup lines in kylin.log
+* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+* [KYLIN-740] - Slowness with many IN() values
+* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+* [KYLIN-771] - query cache is not evicted when metadata changes
+* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
+* [KYLIN-847] - "select * from fact" does not work on 0.7 branch
+* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-944] - update doc about how to consume kylin API in javascript
+* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+* [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
+* [KYLIN-961] - Can't get cube  source record count.
+* [KYLIN-967] - Dump running queries on memory shortage
+* [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
+* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
+* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+* [KYLIN-983] - Query sql offset keyword bug
+* [KYLIN-985] - Doesn't support aggregation AVG while executing SQL
+* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+* [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
+* [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
+* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it is still restricted to less than 4 million
+* [KYLIN-1026] - Error message for git check is not correct in package.sh
+
+__Improvement__
+
+* [KYLIN-343] - Enable timeout on query 
+* [KYLIN-367] - automatically backup metadata everyday
+* [KYLIN-589] - Cleanup Intermediate hive table after cube build
+* [KYLIN-772] - Continue cube job when hive query return empty resultset
+* [KYLIN-858] - add snappy compression support
+* [KYLIN-882] - check access to kylin.hdfs.working.dir
+* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+* [KYLIN-957] - Support HBase in a separate cluster
+* [KYLIN-965] - Allow user to configure the region split size for cube
+* [KYLIN-971] - kylin display timezone on UI
+* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+* [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
+* [KYLIN-999] - License check and cleanup for release
+* [KYLIN-1013] - Make hbase client configurations like timeout configurable
+* [KYLIN-1025] - Save cube change is very slow
+* [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
+* [KYLIN-1035] - Validate [Project] before create Cube on UI
+* [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
+* [KYLIN-1047] - Upgrade to Calcite 1.4
+* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+
+
+## v1.0-incubating - 2015-09-06
+_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
+
+__New Feature__
+
+* [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
+
+__Bug__
+
+* [KYLIN-404] - Can't get cube source record size.
+* [KYLIN-626] - JDBC error for float and double values
+* [KYLIN-751] - Max on negative double values is not working
+* [KYLIN-757] - Cache wasn't flushed in cluster mode
+* [KYLIN-780] - Upgrade Calcite to 1.0
+* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+* [KYLIN-889] - Support more than one HDFS files of lookup table
+* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+* [KYLIN-914] - Scripts shebang should use /bin/bash
+* [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
+* [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
+* [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
+* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+* [KYLIN-936] - can not see job step log 
+* [KYLIN-940] - NPE when closing a null resource
+* [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
+* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+* [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
+
+__Improvement__
+
+* [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
+* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+* [KYLIN-792] - kylin performance insight [dashboard]
+* [KYLIN-838] - improve performance of job query
+* [KYLIN-842] - Add version and commit id into binary package
+* [KYLIN-844] - add backdoor toggles to control query behavior 
+* [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
+* [KYLIN-866] - Confirm with user when he selects empty segments to merge
+* [KYLIN-867] - Hybrid model for multiple realizations/cubes
+* [KYLIN-880] -  Kylin should change the default folder from /tmp to user configurable destination
+* [KYLIN-881] - Upgrade Calcite to 1.3.0
+* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+* [KYLIN-893] - Remove the dependency on quartz and metrics
+* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+* [KYLIN-933] - friendly UI to use data model
+* [KYLIN-938] - add friendly tip to page when rest request failed
+
+__Task__
+
+* [KYLIN-884] - Restructure docs and website
+* [KYLIN-907] - Improve Kylin community development experience
+* [KYLIN-954] - Release v1.0 (formerly v0.7.3)
+* [KYLIN-863] - create empty segment when there is no data in one single streaming batch
+* [KYLIN-908] - Help community developer to setup develop/debug environment
+* [KYLIN-931] - Port KYLIN-921 to 0.8 branch
+
+## v0.7.2-incubating - 2015-07-21
+_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
+
+__Main Changes:__  
+Critical bug fixes after v0.7.1 release, please go with this version directly for new case and upgrade to this version for existing deployment.
+
+__Bug__  
+
+* [KYLIN-514] - Error message is not helpful to user when doing something in JSON Editor window
+* [KYLIN-598] - Kylin detecting hive table delim failure
+* [KYLIN-660] - Make configurable of dictionary cardinality cap
+* [KYLIN-765] - When a cube job has failed, it is still possible to submit a new job
+* [KYLIN-814] - Duplicate columns error for subqueries on fact table
+* [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optic)
+* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+* [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
+* [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
+* [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
+* [KYLIN-840] - HBase table compress not enabled even LZO is installed
+* [KYLIN-848] - Couldn't resume or discard a cube job
+* [KYLIN-849] - Couldn't query metrics on lookup table PK
+* [KYLIN-865] - Cube has been built but couldn't query; In log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found"
+* [KYLIN-873] - cancel button does not work when [resume][discard] job
+* [KYLIN-888] - "Jobs" page only shows 15 jobs at max; the "Load more" button disappeared
+
+__Improvement__
+
+* [KYLIN-159] - Metadata migrate tool 
+* [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
+* [KYLIN-207] - Support SQL pagination
+* [KYLIN-209] - Merge tail small MR jobs into one
+* [KYLIN-210] - Split heavy MR job to more small jobs
+* [KYLIN-221] - Convert cleanup and GC to job 
+* [KYLIN-284] - add log for all Rest API Request
+* [KYLIN-488] - Increase HDFS block size 1GB
+* [KYLIN-600] - measure return type update
+* [KYLIN-611] - Allow Implicit Joins
+* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+* [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
+* [KYLIN-752] - Improved IN clause performance
+* [KYLIN-773] - performance is slow list jobs
+* [KYLIN-839] - Optimize Snapshot table memory usage 
+
+__New Feature__
+
+* [KYLIN-211] - Bitmap Inverted Index
+* [KYLIN-285] - Enhance alert program for whole system
+* [KYLIN-467] - Validation Rule: Check duplicate rows in lookup table
+* [KYLIN-471] - Support "Copy" on grid result
+
+__Task__
+
+* [KYLIN-7] - Enable maven checkstyle plugin
+* [KYLIN-885] - Release v0.7.2
+* [KYLIN-812] - Upgrade to Calcite 0.9.2
+
+## v0.7.1-incubating (First Apache Release) - 2015-06-10  
+_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
+
+Apache Kylin v0.7.1-incubating was rolled out on June 10, 2015. This is also the first Apache release since the project joined the Apache Incubator. 
+
+__Main Changes:__
+
+* Package renamed from com.kylinolap to org.apache.kylin
+* Code cleaned up to apply Apache License policy
+* Easy install and setup with a bunch of scripts and automation
+* Job engine refactored to be a generic job manager for all jobs, with improved efficiency
+* Support Hive database other than 'default'
+* JDBC driver available for clients to interact with the Kylin server
+* Binary package available for download
+
+__New Feature__
+
+* [KYLIN-327] - Binary distribution 
+* [KYLIN-368] - Move MailService to Common module
+* [KYLIN-540] - Data model upgrade for legacy cube descs
+* [KYLIN-576] - Refactor expansion rate expression
+
+__Task__
+
+* [KYLIN-361] - Rename package name with Apache Kylin
+* [KYLIN-531] - Rename package name to org.apache.kylin
+* [KYLIN-533] - Job Engine Refactoring
+* [KYLIN-585] - Simplify deployment
+* [KYLIN-586] - Add Apache License header in each source file
+* [KYLIN-587] - Remove hard copy of javascript libraries
+* [KYLIN-624] - Add dimension and metric info into DataModel
+* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
+* [KYLIN-669] - Release v0.7.1 as first apache release
+* [KYLIN-670] - Update pom with "incubating" in version number
+* [KYLIN-737] - Generate and sign release package for review and vote
+* [KYLIN-795] - Release after success vote
+
+__Bug__
+
+* [KYLIN-132] - Job framework
+* [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
+* [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
+* [KYLIN-230] - Error when create HTable
+* [KYLIN-255] - Error when an aggregate function appears twice in the select clause
+* [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
+* [KYLIN-399] - refreshed segment not correctly published to cube
+* [KYLIN-412] - No exception or message when sync up table which can't access
+* [KYLIN-421] - Hive table metadata issue
+* [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
+* [KYLIN-508] - Too high cardinality is not suitable for dictionary!
+* [KYLIN-509] - Order by on fact table not works correctly
+* [KYLIN-517] - Always delete the last one of Add Lookup page button even if deleting the first join condition
+* [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
+* [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
+* [KYLIN-556] - error tip enhance when cube detail return empty
+* [KYLIN-570] - Need not to call API before sending login request
+* [KYLIN-571] - Dimensions lost when creating cube though Joson Editor
+* [KYLIN-572] - HTable size is wrong
+* [KYLIN-581] - unable to build cube
+* [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
+* [KYLIN-588] - Error when run package.sh
+* [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
+* [KYLIN-594] - Making changes in build and packaging with respect to apache release process
+* [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
+* [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
+* [KYLIN-614] - find hive dependency shell file is unable to set the hive dependency correctly
+* [KYLIN-615] - Unable add measures in Kylin web UI
+* [KYLIN-619] - Cube build fails with hive+tez
+* [KYLIN-620] - Wrong duration number
+* [KYLIN-621] - SecurityException when running MR job
+* [KYLIN-627] - Hive tables' partition column was not sync into Kylin
+* [KYLIN-628] - Couldn't build a new created cube
+* [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
+* [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
+* [KYLIN-638] - kylin.sh stop not working
+* [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
+* [KYLIN-640] - sum of float not working
+* [KYLIN-642] - Couldn't refresh cube segment
+* [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
+* [KYLIN-644] - join table as null error when build the cube
+* [KYLIN-652] - Lookup table alias will be set to null
+* [KYLIN-657] - JDBC Driver not register into DriverManager
+* [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
+* [KYLIN-666] - Select float type column got class cast exception
+* [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
+* [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
+* [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
+* [KYLIN-691] - update tomcat download address in download-tomcat.sh
+* [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
+* [KYLIN-703] - UT failed due to unknown host issue
+* [KYLIN-711] - UT failure in REST module
+* [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
+* [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
+
+__Improvement__
+
+* [KYLIN-168] - Installation fails if multiple ZK
+* [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
+* [KYLIN-204] - Kylin web not works properly in IE
+* [KYLIN-217] - Enhance coprocessor with endpoints 
+* [KYLIN-251] - job engine refactoring
+* [KYLIN-261] - derived column validate when create cube
+* [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
+* [KYLIN-324] - Refactor metadata to support InvertedIndex
+* [KYLIN-407] - Validation: There should be no Hive table column using "binary" data type
+* [KYLIN-445] - Rename cube_desc/cube folder
+* [KYLIN-452] - Automatically create local cluster for running tests
+* [KYLIN-498] - Merge metadata tables 
+* [KYLIN-532] - Refactor data model in kylin front end
+* [KYLIN-539] - use hbase command to launch tomcat
+* [KYLIN-542] - add project property feature for cube
+* [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
+* [KYLIN-563] - Wrap kylin start and stop with a script 
+* [KYLIN-567] - More flexible validation of new segments
+* [KYLIN-569] - Support increment+merge job
+* [KYLIN-578] - add more generic configuration for ssh
+* [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
+* [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
+* [KYLIN-634] - Script to import sample data and cube metadata
+* [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
+* [KYLIN-637] - add start&end date for hbase info in cubeDesigner
+* [KYLIN-714] - Add Apache RAT to pom.xml
+* [KYLIN-753] - Make the dependency on hbase-common to "provided"
+* [KYLIN-758] - Updating port forwarding issue Hadoop Installation on Hortonworks Sandbox.
+* [KYLIN-779] - [UI] jump to cube list after create cube
+* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+
+__Wish__
+
+* [KYLIN-608] - Distinct count for ii storage
+
diff --git a/website/_docs24/tutorial/Qlik.cn.md b/website/_docs24/tutorial/Qlik.cn.md
new file mode 100644
index 0000000..71e8fee
--- /dev/null
+++ b/website/_docs24/tutorial/Qlik.cn.md
@@ -0,0 +1,153 @@
+---
+layout: docs-cn
+title:  Qlik Sense 集成
+categories: tutorial
+permalink: /cn/docs24/tutorial/Qlik.html
+since: v2.2
+---
+
+Qlik Sense 是新一代自助式数据可视化工具。它是一款完整的商业分析软件,便于开发人员和分析人员快速构建和部署强大的分析应用。近年来,该工具成为全球增长率最快的 BI 产品。它可以与 Hadoop Database(Hive 和 Impala)集成。现在也可与 Apache Kylin 集成。本文将分步指导您完成 Apache Kylin 与 Qlik Sense 的连接。 
+
+### 安装 Kylin ODBC 驱动程序
+
+有关安装信息,请参考 [Kylin ODBC 驱动](http://kylin.apache.org/cn/docs24/tutorial/odbc.html)。
+
+### 安装 Qlik Sense
+
+有关 Qlik Sense 的安装说明,请访问 [Qlik Sense Desktop download](https://www.qlik.com/us/try-or-buy/download-qlik-sense)。
+
+### 与 Qlik Sense 连接
+
+配置完本地 DSN 并成功安装 Qlik Sense 后,可执行以下步骤来用 Qlik Sense 连接 Apache Kylin:
+
+- 打开 **Qlik Sense Desktop**。
+
+
+- 输入 Qlik 用户名和密码,接着系统将弹出以下对话框。单击**创建新应用程序**。
+
+![](/images/tutorial/2.1/Qlik/welcome_to_qlik_desktop.png)
+
+- 为新建的应用程序指定名称。
+
+![](/images/tutorial/2.1/Qlik/create_new_application.png)
+
+- 应用程序视图中有两个选项,选择下方的**脚本编辑器**。
+
+![](/images/tutorial/2.1/Qlik/script_editor.png)
+
+- 此时会显示 **数据加载编辑器**的窗口。单击页面右上方的**创建新连接**并选择**ODBC**。
+
+![Create New Data Connection](/images/tutorial/2.1/Qlik/create_data_connection.png)
+
+- 选择你创建的**DSN**,忽略账户信息,点击**创建**。
+
+![ODBC Connection](/images/tutorial/2.1/Qlik/odbc_connection.png)
+
+### 配置Direct Query连接模式
+将默认脚本中的 "TimeFormat"、"DateFormat" 和 "TimestampFormat" 修改为:
+
+`SET TimeFormat='h:mm:ss';`
+`SET DateFormat='YYYY-MM-DD';`
+`SET TimestampFormat='YYYY-MM-DD h:mm:ss[.fff]';`
+
+考虑到 Kylin 环境中 Cube 的数据量级通常很大(可达 PB 级),我们推荐使用 Qlik Sense 的 Direct Query 连接模式,而不要将数据导入 Qlik Sense。
+
+你可以在查询脚本的开头键入 `DIRECT QUERY` 来启用 Direct Query 连接模式。
+
+下面的截图展示了一个针对 *Learn_kylin* 项目中 *kylin_sales_cube* 的 Direct Query 脚本。
+
+![Script](/images/tutorial/2.1/Qlik/script_run_result.png) 
+
+Qlik Sense 会基于你定义的脚本,为报表生成相应的 SQL 查询。
+
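+仅作示意(Qlik 实际生成的 SQL 可能不同):如果报表图表按 LSTG_FORMAT_NAME 汇总 PRICE,下推给 Kylin 的查询大致如下:
+
+```SQL
+-- 示意:Direct Query 模式生成的假想查询,
+-- 按一个 Cube 维度聚合 Cube 度量(PRICE 的 SUM)。
+SELECT LSTG_FORMAT_NAME, SUM(PRICE)
+FROM KYLIN_SALES
+GROUP BY LSTG_FORMAT_NAME
+```
+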
+我们建议将脚本中定义的维度和度量与 Kylin Cube 上定义的维度和度量保持一致。
+
+你也可以通过 Native 表达式使用 Apache Kylin 的内置函数,例如:
+
+`NATIVE('extract(month from PART_DT)') ` 
+
+完整的脚本提供在下方以供参考。
+
+请确保将脚本中 `LIB CONNECT TO 'kylin';` 所引用的 DSN 修改为你创建的 DSN。
+
+```SQL
+SET ThousandSep=',';
+SET DecimalSep='.';
+SET MoneyThousandSep=',';
+SET MoneyDecimalSep='.';
+SET MoneyFormat='$#,##0.00;-$#,##0.00';
+SET TimeFormat='h:mm:ss';
+SET DateFormat='YYYY/MM/DD';
+SET TimestampFormat='YYYY/MM/DD h:mm:ss[.fff]';
+SET FirstWeekDay=6;
+SET BrokenWeeks=1;
+SET ReferenceDay=0;
+SET FirstMonthOfYear=1;
+SET CollationLocale='en-US';
+SET CreateSearchIndexOnReload=1;
+SET MonthNames='Jan;Feb;Mar;Apr;May;Jun;Jul;Aug;Sep;Oct;Nov;Dec';
+SET LongMonthNames='January;February;March;April;May;June;July;August;September;October;November;December';
+SET DayNames='Mon;Tue;Wed;Thu;Fri;Sat;Sun';
+SET LongDayNames='Monday;Tuesday;Wednesday;Thursday;Friday;Saturday;Sunday';
+
+LIB CONNECT TO 'kylin';
+
+
+DIRECT QUERY
+DIMENSION 
+  TRANS_ID,
+  YEAR_BEG_DT,
+  MONTH_BEG_DT,
+  WEEK_BEG_DT,
+  PART_DT,
+  LSTG_FORMAT_NAME,
+  OPS_USER_ID,
+  OPS_REGION,
+  NATIVE('extract(month from PART_DT)') AS PART_MONTH,
+  NATIVE('extract(year from PART_DT)') AS PART_YEAR,
+  META_CATEG_NAME,
+  CATEG_LVL2_NAME,
+  CATEG_LVL3_NAME,
+  ACCOUNT_BUYER_LEVEL,
+  NAME
+MEASURE
+  ITEM_COUNT,
+  PRICE,
+  SELLER_ID
+FROM KYLIN_SALES 
+join KYLIN_CATEGORY_GROUPINGS  
+on (SITE_ID=LSTG_SITE_ID
+and KYLIN_SALES.LEAF_CATEG_ID=KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID)
+join KYLIN_CAL_DT
+on (KYLIN_CAL_DT.CAL_DT=KYLIN_SALES.PART_DT)
+join KYLIN_ACCOUNT 
+on (KYLIN_ACCOUNT.ACCOUNT_ID=KYLIN_SALES.BUYER_ID)
+JOIN KYLIN_COUNTRY
+on (KYLIN_COUNTRY.COUNTRY=KYLIN_ACCOUNT.ACCOUNT_COUNTRY)
+```
+
+点击窗口右上方的**加载数据**,Qlik Sense 会根据脚本生成探测查询以检查脚本的语法。
+
+![Load Data](/images/tutorial/2.1/Qlik/load_data.png)
+
+### 创建报表
+
+点击左上角的**应用程序视图**。
+
+![Open App Overview](/images/tutorial/2.1/Qlik/go_to_app_overview.png)
+
+点击**创建新工作表**。
+
+![Create new sheet](/images/tutorial/2.1/Qlik/create_new_report.png)
+
+选择一个图表类型,根据需要将维度和度量添加到图表上。
+
+![Select the required charts, dimension and measure](/images/tutorial/2.1/Qlik/add_dimension.png)
+
+图表返回了结果,说明连接 Apache Kylin 成功。
+
+现在你可以使用 Qlik Sense 分析 Apache Kylin 中的数据了。
+
+![View data in Qlik Sense](/images/tutorial/2.1/Qlik/report.png)
+
+请注意,如果你希望报表能够击中 Cube,你在 Qlik Sense 中定义的度量需要和 Cube 上定义的度量一致。例如,为了击中 Learn_kylin 项目的 *kylin_sales_cube*,我们在本例中使用 `sum(price)`。
diff --git a/website/_docs24/tutorial/Qlik.md b/website/_docs24/tutorial/Qlik.md
new file mode 100644
index 0000000..527eb7c
--- /dev/null
+++ b/website/_docs24/tutorial/Qlik.md
@@ -0,0 +1,156 @@
+---
+layout: docs
+title: Qlik Sense
+categories: tutorial
+permalink: /docs24/tutorial/Qlik.html
+---
+
+Qlik Sense delivers intuitive platform solutions for self-service data visualization, guided analytics applications, embedded analytics, and reporting. It is a relatively new player in the Business Intelligence (BI) world and has grown rapidly since 2013. It has connectors for Hadoop databases (Hive and Impala), and now it can be integrated with Apache Kylin. This article will guide you through connecting Apache Kylin with Qlik Sense.
+
+### Install Kylin ODBC Driver
+
+For the installation information, please refer to [Kylin ODBC Driver](http://kylin.apache.org/docs24/tutorial/odbc.html).
+
+### Install Qlik Sense
+
+For the installation of Qlik Sense, please visit [Qlik Sense Desktop download](https://www.qlik.com/us/try-or-buy/download-qlik-sense).
+
+### Connection with Qlik Sense
+
+After configuring your Local DSN and installing Qlik Sense successfully, you may go through the following steps to connect Apache Kylin with Qlik Sense.
+
+- Open **Qlik Sense Desktop**.
+
+- Input your Qlik account to log in, then the following dialog will pop up. Click **Create New Application**.
+
+![Create New Application](../../images/tutorial/2.1/Qlik/welcome_to_qlik_desktop.png)
+
+- Specify a name for the new app. 
+
+
+![Specify a unique name](../../images/tutorial/2.1/Qlik/create_new_application.png)
+
+- There are two choices in the Application View. Please select the bottom **Script Editor**.
+
+
+![Select Script Editor](../../images/tutorial/2.1/Qlik/script_editor.png)
+
+- The Data Load Editor window shows. Click **Create New Connection** and choose **ODBC**.
+
+
+![Create New Data Connection](../../images/tutorial/2.1/Qlik/create_data_connection.png)
+
+- Select the **DSN** you have created, ignore the account information and then click **Create**. 
+
+
+![ODBC Connection](../../images/tutorial/2.1/Qlik/odbc_connection.png)
+
+### Configure Direct Query mode
+Change the default "TimeFormat", "DateFormat" and "TimestampFormat" settings in the script to:
+
+`SET TimeFormat='h:mm:ss';`
+`SET DateFormat='YYYY-MM-DD';`
+`SET TimestampFormat='YYYY-MM-DD h:mm:ss[.fff]';`
+
+
+Given that Cubes in a typical Apache Kylin environment can reach petabyte scale, we recommend using the Direct Query mode in Qlik Sense rather than importing data into Qlik Sense.
+
+You can enable Direct Query mode by typing `DIRECT QUERY` in front of your query script in the Script Editor.
+
+Below is a screenshot of such a Direct Query script against *kylin_sales_cube* in the *Learn_kylin* project. 
+
+![Script](../../images/tutorial/2.1/Qlik/script_run_result.png)
+
+Once you have defined such a script, Qlik Sense can generate SQL based on it for your report.
+
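+As an illustration only (the exact SQL Qlik generates may differ), a report chart that sums PRICE by LSTG_FORMAT_NAME would push down a query of roughly this shape:
+
+```SQL
+-- Hypothetical query generated by Direct Query mode: it aggregates
+-- a cube measure (SUM of PRICE) by one cube dimension.
+SELECT LSTG_FORMAT_NAME, SUM(PRICE)
+FROM KYLIN_SALES
+GROUP BY LSTG_FORMAT_NAME
+```
+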
+It is recommended that you define the dimensions and measures in the script to correspond to the dimensions and measures in the Kylin Cube.  
+
+You can also utilize Apache Kylin built-in functions by creating a native expression, for example: 
+
+`NATIVE('extract(month from PART_DT)') ` 
+
+The whole script has been posted for your reference. 
+
+Make sure to update `LIB CONNECT TO 'kylin';` to the DSN you created. 
+
+```SQL
+SET ThousandSep=',';
+SET DecimalSep='.';
+SET MoneyThousandSep=',';
+SET MoneyDecimalSep='.';
+SET MoneyFormat='$#,##0.00;-$#,##0.00';
+SET TimeFormat='h:mm:ss';
+SET DateFormat='YYYY/MM/DD';
+SET TimestampFormat='YYYY/MM/DD h:mm:ss[.fff]';
+SET FirstWeekDay=6;
+SET BrokenWeeks=1;
+SET ReferenceDay=0;
+SET FirstMonthOfYear=1;
+SET CollationLocale='en-US';
+SET CreateSearchIndexOnReload=1;
+SET MonthNames='Jan;Feb;Mar;Apr;May;Jun;Jul;Aug;Sep;Oct;Nov;Dec';
+SET LongMonthNames='January;February;March;April;May;June;July;August;September;October;November;December';
+SET DayNames='Mon;Tue;Wed;Thu;Fri;Sat;Sun';
+SET LongDayNames='Monday;Tuesday;Wednesday;Thursday;Friday;Saturday;Sunday';
+
+LIB CONNECT TO 'kylin';
+
+
+DIRECT QUERY
+DIMENSION 
+  TRANS_ID,
+  YEAR_BEG_DT,
+  MONTH_BEG_DT,
+  WEEK_BEG_DT,
+  PART_DT,
+  LSTG_FORMAT_NAME,
+  OPS_USER_ID,
+  OPS_REGION,
+  NATIVE('extract(month from PART_DT)') AS PART_MONTH,
+  NATIVE('extract(year from PART_DT)') AS PART_YEAR,
+  META_CATEG_NAME,
+  CATEG_LVL2_NAME,
+  CATEG_LVL3_NAME,
+  ACCOUNT_BUYER_LEVEL,
+  NAME
+MEASURE
+  ITEM_COUNT,
+  PRICE,
+  SELLER_ID
+FROM KYLIN_SALES 
+join KYLIN_CATEGORY_GROUPINGS  
+on (SITE_ID=LSTG_SITE_ID
+and KYLIN_SALES.LEAF_CATEG_ID=KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID)
+join KYLIN_CAL_DT
+on (KYLIN_CAL_DT.CAL_DT=KYLIN_SALES.PART_DT)
+join KYLIN_ACCOUNT 
+on (KYLIN_ACCOUNT.ACCOUNT_ID=KYLIN_SALES.BUYER_ID)
+JOIN KYLIN_COUNTRY
+on (KYLIN_COUNTRY.COUNTRY=KYLIN_ACCOUNT.ACCOUNT_COUNTRY)
+```
+
+Click **Load Data** at the upper right of the window; Qlik Sense will send out probing queries based on the script to validate its syntax.
+
+![Load Data](../../images/tutorial/2.1/Qlik/load_data.png)
+
+### Create a new report
+
+On the top left menu open **App Overview**.
+
+![Open App Overview](../../images/tutorial/2.1/Qlik/go_to_app_overview.png)
+
+ Click **Create new sheet** on this page.
+
+![Create new sheet](../../images/tutorial/2.1/Qlik/create_new_report.png)
+
+Select the chart type you need, then add dimensions and measures based on your requirements. 
+
+![Select the required charts, dimension and measure](../../images/tutorial/2.1/Qlik/add_dimension.png)
+
+When the chart returns results, the connection is complete; your Apache Kylin data now shows in Qlik Sense.
+
+![View data in Qlik Sense](../../images/tutorial/2.1/Qlik/report.png)
+
+Please note that if you want the report to hit the Cube, the measures you define here must exactly match those defined in the Cube. For *kylin_sales_cube* in the Learn_kylin project, we use `sum(price)` as an example. 
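+
+As a minimal sketch (assuming the sample cube, where SUM(PRICE) is a predefined measure), the following query can be served by the cube:
+
+```SQL
+-- Hits the cube because SUM(PRICE) matches a predefined measure;
+-- an expression the cube doesn't define (e.g. plain AVG) may not.
+SELECT PART_DT, SUM(PRICE)
+FROM KYLIN_SALES
+GROUP BY PART_DT
+```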
diff --git a/website/_docs24/tutorial/acl.cn.md b/website/_docs24/tutorial/acl.cn.md
new file mode 100644
index 0000000..6f89299
--- /dev/null
+++ b/website/_docs24/tutorial/acl.cn.md
@@ -0,0 +1,35 @@
+---
+layout: docs-cn
+title:  Cube 权限授予(v2.1)
+categories: 教程
+permalink: /cn/docs24/tutorial/acl.html
+version: v1.2
+since: v0.7.1
+---
+
+> 从 v2.2.0 版本开始,Cube ACL 功能已经移除,请使用 [Project level ACL](/docs24/tutorial/project_level_acl.html) 进行权限管理。
+
+在`Cubes`页面,双击cube行查看详细信息。在这里我们关注`Access`标签。
+点击`+Grant`按钮进行授权。
+
+![]( /images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
+
+一个cube有四种不同的权限。将你的鼠标移动到`?`图标查看详细信息。
+
+![]( /images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
+
+授权对象也有两种:`User`和`Role`。`Role`是指一组拥有同样权限的用户。
+
+### 1. 授予用户权限
+* 选择`User`类型,输入你想要授权的用户的用户名并选择相应的权限。
+
+     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
+
+* 然后点击`Grant`按钮提交请求。在这一操作成功后,你会在表中看到一个新的表项。你可以选择不同的访问权限来修改用户权限。点击`Revoke`按钮可以删除一个拥有权限的用户。
+
+     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
+
+### 2. 授予角色权限
+* 选择`Role`类型,通过点击下拉按钮选择你想要授权的一组用户并选择一个权限。
+
+* 然后点击`Grant`按钮提交请求。在这一操作成功后,你会在表中看到一个新的表项。你可以选择不同的访问权限来修改组权限。点击`Revoke`按钮可以删除一个拥有权限的组。
diff --git a/website/_docs24/tutorial/acl.md b/website/_docs24/tutorial/acl.md
new file mode 100644
index 0000000..cf4ac55
--- /dev/null
+++ b/website/_docs24/tutorial/acl.md
@@ -0,0 +1,37 @@
+---
+layout: docs
+title: Cube Permission (v2.1)
+categories: tutorial
+permalink: /docs24/tutorial/acl.html
+since: v0.7.1
+---
+
+> Note: Cube ACL has been removed since v2.2.0; please use [Project level ACL](/docs24/tutorial/project_level_acl.html) to manage ACLs.
+
+On the `Cubes` page, double-click a cube row to see its detail information. Here we focus on the `Access` tab.
+Click the `+Grant` button to grant permission. 
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
+
+There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see detail information. 
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
+
+There are also two types of grantee that a permission can be granted to: `User` and `Role`. `Role` means a group of users who have the same role.
+
+### 1. Grant User Permission
+* Select the `User` type, enter the username of the user you want to grant access to, and select the related permission. 
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
+
+* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access level to change a user's permission. To revoke a user's permission, just click the `Revoke` button.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
+
+### 2. Grant Role Permission
+* Select the `Role` type, choose the group of users that you want to grant access to by clicking the drop-down button, and select a permission.
+
+* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access level to change a group's permission. To revoke a group's permission, just click the `Revoke` button.
diff --git a/website/_docs24/tutorial/create_cube.cn.md b/website/_docs24/tutorial/create_cube.cn.md
new file mode 100644
index 0000000..be13162
--- /dev/null
+++ b/website/_docs24/tutorial/create_cube.cn.md
@@ -0,0 +1,223 @@
+---
+layout: docs-cn
+title:  Cube 创建
+categories: 教程
+permalink: /cn/docs24/tutorial/create_cube.html
+version: v1.2
+since: v0.7.1
+---
+
+
+### I. 新建项目
+1. 由顶部菜单栏进入 `Model` 页面,然后点击 `Manage Projects`。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. 点击 `+ Project` 按钮添加一个新的项目。
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
+
+3. 填写下列表单并点击 `submit` 按钮提交请求。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. 成功后,底部会显示通知。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. 同步Hive表
+1. 在顶部菜单栏点击 `Model`,然后点击左边的 `Data Source` 标签,它会列出所有加载进 Kylin 的表,点击 `Load Table` 按钮。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
+
+2. 输入表名并点击 `Sync` 按钮提交请求。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+3. 【可选】如果你想要浏览 hive 数据库来选择表,点击 `Load Table From Tree` 按钮。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
+
+4. 【可选】展开数据库节点,点击选择要加载的表,然后点击 `Sync` 按钮。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
+
+5. 成功的消息将会弹出,在左边的 `Tables` 部分,新加载的表已经被添加进来。点击表将会展开列。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
+
+6. 在后台,Kylin 将会执行 MapReduce 任务计算新同步表的基数(cardinality),任务完成后,刷新页面并点击表名,基数值将会显示在表信息中。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
+
+### III. 新建 Data Model
+创建 cube 前,需定义一个数据模型。数据模型定义了一个星型(star schema)或雪花(snowflake schema)模型。一个模型可以被多个 cube 使用。
+
+1. 点击顶部的 `Model`,然后点击 `Models` 标签。点击 `+New` 按钮,在下拉框中选择 `New Model`。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
+
+2. 输入 model 的名字和可选的描述。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
+
+3. 在 `Fact Table` 中,为模型选择事实表。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
+
+4. 【可选】点击 `Add Lookup Table` 按钮添加一个 lookup 表。选择表名和关联类型(内连接或左连接)
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
+
+5. 点击 `New Join Condition` 按钮,左边选择事实表的外键,右边选择 lookup 表的主键。如果有多个 join 列,重复此步骤。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
+
+6. 点击 “OK”,重复4,5步来添加更多的 lookup 表。完成后,点击 “Next”。
+
+7. `Dimensions` 页面用于选择将在 cube 中用作维度的列。点击表的 `Columns` 单元格,在下拉框中选择需要的列。
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
+
+8. 点击 “Next” 到达 “Measures” 页面,选择作为 measure 的列,其只能从事实表中选择。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
+
+9. 点击 “Next” 到达 “Settings” 页面,如果事实表中的数据每日增长,选择 `Partition Date Column` 中相应的日期列以及日期格式,否则就将其留白。
+
+10. 【可选】选择是否需要 “time of the day” 列,默认情况下为 `No`。如果选择 `Yes`,选择 `Partition Time Column` 中相应的 time 列以及 time 格式。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
+
+11. 【可选】如果在从 hive 抽取数据时候想做一些筛选,可以在 `Filter` 中输入筛选条件。
+
+12. 点击 `Save` 然后选择 `Yes` 来保存 data model。创建完成,data model 就会列在左边 `Models` 列表中。
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
+
+### IV. 新建 Cube
+
+创建完 data model,可以开始创建 cube。
+点击顶部 `Model`,然后点击 `Models` 标签。点击 `+New` 按钮,在下拉框中选择 `New Cube`。
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
+
+**步骤1. Cube 信息**
+
+1. 选择 data model,输入 cube 名字;点击 `Next` 进行下一步。
+
+cube 名字可以使用字母、数字和下划线(不允许空格)。`Notification Email List` 是用来接收 job 执行成功或失败通知的邮箱列表。`Notification Events` 是触发通知的 job 状态。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+
+**步骤2. 维度**
+
+1. 点击 `Add Dimension`,在弹窗中从事实表和 lookup 表里勾选需要的列。lookup 表的列有 2 个选项:“Normal” 和 “Derived”(默认)。“Normal” 添加一个普通的独立维度列;“Derived” 添加一个 derived 维度,derived 维度不会计算入 cube,而是在查询时由事实表的外键推算出。阅读更多:[如何优化 cube](/docs15/howto/howto_optimize_cubes.html)。
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
+
+2. 选择所有维度后点击 “Next”。
+
+**步骤3. 度量**
+
+1. 点击 `+Measure` 按钮添加一个新的度量。
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
+
+2. 根据它的表达式共有8种不同类型的度量:`SUM`、`MAX`、`MIN`、`COUNT`、`COUNT_DISTINCT` `TOP_N`, `EXTENDED_COLUMN` 和 `PERCENTILE`。请合理选择 `COUNT_DISTINCT` 和 `TOP_N` 返回类型,它与 cube 的大小相关。
+   * SUM
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
+
+   * MIN
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
+
+   * MAX
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
+
+   * COUNT
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
+
+   * DISTINCT_COUNT
+   这个度量有两个实现:
+   1)近似实现 HyperLogLog,选择可接受的错误率,低错误率需要更多存储;
+   2)精确实现 bitmap(具体限制请看 https://issues.apache.org/jira/browse/KYLIN-1186)
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
+   
+    注意:distinct count 是一种开销非常大的度量,和其他度量相比构建和查询都会更慢。
+   
+   * TOP_N
+   TOP_N 度量会在每种维度组合下预计算 Top 记录,查询时性能比不做预计算更好;需要指定两个参数:一是用作 Top 记录排序指标的度量列,Kylin 将对它求 SUM 并做倒序排列;二是 literal ID 列,代表具体的实体,例如 seller_id(示例查询见本列表之后);
+   
+   合理选择返回类型,它决定预计算保留多少条 top 记录:top 10、top 100、top 500、top 1000、top 5000 或 top 10000。
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
+
+   * EXTENDED_COLUMN
+   Extended_Column 作为度量比作为维度更节省空间。它将一个宿主列与另一列绑定,使后者的值无需作为维度即可被查询。
+   
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.PNG)
+
+   * PERCENTILE
+   Percentile 用于近似的百分位数计算。返回值越大,误差就越小;100 通常是最合适的值。
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.PNG)
+
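+如 TOP_N 度量处所述,下面给出这两类度量所服务的查询示例(仅为示意,假设在样例表 KYLIN_SALES 上定义了 SELLER_ID 的 COUNT_DISTINCT 度量,以及按 SELLER_ID 聚合 SUM(PRICE) 的 TOP_N 度量):
+
+```SQL
+-- 由 SELLER_ID 上的 COUNT_DISTINCT 度量(如已定义)回答:
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID)
+FROM KYLIN_SALES
+GROUP BY PART_DT;
+
+-- 由 TOP_N 度量(按 SELLER_ID 聚合 SUM(PRICE))回答:
+SELECT SELLER_ID, SUM(PRICE) AS TOTAL_PRICE
+FROM KYLIN_SALES
+GROUP BY SELLER_ID
+ORDER BY TOTAL_PRICE DESC
+LIMIT 100;
+```
+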
+**步骤4. 更新设置**
+
+这一步骤是为增量构建 cube 而设计的。
+
+`Auto Merge Thresholds`: 自动合并小的 segments 到中等甚至更大的 segment。如果不想自动合并,删除默认的 2 个阈值区间。
+
+`Volatile Range`: 默认为 0,即自动合并所有可能的 cube segments;设置后,'Auto Merge' 将不会合并最近 [Volatile Range] 天内的 cube segments。
+
+`Retention Threshold`: 只保留数据处于过去指定天数内的 segment,更早的 segment 将自动删除;0 表示不启用这个功能。
+
+`Partition Start Date`: cube 的开始日期。
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
+
+**步骤5. 高级设置**
+
+`Aggregation Groups`: Cube 中的维度可以划分到多个聚合组中。默认 kylin 会把所有维度放在一个聚合组,当维度较多时,产生的组合数可能是巨大的,会造成 Cube 爆炸;如果你很好的了解你的查询模式,那么你可以创建多个聚合组。在每个聚合组内,使用 "Mandatory Dimensions", "Hierarchy Dimensions" 和 "Joint Dimensions" 来进一步优化维度组合。
+
+`Mandatory Dimensions`: 必要维度,用于总是出现的维度。例如,如果你的查询中总是会带有 "ORDER_DATE" 做为 group by 或 过滤条件, 那么它可以被声明为必要维度。这样一来,所有不含此维度的 cuboid 就可以被跳过计算。
+
+`Hierarchy Dimensions`: 层级维度,例如 "国家" -> "省" -> "市" 是一个层级;不符合此层级关系的 cuboid 可以被跳过计算,例如 ["省"], ["市"]. 定义层级维度时,将父级别维度放在子维度的左边。
+
+`Joint Dimensions`:联合维度,有些维度往往一起出现,或者它们的基数非常接近(有1:1映射关系)。例如 "user_id" 和 "email"。把多个维度定义为组合关系后,所有不符合此关系的 cuboids 会被跳过计算。 
+
+关于更多维度优化,请阅读这个博客: [新的聚合组](/blog/2016/02/18/new-aggregation-group/)
+
+`Rowkeys`: 是由维度编码值组成。"Dictionary" (字典)是默认的编码方式; 字典只能处理中低基数(少于一千万)的维度;如果维度基数很高(如大于1千万), 选择 "false" 然后为维度输入合适的长度,通常是那列的最大长度值; 如果超过最大值,会被截断。请注意,如果没有字典编码,cube 的大小可能会非常大。
+
+你可以拖拽维度列来调整其在 rowkey 中的位置;位于 rowkey 前面的列,可以用来大幅缩小查询的扫描范围。通常建议将 mandatory 维度放在开头,然后是在过滤(where 条件)中起到很大作用的维度;如果多个列都会被用于过滤,将高基数的维度(如 user_id)放在低基数的维度(如 age)的前面。
+
+`Mandatory Cuboids`: 维度组合白名单。确保你想要构建的 cuboid 能被构建。
+
+`Cube Engine`: cube 构建引擎。有两种:MapReduce 和 Spark。如果你的 cube 只有简单度量(COUNT, SUM, MIN, MAX),建议使用 Spark。如果 cube 中有复杂类型度量(COUNT DISTINCT, TOP_N),建议使用 MapReduce。
+
+`Advanced Dictionaries`: "Global Dictionary" 是用于精确计算 COUNT DISTINCT 的字典, 它会将一个非 integer的值转成 integer,以便于 bitmap 进行去重。如果你要计算 COUNT DISTINCT 的列本身已经是 integer 类型,那么不需要定义 Global Dictionary。 Global Dictionary 会被所有 segment 共享,因此支持在跨 segments 之间做上卷去重操作。请注意,Global Dictionary 随着数据的加载,可能会不断变大。
+
+"Segment Dictionary" 是另一个用于精确计算 COUNT DISTINCT 的字典,与 Global Dictionary 不同的是,它是基于一个 segment 的值构建的,因此不支持跨 segments 的汇总计算。如果你的 cube 不是分区的或者能保证你的所有 SQL 按照 partition_column 进行 group by, 那么你应该使用 "Segment Dictionary" 而不是 "Global Dictionary",这样可以避免单个字典过大的问题。
+
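+例如(仅为示意,假设 PART_DT 是分区列,且在 SELLER_ID 上定义了精确去重度量),下面这类按分区列 group by 的查询,每个分组都只落在单个 segment 内,因此使用 "Segment Dictionary" 即可:
+
+```SQL
+-- 使用 Segment Dictionary 是安全的:不需要跨 segment 上卷
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID)
+FROM KYLIN_SALES
+GROUP BY PART_DT
+```
+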
+请注意:"Global Dictionary" 和 "Segment Dictionary" 都是单向编码的字典,仅用于 COUNT DISTINCT 计算(将非 integer 类型转成 integer 用于 bitmap计算),他们不支持解码,因此不能为普通维度编码。
+
+`Advanced Snapshot Table`: 为全局 lookup 表而设计,提供不同的存储类型。
+
+`Advanced ColumnFamily`: 如果有超过一个的COUNT DISTINCT 或 TopN 度量, 你可以将它们放在更多列簇中,以优化与HBase 的I/O。
+
+**步骤6. 重写配置**
+
+Kylin 允许在 Cube 级别覆盖部分 kylin.properties 中的配置,你可以在这里定义覆盖的属性。如果你没有要配置的,点击 `Next` 按钮。
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.PNG)
+
+**步骤7. 概览 & 保存**
+
+你可以概览你的 cube 并返回之前的步骤进行修改。点击 `Save` 按钮完成 cube 创建。
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.PNG)
+
+恭喜,cube 创建好了,接下来你可以构建并使用它了。
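+
+Cube 构建成功后,你可以在 Insight 页面用一个简单的聚合查询进行验证;下面是针对样例模型的最小示例(请将表名和列名替换为你自己的模型):
+
+```SQL
+-- 冒烟测试查询:当 cube 状态为 Ready 时应由该 cube 回答
+SELECT PART_DT, COUNT(*) AS CNT, SUM(PRICE) AS TOTAL_PRICE
+FROM KYLIN_SALES
+GROUP BY PART_DT
+ORDER BY PART_DT
+```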
diff --git a/website/_docs24/tutorial/create_cube.md b/website/_docs24/tutorial/create_cube.md
new file mode 100644
index 0000000..588b1e5
--- /dev/null
+++ b/website/_docs24/tutorial/create_cube.md
@@ -0,0 +1,216 @@
+---
+layout: docs
+title:  Cube Wizard
+categories: tutorial
+permalink: /docs24/tutorial/create_cube.html
+---
+
+This tutorial will guide you through creating a cube. It requires that you have at least one sample table in Hive; if you don't have one, you can create some sample data first.
+
+### I. Create a Project
+1. Go to `Model` page in top menu bar, then click `Manage Projects`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+3. Enter a project name, e.g., "Tutorial", with an optional description and optional Kylin configuration properties to override, then click the `submit` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, the project will show in the table. You can switch the current project with the dropdown in the top of the page.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up Hive Table
+1. Click `Model` in the top bar and then click the `Data Source` tab on the left; it lists all the tables loaded into Kylin. Click the `Load Table` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
+
+2. Enter the hive table names, separated with commas, and then click `Sync`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+3. [Optional] If you want to browse the hive database to pick tables, click the `Load Table From Tree` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
+
+4. [Optional] Expand the database node, click to select the table to load, and then click `Sync`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
+
+5. In the left `Tables` section, the newly loaded table is added. Clicking the table name will show its columns.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
+
+6. In the background, Kylin will run a MapReduce job to calculate the approximate cardinality for the newly synced table. After the job finishes, refresh the web page and then click the table name; the cardinality will be shown in the table info.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
+
+### III. Create Data Model
+Before creating a cube, you need to define a data model. The data model defines a star/snowflake schema, but not the aggregation policies. One data model can be referenced by multiple cubes.
+
+1. Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Model`.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
+
+2. Enter a name for the model, with an optional description.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
+
+3. In the `Fact Table` box, select the fact table of this data model.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
+
+4. [Optional] Click `Add Lookup Table` button to add a lookup table. Select the table name and join type (inner join or left join).
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
+
+5. Click the `New Join Condition` button, select the FK column of the fact table on the left, and select the PK column of the lookup table on the right side. Repeat this step if there is more than one join column.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
+
+6. Click "OK", repeat step 4 and 5 to add more lookup tables if any. After finished, click "Next".
+
+7. The "Dimensions" page allows to select the columns that will be used as dimension in the cubes. Click the `Columns` cell of a table, in the drop-down list select the column to the list. Usually all "Varchar", "String", "Date" columns should be declared as dimension. Only a column in this list can be added into a cube as dimension, so please add all possible dimension columns here.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
+
+8. Click "Next" go to the "Measures" page, select the columns that will be used in measure/metrics. The measure column can only from fact table. Usually the "long", "int", "double", "decimal" columns are declared as measures. 
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
+
+9. Click "Next" to the "Settings" page. If the data in fact table increases by day, select the corresponding date column in the `Partition Date Column`, and select the date format, otherwise leave it as blank.
+
+10. [Optional] Choose whether there is a separate "time of the day" column; by default it is `No`. If you choose `Yes`, select the corresponding time column as the `Partition Time Column` and select the time format.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
+
+11. [Optional] If some conditions need to be applied when extracting data from Hive, you can input them in `Filter`.
+
+12. Click `Save` and then select `Yes` to save the data model. After created, the data model will be shown in the left `Models` list.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
+
+### IV. Create Cube
+After the data model is created, you can start to create a cube. 
+
+Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Cube`.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
+
+**Step 1. Cube Info**
+
+Select the data model and enter the cube name; click `Next` to enter the next step.
+
+You can use letters, numbers and '_' to name your cube (blank space is not allowed). `Notification Email List` is a list of email addresses which will be notified on cube job success/failure. `Notification Events` are the job statuses that trigger the notification.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+
+**Step 2. Dimensions**
+
+1. Click `Add Dimension`; it pops up a window: tick the columns you need from the fact table and lookup tables. There are two options for lookup table columns: "Normal" and "Derived" (default). "Normal" adds a normal, independent dimension column; "Derived" adds a derived dimension column (derived from the FK of the fact table). Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
+
+2. Click "Next" after select all other dimensions.
+
+**Step 3. Measures**
+
+1. Click the `+Measure` to add a new measure.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
+
+2. There are 8 types of measures according to their expression: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT`, `TOP_N`, `EXTENDED_COLUMN` and `PERCENTILE`. Properly select the return type for `COUNT_DISTINCT` and `TOP_N`, as it will impact the cube size.
+   * SUM
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
+
+   * MIN
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
+
+   * MAX
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
+
+   * COUNT
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
+
+   * DISTINCT_COUNT
+   This measure has two implementations: 
+   a) approximate implementation with HyperLogLog, select an acceptable error rate, lower error rate will take more storage.
+   b) precise implementation with bitmap (see limitation in https://issues.apache.org/jira/browse/KYLIN-1186). 
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
+
+   Please note: distinct count is a very heavy measure; it is slower to build and query compared to other measures.
+
+   * TOP_N
+   The approximate TOP_N measure pre-calculates the top records in each dimension combination, which gives much better query performance than computing them on the fly. It needs two parameters: the first is the column used as the metric for top records (aggregated with SUM and then sorted in descending order); the second is the literal ID column, representing the entity, like seller_id (see the example queries after this list).
+
+   Properly select the return type, depending on how many top records you need to inspect: top 10, top 100, top 500, top 1000, top 5000 or top 10000.
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
+
+   * EXTENDED_COLUMN
+   Defining Extended_Column as a measure rather than a dimension saves space. It binds a host column to another column so that the latter's values can be queried along with the host column.
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-extended_column.PNG)
+
+   * PERCENTILE
+   Percentile computes approximate percentiles. The larger the return value, the smaller the error; 100 is usually the most suitable.
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-percentile.PNG)
+
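+As referenced in the TOP_N measure above, here is a sketch of the query shapes these measures serve, assuming a COUNT_DISTINCT measure on SELLER_ID and a TOP_N measure (SUM of PRICE by SELLER_ID) are defined on the sample KYLIN_SALES table:
+
+```SQL
+-- Served by a COUNT_DISTINCT measure on SELLER_ID (if defined):
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID)
+FROM KYLIN_SALES
+GROUP BY PART_DT;
+
+-- Served by a TOP_N measure (SUM of PRICE grouped by SELLER_ID):
+SELECT SELLER_ID, SUM(PRICE) AS TOTAL_PRICE
+FROM KYLIN_SALES
+GROUP BY SELLER_ID
+ORDER BY TOTAL_PRICE DESC
+LIMIT 100;
+```
+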
+**Step 4. Refresh Setting**
+
+This step is designed for incremental cube build. 
+
+`Auto Merge Thresholds`: merges small segments into medium and large segments automatically. If you don't want auto merge, remove the default two ranges.
+
+`Volatile Range`: by default it is 0, which means all possible cube segments will be auto merged; when set, 'Auto Merge' will not merge the cube segments from the latest [Volatile Range] days.
+
+`Retention Threshold`: only keep the segments whose data falls within the given number of past days; older segments will be automatically dropped. 0 means this feature is disabled.
+
+`Partition Start Date`: the start date of this cube.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
+
+**Step 5. Advanced Setting**
+
+`Aggregation Groups`: The dimensions can be divided into multiple groups, and each group is called an "agg group". By default Kylin puts all dimensions into one aggregation group. When you have many dimensions, that will cause cube explosion. You can create multiple agg groups if you know your query patterns well. In each agg group, you can use the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions" to further optimize the dimension combinations. 
+
+`Mandatory Dimensions`: Dimensions that always appear in queries. For example, if all your queries have "ORDER_DATE" as a group by or filtering condition, it can be marked as mandatory. The cuboids that don't have this dimension can then be omitted from building.
+
+`Hierarchy Dimensions`: For example "Country" -> "State" -> "City" is a logical hierarchy; the cuboids that don't comply with this hierarchy can be omitted from building, for example ["STATE", "CITY"], ["CITY"]. When defining a hierarchy, put the parent level dimension before the child level dimension.
+
+`Joint Dimensions`: Some dimensions always appear together, or their cardinalities are close (a near 1:1 mapping), for example "user_id" and "email". Defining them as a joint relationship lets the cuboids that contain only part of them be omitted. 
+
+For more please read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
+
+`Rowkeys`: the rowkeys are composed of the dimensions' encoded values. "Dictionary" is the default encoding method; if a dimension is not a good fit for dictionary encoding (e.g., cardinality > 10 million), select "false" and then enter a fixed length for that dimension, usually the max length of that column; if a value is longer than that size it will be truncated. Please note, without dictionary encoding, the cube size might be much bigger.
+
+You can drag & drop a dimension column to adjust its position in the rowkey. Put the mandatory dimensions at the beginning, followed by the dimensions that are heavily involved in filters (where conditions). Put high cardinality dimensions ahead of low cardinality dimensions.
+
+`Mandatory Cuboids`: Whitelist of the cuboids that you want to build.
+
+`Cube Engine`: The engine for building the cube. There are 2 engines: MapReduce and Spark. If your cube only has simple measures (COUNT, SUM, MIN, MAX), Spark can give better performance; if the cube has complex measures (COUNT DISTINCT, TOP_N), MapReduce is more stable.
+
+`Advanced Dictionaries`: "Global Dictionary" is the default dictionary for precise count distinct measure, it can ensure one value always be encoded into one consistent integer, so it can support "COUNT DISTINCT" rollup among multiple segments. But global dictionary may grow to very big size as time go.
+
+"Segment Dictionary" is a special dictionary for precise count distinct measure, which is built on one segment and could not support rollup among segments. Its size can be much smaller than global dictionary. Specifically, if your cube isn't partitioned or you can ensure all your SQLs will group by your partition_column, you could use "Segment Dictionary" instead of "Global Dictionary".
+
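+For example (a sketch assuming PART_DT is the partition column and a precise count distinct measure is defined on SELLER_ID), a query like the following groups by the partition column, so every group stays within a single segment and "Segment Dictionary" is sufficient:
+
+```SQL
+-- Safe with Segment Dictionary: no cross-segment rollup is needed
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID)
+FROM KYLIN_SALES
+GROUP BY PART_DT
+```
+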
+Please note: "Global Dictionary" and "Segment Dictionary" are one-way dictionary for COUNT DISTINCT (converting a non-integer value to integer for bitmap), they couldn't be used as the encoding for a dimension.
+
+`Advanced Snapshot Table`: design for global lookup table and provide different storage type.
+
+`Advanced ColumnFamily`: If there is more than one ultra-high-cardinality precise count distinct or TopN measure, you can assign these measures to more column families to optimize the I/O with HBase.
+
+**Step 6. Configuration Overwrites**
+
+Kylin allows overwriting system configurations (conf/kylin.properties) at the Cube level. You can add the key/value pairs that you want to overwrite here. If you don't have anything to configure, click the `Next` button.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 configuration.PNG)
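+
+For example, keys that appear in the Spark cubing tutorial of this documentation can be overridden here:
+
+{% highlight Groff markup %}
+kylin.engine.spark.rdd-partition-cut-mb=500
+kylin.engine.spark-conf.spark.executor.memory=4G
+{% endhighlight %}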
+
+**Step 7. Overview & Save**
+
+You can review your cube and go back to any previous step to modify it. Click the `Save` button to complete the cube creation.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/11 overview.PNG)
+
+Cheers! Now the cube is created; you can go ahead to build and play with it.
diff --git a/website/_docs24/tutorial/cube_build_job.cn.md b/website/_docs24/tutorial/cube_build_job.cn.md
new file mode 100644
index 0000000..8af1343
--- /dev/null
+++ b/website/_docs24/tutorial/cube_build_job.cn.md
@@ -0,0 +1,69 @@
+---
+layout: docs-cn
+title: "Cube 构建和 Job 监控"
+categories: 教程
+permalink: /cn/docs24/tutorial/cube_build_job.html
+version: v1.2
+since: v0.7.1
+---
+
+### Cube建立
+
+首先,确认你拥有你想要建立的 cube 的权限。
+
+1. 在 `Models` 页面中,点击 cube 栏右侧的 `Action` 下拉按钮并选择 `Build` 操作。
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
+
+2. 选择后会出现一个弹出窗口,点击 `Start Date` 或者 `End Date` 输入框选择这个增量 cube 构建的起始日期。
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 date.png)
+
+3. 点击 `Submit` 提交请求。成功之后,你将会在 `Monitor` 页面看到新建的 job。
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
+
+4. 新建的 job 是 “pending” 状态;一会儿,它就会开始运行并且你可以通过刷新 web 页面或者点击刷新按钮来查看进度。
+
+    ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
+
+5. 等待 job 完成。期间如要放弃这个 job ,点击 `Actions` -> `Discard` 按钮。
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
+
+6. 等到 job 100% 完成,cube 的状态就会变为 “Ready”, 意味着它已经准备好进行 SQL 查询。在 `Model` 页,找到 cube,然后点击 cube 名展开信息,在 “Storage” 标签下,列出 cube segments。每一个 segment 都有 start/end 时间;HBase 表的信息也会列出。
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
+
+如果你有更多的源数据,重复以上的步骤将它们构建进 cube。
+
+### Job监控
+
+在 `Monitor` 页面,点击job详情按钮查看显示于右侧的详细信息。
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+job 详细信息为跟踪一个 job 提供了它的每一步记录。你可以将光标停放在一个步骤状态图标上查看基本状态和信息。
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+点击每个步骤显示的图标按钮查看详情:`Parameters`、`Log`、`MRJob`。
+
+* Parameters
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
+
+* Log
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
+
diff --git a/website/_docs24/tutorial/cube_build_job.md b/website/_docs24/tutorial/cube_build_job.md
new file mode 100644
index 0000000..453a772
--- /dev/null
+++ b/website/_docs24/tutorial/cube_build_job.md
@@ -0,0 +1,67 @@
+---
+layout: docs
+title:  Cube Build and Job Monitoring
+categories: tutorial
+permalink: /docs24/tutorial/cube_build_job.html
+---
+
+### Cube Build
+First of all, make sure that you have permission on the cube you want to build.
+
+1. In the `Models` page, click the `Action` drop-down button to the right of a cube and select the `Build` operation.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
+
+2. A pop-up window appears after the selection; click the `Start Date` and `End Date` input boxes to select the date/time range of this incremental cube build.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 date.png)
+
+3. Click `Submit`; you will see the new job in the `Monitor` page.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
+
+4. The new job is in "pending" status; after a while, it will start to run, and you can watch the progress by refreshing the web page or clicking the refresh button.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
+
+
+5. Wait for the job to finish. In the meantime, if you want to discard it, click the `Actions` -> `Discard` button. If the job fails, you can click `Resume` to retry it.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
+
+6. After the job is 100% finished, the cube's status becomes "Ready", meaning it is ready to serve SQL queries. In the `Model` tab, find the cube and click the cube name to expand the section; the "Storage" tab lists the cube segments. Each segment has a start/end time; its underlying HBase table information is also listed.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
+
+If you have more source data, repeat the steps above to build them into the cube.
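+
+You can also trigger a build without the GUI, through Kylin's REST API; a minimal sketch (the cube name and the epoch-millisecond end time are placeholders):
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" \
+  -d '{"startTime": 0, "endTime": 1514764800000, "buildType": "BUILD"}' \
+  http://localhost:7070/kylin/api/cubes/{your_cube_name}/build
+{% endhighlight %}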
+
+### Job Monitoring
+In the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+The job detail provides a step-by-step record for tracing a job. You can hover over a step's status icon to see its basic status and information.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+Click the icon buttons shown in each step to see the details: `Parameters`, `Log`, `MRJob`.
+
+* Parameters
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
+
+* Log
+        
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
+
+
diff --git a/website/_docs24/tutorial/cube_build_performance.cn.md b/website/_docs24/tutorial/cube_build_performance.cn.md
new file mode 100644
index 0000000..b5eff35
--- /dev/null
+++ b/website/_docs24/tutorial/cube_build_performance.cn.md
@@ -0,0 +1,266 @@
+---
+layout: docs-cn
+title: "优化 Cube 构建"
+categories: tutorial
+permalink: /cn/docs24/tutorial/cube_build_performance.html
+---
+ *本教程是关于如何一步步优化 cube build 的样例。* 
+ 
+在这个场景中我们尝试优化一个简单的 Cube,用 1 张 fact 和 1 张 lookup 表 (日期 Dimension)。在真正的调整之前,请从 [优化 Cube Build](/docs20/howto/howto_optimize_build.html) 中大体了解关于 Cube build 的过程
+
+![]( /images/tutorial/2.0/cube_build_performance/01.png)
+
+基准是:
+
+* 一个 Measure:平衡,总是计算 Max,Min 和 Count
+* 所有 Dim_date (10 项) 会被用作 dimensions 
+* 输入为 Hive CSV 外部表 
+* 输出为 HBase 中未压缩的 Cube 
+
+使用这些配置,结果为:13 分钟 build 一个 20 Mb 的 cube (Cube_01)
+
+### Cube_02:减少组合
+第一次提升,在 Dimensions 上使用 Joint 和 Hierarchy 来减少组合 (cuboids 的数量)。
+
+使用月,周,工作日和季度的 Joint Dimension 将所有的 ID 和 Text 组合在一起
+
+![]( /images/tutorial/2.0/cube_build_performance/02.png)
+
+	
+定义 Id_date 和 Year 作为 Hierarchy Dimension
+
+这将其大小减至 0.72 MB 而时间减至 5 分钟
+
+[Kylin 2149](https://issues.apache.org/jira/browse/KYLIN-2149),理想情况下,这些 Hierarchies 也能够这样定义:
+* Id_weekday > Id_date
+* Id_Month > Id_date
+* Id_Quarter > Id_date
+* Id_week > Id_date
+
+现在,还不能对同一 dimension 一起使用 Joint 和 Hierarchy。
+
+
+### Cube_03:输出压缩
+下一次提升,使用 Snappy 压缩 HBase Cube:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/03.png)
+
+另一个选项为 Gzip:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/04.png)
+
+
+压缩输出的结果为:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/05.png)
+
+Snappy 和 Gzip 的区别在时间上少于 1% 但是在大小上有 18% 差别
+
+
+### Cube_04:压缩 Hive 表
+时间分布如下:
+
+![]( /images/tutorial/2.0/cube_build_performance/06.png)
+
+
+按概念分组的详细信息 :
+
+![]( /images/tutorial/2.0/cube_build_performance/07.png)
+
+67% 用于 build / process flat 表,30% 用于 build cube
+
+大量时间用在了第一步。
+
+这种时间分布在有很少的 measures 和很少的 dim (或者是非常优化的) 的 cube 中是很典型的 
+
+
+尝试在 Hive 输入表中使用 ORC 格式和压缩(Snappy):
+
+![]( /images/tutorial/2.0/cube_build_performance/08.png)
+
+
+前三步 (Flat Table) 的时间已经提升了一半。
+
+其他列式格式可以被测试:
+
+![]( /images/tutorial/2.0/cube_build_performance/19.png)
+
+
+* ORC
+* 使用 Snappy 的 ORC 压缩
+
+但结果比使用 Sequence 文件的效果差。
+
+请看:[Shaofengshi in MailList](http://apache-kylin.74782.x6.nabble.com/Kylin-Performance-td6713.html#a6767) 关于这个的评论
+
+第二步是重新分配 Flat Hive 表:
+
+![]( /images/tutorial/2.0/cube_build_performance/20.png)
+
+是一个简单的 row count,可以做出两个近似值
+* 如果其不需要精确,fact 表的 row 可以被统计→ 这可以与步骤 1 并行执行 (且 99% 的时间将是精确的)
+
+![]( /images/tutorial/2.0/cube_build_performance/21.png)
+
+
+* 将来的版本中 (KYLIN-2165 v2.0),这一步将使用 Hive 表数据实现。
+
+
+
+### Cube_05:Hive 表 (失败) 分区
+Rows 的分布为:
+
+Table | Rows
+--- | --- 
+Fact Table | 3.900.000 
+Dim Date | 2.100 
+
+build flat 表的查询语句 (简单版本):
+{% highlight Groff markup %}
+SELECT
+	DIM_DATE.X
+	,DIM_DATE.Y
+	,FACT_POSICIONES.BALANCE
+FROM FACT_POSICIONES INNER JOIN DIM_DATE
+	ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA
+WHERE (ID_DATE >= '2016-12-08' AND ID_DATE < '2016-12-23')
+{% endhighlight %}
+
+这里存在的问题是,Hive 只使用 1 个 Map 创建 Flat 表。重要的是我们要改变这种行为。解决方案是在同一列将 DIM 和 FACT 分区
+
+* 选项 1:在 Hive 表中使用 id_date 作为分区列。这有一个大问题:Hive metastore 意味着几百个分区而不是几千个 (在 [Hive 9452](https://issues.apache.org/jira/browse/HIVE-9452) 中有一个解决该问题的方法但现在还未完成)
+* 选项 2:生成一个新列如 Monthslot。
+
+![]( /images/tutorial/2.0/cube_build_performance/09.png)
+
+
+为 dim 和 fact 表添加同一个列
+
+现在,用这个新的条件 join 表来更新数据模型
+
+![]( /images/tutorial/2.0/cube_build_performance/10.png)
+
+	
+生成 flat 表的新查询类似于:
+{% highlight Groff markup %}
+SELECT *
+	FROM FACT_POSICIONES INNER JOIN DIM_DATE
+		ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA AND FACT_POSICIONES.MONTHSLOT = DIM_DATE.MONTHSLOT
+{% endhighlight %}
+
+用这个数据模型 rebuild 新 cube
+
+结果,性能更糟了 :(。尝试了几种方法后,还是没找到解决方案
+
+![]( /images/tutorial/2.0/cube_build_performance/11.png)
+
+
+问题是分区没有被用来生成几个 Mappers
+
+![]( /images/tutorial/2.0/cube_build_performance/12.png)
+
+	
+(我和 ShaoFeng Shi 检查了这个问题。他认为问题是这里只有很少的 rows 而且我们不是使用的真实的 Hadoop 集群。请看这个 [tech note](http://kylin.apache.org/docs16/howto/howto_optimize_build.html))。
+	
+
+### 结果摘要
+
+![]( /images/tutorial/2.0/cube_build_performance/13.png)
+
+
+调整进度如下:
+* Hive 输入表压缩了
+* HBase 输出压缩了
+* 应用了 cardinality (Joint,Derived,Hierarchy 和 Mandatory) 减少的技术
+* 为每一个 Dim 个性化 Dim 编码器并选择了 Dim 在 Row Key 中最好的顺序
+
+
+
+现在,这里有三种类型的 cubes:
+* 在 dimensions 中使用低 cardinality 的 Cubes(如 cube 4,大多数时间用在 flat 表这一步)
+* 在 dimensions 中使用高 cardinality 的 Cubes(如 cube 6,大多数时间用于 Build cube,flat 表这一步少于 10%)
+* 第三种类型,超高 cardinality (UHC) 其超出了本文的范围
+
+
+### Cube 6:用高 cardinality Dimensions 的 Cube
+
+![]( /images/tutorial/2.0/cube_build_performance/22.png)
+
+在这个用例中 **72%** 的时间用来 build Cube
+
+这一步是 MapReduce 任务,您可以在 ![alt text](/images/tutorial/2.0/cube_build_performance/23.png) > ![alt text](/images/tutorial/2.0/cube_build_performance/24.png) 看 YARN 中关于这一步的日志
+
+Map – Reduce 的性能怎样能提升呢? 简单的方式是增加 Mappers 和 Reducers (等于增加了并行数) 的数量。
+
+
+![]( /images/tutorial/2.0/cube_build_performance/25.png)
+
+
+**注意:** YARN / MapReduce 有很多参数配置和适应您的系统。这里的重点只在于小部分。 
+
+(在我的系统中我可以分配 12 – 14 GB 和 8 cores 给 YARN 资源):
+
+* yarn.nodemanager.resource.memory-mb = 15 GB
+* yarn.scheduler.maximum-allocation-mb = 8 GB
+* yarn.nodemanager.resource.cpu-vcores = 8 cores
+有了这些配置,我们并行度的最大理论值为 8。然而这里有一个问题:“3600 秒后超时了”
+
+![]( /images/tutorial/2.0/cube_build_performance/26.png)
+
+
+参数 mapreduce.task.timeout(默认为 1 小时)定义了 Application Master (AM) 在没有收到 Yarn Container 的 ACK 的情况下可等待的最大时间。一旦超过这个时间,AM 会杀死 container 并重试 4 次(结果都相同)
+
+问题在哪? 问题是 4 个 mappers 启动了,但每一个 mapper 需要超过 4 GB 完成
+
+* 解决方案 1:增加 RAM 给 YARN 
+* 解决方案 2:增加在 Mapper 步骤中使用的 vCores 数量来减少 RAM 使用
+* 解决方案 3:您可以通过 node 为 YARN 使用最大的 RAM(yarn.nodemanager.resource.memory-mb) 并为每一个 container 使用最小的 RAM 进行实验(yarn.scheduler.minimum-allocation-mb)。如果您为每一个 container 增加了最小的 RAM,YARN 将会减少 Mappers 的数量。
+
+![]( /images/tutorial/2.0/cube_build_performance/27.png)
+
+
+在最后两个用例中结果是相同的:减少并行化的级别 ==> 
+* 现在我们只启动 3 个 mappers 且同时启动,第四个必须等待空闲时间
+* 3 个 mappers 将 ram 分散在它们之间,结果它们就会有足够的 ram 完成 task
+
+一个正常的 “Build Cube” 步骤中您将会在 YARN 日志中看到相似的消息:
+
+![]( /images/tutorial/2.0/cube_build_performance/28.png)
+
+
+如果您没有周期性的看见这个,也许您在内存中遇到了瓶颈。
+
+
+
+### Cube 7:提升 cube 响应时间
+我们尝试使用不同 aggregations groups 来提升一些非常重要 Dim 或有高 cardinality 的 Dim 的查询性能。
+
+在我们的用例中定义 3 个 Aggregations Groups:
+1. “Normal cube”
+2. 使用日期 Dim 和 Currency 的 Cube(就像 mandatory)
+3. 使用日期 Dim 和 Carteras_Desc 的 Cube(就像 mandatory)
+
+![]( /images/tutorial/2.0/cube_build_performance/29.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/30.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/31.png)
+
+
+
+比较未使用 / 使用 AGGs:
+
+![]( /images/tutorial/2.0/cube_build_performance/32.png)
+
+
+使用多于 3% 的时间 build cube 以及 0.6% 的 space,使用 currency 或 Carteras_Desc 的查询会快很多。
+
+
+
+
diff --git a/website/_docs24/tutorial/cube_build_performance.md b/website/_docs24/tutorial/cube_build_performance.md
new file mode 100644
index 0000000..e8e4e99
--- /dev/null
+++ b/website/_docs24/tutorial/cube_build_performance.md
@@ -0,0 +1,266 @@
+---
+layout: docs
+title: Cube Build Tuning
+categories: tutorial
+permalink: /docs24/tutorial/cube_build_performance.html
+---
+ *This tutorial is a step-by-step example of how to optimize a cube build.* 
+ 
+In this scenario we're trying to optimize a very simple cube, with 1 fact table and 1 lookup table (the date dimension). Before doing real tuning, please get an overall understanding of the cube build process from [Optimize Cube Build](/docs20/howto/howto_optimize_build.html).
+
+![]( /images/tutorial/2.0/cube_build_performance/01.png)
+
+The baseline is:
+
+* One measure: Balance, always calculating Max, Min and Count
+* All Dim_date (10 items) will be used as dimensions 
+* Input is a Hive CSV external table 
+* Output is a Cube in HBase without compression 
+
+With this configuration, the result is: 13 minutes to build a cube of 20 MB (Cube_01)
+
+### Cube_02: Reduce combinations
+To make the first improvement, use Joint and Hierarchy on Dimensions to reduce the combinations (number of cuboids).
+
+Put all the IDs and Texts of Month, Week, Weekday and Quarter together using a Joint Dimension
+
+![]( /images/tutorial/2.0/cube_build_performance/02.png)
+
+	
+Define Id_date and Year as a Hierarchy Dimension
+
+This reduces the size down to 0.72 MB and the time to 5 min
+
+[Kylin 2149](https://issues.apache.org/jira/browse/KYLIN-2149): ideally, these hierarchies could also be defined:
+* Id_weekday > Id_date
+* Id_Month > Id_date
+* Id_Quarter > Id_date
+* Id_week > Id_date
+
+But for now, it is impossible to use Joint and Hierarchy together on the same dimension.
+
+
+### Cube_03: Compress output
+To make the next improvement, compress HBase Cube with Snappy:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/03.png)
+
+Another option is Gzip:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/04.png)
+
+
+The results of compressing the output are:
+
+![alt text](/images/tutorial/2.0/cube_build_performance/05.png)
+
+The difference between Snappy and Gzip is less than 1% in time but 18% in size
+
+
+### Cube_04: Compress Hive table
+The time distribution is like this:
+
+![]( /images/tutorial/2.0/cube_build_performance/06.png)
+
+
+Grouping the detailed times by concept:
+
+![]( /images/tutorial/2.0/cube_build_performance/07.png)
+
+67% is used to build / process the flat table, versus 30% to build the cube
+
+A lot of time is used in the first steps.
+
+This time distribution is typical of a cube with few measures and few dimensions (or very optimized ones)
+
+
+Try using the ORC format and compression (Snappy) on the Hive input table:
+
+![]( /images/tutorial/2.0/cube_build_performance/08.png)
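+
+A minimal sketch of that conversion in Hive (the new table name is illustrative), creating an ORC copy of the input table with Snappy compression:
+
+{% highlight Groff markup %}
+CREATE TABLE fact_posiciones_orc
+STORED AS ORC TBLPROPERTIES ('orc.compress'='SNAPPY')
+AS SELECT * FROM fact_posiciones;
+{% endhighlight %}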
+
+
+The time in the first three steps (Flat Table) has been improved by half.
+
+Other columnar formats can be tested:
+
+![]( /images/tutorial/2.0/cube_build_performance/19.png)
+
+
+* ORC
+* ORC compressed with Snappy
+
+But the results are worse than when using Sequence file.
+
+See comments about this here: [Shaofengshi in MailList](http://apache-kylin.74782.x6.nabble.com/Kylin-Performance-td6713.html#a6767)
+
+The second step is to redistribute the flat Hive table:
+
+![]( /images/tutorial/2.0/cube_build_performance/20.png)
+
+It is a simple row count, and two approximations can be made:
+* If it doesn’t need to be accurate, the rows of the fact table can be counted → this can be performed in parallel with Step 1 (and 99% of the time it will be accurate)
+
+![]( /images/tutorial/2.0/cube_build_performance/21.png)
+
+
+* In future versions (KYLIN-2165, v2.0), this step will be implemented using Hive table statistics.
+
+
+
+### Cube_05: Partition Hive table (fail)
+The distribution of rows is:
+
+Table | Rows
+--- | --- 
+Fact Table | 3.900.000 
+Dim Date | 2.100 
+
+And the query (the simplified version) to build the flat table is:
+{% highlight Groff markup %}
+SELECT
+	DIM_DATE.X
+	,DIM_DATE.Y
+	,FACT_POSICIONES.BALANCE
+FROM FACT_POSICIONES INNER JOIN DIM_DATE
+	ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA
+WHERE (ID_DATE >= '2016-12-08' AND ID_DATE < '2016-12-23')
+{% endhighlight %}
+
+The problem here is that Hive is only using 1 mapper to create the flat table. It is important to change this behavior. The solution is to partition the DIM and FACT tables on the same column
+
+* Option 1: Use id_date as a partition column on the Hive table. This has a big problem: the Hive metastore is meant for a few hundred partitions, not thousands (in [Hive 9452](https://issues.apache.org/jira/browse/HIVE-9452) there is an idea to solve this, but it isn’t finished yet)
+* Option 2: Generate a new column for this purpose, like Monthslot.
+
+![]( /images/tutorial/2.0/cube_build_performance/09.png)
+
+
+Add the same column to the dim and fact tables
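+
+A minimal sketch of deriving such a column in Hive (names illustrative; here Monthslot is just the year-month prefix of the date):
+
+{% highlight Groff markup %}
+CREATE TABLE fact_posiciones_slot
+AS SELECT f.*, substr(f.id_date, 1, 7) AS monthslot
+FROM fact_posiciones f;
+{% endhighlight %}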
+
+Now, update the data model with this new join condition
+
+![]( /images/tutorial/2.0/cube_build_performance/10.png)
+
+	
+The new query to generate the flat table will be similar to:
+{% highlight Groff markup %}
+SELECT *
+	FROM FACT_POSICIONES INNER JOIN DIM_DATE
+		ON FACT_POSICIONES.ID_FECHA = DIM_DATE.ID_FECHA AND FACT_POSICIONES.MONTHSLOT = DIM_DATE.MONTHSLOT
+{% endhighlight %}
+
+Rebuild the new cube with this data model
+
+As a result, the performance got worse :(. After several attempts, no solution was found
+
+![]( /images/tutorial/2.0/cube_build_performance/11.png)
+
+
+The problem is that partitions were not used to generate several Mappers
+
+![]( /images/tutorial/2.0/cube_build_performance/12.png)
+
+	
+(I checked this issue with ShaoFeng Shi. He thinks the problem is that there are too few rows and we are not working on a real Hadoop cluster. See this [tech note](http://kylin.apache.org/docs16/howto/howto_optimize_build.html)).
+	
+
+### Summary of results
+
+![]( /images/tutorial/2.0/cube_build_performance/13.png)
+
+
+The tuning process has been:
+* Hive Input tables compressed
+* HBase Output compressed
+* Applied cardinality-reduction techniques (Joint, Derived, Hierarchy and Mandatory)
+* Customized the Dim encoder for each Dim and chose the best order of Dims in the Row Key
+
+
+
+Now, there are three types of cubes:
+* Cubes with low cardinality in their dimensions (like cube 4: most of the time is spent in the flat table steps)
+* Cubes with high cardinality in their dimensions (like cube 6: most of the time is spent on building the cube, and the flat table steps take less than 10%)
+* The third type, ultra-high cardinality (UHC), which is outside the scope of this article
+
+
+### Cube 6: Cube with high cardinality Dimensions
+
+![]( /images/tutorial/2.0/cube_build_performance/22.png)
+
+In this case, **72%** of the time is used to build the cube
+
+This step is a MapReduce task; you can see the YARN log of this step via ![alt text](/images/tutorial/2.0/cube_build_performance/23.png) > ![alt text](/images/tutorial/2.0/cube_build_performance/24.png) 
+
+How can the performance of Map – Reduce be improved? The easy way is to increase the number of Mappers and Reducers (= increase parallelism).
+
+
+![]( /images/tutorial/2.0/cube_build_performance/25.png)
+
+
+**NOTE:** YARN / MapReduce have a lot of parameters to configure and adapt to your system. The focus here is only on a small part. 
+
+(In my system I can assign 12 – 14 GB and 8 cores to YARN Resources):
+
+* yarn.nodemanager.resource.memory-mb = 15 GB
+* yarn.scheduler.maximum-allocation-mb = 8 GB
+* yarn.nodemanager.resource.cpu-vcores = 8 cores
+With this config, our maximum theoretical degree of parallelism is 8. However, this has a problem: “Timed out after 3600 secs”
+
+![]( /images/tutorial/2.0/cube_build_performance/26.png)
+
+
+The parameter mapreduce.task.timeout (1 hour by default) defines the maximum time that the Application Master (AM) will wait without an ACK from the YARN container. Once this time passes, the AM kills the container and retries 4 times (all with the same result)
+
+Where is the problem? The problem is that 4 mappers were started, but each mapper needed more than 4 GB to finish
+
+* Solution 1: add more RAM to YARN 
+* Solution 2: increase the number of vCores used in the Mapper step to reduce the RAM used
+* Solution 3: play with the max RAM for YARN per node (yarn.nodemanager.resource.memory-mb) and experiment with the minimum RAM per container (yarn.scheduler.minimum-allocation-mb). If you increase the minimum RAM per container, YARN will reduce the number of Mappers (see the sketch below)
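+
+A minimal sketch of solution 3 in yarn-site.xml (the value is illustrative; tune it to your cluster):
+
+{% highlight Groff markup %}
+    <property>
+        <name>yarn.scheduler.minimum-allocation-mb</name>
+        <value>4096</value>
+        <description>Raising the per-container minimum makes YARN hand out fewer, larger containers</description>
+    </property>
+{% endhighlight %}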
+
+![]( /images/tutorial/2.0/cube_build_performance/27.png)
+
+
+In the last two cases the result is the same: reduce the level of parallelism ==> 
+* Now only 3 mappers start at the same time; the fourth must wait for a free slot
+* The first three mappers spread the RAM among themselves, and as a result they have enough RAM to finish the task
+
+During a normal “Build Cube” step you will see similar messages in the YARN log:
+
+![]( /images/tutorial/2.0/cube_build_performance/28.png)
+
+
+If you don’t see this periodically, perhaps you have a memory bottleneck.
+
+
+
+### Cube 7: Improve cube response time
+We can try to use different aggregation groups to improve the query performance of some very important dimensions or dimensions with high cardinality.
+
+In our case we define 3 Aggregation Groups: 
+1. “Normal cube”
+2. Cube with Date Dim and Currency (as mandatory)
+3. Cube with Date Dim and Carteras_Desc (as mandatory)
+
+![]( /images/tutorial/2.0/cube_build_performance/29.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/30.png)
+
+
+![]( /images/tutorial/2.0/cube_build_performance/31.png)
+
+
+
+Compare without / with AGGs:
+
+![]( /images/tutorial/2.0/cube_build_performance/32.png)
+
+
+Now it takes 3% more time to build the cube and 0.6% more space, but queries by Currency or Carteras_Desc will be much faster.
+
+
+
+
diff --git a/website/_docs24/tutorial/cube_spark.cn.md b/website/_docs24/tutorial/cube_spark.cn.md
new file mode 100644
index 0000000..64c9095
--- /dev/null
+++ b/website/_docs24/tutorial/cube_spark.cn.md
@@ -0,0 +1,165 @@
+---
+layout: docs-cn
+title:  "用 Spark 构建 Cube"
+categories: tutorial
+permalink: /cn/docs24/tutorial/cube_spark.html
+---
+Kylin v2.0 引入了 Spark cube engine,在 build cube 步骤中使用 Apache Spark 代替 MapReduce;您可以通过查看 [这篇博客](/blog/2017/02/23/by-layer-spark-cubing/) 的图片了解整体情况。当前的文档使用样例 cube 对如何尝试 new engine 进行了演示。
+
+
+## 准备阶段
+您需要一个安装了 Kylin v2.1.0 及以上版本的 Hadoop 环境。使用 Hortonworks HDP 2.4 Sandbox VM,其中 Hadoop 组件和 Hive/HBase 已经启动了。 
+
+## 安装 Kylin v2.1.0 及以上版本
+
+从 Kylin 的下载页面下载适用于 HBase 1.x 的 Kylin v2.1.0,然后在 */usr/local/* 文件夹中解压 tar 包:
+
+{% highlight Groff markup %}
+
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.1.0/apache-kylin-2.1.0-bin-hbase1x.tar.gz -P /tmp
+
+tar -zxvf /tmp/apache-kylin-2.1.0-bin-hbase1x.tar.gz -C /usr/local/
+
+export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin-hbase1x
+{% endhighlight %}
+
+## 准备 "kylin.env.hadoop-conf-dir"
+
+为使 Spark 运行在 Yarn 上,需指定 **HADOOP_CONF_DIR** 环境变量,其是一个包含 Hadoop(客户端) 配置文件的目录,通常是 `/etc/hadoop/conf`。
+
+通常 Kylin 会在启动时从 Java classpath 上检测 Hadoop 配置目录,并使用它来启动 Spark。 如果您的环境中未能正确发现此目录,那么可以显式地指定此目录:在 `kylin.properties` 中设置属性 "kylin.env.hadoop-conf-dir" 好让 Kylin 知道这个目录:
+
+{% highlight Groff markup %}
+kylin.env.hadoop-conf-dir=/etc/hadoop/conf
+{% endhighlight %}
+
+## 检查 Spark 配置
+
+Kylin 在 $KYLIN_HOME/spark 中嵌入一个 Spark binary (v2.1.2),所有使用 *"kylin.engine.spark-conf."* 作为前缀的 Spark 配置属性都能在 $KYLIN_HOME/conf/kylin.properties 中进行管理。这些属性当运行提交 Spark job 时会被提取并应用;例如,如果您配置 "kylin.engine.spark-conf.spark.executor.memory=4G",Kylin 将会在执行 "spark-submit" 操作时使用 "--conf spark.executor.memory=4G" 作为参数。
+
+运行 Spark cubing 前,建议查看一下这些配置并根据您集群的情况进行自定义。下面是建议配置,开启了 Spark 动态资源分配:
+
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
+kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
+kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=1000
+kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout=300
+kylin.engine.spark-conf.spark.yarn.queue=default
+kylin.engine.spark-conf.spark.driver.memory=2G
+kylin.engine.spark-conf.spark.executor.memory=4G
+kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=1024
+kylin.engine.spark-conf.spark.executor.cores=1
+kylin.engine.spark-conf.spark.network.timeout=600
+kylin.engine.spark-conf.spark.shuffle.service.enabled=true
+#kylin.engine.spark-conf.spark.executor.instances=1
+kylin.engine.spark-conf.spark.eventLog.enabled=true
+kylin.engine.spark-conf.spark.hadoop.dfs.replication=2
+kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress=true
+kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
+kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
+kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
+kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
+
+
+## uncomment for HDP
+#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+
+{% endhighlight %}
+
+为了在 Hortonworks 平台上运行,需要将 "hdp.version" 指定为 Yarn 容器的 Java 选项,因此请取消 kylin.properties 的最后三行的注释。 
+
+除此之外,为了避免重复上传 Spark jar 包到 Yarn,您可以手动上传一次,然后配置 jar 包的 HDFS 路径;请注意,HDFS 路径必须是全路径名。
+
+{% highlight Groff markup %}
+jar cv0f spark-libs.jar -C $KYLIN_HOME/spark/jars/ .
+hadoop fs -mkdir -p /kylin/spark/
+hadoop fs -put spark-libs.jar /kylin/spark/
+{% endhighlight %}
+
+然后,要在 kylin.properties 中进行如下配置:
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.archive=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-libs.jar
+{% endhighlight %}
+
+所有 "kylin.engine.spark-conf.*" 参数都可以在 Cube 或 Project 级别进行重写,这为用户提供了灵活性。
+
+## 创建和修改样例 cube
+
+运行 sample.sh 创建样例 cube,然后启动 Kylin 服务器:
+
+{% highlight Groff markup %}
+
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+
+{% endhighlight %}
+
+Kylin 启动后,访问 Kylin 网站,在 "Advanced Setting" 页,编辑名为 "kylin_sales" 的 cube,将 "Cube Engine" 由 "MapReduce" 换成 "Spark":
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
+
+点击 "Next" 进入 "Configuration Overwrites" 页面,点击 "+Property" 添加属性 "kylin.engine.spark.rdd-partition-cut-mb" 其值为 "500" (理由如下):
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
+
+样例 cube 有两个耗尽内存的度量: "COUNT DISTINCT" 和 "TOPN(100)";当源数据较小时,他们的大小估计的不太准确: 预估的大小会比真实的大很多,导致了更多的 RDD partitions 被切分,使得 build 的速度降低。500 对于其是一个较为合理的数字。点击 "Next" 和 "Save" 保存 cube。
+
+对于没有"COUNT DISTINCT" 和 "TOPN" 的 cube,请保留默认配置。
+
+
+## 用 Spark 构建 Cube
+
+点击 "Build",选择当前日期为 end date。Kylin 会在 "Monitor" 页生成一个构建 job,第 7 步是 Spark cubing。Job engine 开始按照顺序执行每一步。 
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
+
+当 Kylin 执行这一步时,您可以监视 Yarn 资源管理器里的状态. 点击 "Application Master" 链接将会打开 Spark 的 UI 网页,它会显示每一个 stage 的进度以及详细的信息。
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
+
+
+所有步骤成功执行后,Cube 的状态变为 "Ready" 且您可以像往常那样进行查询。
+
+## 疑难解答
+
+当出现 error,您可以首先查看 "logs/kylin.log". 其中包含 Kylin 执行的所有 Spark 命令,例如:
+
+{% highlight Groff markup %}
+2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/usr/local/apache-kylin-2.1.0-bin-hbase1x/hadoop-conf && /usr/local/apache-kylin-2.1.0-bin-hbase1x/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-his [...]
+
+{% endhighlight %}
+
+您可以拷贝 cmd 以便在 shell 中手动执行,然后快速进行参数调整;执行期间,您可以访问 Yarn 资源管理器查看更多的消息。如果 job 已经完成了,您可以在 Spark history server 中查看历史信息。 
+
+Kylin 默认将历史信息输出到 "hdfs:///kylin/spark-history",您需要在该目录下启动 Spark history server,或在 conf/kylin.properties 中使用参数 "kylin.engine.spark-conf.spark.eventLog.dir" 和 "kylin.engine.spark-conf.spark.history.fs.logDirectory" 替换为您已存在的 Spark history server 的事件目录。
+
+下面的命令可以在 Kylin 的输出目录下启动一个 Spark history server 实例,运行前请确保 sandbox 中已存在的 Spark history server 关闭了:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
+{% endhighlight %}
+
+浏览器访问 "http://sandbox:18080" 将会显示 job 历史:
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
+
+点击一个具体的 job,运行时的具体信息将会展示,该信息对疑难解答和性能调整有极大的帮助。
+
+## 进一步
+
+如果您是 Kylin 的管理员但是对于 Spark 是新手,建议您浏览 [Spark 文档](https://spark.apache.org/docs/2.1.2/),别忘记相应地去更新配置。您可以开启 Spark 的 [Dynamic Resource Allocation](https://spark.apache.org/docs/2.1.2/job-scheduling.html#dynamic-resource-allocation) ,以便其对于不同的工作负载能自动伸缩。Spark 性能依赖于集群的内存和 CPU 资源,当有复杂数据模型和巨大的数据集一次构建时 Kylin 的 Cube 构建将会是一项繁重的任务。如果您的集群资源不能够执行,Spark executors 就会抛出如 "OutOfMemorry" 这样的错误,因此请合理的使用。对于有 UHC dimension,过多组合 (例如,一个 cube 超过 12 dimensions),或耗尽内存的度量 (Count Distinct,Top-N) 的 Cube,建议您使用 MapReduce e [...]
+
+如果您有任何问题,意见,或 bug 修复,欢迎在 dev@kylin.apache.org 中讨论。
diff --git a/website/_docs24/tutorial/cube_spark.md b/website/_docs24/tutorial/cube_spark.md
new file mode 100644
index 0000000..8b8459a
--- /dev/null
+++ b/website/_docs24/tutorial/cube_spark.md
@@ -0,0 +1,159 @@
+---
+layout: docs
+title:  Build Cube with Spark
+categories: tutorial
+permalink: /docs24/tutorial/cube_spark.html
+---
+Kylin v2.0 introduces the Spark cube engine, which uses Apache Spark to replace MapReduce in the build cube step; you can check [this blog](/blog/2017/02/23/by-layer-spark-cubing/) for an overall picture. This document uses the sample cube to demonstrate how to try the new engine.
+
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment which has Kylin v2.4.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM, in which the Hadoop components as well as Hive/HBase have already been started. 
+
+## Install Kylin v2.4.0 or above
+
+Download the Kylin binary for HBase 1.x from Kylin's download page, and then uncompress the tar ball into */usr/local/* folder:
+
+{% highlight Groff markup %}
+
+wget http://www-us.apache.org/dist/kylin/apache-kylin-2.4.0/apache-kylin-2.4.0-bin-hbase1x.tar.gz -P /tmp
+
+tar -zxvf /tmp/apache-kylin-2.4.0-bin-hbase1x.tar.gz -C /usr/local/
+
+export KYLIN_HOME=/usr/local/apache-kylin-2.4.0-bin-hbase1x
+{% endhighlight %}
+
+## Prepare "kylin.env.hadoop-conf-dir"
+
+To run Spark on YARN, the **HADOOP_CONF_DIR** environment variable must be specified; it is the directory that contains the (client side) configuration files for Hadoop. In many Hadoop distributions the directory is "/etc/hadoop/conf"; Kylin can automatically detect this folder from the Hadoop configuration, so by default you don't need to set this property. If your configuration files are not in the default folder, please set this property explicitly.
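+
+If needed, set it explicitly in kylin.properties:
+
+{% highlight Groff markup %}
+kylin.env.hadoop-conf-dir=/etc/hadoop/conf
+{% endhighlight %}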
+
+## Check Spark configuration
+
+Kylin embeds a Spark binary (v2.1.2) in $KYLIN_HOME/spark; all the Spark configurations can be managed in $KYLIN_HOME/conf/kylin.properties with the prefix *"kylin.engine.spark-conf."*. These properties will be extracted and applied when submitting the Spark job; e.g., if you configure "kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf spark.executor.memory=4G" as a parameter when executing "spark-submit".
+
+Before you run Spark cubing, we suggest taking a look at these configurations and customizing them for your cluster. Below are the recommended configurations, with Spark dynamic resource allocation enabled:
+
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
+kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
+kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=1000
+kylin.engine.spark-conf.spark.dynamicAllocation.executorIdleTimeout=300
+kylin.engine.spark-conf.spark.yarn.queue=default
+kylin.engine.spark-conf.spark.driver.memory=2G
+kylin.engine.spark-conf.spark.executor.memory=4G
+kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=1024
+kylin.engine.spark-conf.spark.executor.cores=1
+kylin.engine.spark-conf.spark.network.timeout=600
+kylin.engine.spark-conf.spark.shuffle.service.enabled=true
+#kylin.engine.spark-conf.spark.executor.instances=1
+kylin.engine.spark-conf.spark.eventLog.enabled=true
+kylin.engine.spark-conf.spark.hadoop.dfs.replication=2
+kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress=true
+kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
+kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
+kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
+kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
+
+## uncomment for HDP
+#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+
+{% endhighlight %}
+
+For running on the Hortonworks platform, you need to specify "hdp.version" in the Java options for the YARN containers, so please uncomment the last three lines in kylin.properties. 
+
+Besides, in order to avoid repeatedly uploading Spark jars to YARN, you can manually upload them once and then configure the jar's HDFS location; please note, the HDFS location needs to be a fully qualified name.
+
+{% highlight Groff markup %}
+jar cv0f spark-libs.jar -C $KYLIN_HOME/spark/jars/ .
+hadoop fs -mkdir -p /kylin/spark/
+hadoop fs -put spark-libs.jar /kylin/spark/
+{% endhighlight %}
+
+After doing that, the config in kylin.properties will be:
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.archive=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-libs.jar
+kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+{% endhighlight %}
+
+All the "kylin.engine.spark-conf.*" parameters can be overwritten at Cube or Project level, this gives more flexibility to the user.
+
+## Create and modify sample cube
+
+Run the sample.sh to create the sample cube, and then start Kylin server:
+
+{% highlight Groff markup %}
+
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+
+{% endhighlight %}
+
+After Kylin is started, access the Kylin web GUI and edit the "kylin_sales" cube; in the "Advanced Setting" page, change the "Cube Engine" from "MapReduce" to "Spark":
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
+
+Click "Next" to the "Configuration Overwrites" page, click "+Property" to add property "kylin.engine.spark.rdd-partition-cut-mb" with value "500" (reasons below):
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
+
+The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a "TOPN(100)". Their size estimation can be inaccurate when the source data is small: the estimated size is much larger than the real size, which causes many more RDD partitions to be split and slows down the build. Here 500 is a more reasonable number for it. Click "Next" and "Save" to save the cube. For cubes without "COUNT DISTINCT" and "TOPN", please keep the default configuration.
+
+
+## Build Cube with Spark
+
+Click "Build", select current date as the build end date. Kylin generates a build job in the "Monitor" page, in which the 7th step is the Spark cubing. The job engine starts to execute the steps in sequence. 
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
+
+When Kylin executes this step, you can monitor the status in the YARN resource manager. Clicking the "Application Master" link will open the Spark web UI, which shows the progress of each stage and detailed information.
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
+
+
+After all steps are successfully executed, the cube becomes "Ready" and you can query it as normal.
+
+## Troubleshooting
+
+When getting error, you should check "logs/kylin.log" firstly. There has the full Spark command that Kylin executes, e.g:
+
+{% highlight Groff markup %}
+2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/etc/hadoop/conf && /usr/local/apache-kylin-2.4.0-bin-hbase1x/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf spark.driver.extraJavaOp [...]
+
+{% endhighlight %}
+
+You can copy the command and execute it manually in a shell, then tune the parameters quickly; during the execution, you can access the YARN resource manager to check more. If the job has already finished, you can check the history info in the Spark history server. 
+
+By default Kylin outputs the history to "hdfs:///kylin/spark-history", you need start Spark history server on that directory, or change to use your existing Spark history server's event directory in conf/kylin.properties with parameter "kylin.engine.spark-conf.spark.eventLog.dir" and "kylin.engine.spark-conf.spark.history.fs.logDirectory".
+
+The following command will start a Spark history server instance on Kylin's output directory; before running it, make sure you have stopped the existing Spark history server in the sandbox:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
+{% endhighlight %}
+
+In web browser, access "http://sandbox:18080" it shows the job history:
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
+
+Click a specific job to see its detailed runtime information, which is very helpful for troubleshooting and performance tuning.
+
+## Go further
+
+If you're a Kylin administrator but new to Spark, suggest you go through [Spark documents](https://spark.apache.org/docs/2.1.0/), and don't forget to update the configurations accordingly. You can enable Spark [Dynamic Resource Allocation](https://spark.apache.org/docs/2.1.0/job-scheduling.html#dynamic-resource-allocation) so that it can auto scale/shrink for different work load. Spark's performance relies on Cluster's memory and CPU resource, while Kylin's Cube build is a heavy task whe [...]
+
+If you have any question, comment, or bug fix, welcome to discuss in dev@kylin.apache.org.
diff --git a/website/_docs24/tutorial/cube_streaming.cn.md b/website/_docs24/tutorial/cube_streaming.cn.md
new file mode 100644
index 0000000..e913364
--- /dev/null
+++ b/website/_docs24/tutorial/cube_streaming.cn.md
@@ -0,0 +1,219 @@
+---
+layout: docs-cn
+title:  "从 Kafka 流构建 Cube"
+categories: tutorial
+permalink: /cn/docs24/tutorial/cube_streaming.html
+---
+Kylin v1.6 发布了可扩展的 streaming cubing 功能,它利用 Hadoop 消费 Kafka 数据的方式构建 cube,您可以查看 [这篇博客](/blog/2016/10/18/new-nrt-streaming/) 以进行高级别的设计。本文档是一步接一步的阐述如何创建和构建样例 cube 的教程;
+
+## 前期准备
+您需要一个安装了 kylin v1.6.0 或以上版本和可运行的 Kafka(v0.10.0 或以上版本)的 Hadoop 环境;先前的 Kylin 版本有一定的问题因此请首先升级您的 Kylin 实例。
+
+本教程中我们使用 Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) 作为环境。
+
+## 安装 Kafka 0.10.0.0 和 Kylin
+不要使用 HDP 2.2.4 自带的 Kafka,因为它太旧了,如果其运行着请先停掉。
+{% highlight Groff markup %}
+curl -s https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
+
+cd /usr/local/kafka_2.10-0.10.0.0/
+
+bin/kafka-server-start.sh config/server.properties &
+
+{% endhighlight %}
+
+从下载页下载 Kylin v1.6,在 /usr/local/ 文件夹中解压 tar 包。
+
+## 创建样例 Kafka topic 并填充数据
+
+创建样例名为 "kylin_streaming_topic" 具有三个分区的 topic:
+
+{% highlight Groff markup %}
+
+bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylin_streaming_topic
+Created topic "kylin_streaming_topic".
+{% endhighlight %}
+
+将样例数据放入 topic;Kylin 有一个实用类可以做这项工作;
+
+{% highlight Groff markup %}
+export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
+export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin
+
+cd $KYLIN_HOME
+./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylin_streaming_topic --broker localhost:9092
+{% endhighlight %}
+
+工具每一秒会向 Kafka 发送 100 条记录。直至本教程结束请让其一直运行。现在您可以用 kafka-console-consumer.sh 查看样例消息:
+
+{% highlight Groff markup %}
+cd $KAFKA_HOME
+bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylin_streaming_topic --from-beginning
+{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
+{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
+
+ {% endhighlight %}
+
+## 用 streaming 定义一张表
+用 "$KYLIN_HOME/bin/kylin.sh start" 启动 Kylin 服务器,输入 http://sandbox:7070/kylin/ 登陆 Kylin Web GUI,选择一个已存在的 project 或创建一个新的 project;点击 "Model" -> "Data Source",点击 "Add Streaming Table" 图标;
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
+
+在弹出的对话框中,输入您从 kafka-console-consumer 中获得的样例记录,点击 ">>" 按钮,Kylin 会解析 JSON 消息并列出所有的消息;
+
+您需要为这个 streaming 数据源起一个逻辑表名;该名字会在后续用于 SQL 查询;这里是在 "Table Name" 字段输入 "STREAMING_SALES_TABLE" 作为样例。
+
+您需要选择一个时间戳字段用来标识消息的时间;Kylin 可以从这列值中获得其他时间值,如 "year_start","quarter_start",这为您构建和查询 cube 提供了更高的灵活性。这里可以查看 "order_time"。您可以取消选择那些 cube 不需要的属性。这里我们保留了所有字段。
+
+注意 Kylin 从 1.6 版本开始支持结构化 (或称为 "嵌入") 消息,会将其转换成一个 flat table structure。默认使用 "_" 作为结构化属性的分隔符。
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
+
+
+点击 "Next"。在这个页面,提供了 Kafka 集群信息;输入 "kylin_streaming_topic" 作为 "Topic" 名;集群有 1 个 broker,其主机名为 "sandbox",端口为 "9092",点击 "Save"。
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
+
+在 "Advanced setting" 部分,"timeout" 和 "buffer size" 是和 Kafka 进行连接的配置,保留它们。 
+
+在 "Parser Setting",Kylin 默认您的消息为 JSON 格式,每一个记录的时间戳列 (由 "tsColName" 指定) 是 bigint (新纪元时间) 类型值;在这个例子中,您只需设置 "tsColumn" 为 "order_time";
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
+
+在现实情况中如果时间戳值为 string 如 "Jul 20,2016 9:59:17 AM",您需要用 "tsParser" 指定解析类和时间模式例如:
+
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
+
+点击 "Submit" 保存设置。现在 "Streaming" 表就创建好了。
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
+
+## 定义数据模型
+有了上一步创建的表,现在我们可以创建数据模型了。步骤和您创建普通数据模型是一样的,但有两个要求:
+
+* Streaming Cube 不支持与 lookup 表进行 join;当定义数据模型时,只选择 fact 表,不选 lookup 表;
+* Streaming Cube 必须进行分区;如果您想要在分钟级别增量的构建 Cube,选择 "MINUTE_START" 作为 cube 的分区日期列。如果是在小时级别,选择 "HOUR_START"。
+
+这里我们选择 13 个 dimension 和 2 个 measure 列:
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
+保存数据模型。
+
+## 创建 Cube
+
+Streaming Cube 和普通的 cube 大致上一样. 有以下几点需要您注意:
+
+* 分区时间列应该是 Cube 的一个 dimension。在 Streaming OLAP 中时间总是一个查询条件,Kylin 利用它来缩小扫描分区的范围。
+* 不要使用 "order\_time" 作为 dimension 因为它非常的精细;建议使用 "mintue\_start","hour\_start" 或其他,取决于您如何检查数据。
+* 定义 "year\_start","quarter\_start","month\_start","day\_start","hour\_start","minute\_start" 作为层级以减少组合计算。
+* 在 "refersh setting" 这一步,创建更多合并的范围,如 0.5 小时,4 小时,1 天,然后是 7 天;这将会帮助您控制 cube segment 的数量。
+* 在 "rowkeys" 部分,拖拽 "minute\_start" 到最上面的位置,对于 streaming 查询,时间条件会一直显示;将其放到前面将会帮助您缩小扫描范围。
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
+
+保存 cube。
+
+## 运行 build
+
+您可以在 web GUI 触发 build,通过点击 "Actions" -> "Build",或用 'curl' 命令发送一个请求到 Kylin RESTful API:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+{% endhighlight %}
+
+请注意 API 终端和普通 cube 不一样 (这个 URL 以 "build2" 结尾)。
+
+这里的 0 表示从最后一个位置开始,9223372036854775807 (Long 类型的最大值) 表示到 Kafka topic 的结束位置。如果这是第一次 build (没有以前的 segment),Kylin 将会寻找 topics 的开头作为开始位置。 
+
+在 "Monitor" 页面,一个新的 job 生成了;等待其直到 100% 完成。
+
+## 点击 "Insight" 标签,编写 SQL 运行,例如:
+
+ {% highlight Groff markup %}
+select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
+ {% endhighlight %}
+
+结果如下。
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
+
+
+## 自动 build
+
+一旦第一个 build 和查询成功了,您可以按照一定的频率调度增量 build。Kylin 将会记录每一个 build 的 offsets;当收到一个 build 请求,它将会从上一个结束的位置开始,然后从 Kafka 获取最新的 offsets。有了 REST API 您可以使用任何像 Linux cron 调度工具触发它:
+
+  {% highlight Groff markup %}
+crontab -e
+*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+ {% endhighlight %}
+
+现在您可以观看 cube 从 streaming 中自动 built。当 cube segments 累积到更大的时间范围,Kylin 将会自动的将其合并到一个更大的 segment 中。
+
+## 疑难解答
+
+ * 运行 "kylin.sh" 时您可能遇到以下错误:
+{% highlight Groff markup %}
+Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
+	at java.lang.Class.getDeclaredMethods0(Native Method)
+	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
+	at java.lang.Class.getMethod0(Class.java:2856)
+	at java.lang.Class.getMethod(Class.java:1668)
+	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
+	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
+Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
+	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
+	... 6 more
+{% endhighlight %}
+
+原因是 Kylin 不能找到正确的 Kafka client jars;确保您设置了正确的 "KAFKA_HOME" 环境变量。
+
+ * "Build Cube" 步骤中的 "killed by admin" 错误 
+
+ 在 Sandbox VM 中,YARN 不能给 MR job 分配请求的内存资源,因为 "inmem" cubing 算法需要更多的内存。您可以通过请求更少的内存来绕过这一步: 编辑 "conf/kylin_job_conf_inmem.xml",将这两个参数改为如下这样:
+
+ {% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1072</value>
+        <description></description>
+    </property>
+
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx800m</value>
+        <description></description>
+    </property>
+ {% endhighlight %}
+
+ * 如果 Kafka 里已经有一组历史 message 且您不想从最开始 build,您可以触发一个调用来将当前的结束位置设为 cube 的开始:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
+{% endhighlight %}
+
+ * 如果一些 build job 出错了并且您将其 discard,Cube 中就会留有一个洞(或称为空隙)。每一次 Kylin 都会从最后的位置 build,您不可期望通过正常的 builds 将洞填补。Kylin 提供了 API 检查和填补洞 
+
+检查洞:
+ {% highlight Groff markup %}
+curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
+如果查询结果是一个空的数组,意味着没有洞;否则,触发 Kylin 填补他们:
+ {% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
diff --git a/website/_docs24/tutorial/cube_streaming.md b/website/_docs24/tutorial/cube_streaming.md
new file mode 100644
index 0000000..4eaf76b
--- /dev/null
+++ b/website/_docs24/tutorial/cube_streaming.md
@@ -0,0 +1,219 @@
+---
+layout: docs
+title:  Scalable Cubing from Kafka
+categories: tutorial
+permalink: /docs24/tutorial/cube_streaming.html
+---
+Kylin v1.6 released the scalable streaming cubing function; it leverages Hadoop to consume the data from Kafka to build the cube. You can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This doc is a step-by-step tutorial illustrating how to create and build a sample cube;
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment which has Kylin v1.6.0 or above installed, and also a Kafka (v0.10.0 or above) running; previous Kylin versions have a couple of issues, so please upgrade your Kylin instance first.
+
+In this tutorial, we will use Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) as the environment.
+
+## Install Kafka 0.10.0.0 and Kylin
+Don't use HDP 2.2.4's built-in Kafka as it is too old; stop it first if it is running.
+{% highlight Groff markup %}
+curl -s https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
+
+cd /usr/local/kafka_2.10-0.10.0.0/
+
+bin/kafka-server-start.sh config/server.properties &
+
+{% endhighlight %}
+
+Download Kylin v1.6 from the download page, and expand the tar ball in the /usr/local/ folder.
+
+## Create sample Kafka topic and populate data
+
+Create a sample topic "kylin_streaming_topic", with 3 partitions:
+
+{% highlight Groff markup %}
+
+bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylin_streaming_topic
+Created topic "kylin_streaming_topic".
+{% endhighlight %}
+
+Put sample data into this topic; Kylin has a utility class which can do this:
+
+{% highlight Groff markup %}
+export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
+export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin
+
+cd $KYLIN_HOME
+./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylin_streaming_topic --broker localhost:9092
+{% endhighlight %}
+
+This tool will send 100 records to Kafka every second. Please keep it running during this tutorial. You can check the sample message with kafka-console-consumer.sh now:
+
+{% highlight Groff markup %}
+cd $KAFKA_HOME
+bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylin_streaming_topic --from-beginning
+{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
+{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
+
+ {% endhighlight %}
+
+## Define a table from streaming
+Start Kylin server with "$KYLIN_HOME/bin/kylin.sh start", login Kylin Web GUI at http://sandbox:7070/kylin/, select an existing project or create a new project; Click "Model" -> "Data Source", then click the icon "Add Streaming Table";
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
+
+In the pop-up dialogue, enter a sample record which you got from the kafka-console-consumer and click the ">>" button; Kylin parses the JSON message and lists all the properties;
+
+You need to give a logical table name for this streaming data source; the name will be used in SQL queries later. Here, enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
+
+You need to select a timestamp field which will be used to identify the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here, check "order_time". You can deselect the properties which are not needed for the cube; here, let's keep all fields.
+
+Notice that Kylin supports structured (or say "embedded") messages from v1.6; it will convert them into a flat table structure. By default it uses "_" as the separator of the structured properties.
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
+
+
+Click "Next". On this page, provide the Kafka cluster information; Enter "kylin_streaming_topic" as "Topic" name; The cluster has 1 broker, whose host name is "sandbox", port is "9092", click "Save".
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
+
+In "Advanced setting" section, the "timeout" and "buffer size" are the configurations for connecting with Kafka, keep them. 
+
+In "Parser Setting", by default Kylin assumes your message is JSON format, and each record's timestamp column (specified by "tsColName") is a bigint (epoch time) value; in this case, you just need set the "tsColumn" to "order_time"; 
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
+
+In a real case, if the timestamp value is a string such as "Jul 20, 2016 9:59:17 AM", you need to specify the parser class with "tsParser" and the time pattern with "tsPattern", like this:
+
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
+
+Click "Submit" to save the configurations. Now a "Streaming" table is created.
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
+
+## Define data model
+With the table defined in the previous step, now we can create the data model. The steps are almost the same as creating a normal data model, but there are two requirements:
+
+* Streaming Cube doesn't support joining with lookup tables; when defining the data model, select only the fact table, no lookup tables;
+* Streaming Cube must be partitioned; if you're going to build the cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column. If at the hour level, select "HOUR_START".
+
+Here we pick 13 dimension columns and 2 measure columns:
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
+Save the data model.
+
+## Create Cube
+
+The streaming Cube is almost the same as a normal cube, but a couple of points need your attention:
+
+* The partition time column should be a dimension of the Cube. In Streaming OLAP the time is always a query condition, and Kylin will leverage this to narrow down the scanned partitions.
+* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest to use "mintue\_start", "hour\_start" or other, depends on how you will inspect the data.
+* Define "year\_start", "quarter\_start", "month\_start", "day\_start", "hour\_start", "minute\_start" as a hierarchy to reduce the combinations to calculate.
+* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 hours, 1 day, and then 7 days; This will help to control the cube segment number.
+* In the "rowkeys" section, drag&drop the "minute\_start" to the head position, as for streaming queries, the time condition is always appeared; putting it to head will help to narrow down the scan range.
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
+
+Save the cube.
+
+## Run a build
+
+You can trigger the build from the web GUI by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+{% endhighlight %}
+
+Please note the API endpoint is different from that of a normal cube (this URL ends with "build2").
+
+Here 0 means starting from the last position, and 9223372036854775807 (Long.MAX_VALUE) means going to the end position of the Kafka topic. If it is the first build (no previous segment), Kylin will seek to the beginning of the topic as the start position. 
+
+In the "Monitor" page, a new job is generated; Wait it 100% finished.
+
+## Click the "Insight" tab, compose a SQL to run, e.g:
+
+ {% highlight Groff markup %}
+select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
+ {% endhighlight %}
+
+The result looks like this:
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
+
+
+## Automate the build
+
+Once the first build and query have succeeded, you can schedule incremental builds at a certain frequency. Kylin records the offsets of each build; when it receives a build request, it will start from the last end position and then seek the latest offsets from Kafka. With the REST API you can trigger builds with any scheduling tool, like Linux cron:
+
+  {% highlight Groff markup %}
+crontab -e
+*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+ {% endhighlight %}
+
+Now you can sit back and watch the cube be automatically built from streaming. And when the cube segments accumulate into a larger time range, Kylin will automatically merge them into a larger segment.
+
+## Troubleshooting
+
+ * You may encounter the following error when running "kylin.sh":
+{% highlight Groff markup %}
+Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
+	at java.lang.Class.getDeclaredMethods0(Native Method)
+	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
+	at java.lang.Class.getMethod0(Class.java:2856)
+	at java.lang.Class.getMethod(Class.java:1668)
+	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
+	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
+Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
+	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
+	... 6 more
+{% endhighlight %}
+
+The reason is that Kylin couldn't find the proper Kafka client jars; make sure you have properly set the "KAFKA_HOME" environment variable.
+
+ * Get "killed by admin" error in the "Build Cube" step
+
+ Within a sandbox VM, YARN may not allocate the requested memory to the MR job, as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml" and change the following two parameters like this:
+
+ {% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1072</value>
+        <description></description>
+    </property>
+
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx800m</value>
+        <description></description>
+    </property>
+ {% endhighlight %}
+
+ * If there are already a bunch of historical messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
+{% endhighlight %}
+
+ * If a build job fails and you discard it, a hole (or gap) is left in the Cube. Since Kylin always builds from the last position, you can't expect the hole to be filled by normal builds. Kylin provides APIs to check and fill the holes. 
+
+Check holes:
+ {% highlight Groff markup %}
+curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
+If the result is an empty array, there are no holes; otherwise, trigger Kylin to fill them:
+ {% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
diff --git a/website/_docs24/tutorial/flink.md b/website/_docs24/tutorial/flink.md
new file mode 100644
index 0000000..7e964e9
--- /dev/null
+++ b/website/_docs24/tutorial/flink.md
@@ -0,0 +1,249 @@
+---
+layout: docs
+title:  Apache Flink
+categories: tutorial
+permalink: /docs24/tutorial/flink.html
+---
+
+
+### Introduction
+
+This document describes how to use Kylin as a data source in Apache Flink; 
+
+There were several attempts to do this in Scala and JDBC, but none of them worked: 
+
+* [attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)  
+* [attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)  
+* [attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)  
+* [attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
+
+We will try to use createInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode and access Kylin via JDBC. But it isn't implemented in Scala, only in Java ([MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html)). This doc will solve these problems step by step.
+
+### Pre-requisites
+
+* You need an instance of Kylin with a Cube; the [Sample Cube](kylin_sample.html) will be good enough.
+* [Scala](http://www.scala-lang.org/) and [Apache Flink](http://flink.apache.org/) installed
+* [IntelliJ](https://www.jetbrains.com/idea/) Installed and configured for Scala/Flink (see [Flink IDE setup guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html) )
+
+### Used software:
+
+* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
+* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
+* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
+* [Scala](downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
+
+### Starting point:
+
+This can be our initial skeleton: 
+
+{% highlight Groff markup %}
+import org.apache.flink.api.scala._
+val env = ExecutionEnvironment.getExecutionEnvironment
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
+  .finish()
+val dataset = env.createInput(inputFormat)
+{% endhighlight %}
+
+The first error is: ![alt text](/images/Flink-Tutorial/02.png)
+
+Add to Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
+{% endhighlight %}
+
+The next error is: ![alt text](/images/Flink-Tutorial/03.png)
+
+We can solve this dependency [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); add this to your pom.xml:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-jdbc</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+## Solve dependencies of Row
+
+Similar to the previous point, we need to solve the dependencies of the Row class [(mvn repository: Table)](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
+
+  ![](/images/Flink-Tutorial/03b.png)
+
+
+* In pom.xml add:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-table_2.10</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+* In Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.table.Row
+{% endhighlight %}
+
+## Solve RowTypeInfo property (and their new dependencies)
+
+This is the new error to solve:
+
+  ![](/images/Flink-Tutorial/04.png)
+
+
+* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new (and mandatory) property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69), added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750). Manual: [JDBCInputFormat](https://ci.apa [...]
+
+   Add the new Property: **setRowTypeInfo**
+   
+{% highlight Groff markup %}
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
+  .setRowTypeInfo(DB_ROWTYPE)
+  .finish()
+{% endhighlight %}
+
+* How can we configure this property in Scala? [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala) has an incorrect solution.
+   
+   We can check the types using the intellisense: ![alt text](/images/Flink-Tutorial/05.png)
+   
+   Then we will need to add more dependencies; add to Scala:
+
+{% highlight Groff markup %}
+import org.apache.flink.api.table.typeutils.RowTypeInfo
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+{% endhighlight %}
+
+   Create an Array or Seq of TypeInformation[]:
+
+  ![](/images/Flink-Tutorial/06.png)
+
+
+   Solution:
+   
+{% highlight Groff markup %}
+   var stringColum: TypeInformation[String] = createTypeInformation[String]
+   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
+{% endhighlight %}
+
+## Solve ClassNotFoundException
+
+  ![](/images/Flink-Tutorial/07.png)
+
+We need to find the kylin-jdbc-x.x.x.jar and then expose it to Flink:
+
+1. Find the Kylin JDBC jar
+
+   From Kylin [Download](http://kylin.apache.org/download/) choose **Binary** and the **correct version of Kylin and HBase**
+   
+   Download & Unpack: in ./lib: 
+   
+  ![](/images/Flink-Tutorial/08.png)
+
+
+2. Make this JAR accessible to Flink
+
+   If you execute Flink as a service, you need to put this JAR on your Java class path, e.g. using your .bashrc: 
+
+  ![](/images/Flink-Tutorial/09.png)
+
+
+  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
+  
+  Check the permissions on this file (it must be accessible to you):
+
+  ![](/images/Flink-Tutorial/11.png)
+
+ 
+  If you are executing from the IDE, you need to add it to your class path manually:
+  
+  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt text](/images/Flink-Tutorial/13.png) > ![alt text](/images/Flink-Tutorial/14.png) > ![alt text](/images/Flink-Tutorial/15.png)
+  
+  The result, will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
+  
+## Solve "Couldn’t access resultSet" error
+
+  ![](/images/Flink-Tutorial/17.png)
+
+
+It is related to [FLINK-4108](https://issues.apache.org/jira/browse/FLINK-4108) [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415), for which Timo Walther [made a PR](https://github.com/apache/flink/pull/2619).
+
+If you are running Flink <= 1.2, you will need to apply this patch and do a clean install.
+
+## Solve the casting error
+
+  ![](/images/Flink-Tutorial/18.png)
+
+The error message contains both the problem and the solution. Nice!
+
+## The result
+
+The output should be similar to this, printing the result of the query to standard output:
+
+  ![](/images/Flink-Tutorial/19.png)
+
+
+## Now, more complex
+
+Try a multi-column, multi-type query:
+
+{% highlight Groff markup %}
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
+from kylin_sales 
+group by part_dt 
+order by part_dt
+{% endhighlight %}
+
+This needs changes in DB_ROWTYPE (see the sketch after the screenshot):
+
+  ![](/images/Flink-Tutorial/20.png)
+
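+A minimal sketch of the multi-column row type, assuming part_dt maps to a Date, sum(price) to a BigDecimal and count(distinct seller_id) to a Long (the exact mapping is shown in the screenshot above):
+
+{% highlight Groff markup %}
+// One TypeInformation entry per column of the query result.
+val DB_ROWTYPE = new RowTypeInfo(Seq(
+  createTypeInformation[java.sql.Date],        // part_dt
+  createTypeInformation[java.math.BigDecimal], // total_selled
+  createTypeInformation[java.lang.Long]        // sellers
+))
+{% endhighlight %}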
+
+And import the Java libraries, to work with Java data types: ![alt text](/images/Flink-Tutorial/21.png)
+
+The new result will be: 
+
+  ![](/images/Flink-Tutorial/23.png)
+
+
+## Error:  Reused Connection
+
+
+  ![](/images/Flink-Tutorial/24.png)
+
+Check whether your HBase and Kylin are working. You can also use the Kylin UI for this.
+
+
+## Error:  java.lang.AbstractMethodError:  ….Avatica Connection
+
+See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
+
+It is a problem with the kylin-jdbc-1.x.x JAR; you need Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
+
+  ![](/images/Flink-Tutorial/25.png)
+
+
+
+## Error: can't expand macros compiled by previous versions of scala
+
+This is a problem with Scala versions; check your actual version with "scala -version" and choose the correct POM.
+
+Perhaps you will also need IntelliJ > File > Invalidate Caches > Invalidate and Restart.
+
+I added a POM for Scala 2.11.
+
+
+## Final Words
+
+Now you can read Kylin’s data from Apache Flink, great!
+
+[Full Code Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
+
+All integration problems are solved, and it has been tested with different types of data (Long, BigDecimal and Date). The patch was committed on 15 Oct and will be part of Flink 1.2.
diff --git a/website/_docs24/tutorial/hue.md b/website/_docs24/tutorial/hue.md
new file mode 100644
index 0000000..dd4b4a5
--- /dev/null
+++ b/website/_docs24/tutorial/hue.md
@@ -0,0 +1,246 @@
+---
+layout: docs
+title: Hue
+categories: tutorial
+permalink: /docs24/tutorial/hue.html
+---
+### Introduction
+[HUE-2745](https://issues.cloudera.org/browse/HUE-2745) (Hue v3.10) added JDBC support for engines like Phoenix, Kylin, Redshift, Solr Parallel SQL, etc.
+
+However, there isn’t any manual for using it with Kylin.
+
+### Pre-requisites
+Build the Kylin sample cube following [Quick Start with Sample Cube](http://kylin.apache.org/docs24/tutorial/kylin_sample.html); that will be enough.
+
+You can check: 
+
+  ![](/images/tutorial/2.0/hue/01.png)
+
+
+### Used Software:
+* [Hue](http://gethue.com/) v3.10.0
+* [Apache Kylin](http://kylin.apache.org/) v1.5.2
+
+
+### Install Hue
+If you have Hue installed, you can skip this step.
+
+To install Hue on Ubuntu 16.04 LTS: the [official instructions](http://gethue.com/how-to-build-hue-on-ubuntu-14-04-trusty/) didn’t work, but [this Dockerfile](https://github.com/cloudera/hue/blob/master/tools/docker/hue-base/Dockerfile) works fine.
+
+There isn’t any binary package, thus the [pre-requisites](https://github.com/cloudera/hue#development-prerequisites) must be installed and Hue compiled with the *make* command:
+
+{% highlight Groff markup %}
+    sudo apt-get install --fix-missing -q -y \
+    git \
+    ant \
+    gcc \
+    g++ \
+    libkrb5-dev \
+    libmysqlclient-dev \
+    libssl-dev \
+    libsasl2-dev \
+    libsasl2-modules-gssapi-mit \
+    libsqlite3-dev \
+    libtidy-0.99-0 \
+    libxml2-dev \
+    libxslt-dev \
+    libffi-dev \
+    make \
+    maven \
+    libldap2-dev \
+    python-dev \
+    python-setuptools \
+    libgmp3-dev \
+    libz-dev
+{% endhighlight %}
+
+Download and Compile:
+
+{% highlight Groff markup %}
+    git clone https://github.com/cloudera/hue.git
+    cd hue
+    make apps
+{% endhighlight %}
+
+Start and connect to Hue:
+
+{% highlight Groff markup %}
+    build/env/bin/hue runserver_plus localhost:8888
+{% endhighlight %}
+* runserver_plus: like runserver, but with a [debugger](http://django-extensions.readthedocs.io/en/latest/runserver_plus.html#usage)
+* localhost:8888: the local IP and port; Hue usually uses port 8888
+
+The output must be similar to:
+
+  ![](/images/tutorial/2.0/hue/02.png)
+
+
+Connect using your browser: http://localhost:8888
+
+  ![](/images/tutorial/2.0/hue/03.png)
+
+
+Important: the first time you connect to Hue, you set the login/password for the admin account.
+
+We will use Hue / Hue as the login / password.
+
+
+**Issue 1:** Could not create home directory
+
+  ![](/images/tutorial/2.0/hue/04.png)
+
+
+   It is a permission problem with your current user; you can start Hue with sudo.
+
+**Issue 2:** Could not connect to … 
+
+  ![](/images/tutorial/2.0/hue/05.png)
+
+   If Hue’s code was downloaded from Git, the Hive connection is active but not configured → skip this message.  
+
+**Issue 3:** Address already in use
+
+  ![](/images/tutorial/2.0/hue/06.png)
+
+   The port is in use, or you already have a Hue process running.
+
+  You can use *ps -ef | grep hue* to find the PID and kill it.
+
+
+### Configure Hue for Apache Kylin
+The purpose is to add a snippet with Kylin queries in a notebook.
+
+References:
+* [Custom SQL Databases](http://gethue.com/custom-sql-query-editors/)	
+* [Manual: Kylin JDBC Driver](http://kylin.apache.org/docs24/howto/howto_jdbc.html)
+* [GitHub: Kylin JDBC Driver](https://github.com/apache/kylin/tree/3b2ebd243cfe233ea7b1a80285f4c2110500bbe5/jdbc)
+
+Register JDBC Driver
+
+1. Find the JAR for the JDBC driver
+
+ From the Kylin [Download](http://kylin.apache.org/download/) page, choose **Binary** and the **correct version of Kylin and HBase**.
+
+ Download & Unpack:  in ./lib: 
+
+  ![](/images/tutorial/2.0/hue/07.png)
+
+
+2. Place this JAR on the Java class path using .bashrc
+
+  ![](/images/tutorial/2.0/hue/08.png)
+
+
+  Check the actual value: ![alt text](/images/tutorial/2.0/hue/09.png)
+
+  Check the permissions on this file (it must be accessible to you):
+
+  ![](/images/tutorial/2.0/hue/10.png)
+
+
+3. Add this new interface to Hue.ini
+
+  Where is hue.ini?
+
+ * If the code is downloaded from Git:  *UnzipPath/desktop/conf/pseudo-distributed.ini*
+
+   (I shared my *INI* file on GitHub.)
+
+ * If you are using Cloudera: you must use an Advanced Configuration Snippet
+
+ * Otherwise: find your actual *hue.ini*
+
+ Add these lines under *[[interpreters]]*:
+{% highlight Groff markup %}
+    [[[kylin]]]
+    name=kylin JDBC
+    interface=jdbc
+    options='{"url": "jdbc:kylin://172.17.0.2:7070/learn_kylin","driver": "org.apache.kylin.jdbc.Driver", "user": "ADMIN", "password": "KYLIN"}'
+{% endhighlight %}
+
+4. Start Hue and connect just like in ‘Start and connect to Hue’ above
+
+TIP: One JDBC source is needed for each project
+
+
+To register without a password, use this alternative format:
+{% highlight Groff markup %}
+    options='{"url": "jdbc:kylin://172.17.0.2:7070/learn_kylin","driver": "org.apache.kylin.jdbc.Driver"}'
+{% endhighlight %}
+
+And when you open the notebook, Hue prompts for the credentials:
+
+  ![](/images/tutorial/2.0/hue/11.png)
+
+
+
+**Issue 1:** Hue can’t Start
+
+If you see this when you connect to Hue  ( http://localhost:8888 ):
+
+  ![](/images/tutorial/2.0/hue/12.png)
+
+
+Go to the last line ![alt text](/images/tutorial/2.0/hue/13.png) 
+
+And launch the Python interpreter (see the console icon on the right):
+
+  ![](/images/tutorial/2.0/hue/14.png)
+
+In this case, I forgot to close the quote (") after learn_kylin.
+
+**Issue 2:** Password Prompting
+
+In Hue 3.11 there is a bug [Hue 4716](https://issues.cloudera.org/browse/HUE-4716)
+
+In Hue 3.10 with Kylin, I don’t have any problem   :)
+
+
+## Test query example
+Add Kylin JDBC as a source in the notebook:
+
+ ![alt text](/images/tutorial/2.0/hue/15.png) > ![alt text](/images/tutorial/2.0/hue/16.png)  > ![alt text](/images/tutorial/2.0/hue/17.png)  > ![alt text](/images/tutorial/2.0/hue/18.png) 
+
+
+Write a query, like this:
+{% highlight Groff markup %}
+    select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+{% endhighlight %}
+
+And Execute with: ![alt text](/images/tutorial/2.0/hue/19.png) 
+
+  ![](/images/tutorial/2.0/hue/20.png)
+
+
+ **Congratulations!** You have connected Hue to Kylin.
+
+
+**Issue 1:**  No suitable driver found for jdbc:kylin
+
+  ![](/images/tutorial/2.0/hue/21.png)
+
+There is a bug, unresolved since 27 Aug 2016 (in both 3.10 and 3.11), but the fix is very easy:
+
+[Link](https://github.com/cloudera/hue/pull/369): 
+You only need to change 3 lines in  *<HuePath>/desktop/libs/librdbms/src/librdbms/jdbc.py*
+
+
+## Limits
+In Hue 3.10 and 3.11
+* Auto-complete doesn’t work on JDBC interfaces
+* Max 1000 records. This is a limitation of the JDBC interfaces, because Hue does not support result pagination ([HUE-3419](https://issues.cloudera.org/browse/HUE-3419)). 
+
+
+### Future Work
+
+**Dashboards**
+There is an amazing feature of Hue: [Search Dashboards](http://gethue.com/search-dashboards/) / [Dynamic Dashboards](http://gethue.com/hadoop-search-dynamic-search-dashboards-with-solr/). You can play with this [on-line demo](http://demo.gethue.com/search/admin/collections). But this only works with Solr.
+
+There is a JIRA to solve this: [HUE-3228](https://issues.cloudera.org/browse/HUE-3228); it is on the roadmap for 4.1. Check the Hue [MailList](https://groups.google.com/a/cloudera.org/forum/#!topic/hue-user/B6FWBeoqK7I) about adding Dashboards to JDBC connections.
+
+**Chart & Dynamic Filter**
+Nowadays these aren’t compatible; you can only work with the grid.
+
+**DB Query**
+ DB Query does not yet support JDBC.
diff --git a/website/_docs24/tutorial/jdbc.cn.md b/website/_docs24/tutorial/jdbc.cn.md
new file mode 100644
index 0000000..d00e44f
--- /dev/null
+++ b/website/_docs24/tutorial/jdbc.cn.md
@@ -0,0 +1,92 @@
+---
+layout: docs-cn
+title:  "JDBC 驱动"
+categories: 教程
+permalink: /cn/docs24/tutorial/jdbc.html
+---
+
+### 认证
+
+###### 基于Apache Kylin认证RESTFUL服务。支持的参数:
+* user : 用户名
+* password : 密码
+* ssl: true或false。默认为 false;如果为true,所有的服务调用都会使用https。
+
+### 连接url格式:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* 如果“ssl”为true,“port”应该是Kylin server的HTTPS端口。
+* 如果“port”未被指定,driver会使用默认的端口:HTTP 80,HTTPS 443。
+* 必须指定“kylin_project_name”并且用户需要确保它在Kylin server上存在。
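+
+例如,一个指向本地服务器上样例项目的连接 url(主机名、端口和项目名仅为示例):
+{% highlight Groff markup %}
+jdbc:kylin://localhost:7070/learn_kylin
+{% endhighlight %}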
+
+### 1. 使用Statement查询
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. 使用 PreparedStatement 查询
+
+###### 支持的PreparedStatement参数:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. 获取查询结果元数据
+Kylin jdbc driver支持元数据列表方法:
+通过sql模式过滤器(比如 %)列出catalog、schema、table和column。
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
diff --git a/website/_docs24/tutorial/jdbc.md b/website/_docs24/tutorial/jdbc.md
new file mode 100644
index 0000000..eb9bb63
--- /dev/null
+++ b/website/_docs24/tutorial/jdbc.md
@@ -0,0 +1,92 @@
+---
+layout: docs
+title:  Kylin JDBC Driver
+categories: tutorial
+permalink: /docs24/tutorial/jdbc.html
+---
+
+### Authentication
+
+###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
+* user : username 
+* password : password
+* ssl: true/false. Default is false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported prepared statement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports metadata listing methods: listing catalogs, schemas, tables and columns with SQL pattern filters (such as %).
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
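+
+For example, to list the columns of tables matching a name pattern, you can use the standard java.sql.DatabaseMetaData API on the same connection (a sketch; the pattern strings are illustrative):
+
+{% highlight Groff markup %}
+// List columns of all tables whose name starts with "TEST_";
+// the standard JDBC metadata result set exposes TABLE_NAME,
+// COLUMN_NAME and TYPE_NAME columns.
+ResultSet columns = conn.getMetaData().getColumns(null, "%", "TEST_%", "%");
+while (columns.next()) {
+    System.out.println(columns.getString("TABLE_NAME") + "."
+            + columns.getString("COLUMN_NAME") + " : "
+            + columns.getString("TYPE_NAME"));
+}
+{% endhighlight %}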
diff --git a/website/_docs24/tutorial/kylin_client_tool.cn.md b/website/_docs24/tutorial/kylin_client_tool.cn.md
new file mode 100644
index 0000000..9725db9
--- /dev/null
+++ b/website/_docs24/tutorial/kylin_client_tool.cn.md
@@ -0,0 +1,123 @@
+---
+layout: docs-cn
+title:  "Python 客户端"
+categories: 教程
+permalink: /cn/docs24/tutorial/kylin_client_tool.html
+---
+
+Apache Kylin Python 客户端工具库是基于Python可访问Kylin的客户端. 此工具库包含两个可使用组件. 
+
+* Apache Kylin 命令行工具
+* Apache Kylin SQLAchemy方言
+
+想要了解更多关于此工具库信息请点击[Github仓库](https://github.com/Kyligence/kylinpy).
+
+## 安装
+请确保您python解释器版本在2.7+, 或者3.4+以上. 最方便安装Apache Kylin Python客户端工具库的方法是使用pip命令
+```
+    pip install --upgrade kylinpy
+```
+
+## Kylinpy 命令行工具
+安装完kylinpy后, 立即可以在终端下访问kylinpy
+
+```
+    $ kylinpy
+    Usage: kylinpy [OPTIONS] COMMAND [ARGS]...
+
+    Options:
+      -h, --host TEXT       Kylin host name  [required]
+      -P, --port INTEGER    Kylin port, default: 7070
+      -u, --username TEXT   Kylin username  [required]
+      -p, --password TEXT   Kylin password  [required]
+      --project TEXT        Kylin project  [required]
+      --prefix TEXT         Kylin RESTful prefix of url, default: /kylin/api
+      --debug / --no-debug  show debug infomation
+      --api1 / --api2       API version; default is "api1"; "api1" 适用于 Apache Kylin
+      --help                Show this message and exit.
+
+    Commands:
+      auth           get user auth info
+      cube_columns   list cube columns
+      cube_desc      show cube description
+      cube_names     list cube names
+      model_desc     show model description
+      projects       list all projects
+      query          sql query
+      table_columns  list table columns
+      table_names    list all table names
+```
+
+## Kylinpy命令行工具示例
+
+1. 访问Apache Kylin
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug auth
+```
+
+2. 访问选定cube所有的维度信息
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_columns --name kylin_sales_cube
+```
+
+3. 访问选定的cube描述
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_desc --name kylin_sales_cube
+```
+
+4. 访问所有cube名称
+```
+kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_names
+```
+
+5. 访问选定cube的SQL定义
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug cube_sql --name kylin_sales_cube
+```
+
+6. 列出Kylin中所有项目
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug projects
+```
+
+7. 访问选定表所有的维度信息
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug table_columns --name KYLIN_SALES
+```
+
+8. 访问所有表名
+```
+kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 table_names
+```
+
+9. 访问所选模型信息
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --api1 --debug model_desc --name kylin_sales_model
+```
+
+## Apache Kylin SQLAlchemy方言
+
+任何一个使用SQLAlchemy的应用程序都可以通过此`方言`访问到Kylin, 您之前如果已经安装了kylinpy那么现在就已经集成好了SQLAlchemy Dialect. 请使用如下DSN模板访问Kylin
+
+```
+kylin://<username>:<password>@<hostname>:<port>/<project>?version=<v1|v2>&prefix=</kylin/api>
+```
+
+## SQLAlchemy 实例
+测试Apache Kylin连接
+
+```
+    $ python
+    >>> import sqlalchemy as sa
+    >>> kylin_engine = sa.create_engine('kylin://username:password@hostname:7070/learn_kylin?version=v1')
+    >>> results = kylin_engine.execute('SELECT count(*) FROM KYLIN_SALES')
+    >>> [e for e in results]
+    [(4953,)]
+    >>> kylin_engine.table_names()
+    [u'KYLIN_ACCOUNT',
+     u'KYLIN_CAL_DT',
+     u'KYLIN_CATEGORY_GROUPINGS',
+     u'KYLIN_COUNTRY',
+     u'KYLIN_SALES',
+     u'KYLIN_STREAMING_TABLE']
+```
diff --git a/website/_docs24/tutorial/kylin_client_tool.md b/website/_docs24/tutorial/kylin_client_tool.md
new file mode 100644
index 0000000..af90644
--- /dev/null
+++ b/website/_docs24/tutorial/kylin_client_tool.md
@@ -0,0 +1,135 @@
+---
+layout: docs
+title:  Kylin Python Client
+categories: tutorial
+permalink: /docs24/tutorial/kylin_client_tool.html
+---
+
+Apache Kylin Python Client Library is a python-based Apache Kylin client. There are two components in Apache Kylin Python Client Library:
+
+* Apache Kylin command line tools
+* Apache Kylin dialect for SQLAlchemy
+
+You can get more detail from this [Github Repository](https://github.com/Kyligence/kylinpy).
+
+## Installation
+Make sure your Python version is 2.7+ or 3.4+. The easiest way to install the Apache Kylin Python Client Library is to use "pip":
+
+```
+pip install --upgrade kylinpy
+```
+
+## Kylinpy CLI
+After installing the Apache Kylin Python Client Library, you may run kylinpy in a terminal.
+
+```
+$ kylinpy
+Usage: kylinpy [OPTIONS] COMMAND [ARGS]...
+
+Options:
+  -h, --host TEXT       Kylin host name  [required]
+  -P, --port INTEGER    Kylin port, default: 7070
+  -u, --username TEXT   Kylin username  [required]
+  -p, --password TEXT   Kylin password  [required]
+  --project TEXT        Kylin project  [required]
+  --prefix TEXT         Kylin RESTful prefix of url, default: "/kylin/api"
+  --debug / --no-debug  show debug infomation
+  --help                Show this message and exit.
+
+Commands:
+  auth           get user auth info
+  cube_columns   list cube columns
+  cube_desc      show cube description
+  cube_names     list cube names
+  model_desc     show model description
+  projects       list all projects
+  query          sql query
+  table_columns  list table columns
+  table_names    list all table names
+```
+
+## Examples for Kylinpy CLI
+
+1. To get all user info from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug auth
+```
+
+2. To get all cube columns from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug cube_columns --name kylin_sales_cube
+```
+
+3. To get cube description of selected cube from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug cube_desc --name kylin_sales_cube
+```
+
+4. To get all cube names from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --debug cube_names
+```
+
+5. To get cube SQL of selected cube from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug cube_sql --name kylin_sales_cube
+```
+
+6. To list all projects from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug projects
+```
+
+7. To list all columns of a selected table from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug table_columns --name KYLIN_SALES
+```
+
+8. To get all table names from Apache Kylin
+
+```
+kylinpy -h hostname -u ADMIN -p KYLIN --project learn_kylin --api1 table_names
+```
+
+9. To get the model description of the selected model from Apache Kylin with debug mode
+
+```
+kylinpy -h hostname -P 7070 -u ADMIN -p KYLIN --project learn_kylin --debug model_desc --name kylin_sales_model
+```
+
+## Kylin dialect for SQLAlchemy
+
+Any application that uses SQLAlchemy can now query Apache Kylin with this Apache Kylin dialect installed. It is part of the Apache Kylin Python Client Library, so if you already installed the library in the previous step, you are ready to go. You may use the template below to build a DSN to connect to Apache Kylin.
+
+```
+kylin://<username>:<password>@<hostname>:<port>/<project>?version=<v1|v2>&prefix=</kylin/api>
+```
+
+## Examples for SQLAlchemy
+
+Test connection with Apache Kylin
+
+```
+$ python
+  >>> import sqlalchemy as sa
+  >>> kylin_engine = sa.create_engine('kylin://username:password@hostname:7070/learn_kylin?version=v1')
+  >>> results = kylin_engine.execute('SELECT count(*) FROM KYLIN_SALES')
+  >>> [e for e in results]
+    [(4953,)]
+  >>> kylin_engine.table_names()
+    [u'KYLIN_ACCOUNT',
+     u'KYLIN_CAL_DT',
+     u'KYLIN_CATEGORY_GROUPINGS',
+     u'KYLIN_COUNTRY',
+     u'KYLIN_SALES',
+     u'KYLIN_STREAMING_TABLE']
+```
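+
+Since the dialect plugs into standard SQLAlchemy, other SQLAlchemy-based tools can read from Kylin too. Below is a minimal sketch of loading a query result into a pandas DataFrame, assuming pandas is installed (hostname and credentials are placeholders):
+
+```
+import pandas as pd
+import sqlalchemy as sa
+
+# Build the engine from the same DSN format shown above.
+kylin_engine = sa.create_engine(
+    'kylin://username:password@hostname:7070/learn_kylin?version=v1')
+
+# pandas.read_sql sends the SQL through the Kylin dialect and
+# returns the result set as a DataFrame.
+df = pd.read_sql(
+    'SELECT part_dt, count(*) AS cnt FROM KYLIN_SALES GROUP BY part_dt',
+    kylin_engine)
+print(df.head())
+```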
+
+
diff --git a/website/_docs24/tutorial/kylin_sample.cn.md b/website/_docs24/tutorial/kylin_sample.cn.md
new file mode 100644
index 0000000..86b0060
--- /dev/null
+++ b/website/_docs24/tutorial/kylin_sample.cn.md
@@ -0,0 +1,34 @@
+---
+layout: docs-cn
+title:  "样例 Cube 快速入门"
+categories: tutorial
+permalink: /cn/docs24/tutorial/kylin_sample.html
+---
+
+Kylin 提供了一个创建样例 Cube 脚本;脚本会创建五个样例 hive 表:
+
+1. 运行 ${KYLIN_HOME}/bin/sample.sh ;重启 kylin 服务器刷新缓存;
+2. 用默认的用户名和密码 ADMIN/KYLIN 登录 Kylin 网站,选择 project 下拉框(左上角)中的 "learn_kylin" 工程;
+3. 选择名为 "kylin_sales_cube" 的样例 cube,点击 "Actions" -> "Build",选择一个在 2014-01-01 之后的日期(覆盖所有的 10000 样例记录);
+4. 点击 "Monitor" 标签,查看 build 进度直至 100%;
+5. 点击 "Insight" 标签,执行 SQLs,例如:
+	`select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt`
+6. 您可以验证查询结果且与 hive 的响应时间进行比较;
+
+   
+## Streaming 样例 Cube 快速入门
+
+Kylin 也提供了 streaming 样例 cube 脚本。该脚本将会创建 Kafka topic 且不断的向生成的 topic 发送随机 messages。
+
+1. 首先设置 KAFKA_HOME,然后启动 Kylin。
+2. 运行 ${KYLIN_HOME}/bin/sample.sh,它会在 learn_kylin 工程中生成 DEFAULT.KYLIN_STREAMING_TABLE 表,kylin_streaming_model 模型,Cube kylin_streaming_cube。
+3. 运行 ${KYLIN_HOME}/bin/sample-streaming.sh,它会在 localhost:9092 broker 中创建名为 kylin_streaming_topic 的 Kafka Topic。它也会每秒随机发送 100 条 messages 到 kylin_streaming_topic。
+4. 遵循标准 cube build 过程,并触发 Cube kylin_streaming_cube build。  
+5. 点击 "Monitor" 标签,查看 build 进度直至至少有一个 job 达到 100%。
+6. 点击 "Insight" 标签,执行 SQLs,例如:
+         `select count(*), HOUR_START from kylin_streaming_table group by HOUR_START`
+7. 验证查询结果。
+ 
+## 下一步干什么
+
+您可以通过接下来的教程用同一张表创建另一个 cube。
diff --git a/website/_docs24/tutorial/kylin_sample.md b/website/_docs24/tutorial/kylin_sample.md
new file mode 100644
index 0000000..db8607a
--- /dev/null
+++ b/website/_docs24/tutorial/kylin_sample.md
@@ -0,0 +1,34 @@
+---
+layout: docs
+title:  Quick Start with Sample Cube
+categories: tutorial
+permalink: /docs24/tutorial/kylin_sample.html
+---
+
+Kylin provides a script for you to create a sample Cube; the script will also create five sample Hive tables:
+
+1. Run ${KYLIN_HOME}/bin/sample.sh; restart the Kylin server to flush the caches;
+2. Log on to the Kylin web UI with the default user ADMIN/KYLIN, select project "learn_kylin" in the project dropdown list (upper left corner);
+3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick an end date later than 2014-01-01 (to cover all 10000 sample records);
+4. Check the build progress in "Monitor" tab, until 100%;
+5. Execute SQLs in the "Insight" tab, for example:
+	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+6. You can verify the query result and compare the response time with Hive;
+
+   
+## Quick Start with Streaming Sample Cube
+
+Kylin also provides a script for a streaming sample cube. This script will create a Kafka topic and constantly send random messages to it.
+
+1. Export KAFKA_HOME first, and start Kylin.
+2. Run ${KYLIN_HOME}/bin/sample.sh; it will generate Table DEFAULT.KYLIN_STREAMING_TABLE, Model kylin_streaming_model and Cube kylin_streaming_cube in the learn_kylin project.
+3. Run ${KYLIN_HOME}/bin/sample-streaming.sh; it will create the Kafka topic kylin_streaming_topic on the localhost:9092 broker and send 100 random messages into kylin_streaming_topic per second.
+4. Follow the standard cube build process and trigger a build of Cube kylin_streaming_cube.  
+5. Check the build progress in the "Monitor" tab, until at least one job reaches 100%.
+6. Execute SQLs in the "Insight" tab, for example:
+         select count(*), HOUR_START from kylin_streaming_table group by HOUR_START
+7. Verify the query result.
+ 
+## What's next
+
+You can create another cube with the same sample tables by following the tutorials.
diff --git a/website/_docs24/tutorial/microstrategy.md b/website/_docs24/tutorial/microstrategy.md
new file mode 100644
index 0000000..deff988
--- /dev/null
+++ b/website/_docs24/tutorial/microstrategy.md
@@ -0,0 +1,84 @@
+---
+layout: docs
+title:  MicroStrategy
+categories: tutorial
+permalink: /docs24/tutorial/microstrategy.html
+---
+
+### Install ODBC Driver
+
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.6__ 64-bit or above. If you already installed an ODBC Driver in your system, please uninstall it first.  
+
+The Kylin ODBC driver needs to be installed on the machine or virtual environment where your MicroStrategy Intelligence Server is installed. 
+
+### Create Local DSN
+
+Open the Windows ODBC Data Source Administrator (64-bit) and create a system DSN that points to your Kylin instance. 
+
+![](/images/tutorial/2.1/MicroStrategy/0.png)
+
+### Set Up the Database Instance
+
+Connect to Kylin using the ODBC driver: open MicroStrategy Developer and connect to the project source where you are going to connect the Kylin data source, using a user account with administrative privileges. 
+
+Once logged in, go to `Administration` -> `Configuration manager` -> `Database Instance`, and create a new database instance with the system DSN that you created in the previous step. Under database connection type, choose Generic DBMS.
+
+![](/images/tutorial/2.1/MicroStrategy/2.png)
+
+![](/images/tutorial/2.1/MicroStrategy/1.png)
+
+Depending on your business scenario, you may need to create a new project and set the Kylin database instance as its primary database instance, or, for an existing project, set the Kylin database instance as one of its primary or non-primary database instances. You can achieve this by right-clicking on your project and going to `project configuration` -> `database instance`. 
+
+### Import Logical Table
+
+Open up your project, go to `schema` -> `warehouse catalog` to import the tables you need. 
+
+![](/images/tutorial/2.1/MicroStrategy/4.png)
+
+### Building Schema and Public Objects
+
+Create Attribute, Fact and Metric objects:
+
+![](/images/tutorial/2.1/MicroStrategy/5.png)
+
+![](/images/tutorial/2.1/MicroStrategy/6.png)
+
+![](/images/tutorial/2.1/MicroStrategy/7.png)
+
+![](/images/tutorial/2.1/MicroStrategy/8.png)
+
+### Create a Simple Report
+
+Now you can start creating reports with Kylin as data source.
+
+![](/images/tutorial/2.1/MicroStrategy/9.png)
+
+![](/images/tutorial/2.1/MicroStrategy/10.png)
+
+### Best Practice for Connecting MicroStrategy to Kylin Data Source
+
+1. Kylin does not work with multiple SQL passes at the moment, so it is recommended to set your report's intermediate table type to derived. You can change this setting at the report level using `Data` -> `VLDB property` -> `Tables` -> `Intermediate Table Type`.
+
+2. Avoid using the functionality below in MicroStrategy, as it will generate multiple SQL passes that cannot be bypassed by a VLDB property:
+
+   - Creation of data marts
+
+   - Querying partitioned tables
+
+   - Reports with custom groups
+
+3. Dimensions named with Kylin keywords will cause the SQL to error out. You may find the Kylin keywords at the link below; it is recommended to avoid using Kylin keywords as column names, especially when you use MicroStrategy as the front-end BI tool, since as far as we know there is no setting in MicroStrategy that can escape the keywords.  [https://calcite.apache.org/docs/reference.html#keywords](https://calcite.apache.org/docs/reference.html#keywords)
+
+4. If the underlying Kylin data model has a left join from the fact table to a lookup table, then in order for MicroStrategy to generate the same left join in SQL, please follow the MicroStrategy TN below to modify the VLDB property:
+
+   [https://community.microstrategy.com/s/article/ka1440000009GrQAAU/KB17514-Using-the-Preserve-all-final-pass-result-elements-VLDB](https://community.microstrategy.com/s/article/ka1440000009GrQAAU/KB17514-Using-the-Preserve-all-final-pass-result-elements-VLDB)
+
+5. By default, MicroStrategy generates SQL queries with date filters in a format like 'mm/dd/yyyy'. This format might be different from Kylin's date format; if so, the query will error out. You may follow the steps below to make MicroStrategy generate SQL with the same date format as Kylin:  
+
+   1. Go to `Instance` -> `Administration` -> `Configuration Manager` -> `Database Instance`. 
+   2. Right-click on the database and choose VLDB properties. 
+   3. On the top menu choose `Tools` -> `Show Advanced Settings`.
+   4. Go to `select/insert` -> `date format`.
+   5. Change the date format to follow the date format in Kylin, for example 'yyyy-mm-dd'.
+   6. Restart MicroStrategy Intelligence Server so that the change takes effect. 
diff --git a/website/_docs24/tutorial/odbc.cn.md b/website/_docs24/tutorial/odbc.cn.md
new file mode 100644
index 0000000..bab5331
--- /dev/null
+++ b/website/_docs24/tutorial/odbc.cn.md
@@ -0,0 +1,34 @@
+---
+layout: docs-cn
+title:  "ODBC 驱动"
+categories: 教程
+permalink: /cn/docs24/tutorial/odbc.html
+version: v1.2
+since: v0.7.1
+---
+
+> 我们提供Kylin ODBC驱动程序以支持ODBC兼容客户端应用的数据访问。
+> 
+> 32位版本或64位版本的驱动程序都是可用的。
+> 
+> 测试操作系统:Windows 7,Windows Server 2008 R2
+> 
+> 测试应用:Tableau 8.0.4 和 Tableau 8.1.3
+
+## 前提条件
+1. Microsoft Visual C++ 2012 再分配(Redistributable)
+   * 32位Windows或32位Tableau Desktop:下载:[32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
+   * 64位Windows或64位Tableau Desktop:下载:[64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+2. ODBC驱动程序内部从一个REST服务器获取结果,确保你能够访问一个
+
+## 安装
+1. 如果你已经安装,首先卸载已存在的Kylin ODBC
+2. 从[下载](../../download/)页面下载 ODBC 驱动安装程序,并运行。
+   * 32位Tableau Desktop:请安装KylinODBCDriver (x86).exe
+   * 64位Tableau Desktop:请安装KylinODBCDriver (x64).exe
+
+3. Both drivers are already installed on Tableau Server, so you should be able to publish there without issues
+
+## 错误报告
+如有问题,请报告错误至Apache Kylin JIRA,或者发送邮件到dev邮件列表。
diff --git a/website/_docs24/tutorial/odbc.md b/website/_docs24/tutorial/odbc.md
new file mode 100644
index 0000000..826aff9
--- /dev/null
+++ b/website/_docs24/tutorial/odbc.md
@@ -0,0 +1,49 @@
+---
+layout: docs
+title:  Kylin ODBC Driver
+categories: tutorial
+permalink: /docs24/tutorial/odbc.html
+since: v0.7.1
+---
+
+> We provide Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit version or 64-bit version driver are available.
+> 
+> Tested Operation System: Windows 7, Windows Server 2008 R2
+> 
+> Tested Application: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable 
+   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
+   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+
+2. The ODBC driver internally gets results from a REST server; make sure you have access to one
+
+## Installation
+1. Uninstall the existing Kylin ODBC driver first, if you already installed it before
+2. Download ODBC Driver from [download](../../download/).
+   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
+   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
+
+3. Both drivers are already installed on Tableau Server, so you should be able to publish there without issues
+
+## DSN configuration
+1. Open ODBCAD to configure DSN.
+	* For 32 bit driver, please use the 32bit version in C:\Windows\SysWOW64\odbcad32.exe
+	* For 64 bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrator Tools
+![]( /images/Kylin-ODBC-DSN/1.png)
+
+2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed as an option, Click "Finish" to continue.
+![]( /images/Kylin-ODBC-DSN/2.png)
+
+3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is running.
+![]( /images/Kylin-ODBC-DSN/3.png)
+
+4. Click "Done", and you will see your new DSN listed in the "System Data Sources", you can use this DSN afterwards.
+![]( /images/Kylin-ODBC-DSN/4.png)
+
+## Bug Report
+Please open Apache Kylin JIRA to report bug, or send to dev mailing list.
diff --git a/website/_docs24/tutorial/powerbi.cn.md b/website/_docs24/tutorial/powerbi.cn.md
new file mode 100644
index 0000000..404808c
--- /dev/null
+++ b/website/_docs24/tutorial/powerbi.cn.md
@@ -0,0 +1,56 @@
+---
+layout: docs-cn
+title:  "Excel 及 Power BI 教程"
+categories: tutorial
+permalink: /cn/docs24/tutorial/powerbi.html
+version: v1.2
+since: v1.2
+---
+
+Microsoft Excel是当今Windows平台上最流行的数据处理软件之一,支持多种数据处理功能,可以利用Power Query从ODBC数据源读取数据并返回到数据表中。
+
+Microsoft Power BI 是由微软推出的商业智能的专业分析工具,给用户提供简单且丰富的数据可视化及分析功能。
+
+> Apache Kylin目前版本不支持原始数据的查询,部分查询会因此失败,导致应用程序发生异常,建议打上KYLIN-1075补丁包以优化查询结果的显示。
+
+
+> Power BI及Excel不支持"connect live"模式,请注意在查询超大数据集时添加where条件,以避免从服务器拉取过多的数据到本地,甚至在某些情况下查询执行失败。
+
+### Install ODBC Driver
+参考页面[Kylin ODBC 驱动程序教程](./odbc.html),请确保下载并安装Kylin ODBC Driver __v1.2__. 如果你安装有早前版本,请卸载后再安装。 
+
+### 连接Excel到Kylin
+1. 从微软官网下载和安装Power Query,安装完成后在Excel中会看到Power Query的Fast Tab,单击`From other sources`下拉按钮,并选择`From ODBC`项
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2. 在弹出的`From ODBC`数据连接向导中输入Apache Kylin服务器的连接字符串,也可以在`SQL`文本框中输入您想要执行的SQL语句,单击`OK`,SQL的执行结果就会立即加载到Excel的数据表中
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> 为了简化连接字符串的输入,推荐创建Apache Kylin的DSN,可以将连接字符串简化为DSN=[YOUR_DSN_NAME],有关DSN的创建请参考:[https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599)。
+
+
+3. 如果您选择不输入SQL语句,Power Query将会列出所有的数据库表,您可以根据需要对整张表的数据进行加载。但是,Apache Kylin暂不支持原数据的查询,部分表的加载可能因此受限
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4. 稍等片刻,数据已成功加载到Excel中
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5.  一旦服务器端数据产生更新,则需要对Excel中的数据进行同步,右键单击右侧列表中的数据源,选择`Refresh`,最新的数据便会更新到数据表中.
+
+6.  为了提升性能,可以在Power Query中打开`Query Options`设置,然后开启`Fast data load`,这将提高数据加载速度,但可能造成界面的暂时无响应
+
+### Power BI
+1.  启动您已经安装的Power BI桌面版程序,单击`Get data`按钮,并选中ODBC数据源.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2.  在弹出的`From ODBC`数据连接向导中输入Apache Kylin服务器的数据库连接字符串,也可以在`SQL`文本框中输入您想要执行的SQL语句。单击`OK`,SQL的执行结果就会立即加载到Power BI中
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3.  如果您选择不输入SQL语句,Power BI将会列出项目中所有的表,您可以根据需要将整张表的数据进行加载。但是,Apache Kylin暂不支持原数据的查询,部分表的加载可能因此受限
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  现在你可以进一步使用Power BI进行可视化分析:
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5.  单击工具栏的`Refresh`按钮即可重新加载数据并对图表进行更新
+
diff --git a/website/_docs24/tutorial/powerbi.md b/website/_docs24/tutorial/powerbi.md
new file mode 100644
index 0000000..0d470a3
--- /dev/null
+++ b/website/_docs24/tutorial/powerbi.md
@@ -0,0 +1,54 @@
+---
+layout: docs
+title:  MS Excel and Power BI
+categories: tutorial
+permalink: /docs24/tutorial/powerbi.html
+since: v1.2
+---
+
+Microsoft Excel is one of the most popular data tools on the Windows platform and has plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets. 
+
+Microsoft Power BI is a business intelligence tool providing rich functionality and a good experience for data visualization and processing to users.
+
+> Apache Kylin currently doesn't support queries on raw data yet; some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better display of query results.
+
+> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet, so please pay attention when you query a huge dataset: it may pull too much data into your client, which can take a while or even fail in the end.
+
+### Install ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.2__. If you already installed ODBC Driver in your system, please uninstall it first. 
+
+### Kylin and Excel
+1. Download Power Query from Microsoft’s website and install it. Then run Excel, switch to the `Power Query` ribbon tab, click the `From Other Sources` dropdown list, and select the `ODBC` item.
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2.  You’ll see the `From ODBC` dialog; just type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> Tips: In order to simplify the database connection string, a DSN is recommended, which shortens the connection string to something like `DSN=[YOUR_DSN_NAME]`. For details about DSNs, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+ 
+3. If you didn’t input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4.  After a short wait, the data will be loaded into Excel.
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you’ll see the latest data.
+
+6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this will make your UI unresponsive for a while. 
+
+### Power BI
+1.  Run Power BI Desktop, and click `Get Data` button, then select `ODBC` as data source type.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2.  As with Excel, just type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3.  If you didn’t input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  Now you can start to enjoy analyzing with Power BI.
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` ribbon tab.
+
diff --git a/website/_docs24/tutorial/project_level_acl.cn.md b/website/_docs24/tutorial/project_level_acl.cn.md
new file mode 100644
index 0000000..bb9b457
--- /dev/null
+++ b/website/_docs24/tutorial/project_level_acl.cn.md
@@ -0,0 +1,63 @@
+---
+layout: docs-cn
+title: Project Level ACL
+categories: tutorial
+permalink: /cn/docs24/tutorial/project_level_acl.html
+since: v2.1.0
+---
+
+Whether a user can access a project and use certain functionality within the project is determined by project-level access control. There are four access permission roles set at the project level in Apache Kylin: *ADMIN*, *MANAGEMENT*, *OPERATION* and *QUERY*. Each role defines the list of functionality a user may perform in Apache Kylin.
+
+- *QUERY*: designed for analysts who only need permission to query tables/cubes in the project.
+- *OPERATION*: designed for the operation team of a corporate/organization who need permission to maintain the Cube. OPERATION access permission includes QUERY.
+- *MANAGEMENT*: designed for a Modeler or Designer who is fully knowledgeable about the business meaning of the data/model and will be in charge of Model and Cube design. MANAGEMENT access permission includes OPERATION and QUERY.
+- *ADMIN*: designed to fully manage the project. ADMIN access permission includes MANAGEMENT, OPERATION and QUERY.
+
+Access permissions are independent between different projects.
+
+### How Access Permission is Determined
+
+Once project-level access permission has been set for a user, access permission on data sources, models and Cubes will be inherited based on the role defined at the project level. For the detailed functionalities each access permission role can access, see the table below.
+
+|                                          | System Admin | Project Admin | Management | Operation | Query |
+| ---------------------------------------- | ------------ | ------------- | ---------- | --------- | ----- |
+| Create/delete project                    | Yes          | No            | No         | No        | No    |
+| Edit project                             | Yes          | Yes           | No         | No        | No    |
+| Add/edit/delete project access permission | Yes          | Yes           | No         | No        | No    |
+| Check model page                         | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check data source page                   | Yes          | Yes           | Yes        | No        | No    |
+| Load, unload table, reload table         | Yes          | Yes           | No         | No        | No    |
+| View model in read only mode             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, edit, clone, drop model             | Yes          | Yes           | Yes        | No        | No    |
+| Check cube detail definition             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, disable/enable, clone cube, edit, drop cube, purge cube | Yes          | Yes           | Yes        | No        | No    |
+| Build, refresh, merge cube               | Yes          | Yes           | Yes        | Yes       | No    |
+| Edit, view cube json                     | Yes          | Yes           | Yes        | No        | No    |
+| Check insight page                       | Yes          | Yes           | Yes        | Yes       | Yes   |
+| View table in insight page               | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check monitor page                       | Yes          | Yes           | Yes        | Yes       | No    |
+| Check system page                        | Yes          | No            | No         | No        | No    |
+| Reload metadata, disable cache, set config, diagnosis | Yes          | No            | No         | No        | No    |
+
+
+Additionally, when Query Pushdown is enabled, QUERY access permission on a project allows users to issue pushdown queries on all tables in the project even though no cube can serve them. This is not possible for a user who has not been granted QUERY permission at the project level.
+
+### Manage Access Permission at Project-level
+
+1. Click the small gear icon in the top-left corner of the Model page; you will be redirected to the project page.
+
+   ![](/images/Project-level-acl/ACL-1.png)
+
+2. On the project page, expand a project and choose Access.
+3. Click `Grant` to grant permission to a user.
+
+	![](/images/Project-level-acl/ACL-2.png)
+
+4. Fill in the name of the user or role, choose a permission, and then click `Grant`.
+
+5. You can also revoke and update permissions on this page.
+
+   ![](/images/Project-level-acl/ACL-3.png)
+
+   Please note that in order to grant permission to the default users (MODELER and ANALYST), these users need to log in at least once. 
diff --git a/website/_docs24/tutorial/project_level_acl.md b/website/_docs24/tutorial/project_level_acl.md
new file mode 100644
index 0000000..e2f4a79
--- /dev/null
+++ b/website/_docs24/tutorial/project_level_acl.md
@@ -0,0 +1,63 @@
+---
+layout: docs
+title: Project Level ACL
+categories: tutorial
+permalink: /docs24/tutorial/project_level_acl.html
+since: v2.1.0
+---
+
+Whether a user can access a project and use functionalities within it is determined by project-level access control. There are four access permission roles at the project level in Apache Kylin: *ADMIN*, *MANAGEMENT*, *OPERATION* and *QUERY*. Each role defines a list of functionalities a user may perform in Apache Kylin.
+
+- *QUERY*: designed for analysts who only need permission to query tables/cubes in the project.
+- *OPERATION*: designed for the operation team of a corporate/organization who need permission to maintain the Cube. OPERATION access permission includes QUERY.
+- *MANAGEMENT*: designed for a Modeler or Designer who is fully knowledgeable about the business meaning of the data/model and will be in charge of Model and Cube design. MANAGEMENT access permission includes OPERATION and QUERY.
+- *ADMIN*: designed to fully manage the project. ADMIN access permission includes MANAGEMENT, OPERATION and QUERY.
+
+Access permissions are independent between different projects.
+
+### How Access Permission is Determined
+
+Once project-level access permission has been set for a user, access permission on data sources, models and Cubes will be inherited based on the role defined at the project level. For the detailed functionalities each access permission role can access, see the table below.
+
+|                                          | System Admin | Project Admin | Management | Operation | Query |
+| ---------------------------------------- | ------------ | ------------- | ---------- | --------- | ----- |
+| Create/delete project                    | Yes          | No            | No         | No        | No    |
+| Edit project                             | Yes          | Yes           | No         | No        | No    |
+| Add/edit/delete project access permission | Yes          | Yes           | No         | No        | No    |
+| Check model page                         | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check data source page                   | Yes          | Yes           | Yes        | No        | No    |
+| Load, unload table, reload table         | Yes          | Yes           | No         | No        | No    |
+| View model in read only mode             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, edit, clone, drop model             | Yes          | Yes           | Yes        | No        | No    |
+| Check cube detail definition             | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Add, disable/enable, clone cube, edit, drop cube, purge cube | Yes          | Yes           | Yes        | No        | No    |
+| Build, refresh, merge cube               | Yes          | Yes           | Yes        | Yes       | No    |
+| Edit, view cube json                     | Yes          | Yes           | Yes        | No        | No    |
+| Check insight page                       | Yes          | Yes           | Yes        | Yes       | Yes   |
+| View table in insight page               | Yes          | Yes           | Yes        | Yes       | Yes   |
+| Check monitor page                       | Yes          | Yes           | Yes        | Yes       | No    |
+| Check system page                        | Yes          | No            | No         | No        | No    |
+| Reload metadata, disable cache, set config, diagnosis | Yes          | No            | No         | No        | No    |
+
+
+Additionally, when Query Pushdown is enabled, QUERY access permission on a project allows users to issue pushdown queries on all tables in the project even though no cube can serve them. This is not possible for a user who has not been granted QUERY permission at the project level.
+
+### Manage Access Permission at Project-level
+
+1. Click the small gear icon in the top-left corner of the Model page; you will be redirected to the project page.
+
+   ![](/images/Project-level-acl/ACL-1.png)
+
+2. On the project page, expand a project and choose Access.
+3. Click `Grant` to grant permission to a user.
+
+	![](/images/Project-level-acl/ACL-2.png)
+
+4. Fill in the name of the user or role, choose a permission, and then click `Grant`.
+
+5. You can also revoke and update permissions on this page.
+
+   ![](/images/Project-level-acl/ACL-3.png)
+
+   Please note that in order to grant permission to the default users (MODELER and ANALYST), these users need to log in at least once. 
diff --git a/website/_docs24/tutorial/query_pushdown.cn.md b/website/_docs24/tutorial/query_pushdown.cn.md
new file mode 100644
index 0000000..2aee730
--- /dev/null
+++ b/website/_docs24/tutorial/query_pushdown.cn.md
@@ -0,0 +1,50 @@
+---
+layout: docs-cn
+title:  Query Pushdown
+categories: tutorial
+permalink: /cn/docs24/tutorial/query_pushdown.html
+since: v2.1
+---
+
+### Kylin supports Query Pushdown
+
+For SQL queries that no cube can answer, Kylin supports pushing them down through JDBC to a backup query engine such as Hive, SparkSQL or Impala. The steps below use Hive as an example; since Kylin already uses Hive as its data source, Hive is also the easiest Query Pushdown engine to use and configure.
+
+### Query Pushdown configuration
+
+1. Edit the config file `kylin.properties`: uncomment the configuration item `kylin.query.pushdown.runner-class-name` and set it to `org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl`
+
+
+2. Add the following configuration items in `kylin.properties`. If not set, the default values will be used. Please do not forget to replace "hiveserver" and "10000" with the host and port where Hive runs in your environment.
+
+    - *kylin.query.pushdown.jdbc.url*: Hive JDBC URL.
+
+    - *kylin.query.pushdown.jdbc.driver*: Hive JDBC driver class name.
+      
+    - *kylin.query.pushdown.jdbc.username*: user name of the Hive JDBC database.
+
+    - *kylin.query.pushdown.jdbc.password*: password of the Hive JDBC database.
+
+    - *kylin.query.pushdown.jdbc.pool-max-total*: maximum number of connections in the Hive JDBC connection pool, default value 8.
+
+    - *kylin.query.pushdown.jdbc.pool-max-idle*: maximum number of idle connections in the Hive JDBC connection pool, default value 8.
+    
+    - *kylin.query.pushdown.jdbc.pool-min-idle*: minimum number of connections in the Hive JDBC connection pool, default value 0.
+
+Here is a sample configuration; remember to change the host "hiveserver" and port "10000" to your cluster settings.
+
+{% highlight Groff markup %}
+kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
+kylin.query.pushdown.jdbc.url=jdbc:hive2://hiveserver:10000/default
+kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
+kylin.query.pushdown.jdbc.username=hive
+kylin.query.pushdown.jdbc.password=
+kylin.query.pushdown.jdbc.pool-max-total=8
+kylin.query.pushdown.jdbc.pool-max-idle=8
+kylin.query.pushdown.jdbc.pool-min-idle=0
+{% endhighlight %}
+
+3. Restart Kylin
+
+### Doing Query Pushdown
+
+After Query Pushdown is enabled, you can query the synced tables flexibly, without having to build a cube for each query.
+
+   ![](/images/tutorial/2.1/push_down/push_down_1.png)
+
+When a user submits a query and Query Pushdown takes effect, there is a corresponding record in the log.
+
+   ![](/images/tutorial/2.1/push_down/push_down_2.png)
diff --git a/website/_docs24/tutorial/query_pushdown.md b/website/_docs24/tutorial/query_pushdown.md
new file mode 100644
index 0000000..480b38e
--- /dev/null
+++ b/website/_docs24/tutorial/query_pushdown.md
@@ -0,0 +1,61 @@
+---
+layout: docs
+title:  Enable Query Pushdown
+categories: tutorial
+permalink: /docs24/tutorial/query_pushdown.html
+since: v2.1
+---
+
+### Introduction
+
+If a query cannot be answered by any cube, Kylin supports pushing it down through JDBC to a backup query engine such as Hive, SparkSQL or Impala. In the following, Hive is used as an example, as it is one of Kylin's data sources and is convenient to configure. 
+
+
+### Query Pushdown config
+
+1. In Kylin's installation directory, edit the config file `kylin.properties`: uncomment the configuration item `kylin.query.pushdown.runner-class-name` and set it to `org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl`
+
+
+2. Add the configuration items below in the config file `kylin.properties`. 
+
+   - *kylin.query.pushdown.jdbc.url*: Hive JDBC URL.
+
+   - *kylin.query.pushdown.jdbc.driver*: Hive JDBC driver class name.
+
+   - *kylin.query.pushdown.jdbc.username*: Hive JDBC user name.
+
+   - *kylin.query.pushdown.jdbc.password*: Hive JDBC password.
+
+   - *kylin.query.pushdown.jdbc.pool-max-total*: maximum number of connections in the Hive JDBC connection pool, default value is 8
+
+   - *kylin.query.pushdown.jdbc.pool-max-idle*: maximum number of idle connections in the Hive JDBC connection pool, default value is 8
+
+   - *kylin.query.pushdown.jdbc.pool-min-idle*: minimum number of connections in the Hive JDBC connection pool, default value is 0
+
+Here is a sample configuration; remember to change the host "hiveserver" and port "10000" to your cluster configurations.
+
+{% highlight Groff markup %}
+kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
+kylin.query.pushdown.jdbc.url=jdbc:hive2://hiveserver:10000/default
+kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
+kylin.query.pushdown.jdbc.username=hive
+kylin.query.pushdown.jdbc.password=
+kylin.query.pushdown.jdbc.pool-max-total=8
... 2487 lines suppressed ...


[kylin] 02/02: Update document for v2.5

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 20f7e1bba4284b82f62975bb5dea1fdc4d6fb32b
Author: shaofengshi <sh...@apache.org>
AuthorDate: Tue Sep 18 18:14:13 2018 +0800

    Update document for v2.5
---
 website/_dev/howto_hbase_branches.cn.md            |    9 +-
 website/_dev/howto_hbase_branches.md               |   13 +-
 website/_dev/howto_release.cn.md                   |    9 +-
 website/_dev/howto_release.md                      |    9 +-
 website/_docs/gettingstarted/events.md             |    1 +
 website/_docs/gettingstarted/faq.md                |  188 ++-
 website/_docs/howto/howto_upgrade.md               |    7 +
 website/_docs/index.cn.md                          |    8 +-
 website/_docs/index.md                             |    8 +-
 website/_docs/install/advance_settings.cn.md       |   27 +-
 website/_docs/install/advance_settings.md          |   34 +-
 website/_docs/release_notes.md                     |  109 +-
 website/_docs/tutorial/hybrid.cn.md                |   12 +-
 website/_docs/tutorial/hybrid.md                   |   33 +-
 website/_docs/tutorial/setup_jdbc_datasource.cn.md |    6 +-
 website/_docs/tutorial/setup_jdbc_datasource.md    |    4 +-
 website/_docs16/gettingstarted/best_practices.md   |   27 -
 website/_docs16/gettingstarted/concepts.md         |   64 -
 website/_docs16/gettingstarted/events.md           |   24 -
 website/_docs16/gettingstarted/faq.md              |  119 --
 website/_docs16/gettingstarted/terminology.md      |   25 -
 website/_docs16/howto/howto_backup_metadata.md     |   60 -
 .../_docs16/howto/howto_build_cube_with_restapi.md |   53 -
 website/_docs16/howto/howto_cleanup_storage.md     |   22 -
 website/_docs16/howto/howto_jdbc.md                |   92 --
 website/_docs16/howto/howto_ldap_and_sso.md        |  128 --
 website/_docs16/howto/howto_optimize_build.md      |  190 ---
 website/_docs16/howto/howto_optimize_cubes.md      |  212 ----
 website/_docs16/howto/howto_update_coprocessor.md  |   14 -
 website/_docs16/howto/howto_upgrade.md             |   66 -
 website/_docs16/howto/howto_use_beeline.md         |   14 -
 .../howto/howto_use_distributed_scheduler.md       |   16 -
 website/_docs16/howto/howto_use_restapi.md         | 1113 ----------------
 website/_docs16/howto/howto_use_restapi_in_js.md   |   46 -
 website/_docs16/index.cn.md                        |   26 -
 website/_docs16/index.md                           |   57 -
 website/_docs16/install/advance_settings.md        |   98 --
 website/_docs16/install/hadoop_evn.md              |   40 -
 website/_docs16/install/index.cn.md                |   46 -
 website/_docs16/install/index.md                   |   35 -
 website/_docs16/install/kylin_cluster.md           |   32 -
 website/_docs16/install/kylin_docker.md            |   10 -
 website/_docs16/install/manual_install_guide.cn.md |   48 -
 website/_docs16/release_notes.md                   | 1333 --------------------
 website/_docs16/tutorial/acl.cn.md                 |   35 -
 website/_docs16/tutorial/acl.md                    |   32 -
 website/_docs16/tutorial/create_cube.cn.md         |  129 --
 website/_docs16/tutorial/create_cube.md            |  198 ---
 website/_docs16/tutorial/cube_build_job.cn.md      |   66 -
 website/_docs16/tutorial/cube_build_job.md         |   67 -
 website/_docs16/tutorial/cube_streaming.md         |  219 ----
 website/_docs16/tutorial/flink.md                  |  249 ----
 website/_docs16/tutorial/kylin_sample.md           |   21 -
 website/_docs16/tutorial/odbc.cn.md                |   34 -
 website/_docs16/tutorial/odbc.md                   |   49 -
 website/_docs16/tutorial/powerbi.cn.md             |   56 -
 website/_docs16/tutorial/powerbi.md                |   54 -
 website/_docs16/tutorial/squirrel.md               |  112 --
 website/_docs16/tutorial/tableau.cn.md             |  116 --
 website/_docs16/tutorial/tableau.md                |  113 --
 website/_docs16/tutorial/tableau_91.cn.md          |   51 -
 website/_docs16/tutorial/tableau_91.md             |   50 -
 website/_docs16/tutorial/web.cn.md                 |  134 --
 website/_docs16/tutorial/web.md                    |  123 --
 website/archive/docs16.tar.gz                      |  Bin 0 -> 91609 bytes
 website/download/index.cn.md                       |   12 +
 website/download/index.md                          |   19 +-
 67 files changed, 407 insertions(+), 6019 deletions(-)

diff --git a/website/_dev/howto_hbase_branches.cn.md b/website/_dev/howto_hbase_branches.cn.md
index 4b0319a..9605dab 100644
--- a/website/_dev/howto_hbase_branches.cn.md
+++ b/website/_dev/howto_hbase_branches.cn.md
@@ -11,10 +11,11 @@ permalink: /cn/development/howto_hbase_branches.html
 
 The branching design is
 
-- The `master` branch compiles with HBase 0.98 and is also the main development branch. All bug fixes and new features are committed to `master` only.
-- The `master-hbase1.x` branch compiles with HBase 1.x. This branch is created by applying one patch on top of `master`. In other words, `master-hbase1.x` = `master` + `a patch to support HBase 1.x`.
-- Similarly, `master-cdh5.7` = `master-hbase1.x` + `a patch to support CDH 5.7`.
-- No code changes happen directly on `master-hbase1.x` and `master-cdh5.7` (apart from the last commit on the branch that adapts HBase calls).
+- The `master` branch compiles with HBase 1.1 and is also the main development branch. All bug fixes and new features are committed to `master` only.
+- The `master-hadoop3.1` branch compiles with Hadoop 3.1 + HBase 2.x. This branch is created by applying several patches on top of `master`. In other words, `master-hadoop3.1` = `master` + `patches to support HBase 2.x`.
+- The `master-hbase0.98` branch is deprecated; HBase 0.98 users are advised to upgrade HBase;
+- There are also several Kylin release maintenance branches, such as 2.5.x and 2.4.x; if you submit a patch or pull request, please tell the reviewer which versions need it, and the reviewer will merge the patch into those branches besides master;
+- No code changes happen directly on `master-hadoop3.1` (apart from the last commit on the branch that adapts HBase calls).
 
 There is a script that helps keep these branches in sync: `dev-support/sync_hbase_cdh_branches.sh`.
 
diff --git a/website/_dev/howto_hbase_branches.md b/website/_dev/howto_hbase_branches.md
index f871b23..48a2dd5 100644
--- a/website/_dev/howto_hbase_branches.md
+++ b/website/_dev/howto_hbase_branches.md
@@ -1,20 +1,21 @@
 ---
 layout: dev
-title:  How to Maintain HBase Branches
+title:  How to Maintain Hadoop/HBase Branches
 categories: development
 permalink: /development/howto_hbase_branches.html
 ---
 
-### Kylin Branches for Different HBase Versions
+### Kylin Branches for Different Hadoop/HBase Versions
 
 Because HBase API diverges based on versions and vendors, different code branches have to be maintained for different HBase versions.
 
 The branching design is
 
-- The `master` branch compiles with HBase 0.98, and is also the main branch for development. All bug fixes and new features commits to `master` only.
-- The `master-hbase1.x` branch compiles with HBase 1.x. This branch is created by applying one patch on top of `master`. In other word, `master-hbase1.x` = `master` + `a patch to support HBase 1.x`.
-- Similarly, there is `master-cdh5.7` = `master-hbase1.x` + `a patch to support CDH 5.7`.
-- No code changes should happen on `master-hbase1.x` and `master-cdh5.7` directly (apart from the last commit on the branch that adapts HBase calls).
+- The `master` branch compiles with HBase 1.1, and is also the main branch for development. All bug fixes and new features are committed to `master` only.
+- The `master-hadoop3.1` branch compiles with Hadoop 3.1 and HBase 2.x. This branch is created by applying several patches on top of `master`. In other words, `master-hadoop3.1` = `master` + `patches to support Hadoop 3 and HBase 2.x`.
+- The `master-hbase0.98` branch is deprecated;
+- There are several release maintenance branches like `2.5.x` and `2.4.x`. If you have a PR or patch, please let the reviewer know which branches it needs to be applied to; the reviewer should cherry-pick the patch to those branches after it is merged into master.
+- No code changes should happen on `master-hadoop3.1` or `master-hbase0.98` directly (apart from the last commit on the branch that adapts HBase calls).
 
 There is a script helps to keep these branches in sync: `dev-support/sync_hbase_cdh_branches.sh`.
 
diff --git a/website/_dev/howto_release.cn.md b/website/_dev/howto_release.cn.md
index 69a7117..e7b1476 100644
--- a/website/_dev/howto_release.cn.md
+++ b/website/_dev/howto_release.cn.md
 _For users in China, please use a proxy with caution to avoid potential firewall issues._  
 * Apache Nexus (maven repo): [https://repository.apache.org](https://repository.apache.org)  
 * Apache Kylin dist repo: [https://dist.apache.org/repos/dist/dev/kylin](https://dist.apache.org/repos/dist/dev/kylin)  
 
-## Install Java 8 and Maven 3.5.3+
-Before you start, make sure Java 8 and Maven 3.5.3 or above are installed.
+## Software requirements
+* Java 8 or above; 
+* Maven 3.5.3 or above;
+* If you make the release on Mac OS X, please install GNU tar following [this post](http://macappstore.org/gnu-tar/).
 
 ## Setup GPG signing keys  
 Follow the instructions at [http://www.apache.org/dev/release-signing](http://www.apache.org/dev/release-signing) to create a key pair  
@@ -383,7 +385,8 @@ $ mkdir -p ~/dist/release
 $ cd ~/dist/release
 $ svn co https://dist.apache.org/repos/dist/release/kylin
 $ cd kylin
-$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN apache-kylin-X.Y.Z
+$ mkdir apache-kylin-X.Y.Z
+$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN/apache-kylin* apache-kylin-X.Y.Z/
 $ svn add apache-kylin-X.Y.Z
 
 # Check in.
diff --git a/website/_dev/howto_release.md b/website/_dev/howto_release.md
index 9eda756..9821f25 100644
--- a/website/_dev/howto_release.md
+++ b/website/_dev/howto_release.md
@@ -18,8 +18,10 @@ Make sure you have an available account and privileges for the following applications:
 * Apache Nexus (maven repo): [https://repository.apache.org](https://repository.apache.org)  
 * Apache Kylin dist repo: [https://dist.apache.org/repos/dist/dev/kylin](https://dist.apache.org/repos/dist/dev/kylin)  
 
-## Install Java 8 and Maven 3.5.3+
-Make sure you have Java 8 and Maven 3.5.3 or above installed.
+## Software requirements
+* Java 8 or above; 
+* Maven 3.5.3 or above;
+* If you're on Apple Mac OS X, please install GNU tar; see [this post](http://macappstore.org/gnu-tar/).
 
 ## Setup GPG signing keys  
 Follow instructions at [http://www.apache.org/dev/release-signing](http://www.apache.org/dev/release-signing) to create a key pair  
@@ -386,7 +388,8 @@ $ mkdir -p ~/dist/release
 $ cd ~/dist/release
 $ svn co https://dist.apache.org/repos/dist/release/kylin
 $ cd kylin
-$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN apache-kylin-X.Y.Z
+$ mkdir apache-kylin-X.Y.Z
+$ cp -rp ../../dev/kylin/apache-kylin-X.Y.Z-rcN/apache-kylin* apache-kylin-X.Y.Z/
 $ svn add apache-kylin-X.Y.Z
 
 # Check in.
diff --git a/website/_docs/gettingstarted/events.md b/website/_docs/gettingstarted/events.md
index f617907..3b35c19 100644
--- a/website/_docs/gettingstarted/events.md
+++ b/website/_docs/gettingstarted/events.md
@@ -7,6 +7,7 @@ permalink: /docs/gettingstarted/events.html
 
 __Conferences__
 
+* [Refactor your data warehouse with mobile analytics products](https://conferences.oreilly.com/strata/strata-ny/public/schedule/speaker/313314) by Zhi Zhu and Luke Han at Strata Data Conference New York, New York September 11–13, 2018
 * [Apache Kylin on HBase: Extreme OLAP engine for big data](https://www.slideshare.net/ShiShaoFeng1/apache-kylin-on-hbase-extreme-olap-engine-for-big-data) by Shaofeng Shi at [HBaseCon Asia 2018](https://hbase.apache.org/hbaseconasia-2018/)
 * [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
 * [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
diff --git a/website/_docs/gettingstarted/faq.md b/website/_docs/gettingstarted/faq.md
index 751a4ad..26bce81 100644
--- a/website/_docs/gettingstarted/faq.md
+++ b/website/_docs/gettingstarted/faq.md
@@ -6,7 +6,169 @@ permalink: /docs/gettingstarted/faq.html
 since: v0.6.x
 ---
 
-#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
+#### Is Kylin a generic SQL engine for big data?
+
+  * No, Kylin is an OLAP engine with a SQL interface. SQL queries need to be matched with a pre-defined OLAP model.
+
+#### How to compare Kylin with other SQL engines like Hive, Presto, Spark SQL, Impala?
+
+  * They answer queries in different ways. Kylin is not a replacement for them, but a supplement (a query accelerator). Many users run Kylin together with other SQL engines. For frequently used query patterns, building Cubes can greatly improve performance and offload cluster workloads. For rarely used patterns or ad-hoc queries, other engines are more flexible.
+
+#### What's a typical scenario to use Apache Kylin?
+
+  * Kylin can be the best option if you have a huge table (e.g., more than 100 million rows) joined with lookup tables, queries need to finish in seconds (dashboards, interactive reports, business intelligence, etc.), and there can be dozens or hundreds of concurrent users.
+
+#### How large a data scale can Kylin support? How about the performance?
+
+  * Kylin can support second-level query performance on TB to PB scale datasets. This has been verified by users such as eBay, Meituan and Toutiao. Take Meituan's case as an example (as of 2018-08): 973 cubes, 3.8 million queries per day, 8.9 trillion rows of raw data, 971 TB of total cube size (the original data is bigger), 50% of queries finished in < 0.5 seconds and 90% of queries in < 1.2 seconds.
+
+#### Who are using Apache Kylin?
+
+  * Please check Kylin's [powered by page](https://kylin.apache.org/community/poweredby.html).
+
+#### What's the expansion rate of Cube (compared with raw data)?
+
+  * It depends on a couple of factors, for example, the number of dimensions/measures, dimension cardinality, cuboid number, compression algorithm, etc. You can optimize the cube expansion in many ways to control the size.
+
+#### How to compare Kylin with Druid?
+
+  * Druid is more suitable for real-time analysis, while Kylin focuses more on the OLAP case. Druid has good integration with Kafka for real-time streaming; Kylin fetches data from Hive or Kafka in batches. The real-time capability of Kylin is still under development.
+
+  * Many internet service providers host both Druid and Kylin, serving different purposes (real-time and historical).
+
+  * Some other highlights of Kylin: support for star & snowflake schemas; ANSI SQL support; JDBC/ODBC for BI integrations. Kylin also has a Web GUI with LDAP/SSO user authentication.
+
+  * For more information, please do a search or check this [mail thread](https://mail-archives.apache.org/mod_mbox/kylin-dev/201503.mbox/%3CCAKmQrOY0fjZLUU0MGo5aajZ2uLb3T0qJknHQd+Wv1oxd5PKixQ@mail.gmail.com%3E).
+
+#### How to quick start with Kylin?
+
+  * To get a quick start, you can run Kylin in a Hadoop sandbox VM or in the cloud; for example, start a small AWS EMR or Azure HDInsight cluster and then install Kylin on one of the nodes.
+
+#### How many Hadoop nodes are needed to run Kylin?
+
+  * Kylin can run on a Hadoop cluster from a couple of nodes to thousands of nodes, depending on how much data you have. The architecture is horizontally scalable.
+
+  * Because most of the computation happens in Hadoop (MapReduce/Spark/HBase), usually you only need to install Kylin on a couple of nodes.
+
+#### How many dimensions can be in a cube?
+
+  * The maximum number of physical dimensions (excluding derived columns in lookup tables) in a cube is 63; if you can normalize some dimensions into lookup tables, then with derived dimensions you can create a cube with more than 100 dimensions.
+
+  * However, a cube with more than 30 physical dimensions is not recommended; you may not even be able to save it in Kylin without optimizing the aggregation groups. Please search for the "curse of dimensionality".
+
+#### Why do I get an error when running a "select *" query?
+
+  * The cube only has aggregated data, so all your queries should be aggregated ("GROUP BY"). You can group by all the dimensions to get something close to the detailed result, but that is not the raw data.
+
+  * To be connectable from some BI tools, Kylin tries to answer "select *" queries, but be aware that the result might not be what you expect. Please make sure each query sent to Kylin is aggregated.
+
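+For example, with a hypothetical SALES fact table whose cube has dimensions PART_DT and LSTG_SITE_ID and a SUM(PRICE) measure, the aggregated form below can be served by the cube, while a plain "select * from SALES" cannot:
+
+{% highlight Groff markup %}
+-- group by all the dimensions you need; answerable by the cube
+SELECT PART_DT, LSTG_SITE_ID, SUM(PRICE) AS TOTAL_PRICE
+FROM SALES
+GROUP BY PART_DT, LSTG_SITE_ID
+{% endhighlight %}
+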
+#### How can I query raw data from a cube?
+
+  * A cube is not the right option for raw data.
+
+But if you do want it, there are some workarounds: 1) add the primary key as a dimension, then "group by pk" will return the raw data; 2) configure Kylin to push down the query to another SQL engine like Hive, though the performance is not assured.
+
+#### What is the UHC dimension?
+
+  * UHC means Ultra High Cardinality. Cardinality is the number of distinct values of a dimension. Usually, a dimension's cardinality ranges from tens to millions. If it is above a million, we call it a UHC dimension, for example, user ID, cell number, etc.
+
+  * Kylin supports UHC dimensions, but you need to pay attention to them, especially the encoding and the cuboid combinations; they may make your Cube very large and queries slow.
+
+#### Can I specify a cube to answer my SQL statements?
+
+  * No, you cannot; the Cube is transparent to the end user. If you have multiple Cubes for the same data model, separating them into different projects is a good idea.
+
+#### Is there a REST API to create the project/model/cube?
+
+  * Yes, but they are private APIs and are inclined to change across versions (without notification). By design, Kylin expects the user to create a new project/model/cube in Kylin's web GUI.
+
+#### Where is the cube stored? Can I read the cube directly from HBase without going through the Kylin API?
+
+  * The cube is stored in HBase; each cube segment is an HBase table. The dimension values are composed into the row key, and the measures are serialized in columns. To improve storage efficiency, both dimension and measure values are encoded to bytes. Kylin decodes the bytes back to the original values after fetching from HBase. Without Kylin's metadata, the HBase tables are not readable.
+
+#### How to encrypt Cube Data?
+
+  * You can enable encryption on the HBase side. Refer to https://hbase.apache.org/book.html#hbase.encryption.server for more details.
+
+#### How to schedule the cube build at a fixed frequency, in an automatic way?
+
+  * Kylin doesn't have a built-in scheduler for this. You can trigger a build through the Rest API from an external scheduler service, such as a Linux cron job or Apache Airflow; see the example below.
+
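+For example, a nightly cron job can trigger a build through the Rest API with curl. A sketch, assuming the default ADMIN:KYLIN credentials (base64-encoded below) and a placeholder host, cube name and end timestamp:
+
+{% highlight Groff markup %}
+curl -X PUT -H "Authorization: Basic QURNSU46S1lMSU4=" \
+  -H "Content-Type: application/json;charset=utf-8" \
+  -d '{"startTime": 0, "endTime": 1537200000000, "buildType": "BUILD"}' \
+  http://your-kylin-host:7070/kylin/api/cubes/my_cube/rebuild
+{% endhighlight %}
+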
+#### Does Kylin support Hadoop 3 and HBase 2.0?
+
+  * Starting from v2.5.0, Kylin provides a binary package for Hadoop 3 and HBase 2.
+
+#### The Cube is ready, but why does the table not appear in the "Insight" tab?
+
+  * Make sure the "kylin.server.cluster-servers" property in `conf/kylin.properties` lists EVERY Kylin node, including all job and query nodes; Kylin nodes notify each other to flush caches using this configuration. Also please ensure the network among them is healthy.
+
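+A sample value listing three nodes (host names are placeholders):
+
+{% highlight Groff markup %}
+kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070,kylin-node3:7070
+{% endhighlight %}
+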
+#### What should I do if I encounter a "java.lang.NoClassDefFoundError" error?
+
+  * Kylin doesn't ship those Hadoop jars, because they should already exist on the Hadoop node. So Kylin tries to find them and add them to its classpath. Due to Hadoop's complexity, there might be cases where a jar is not found. In that case, please look at "bin/find-*-dependency.sh" and "bin/kylin.sh" and modify them to fit your environment.
+
+#### How to query Kylin in Python?
+
+  * Please check [https://github.com/Kyligence/kylinpy](https://github.com/Kyligence/kylinpy).
+
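+A minimal start, assuming the package is published on PyPI under the same name (see the project README for query examples):
+
+{% highlight Groff markup %}
+pip install kylinpy
+{% endhighlight %}
+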
+#### How to add dimension/measure to a cube?
+
+  * Once a cube is built, its structure cannot be modified. To add a dimension/measure, you need to clone a new cube and then add it there.
+
+When the new cube is built, please disable or drop the old one.
+
+If you can accept the absence of the new dimensions for historical data, you can build the new cube starting from the end time of the old cube, and then create a hybrid model over the old and new cubes.
+
+#### The query result does not exactly match that in Hive; what's the possible reason?
+
+  * Possible reasons:
+a) Source data changed in Hive after being built into the cube;
+b) The cube's time range is not the same as in Hive;
+c) Another cube answered your query;
+d) The data model has inner joins, but the query doesn't join all tables;
+e) The cube has some approximate measures like HyperLogLog or TopN;
+f) In v2.3 and before, Kylin may have data loss when fetching from Hive, see KYLIN-3388.
+
+#### What to do if the source data changed after being built into the cube?
+
+  * You need to refresh the cube. If the cube is partitioned, you can refresh only the affected segments.
+
+#### What is the possible reason for getting the error ‘bulk load aborted with some files not yet loaded’ in the ‘Load HFile to HBase Table’ step?
+
+  * Kylin doesn't have permission to execute HBase CompleteBulkLoad. Check whether the current user (the one that runs the Kylin service) has permission to access HBase.
+
+#### Why `bin/sample.sh` cannot create the `/tmp/kylin` folder on HDFS?
+
+  * Run `./bin/find-hadoop-conf-dir.sh -v`, check the error message, and then check your environment according to the reported information.
+
+#### In Chrome, web console shows net::ERR_CONTENT_DECODING_FAILED, what should I do?
+
+  * Edit `$KYLIN_HOME/tomcat/conf/server.xml`: find the `compression="on"` attribute and change it to `compression="off"`.
+
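+The relevant attribute sits on the HTTP connector element in that file; a rough sketch (other attributes omitted):
+
+{% highlight Groff markup %}
+<!-- in $KYLIN_HOME/tomcat/conf/server.xml; keep your other attributes as-is -->
+<Connector port="7070" protocol="HTTP/1.1" compression="off" />
+{% endhighlight %}
+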
+#### How to configure one cube to be built using a chosen YARN queue?
+
+  * Set the YARN queue in the Cube's "Configuration Overwrites" page; then it will affect only that cube. Here are the three parameters:
+
+  {% highlight Groff markup %}
+kylin.engine.mr.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
+kylin.source.hive.config-override.mapreduce.job.queuename=YOUR_QUEUE_NAME
+kylin.engine.spark-conf.spark.yarn.queue=YOUR_QUEUE_NAME
+  {% endhighlight %}
+
+#### How to add a new JDBC data source dialect?
+
+  * It is easy to add a new type of JDBC data source. You can follow these steps:
+
+1) Add the dialect in source-hive/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
+
+2) Implement a new IJdbcMetadata if the metadata fetching of the database you want to add differs from the others, and then register it in JdbcMetadataFactory
+
+3) You may need to customize the SQL for creating/dropping tables in JdbcExplorer for the database you want to add.
+
+#### How to ask a question?
+
+  * Check the Kylin documents first; a Google search can also help. Often the question has already been answered, so you don't need to ask again. If there is no match, please send your question to the Apache Kylin user mailing list: user@kylin.apache.org; you need to drop an email to user-subscribe@kylin.apache.org first to subscribe if you haven't done so. In the email, please provide your Kylin and Hadoop versions, the specific error logs (as much as possible), and the steps to reproduce the issue.  
+
+#### "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
 
   * Kylin need many dependent jars (hadoop/hive/hcat/hbase/kafka) on classpath to work, but Kylin doesn't ship them. It will seek these jars from your local machine by running commands like `hbase classpath`, `hive -e set` etc. The founded jars' path will be appended to the environment variable *HBASE_CLASSPATH* (Kylin uses `hbase` shell command to start up, which will read this). But in some Hadoop distribution (like AWS EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPA [...]
 
@@ -22,12 +184,12 @@ since: v0.6.x
   export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
   {% endhighlight %}
 
-#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
+#### Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
 
   * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the de [...]
 
 
-#### 3. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
+#### How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
 
   * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
 
@@ -42,16 +204,16 @@ since: v0.6.x
   {% endhighlight %}
 
 
-#### 4. SUM(field) returns a negtive result while all the numbers in this field are > 0
+#### SUM(field) returns a negative result while all the numbers in this field are > 0
  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need a rebuild). Keep in mind to always declare BIGINT in Hive for an integer column which  [...]
 
-#### 5. Why Kylin need extract the distinct columns from Fact Table before building cube?
+#### Why does Kylin need to extract the distinct columns from the fact table before building a cube?
  * Kylin uses a dictionary to encode the values in each column, which greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values for each column.
 
-#### 6. Why Kylin calculate the HIVE table cardinality?
+#### Why does Kylin calculate the Hive table cardinality?
  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer the build and the slower the query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided as much as possible. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
 
-#### 7. How to add new user or change the default password?
+#### How to add a new user or change the default password?
  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
 
    {% highlight Groff markup %}
@@ -61,7 +223,7 @@ since: v0.6.x
  * The password hash for the pre-defined test users can be found in the profile "sandbox,testing" part; to change the default password, you need to generate a new hash and then update it here, please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
 
-#### 8. Using sub-query for un-supported SQL
+#### Using sub-query for unsupported SQL
 
 {% highlight Groff markup %}
 Original SQL:
@@ -90,7 +252,7 @@ from (
 group by a.slr_sgmt
 {% endhighlight %}
 
-#### 9. Build kylin meet NPM errors (中国大陆地区用户请特别注意此问题)
+#### Build Kylin meets NPM errors (users in mainland China, please pay special attention to this issue)
 
   * Please add proxy for your NPM:  
   `npm config set proxy http://YOUR_PROXY_IP`
@@ -98,14 +260,14 @@ group by a.slr_sgmt
  * Please update your local NPM repository to use a mirror of npmjs.org, like Taobao NPM (users in mainland China, please update your local NPM repository to use a domestic NPM mirror, such as the Taobao NPM mirror):  
   [http://npm.taobao.org](http://npm.taobao.org)
 
-#### 10. Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
+#### Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
  * Users may get this error when running the HBase client for the first time. Please check the error trace to see whether there is an error about failing to access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
 
-#### 11. Kylin JDBC driver returns a different Date/time than the REST API, seems it add the timezone to parse the date.
+#### Kylin JDBC driver returns a different Date/time than the REST API; it seems to add the timezone when parsing the date.
   * Please check the [post in mailing list](http://apache-kylin.74782.x6.nabble.com/JDBC-query-result-Date-column-get-wrong-value-td5370.html)
 
 
-#### 12. How to update the default password for 'ADMIN'?
+#### How to update the default password for 'ADMIN'?
  * By default, Kylin uses a simple, configuration-based user registry; the default administrator 'ADMIN' with password 'KYLIN' is hard-coded in `kylinSecurity.xml`. To modify the password, you first need to get the new password's encrypted value (with BCrypt), and then set it in `kylinSecurity.xml`. Here is a sample with password 'ABCDE':
   
 {% highlight Groff markup %}
@@ -141,7 +303,7 @@ Replace the origin encrypted password with the new one:
 
Restart Kylin to take effect. If you have multiple Kylin servers as a cluster, do the same on each instance. 
 
-#### 13. What kind of data be left in 'kylin.env.hdfs-working-dir' ? We often execute kylin cleanup storage command, but now our working dir folder is about 300 GB size, can we delete old data manually?
+#### What kind of data is left in 'kylin.env.hdfs-working-dir'? We often execute the Kylin storage cleanup command, but our working dir folder is now about 300 GB in size; can we delete old data manually?
 
The data in 'hdfs-working-dir' ('hdfs:///kylin/kylin_metadata/' by default) includes intermediate files (which will be garbage-collected) and cuboid data (which won't be). The cuboid data is kept for future segment merges, as Kylin cannot merge from HBase. If you're sure those segments will not be merged, you can move them to another path or even delete them.
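
To see what takes up the space before moving or deleting anything, you can inspect the working directory first (the path below is the default; adjust it to your 'kylin.env.hdfs-working-dir'):

{% highlight Groff markup %}
# show per-subfolder usage of the Kylin working directory
hdfs dfs -du -h /kylin/kylin_metadata/
{% endhighlight %}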
 
diff --git a/website/_docs/howto/howto_upgrade.md b/website/_docs/howto/howto_upgrade.md
index 1110f21..e11edfa 100644
--- a/website/_docs/howto/howto_upgrade.md
+++ b/website/_docs/howto/howto_upgrade.md
@@ -19,6 +19,13 @@ Running as a Hadoop client, Apache Kylin's metadata and Cube data are persisted
 
 Below are versions specific guides:
 
+## Upgrade from 2.4 to 2.5.0
+
+* Kylin 2.5 needs Java 8; please upgrade Java if you're running Java 7.
+* Kylin metadata is compatible between 2.4 and 2.5; no migration is needed.
+* The Spark engine takes over more steps from MR, so you may see performance differences for the same cube after the upgrade.
+* The property `kylin.source.jdbc.sqoop-home` needs to be the location of the Sqoop installation, not its "bin" subfolder; please modify it if you're using an RDBMS as the data source (see the example after this list).
+* The Cube Planner is enabled by default now; new cubes will be optimized by it on their first build. The System Cube and dashboard still need manual enablement.
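+
+For example, if Sqoop is installed under `/usr/local/sqoop` (an illustrative path), the setting changes like this:
+
+{% highlight Groff markup %}
+# old style (pointing at the bin subfolder); no longer correct:
+# kylin.source.jdbc.sqoop-home=/usr/local/sqoop/bin
+# new style (pointing at the installation root):
+kylin.source.jdbc.sqoop-home=/usr/local/sqoop
+{% endhighlight %}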
 
 ## Upgrade from v2.1.0 to v2.2.0
 
diff --git a/website/_docs/index.cn.md b/website/_docs/index.cn.md
index c741e05..f30068f 100644
--- a/website/_docs/index.cn.md
+++ b/website/_docs/index.cn.md
@@ -12,10 +12,10 @@ permalink: /cn/docs/index.html
 Apache Kylin™ is an open source distributed analytics engine that provides a SQL query interface and multi-dimensional analysis (OLAP) capability on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
 
 Documentation of previous versions: 
-* [v2.3.x document](/cn/docs23/)
-* [v2.1.x and v2.2.x document](/cn/docs21/)
-* [v2.0.x document](/cn/docs20/)
-* [v1.6.x document](/cn/docs16/)
+* [v2.4 document](/cn/docs24/)
+* [v2.3 document](/cn/docs23/)
+* [v2.1 and v2.2 document](/cn/docs21/)
+* [v2.0 document](/cn/docs20/)
+* [Archive](/archive/)
 
 Installation 
diff --git a/website/_docs/index.md b/website/_docs/index.md
index 3f3a2e9..c16deb3 100644
--- a/website/_docs/index.md
+++ b/website/_docs/index.md
@@ -12,10 +12,10 @@ Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
 
 This is the document for the latest released version (v2.4). Document of prior versions: 
-* [v2.3.x document](/docs23)
-* [v2.1.x and v2.2.x document](/docs21/)
-* [v2.0.x document](/docs20/)
-* [v1.6.x document](/docs16/)
+* [v2.4 document](/docs24)
+* [v2.3 document](/docs23)
+* [v2.1 and v2.2 document](/docs21/)
+* [v2.0 document](/docs20/)
 * [Archive](/archive/)
 
 Installation & Setup
diff --git a/website/_docs/install/advance_settings.cn.md b/website/_docs/install/advance_settings.cn.md
index ce42f2c..43de7ba 100644
--- a/website/_docs/install/advance_settings.cn.md
+++ b/website/_docs/install/advance_settings.cn.md
@@ -116,9 +116,12 @@ kylin.job.admin.dls=adminstrator-address
 ## Support MySQL as Kylin metadata storage (beta)
 
 Kylin supports MySQL as the metadata storage; to enable this, you need to perform the following steps:
-<ol>
-<li>Create a new database named kylin in MySQL</li>
-<li>Edit `conf/kylin.properties` and set the following parameters</li>
+
+* Install a MySQL server, e.g., v5.1.17;
+* Download and copy the MySQL JDBC connector "mysql-connector-java-<version>.jar" to the $KYLIN_HOME/ext directory (create it if it does not exist)
+* Create a database in MySQL dedicated to Kylin metadata, e.g., kylin_metadata;
+* Edit `conf/kylin.properties` and set the following parameters:
+
 {% highlight Groff markup %}
 kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
 kylin.metadata.jdbc.dialect=mysql
@@ -127,11 +130,13 @@ kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
 kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
 kylin.metadata.jdbc.max-cell-size=1mb
 {% endhighlight %}
-The configuration items have the following meanings; `url`, `username`, and `password` are required. Items not configured will use the default values:
+
+More JDBC connection properties can be added to the `kylin.metadata.url` setting; `url`, `username`, and `password` are required. Items not configured will use the default values:
+
 {% highlight Groff markup %}
-Url: JDBC url
-Username: JDBC user name
-Password: JDBC password; if encryption is selected, put the encrypted password here
+url: JDBC connection URL
+username: JDBC user name
+password: JDBC password; if encryption is selected, put the encrypted password here
driverClassName: JDBC driver class name, default value is com.mysql.jdbc.Driver
maxActive: maximum number of database connections, default value is 5
maxIdle: maximum number of idle connections, default value is 5
@@ -140,13 +145,13 @@ removeAbandoned: whether to automatically reclaim timed-out connections, default value is true
removeAbandonedTimeout: timeout in seconds, default value is 300
passwordEncrypted: whether the JDBC password is encrypted, default value is false
 {% endhighlight %}
-<li>(Optional) Encrypt the password in this way:</li>
+
+You can encrypt the JDBC connection password:
 {% highlight Groff markup %}
 cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
 java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
 {% endhighlight %}
-<li>拷贝 JDBC connector jar 到 $KYLIN_HOME/ext 目录(如没有该目录请自行创建)</li>
-<li>启动 Kylin</li>
-</ol>
+
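+Then take the printed cipher text as the password in `kylin.metadata.url` and mark it as encrypted, for example (all values are placeholders):
+
+{% highlight Groff markup %}
+kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_encrypted_password},passwordEncrypted=true
+{% endhighlight %}
+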
+* Start Kylin
 
+*Note: this feature is still in beta; please use it with caution*
\ No newline at end of file
diff --git a/website/_docs/install/advance_settings.md b/website/_docs/install/advance_settings.md
index e3a8307..595fb66 100644
--- a/website/_docs/install/advance_settings.md
+++ b/website/_docs/install/advance_settings.md
@@ -113,40 +113,42 @@ Restart Kylin server to take effective. To disable, set `mail.enabled` back to `
 Administrator will get notifications for all jobs. Modeler and Analyst need enter email address into the "Notification List" at the first page of cube wizard, and then will get notified for that cube.
 
 
-## Enable MySQL as Kylin metadata storage(Beta)
+## Enable MySQL as Kylin metadata storage (beta)
 
-Kylin supports MySQL as metadata storage; To enable this, you should perform the following steps: 
-<ol>
-<li>Create a new database named kylin in the MySQL database</li>
-<li>Edit `conf/kylin.properties`, set the following parameters:</li>
+Kylin can use MySQL as the metadata storage, for scenarios where HBase is not the best option; to enable this, you can perform the following steps: 
+
+* Install a MySQL server, e.g., v5.1.17;
+* Create a new MySQL database for Kylin metadata, for example "kylin_metadata";
+* Download and copy MySQL JDBC connector "mysql-connector-java-<version>.jar" to $KYLIN_HOME/ext (if the folder does not exist, create it yourself);
+* Edit `conf/kylin.properties`, set the following parameters:
 {% highlight Groff markup %}
-kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password}
+kylin.metadata.url={your_metadata_tablename}@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_password},driverClassName=com.mysql.jdbc.Driver
 kylin.metadata.jdbc.dialect=mysql
 kylin.metadata.jdbc.json-always-small-cell=true
 kylin.metadata.jdbc.small-cell-meta-size-warning-threshold=100mb
 kylin.metadata.jdbc.small-cell-meta-size-error-threshold=1gb
 kylin.metadata.jdbc.max-cell-size=1mb
 {% endhighlight %}
-The configuration items have the following meanings, `url`, `username`, and `password` are required configuration items. If not configured, the default configuration items will be used:
+In "kylin.metadata.url" more configuration items can be added; The `url`, `username`, and `password` are required items. If not configured, the default configuration items will be used:
 {% highlight Groff markup %}
-Url: JDBC url
-Username: JDBC username
-Password: JDBC password, if encryption is selected, please write the encrypted password here;
+url: the JDBC connection URL;
+username: JDBC user name
+password: JDBC password, if encryption is selected, please put the encrypted password here;
 driverClassName: JDBC driver class name, the default value is com.mysql.jdbc.Driver
 maxActive: the maximum number of database connections, the default value is 5;
 maxIdle: the maximum number of connections waiting, the default value is 5;
 maxWait: The maximum number of milliseconds to wait for connection. The default value is 1000.
 removeAbandoned: Whether to automatically reclaim timeout connections, the default value is true;
 removeAbandonedTimeout: the number of seconds in the timeout period, the default is 300;
-passwordEncrypted: Whether to encrypt the JDBC password, the default is false;
+passwordEncrypted: Whether the JDBC password is encrypted or not, the default is false;
 {% endhighlight %}
-<li>(Optional) Encrypt password in this way:</li>
+
+* You can encrypt your password:
 {% highlight Groff markup %}
 cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
 java -classpath kylin-server-base-\<version\>.jar:kylin-core-common-\<version\>.jar:spring-beans-4.3.10.RELEASE.jar:spring-core-4.3.10.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
 {% endhighlight %}
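+
+Then take the printed cipher text as the password in `kylin.metadata.url` and mark it as encrypted, for example (all values are placeholders):
+
+{% highlight Groff markup %}
+kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://localhost:3306/kylin,username={your_username},password={your_encrypted_password},passwordEncrypted=true
+{% endhighlight %}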
-<li>Copy the JDBC connector jar to $KYLIN_HOME/ext (if it does not exist, create it yourself)</li>
-<li>Start Kylin</li>
-</ol>
 
-*Note: The function is still in the test, it is recommended that you use it with caution*
+* Start Kylin
+
+**Note: The feature is in beta now.**
diff --git a/website/_docs/release_notes.md b/website/_docs/release_notes.md
index f92e9ea..558a378 100644
--- a/website/_docs/release_notes.md
+++ b/website/_docs/release_notes.md
@@ -15,10 +15,115 @@ or send to Apache Kylin mailing list:
 * User relative: [user@kylin.apache.org](mailto:user@kylin.apache.org)
 * Development relative: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
 
+## v2.5.0 - 2018-09-16
+_Tag:_ [kylin-2.5.0](https://github.com/apache/kylin/tree/kylin-2.5.0)
+This is a major release after 2.4, with 96 bug fixes and enhancements. Check [How to upgrade](/docs/howto/howto_upgrade.html).
+
+__New Feature__
+* [KYLIN-2565] - Support Hadoop 3.0
+* [KYLIN-3488] - Support MySQL as Kylin metadata storage
+
+__Improvement__
+* [KYLIN-2998] - Kill spark app when cube job was discarded
+* [KYLIN-3033] - Support HBase 2.0
+* [KYLIN-3071] - Add config to reuse dict to reduce dict size
+* [KYLIN-3094] - Upgrade zookeeper to 3.4.12
+* [KYLIN-3146] - Response code and exception should be standardised for cube checking
+* [KYLIN-3186] - Add support for partitioning columns that combine date and time (e.g. YYYYMMDDHHMISS)
+* [KYLIN-3250] - Upgrade jetty version to 9.3.22
+* [KYLIN-3259] - When a cube is deleted, remove it from the hybrid cube definition
+* [KYLIN-3321] - Set MALLOC_ARENA_MAX in script
+* [KYLIN-3355] - Improve the HTTP return code of Rest API
+* [KYLIN-3370] - Enhance segment pruning
+* [KYLIN-3384] - Allow setting REPLICATION_SCOPE on newly created tables
+* [KYLIN-3414] - Optimize the cleanup of project L2 cache
+* [KYLIN-3418] - User interface for hybrid model
+* [KYLIN-3419] - Upgrade to Java 8
+* [KYLIN-3421] - Improve job scheduler fetch performance
+* [KYLIN-3423] - Performance improvement in FactDistinctColumnsMapper
+* [KYLIN-3424] - Missing invoke addCubingGarbageCollectionSteps in the cleanup step for HBaseMROutput2Transition
+* [KYLIN-3427] - Convert to HFile in Spark
+* [KYLIN-3434] - Support prepare statement in Kylin server side
+* [KYLIN-3441] - Merge cube segments in Spark
+* [KYLIN-3442] - Fact distinct columns in Spark
+* [KYLIN-3449] - Should allow deleting a segment in NEW status
+* [KYLIN-3452] - Optimize spark cubing memory footprint
+* [KYLIN-3453] - Improve cube size estimation for TOPN, COUNT DISTINCT
+* [KYLIN-3454] - Fix potential thread-safe problem in ResourceTool
+* [KYLIN-3457] - Distribute by multiple columns if not set shard-by column
+* [KYLIN-3463] - Improve optimize job by avoiding creating empty output files on HDFS
+* [KYLIN-3464] - Less user confirmation
+* [KYLIN-3470] - Add cache for execute and execute_output to speed up list job api
+* [KYLIN-3471] - Merge dictionary and statistics on Yarn
+* [KYLIN-3472] - TopN merge in Spark engine performance tunning
+* [KYLIN-3475] - Make calcite case handling and quoting method more configurable.
+* [KYLIN-3478] - Enhance backwards compatibility
+* [KYLIN-3479] - Model can save when kafka partition date column not select
+* [KYLIN-3480] - Change the conformance of calcite from default to lenient
+* [KYLIN-3481] - Kylin Jdbc: Shaded dependencies should not be transitive
+* [KYLIN-3485] - Make unloading table more flexible
+* [KYLIN-3489] - Improve the efficiency of enumerating dictionary values
+* [KYLIN-3490] - For single column queries, only dictionaries are enough
+* [KYLIN-3491] - Improve the cube building process when using global dictionary
+* [KYLIN-3503] - Missing java.util.logging.config.file when starting kylin instance
+* [KYLIN-3507] - Query NPE when project is not found
+* [KYLIN-3509] - Allocate more memory for "Merge dictionary on yarn" step
+* [KYLIN-3510] - Correct sqoopHome at 'createSqoopToFlatHiveStep'
+* [KYLIN-3521] - Enable Cube Planner by default
+* [KYLIN-3539] - Hybrid segment overlap not cover some case
+* [KYLIN-3317] - Replace UUID.randomUUID with deterministic PRNG
+* [KYLIN-3436] - Refactor code related to loading hive/stream table
+
+__Bug fix__
+* [KYLIN-2522] - Compilation fails with Java 8 when upgrading to hbase 1.2.5
+* [KYLIN-2662] - NegativeArraySizeException in "Extract Fact Table Distinct Columns"
+* [KYLIN-2933] - Fix compilation against the Kafka 1.0.0 release
+* [KYLIN-3025] - kylin odbc error : {fn CONVERT} for bigint type in tableau 10.4
+* [KYLIN-3255] - Cannot save cube
+* [KYLIN-3258] - No check for duplicate cube name when creating a hybrid cube
+* [KYLIN-3379] - timestampadd bug fix and add test
+* [KYLIN-3382] - YARN job link wasn't displayed when job is running
+* [KYLIN-3385] - Error when have sum(1) measure
+* [KYLIN-3390] - QueryInterceptorUtil.queryInterceptors is not thread safe
+* [KYLIN-3391] - BadQueryDetector only detect first query
+* [KYLIN-3399] - Leaked lookup table in DictionaryGeneratorCLI#processSegment
+* [KYLIN-3403] - Querying sample cube with filter "KYLIN_CAL_DT.WEEK_BEG_DT >= CAST('2001-09-09' AS DATE)" returns unexpected empty result set
+* [KYLIN-3428] - java.lang.OutOfMemoryError: Requested array size exceeds VM limit
+* [KYLIN-3438] - mapreduce.job.queuename does not work at 'Convert Cuboid Data to HFile' Step
+* [KYLIN-3446] - Convert to HFile in spark reports ZK connection refused
+* [KYLIN-3451] - Cloned cube doesn't have Mandatory Cuboids copied
+* [KYLIN-3456] - Cube level's snapshot config does not work
+* [KYLIN-3458] - Enabling config kylin.job.retry will cause log info incomplete
+* [KYLIN-3461] - "metastore.sh refresh-cube-signature" not updating cube signature as expected
+* [KYLIN-3462] - "dfs.replication=2" and compression not work in Spark cube engine
+* [KYLIN-3476] - Fix TupleExpression verification when parsing sql
+* [KYLIN-3477] - Spark job size not available when deployMode is cluster
+* [KYLIN-3482] - Unclosed SetAndUnsetThreadLocalConfig in SparkCubingByLayer
+* [KYLIN-3483] - Imprecise comparison between double and integer division
+* [KYLIN-3492] - Wrong constant value in KylinConfigBase.getDefaultVarcharPrecision
+* [KYLIN-3500] - kylin 2.4 use jdbc datasource :Unknown column 'A.A.CRT_DATE' in 'where clause'
+* [KYLIN-3505] - DataType.getType wrong usage of cache
+* [KYLIN-3516] - Job status not updated after job discarded
+* [KYLIN-3517] - Couldn't update coprocessor on HBase 2.0
+* [KYLIN-3518] - Coprocessor reports NPE when execute a query on HBase 2.0
+* [KYLIN-3522] - PrepareStatement cache issue
+* [KYLIN-3525] - kylin.source.hive.keep-flat-table=true will delete data
+* [KYLIN-3529] - Prompt not friendly
+* [KYLIN-3533] - Can not save hybrid
+* [KYLIN-3534] - Failed at update cube info step
+* [KYLIN-3535] - "kylin-port-replace-util.sh" changed port but not uncomment it
+* [KYLIN-3536] - PrepareStatement cache issue when there are new segments built
+* [KYLIN-3538] - Automatic cube enabled functionality is not merged into 2.4.0
+* [KYLIN-3547] - DimensionRangeInfo: Unsupported data type boolean
+* [KYLIN-3550] - "kylin.source.hive.flat-table-field-delimiter" has extra "\"
+* [KYLIN-3551] - Spark job failed with "FileNotFoundException"
+* [KYLIN-3553] - Upgrade Tomcat to 7.0.90.
+* [KYLIN-3554] - Spark job failed but Yarn shows SUCCEED, causing Kylin move to next step
+* [KYLIN-3557] - PreparedStatement should be closed in JDBCResourceDAO#checkTableExists
 
 ## v2.4.1 - 2018-09-09
 _Tag:_ [kylin-2.4.1](https://github.com/apache/kylin/tree/kylin-2.4.1)
-This is a bug fix release after 2.4.0, with 22 bug fixes and enhancement. Check [How to upgrade](/docs23/howto/howto_upgrade.html).
+This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. Check [How to upgrade](/docs/howto/howto_upgrade.html).
 
 __Improvement__
 * [KYLIN-3421] - Improve job scheduler fetch performance
@@ -28,7 +133,7 @@ __Improvement__
 * [KYLIN-3503] - Missing java.util.logging.config.file when starting kylin instance
 * [KYLIN-3507] - Query NPE when project is not found
 
-__Bug__
+__Bug fix__
 * [KYLIN-2662] - NegativeArraySizeException in "Extract Fact Table Distinct Columns"
 * [KYLIN-3025] - kylin odbc error : {fn CONVERT} for bigint type in tableau 10.4
 * [KYLIN-3255] - Cannot save cube
diff --git a/website/_docs/tutorial/hybrid.cn.md b/website/_docs/tutorial/hybrid.cn.md
index a93b9c6..b812bb8 100644
--- a/website/_docs/tutorial/hybrid.cn.md
+++ b/website/_docs/tutorial/hybrid.cn.md
@@ -4,10 +4,10 @@ title:  Hybrid 模型
 categories: 教程
 permalink: /cn/docs/tutorial/hybrid.html
 version: v1.2
-since: v0.7.1
+since: v2.5.0
 ---
 
-本教材将会指导您创建一个 Hybrid 模型。 
+本教程将会指导您创建一个 Hybrid 模型。关于 Hybrid 的概念,请参考[这篇博客](http://kylin.apache.org/blog/2015/09/25/hybrid-model/)。
 
 ### I. 创建 Hybrid 模型
 一个 Hybrid 模型可以包含多个 cube。
@@ -39,9 +39,9 @@ since: v0.7.1
 2. 点击 `Yes` 将 Hybrid 模型删除。 
 
 ### IV. 运行查询
-Hybrid 模型创建成功后,您可以直接进行查询。 
+Hybrid 模型创建成功后,您可以直接进行查询。因为 Hybrid 比 Cube 有更高的优先级,能够命中 cube 的查询会优先由 Hybrid 接管,再转交给其中的 cube 执行。
 
-点击顶部的 `Insight`,然后输入您的 sql 语句。
+点击顶部的 `Insight`,然后输入您的 SQL 语句。
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
-    
-其他事宜请参考[这篇博客](http://kylin.apache.org/blog/2015/09/25/hybrid-model/)。
\ No newline at end of file
+
+*请注意:Hybrid 模型不适合 "bitmap"(精确)类型 count distinct 度量的跨 cube 二次合并,请务必在查询的 GROUP BY 中带上分区日期维度。*
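+
+下面是一个简单的查询示意;其中 `KYLIN_SALES` 表及其分区列 `PART_DT` 取自示例 cube,仅作演示:
+
+{% highlight Groff markup %}
+-- 按分区日期分组后,每个 cube 只回答自己日期范围内的数据,
+-- 从而避免精确(bitmap)count distinct 值的跨 cube 合并。
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID) AS SELLER_CNT
+FROM KYLIN_SALES
+WHERE PART_DT >= CAST('2012-01-01' AS DATE)
+  AND PART_DT < CAST('2012-03-01' AS DATE)
+GROUP BY PART_DT
+{% endhighlight %}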
\ No newline at end of file
diff --git a/website/_docs/tutorial/hybrid.md b/website/_docs/tutorial/hybrid.md
index d81c196..fff16d2 100644
--- a/website/_docs/tutorial/hybrid.md
+++ b/website/_docs/tutorial/hybrid.md
@@ -3,45 +3,44 @@ layout: docs
 title: Hybrid Model
 categories: tutorial
 permalink: /docs/tutorial/hybrid.html
-since: v0.7.1
+since: v2.5.0
 ---
 
-This tutorial will guide you to create a Hybrid. 
+This tutorial will guide you through creating a hybrid model. For the concept of the hybrid model, please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/).
 
-### I. Create Hybrid Model
-One Hybrid model can be referenced by multiple cubes.
+### I. Create a hybrid model
+A hybrid model can contain multiple cubes.
 
 1. Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Hybrid`.
 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/1 +hybrid.png)
 
-2. Enter a name for the Hybrid, then choose the model including cubes that you want to query, and then check the box before cube name, click > button to add cube(s) to hybrid.
+2. Enter a name for the hybrid, select the data model, check the boxes of the cubes that you want to add, and then click the `>` button to add them to this hybrid.
 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/2 hybrid-name.png)
     
-*Note: If you want to change the model, you should remove all the cubes that you selected.* 
+*Note: if you want to change the data model, you need to remove all the cubes that you have already selected.* 
 
-3. Click `Submit` and then select `Yes` to save the Hybrid model. After created, the Hybrid model will be shown in the left `Hybrids` list.
+3. Click `Submit` to save the hybrid model. Once created, it will be shown in the `Hybrids` list on the left.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 hybrid-created.png)
 
-### II. Update Hybrid Model
-1. Place the mouse over the Hybrid name, then click `Action` button, in the drop-down list select `Edit`. Then you can update Hybrid by adding(> button) or deleting(< button) cubes. 
+### II. Update a hybrid model
+1. Place the mouse over the hybrid name, then click the `Action` button and select `Edit` from the drop-down list. You can update the hybrid by adding (`>` button) or removing (`<` button) cubes. 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png)
 
-2. Click `Submit` and then select `Yes` to save the Hybrid model. 
+2. Click `Submit` to save the hybrid model. 
 
-Now you only can view Hybrid details by click `Edit` button.
+Currently, you can only view hybrid details by clicking the `Edit` button.
 
-### III. Drop Hybrid Model
+### III. Drop a hybrid model
 1. Place the mouse over the Hybrid name, then click `Action` button, in the drop-down list select `Drop`. Then the window will pop up. 
 
-2. Click `Yes` to delete the Hybrid model. 
+2. Click `Yes` to drop the hybrid model. 
 
 ### IV. Run Query
-After the Hybrid model is created, you can run a query directly. 
+After the hybrid model is created, you can run a query directly. As a hybrid has higher priority than its cubes, a query that can hit a cube will first be taken by the hybrid and then delegated to the cubes. 
 
-Click `Insight` in top bar, and then input sql statement of you needs.
+Click `Insight` in the top bar, and input a SQL statement to execute.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 sql-statement.png)
 
-
-Please refer to [this blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/) for other matters.
\ No newline at end of file
+*Please note: a hybrid model is not suitable for merging "bitmap" (precise) count distinct measures across cubes; make sure the partition date is a group-by field in your SQL query.*
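+
+Below is a minimal query sketch illustrating this; the `KYLIN_SALES` table and its `PART_DT` partition column are the sample cube's names, used here only for illustration:
+
+{% highlight Groff markup %}
+-- Grouping by the partition date lets each cube answer only its own
+-- date range, so no cross-cube merge of the precise (bitmap) count
+-- distinct value is needed.
+SELECT PART_DT, COUNT(DISTINCT SELLER_ID) AS SELLER_CNT
+FROM KYLIN_SALES
+WHERE PART_DT >= CAST('2012-01-01' AS DATE)
+  AND PART_DT < CAST('2012-03-01' AS DATE)
+GROUP BY PART_DT
+{% endhighlight %}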
\ No newline at end of file
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.cn.md b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
index a3816f4..380bff7 100644
--- a/website/_docs/tutorial/setup_jdbc_datasource.cn.md
+++ b/website/_docs/tutorial/setup_jdbc_datasource.cn.md
@@ -34,7 +34,7 @@ kylin.source.jdbc.driver=com.mysql.jdbc.Driver
 kylin.source.jdbc.dialect=mysql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -47,7 +47,7 @@ kylin.source.jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
 kylin.source.jdbc.dialect=mssql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -60,7 +60,7 @@ kylin.source.jdbc.driver=com.amazon.redshift.jdbc.Driver
 kylin.source.jdbc.dialect=default
 kylin.source.jdbc.user=user
 kylin.source.jdbc.pass=pass
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.default=8
 kylin.source.jdbc.filed-delimiter=|
 ```
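+
+注意 `kylin.source.jdbc.sqoop-home` 应指向 Sqoop 的安装目录本身,而不是其下的 `bin/` 子目录。一个快速的检查示意(假设使用上文的 HDP 路径,请按您的发行版调整):
+
+```
+# 假设 Kylin 会在 sqoop-home 下解析 bin/sqoop(仅为示意),
+# 该命令应能列出 sqoop 可执行文件:
+ls /usr/hdp/current/sqoop-client/bin/sqoop
+```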
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.md b/website/_docs/tutorial/setup_jdbc_datasource.md
index 8b0f38b..c845296 100644
--- a/website/_docs/tutorial/setup_jdbc_datasource.md
+++ b/website/_docs/tutorial/setup_jdbc_datasource.md
@@ -47,7 +47,7 @@ kylin.source.jdbc.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
 kylin.source.jdbc.dialect=mssql
 kylin.source.jdbc.user=your_username
 kylin.source.jdbc.pass=your_password
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.jdbc.filed-delimiter=|
 ```
 
@@ -60,7 +60,7 @@ kylin.source.jdbc.driver=com.amazon.redshift.jdbc.Driver
 kylin.source.jdbc.dialect=default
 kylin.source.jdbc.user=user
 kylin.source.jdbc.pass=pass
-kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client/bin
+kylin.source.jdbc.sqoop-home=/usr/hdp/current/sqoop-client
 kylin.source.default=8
 kylin.source.jdbc.filed-delimiter=|
 ```
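+
+Note that `kylin.source.jdbc.sqoop-home` should point at the Sqoop installation directory itself, not its `bin/` subfolder. A quick sanity-check sketch (assuming the HDP layout above; adjust the path for your distribution):
+
+```
+# Assuming Kylin resolves the sqoop executable under <sqoop-home>/bin
+# (an assumption for illustration), this should list the binary:
+ls /usr/hdp/current/sqoop-client/bin/sqoop
+```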
diff --git a/website/_docs16/gettingstarted/best_practices.md b/website/_docs16/gettingstarted/best_practices.md
deleted file mode 100644
index 5c3a12d..0000000
--- a/website/_docs16/gettingstarted/best_practices.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-layout: docs16
-title:  "Community Best Practices"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/best_practices.html
-since: v1.3.x
----
-
-List of articles about Kylin best practices contributed by community. Some of them are from Chinese community. Many thanks!
-
-* [Apache Kylin在百度地图的实践](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
-
-* [Apache Kylin 大数据时代的OLAP利器](http://www.bitstech.net/2016/01/04/kylin-olap/)(网易案例)
-
-* [Apache Kylin在云海的实践](http://www.csdn.net/article/2015-11-27/2826343)(京东案例)
-
-* [Kylin, Mondrian, Saiku系统的整合](http://tech.youzan.com/kylin-mondrian-saiku/)(有赞案例)
-
-* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
-
-* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
-
-* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
-
-* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file
diff --git a/website/_docs16/gettingstarted/concepts.md b/website/_docs16/gettingstarted/concepts.md
deleted file mode 100644
index cf5ce07..0000000
--- a/website/_docs16/gettingstarted/concepts.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: docs16
-title:  "Technical Concepts"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/concepts.html
-since: v1.2
----
- 
-Here are some basic technical concepts used in Apache Kylin, please check them for your reference.
-For terminology in domain, please refer to: [Terminology](terminology.html)
-
-## CUBE
-* __Table__ - This is definition of hive tables as source of cubes, which must be synced before building cubes.
-![](/images/docs/concepts/DataSource.png)
-
-* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines fact/lookup tables and filter condition.
-![](/images/docs/concepts/DataModel.png)
-
-* __Cube Descriptor__ - This describes definition and settings for a cube instance, defining which data model to use, what dimensions and measures to have, how to partition to segments and how to handle auto-merge etc.
-![](/images/docs/concepts/CubeDesc.png)
-
-* __Cube Instance__ - This is instance of cube, built from one cube descriptor, and consist of one or more cube segments according partition settings.
-![](/images/docs/concepts/CubeInstance.png)
-
-* __Partition__ - User can define a DATE/STRING column as partition column on cube descriptor, to separate one cube into several segments with different date periods.
-![](/images/docs/concepts/Partition.png)
-
-* __Cube Segment__ - This is actual carrier of cube data, and maps to a HTable in HBase. One building job creates one new segment for the cube instance. Once data change on specified data period, we can refresh related segments to avoid rebuilding whole cube.
-![](/images/docs/concepts/CubeSegment.png)
-
-* __Aggregation Group__ - Each aggregation group is subset of dimensions, and build cuboid with combinations inside. It aims at pruning for optimization.
-![](/images/docs/concepts/AggregationGroup.png)
-
-## DIMENSION & MEASURE
-* __Mandotary__ - This dimension type is used for cuboid pruning, if a dimension is specified as “mandatory”, then those combinations without such dimension are pruned.
-* __Hierarchy__ - This dimension type is used for cuboid pruning, if dimension A,B,C forms a “hierarchy” relation, then only combinations with A, AB or ABC shall be remained. 
-* __Derived__ - On lookup tables, some dimensions could be generated from its PK, so there's specific mapping between them and FK from fact table. So those dimensions are DERIVED and don't participate in cuboid generation.
-![](/images/docs/concepts/Dimension.png)
-
-* __Count Distinct(HyperLogLog)__ - Immediate COUNT DISTINCT is hard to calculate, a approximate algorithm - [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) is introduced, and keep error rate in a lower level. 
-* __Count Distinct(Precise)__ - Precise COUNT DISTINCT will be pre-calculated basing on RoaringBitmap, currently only int or bigint are supported.
-* __Top N__ - For example, with this measure type, user can easily get specified numbers of top sellers/buyers etc. 
-![](/images/docs/concepts/Measure.png)
-
-## CUBE ACTIONS
-* __BUILD__ - Given an interval of partition column, this action is to build a new cube segment.
-* __REFRESH__ - This action will rebuilt cube segment in some partition period, which is used in case of source table increasing.
-* __MERGE__ - This action will merge multiple continuous cube segments into single one. This can be automated with auto-merge settings in cube descriptor.
-* __PURGE__ - Clear segments under a cube instance. This will only update metadata, and won't delete cube data from HBase.
-![](/images/docs/concepts/CubeAction.png)
-
-## JOB STATUS
-* __NEW__ - This denotes one job has been just created.
-* __PENDING__ - This denotes one job is paused by job scheduler and waiting for resources.
-* __RUNNING__ - This denotes one job is running in progress.
-* __FINISHED__ - This denotes one job is successfully finished.
-* __ERROR__ - This denotes one job is aborted with errors.
-* __DISCARDED__ - This denotes one job is cancelled by end users.
-![](/images/docs/concepts/Job.png)
-
-## JOB ACTION
-* __RESUME__ - Once a job in ERROR status, this action will try to restore it from latest successful point.
-* __DISCARD__ - No matter status of a job is, user can end it and release resources with DISCARD action.
-![](/images/docs/concepts/JobAction.png)
diff --git a/website/_docs16/gettingstarted/events.md b/website/_docs16/gettingstarted/events.md
deleted file mode 100644
index 277d580..0000000
--- a/website/_docs16/gettingstarted/events.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-layout: docs16
-title:  "Events and Conferences"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/events.html
----
-
-__Conferences__
-
-* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
-* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
-* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015  [...]
-* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
-* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
-* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
-* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
-* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
-* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
-* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
-
-__Meetup__
-
-* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
-
diff --git a/website/_docs16/gettingstarted/faq.md b/website/_docs16/gettingstarted/faq.md
deleted file mode 100644
index 0ecb44e..0000000
--- a/website/_docs16/gettingstarted/faq.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-layout: docs16
-title:  "FAQ"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/faq.html
-since: v0.6.x
----
-
-#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars in local, but Kylin reports error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat"
-
-  * Kylin need many dependent jars (hadoop/hive/hcat/hbase/kafka) on classpath to work, but Kylin doesn't ship them. It will seek these jars from your local machine by running commands like `hbase classpath`, `hive -e set` etc. The founded jars' path will be appended to the environment variable *HBASE_CLASSPATH* (Kylin uses `hbase` shell command to start up, which will read this). But in some Hadoop distribution (like EMR 5.0), the `hbase` shell doesn't keep the origin `HBASE_CLASSPATH`  [...]
-
-  * To fix this, find the hbase shell script (in hbase/bin folder), and search *HBASE_CLASSPATH*, check whether it overwrite the value like :
-
-  {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
-  {% endhighlight %}
-
-  * If true, change it to keep the origin value like:
-
-   {% highlight Groff markup %}
-  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
-  {% endhighlight %}
-
-#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
-
-  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the de [...]
-
-#### 3. Build cube failed due to "error check status"
-
-  * Check if `kylin.log` contains *yarn.resourcemanager.webapp.address:http://0.0.0.0:8088* and *java.net.ConnectException: Connection refused*
-  * If yes, then the problem is the address of resource manager was not available in yarn-site.xml
-  * A workaround is update `kylin.properties`, set `kylin.job.yarn.app.rest.check.status.url=http://YOUR_RM_NODE:8088/ws/v1/cluster/apps/${job_id}?anonymous=true`
-
-#### 4. HBase cannot get master address from ZooKeeper on Hortonworks Sandbox
-   
-  * By default hortonworks disables hbase, you'll have to start hbase in ambari homepage first.
-
-#### 5. Map Reduce Job information cannot display on Hortonworks Sandbox
-   
-  * Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
-
-#### 6. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
-
-  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
-
-  {% highlight Groff markup %}
-  I was able to deploy Kylin with following option in POM.
-  <hadoop2.version>2.5.0</hadoop2.version>
-  <yarn.version>2.5.0</yarn.version>
-  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
-  <zookeeper.version>3.4.5</zookeeper.version>
-  <hive.version>0.13.1</hive.version>
-  My Cluster is running on Cloudera Distribution CDH 5.2.0.
-  {% endhighlight %}
-
-
-#### 7. SUM(field) returns a negtive result while all the numbers in this field are > 0
-  * If a column is declared as integer in Hive, the SQL engine (calcite) will use column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negtive value be returned; The workround is, alter that column's type to BIGINT in hive, and then sync the table schema to Kylin (the cube doesn't need rebuild); Keep in mind that, always declare as BIGINT in hive for an integer column which  [...]
-
-#### 8. Why Kylin need extract the distinct columns from Fact Table before building cube?
-  * Kylin uses dictionary to encode the values in each column, this greatly reduce the cube's storage size. To build the dictionary, Kylin need fetch the distinct values for each column.
-
-#### 9. Why Kylin calculate the HIVE table cardinality?
-  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer to build and the slower to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try reduce high cardinality by categorize values or derive features.
-
-#### 10. How to add new user or change the default password?
-  * Kylin web's security is implemented with Spring security framework, where the kylinSecurity.xml is the main configuration file:
-
-   {% highlight Groff markup %}
-   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-   {% endhighlight %}
-
-  * The password hash for pre-defined test users can be found in the profile "sandbox,testing" part; To change the default password, you need generate a new hash and then update it here, please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-  * When you deploy Kylin for more users, switch to LDAP authentication is recommended.
-
-#### 11. Using sub-query for un-supported SQL
-
-{% highlight Groff markup %}
-Original SQL:
-select fact.slr_sgmt,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from ih_daily_fact fact
-inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-group by fact.slr_sgmt
-{% endhighlight %}
-
-{% highlight Groff markup %}
-Using sub-query
-select a.slr_sgmt,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from (
-    select fact.slr_sgmt as slr_sgmt,
-    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
-    sum(gmv) as gmv36,
-    sum(gmv) as gmv35
-    from ih_daily_fact fact
-    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
-) a
-group by a.slr_sgmt
-{% endhighlight %}
-
-#### 12. Build kylin meet NPM errors (中国大陆地区用户请特别注意此问题)
-
-  * Please add proxy for your NPM:  
-  `npm config set proxy http://YOUR_PROXY_IP`
-
-  * Please update your local NPM repository to using any mirror of npmjs.org, like Taobao NPM (请更新您本地的NPM仓库以使用国内的NPM镜像,例如淘宝NPM镜像) :  
-  [http://npm.taobao.org](http://npm.taobao.org)
-
-#### 13. Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
-  * User may get this error when first time run hbase client, please check the error trace to see whether there is an error saying couldn't access a folder like "/hadoop/hbase/local/jars"; If that folder doesn't exist, create it.
-
-
-
-
diff --git a/website/_docs16/gettingstarted/terminology.md b/website/_docs16/gettingstarted/terminology.md
deleted file mode 100644
index 7c3a108..0000000
--- a/website/_docs16/gettingstarted/terminology.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: docs16
-title:  "Terminology"
-categories: gettingstarted
-permalink: /docs16/gettingstarted/terminology.html
-since: v0.5.x
----
- 
-
-Here are some domain terms we are using in Apache Kylin, please check them for your reference.   
-They are basic knowledge of Apache Kylin which also will help to well understand such concept, term, knowledge, theory and others about Data Warehouse, Business Intelligence for analycits. 
-
-* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
-* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
-* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
-* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
-* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
-* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
-* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
-* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
-* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
-* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
-
-
-
diff --git a/website/_docs16/howto/howto_backup_metadata.md b/website/_docs16/howto/howto_backup_metadata.md
deleted file mode 100644
index 0d295aa..0000000
--- a/website/_docs16/howto/howto_backup_metadata.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-layout: docs16
-title:  Backup Metadata
-categories: howto
-permalink: /docs16/howto/howto_backup_metadata.html
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index description and instances, jobs, tables and dictionaries) as a hierarchy file system. However, Kylin uses hbase to store it, rather than normal file system. If you check your kylin configuration file(kylin.properties) you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved as a htable called `kylin_metadata`. You can scan the htable in hbase shell to check it out.
-
-## Backup Metadata Store with binary package
-
-Sometimes you need to backup the Kylin's Metadata Store from hbase to your disk file system.
-In such cases, assuming you're on the hadoop CLI(or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run :
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to your local folder a folder under KYLIN_HOME/metadata_backps, the folder is named after current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
-
-## Restore Metadata Store with binary package
-
-In case you find your metadata store messed up, and you want to restore to a previous backup:
-
-Firstly, reset the metadata store (this will clean everything of the Kylin metadata store in hbase, make sure to backup):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Backup/restore metadata in development env (available since 0.7.3)
-
-When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases at dev machine. It would be troublesome if you always have to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally at your dev machine. Follow the Usage information and run it in your IDE.
-
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
-As time goes on, some resources like dictionary, table snapshots became useless (as the cube segment be dropped or merged), but they still take space there; You can run command to find and cleanup them from metadata store:
-
-Firstly, run a check, this is safe as it will not change anything:
-{% highlight Groff markup %}
-./bin/metastore.sh clean
-{% endhighlight %}
-
-The resources that will be dropped will be listed;
-
-Next, add the "--delete true" parameter to cleanup those resources; before this, make sure you have made a backup of the metadata store;
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_build_cube_with_restapi.md b/website/_docs16/howto/howto_build_cube_with_restapi.md
deleted file mode 100644
index 0ccd486..0000000
--- a/website/_docs16/howto/howto_build_cube_with_restapi.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs16
-title:  Build Cube with RESTful API
-categories: howto
-permalink: /docs16/howto/howto_build_cube_with_restapi.html
----
-
-### 1.	Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add `Authorization` header to first request for authentication
-*   Or you can do a specific request by `POST http://localhost:7070/kylin/api/user/authentication`
-*   Once authenticated, client can go subsequent requests with cookies.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-    
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2.	Get details of cube. 
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   Client can find cube segment date ranges in returned cube detail.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-### 3.	Then submit a build job of the cube. 
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For put request body detail please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
-    *   `startTime` and `endTime` should be utc timestamp.
-    *   `buildType` can be `BUILD` ,`MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment. `MERGE` is for merging multiple existing segments into one bigger segment.
-*   This method will return a new created job instance,  whose uuid is the unique id of job to track job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
-
-### 4.	Track job status. 
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   Returned `job_status` represents current status of job.
-
-### 5.	If the job got errors, you can resume it. 
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
diff --git a/website/_docs16/howto/howto_cleanup_storage.md b/website/_docs16/howto/howto_cleanup_storage.md
deleted file mode 100644
index 233d32d..0000000
--- a/website/_docs16/howto/howto_cleanup_storage.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-layout: docs16
-title:  Cleanup Storage (HDFS & HBase)
-categories: howto
-permalink: /docs16/howto/howto_cleanup_storage.html
----
-
-Kylin will generate intermediate files in HDFS during the cube building; Besides, when purge/drop/merge cubes, some HBase tables may be left in HBase and will no longer be queried; Although Kylin has started to do some 
-automated garbage collection, it might not cover all cases; You can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleanup, this will not remove anything:
-{% highlight Groff markup %}
-export KYLIN_HOME=/path/to/kylin_home
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
-{% endhighlight %}
-Here please replace (version) with the specific Kylin jar version in your installation;
-2. You can pickup 1 or 2 resources to check whether they're no longer be referred; Then add the "--delete true" option to start the cleanup:
-{% highlight Groff markup %}
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
-{% endhighlight %}
-On finish, the intermediate HDFS location and HTables should be dropped;
diff --git a/website/_docs16/howto/howto_jdbc.md b/website/_docs16/howto/howto_jdbc.md
deleted file mode 100644
index 9990df6..0000000
--- a/website/_docs16/howto/howto_jdbc.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-layout: docs16
-title:  Use JDBC Driver
-categories: howto
-permalink: /docs16/howto/howto_jdbc.html
----
-
-### Authentication
-
-###### Build on Apache Kylin authentication restful service. Supported parameters:
-* user : username 
-* password : password
-* ssl: true/false. Default be false; If true, all the services call will use https.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
-* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
-* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported prepared statement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-Kylin jdbc driver supports metadata list methods:
-List catalog, schema, table and column with sql pattern filters(such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_ldap_and_sso.md b/website/_docs16/howto/howto_ldap_and_sso.md
deleted file mode 100644
index 0a377b1..0000000
--- a/website/_docs16/howto/howto_ldap_and_sso.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-layout: docs16
-title: Enable Security with LDAP and SSO
-categories: howto
-permalink: /docs16/howto/howto_ldap_and_sso.html
----
-
-## Enable LDAP authentication
-
-Kylin supports LDAP authentication for enterprise or production deployment; This is implemented with Spring Security framework; Before enable LDAP, please contact your LDAP administrator to get necessary information, like LDAP server URL, username/password, search patterns;
-
-#### Configure LDAP server info
-
-Firstly, provide LDAP URL, and username/password if the LDAP server is secured; The password in kylin.properties need be encrypted; You can run the following command to get the encrypted value (please note, the password's length should be less than 16 characters, see [KYLIN-2416](https://issues.apache.org/jira/browse/KYLIN-2416)):
-
-```
-cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
-java -classpath kylin-server-base-1.6.0.jar:spring-beans-3.2.17.RELEASE.jar:spring-core-3.2.17.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
-```
-
-Config them in the conf/kylin.properties:
-
-```
-ldap.server=ldap://<your_ldap_host>:<port>
-ldap.username=<your_user_name>
-ldap.password=<your_password_encrypted>
-```
-
-Secondly, provide the user search patterns, this is by LDAP design, here is just a sample:
-
-```
-ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
-ldap.user.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
-ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
-```
-
-If you have service accounts (e.g, for system integration) which also need be authenticated, configure them in ldap.service.*; Otherwise, leave them be empty;
-
-### Configure the administrator group and default role
-
-To map an LDAP group to the admin group in Kylin, need set the "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, here need set it as:
-
-```
-acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
-acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
-```
-
-The "acl.defaultRole" is a list of the default roles that grant to everyone, keep it as-is.
-
-#### Enable LDAP
-
-Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
-
-## Enable SSO authentication
-
-From v1.5, Kylin provides SSO with SAML. The implementation is based on Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understand.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it, as SSO server may only do authentication, Kylin need search LDAP to get the user's detail information.
-
-### Generate IDP metadata xml
-Contact your IDP (ID provider), asking to generate the SSO metadata file; Usually you need provide three piece of info:
-
-  1. Partner entity ID, which is an unique ID of your app, e.g,: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion be posted, it need be: https://host-name/kylin/saml/SSO
-  3. Public certificate of Kylin server, the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin need send encrypted message (signed with Kylin's private key) to SSO server, a keystore (JKS) need be provided. There are a couple ways to generate the keystore, below is a sample.
-
-Assume kylin.crt is the public certificate file, kylin.key is the private certificate file; firstly create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-It will put the keys to "samlKeystore.jks" with alias "kylin";
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher level crypto keys, you may need to download Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security .
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore file need be deployed in Kylin web app's classpath in $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes 
-	
-  1. Name the IDP file to sso_metadata.xml and then copy to Kylin's classpath;
-  2. Name the keystore as "samlKeystore.jks" and then copy to Kylin's classpath;
-  3. If you use another alias or password, remember to update that kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note, Kylin assume in the SAML message there is a "email" attribute representing the login user, and the name before @ will be used to search LDAP. 
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
-
diff --git a/website/_docs16/howto/howto_optimize_build.md b/website/_docs16/howto/howto_optimize_build.md
deleted file mode 100644
index 627ddcc..0000000
--- a/website/_docs16/howto/howto_optimize_build.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-layout: docs16
-title:  Optimize Cube Build
-categories: howto
-permalink: /docs16/howto/howto_optimize_build.html
----
-
-Kylin decomposes a Cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types job. When you have many Cubes to build daily, then you definitely want to speed up this process. Here are some practices that you probably want to know, and they are organized in the same order as the steps sequence.
-
-
-
-## Create Intermediate Flat Hive Table
-
-This step extracts data from source Hive tables (with all tables joined) and inserts them into an intermediate flat table. If Cube is partitioned, Kylin will add a time condition so that only the data in the range would be fetched. You can check the related Hive command in the log of this step, e.g: 
-
-```
-hive -e "USE default;
-DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
-
-CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
-(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
-STORED AS SEQUENCEFILE
-LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
-
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
-AIRLINE.FLIGHTDATE
-,AIRLINE.YEAR
-,AIRLINE.QUARTER
-,...
-,AIRLINE.ARRDELAYMINUTES
-FROM AIRLINE.AIRLINE as AIRLINE
-WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
-"
-
-```
-
-Kylin applies the configuration in conf/kylin\_hive\_conf.xml while Hive commands are running, for instance, use less replication and enable Hive's mapper side join. If it is needed, you can add other configurations which are good for your cluster.
-
-If Cube's partition column ("FLIGHTDATE" in this case) is the same as Hive table's partition column, then filtering on it will let Hive smartly skip those non-matched partitions. So it is highly recommended to use Hive table's paritition column (if it is a date column) as the Cube's partition column. This is almost required for those very large tables, or Hive has to scan all files each time in this step, costing terribly long time.
-
-If your Hive enables file merge, you can disable them in "conf/kylin\_hive\_conf.xml" as Kylin has its own way to merge files (in the next step): 
-
-    <property>
-        <name>hive.merge.mapfiles</name>
-        <value>false</value>
-        <description>Disable Hive's auto merge</description>
-    </property>
-
-
-## Redistribute intermediate table
-
-After the previous step, Hive generates the data files in HDFS folder: while some files are large, some are small or even empty. The imbalanced file distribution would lead subsequent MR jobs to imbalance as well: some mappers finish quickly yet some others are very slow. To balance them, Kylin adds this step to "redistribute" the data and here is a sample output:
-
-```
-total input rows = 159869711
-expected input rows per mapper = 1000000
-num reducers for RedistributeFlatHiveTableStep = 160
-
-```
-
-
-Redistribute table, cmd: 
-
-```
-hive -e "USE default;
-SET dfs.replication=2;
-SET hive.exec.compress.output=true;
-SET hive.auto.convert.join.noconditionaltask=true;
-SET hive.auto.convert.join.noconditionaltask.size=100000000;
-SET mapreduce.job.split.metainfo.maxsize=-1;
-set mapreduce.job.reduces=160;
-set hive.merge.mapredfiles=false;
-
-INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
-"
-
-```
-
-
-
-Firstly, Kylin gets the row count of this intermediate table; then based on the number of row count, it would get amount of files needed to get data redistributed. By default, Kylin allocates one file per 1 million rows. In this sample, there are 160 million rows and exist 160 reducers, and each reducer would write 1 file. In following MR step over this table, Hadoop will start the same number Mappers as the files to process (usually 1 million's data size is small than a HDFS block size) [...]
-
-`kylin.job.mapreduce.mapper.input.rows=500000`
-
-
-Secondly, Kylin runs a *"INSERT OVERWIRTE TABLE .... DISTRIBUTE BY "* HiveQL to distribute the rows among a specified number of reducers.
-
-In most cases, Kylin asks Hive to randomly distributes the rows among reducers, then get files very closed in size. The distribute clause is "DISTRIBUTE BY RAND()".
-
-If your Cube has specified a "shard by" dimension (in Cube's "Advanced setting" page), which is a high cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute data by that column's value. Then for the rows that have the same value as this column has, they will go to the same file. This is much better than "by random",  because the data will be not only redistributed but also pre-categorized without additional cost, thus benefiting the subsequent Cube build process. Unde [...]
-
-**Please note:** 1) The "shard by" column should be a high cardinality dimension column, and it appears in many cuboids (not just appears in seldom cuboids). Utilize it to distribute properly can get equidistribution in every time range; otherwise it will cause data incline, which will reduce the building speed. Typical good cases are: "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER", so forth, whose cardinality is higher than one thousand (should be much more than the reducer numbers [...]
-
-
-
-## Extract Fact Table Distinct Columns
-
-In this step Kylin runs a MR job to fetch distinct values for the dimensions, which are using dictionary encoding. 
-
-Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each Cuboid. If you find that mappers work incredible slowly, it usually indicates that the Cube design is too complex, please check [optimize cube design](/docs16/howto/howto_optimize_cubes.html) to make the Cube thinner. If the reducers get OutOfMemory error, it indicates that the Cuboid combination does explode or the default YARN memory allocation cannot meet deman [...]
-
-You can reduce the sampling percentage (kylin.job.cubing.inmem.sampling.percen in kylin.properties) to get this step accelerated, but this may not help much and impact on the accuracy of Cube statistics, thus we don't recommend.  
-
-
-
-## Build Dimension Dictionary
-
-With the distinct values fetched in previous step, Kylin will build dictionaries in memory (in next version this will be moved to MR). Usually this step is fast, but if the value set is large, Kylin may report error like "Too high cardinality is not suitable for dictionary". For UHC column, please use other encoding method for the UHC column, such as "fixed_length", "integer" and so on.
-
-
-
-## Save Cuboid Statistics and Create HTable
-
-These two steps are lightweight and fast.
-
-
-
-## Build Base Cuboid 
-
-This step is building the base cuboid from the intermediate table, which is the first round MR of the "by-layer" cubing algorithm. The mapper number is equals to the reducer number of step 2; The reducer number is estimated with the cube statistics: by default use 1 reducer every 500MB output; If you observed the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g: `kylin.job.mapreduce.defaul [...]
-
-
-## Build N-Dimension Cuboid 
-
-These steps are the "by-layer" cubing process, each step uses the output of previous step as the input, and then cut off one dimension to aggregate to get one child cuboid. For example, from cuboid ABCD, cut off A get BCD, cut off B get ACD etc. 
-
-Some cuboids can be aggregated from more than one parent cuboid; in this case, Kylin will select the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) and ABD (id: 1101), so ABD will be used as its id is smaller than ABC's. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low cardinality dimensions at the tail position. This not only benefits the Cube build, but al [...]
-
-Usually the build is slow from the N-D layer to the (N/2)-D layer, because this is the cuboid explosion process: the N-D layer has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1)/2 cuboids, etc. (for N=10 that is 1, 10 and 45 cuboids respectively). After the (N/2)-D step, the build gets gradually faster.
-
-
-
-## Build Cube
-
-This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all cuboids, but it requests more memory than normal. The "conf/kylin\_job\_conf\_inmem.xml" is made for this step. By default it requests 3GB memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will use as much memory as possible to hold the data and gain better performance, e.g:
-
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>6144</value>
-        <description></description>
-    </property>
-    
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx5632m</value>
-        <description></description>
-    </property>
-
-
-Please note, Kylin automatically selects the best algorithm based on the data distribution (gathered from the Cube statistics). The steps of the algorithm that is not selected will be skipped. You don't need to select the algorithm explicitly.
-
-
-
-## Convert Cuboid Data to HFile
-
-This step starts an MR job to convert the Cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number from the Cube statistics, by default 1 region per 5GB. The more regions there are, the more reducers will be utilized. If you observe that the reducer number is small and performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
-
-```
-kylin.hbase.region.cut=2
-kylin.hbase.hfile.size.gb=1
-```
-
-If you're not sure what size a region should be, contact your HBase administrator. 
-
-
-## Load HFile to HBase Table
-
-This step uses the HBase API to load the HFiles into the region servers; it is lightweight and fast.
-
-
-
-## Update Cube Info
-
-After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
-
-
-
-## Cleanup
-
-Drop the intermediate table from Hive. This step doesn't block anything, as the segment was already marked ready in the previous step. If this step gets an error, no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](/docs16/howto/howto_cleanup_storage.html).
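-
-A sketch of triggering the cleanup manually (the class path may differ across Kylin versions; check the cleanup guide linked above first):
-
-```
-${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
-```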
-
-
-## Summary
-There are also many other methods to boost the performance. If you have practices to share, you are welcome to discuss them on [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
\ No newline at end of file
diff --git a/website/_docs16/howto/howto_optimize_cubes.md b/website/_docs16/howto/howto_optimize_cubes.md
deleted file mode 100644
index 17e7fb8..0000000
--- a/website/_docs16/howto/howto_optimize_cubes.md
+++ /dev/null
@@ -1,212 +0,0 @@
----
-layout: docs16
-title:  Optimize Cube Design
-categories: howto
-permalink: /docs16/howto/howto_optimize_cubes.html
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
-
-If we denote the hierarchy dimensions as H1, H2, H3, the typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case of scenario A, where the PK on the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
-
-For example, suppose we have a lookup table that we join with the fact table on "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB and DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA(the dimension representing FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value in DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA, and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
-At runtime, for queries like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB is expected to answer the query. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first, and we'll get an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime, and it is what is meant by "at the cost of extra runtime aggregation".
diff --git a/website/_docs16/howto/howto_update_coprocessor.md b/website/_docs16/howto/howto_update_coprocessor.md
deleted file mode 100644
index 1aa8b0e..0000000
--- a/website/_docs16/howto/howto_update_coprocessor.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs16
-title:  How to Update HBase Coprocessor
-categories: howto
-permalink: /docs16/howto/howto_update_coprocessor.html
----
-
-Kylin leverages the HBase coprocessor to optimize query performance. After a new version is released, the RPC protocol may change, so users need to redeploy the coprocessor to the HTables.
-
-There's a CLI tool to update HBase Coprocessor:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI $KYLIN_HOME/lib/kylin-coprocessor-*.jar all
-{% endhighlight %}
diff --git a/website/_docs16/howto/howto_upgrade.md b/website/_docs16/howto/howto_upgrade.md
deleted file mode 100644
index ed53116..0000000
--- a/website/_docs16/howto/howto_upgrade.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: docs16
-title:  Upgrade From Old Versions
-categories: howto
-permalink: /docs16/howto/howto_upgrade.html
-since: v1.5.1
----
-
-Running as a Hadoop client, Apache Kylin's metadata and Cube data are persisted in Hadoop (HBase and HDFS), so the upgrade is relatively easy and users don't need to worry about data loss. The upgrade can be performed in the following steps (a shell sketch of the common steps follows the list):
-
-* Download the new Apache Kylin binary package for your Hadoop version from Kylin's download page;
-* Uncompress the new Kylin package into a new folder, e.g. /usr/local/kylin/apache-kylin-1.6.0/ (directly overwriting the old instance is not recommended);
-* Copy the configuration files (`$KYLIN_HOME/conf/*`) from the old instance (e.g. /usr/local/kylin/apache-kylin-1.5.4/) to the new instance's `conf` folder if you have customized configurations; it is recommended to compare and merge, since new parameters might have been introduced. If you have modified the Tomcat configuration ($KYLIN_HOME/tomcat/conf/), remember to do the same.
-* Stop the current Kylin instance with `./bin/kylin.sh stop`;
-* Set the `KYLIN_HOME` env variable to the new instance's installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
-* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`; after it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
-* [Upgrade coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
-* Verify your SQL queries can be performed successfully.
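-
-A shell sketch of the common steps above (paths, versions and the package name are illustrative; adjust them to your environment):
-
-```
-cd /usr/local/kylin
-tar -zxvf apache-kylin-1.6.0-bin.tar.gz
-# compare and merge the conf manually instead of blindly overwriting
-cp -n apache-kylin-1.5.4/conf/* apache-kylin-1.6.0/conf/
-$KYLIN_HOME/bin/kylin.sh stop                          # stop the old instance
-export KYLIN_HOME=/usr/local/kylin/apache-kylin-1.6.0  # point to the new one
-$KYLIN_HOME/bin/kylin.sh start
-```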
-
-Below are versions specific guides:
-
-## Upgrade from v1.5.4 to v1.6.0
-Kylin v1.5.4 and v1.6.0 are compatible in metadata; please follow the common upgrade steps above.
-
-## Upgrade from v1.5.3 to v1.5.4
-Kylin v1.5.3 and v1.5.4 are compatible in metadata; please follow the common upgrade steps above.
-
-## Upgrade from 1.5.2 to v1.5.3
-Kylin v1.5.3 metadata is compatible with v1.5.2 and your cubes don't need to be rebuilt, but as usual, some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update;
-
-#### 2. Update conf/kylin_hive_conf.xml
-Since 1.5.3, Kylin no longer needs Hive to merge small files. For users who copied conf/ from a previous version, please remove the "merge" related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
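-
-The entries to remove look like the following (the values in your copy may differ):
-
-```
-<property>
-    <name>hive.merge.mapfiles</name>
-    <value>true</value>
-</property>
-```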
-
-
-## Upgrade from 1.5.1 to v1.5.2
-Kylin v1.5.2 metadata is compatible with v1.5.1 and your cubes don't need to be upgraded, but some actions need to be performed:
-
-#### 1. Update HBase coprocessor
-The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update;
-
-#### 2. Update conf/kylin.properties
-In v1.5.2 several properties are deprecated, and several new ones are added:
-
-Deprecated:
-
-* kylin.hbase.region.cut.small=5
-* kylin.hbase.region.cut.medium=10
-* kylin.hbase.region.cut.large=50
-
-New:
-
-* kylin.hbase.region.cut=5
-* kylin.hbase.hfile.size.gb=2
-
-These new parameters determine how to split HBase regions; to use a different size, you can overwrite these params at the Cube level. 
-
-When copying from the old kylin.properties file, we suggest removing the deprecated ones and adding the new ones.
-
-#### 3. Add conf/kylin\_job\_conf\_inmem.xml
-A new job conf file named "kylin\_job\_conf\_inmem.xml" is added to the "conf" folder. As Kylin 1.5 introduced the "fast cubing" algorithm, which aims to leverage more memory for in-mem aggregation, Kylin will use this new conf file when submitting the in-mem cube build job, which requests different memory than a normal job. Please update it properly according to your cluster capacity.
-
-Besides, if you have used separate config files for cubes of different capacities, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different Yarn job queue), you can customize at the cube level; check [KYLIN-1706](https://issues.apache.org/jira [...]
-
diff --git a/website/_docs16/howto/howto_use_beeline.md b/website/_docs16/howto/howto_use_beeline.md
deleted file mode 100644
index 7c3148a..0000000
--- a/website/_docs16/howto/howto_use_beeline.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-layout: docs16
-title:  Use Beeline for Hive Commands
-categories: howto
-permalink: /docs16/howto/howto_use_beeline.html
----
-
-Beeline (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors to replace the Hive CLI. By default Kylin uses the Hive CLI to synchronize Hive tables, create flattened intermediate tables, etc. With simple configuration changes you can make Kylin use Beeline instead.
-
-Edit $KYLIN_HOME/conf/kylin.properties by:
-
-  1. change kylin.hive.client=cli to kylin.hive.client=beeline
-  2. add "kylin.hive.beeline.params", this is where you can specifiy beeline commmand parameters. Like username(-n), JDBC URL(-u),etc. There's a sample kylin.hive.beeline.params included in default kylin.properties, however it's commented. You can modify the sample based on your real environment.
-
diff --git a/website/_docs16/howto/howto_use_distributed_scheduler.md b/website/_docs16/howto/howto_use_distributed_scheduler.md
deleted file mode 100644
index 01bb097..0000000
--- a/website/_docs16/howto/howto_use_distributed_scheduler.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: docs16
-title:  Use distributed job scheduler
-categories: howto
-permalink: /docs16/howto/howto_use_distributed_scheduler.html
----
-
-Since Kylin 2.0, Kylin supports a distributed job scheduler, which is more extensible, available and reliable than the default job scheduler.
-To enable the distributed job scheduler, you need to set or update three configs in kylin.properties:
-
-```
-1. kylin.job.scheduler.default=2
-2. kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
-3. add all job servers and query servers to the kylin.server.cluster-servers
-```
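-
-For example, config 3 may look like this (the host names and ports are illustrative):
-
-```
-kylin.server.cluster-servers=server1:7070,server2:7070,server3:7070
-```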
diff --git a/website/_docs16/howto/howto_use_restapi.md b/website/_docs16/howto/howto_use_restapi.md
deleted file mode 100644
index 8d1a575..0000000
--- a/website/_docs16/howto/howto_use_restapi.md
+++ /dev/null
@@ -1,1113 +0,0 @@
----
-layout: docs16
-title:  Use RESTful API
-categories: howto
-permalink: /docs16/howto/howto_use_restapi.html
-since: v0.7.1
----
-
-This page lists the major RESTful APIs provided by Kylin.
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Enable cube](#enable-cube)
-* JOB
-   * [Resume job](#resume-job)
-   * [Pause job](#pause-job)
-   * [Discard job](#discard-job)
-   * [Get job status](#get-job-status)
-   * [Get job step output](#get-job-step-output)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Table (Extend Info)](#get-hive-table-extend-info)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-* Streaming
-   * [Initiate cube start position](#initiate-cube-start-position)
-   * [Build stream cube](#build-stream-cube)
-   * [Check segment holes](#check-segment-holes)
-   * [Fill segment holes](#fill-segment-holes)
-
-## Authentication
-`POST /kylin/api/user/authentication`
-
-#### Request Header
-Authorization data encoded by basic auth is needed in the header, such as:
-Authorization:Basic {data}
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-#### Curl Example
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If the login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent http requests, attach the cookie, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
-```
-
-Alternatively, you can provide the username/password with the "user" option in each curl call; please note this risks leaking the password into the shell history:
-
-
-```
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
-```
-
-***
-
-## Query
-`POST /kylin/api/query`
-
-#### Request Body
-* sql - `required` `string` The text of sql statement.
-* offset - `optional` `int` Query offset. If offset is set in sql, curIndex will be ignored.
-* limit - `optional` `int` Query limit. If limit is set in sql, perPage will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; default is "false". Set to "false" for production use. 
-* project - `optional` `string` Project to perform query. Default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Curl Example
-
-```
-curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
-```
-
-#### Response Body
-* columnMetas - Column metadata information of result set.
-* results - Data set of result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of affected row by this sql statement.
-* isException - Whether this response is an exception.
-* ExceptionMessage - Message content of the exception.
-* Duration - Time cost of this query
-* Partial - Whether the response is a partial result or not. Decided by `acceptPartial` of request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-
-## List queryable tables
-`GET /kylin/api/tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to load tables
-
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## List cubes
-`GET /kylin/api/cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int` Cubes per page.
-* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
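-#### Curl Example
-
-A sketch with placeholder credentials and host:
-
-```
-curl -X GET --user ADMIN:KYLIN "http://localhost:7070/kylin/api/cubes?offset=0&limit=10&projectName=learn_kylin"
-```
-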
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /kylin/api/cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /kylin/api/cube_desc/{cubeName}`
-Get descriptor for specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /kylin/api/model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it should be the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /kylin/api/cubes/{cubeName}/build`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-1-1
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
-#### Curl Example
-```
-curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":'1423526400000', "endTime":'1423612800000', "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
-```
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /kylin/api/cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /kylin/api/cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /kylin/api/cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-***
-
-## Resume Job
-`PUT /kylin/api/jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_ [...]
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-## Pause Job
-`PUT /kylin/api/jobs/{jobId}/pause`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Discard Job
-`PUT /kylin/api/jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-## Get Job Status
-`GET /kylin/api/jobs/{jobId}`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume Job")
-
-## Get job step output
-`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; the step id is composed of the jobId and the step sequence id. For example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3". 
-
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-***
-
-## Get Hive Table
-`GET /kylin/api/tables/{tableName}`
-
-#### Request Parameters
-* tableName - `required` `string` table name to find.
-
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Table (Extend Info)
-`GET /kylin/api/tables/{tableName}/exd-map`
-
-#### Request Parameters
-* tableName - `required` `string` table name to find.
-
-#### Response Sample
-```
-{
-    "minFileSize": "46055",
-    "totalNumberFiles": "1",
-    "location": "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_07",
-    "lastAccessTime": "1418374103365",
-    "lastUpdateTime": "1398176493340",
-    "columns": "struct columns { string code, string description, i32 total_emp, i32 salary}",
-    "partitionColumns": "",
-    "EXD_STATUS": "true",
-    "maxFileSize": "46055",
-    "inputformat": "org.apache.hadoop.mapred.TextInputFormat",
-    "partitioned": "false",
-    "tableName": "sample_07",
-    "owner": "hue",
-    "totalFileSize": "46055",
-    "outputformat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-}
-```
-
-## Get Hive Tables
-`GET /kylin/api/tables`
-
-#### Request Parameters
-* project - `required` `string` will list all tables in the project.
-* ext - `optional` `boolean` set true to get the extended info of the tables.
-
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /kylin/api/tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` table names you want to load from hive, separated by commas.
-* project - `required` `String`  the project which the tables will be loaded into.
-
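-#### Curl Example
-
-A sketch with placeholder credentials, host and table names:
-
-```
-curl -X POST --user ADMIN:KYLIN http://localhost:7070/kylin/api/tables/DEFAULT.SAMPLE_07,DEFAULT.SAMPLE_08/learn_kylin
-```
-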
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`PUT /kylin/api/cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
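-
-For example, to notify Kylin that a cube was updated (credentials, host and cube name are placeholders):
-
-```
-curl -X PUT --user ADMIN:KYLIN http://localhost:7070/kylin/api/cache/cube/kylin_sales/update
-```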
-
-***
-
-## Initiate cube start position
-Set the stream cube's start position to the current latest offsets; this can avoid building from the earliest position of the Kafka topic (if you have set a long retention time); 
-
-`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Response Sample
-```sh
-{
-    "result": "success", 
-    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
-}
-```
-
-## Build stream cube
-`PUT /kylin/api/cubes/{cubeName}/build2`
-
-This API is specific to stream cube building;
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-#### Request Body
-
-* sourceOffsetStart - `required` `long` The start offset; 0 means continuing from the previous position;
-* sourceOffsetEnd  - `required` `long` The end offset; 9223372036854775807 represents the end position of the current stream data
-* buildType - `required` Build type, "BUILD", "MERGE" or "REFRESH"
-
-#### Request Sample
-
-```sh
-{  
-   "sourceOffsetStart": 0, 
-   "sourceOffsetEnd": 9223372036854775807, 
-   "buildType": "BUILD"
-}
-```
-
-#### Response Sample
-```sh
-{
-    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
-    "last_modified": 1480402541240, 
-    "version": "1.6.0", 
-    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
-    "type": "BUILD", 
-    "duration": 0, 
-    "related_cube": "embedded_cube_clone", 
-    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
-    "exec_start_time": 0, 
-    "exec_end_time": 0, 
-    "mr_waiting": 0, 
- ...
-}
-```
-
-## Check segment holes
-`GET /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
-
-## Fill segment holes
-`PUT /kylin/api/cubes/{cubeName}/holes`
-
-#### Path variable
-* cubeName - `required` `string` Cube name
diff --git a/website/_docs16/howto/howto_use_restapi_in_js.md b/website/_docs16/howto/howto_use_restapi_in_js.md
deleted file mode 100644
index ebc5699..0000000
--- a/website/_docs16/howto/howto_use_restapi_in_js.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs16
-title:  Use RESTful API in Javascript
-categories: howto
-permalink: /docs16/howto/howto_use_restapi_in_js.html
----
-Kylin security is based on basic access authorization; if you want to use the API in your javascript, you need to add authorization info to the http headers.
-
-## Example on Query API.
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-## Keypoints
-1. add basic access authorization info in http headers.
-2. use the right ajax type and data syntax.
-
-## Basic access authorization
-For what is basic access authorization, refer to [Wikipedia Page](http://en.wikipedia.org/wiki/Basic_access_authentication).
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
- 
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
diff --git a/website/_docs16/index.cn.md b/website/_docs16/index.cn.md
deleted file mode 100644
index 005a713..0000000
--- a/website/_docs16/index.cn.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-layout: docs16-cn
-title: Overview
-categories: docs16
-permalink: /cn/docs16/index.html
----
-
-Welcome to Apache Kylin™
-------------  
-> Extreme OLAP Engine for Big Data
-
-Apache Kylin™ is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
-
-Documents of prior versions: 
-* [v1.5](/cn/docs15/)
-* [v1.3](/cn/docs/) 
-
-Installation 
-------------  
-Please refer to the installation documents to install Apache Kylin: [Installation Guide](/cn/docs16/install/)
-
-
-
-
-
-
diff --git a/website/_docs16/index.md b/website/_docs16/index.md
deleted file mode 100644
index b4eee3b..0000000
--- a/website/_docs16/index.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-layout: docs16
-title: Overview
-categories: docs
-permalink: /docs16/index.html
----
-
-Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
-------------  
-
-Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
-
-Document of prior versions: 
-
-* [v1.5.x document](/docs15/)
-* [v1.3.x document](/docs/) 
-
-Installation & Setup
-------------  
-1. [Hadoop Env](install/hadoop_env.html)
-2. [Installation Guide](install/index.html)
-3. [Advanced settings](install/advance_settings.html)
-4. [Deploy in cluster mode](install/kylin_cluster.html)
-5. [Run Kylin with Docker](install/kylin_docker.html)
-
-
-Tutorial
-------------  
-1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
-2. [Cube Creation](tutorial/create_cube.html)
-3. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
-4. [Web Interface](tutorial/web.html)
-5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
-6. [Build Cube with Streaming Data (beta)](tutorial/cube_streaming.html)
-
-
-Connectivity and APIs
-------------  
-1. [ODBC driver](tutorial/odbc.html)
-2. [JDBC driver](howto/howto_jdbc.html)
-3. [RESTful API list](howto/howto_use_restapi.html)
-4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
-5. [Call RESTful API in Javascript](howto/howto_use_restapi_in_js.html)
-6. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
-7. [Connect from Tableau 8](tutorial/tableau.html)
-8. [Connect from Tableau 9](tutorial/tableau_91.html)
-9. [Connect from SQuirreL](tutorial/squirrel.html)
-10. [Connect from Apache Flink](tutorial/flink.html)
-
-Operations
-------------  
-1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
-2. [Cleanup storage (HDFS & HBase)](howto/howto_cleanup_storage.html)
-3. [Upgrade from old version](howto/howto_upgrade.html)
-
-
-
diff --git a/website/_docs16/install/advance_settings.md b/website/_docs16/install/advance_settings.md
deleted file mode 100644
index 8b5c24d..0000000
--- a/website/_docs16/install/advance_settings.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-layout: docs16
-title:  "Advanced Settings"
-categories: install
-permalink: /docs16/install/advance_settings.html
----
-
-## Overwrite default kylin.properties at Cube level
-In `conf/kylin.properties` there are many parameters that control or impact Kylin's behavior. Most are global configurations, such as security or job related settings, while some are Cube related. The Cube-related parameters can be customized at each Cube level, so you can control the behavior more flexibly. The GUI for this is the "Configuration Overwrites" step of the Cube wizard, as in the screenshot below.
-
-![]( /images/install/overwrite_config.png)
-
-Here are two examples: 
-
- * `kylin.cube.algorithm`: defines the cubing algorithm that the job engine will select. The default value is "auto", meaning the engine dynamically picks an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly (usually "inmem" has better performance but requests more memory).   
-
- * `kylin.hbase.region.cut`: defines how big a region is when creating the HBase table. The default value is "5" (GB) per region. That might be too big for a small or medium cube, so you can set a smaller value to get more regions created, which can yield better query performance.
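-
-For example, a cube expected to be small might carry overwrites like the following in that step (the values are only illustrative):
-
-{% highlight Groff markup %}
-kylin.cube.algorithm=inmem
-kylin.hbase.region.cut=1
-{% endhighlight %}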
-
-## Overwrite default Hadoop job conf at Cube level
-The `conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` files manage the default configurations for Hadoop jobs. If you need to customize the configurations per cube, you can do that in a similar way as above, but with the prefix `kylin.job.mr.config.override.`; these configs will be parsed out and applied when submitting jobs. See the two examples below:
-
- * If you want a cube's jobs to get more memory from YARN, you can define: `kylin.job.mr.config.override.mapreduce.map.java.opts=-Xmx7g` and `kylin.job.mr.config.override.mapreduce.map.memory.mb=8192`
- * If you want a cube's jobs to go to a different YARN resource queue, you can define: `kylin.job.mr.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
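-
-Put together, the sample overrides above would appear in the cube's "Configuration Overwrites" step as:
-
-{% highlight Groff markup %}
-kylin.job.mr.config.override.mapreduce.map.java.opts=-Xmx7g
-kylin.job.mr.config.override.mapreduce.map.memory.mb=8192
-kylin.job.mr.config.override.mapreduce.job.queuename=myQueue
-{% endhighlight %}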
-
-## Overwrite default Hive job conf at Cube level
-The `conf/kylin_hive_conf.xml` file manages the default configurations for Hive jobs (like creating the intermediate flat Hive table). If you need to customize the configurations per cube, you can do that in a similar way as above, but with another prefix `kylin.hive.config.override.`; these configs will be parsed out and applied when running the "hive -e" or "beeline" commands. See the example below:
-
- * If you want Hive to use a different YARN resource queue, you can define: `kylin.hive.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
-
-
-## Enable compression
-
-By default, Kylin does not enable compression. This is not the recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable compression algorithm will reduce storage overhead, while an unsupported algorithm will break the Kylin job build. Three kinds of compression are used in Kylin: HBase table compression, Hive output compression and MR jobs output compression. 
-
-* HBase table compression
-The compression setting is defined in `kylin.properties` by `kylin.hbase.default.compression.codec`, whose default value is *none*. The valid values are *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure the selected algorithm is supported on your HBase cluster; snappy, lzo and lz4 in particular are not included in all Hadoop distributions. 
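-
-For example, to switch HBase tables to snappy (assuming your HBase cluster supports it):
-
-{% highlight Groff markup %}
-kylin.hbase.default.compression.codec=snappy
-{% endhighlight %}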
-
-* Hive output compression
-The compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages the default Hive configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_hive_conf.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-* MR jobs output compression
-The compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the default MR configuration. If you want to override the settings, please add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. Take snappy compression for example:
-{% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.output.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-    <property>
-        <name>mapreduce.output.fileoutputformat.compress.codec</name>
-        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
-        <description></description>
-    </property>
-{% endhighlight %}
-
-Compression settings only take effect after restarting the Kylin server instance.
-
-## Allocate more memory to Kylin instance
-
-Open `bin/setenv.sh`, which contains two sample settings for the `KYLIN_JVM_SETTINGS` environment variable. The default setting is small (4GB at max); you can comment it out and then un-comment the next line to allocate 16GB:
-
-{% highlight Groff markup %}
-export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
-# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
-{% endhighlight %}
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
-
-
-## Enable email notification
-
-Kylin can send email notifications when a job completes or fails. To enable this, edit `conf/kylin.properties` and set the following parameters:
-{% highlight Groff markup %}
-mail.enabled=true
-mail.host=your-smtp-server
-mail.username=your-smtp-account
-mail.password=your-smtp-pwd
-mail.sender=your-sender-address
-kylin.job.admin.dls=administrator-address
-{% endhighlight %}
-
-Restart the Kylin server for the change to take effect. To disable, set `mail.enabled` back to `false`.
-
-Administrators will get notifications for all jobs. Modelers and analysts need to enter their email addresses into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.
diff --git a/website/_docs16/install/hadoop_evn.md b/website/_docs16/install/hadoop_evn.md
deleted file mode 100644
index cea96cf..0000000
--- a/website/_docs16/install/hadoop_evn.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: docs16
-title:  "Hadoop Environment"
-categories: install
-permalink: /docs16/install/hadoop_env.html
----
-
-Kylin needs to run on a Hadoop node. For better stability, we suggest deploying it on a pure Hadoop client machine, on which command-line tools like `hive`, `hbase`, `hadoop` and `hdfs` are already installed and configured. The Linux account running Kylin must have permission to access the Hadoop cluster, including creating/writing HDFS files, Hive tables and HBase tables, and submitting MR jobs. 
-
-## Recommended Hadoop Versions
-
-* Hadoop: 2.6 - 2.7
-* Hive: 0.13 - 1.2.1
-* HBase: 0.98 - 0.99, 1.x
-* JDK: 1.7+
-
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1. Windows and MacOS have known issues._
-
-To make things easier, we strongly recommend you try Kylin with an all-in-one sandbox VM, like [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB memory. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**. 
-
-To avoid permission issues in the sandbox, you can use its `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
-
-We also suggest using bridged mode instead of NAT mode in the VirtualBox settings. Bridged mode will assign your sandbox an independent IP address so that you can avoid issues like [this](https://github.com/KylinOLAP/Kylin/issues/12).
-
-### Start Hadoop
-Use Ambari to launch Hadoop:
-
-```
-ambari-agent start
-ambari-server start
-```
-
-Once both commands have run successfully, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check the status of everything. **By default Hortonworks Ambari disables HBase; you need to manually start the `HBase` service from the Ambari homepage.**
-
-![start hbase in ambari](https://raw.githubusercontent.com/KylinOLAP/kylinolap.github.io/master/docs/installation/starthbase.png)
-
-**Additional Info for setting up Hortonworks Sandbox on VirtualBox**
-
-	Please make sure the HBase Master port [default 60000] and the ZooKeeper port [default 2181] are forwarded to the host OS.
- 
diff --git a/website/_docs16/install/index.cn.md b/website/_docs16/install/index.cn.md
deleted file mode 100644
index 5c4d321..0000000
--- a/website/_docs16/install/index.cn.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: docs16
-title:  "Installation Guide"
-categories: install
-permalink: /cn/docs16/install/index.html
-version: v0.7.2
-since: v0.7.1
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
-
-## Prerequisites on Hadoop
-
-* Hadoop: 2.4+
-* Hive: 0.13+
-* HBase: 0.98+, 1.x
-* JDK: 1.7+  
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
-
-
-It is most common to install Kylin on a Hadoop client machine. It can be used for demo purposes, or by those who want to host their own web site to provide a Kylin service. The scenario is depicted below:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like Hive and HBase.
-
-Except for some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download the latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run `hadoop`, `hive` and `hbase` commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if there are environment issues.
-4. To start Kylin, simply run **bin/kylin.sh start**
-5. To stop Kylin, simply run **bin/kylin.sh stop**
-
-> If you want to have multiple Kylin nodes, please refer to [this](kylin_cluster.html)
-
-After Kylin starts, you can visit <http://your_hostname:7070/kylin>. The username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in it yet. To start, you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build your own cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs16/install/index.md b/website/_docs16/install/index.md
deleted file mode 100644
index 3086b42..0000000
--- a/website/_docs16/install/index.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs16
-title:  "Installation Guide"
-categories: install
-permalink: /docs16/install/index.html
----
-
-### Environment
-
-Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check [Hadoop Environment](hadoop_env.html).
-
-It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk to the Hadoop cluster via command lines including `hive`, `hbase`, `hadoop`, etc. The scenario is depicted below:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like Hive and HBase.
-
-Except for some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download the latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privilege to run `hadoop`, `hive` and `hbase` commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if there are environment issues.
-4. To start Kylin, run **bin/kylin.sh start**; after the server starts, you can watch logs/kylin.log for runtime logs (a combined session is sketched below)
-5. To stop Kylin, run **bin/kylin.sh stop**
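-
-Putting the steps together, a minimal session might look like the following; the package name and paths are only illustrative:
-
-{% highlight Groff markup %}
-tar -zxvf apache-kylin-1.6.0-bin.tar.gz
-export KYLIN_HOME=`pwd`/apache-kylin-1.6.0-bin
-$KYLIN_HOME/bin/check-env.sh
-$KYLIN_HOME/bin/kylin.sh start
-{% endhighlight %}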
-
-> If you want to have multiple Kylin nodes running to provide high availability, please refer to [this](kylin_cluster.html)
-
-After Kylin starts, you can visit <http://hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in it yet. To start, you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build a cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-
diff --git a/website/_docs16/install/kylin_cluster.md b/website/_docs16/install/kylin_cluster.md
deleted file mode 100644
index 1938000..0000000
--- a/website/_docs16/install/kylin_cluster.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: docs16
-title:  "Deploy in Cluster Mode"
-categories: install
-permalink: /docs16/install/kylin_cluster.html
----
-
-
-### Kylin Server modes
-
-Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (the kylin.metadata.url config in conf/kylin.properties). For load balancing, it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.)
-
-Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying the runtime mode. It has three options: "job" for running the job engine only, "query" for running the query engine only, and "all" for running both. Note that only one server can run the job engine ("all" mode or "job" mode); the others must all be in "query" mode.
-
-A typical scenario is depicted in the following chart:
-
-![]( /images/install/kylin_server_modes.png)
-
-### Setting up Multiple Kylin REST servers
-
-If you are running Kylin in a cluster with multiple Kylin REST server instances, please make sure the following properties are correctly configured in ${KYLIN_HOME}/conf/kylin.properties for EVERY server instance.
-
-1. kylin.rest.servers 
-	The list of web servers in use; this enables one web server instance to sync up with the others. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
-  
-2. kylin.server.mode
-	Make sure there is only one instance whose "kylin.server.mode" is set to "all" (or "job"); the others should be "query"
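-
-A minimal sketch for a two-instance cluster, using the sample hostnames from above:
-
-{% highlight Groff markup %}
-# kylin.properties on sandbox1, the node running the job engine
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-kylin.server.mode=all
-
-# kylin.properties on sandbox2, a pure query node
-kylin.rest.servers=sandbox1:7070,sandbox2:7070
-kylin.server.mode=query
-{% endhighlight %}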
-	
-### Setup load balancer 
-
-To enable Kylin high availability, you need to set up a load balancer in front of these servers and let it route the incoming requests to the cluster. Clients send all requests to the load balancer, instead of talking to a specific instance. 
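-
-Any HTTP load balancer can serve this role. As one sketch (Nginx is just one option, with the sample hostnames from above):
-
-{% highlight Groff markup %}
-upstream kylin_servers {
-    server sandbox1:7070;
-    server sandbox2:7070;
-}
-server {
-    listen 80;
-    location /kylin {
-        proxy_pass http://kylin_servers;
-    }
-}
-{% endhighlight %}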
-	
diff --git a/website/_docs16/install/kylin_docker.md b/website/_docs16/install/kylin_docker.md
deleted file mode 100644
index 57d4e20..0000000
--- a/website/_docs16/install/kylin_docker.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-layout: docs16
-title:  "Run Kylin with Docker"
-categories: install
-permalink: /docs16/install/kylin_docker.html
-version: v1.5.3
-since: v1.5.2
----
-
-Apache Kylin runs as a client of the Hadoop cluster, so it is reasonable to run it within a Docker container; please check [this project](https://github.com/Kyligence/kylin-docker/) on GitHub.
diff --git a/website/_docs16/install/manual_install_guide.cn.md b/website/_docs16/install/manual_install_guide.cn.md
deleted file mode 100644
index b192dfa..0000000
--- a/website/_docs16/install/manual_install_guide.cn.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-layout: docs16-cn
-title:  "Manual Installation Guide"
-categories: install
-permalink: /cn/docs16/install/manual_install_guide.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Introduction
-
-In most cases, our automated script ([Installation Guide](./index.html)) can help you launch Kylin in your Hadoop sandbox or even in your Hadoop cluster. However, in case the deployment script fails, we provide this document as a reference guide to resolve the problem.
-
-Basically, this document explains every step in the automated script. We assume that you are already very familiar with Hadoop operations on Linux.
-
-## Prerequisites
-* Tomcat installed, with CATALINA_HOME exported 
-* The Kylin binary package copied to the local machine and extracted, referenced as $KYLIN_HOME hereafter
-
-## Steps
-
-### Prepare Jars
-
-Kylin requires two jar files; both are configured in the default kylin.properties:
-
-```
-kylin.job.jar=/tmp/kylin/kylin-job-latest.jar
-
-```
-
-This is the job jar that Kylin uses for MR jobs. You need to copy $KYLIN_HOME/job/target/kylin-job-latest.jar to /tmp/kylin/.
-
-```
-kylin.coprocessor.local.jar=/tmp/kylin/kylin-coprocessor-latest.jar
-
-```
-
-This is the HBase coprocessor jar that Kylin will deploy to HBase; it is used to improve performance. You need to copy $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar to /tmp/kylin/.
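-
-A sketch of the two copy steps described above:
-
-```
-mkdir -p /tmp/kylin
-cp $KYLIN_HOME/job/target/kylin-job-latest.jar /tmp/kylin/
-cp $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar /tmp/kylin/
-```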
-
-### Start Kylin
-
-Start Kylin with `./kylin.sh start` and stop it with `./kylin.sh stop`.
diff --git a/website/_docs16/release_notes.md b/website/_docs16/release_notes.md
deleted file mode 100644
index 235d752..0000000
--- a/website/_docs16/release_notes.md
+++ /dev/null
@@ -1,1333 +0,0 @@
----
-layout: docs16
-title:  Apache Kylin Release Notes
-categories: gettingstarted
-permalink: /docs16/release_notes.html
----
-
-To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/), 
-where the source code package, binary package, ODBC driver and installation guide are available.
-
-For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
-
-or send it to the Apache Kylin mailing lists:
-
-* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
-* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
-
-## v1.6.0 - 2016-11-26
-_Tag:_ [kylin-1.6.0](https://github.com/apache/kylin/tree/kylin-1.6.0)
-This is a major release with better support for using Apache Kafka as a data source. Check [how to upgrade](/docs16/howto/howto_upgrade.html) for the upgrade steps.
-
-__New Feature__
-
-* [KYLIN-1726] - Scalable streaming cubing
-* [KYLIN-1919] - Support Embedded Structure when Parsing Streaming Message
-* [KYLIN-2055] - Add an encoder for Boolean type
-* [KYLIN-2067] - Add API to check and fill segment holes
-* [KYLIN-2079] - add explicit configuration knob for coprocessor timeout
-* [KYLIN-2088] - Support intersect count for calculation of retention or conversion rates
-* [KYLIN-2125] - Support using beeline to load hive table metadata
-
-__Bug__
-
-* [KYLIN-1565] - Read the kv max size from HBase config
-* [KYLIN-1820] - Column autocomplete should remove the user input in model designer
-* [KYLIN-1828] - java.lang.StringIndexOutOfBoundsException in org.apache.kylin.storage.hbase.util.StorageCleanupJob
-* [KYLIN-1967] - Dictionary rounding can cause IllegalArgumentException in GTScanRangePlanner
-* [KYLIN-1978] - kylin.sh compatible issue on Ubuntu
-* [KYLIN-1990] - The SweetAlert at the front page may out of the page if the content is too long.
-* [KYLIN-2007] - CUBOID_CACHE is not cleared when rebuilding ALL cache
-* [KYLIN-2012] - more robust approach to hive schema changes
-* [KYLIN-2024] - kylin TopN only support the first measure 
-* [KYLIN-2027] - Error "connection timed out" occurs when zookeeper's port is set in hbase.zookeeper.quorum of hbase-site.xml
-* [KYLIN-2028] - find-*-dependency script fail on Mac OS
-* [KYLIN-2035] - Auto Merge Submit Continuously
-* [KYLIN-2041] - Wrong parameter definition in Get Hive Tables REST API
-* [KYLIN-2043] - Rollback httpclient to 4.2.5 to align with Hadoop 2.6/2.7
-* [KYLIN-2044] - Unclosed DataInputByteBuffer in BitmapCounter#peekLength
-* [KYLIN-2045] - Wrong argument order in JobInstanceExtractor#executeExtract()
-* [KYLIN-2047] - Ineffective null check in MetadataManager
-* [KYLIN-2050] - Potentially ineffective call to close() in QueryCli
-* [KYLIN-2051] - Potentially ineffective call to IOUtils.closeQuietly()
-* [KYLIN-2052] - Edit "Top N" measure, the "group by" column wasn't displayed
-* [KYLIN-2059] - Concurrent build issue in CubeManager.calculateToBeSegments()
-* [KYLIN-2069] - NPE in LookupStringTable
-* [KYLIN-2078] - Can't see generated SQL at Web UI
-* [KYLIN-2084] - Unload sample table failed
-* [KYLIN-2085] - PrepareStatement return incorrect result in some cases
-* [KYLIN-2086] - Still report error when there is more than 12 dimensions in one agg group
-* [KYLIN-2093] - Clear cache in CubeMetaIngester
-* [KYLIN-2097] - Get 'Column does not exist in row key desc" on cube has TopN measure
-* [KYLIN-2099] - Import table error of sample table KYLIN_CAL_DT
-* [KYLIN-2106] - UI bug - Advanced Settings - Rowkeys - new Integer dictionary encoding - could possibly impact also cube metadata
-* [KYLIN-2109] - Deploy coprocessor only this server own the table
-* [KYLIN-2110] - Ineffective comparison in BooleanDimEnc#equals()
-* [KYLIN-2114] - WEB-Global-Dictionary bug fix and improve
-* [KYLIN-2115] - some extended column query returns wrong answer
-* [KYLIN-2116] - when hive field delimitor exists in table field values, fields order is wrong
-* [KYLIN-2119] - Wrong chart value and sort when process scientific notation 
-* [KYLIN-2120] - kylin1.5.4.1 with cdh5.7 cube sql Oops Faild to take action
-* [KYLIN-2121] - Failed to pull data to PowerBI or Excel on some query
-* [KYLIN-2127] - UI bug fix for Extend Column
-* [KYLIN-2130] - QueryMetrics concurrent bug fix
-* [KYLIN-2132] - Unable to pull data from Kylin Cube ( learn_kylin cube ) to Excel or Power BI for Visualization and some dimensions are not showing up.
-* [KYLIN-2134] - Kylin will treat empty string as NULL by mistake
-* [KYLIN-2137] - Failed to run mr job when user put a kafka jar in hive's lib folder
-* [KYLIN-2138] - Unclosed ResultSet in BeelineHiveClient
-* [KYLIN-2146] - "Streaming Cluster" page should remove "Margin" inputbox
-* [KYLIN-2152] - TopN group by column does not distinguish between NULL and ""
-* [KYLIN-2154] - source table rows will be skipped if TOPN's group column contains NULL values
-* [KYLIN-2158] - Delete joint dimension not right
-* [KYLIN-2159] - Redistribution Hive Table Step always requires row_count filename as 000000_0 
-* [KYLIN-2167] - FactDistinctColumnsReducer may get wrong max/min partition col value
-* [KYLIN-2173] - push down limit leads to wrong answer when filter is loosened
-* [KYLIN-2178] - CubeDescTest is unstable
-* [KYLIN-2201] - Cube desc and aggregation group rule combination max check fail
-* [KYLIN-2226] - Build Dimension Dictionary Error
-
-__Improvement__
-
-* [KYLIN-1042] - Horizontal scalable solution for streaming cubing
-* [KYLIN-1827] - Send mail notification when runtime exception throws during build/merge cube
-* [KYLIN-1839] - improvement set classpath before submitting mr job
-* [KYLIN-1917] - TopN counter merge performance improvement
-* [KYLIN-1962] - Split kylin.properties into two files
-* [KYLIN-1999] - Use some compression at UT/IT
-* [KYLIN-2019] - Add license checker into checkstyle rule
-* [KYLIN-2033] - Refactor broadcast of metadata change
-* [KYLIN-2042] - QueryController puts entry in Cache w/o checking QueryCacheEnabled
-* [KYLIN-2054] - TimedJsonStreamParser should support other time format
-* [KYLIN-2068] - Import hive comment when sync tables
-* [KYLIN-2070] - UI changes for allowing concurrent build/refresh/merge
-* [KYLIN-2073] - Need timestamp info for diagnose  
-* [KYLIN-2075] - TopN measure: need select "constant" + "1" as the SUM|ORDER parameter
-* [KYLIN-2076] - Improve sample cube and data
-* [KYLIN-2080] - UI: allow multiple building jobs for the same cube
-* [KYLIN-2082] - Support to change streaming configuration
-* [KYLIN-2089] - Make update HBase coprocessor concurrent
-* [KYLIN-2090] - Allow updating cube level config even the cube is ready
-* [KYLIN-2091] - Add API to init the start-point (of each parition) for streaming cube
-* [KYLIN-2095] - Hive mr job use overrided MR job configuration by cube properties
-* [KYLIN-2098] - TopN support query UHC column without sorting by sum value
-* [KYLIN-2100] - Allow cube to override HIVE job configuration by properties
-* [KYLIN-2108] - Support usage of schema name "default" in SQL
-* [KYLIN-2111] - only allow columns from Model dimensions when add group by column to TOP_N
-* [KYLIN-2112] - Allow a column be a dimension as well as "group by" column in TopN measure
-* [KYLIN-2113] - Need sort by columns in SQLDigest
-* [KYLIN-2118] - allow user view CubeInstance json even cube is ready
-* [KYLIN-2122] - Move the partition offset calculation before submitting job
-* [KYLIN-2126] - use column name as default dimension name when auto generate dimension for lookup table
-* [KYLIN-2140] - rename packaged js with different name when build
-* [KYLIN-2143] - allow more options from Extended Columns,COUNT_DISTINCT,RAW_TABLE
-* [KYLIN-2162] - Improve the cube validation error message
-* [KYLIN-2221] - rethink on KYLIN-1684
-* [KYLIN-2083] - more RAM estimation test for MeasureAggregator and GTAggregateScanner
-* [KYLIN-2105] - add QueryId
-* [KYLIN-1321] - Add derived checkbox for lookup table columns on Auto Generate Dimensions panel
-* [KYLIN-1995] - Upgrade MapReduce properties which are deprecated
-
-__Task__
-
-* [KYLIN-2072] - Cleanup old streaming code
-* [KYLIN-2081] - UI change to support embeded streaming message
-* [KYLIN-2171] - Release 1.6.0
-
-
-## v1.5.4.1 - 2016-09-28
-_Tag:_ [kylin-1.5.4.1](https://github.com/apache/kylin/tree/kylin-1.5.4.1)
-This version fixes major bugs introduced in 1.5.4; the metadata and HBase coprocessor are compatible with 1.5.4.
-
-__Bug__
-
-* [KYLIN-2010] - Date dictionary return wrong SQL result
-* [KYLIN-2026] - NPE occurs when build a cube without partition column
-* [KYLIN-2032] - Cube build failed when partition column isn't in dimension list
-
-## v1.5.4 - 2016-09-15
-_Tag:_ [kylin-1.5.4](https://github.com/apache/kylin/tree/kylin-1.5.4)
-This version includes bug fixes and enhancements as well as new features. It is backward compatible with v1.5.3, but after upgrading you still need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1732] - Support Window Function
-* [KYLIN-1767] - UI for TopN: specify encoding and multiple "group by"
-* [KYLIN-1849] - Search cube by name in Web UI
-* [KYLIN-1908] - Collect Metrics to JMX
-* [KYLIN-1921] - Support Grouping Funtions
-* [KYLIN-1964] - Add a companion tool of CubeMetaExtractor for cube importing
-
-__Bug__
-
-* [KYLIN-962] - [UI] Cube Designer can't drag rowkey normally
-* [KYLIN-1194] - Filter(CubeName) on Jobs/Monitor page works only once
-* [KYLIN-1488] - When modifying a model, Save after deleting a lookup table. The internal error will pop up.
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1808] - unload non existing table cause NPE
-* [KYLIN-1834] - java.lang.IllegalArgumentException: Value not exists! - in Step 4 - Build Dimension Dictionary
-* [KYLIN-1883] - Consensus Problem when running the tool, MetadataCleanupJob
-* [KYLIN-1889] - Didn't deal with the failure of renaming folder in hdfs when running the tool CubeMigrationCLI
-* [KYLIN-1929] - Error to load slow query in "Monitor" page for non-admin user
-* [KYLIN-1933] - Deploy in cluster mode, the "query" node report "scheduler has not been started" every second
-* [KYLIN-1934] - 'Value not exist' During Cube Merging Caused by Empty Dict
-* [KYLIN-1939] - Linkage error while executing any queries
-* [KYLIN-1942] - Models are missing after change project's name
-* [KYLIN-1953] - Error handling for diagnosis
-* [KYLIN-1956] - Can't query from child cube of a hybrid cube after its status changed from disabled to enabled
-* [KYLIN-1961] - Project name is always constant instead of real project name in email notification
-* [KYLIN-1970] - System Menu UI ACL issue
-* [KYLIN-1972] - Access denied when query seek to hybrid
-* [KYLIN-1973] - java.lang.NegativeArraySizeException when Build Dimension Dictionary
-* [KYLIN-1982] - CubeMigrationCLI: associate model with project
-* [KYLIN-1986] - CubeMigrationCLI: make global dictionary unique
-* [KYLIN-1992] - Clear ThreadLocal Contexts when query failed before scaning HBase
-* [KYLIN-1996] - Keep original column order when designing cube
-* [KYLIN-1998] - Job engine lock is not release at shutdown
-* [KYLIN-2003] - error start time at query result page
-* [KYLIN-2005] - Move all storage side behavior hints to GTScanRequest
-
-__Improvement__
-
-* [KYLIN-672] - Add Env and Project Info in job email notification
-* [KYLIN-1702] - The Key of the Snapshot to the related lookup table may be not informative
-* [KYLIN-1855] - Should exclude those joins in whose related lookup tables no dimensions are used in cube
-* [KYLIN-1858] - Remove all InvertedIndex(Streaming purpose) related codes and tests
-* [KYLIN-1866] - Add tip for field at 'Add Streaming' table page.
-* [KYLIN-1867] - Upgrade dependency libraries
-* [KYLIN-1874] - Make roaring bitmap version determined
-* [KYLIN-1898] - Upgrade to Avatica 1.8 or higher
-* [KYLIN-1904] - WebUI for GlobalDictionary
-* [KYLIN-1906] - Add more comments and default value for kylin.properties
-* [KYLIN-1910] - Support Separate HBase Cluster with NN HA and Kerberos Authentication
-* [KYLIN-1920] - Add view CubeInstance json function
-* [KYLIN-1922] - Improve the logic to decide whether to pre aggregate on Region server
-* [KYLIN-1923] - Add access controller to query
-* [KYLIN-1924] - Region server metrics: replace int type for long type for scanned row count
-* [KYLIN-1925] - Do not allow cross project clone for cube
-* [KYLIN-1926] - Loosen the constraint on FK-PK data type matching
-* [KYLIN-1936] - Improve enable limit logic (exactAggregation is too strict)
-* [KYLIN-1940] - Add owner for DataModel
-* [KYLIN-1941] - Show submitter for slow query
-* [KYLIN-1954] - BuildInFunctionTransformer should be executed per CubeSegmentScanner
-* [KYLIN-1963] - Delegate the loading of certain package (like slf4j) to tomcat's parent classloader
-* [KYLIN-1965] - Check duplicated measure name
-* [KYLIN-1966] - Refactor IJoinedFlatTableDesc
-* [KYLIN-1979] - Move hackNoGroupByAggregation to cube-based storage implementations
-* [KYLIN-1984] - Don't use compression in packaging configuration
-* [KYLIN-1985] - SnapshotTable should only keep the columns described in tableDesc
-* [KYLIN-1997] - Add pivot feature back in query result page
-* [KYLIN-2004] - Make the creating intermediate hive table steps configurable (two options)
-
-## v1.5.3 - 2016-07-28
-_Tag:_ [kylin-1.5.3](https://github.com/apache/kylin/tree/kylin-1.5.3)
-This version includes many bug fixes and enhancements as well as new features. It is backward compatible with v1.5.2, but after upgrading you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__New Feature__
-
-* [KYLIN-1478] - TopN measure should support non-dictionary encoding for ultra high cardinality
-* [KYLIN-1693] - Support multiple group-by columns for TOP_N meausre
-* [KYLIN-1752] - Add an option to fail cube build job when source table is empty
-* [KYLIN-1756] - Allow user to run MR jobs against different Hadoop queues
-
-__Bug__
-
-* [KYLIN-1499] - Couldn't save query, error in backend
-* [KYLIN-1568] - Calculate row value buffer size instead of hard coded ROWVALUE_BUFFER_SIZE
-* [KYLIN-1645] - Exception inside coprocessor should report back to the query thread
-* [KYLIN-1646] - Column appeared twice if it was declared as both dimension and measure
-* [KYLIN-1676] - High CPU in TrieDictionary due to incorrect use of HashMap
-* [KYLIN-1679] - bin/get-properties.sh cannot get property which contains space or equals sign
-* [KYLIN-1684] - query on table "kylin_sales" return empty resultset after cube "kylin_sales_cube" which generated by sample.sh is ready
-* [KYLIN-1694] - make multiply coefficient configurable when estimating cuboid size
-* [KYLIN-1695] - Skip cardinality calculation job when loading hive table
-* [KYLIN-1703] - The not-thread-safe ToolRunner.run() will cause concurrency issue in job engine
-* [KYLIN-1704] - When load empty snapshot, NULL Pointer Exception occurs
-* [KYLIN-1723] - GTAggregateScanner$Dump.flush() must not write the WHOLE metrics buffer
-* [KYLIN-1738] - MRJob Id is not saved to kylin jobs if MR job is killed
-* [KYLIN-1742] - kylin.sh should always set KYLIN_HOME to an absolute path
-* [KYLIN-1755] - TopN Measure IndexOutOfBoundsException
-* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
-* [KYLIN-1762] - Query threw NPE with 3 or more join conditions
-* [KYLIN-1769] - There is no response when click "Property" button at Cube Designer
-* [KYLIN-1777] - Streaming cube build shouldn't check working segment
-* [KYLIN-1780] - Potential issue in SnapshotTable.equals()
-* [KYLIN-1781] - kylin.properties encoding error while contain chinese prop key or value
-* [KYLIN-1783] - Can't add override property at cube design 'Configuration Overwrites' step.
-* [KYLIN-1785] - NoSuchElementException when Mandatory Dimensions contains all Dimensions
-* [KYLIN-1787] - Properly deal with limit clause in CubeHBaseEndpointRPC (SELECT * problem)
-* [KYLIN-1788] - Allow arbitrary number of mandatory dimensions in one aggregation group
-* [KYLIN-1789] - Couldn't use View as Lookup when join type is "inner"
-* [KYLIN-1795] - bin/sample.sh doesn't work when configured hive client is beeline
-* [KYLIN-1800] - IllegalArgumentExceptio: Too many digits for NumberDictionary: -0.009999999999877218. Expect 19 digits before decimal point at max.
-* [KYLIN-1803] - ExtendedColumn Measure Encoding with Non-ascii Characters
-* [KYLIN-1811] - Error step may be skipped sometimes when resume a cube job
-* [KYLIN-1816] - More than one base KylinConfig exist in spring JVM
-* [KYLIN-1817] - No result from JDBC with Date filter in prepareStatement
-* [KYLIN-1838] - Fix sample cube definition
-* [KYLIN-1848] - Can't sort cubes by any field in Web UI
-* [KYLIN-1862] - "table not found" in "Build Dimension Dictionary" step
-* [KYLIN-1879] - RestAPI /api/jobs always returns 0 for exec_start_time and exec_end_time fields
-* [KYLIN-1882] - it report can't find the intermediate table in '#4 Step Name: Build Dimension Dictionary' when use hive view as lookup table
-* [KYLIN-1896] - JDBC support mybatis
-* [KYLIN-1905] - Wrong Default Date in Cube Build Web UI
-* [KYLIN-1909] - Wrong access control to rest get cubes
-* [KYLIN-1911] - NPE when extended column has NULL value
-* [KYLIN-1912] - Create Intermediate Flat Hive Table failed when using beeline
-* [KYLIN-1913] - query log printed abnormally if the query contains "\r" (not "\r\n")
-* [KYLIN-1918] - java.lang.UnsupportedOperationException when unload hive table
-
-__Improvement__
-
-* [KYLIN-1319] - Find a better way to check hadoop job status
-* [KYLIN-1379] - More stable and functional precise count distinct implements after KYLIN-1186
-* [KYLIN-1656] - Improve performance of MRv2 engine by making each mapper handles a configured number of records
-* [KYLIN-1657] - Add new configuration kylin.job.mapreduce.min.reducer.number
-* [KYLIN-1669] - Deprecate the "Capacity" field from DataModel
-* [KYLIN-1677] - Distribute source data by certain columns when creating flat table
-* [KYLIN-1705] - Global (and more scalable) dictionary
-* [KYLIN-1706] - Allow cube to override MR job configuration by properties
-* [KYLIN-1714] - Make job/source/storage engines configurable from kylin.properties
-* [KYLIN-1717] - Make job engine scheduler configurable
-* [KYLIN-1718] - Grow ByteBuffer Dynamically in Cube Building and Query
-* [KYLIN-1719] - Add config in scan request to control compress the query result or not
-* [KYLIN-1724] - Support Amazon EMR
-* [KYLIN-1725] - Use KylinConfig inside coprocessor
-* [KYLIN-1728] - Introduce dictionary metadata
-* [KYLIN-1731] - allow non-admin user to edit 'Advenced Setting' step in CubeDesigner
-* [KYLIN-1747] - Calculate all 0 (except mandatory) cuboids
-* [KYLIN-1749] - Allow mandatory only cuboid
-* [KYLIN-1751] - Make kylin log configurable
-* [KYLIN-1766] - CubeTupleConverter.translateResult() is slow due to date conversion
-* [KYLIN-1775] - Add Cube Migrate Support for Global Dictionary
-* [KYLIN-1782] - API redesign for CubeDesc
-* [KYLIN-1786] - Frontend work for KYLIN-1313 (extended columns as measure)
-* [KYLIN-1792] - behaviours for non-aggregated queries
-* [KYLIN-1805] - It's easily got stuck when deleting HTables during running the StorageCleanupJob
-* [KYLIN-1815] - Cleanup package size
-* [KYLIN-1818] - change kafka dependency to provided
-* [KYLIN-1821] - Reformat all of the java files and enable checkstyle to enforce code formatting
-* [KYLIN-1823] - refactor kylin-server packaging
-* [KYLIN-1846] - minimize dependencies of JDBC driver
-* [KYLIN-1884] - Reload metadata automatically after migrating cube
-* [KYLIN-1894] - GlobalDictionary may corrupt when server suddenly crash
-* [KYLIN-1744] - Separate concepts of source offset and date range on cube segments
-* [KYLIN-1654] - Upgrade httpclient dependency
-* [KYLIN-1774] - Update Kylin's tomcat version to 7.0.69
-* [KYLIN-1861] - Hive may fail to create flat table with "GC overhead error"
-
-## v1.5.2.1 - 2016-06-07
-_Tag:_ [kylin-1.5.2.1](https://github.com/apache/kylin/tree/kylin-1.5.2.1)
-
-This is a hot-fix release on top of v1.5.2 with no new features introduced; please upgrade to this version.
-
-__Bug__
-
-* [KYLIN-1758] - createLookupHiveViewMaterializationStep will create intermediate table for fact table
-* [KYLIN-1739] - kylin_job_conf_inmem.xml can impact non-inmem MR job
-
-
-## v1.5.2 - 2016-05-26
-_Tag:_ [kylin-1.5.2](https://github.com/apache/kylin/tree/kylin-1.5.2)
-
-This version is backward compatible with v1.5.1. But after upgrading to v1.5.2 from v1.5.1, you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__New Feature__
-
-* [KYLIN-1016] - Count distinct on any dimension should work even not a predefined measure
-* [KYLIN-1077] - Support Hive View as Lookup Table
-* [KYLIN-1441] - Display time column as partition column
-* [KYLIN-1515] - Make Kylin run on MapR
-* [KYLIN-1600] - Download diagnosis zip from GUI
-* [KYLIN-1672] - support kylin on cdh 5.7
-
-__Improvement__
-
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1313] - Enable deriving dimensions on non PK/FK
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1340] - Tools to extract all cube/hybrid/project related metadata to facilitate diagnosing/debugging/* sharing
-* [KYLIN-1381] - change RealizationCapacity from three profiles to specific numbers
-* [KYLIN-1391] - quicker and better response to v2 storage engine's rpc timeout exception
-* [KYLIN-1418] - Memory hungry cube should select LAYER and INMEM cubing smartly
-* [KYLIN-1432] - For GUI, to add one option "yyyy-MM-dd HH:MM:ss" for Partition Date Column
-* [KYLIN-1453] - cuboid sharding based on specific column
-* [KYLIN-1487] - attach a hyperlink to introduce new aggregation group
-* [KYLIN-1526] - Move query cache back to query controller level
-* [KYLIN-1542] - Hfile owner is not hbase
-* [KYLIN-1544] - Make hbase encoding and block size configurable just like hbase compression
-* [KYLIN-1561] - Refactor storage engine(v2) to be extension friendly
-* [KYLIN-1566] - Add and use a separate kylin_job_conf.xml for in-mem cubing
-* [KYLIN-1567] - Front-end work for KYLIN-1557
-* [KYLIN-1578] - Coprocessor thread voluntarily stop itself when it reaches timeout
-* [KYLIN-1579] - IT preparation classes like BuildCubeWithEngine should exit with status code upon build * exception
-* [KYLIN-1580] - Use 1 byte instead of 8 bytes as column indicator in fact distinct MR job
-* [KYLIN-1584] - Specify region cut size in cubedesc and leave the RealizationCapacity in model as a hint
-* [KYLIN-1585] - make MAX_HBASE_FUZZY_KEYS in GTScanRangePlanner configurable
-* [KYLIN-1587] - show cube level configuration overwrites properties in CubeDesigner
-* [KYLIN-1591] - enabling different block size setting for small column families
-* [KYLIN-1599] - Add "isShardBy" flag in rowkey panel
-* [KYLIN-1601] - Need not to shrink scan cache when hbase rows can be large
-* [KYLIN-1602] - User could dump hbase usage for diagnosis
-* [KYLIN-1614] - Bring more information in diagnosis tool
-* [KYLIN-1621] - Use deflate level 1 to enable compression "on the fly"
-* [KYLIN-1623] - Make the hll precision for data samping configurable
-* [KYLIN-1624] - HyperLogLogPlusCounter will become inaccurate when there're billions of entries
-* [KYLIN-1625] - GC log overwrites old one after restart Kylin service
-* [KYLIN-1627] - add backdoor toggle to dump binary cube storage response for further analysis
-* [KYLIN-1731] - allow non-admin user to edit 'Advenced Setting' step in CubeDesigner
-
-__Bug__
-
-* [KYLIN-989] - column width is too narrow for timestamp field
-* [KYLIN-1197] - cube data not updated after purge
-* [KYLIN-1305] - Can not get more than one system admin email in config
-* [KYLIN-1551] - Should check and ensure TopN measure has two parameters specified
-* [KYLIN-1563] - Unsafe check of initiated in HybridInstance#init()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-* [KYLIN-1574] - Unclosed ResultSet in QueryService#getMetadata()
-* [KYLIN-1581] - NPE in Job engine when execute MR job
-* [KYLIN-1593] - Agg group info will be blank when trying to edit cube
-* [KYLIN-1595] - columns in metric could also be in filter/groupby
-* [KYLIN-1596] - UT fail, due to String encoding CharsetEncoder mismatch
-* [KYLIN-1598] - cannot run complete UT at windows dev machine
-* [KYLIN-1604] - Concurrent write issue on hdfs when deploy coprocessor
-* [KYLIN-1612] - Cube is ready but insight tables not result
-* [KYLIN-1615] - UT 'HiveCmdBuilderTest' fail on 'testBeeline'
-* [KYLIN-1619] - Can't find any realization coursed by Top-N measure
-* [KYLIN-1622] - sql not executed and report topN error
-* [KYLIN-1631] - Web UI of TopN, "group by" column couldn't be a dimension column
-* [KYLIN-1634] - Unclosed OutputStream in SSHClient#scpFileToLocal()
-* [KYLIN-1637] - Sample cube build error
-* [KYLIN-1638] - Unclosed HBaseAdmin in ToolUtil#getHBaseMetaStoreId()
-* [KYLIN-1639] - Wrong logging of JobID in MapReduceExecutable.java
-* [KYLIN-1643] - Kylin's hll counter count "NULL" as a value
-* [KYLIN-1647] - Purge a cube, and then build again, the start date is not updated
-* [KYLIN-1650] - java.io.IOException: Filesystem closed - in Cube Build Step 2 (MapR)
-* [KYLIN-1655] - function name 'getKylinPropertiesAsInputSteam' misspelt
-* [KYLIN-1660] - Streaming/kafka config not match with table name
-* [KYLIN-1662] - tableName got truncated during request mapping for /tables/tableName
-* [KYLIN-1666] - Should check project selection before add a stream table
-* [KYLIN-1667] - Streaming table name should allow enter "DB.TABLE" format
-* [KYLIN-1673] - make sure metadata in 1.5.2 compatible with 1.5.1
-* [KYLIN-1678] - MetaData clean just clean FINISHED and DISCARD jobs,but job correct status is SUCCEED
-* [KYLIN-1685] - error happens while execute a sql contains '?' using Statement
-* [KYLIN-1688] - Illegal char on result dataset table
-* [KYLIN-1721] - KylinConfigExt lost base properties when store into file
-* [KYLIN-1722] - IntegerDimEnc serialization exception inside coprocessor
-
-## v1.5.1 - 2016-04-13
-_Tag:_ [kylin-1.5.1](https://github.com/apache/kylin/tree/kylin-1.5.1)
-
-This version is backward compatible with v1.5.0. But after upgrading to v1.5.1 from v1.5.0, you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
-
-__Highlights__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1546] - Tool to dump information for diagnosis
-
-__New Feature__
-
-* [KYLIN-1122] - Kylin support detail data query from fact table
-* [KYLIN-1378] - Add UI for TopN measure
-* [KYLIN-1492] - Custom dimension encoding
-* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
-* [KYLIN-1501] - Run some classes at the beginning of kylin server startup
-* [KYLIN-1503] - Print version information with kylin.sh
-* [KYLIN-1531] - Add smoke test scripts
-* [KYLIN-1534] - Cube specific config, override global kylin.properties
-* [KYLIN-1540] - REST API for deleting segment
-* [KYLIN-1541] - IntegerDimEnc, custom dimension encoding for integers
-* [KYLIN-1546] - Tool to dump information for diagnosis
-* [KYLIN-1550] - Persist some recent bad query
-
-__Improvement__
-
-* [KYLIN-1490] - Use InstallShield 2015 to generate ODBC Driver setup files
-* [KYLIN-1498] - cube desc signature not calculated correctly
-* [KYLIN-1500] - streaming_fillgap cause out of memory
-* [KYLIN-1502] - When cube is not empty, only signature consistent cube desc updates are allowed
-* [KYLIN-1504] - Use NavigableSet to store rowkey and use prefix filter to check resource path prefix instead String comparison on tomcat side
-* [KYLIN-1505] - Combine guava filters with Predicates.and
-* [KYLIN-1543] - GTFilterScanner performance tuning
-* [KYLIN-1557] - Enhance the check on aggregation group dimension number
-
-__Bug__
-
-* [KYLIN-1373] - need to encode export query url to get right result in query page
-* [KYLIN-1434] - Kylin Job Monitor API: /kylin/api/jobs is too slow in large kylin deployment
-* [KYLIN-1472] - Export csv get error when there is a plus sign in the sql
-* [KYLIN-1486] - java.lang.IllegalArgumentException: Too many digits for NumberDictionary
-* [KYLIN-1491] - Should return base cuboid as valid cuboid if no aggregation group matches
-* [KYLIN-1493] - make ExecutableManager.getInstance thread safe
-* [KYLIN-1497] - Make three <class>.getInstance thread safe
-* [KYLIN-1507] - Couldn't find hive dependency jar on some platform like CDH
-* [KYLIN-1513] - Time partitioning doesn't work across multiple days
-* [KYLIN-1514] - MD5 validation of Tomcat does not work when package tar
-* [KYLIN-1521] - Couldn't refresh a cube segment whose start time is before 1970-01-01
-* [KYLIN-1522] - HLLC is incorrect when result is feed from cache
-* [KYLIN-1524] - Get "java.lang.Double cannot be cast to java.lang.Long" error when Top-N metris data type is BigInt
-* [KYLIN-1527] - Columns with all NULL values can't be queried
-* [KYLIN-1537] - Failed to create flat hive table, when name is too long
-* [KYLIN-1538] - DoubleDeltaSerializer cause obvious error after deserialize and serialize
-* [KYLIN-1553] - Cannot find rowkey column "COL_NAME" in cube CubeDesc
-* [KYLIN-1564] - Unclosed table in BuildCubeWithEngine#checkHFilesInHBase()
-* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
-
-## v1.5.0 - 2016-03-12
-_Tag:_ [kylin-1.5.0](https://github.com/apache/kylin/tree/kylin-1.5.0)
-
-__This version is not backward compatible.__ The format of cube and metadata has been refactored to gain a multi-fold performance improvement. We recommend this version, but do not suggest upgrading from a previous deployment directly. A clean, new deployment of this version is strongly recommended. If you have to upgrade from a previous deployment, an upgrade guide will be provided by the community later.
-
-__Highlights__
-
-* [KYLIN-875] - A plugin-able architecture, to allow alternative cube engine / storage engine / data source.
-* [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster based on a comparison of hundreds of jobs.
-* [KYLIN-942] - A better storage engine, which makes queries roughly 2 times faster (especially slow queries) based on a comparison of tens of thousands of SQL statements.
-* [KYLIN-738] - EXPERIMENTAL support for streaming cubing, sourcing from Kafka and building cubes in-mem at minute-level intervals.
-* [KYLIN-242] - Redesign aggregation group, support of 20+ dimensions made easy.
-* [KYLIN-976] - Custom aggregation types (or UDF in other words).
-* [KYLIN-943] - TopN aggregation type.
-* [KYLIN-1065] - ODBC compatible with Tableau 9.1, MS Excel, MS PowerBI.
-* [KYLIN-1219] - Kylin support SSO with Spring SAML.
-
-__New Feature__
-
-* [KYLIN-528] - Build job flow for Inverted Index building
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-596] - Support Excel and Power BI
-* [KYLIN-599] - Near real-time support
-* [KYLIN-607] - More efficient cube building
-* [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
-* [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
-* [KYLIN-728] - IGTStore implementation which use disk when memory runs short
-* [KYLIN-738] - StreamingOLAP
-* [KYLIN-749] - support timestamp type in II and cube
-* [KYLIN-774] - Automatically merge cube segments
-* [KYLIN-868] - add a metadata backup/restore script in bin folder
-* [KYLIN-886] - Data Retention for streaming data
-* [KYLIN-906] - cube retention
-* [KYLIN-943] - Approximate TopN supported by Cube
-* [KYLIN-986] - Generalize Streaming scripts and put them into code repository
-* [KYLIN-1219] - Kylin support SSO with Spring SAML
-* [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-* [KYLIN-1458] - Checking the consistency of cube segment host with the environment after cube migration
-* [KYLIN-1483] - Command tool to visualize all cuboids in a cube/segment
-
-__Improvement__
-
-* [KYLIN-225] - Support edit "cost" of cube
-* [KYLIN-410] - table schema not expand when clicking the database text
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-633] - Support Timestamp for cube partition
-* [KYLIN-649] - move the cache layer from service tier back to storage tier
-* [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
-* [KYLIN-663] - Push time condition down to ii endpoint
-* [KYLIN-668] - Out of memory in mapper when building cube in mem
-* [KYLIN-671] - Implement fine grained cache for cube and ii
-* [KYLIN-674] - IIEndpoint return metrics as well
-* [KYLIN-675] - cube&model designer refactor
-* [KYLIN-678] - optimize RowKeyColumnIO
-* [KYLIN-697] - Reorganize all test cases to unit test and integration tests
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS
-* [KYLIN-708] - replace BitSet for AggrKey
-* [KYLIN-712] - some enhancement after code review
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
-* [KYLIN-719] - bundle statistics info in endpoint response
-* [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
-* [KYLIN-721] - streaming cli support third-party streammessage parser
-* [KYLIN-726] - add remote cli port configuration for KylinConfig
-* [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
-* [KYLIN-734] - Push cache layer to each storage engine
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
-* [KYLIN-760] - Improve the hasing performance in Sampling cuboid size
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-773] - performance is slow list jobs
-* [KYLIN-783] - update hdp version in test cases to 2.2.4
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-* [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
-* [KYLIN-816] - Allow gap in cube segments, for streaming case
-* [KYLIN-822] - list cube overview in one page
-* [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-844] - add backdoor toggles to control query behavior
-* [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-869] - Enhance mail notification
-* [KYLIN-870] - Speed up hbase segments info by caching
-* [KYLIN-871] - growing dictionary for streaming case
-* [KYLIN-874] - script for fill streaming gap automatically
-* [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
-* [KYLIN-879] - add a tool to collect orphan hbases
-* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-902] - move streaming related parameters into StreamingConfig
-* [KYLIN-909] - Adapt GTStore to hbase endpoint
-* [KYLIN-919] - more friendly UI for 0.8
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-927] - Real time cubes merging skipping gaps
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-* [KYLIN-942] - Cube parallel scan on Hbase
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-960] - Split storage module to core-storage and storage-hbase
-* [KYLIN-973] - add a tool to analyse streaming output logs
-* [KYLIN-984] - Behavior change in streaming data consuming
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1018] - make TimedJsonStreamParser default parser
-* [KYLIN-1019] - Remove v1 cube model classes from code repository
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1036] - Code Clean, remove code which never used at front end
-* [KYLIN-1041] - ADD Streaming UI
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1068] - Optimize the memory footprint for TopN counter
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1096] - Deprecate minicluster
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
-* [KYLIN-1135] - Pscan use share thread pool
-* [KYLIN-1136] - Distinguish fast build mode and complete build mode
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1190] - Make memory budget per query configurable
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
-* [KYLIN-1237] - Revisit on cube size estimation
-* [KYLIN-1239] - attribute each htable with team contact and owner name
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
-* [KYLIN-1266] - Tune release package size
-* [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
-* [KYLIN-1268] - Fix 2 kylin logs
-* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
-* [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
-* [KYLIN-1283] - Replace GTScanRequest's SerDer from Kryo to manual
-* [KYLIN-1287] - UI update for streaming build action
-* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
-* [KYLIN-1301] - fix segment pruning failure
-* [KYLIN-1308] - query storage v2 enable parallel cube visiting
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1318] - enable gc log for kylin server instance
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1327] - Tool for batch updating host information of htables
-* [KYLIN-1333] - Kylin Entity Permission Control
-* [KYLIN-1334] - allow truncating string for fixed length dimensions
-* [KYLIN-1341] - Display JSON of Data Model in the dialog
-* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-* [KYLIN-1424] - Should support multiple selection in picking up dimension/measure column step in data model wizard
-* [KYLIN-1438] - auto generate aggregation group
-* [KYLIN-1474] - expose list, remove and cat in metastore.sh
-* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
-
-* [KYLIN-242] - Redesign aggregation group
-* [KYLIN-770] - optimize memory usage for GTSimpleMemStore GTAggregationScanner
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-980] - FactDistinctColumnsJob to support high cardinality columns
-* [KYLIN-1079] - Manage large numbers of entries in metadata store
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1222] - restore testing v1 query engine in case need it as a fallback for v2
-* [KYLIN-1232] - Refine ODBC Connection UI
-* [KYLIN-1237] - Revisit on cube size estimation
-* [KYLIN-1239] - attribute each htable with team contact and owner name
-* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
-* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
-* [KYLIN-1266] - Tune release package size
-* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
-* [KYLIN-1283] - Replace GTScanRequest's SerDer from Kryo to manual
-* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
-* [KYLIN-1301] - fix segment pruning failure
-* [KYLIN-1308] - query storage v2 enable parallel cube visiting
-* [KYLIN-1318] - enable gc log for kylin server instance
-* [KYLIN-1327] - Tool for batch updating host information of htables
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
-* [KYLIN-1366] - Bind metadata version with release version
-* [KYLIN-1389] - Formatting ODBC Drive C++ code
-* [KYLIN-1405] - Aggregation group validation
-* [KYLIN-1465] - Beautify kylin log to ease both production troubleshooting and CI debugging
-* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-521] - No verification even if join condition is invalid
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-635] - IN clause within CASE when is not working
-* [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-665] - buffer error while in mem cubing
-* [KYLIN-688] - possible memory leak for segmentIterator
-* [KYLIN-731] - Parallel stream build will throw OOM
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-748] - II returned result not correct when decimal omits precision and scale
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-766] - round BigDecimal according to the DataType scale
-* [KYLIN-769] - empty segment build fail due to no dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-778] - can't build cube after package to binary
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted
-* [KYLIN-801] - fix remaining issues on query cache and storage cache
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
-* [KYLIN-817] - Support Extract() on timestamp column
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
-* [KYLIN-834] - optimize StreamingUtil binary search perf
-* [KYLIN-837] - fix submit build type when refresh cube
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-905] - Boolean type not supported
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-951] - Drop RowBlock concept from GridTable general API
-* [KYLIN-952] - User can trigger a Refresh job on a non-existing cube segment via REST API
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Doesn't support aggregation AVG while executing SQL
-* [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
-* [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
-* [KYLIN-993] - implement substr support in kylin
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-* [KYLIN-1027] - HBase Token not added after KYLIN-1007
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
-* [KYLIN-1115] - Clean up ODBC driver code
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1127] - Refactor CacheService
-* [KYLIN-1137] - TopN measure need support dictionary merge
-* [KYLIN-1138] - Bad CubeDesc signature causes segment to be deleted when enabling a cube
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1151] - Menu items should be aligned when create new model
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.3 to 1.4
-* [KYLIN-1171] - KylinConfig truncate bug
-* [KYLIN-1179] - Cannot use String as partition column
-* [KYLIN-1180] - Some NPE in Dictionary
-* [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
-* [KYLIN-1182] - DataModelDesc needs to be updated from v1.x to v2.0
-* [KYLIN-1192] - Cannot edit data model desc without name change
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-* [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
-* [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
-* [KYLIN-1225] - Only 15 cubes listed in the /models page
-* [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1269] - Can not get last error message in email
-* [KYLIN-1271] - Create streaming table layer will disappear if click on outside
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1282] - Comparison filter on Date/Time column not work for query
-* [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
-* [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
-* [KYLIN-1306] - Null strings are not applied during fast cubing
-* [KYLIN-1314] - Display issue for aggregation groups
-* [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
-* [KYLIN-1337] - Sort cube name doesn't work well
-* [KYLIN-1346] - IllegalStateException happens in SparkCubing
-* [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
-* [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
-* [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
-* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
-* [KYLIN-1357] - Cloned cube has build time information
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
-* [KYLIN-1387] - Streaming cubing doesn't generate cuboids files on HDFS, cause cube merge failure
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
-* [KYLIN-1413] - Row key column's sequence is wrong after saving the cube
-* [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
-* [KYLIN-1417] - TimedJsonStreamParser is case sensitive for message's property name
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1420] - Query returns empty result on partition column's boundary condition
-* [KYLIN-1421] - Cube "source record" is always zero for streaming
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1430] - Not add "STREAMING_" prefix when import a streaming table
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
-
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1344] - Bitmap measure defined after TopN measure can cause merge to fail
-* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
-* [KYLIN-1386] - Duplicated projects appear in connection dialog after clicking CONNECT button multiple times
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
-* [KYLIN-1473] - Cannot have comments in the end of New Query textbox
-
-__Task__
-
-* [KYLIN-529] - Migrate ODBC source code to Apache Git
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-762] - remove quartz dependency
-* [KYLIN-763] - remove author name
-* [KYLIN-820] - support streaming cube of exact timestamp range
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-1112] - Reorganize InvertedIndex source codes into plug-in architecture
-
-* [KYLIN-808] - streaming cubing support split by data timestamp
-* [KYLIN-1427] - Enable partition date column to support date and hour as separate columns for increment cube build
-
-__Test__
-
-* [KYLIN-677] - benchmark for Endpoint without dictionary
-* [KYLIN-826] - create new test case for streaming building & queries
-
-
-## v1.3.0 - 2016-03-14
-_Tag:_ [kylin-1.3.0](https://github.com/apache/kylin/tree/kylin-1.3.0)
-
-__New Feature__
-
-* [KYLIN-579] - Unload table from Kylin
-* [KYLIN-976] - Support Custom Aggregation Types
-* [KYLIN-1054] - Support Hive client Beeline
-* [KYLIN-1128] - Clone Cube Metadata
-* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
-
-__Improvement__
-
-* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
-* [KYLIN-1014] - Support kerberos authentication while getting status from RM
-* [KYLIN-1074] - Load hive tables with selecting mode
-* [KYLIN-1082] - Hive dependencies should be add to tmpjars
-* [KYLIN-1132] - make filtering input easier in creating cube
-* [KYLIN-1201] - Enhance project level ACL
-* [KYLIN-1211] - Add 'Enable Cache' button in System page
-* [KYLIN-1234] - Cube ACL does not work
-* [KYLIN-1240] - Fix link and typo in README
-* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-* [KYLIN-1246] - get cubes API update - offset,limit not required
-* [KYLIN-1251] - add toggle event for tree label
-* [KYLIN-1259] - Change font/background color of job progress
-* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-* [KYLIN-1323] - Improve performance of converting data to hfile
-* [KYLIN-1333] - Kylin Entity Permission Control 
-* [KYLIN-1343] - Upgrade calcite version to 1.6
-* [KYLIN-1365] - Kylin ACL enhancement
-* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-
-__Bug__
-
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-* [KYLIN-1078] - Cannot have comments in the end of New Query textbox
-* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
-* [KYLIN-1110] - can not see project options after clearing browser cookie and cache
-* [KYLIN-1159] - problem about kylin web UI
-* [KYLIN-1214] - Remove "Back to My Cubes" link in non-edit mode
-* [KYLIN-1215] - minor, update website member's info on community page
-* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-* [KYLIN-1236] - redirect to home page when input invalid url
-* [KYLIN-1250] - Got NPE when discarding a job
-* [KYLIN-1254] - cube model will be overridden while creating a new cube with the same name
-* [KYLIN-1260] - Job status labels are not in same style
-* [KYLIN-1274] - Query from JDBC is partial results by default
-* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-* [KYLIN-1330] - UI create model: Press enter will go back to pre step
-* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
-* [KYLIN-1342] - Typo in doc
-* [KYLIN-1354] - Couldn't edit a cube if it has no "partition date" set
-* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
-* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
-* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
-* [KYLIN-1423] - HBase size precision issue
-* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
-* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
-* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
-* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
-* [KYLIN-1469] - Hive dependency jars are hard coded in test
-
-__Test__
-
-* [KYLIN-1335] - Disable PrintResult in KylinQueryTest
-
-
-## v1.2 - 2015-12-15
-_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
-
-__New Feature__
-
-* [KYLIN-596] - Support Excel and Power BI
-    
-__Improvement__
-
-* [KYLIN-389] - Can't edit cube name for existing cubes
-* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
-* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-* [KYLIN-1058] - Remove "right join" during model creation
-* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-* [KYLIN-1065] - ODBC driver support tableau 9.1
-* [KYLIN-1069] - update tip for 'Partition Column' on UI
-* [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
-* [KYLIN-1095] - Update AdminLTE to latest version
-* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-* [KYLIN-1160] - Set default logger appender of log4j for JDBC
-* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
-* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-* [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after being migrated
-* [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
-* [KYLIN-1190] - Make memory budget per query configurable
-
-__Bug__
-
-* [KYLIN-693] - Couldn't change a cube's name after it be created
-* [KYLIN-930] - can't see realizations under each project at project list page
-* [KYLIN-966] - When user creates a cube, if entering a name which already exists, Kylin will throw an exception at the last step
-* [KYLIN-1033] - Error when joining two sub-queries
-* [KYLIN-1039] - Filter like (A or false) yields wrong result
-* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-* [KYLIN-1070] - changing case in table name in model desc
-* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-* [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
-* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-* [KYLIN-1120] - MapReduce job read local meta issue
-* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-* [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
-* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-* [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
-* [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
-* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-
-__Task__
-
-* [KYLIN-1170] - Update website and status files to TLP
-
-
-## v1.1.1-incubating - 2015-11-04
-_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
-
-__Improvement__
-
-* [KYLIN-999] - License check and cleanup for release
-
-## v1.1-incubating - 2015-10-25
-_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
-
-__New Feature__
-
-* [KYLIN-222] - Web UI to Display CubeInstance Information
-* [KYLIN-906] - cube retention
-* [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
-
-__Bug__
-
-* [KYLIN-457] - log4j error and dup lines in kylin.log
-* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-* [KYLIN-740] - Slowness with many IN() values
-* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-* [KYLIN-771] - query cache is not evicted when metadata changes
-* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
-* [KYLIN-847] - "select * from fact" does not work on 0.7 branch
-* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-* [KYLIN-944] - update doc about how to consume kylin API in javascript
-* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-* [KYLIN-952] - User can trigger a Refresh job on a non-existing cube segment via REST API
-* [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
-* [KYLIN-961] - Can't get cube source record count.
-* [KYLIN-967] - Dump running queries on memory shortage
-* [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
-* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-* [KYLIN-983] - Query sql offset keyword bug
-* [KYLIN-985] - Doesn't support aggregation AVG while executing SQL
-* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-* [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
-* [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
-* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million 
-* [KYLIN-1026] - Error message for git check is not correct in package.sh
-
-__Improvement__
-
-* [KYLIN-343] - Enable timeout on query 
-* [KYLIN-367] - automatically backup metadata everyday
-* [KYLIN-589] - Cleanup Intermediate hive table after cube build
-* [KYLIN-772] - Continue cube job when hive query return empty resultset
-* [KYLIN-858] - add snappy compression support
-* [KYLIN-882] - check access to kylin.hdfs.working.dir
-* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-* [KYLIN-957] - Support HBase in a separate cluster
-* [KYLIN-965] - Allow user to configure the region split size for cube
-* [KYLIN-971] - kylin display timezone on UI
-* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-* [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
-* [KYLIN-999] - License check and cleanup for release
-* [KYLIN-1013] - Make hbase client configurations like timeout configurable
-* [KYLIN-1025] - Save cube change is very slow
-* [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
-* [KYLIN-1035] - Validate [Project] before create Cube on UI
-* [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
-* [KYLIN-1047] - Upgrade to Calcite 1.4
-* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-
-
-## v1.0-incubating - 2015-09-06
-_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
-
-__New Feature__
-
-* [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
-
-__Bug__
-
-* [KYLIN-404] - Can't get cube source record size.
-* [KYLIN-626] - JDBC error for float and double values
-* [KYLIN-751] - Max on negative double values is not working
-* [KYLIN-757] - Cache wasn't flushed in cluster mode
-* [KYLIN-780] - Upgrade Calcite to 1.0
-* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-* [KYLIN-889] - Support more than one HDFS files of lookup table
-* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-* [KYLIN-914] - Scripts shebang should use /bin/bash
-* [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
-* [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
-* [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
-* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-* [KYLIN-936] - can not see job step log 
-* [KYLIN-940] - NPE when close the null resource
-* [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
-* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-* [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
-
-__Improvement__
-
-* [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
-* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-* [KYLIN-792] - kylin performance insight [dashboard]
-* [KYLIN-838] - improve performance of job query
-* [KYLIN-842] - Add version and commit id into binary package
-* [KYLIN-844] - add backdoor toggles to control query behavior 
-* [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
-* [KYLIN-866] - Confirm with user when he selects empty segments to merge
-* [KYLIN-867] - Hybrid model for multiple realizations/cubes
-* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
-* [KYLIN-881] - Upgrade Calcite to 1.3.0
-* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-* [KYLIN-893] - Remove the dependency on quartz and metrics
-* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-* [KYLIN-933] - friendly UI to use data model
-* [KYLIN-938] - add friendly tip to page when rest request failed
-
-__Task__
-
-* [KYLIN-884] - Restructure docs and website
-* [KYLIN-907] - Improve Kylin community development experience
-* [KYLIN-954] - Release v1.0 (formerly v0.7.3)
-* [KYLIN-863] - create empty segment when there is no data in one single streaming batch
-* [KYLIN-908] - Help community developer to setup develop/debug environment
-* [KYLIN-931] - Port KYLIN-921 to 0.8 branch
-
-## v0.7.2-incubating - 2015-07-21
-_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
-
-__Main Changes:__  
-Critical bug fixes after the v0.7.1 release; please go with this version directly for new cases, and upgrade to it for existing deployments.
-
-__Bug__  
-
-* [KYLIN-514] - Error message is not helpful to user when doing something in JSON Editor window
-* [KYLIN-598] - Kylin detecting hive table delim failure
-* [KYLIN-660] - Make configurable of dictionary cardinality cap
-* [KYLIN-765] - When a cube job is failed, still be possible to submit a new job
-* [KYLIN-814] - Duplicate columns error for subqueries on fact table
-* [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optiq)
-* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-* [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
-* [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
-* [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
-* [KYLIN-840] - HBase table compress not enabled even LZO is installed
-* [KYLIN-848] - Couldn't resume or discard a cube job
-* [KYLIN-849] - Couldn't query metrics on lookup table PK
-* [KYLIN-865] - Cube has been built but couldn't query; In log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found"
-* [KYLIN-873] - cancel button does not work when [resume][discard] job
-* [KYLIN-888] - "Jobs" page only shows 15 job at max, the "Load more" button was disappeared
-
-__Improvement__
-
-* [KYLIN-159] - Metadata migrate tool 
-* [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
-* [KYLIN-207] - Support SQL pagination
-* [KYLIN-209] - Merge tail small MR jobs into one
-* [KYLIN-210] - Split heavy MR job to more small jobs
-* [KYLIN-221] - Convert cleanup and GC to job 
-* [KYLIN-284] - add log for all Rest API Request
-* [KYLIN-488] - Increase HDFS block size 1GB
-* [KYLIN-600] - measure return type update
-* [KYLIN-611] - Allow Implicit Joins
-* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-* [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
-* [KYLIN-752] - Improved IN clause performance
-* [KYLIN-773] - performance is slow list jobs
-* [KYLIN-839] - Optimize Snapshot table memory usage 
-
-__New Feature__
-
-* [KYLIN-211] - Bitmap Inverted Index
-* [KYLIN-285] - Enhance alert program for whole system
-* [KYLIN-467] - Validation Rule: Check duplicate rows in lookup table
-* [KYLIN-471] - Support "Copy" on grid result
-
-__Task__
-
-* [KYLIN-7] - Enable maven checkstyle plugin
-* [KYLIN-885] - Release v0.7.2
-* [KYLIN-812] - Upgrade to Calcite 0.9.2
-
-## v0.7.1-incubating (First Apache Release) - 2015-06-10  
-_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
-
-Apache Kylin v0.7.1-incubating was rolled out on June 10, 2015. This is also the first Apache release since joining the Apache Incubator.
-
-__Main Changes:__
-
-* Package renamed from com.kylinolap to org.apache.kylin
-* Code cleaned up to apply Apache License policy
-* Easy install and setup with a bunch of scripts and automation
-* Job engine refactored into a generic job manager for all jobs, with improved efficiency
-* Support for Hive databases other than 'default'
-* JDBC driver available for clients to interact with the Kylin server
-* Binary package available for download
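-
-For example, a client can query Kylin through the JDBC driver like this (a minimal sketch; the host, port, project name, table and credentials are placeholders for your own deployment):
-
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.Statement;
-import java.util.Properties;
-
-public class KylinJdbcExample {
-    public static void main(String[] args) throws Exception {
-        // Register the Kylin JDBC driver
-        Class.forName("org.apache.kylin.jdbc.Driver");
-
-        Properties props = new Properties();
-        props.setProperty("user", "ADMIN");      // placeholder credentials
-        props.setProperty("password", "KYLIN");
-
-        // URL format: jdbc:kylin://<host>:<port>/<project>
-        try (Connection conn = DriverManager.getConnection(
-                "jdbc:kylin://localhost:7070/learn_kylin", props);
-             Statement stmt = conn.createStatement();
-             ResultSet rs = stmt.executeQuery(
-                     "select part_dt, sum(price) from kylin_sales group by part_dt")) {
-            while (rs.next()) {
-                System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
-            }
-        }
-    }
-}
-```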
-
-__New Feature__
-
-* [KYLIN-327] - Binary distribution 
-* [KYLIN-368] - Move MailService to Common module
-* [KYLIN-540] - Data model upgrade for legacy cube descs
-* [KYLIN-576] - Refactor expansion rate expression
-
-__Task__
-
-* [KYLIN-361] - Rename package name with Apache Kylin
-* [KYLIN-531] - Rename package name to org.apache.kylin
-* [KYLIN-533] - Job Engine Refactoring
-* [KYLIN-585] - Simplify deployment
-* [KYLIN-586] - Add Apache License header in each source file
-* [KYLIN-587] - Remove hard copy of javascript libraries
-* [KYLIN-624] - Add dimension and metric info into DataModel
-* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
-* [KYLIN-669] - Release v0.7.1 as first apache release
-* [KYLIN-670] - Update pom with "incubating" in version number
-* [KYLIN-737] - Generate and sign release package for review and vote
-* [KYLIN-795] - Release after success vote
-
-__Bug__
-
-* [KYLIN-132] - Job framework
-* [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
-* [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
-* [KYLIN-230] - Error when create HTable
-* [KYLIN-255] - Error when an aggregated function appears twice in select clause
-* [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
-* [KYLIN-399] - refreshed segment not correctly published to cube
-* [KYLIN-412] - No exception or message when sync up table which can't access
-* [KYLIN-421] - Hive table metadata issue
-* [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
-* [KYLIN-508] - Too high cardinality is not suitable for dictionary!
-* [KYLIN-509] - Order by on fact table not works correctly
-* [KYLIN-517] - Always delete the last one of Add Lookup page button even if deleting the first join condition
-* [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
-* [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
-* [KYLIN-556] - error tip enhance when cube detail return empty
-* [KYLIN-570] - Need not to call API before sending login request
-* [KYLIN-571] - Dimensions lost when creating cube through JSON Editor
-* [KYLIN-572] - HTable size is wrong
-* [KYLIN-581] - unable to build cube
-* [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
-* [KYLIN-588] - Error when run package.sh
-* [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
-* [KYLIN-594] - Making changes in build and packaging with respect to apache release process
-* [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
-* [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
-* [KYLIN-614] - find hive dependency shell file is unable to set the hive dependency correctly
-* [KYLIN-615] - Unable add measures in Kylin web UI
-* [KYLIN-619] - Cube build fails with hive+tez
-* [KYLIN-620] - Wrong duration number
-* [KYLIN-621] - SecurityException when running MR job
-* [KYLIN-627] - Hive tables' partition column was not sync into Kylin
-* [KYLIN-628] - Couldn't build a new created cube
-* [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
-* [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
-* [KYLIN-638] - kylin.sh stop not working
-* [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
-* [KYLIN-640] - sum of float not working
-* [KYLIN-642] - Couldn't refresh cube segment
-* [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
-* [KYLIN-644] - join table as null error when build the cube
-* [KYLIN-652] - Lookup table alias will be set to null
-* [KYLIN-657] - JDBC Driver not register into DriverManager
-* [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
-* [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
-* [KYLIN-666] - Select float type column got class cast exception
-* [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
-* [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
-* [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
-* [KYLIN-691] - update tomcat download address in download-tomcat.sh
-* [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
-* [KYLIN-703] - UT failed due to unknown host issue
-* [KYLIN-711] - UT failure in REST module
-* [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
-* [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
-
-__Improvement__
-
-* [KYLIN-168] - Installation fails if multiple ZK
-* [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
-* [KYLIN-204] - Kylin web not works properly in IE
-* [KYLIN-217] - Enhance coprocessor with endpoints 
-* [KYLIN-251] - job engine refactoring
-* [KYLIN-261] - derived column validate when create cube
-* [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
-* [KYLIN-324] - Refactor metadata to support InvertedIndex
-* [KYLIN-407] - Validation: There should be no Hive table column using "binary" data type
-* [KYLIN-445] - Rename cube_desc/cube folder
-* [KYLIN-452] - Automatically create local cluster for running tests
-* [KYLIN-498] - Merge metadata tables 
-* [KYLIN-532] - Refactor data model in kylin front end
-* [KYLIN-539] - use hbase command to launch tomcat
-* [KYLIN-542] - add project property feature for cube
-* [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
-* [KYLIN-563] - Wrap kylin start and stop with a script 
-* [KYLIN-567] - More flexible validation of new segments
-* [KYLIN-569] - Support increment+merge job
-* [KYLIN-578] - add more generic configuration for ssh
-* [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
-* [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
-* [KYLIN-634] - Script to import sample data and cube metadata
-* [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
-* [KYLIN-637] - add start&end date for hbase info in cubeDesigner
-* [KYLIN-714] - Add Apache RAT to pom.xml
-* [KYLIN-753] - Make the dependency on hbase-common to "provided"
-* [KYLIN-758] - Update port-forwarding instructions for Hadoop installation on Hortonworks Sandbox.
-* [KYLIN-779] - [UI] jump to cube list after create cube
-* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-
-__Wish__
-
-* [KYLIN-608] - Distinct count for ii storage
-
diff --git a/website/_docs16/tutorial/acl.cn.md b/website/_docs16/tutorial/acl.cn.md
deleted file mode 100644
index 006f831..0000000
--- a/website/_docs16/tutorial/acl.cn.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube Permission Granting Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/acl.html
-version: v1.2
-since: v0.7.1
----
-
-
-
-In the `Cubes` page, double-click a cube row to see its detailed information. Here we focus on the `Access` tab.
-Click the `+Grant` button to grant permission.
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see the details.
-
-![]( /images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-There are also two types of grantee: `User` and `Role`. A `Role` is a group of users who share the same permissions.
-
-### 1. Grant User Permission
-* Select the `User` type, enter the username of the user you want to grant, and select the related permission.
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a user's permission. To remove a user's permission, click the `Revoke` button.
-
-     ![]( /images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. Grant Role Permission
-* Select the `Role` type, choose the group of users you want to grant by clicking the drop-down button, and select a permission.
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change the group's permission. To remove a group's permission, click the `Revoke` button.
diff --git a/website/_docs16/tutorial/acl.md b/website/_docs16/tutorial/acl.md
deleted file mode 100644
index be59d60..0000000
--- a/website/_docs16/tutorial/acl.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Permission
-categories: tutorial
-permalink: /docs16/tutorial/acl.html
-since: v0.7.1
----
-
-In the `Cubes` page, double-click a cube row to see its detailed information. Here we focus on the `Access` tab.
-Click the `+Grant` button to grant permission. 
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
-
-There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see the details.
-
-![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
-
-There are also two types of grantee that a permission can be granted to: `User` and `Role`. A `Role` is a group of users who share the same role.
-
-### 1. Grant User Permission
-* Select the `User` type, enter the username of the user you want to grant, and select the related permission.
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a user's permission. To remove a user's permission, just click the `Revoke` button.
-
-     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
-
-### 2. Grant Role Permission
-* Select the `Role` type, choose the group of users that you want to grant by clicking the drop-down button, and select a permission.
-
-* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry in the table. You can select a different access permission to change a group's permission. To remove a group's permission, just click the `Revoke` button.
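-
-Granting can also be scripted against the REST API that backs this page. The sketch below is an assumption based on Kylin's AccessController (`POST /api/access/{type}/{uuid}`); the endpoint shape, the request fields, the cube uuid and the credentials are placeholders that should be verified against your Kylin version:
-
-```java
-import java.io.OutputStream;
-import java.net.HttpURLConnection;
-import java.net.URL;
-import java.nio.charset.StandardCharsets;
-import java.util.Base64;
-
-public class GrantCubeAccess {
-    public static void main(String[] args) throws Exception {
-        // Assumed endpoint; <cube-uuid> is the uuid of the target cube
-        URL url = new URL("http://localhost:7070/kylin/api/access/CubeInstance/<cube-uuid>");
-        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-        conn.setRequestMethod("POST");
-        conn.setRequestProperty("Content-Type", "application/json");
-        // Kylin's REST API uses HTTP basic authentication
-        String auth = Base64.getEncoder()
-                .encodeToString("ADMIN:KYLIN".getBytes(StandardCharsets.UTF_8));
-        conn.setRequestProperty("Authorization", "Basic " + auth);
-        conn.setDoOutput(true);
-        // Grant READ permission to user "analyst1" (assumed request body)
-        String body = "{\"permission\":\"READ\",\"sid\":\"analyst1\",\"principal\":true}";
-        try (OutputStream os = conn.getOutputStream()) {
-            os.write(body.getBytes(StandardCharsets.UTF_8));
-        }
-        System.out.println("HTTP " + conn.getResponseCode());
-    }
-}
-```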
diff --git a/website/_docs16/tutorial/create_cube.cn.md b/website/_docs16/tutorial/create_cube.cn.md
deleted file mode 100644
index 0f44010..0000000
--- a/website/_docs16/tutorial/create_cube.cn.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube Creation Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/create_cube.html
-version: v1.2
-since: v0.7.1
----
-
-
-### I. Create a Project
-1. Go to the `Query` page from the top menu bar, then click `Manage Projects`.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. Click the `+ Project` button to add a new project.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
-
-3. Fill in the following form and click the `submit` button to send the request.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. After success, a notification will show at the bottom.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. Sync up a Table
-1. Click `Tables` in the top menu bar, then click the `+ Sync` button to load Hive table metadata.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/4 %2Btable.png)
-
-2. Enter the table names and click the `Sync` button to send the request.
-
-   ![](/images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-### III. Create a Cube
-First, click `Cubes` in the top menu bar. Then click the `+Cube` button to enter the cube designer page.
-
-![](/images/Kylin-Cube-Creation-Tutorial/6 %2Bcube.png)
-
-**Step 1. Cube Info**
-
-Fill in the basic cube information. Click `Next` to go to the next step.
-
-You can use letters, numbers and '_' to name your cube (note that spaces are not allowed in the name).
-
-![](/images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-
-**Step 2. Dimensions**
-
-1. Set up the fact table.
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
-
-2. Click the `+Dimension` button to add a new dimension.
-
-    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-%2Bdim.png)
-
-3. Different types of dimensions can be added to a cube. We list some of them here for your reference.
-
-    * Dimensions from the fact table.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
-
-    * Dimensions from a lookup table.
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
-
-        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
-
-    * Dimensions from a lookup table with a hierarchy.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
-
-    * Dimensions from a lookup table with derived dimensions.
-          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
-
-4. Users can edit a dimension after it has been saved.
-   ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
-
-**Step 3. Measures**
-
-1. Click the `+Measure` button to add a new measure.
-   ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-%2Bmeas.png)
-
-2. There are 5 types of measures according to the expression: `SUM`, `MAX`, `MIN`, `COUNT` and `COUNT_DISTINCT`. Choose the return type carefully, as it determines the error rate of `COUNT(DISTINCT)`.
-   * SUM
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
-
-   * MIN
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
-
-   * MAX
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
-
-   * COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
-
-   * DISTINCT_COUNT
-
-     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
-
-**Step 4. Filter**
-
-This step is optional. You can add some condition filters in `SQL` format.
-
-![](/images/Kylin-Cube-Creation-Tutorial/10 filter.png)
-
-**Step 5. Refresh Setting**
-
-This step is designed for incremental cube builds.
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
-
-Select the partition type, partition column and start date.
-
-![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
-
-**Step 6. Advanced Setting**
-
-![](/images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
-
-**Step 7. Overview & Save**
-
-You can review your cube and go back to previous steps to make modifications. Click the `Save` button to complete the cube creation.
-
-![](/images/Kylin-Cube-Creation-Tutorial/13 overview.png)
diff --git a/website/_docs16/tutorial/create_cube.md b/website/_docs16/tutorial/create_cube.md
deleted file mode 100644
index 25b304f..0000000
--- a/website/_docs16/tutorial/create_cube.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Creation
-categories: tutorial
-permalink: /docs16/tutorial/create_cube.html
----
-
-This tutorial will guide you through creating a cube. It requires that you have at least one sample table in Hive; if you don't have one, you can follow this to create some sample data.
-  
-### I. Create a Project
-1. Go to the `Query` page in the top menu bar, then click `Manage Projects`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
-
-2. Click the `+ Project` button to add a new project.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/2 +project.png)
-
-3. Enter a project name, e.g., "Tutorial", with a description (optional), then click the `submit` button to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
-
-4. After success, the project will show in the table.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
-
-### II. Sync up Hive Table
-1. Click `Model` in the top bar and then click the `Data Source` tab on the left; it lists all the tables loaded into Kylin. Click the `Load Hive Table` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
-
-2. Enter the Hive table names, separated by commas, and then click `Sync` to send the request.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
-
-3. [Optional] If you want to browse the Hive database to pick tables, click the `Load Hive Table From Tree` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
-
-4. [Optional] Expand the database node, click to select the table to load, and then click `Sync`.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
-
-5. A success message will pop up. In the left `Tables` section, the newly loaded table is added. Clicking the table name will expand its columns.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
-
-6. In the background, Kylin will run a MapReduce job to calculate the approximate cardinality of the newly synced table. After the job finishes, refresh the web page and then click the table name; the cardinality will be shown in the table info.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
-
-
-### III. Create Data Model
-Before creating a cube, you need to define a data model. The data model defines the star schema. One data model can be reused by multiple cubes.
-
-1. Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button and, in the drop-down list, select `New Model`.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
-
-2. Enter a name for the model, with an optional description.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
-
-3. In the `Fact Table` box, select the fact table of this data model.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
-
-4. [Optional] Click the `Add Lookup Table` button to add a lookup table. Select the table name and join type (inner or left).
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
-
-5. [Optional] Click the `New Join Condition` button, select the FK column of the fact table on the left, and select the PK column of the lookup table on the right. Repeat this if there is more than one join column.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
-
-6. Click "OK", repeat step 4 and 5 to add more lookup tables if any. After finished, click "Next".
-
-7. The "Dimensions" page allows to select the columns that will be used as dimension in the child cubes. Click the `Columns` cell of a table, in the drop-down list select the column to the list. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
-
-8. Click "Next" go to the "Measures" page, select the columns that will be used in measure/metrics. The measure column can only from fact table. 
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
-
-9. Click "Next" to the "Settings" page. If the data in fact table increases by day, select the corresponding date column in the `Partition Date Column`, and select the date format, otherwise leave it as blank.
-
-10. [Optional] Select `Cube Size`, which is an indicator of the scale of the cube; by default it is `MEDIUM`.
-
-11. [Optional] If some records should be excluded from the cube, such as dirty data, you can enter the condition in `Filter`.
-
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
-
-12. Click `Save` and then select `Yes` to save the data model. After it is created, the data model will be shown in the left `Models` list.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
-
-### IV. Create Cube
-After the data model is created, you can start creating the cube.
-
-Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button and, in the drop-down list, select `New Cube`.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
-
-
-**Step 1. Cube Info**
-
-Select the data model and enter the cube name; click `Next` to go to the next step.
-
-You can use letters, numbers and '_' to name your cube (blank spaces are not allowed in the name). `Notification List` is a list of email addresses which will be notified on cube job success/failure.
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
-    
-
-**Step 2. Dimensions**
-
-1. Click `Add Dimension`; it pops up two options, "Normal" and "Derived": "Normal" adds a normal independent dimension column, while "Derived" adds a derived dimension column. Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
-
-2. Click "Normal" and then select a dimension column, give it a meaningful name.
-
-    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-normal.png)
-    
-3. [Optional] Click "Derived" and then pickup 1 more multiple columns on lookup table, give them a meaningful name.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-derived.png)
-
-4. Repeat 2 and 3 to add all dimension columns; you can do this in batch for "Normal" dimensions with the `Auto Generator` button.
-
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
-
-5. Click "Next" after select all dimensions.
-
-**Step 3. Measures**
-
-1. Click `+Measure` to add a new measure.
-   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
-
-2. There are 6 types of measures, according to the expression: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT` and `TOP_N`. Select the return type for `COUNT_DISTINCT` and `TOP_N` carefully, as it impacts the cube size.
-   * SUM
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
-
-   * MIN
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
-
-   * MAX
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
-
-   * COUNT
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
-
-   * COUNT_DISTINCT
-   This measure has two implementations: 
-   a) approximate implementation with HyperLogLog: select an acceptable error rate; a lower error rate takes more storage.
-   b) precise implementation with bitmap (see the limitations in https://issues.apache.org/jira/browse/KYLIN-1186). 
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
-
-   Please note: distinct count is a very heavy measure; it is slower to build and query than the other measures.
-
-   * TOP_N
-   The approximate TopN measure pre-calculates the top records in each dimension combination, giving much better query performance than computing them at query time. You need to specify two parameters: the first is the column used as the metric for the top records (aggregated with SUM and then sorted in descending order); the second is the literal ID that identifies the record, such as seller_id (see the example query after this list).
-
-   Select the return type properly, depending on how many top records you need to inspect: top 10, top 100 or top 1000. 
-
-     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
-
-
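-   As an illustration (a sketch, not part of the cube designer UI): a TopN measure defined with SUM on "price" and literal column "seller_id" pre-answers queries of the shape below. The curl call goes through Kylin's query REST API, assuming the sample "kylin_sales" table and the default ADMIN/KYLIN account; adjust the names to your own cube.
-
-{% highlight Groff markup %}
-# Top 100 sellers by total price -- the query shape a TopN measure accelerates
-curl -X POST --user ADMIN:KYLIN -H "Content-Type: application/json" \
-  -d '{"sql": "select seller_id, sum(price) from kylin_sales group by seller_id order by sum(price) desc limit 100", "project": "learn_kylin"}' \
-  http://localhost:7070/kylin/api/query
-{% endhighlight %}
-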
-**Step 4. Refresh Setting**
-
-This step is designed for incremental cube builds. 
-
-`Auto Merge Time Ranges (days)`: automatically merge small segments into medium and large segments. If you don't want auto-merge, remove the two default ranges.
-
-`Retention Range (days)`: only keep segments whose data falls within the given number of past days; older segments are automatically dropped from the head. 0 disables this feature.
-
-`Partition Start Date`: the start date of this cube.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
-
-**Step 5. Advanced Setting**
-
-`Aggregation Groups`: by default Kylin puts all dimensions into one aggregation group; you can create multiple aggregation groups if you know your query patterns well. For the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions", read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
-
-`Rowkeys`: the rowkeys are composed of the dimensions' encoded values. "Dictionary" is the default encoding method; if a dimension is not a good fit for dictionary encoding (e.g., cardinality > 10 million), select "false" and enter a fixed length for that dimension, usually the max length of that column; values longer than that are truncated. Please note that without dictionary encoding the cube size may be much bigger.
-
-You can drag & drop a dimension column to adjust its position in the rowkey. Put mandatory dimensions at the beginning, followed by the dimensions that are heavily involved in filters (where conditions). Put high-cardinality dimensions ahead of low-cardinality dimensions.
-
-
-**Step 6. Overview & Save**
-
-You can review the cube and go back to any previous step to modify it. Click the `Save` button to complete the cube creation.
-
-![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 overview.png)
-
-Cheers! Now the cube is created; you can go ahead to build and play with it.
diff --git a/website/_docs16/tutorial/cube_build_job.cn.md b/website/_docs16/tutorial/cube_build_job.cn.md
deleted file mode 100644
index 8a8822c..0000000
--- a/website/_docs16/tutorial/cube_build_job.cn.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Cube Build and Job Monitoring Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/cube_build_job.html
-version: v1.2
-since: v0.7.1
----
-
-### Cube Build
-First of all, make sure you have permission on the cube you want to build.
-
-1. In the `Cubes` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. A pop-up window appears after the selection.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
-
-3. Click the `END DATE` input box to select the end date of this incremental cube build.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-4. Click `Submit` to send the request.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
-
-   After the request is submitted successfully, you will see a new job in the `Jobs` page.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
-
-5. To discard the job, click the `Discard` button.
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-### Job Monitoring
-In the `Jobs` page, click the job detail button to see the detailed information shown on the right side.
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-The job detail provides a step-by-step record for tracing a job. You can hover over a step status icon to see its basic status and information.
-
-![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-Click the icon buttons shown in each step to see the details: `Parameters`, `Log`, `MRJob`, `EagleMonitoring`.
-
-* Parameters
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
diff --git a/website/_docs16/tutorial/cube_build_job.md b/website/_docs16/tutorial/cube_build_job.md
deleted file mode 100644
index b19ef5a..0000000
--- a/website/_docs16/tutorial/cube_build_job.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-layout: docs16
-title:  Kylin Cube Build and Job Monitoring
-categories: tutorial
-permalink: /docs16/tutorial/cube_build_job.html
----
-
-### Cube Build
-First of all, make sure that you have authority of the cube you want to build.
-
-1. In `Models` page, click the `Action` drop down button in the right of a cube column and select operation `Build`.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
-
-2. There is a pop-up window after the selection, click `END DATE` input box to select end date of this incremental cube build.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
-
-4. Click `Submit` to send the build request. After success, you will see the new job in the `Monitor` page.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
-
-5. The new job is in "pending" status; after a while it starts to run, and you can see the progress by refreshing the web page or clicking the refresh button.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
-
-
-6. Wait for the job to finish. In between, if you want to discard it, click `Actions` -> `Discard`.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
-
-7. After the job is 100% finished, the cube's status becomes "Ready", which means it is ready to serve SQL queries. In the `Model` tab, find the cube and click its name to expand the section; the "HBase" tab lists the cube segments. Each segment has a start/end time, and its underlying HBase table information is also listed.
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
-
-If you have more source data, repeat the steps above to build it into the cube; builds can also be triggered through the REST API, as sketched below.
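-
-A minimal sketch (cube name and time range are placeholders; times are epoch milliseconds, and the endpoint shown is the one documented for recent Kylin versions):
-
-{% highlight Groff markup %}
-# Trigger an incremental build up to 2014-01-01 00:00 UTC (epoch milliseconds)
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" \
-  -d '{"startTime": 0, "endTime": 1388534400000, "buildType": "BUILD"}' \
-  http://localhost:7070/kylin/api/cubes/{your_cube_name}/rebuild
-{% endhighlight %}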
-
-### Job Monitoring
-In the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
-
-The detail information of a job provides a step-by-step record to trace a job. You can hover a step status icon to see the basic status and information.
-
-![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
-
-Click the icon buttons showing in each step to see the details: `Parameters`, `Log`, `MRJob`.
-
-* Parameters
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
-
-* Log
-        
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
-
-* MRJob(MapReduce Job)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
-
-   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
-
-
diff --git a/website/_docs16/tutorial/cube_streaming.md b/website/_docs16/tutorial/cube_streaming.md
deleted file mode 100644
index 63d81eb..0000000
--- a/website/_docs16/tutorial/cube_streaming.md
+++ /dev/null
@@ -1,219 +0,0 @@
----
-layout: docs16
-title:  Scalable Cubing from Kafka (beta)
-categories: tutorial
-permalink: /docs16/tutorial/cube_streaming.html
----
-Kylin v1.6 released the scalable streaming cubing function. It leverages Hadoop to consume data from Kafka and build the cube; you can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This document is a step-by-step tutorial illustrating how to create and build a sample cube.
-
-## Preparation
-To finish this tutorial, you need a Hadoop environment with Kylin v1.6.0 or above installed, and a Kafka (v0.10.0 or above) instance running; earlier Kylin versions have a couple of issues with streaming, so please upgrade your Kylin instance first.
-
-In this tutorial, we will use a Hortonworks HDP 2.2.4 Sandbox VM plus Kafka v0.10.0 (Scala 2.10) as the environment.
-
-## Install Kafka 0.10.0.0 and Kylin
-Don't use HDP 2.2.4's built-in Kafka; it is too old. Stop it first if it is running.
-{% highlight Groff markup %}
-curl -s https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
-
-cd /usr/local/kafka_2.10-0.10.0.0/
-
-bin/kafka-server-start.sh config/server.properties &
-
-{% endhighlight %}
-
-Download Kylin v1.6 from the download page and expand the tarball into the /usr/local/ folder.
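-
-For example, a sketch (the exact package name depends on your HBase version; check the download page):
-
-{% highlight Groff markup %}
-curl -s https://archive.apache.org/dist/kylin/apache-kylin-1.6.0/apache-kylin-1.6.0-bin.tar.gz | tar -xz -C /usr/local/
-{% endhighlight %}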
-
-## Create sample Kafka topic and populate data
-
-Create a sample topic "kylindemo", with 3 partitions:
-
-{% highlight Groff markup %}
-
-bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylindemo
-Created topic "kylindemo".
-{% endhighlight %}
-
-Put sample data into this topic; Kylin has a utility class that can do this:
-
-{% highlight Groff markup %}
-export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
-export KYLIN_HOME=/usr/local/apache-kylin-1.6.0-bin
-
-cd $KYLIN_HOME
-./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylindemo --broker localhost:9092
-{% endhighlight %}
-
-This tool sends 100 records to Kafka every second. Keep it running during this tutorial. You can now check the sample messages with kafka-console-consumer.sh:
-
-{% highlight Groff markup %}
-cd $KAFKA_HOME
-bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kylindemo --from-beginning
-{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
-{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
-
- {% endhighlight %}
-
-## Define a table from streaming
-Start the Kylin server with "$KYLIN_HOME/bin/kylin.sh start", log in to the Kylin web GUI at http://sandbox:7070/kylin/, and select an existing project or create a new one. Click "Model" -> "Data Source", then click the "Add Streaming Table" icon.
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
-
-In the pop-up dialogue, enter a sample record that you got from kafka-console-consumer and click the ">>" button; Kylin parses the JSON message and lists all the properties.
-
-You need to give a logical table name for this streaming data source; the name will be used in SQL queries later. Enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
-
-You need to select a timestamp field that identifies the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here, check "order_time". You can deselect the properties that are not needed for the cube; here, let's keep all fields.
-
-Notice that Kylin supports structured (or "embedded") messages since v1.6 and converts them into a flat table structure, using "_" as the separator of the nested properties by default; for example, the "id" property inside "user" becomes a "user_id" column.
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
-
-
-Click "Next". On this page, provide the Kafka cluster information; Enter "kylindemo" as "Topic" name; The cluster has 1 broker, whose host name is "sandbox", port is "9092", click "Save".
-
-   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
-
-In "Advanced setting" section, the "timeout" and "buffer size" are the configurations for connecting with Kafka, keep them. 
-
-In "Parser Setting", by default Kylin assumes your message is JSON format, and each record's timestamp column (specified by "tsColName") is a bigint (epoch time) value; in this case, you just need set the "tsColumn" to "order_time"; 
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
-
-In a real case where the timestamp value is a string, like "Jul 20, 2016 9:59:17 AM", you need to specify the parser class with "tsParser" and the time pattern with "tsPattern", like this:
-
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
-
-Click "Submit" to save the configurations. Now a "Streaming" table is created.
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
-
-## Define data model
-With the table defined in the previous step, we can now create the data model. The steps are almost the same as creating a normal data model, but there are two requirements:
-
-* A streaming cube doesn't support joins with lookup tables; when defining the data model, select the fact table only, no lookup tables;
-* A streaming cube must be partitioned; if you're going to build the cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column; if at the hour level, select "HOUR_START".
-
-Here we pick 13 dimension columns and 2 measure columns:
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
-
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
-Save the data model.
-
-## Create Cube
-
-A streaming cube is almost the same as a normal cube, but a couple of points need your attention:
-
-* The partition time column should be a dimension of the Cube. In Streaming OLAP the time is always a query condition, and Kylin will leverage this to narrow down the scanned partitions.
-* Don't use "order\_time" as a dimension, as it is too fine-grained; use "minute\_start", "hour\_start" or another derived column, depending on how you will inspect the data.
-* Define "year\_start", "quarter\_start", "month\_start", "day\_start", "hour\_start", "minute\_start" as a hierarchy to reduce the combinations to calculate.
-* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 hours, 1 day, and then 7 days; This will help to control the cube segment number.
-* In the "rowkeys" section, drag&drop the "minute\_start" to the head position, as for streaming queries, the time condition is always appeared; putting it to head will help to narrow down the scan range.
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
-
-	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
-
-Save the cube.
-
-## Run a build
-
-You can trigger the build from the web GUI, by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
-{% endhighlight %}
-
-Please note that the API endpoint is different from that of a normal cube (this URL ends with "build2").
-
-Here 0 means starting from the last position, and 9223372036854775807 (Long.MAX_VALUE) means going to the end of the Kafka topic. If this is the first build (no previous segment), Kylin will seek to the beginning of the topic as the start position. 
-
-In the "Monitor" page, a new job is generated; Wait it 100% finished.
-
-## Run a query
-
-Click the "Insight" tab and compose a SQL query to run, e.g.:
-
- {% highlight Groff markup %}
-select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
- {% endhighlight %}
-
-The result looks like below:
-![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
-
-
-## Automate the build
-
-Once the first build and query succeed, you can schedule incremental builds at a certain frequency. Kylin records the offsets of each build; when it receives a build request, it starts from the last end position and seeks the latest offsets from Kafka. With the REST API you can trigger builds with any scheduling tool, such as Linux cron:
-
-  {% highlight Groff markup %}
-crontab -e
-*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
- {% endhighlight %}
-
-Now you can sit back and watch the cube be built automatically from streaming data. When the cube segments accumulate to a bigger time range, Kylin will automatically merge them into a bigger segment.
-
-## Troubleshooting
-
- * You may encounter the following error when running "kylin.sh":
-{% highlight Groff markup %}
-Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
-	at java.lang.Class.getDeclaredMethods0(Native Method)
-	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
-	at java.lang.Class.getMethod0(Class.java:2856)
-	at java.lang.Class.getMethod(Class.java:1668)
-	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
-	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
-Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
-	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
-	at java.security.AccessController.doPrivileged(Native Method)
-	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
-	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
-	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
-	... 6 more
-{% endhighlight %}
-
-The reason is that Kylin wasn't able to find the proper Kafka client jars; make sure you have set the "KAFKA_HOME" environment variable properly.
-
- * Get "killed by admin" error in the "Build Cube" step
-
- Within a Sandbox VM, YARN may not allocate the requested memory resource to MR job as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml", change the following two parameters like this:
-
- {% highlight Groff markup %}
-    <property>
-        <name>mapreduce.map.memory.mb</name>
-        <value>1072</value>
-        <description></description>
-    </property>
-
-    <property>
-        <name>mapreduce.map.java.opts</name>
-        <value>-Xmx800m</value>
-        <description></description>
-    </property>
- {% endhighlight %}
-
- * If there are already a bunch of history messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
-
-{% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
-{% endhighlight %}
-
- * If a build job fails and you discard it, a hole (or gap) is left in the cube. Since Kylin always builds from the last position, you cannot expect the hole to be filled by normal builds. Kylin provides APIs to check and fill the holes.
-
-Check holes:
- {% highlight Groff markup %}
-curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
-
-If the result is an empty array, there is no hole; otherwise, trigger Kylin to fill them:
- {% highlight Groff markup %}
-curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
-{% endhighlight %}
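-
-The two calls can be combined into a small script for periodic checking; a sketch, assuming the "jq" tool is available on the machine:
-
-{% highlight Groff markup %}
-#!/bin/bash
-# Fill segment holes only when the check API reports a non-empty array
-CUBE=your_cube_name
-HOLES=$(curl -s -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/$CUBE/holes)
-if [ "$(echo "$HOLES" | jq 'length')" != "0" ]; then
-  curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/$CUBE/holes
-fi
-{% endhighlight %}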
-
diff --git a/website/_docs16/tutorial/flink.md b/website/_docs16/tutorial/flink.md
deleted file mode 100644
index f3cb99f..0000000
--- a/website/_docs16/tutorial/flink.md
+++ /dev/null
@@ -1,249 +0,0 @@
----
-layout: docs16
-title:  Connect from Apache Flink
-categories: tutorial
-permalink: /docs16/tutorial/flink.html
----
-
-
-### Introduction
-
-This document describes how to use Kylin as a data source in Apache Flink; 
-
-There were several attempts to do this in Scala and JDBC, but none of them worked: 
-
-* [attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)  
-* [attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)  
-* [attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)  
-* [attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
-
-We will try to use createInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode and access Kylin via JDBC. However, this isn't implemented in Scala, only in Java [MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html). This doc goes step by step through solving these problems.
-
-### Pre-requisites
-
-* An instance of Kylin with a cube is needed; the [Sample Cube](/docs16/tutorial/kylin_sample.html) is good enough.
-* [Scala](http://www.scala-lang.org/) and [Apache Flink](http://flink.apache.org/) Installed
-* [IntelliJ](https://www.jetbrains.com/idea/) Installed and configured for Scala/Flink (see [Flink IDE setup guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html) )
-
-### Used software:
-
-* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
-* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
-* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
-* [Scala](downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
-
-### Starting point:
-
-This can be our initial skeleton: 
-
-{% highlight Groff markup %}
-import org.apache.flink.api.scala._
-val env = ExecutionEnvironment.getExecutionEnvironment
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .finish()
-  val dataset =env.createInput(inputFormat)
-{% endhighlight %}
-
-The first error is: ![alt text](/images/Flink-Tutorial/02.png)
-
-Add to Scala: 
-{% highlight Groff markup %}
-import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
-{% endhighlight %}
-
-The next error is  ![alt text](/images/Flink-Tutorial/03.png)
-
-We can solve the dependency [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); add this to your pom.xml:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-jdbc</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-## Solve dependencies of Row 
-
-Similar to the previous point, we need to solve the dependencies of the Row class [(mvn repository: Table) ](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
-
-  ![](/images/Flink-Tutorial/03b.png)
-
-
-* In pom.xml add:
-{% highlight Groff markup %}
-<dependency>
-   <groupId>org.apache.flink</groupId>
-   <artifactId>flink-table_2.10</artifactId>
-   <version>${flink.version}</version>
-</dependency>
-{% endhighlight %}
-
-* In Scala: 
-{% highlight Groff markup %}
-import org.apache.flink.api.table.Row
-{% endhighlight %}
-
-## Solve the RowTypeInfo property (and its new dependencies)
-
-This is the new error to solve:
-
-  ![](/images/Flink-Tutorial/04.png)
-
-
-* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69) (mandatory) added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750)  Manual [JDBCInputFormat](https://ci.apa [...]
-
-   Add the new Property: **setRowTypeInfo**
-   
-{% highlight Groff markup %}
-val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
-  .setDrivername("org.apache.kylin.jdbc.Driver")
-  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
-  .setUsername("ADMIN")
-  .setPassword("KYLIN")
-  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
-  .setRowTypeInfo(DB_ROWTYPE)
-  .finish()
-{% endhighlight %}
-
-* How can we configure this property in Scala? In [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala) there is an incorrect solution.
-   
-   We can check the types using the intellisense: ![alt text](/images/Flink-Tutorial/05.png)
-   
-   Then we will need to add more dependencies; add to Scala:
-
-{% highlight Groff markup %}
-import org.apache.flink.api.table.typeutils.RowTypeInfo
-import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
-{% endhighlight %}
-
-   Create an Array or Seq of TypeInformation[ ]:
-
-  ![](/images/Flink-Tutorial/06.png)
-
-
-   Solution:
-   
-{% highlight Groff markup %}
-   var stringColum: TypeInformation[String] = createTypeInformation[String]
-   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
-{% endhighlight %}
-
-## Solve ClassNotFoundException
-
-  ![](/images/Flink-Tutorial/07.png)
-
-We need to find kylin-jdbc-x.x.x.jar and then expose it to Flink:
-
-1. Find the Kylin JDBC jar
-
-   From the Kylin [Download](http://kylin.apache.org/download/) page, choose **Binary** and the **correct version of Kylin and HBase**.
-   
-   Download & unpack; the driver is in ./lib: 
-   
-  ![](/images/Flink-Tutorial/08.png)
-
-
-2. Make this JAR accessible to Flink
-
-   If you run Flink as a service, you need to put this JAR on your Java class path using your .bashrc: 
-
-  ![](/images/Flink-Tutorial/09.png)
-
-
-  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
-  
-  Check the permissions on this file (it must be accessible to you):
-
-  ![](/images/Flink-Tutorial/11.png)
-
- 
-  If you are executing from the IDE, you need to add it to your class path manually:
-  
-  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt text](/images/Flink-Tutorial/13.png) > ![alt text](/images/Flink-Tutorial/14.png) > ![alt text](/images/Flink-Tutorial/15.png)
-  
-  The result will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
-  
-## Solve "Couldn’t access resultSet" error
-
-  ![](/images/Flink-Tutorial/17.png)
-
-
-It is related to [FLINK-4108](https://issues.apache.org/jira/browse/FLINK-4108) [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415); Timo Walther [made a PR](https://github.com/apache/flink/pull/2619) for it.
-
-If you are running Flink <= 1.2, you will need to apply this patch and do a clean install.
-
-## Solve the casting error
-
-  ![](/images/Flink-Tutorial/18.png)
-
-The error message contains both the problem and the solution. Nice ;)
-
-## The result
-
-The output should be similar to this, printing the query result to standard output:
-
-  ![](/images/Flink-Tutorial/19.png)
-
-
-## Now, more complex
-
-Try a multi-column and multi-type query:
-
-{% highlight Groff markup %}
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales 
-group by part_dt 
-order by part_dt
-{% endhighlight %}
-
-This needs changes in DB_ROWTYPE:
-
-  ![](/images/Flink-Tutorial/20.png)
-
-
-And import the Java libraries to work with Java data types ![alt text](/images/Flink-Tutorial/21.png)
-
-The new result will be: 
-
-  ![](/images/Flink-Tutorial/23.png)
-
-
-## Error:  Reused Connection
-
-
-  ![](/images/Flink-Tutorial/24.png)
-
-Check whether HBase and Kylin are working; you can also use the Kylin UI for this.
-
-
-## Error:  java.lang.AbstractMethodError:  ….Avatica Connection
-
-See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
-
-It is a problem with the kylin-jdbc-1.x.x JAR; you need Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
-
-  ![](/images/Flink-Tutorial/25.png)
-
-
-
-## Error: can't expand macros compiled by previous versions of scala
-
-This is a problem with Scala versions: check your actual version with "scala -version" and choose the correct POM.
-
-Perhaps you will also need IntelliJ > File > Invalidate Caches / Restart.
-
-I added a POM for Scala 2.11.
-
-
-## Final Words
-
-Now you can read Kylin's data from Apache Flink. Great!
-
-[Full Code Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
-
-All the integration problems are solved, and it is tested with different types of data (Long, BigDecimal and Date). The patch was committed on Oct 15, so it will be part of Flink 1.2.
diff --git a/website/_docs16/tutorial/kylin_sample.md b/website/_docs16/tutorial/kylin_sample.md
deleted file mode 100644
index f60ed8b..0000000
--- a/website/_docs16/tutorial/kylin_sample.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-layout: docs16
-title:  Quick Start with Sample Cube
-categories: tutorial
-permalink: /docs16/tutorial/kylin_sample.html
----
-
-Kylin provides a script for you to create a sample cube; the script also creates three sample Hive tables:
-
-1. Run ${KYLIN_HOME}/bin/sample.sh, then restart the Kylin server to flush the caches;
-2. Log in to the Kylin web UI with the default user ADMIN/KYLIN, and select project "learn_kylin" in the project drop-down list (upper-left corner);
-3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick a date later than 2014-01-01 (to cover all 10000 sample records);
-4. Check the build progress in the "Monitor" tab until it reaches 100%;
-5. Execute SQL in the "Insight" tab, for example:
-	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
-6. You can verify the query result and compare the response time with Hive (see the REST example below);
-
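-The same query can also be submitted without the web GUI, through Kylin's query REST API; a minimal sketch with the default credentials:
-
-{% highlight Groff markup %}
-curl -X POST --user ADMIN:KYLIN -H "Content-Type: application/json" \
-  -d '{"sql": "select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt", "project": "learn_kylin"}' \
-  http://localhost:7070/kylin/api/query
-{% endhighlight %}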
-   
-## What's next
-
-You can create another cube with the sample tables by following the tutorials.
diff --git a/website/_docs16/tutorial/odbc.cn.md b/website/_docs16/tutorial/odbc.cn.md
deleted file mode 100644
index 9ebe8dc..0000000
--- a/website/_docs16/tutorial/odbc.cn.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin ODBC Driver Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/odbc.html
-version: v1.2
-since: v0.7.1
----
-
-> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both the 32-bit and 64-bit versions of the driver are available.
-> 
-> Tested operating systems: Windows 7, Windows Server 2008 R2
-> 
-> Tested applications: Tableau 8.0.4 and Tableau 8.1.3
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable
-   * For 32-bit Windows or 32-bit Tableau Desktop, download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64-bit Windows or 64-bit Tableau Desktop, download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-2. The ODBC driver internally gets results from a REST server; make sure you have access to one.
-
-## Installation
-1. Uninstall any existing Kylin ODBC driver first, if you installed one before.
-2. Download the driver installer from [download](../../download/) and run it.
-   * For 32-bit Tableau Desktop, install KylinODBCDriver (x86).exe
-   * For 64-bit Tableau Desktop, install KylinODBCDriver (x64).exe
-
-3. Both drivers are already installed on the Tableau Server, so you should be able to publish there without issues.
-
-## Bug Report
-If you hit any problem, please report the bug to the Apache Kylin JIRA, or send an email to the dev mailing list.
diff --git a/website/_docs16/tutorial/odbc.md b/website/_docs16/tutorial/odbc.md
deleted file mode 100644
index 06fbf8b..0000000
--- a/website/_docs16/tutorial/odbc.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-layout: docs16
-title:  Kylin ODBC Driver
-categories: tutorial
-permalink: /docs16/tutorial/odbc.html
-since: v0.7.1
----
-
-> We provide Kylin ODBC driver to enable data access from ODBC-compatible client applications.
-> 
-> Both 32-bit version or 64-bit version driver are available.
-> 
-> Tested Operation System: Windows 7, Windows Server 2008 R2
-> 
-> Tested Application: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
-
-## Prerequisites
-1. Microsoft Visual C++ 2012 Redistributable 
-   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
-   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
-
-
-2. The ODBC driver internally gets results from a REST server; make sure you have access to one
-
-## Installation
-1. Uninstall the existing Kylin ODBC driver first, if you installed it before
-2. Download ODBC Driver from [download](../../download/).
-   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
-   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
-
-3. Both drivers are already installed on the Tableau Server, so you should be able to publish there without issues
-
-## DSN configuration
-1. Open ODBCAD to configure DSN.
-	* For 32 bit driver, please use the 32bit version in C:\Windows\SysWOW64\odbcad32.exe
-	* For 64 bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrator Tools
-![]( /images/Kylin-ODBC-DSN/1.png)
-
-2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed as an option, Click "Finish" to continue.
-![]( /images/Kylin-ODBC-DSN/2.png)
-
-3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is started.
-![]( /images/Kylin-ODBC-DSN/3.png)
-
-4. Click "Done", and you will see your new DSN listed in the "System Data Sources", you can use this DSN afterwards.
-![]( /images/Kylin-ODBC-DSN/4.png)
-
-## Bug Report
-Please open an Apache Kylin JIRA to report bugs, or send an email to the dev mailing list.
diff --git a/website/_docs16/tutorial/powerbi.cn.md b/website/_docs16/tutorial/powerbi.cn.md
deleted file mode 100644
index 112c32b..0000000
--- a/website/_docs16/tutorial/powerbi.cn.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-layout: docs16-cn
-title:  MS Excel and Power BI Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/powerbi.html
-version: v1.2
-since: v1.2
----
-
-Microsoft Excel is one of the most popular data tools on the Windows platform. It offers a wide range of data processing functions and, with Power Query, can read data from an ODBC data source into spreadsheets.
-
-Microsoft Power BI is a professional business intelligence tool from Microsoft, giving users simple yet rich data visualization and analysis functions.
-
-> The current version of Apache Kylin does not support queries on raw data; some queries may therefore fail and cause application exceptions. The KYLIN-1075 patch is recommended for a better display of query results.
-
-
-> Power BI and Excel do not support the "connect live" mode; please be careful to add WHERE conditions when querying very large data sets, to avoid pulling too much data from the server to the client, or in some cases the query may even fail.
-
-### Install ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html). Please make sure to download and install Kylin ODBC Driver __v1.2__; if you installed an earlier version, uninstall it first. 
-
-### Connect Excel to Kylin
-1. Download and install Power Query from Microsoft's website. After installation, you will see the Power Query fast tab in Excel; click the `From other sources` drop-down button and select the `From ODBC` item.
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2. In the pop-up `From ODBC` data connection wizard, enter the connection string of the Apache Kylin server; you can also enter the SQL statement you want to execute in the `SQL` text box. Click `OK`, and the query result will be loaded into the Excel spreadsheet.
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> To simplify the connection string, creating a DSN for Apache Kylin is recommended, which shortens the connection string to DSN=[YOUR_DSN_NAME]. For details about creating a DSN, refer to: [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
-
- 
-3. If you choose not to enter a SQL statement, Power Query will list all the database tables, and you can load the data of a whole table as needed. However, Apache Kylin does not yet support queries on raw data, so loading some tables may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4. Wait a moment, and the data is successfully loaded into Excel.
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  Once the data is updated on the server side, the data in Excel needs to be synchronized: right-click the data source in the list on the right and select `Refresh`; the latest data will then be refreshed into the spreadsheet.
-
-6.  To improve performance, you can open the `Query Options` settings in Power Query and enable `Fast data load`; this speeds up data loading but may make the UI temporarily unresponsive.
-
-### Power BI
-1.  Launch the Power BI Desktop program you installed, click the `Get data` button, and select the ODBC data source.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  In the pop-up `From ODBC` data connection wizard, enter the database connection string of the Apache Kylin server; you can also enter the SQL statement you want to execute in the `SQL` text box. Click `OK`, and the query result will be loaded into Power BI.
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you choose not to enter a SQL statement, Power BI will list all tables in the project, and you can load the data of a whole table as needed. However, Apache Kylin does not yet support queries on raw data, so loading some tables may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can go ahead with visual analysis in Power BI:
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  Click the `Refresh` button in the toolbar to reload the data and update the charts.
-
diff --git a/website/_docs16/tutorial/powerbi.md b/website/_docs16/tutorial/powerbi.md
deleted file mode 100644
index 00612da..0000000
--- a/website/_docs16/tutorial/powerbi.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs16
-title:  MS Excel and Power BI
-categories: tutorial
-permalink: /docs16/tutorial/powerbi.html
-since: v1.2
----
-
-Microsoft Excel is one of the most popular data tools on the Windows platform, with plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets. 
-
-Microsoft Power BI is a business intelligence tool providing rich functionality and experience for data visualization and processing to users.
-
-> Apache Kylin doesn't support queries on raw data yet; some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better display of query results.
-
-> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet; please be careful when querying a huge dataset, as it may pull too much data into your client, which can take a long while or even fail in the end.
-
-### Install ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.2__. If you already installed an ODBC driver in your system, please uninstall it first. 
-
-### Kylin and Excel
-1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` fast tab, click the `From Other Sources` drop-down list, and select the `ODBC` item.
-![](/images/tutorial/odbc/ms_tool/Picture1.png)
-
-2.  You'll see the `From ODBC` dialog; type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally, you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
-![](/images/tutorial/odbc/ms_tool/Picture2.png)
-
-> Tip: to simplify the database connection string, a DSN is recommended, which shortens the connection string to `DSN=[YOUR_DSN_NAME]`. For details about DSNs, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
- 
-3. If you didn't input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture3.png)
-
-4.  Hold on for a while, and the data lands in Excel.
-![](/images/tutorial/odbc/ms_tool/Picture4.png)
-
-5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you'll see the latest data.
-
-6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this will make your UI unresponsive for a while. 
-
-### Power BI
-1.  Run Power BI Desktop, click the `Get Data` button, then select `ODBC` as the data source type.
-![](/images/tutorial/odbc/ms_tool/Picture5.png)
-
-2.  Same as with Excel: type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
-![](/images/tutorial/odbc/ms_tool/Picture6.png)
-
-3.  If you didn't input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. But since Apache Kylin cannot query raw data currently, this function may be limited.
-![](/images/tutorial/odbc/ms_tool/Picture7.png)
-
-4.  Now you can start to enjoy analyzing with Power BI.
-![](/images/tutorial/odbc/ms_tool/Picture8.png)
-
-5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` fast tab.
-
diff --git a/website/_docs16/tutorial/squirrel.md b/website/_docs16/tutorial/squirrel.md
deleted file mode 100644
index 5e69780..0000000
--- a/website/_docs16/tutorial/squirrel.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-layout: docs16
-title:  Connect from SQuirreL
-categories: tutorial
-permalink: /docs16/tutorial/squirrel.html
----
-
-### Introduction
-
-[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform Universal SQL Client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
-
-### Used Software
-
-* [Kylin v1.6.0](/download/) & ODBC 1.6
-* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
-
-## Pre-requisites
-
-* Find the Kylin JDBC driver jar.
-  From the Kylin Download page, choose Binary and the **correct version of Kylin and HBase**.
-	Download & unpack; the driver is in **./lib**: 
-  ![](/images/SQuirreL-Tutorial/01.png)
-
-
-* An instance of Kylin with a cube is needed; the [Sample Cube](/docs16/tutorial/kylin_sample.html) is enough.
-
-  ![](/images/SQuirreL-Tutorial/02.png)
-
-
-* [Download and install SQuirreL](http://www.squirrelsql.org/#installation)
-
-## Add Kylin JDBC Driver
-
-On left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) >![alt text](/images/SQuirreL-Tutorial/04.png)  > ![alt text](/images/SQuirreL-Tutorial/05.png)  > ![alt text](/images/SQuirreL-Tutorial/06.png)
-
-And locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
-
-Configure these parameters:
-
-* Give it a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
-* Example URL ![alt text](/images/SQuirreL-Tutorial/09.png)
-
-  jdbc:kylin://172.17.0.2:7070/learn_kylin
-* Put Class Name: ![alt text](/images/SQuirreL-Tutorial/10.png)
-	Tip: if auto-complete does not work, type: org.apache.kylin.jdbc.Driver 
-	
-Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
-
-## Add Aliases
-
-On left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  > ![alt text](/images/SQuirreL-Tutorial/13.png) : (default login: ADMIN / KYLIN)
-
-  ![](/images/SQuirreL-Tutorial/14.png)
-
-
-And the connection is launched automatically:
-
-  ![](/images/SQuirreL-Tutorial/15.png)
-
-
-## Connect and Execute
-
-The startup window when connected:
-
-  ![](/images/SQuirreL-Tutorial/16.png)
-
-
-Choose the SQL tab and write a query (we use Kylin's example cube):
-
-  ![](/images/SQuirreL-Tutorial/17.png)
-
-
-```
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
-from kylin_sales group by part_dt 
-order by part_dt
-```
-
-Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
-
-  ![](/images/SQuirreL-Tutorial/19.png)
-
-
-And it works!
-
-## Tips:
-
-SQuirreL isn't the most stable SQL client, but it is very flexible and surfaces a lot of information; it can be used for PoCs and for checking connectivity issues.
-
-List of tables: 
-
-  ![](/images/SQuirreL-Tutorial/21.png)
-
-
-List of columns of a table:
-
-  ![](/images/SQuirreL-Tutorial/22.png)
-
-
-List of columns of a query:
-
-  ![](/images/SQuirreL-Tutorial/23.png)
-
-
-Export the result of queries:
-
-  ![](/images/SQuirreL-Tutorial/24.png)
-
-
- Info about query execution time:
-
-  ![](/images/SQuirreL-Tutorial/25.png)
diff --git a/website/_docs16/tutorial/tableau.cn.md b/website/_docs16/tutorial/tableau.cn.md
deleted file mode 100644
index fdbd8eb..0000000
--- a/website/_docs16/tutorial/tableau.cn.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-layout: docs16-cn
-title:  Tableau Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/tableau.html
-version: v1.2
-since: v0.7.1
----
-
-> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
-> * Only the "managed" analysis path is supported; the Kylin engine will raise an error for unexpected dimensions or measures
-> * Always select the fact table first, then add the lookup tables with the correct join conditions (the join types already defined in the cube)
-> * Do not try to join between fact tables or between lookup tables;
-> * You can try to use a high-cardinality dimension like seller id in a Tableau filter, but the engine will only return a limited number of seller ids in the filter for now.
-> 
-> For more details or any questions, please contact the Kylin team: `kylinolap@gmail.com`
-
-
-### For Tableau 9.x Users
-Please refer to the [Tableau 9 Tutorial](./tableau_91.html) for a more detailed guide.
-
-### Step 1. Install Kylin ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to Kylin Server
-> We recommend using Connect Using Driver rather than Using DSN.
-
-Connect Using Driver: select "Other Database(ODBC)" in the left panel and "KylinODBCDriver" in the pop-up window.
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![](/images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects you have permission to access. See details about permissions in the [Kylin Cube Permission Grant Tutorial](https://github.com/KylinOLAP/Kylin/wiki/Kylin-Cube-Permission-Grant-Tutorial). Then choose the project you want to connect to in the drop-down list.
-
-![](/images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![](/images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using a Single Table or Multiple Tables
-> Limitations
->    * The fact table must be selected first
->    * Selecting from lookup tables only is not supported
->    * The join conditions must match the cube definition
-
-**Select Fact Table**
-
-Select `Multiple Tables`.
-
-![](/images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![](/images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select Lookup Table**
-
-Click `Add Table...` to add a lookup table.
-
-![](/images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully.
-
-![](/images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the lookup tables have been added properly. Give the connection a name for use in Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Use Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option.
-
-![](/images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![](/images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add Additional Lookup Tables**
-
-Click `Data` in the top menu bar and select `Edit Tables...` to update the lookup table information.
-
-![](/images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using Custom SQL
-Using custom SQL resembles using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and then follow the same instructions as above.
-
-![](/images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished making a dashboard with Tableau, you can publish it to the Tableau Server.
-Click `Server` in the top menu bar and select `Publish Workbook...`.
-
-![](/images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in to your Tableau Server and prepare to publish.
-
-![](/images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you are using Connect Using Driver instead of a DSN connection, you will also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish`, and you will see the result.
-
-![](/images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide table names in Tableau
-
-    * Tableau displays columns grouped by source table name, but users may want to organize the columns in a different arrangement. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![](/images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs16/tutorial/tableau.md b/website/_docs16/tutorial/tableau.md
deleted file mode 100644
index 0d9e38c..0000000
--- a/website/_docs16/tutorial/tableau.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-layout: docs16
-title:  Tableau 8
-categories: tutorial
-permalink: /docs16/tutorial/tableau.html
----
-
-> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
-> 
-> * Only the "managed" analysis path is supported; the Kylin engine will raise an exception for an unexpected dimension or metric
-> * Please always select Fact Table first, then add lookup tables with correct join condition (defined join type in cube)
-> * Do not try to join between fact tables or lookup tables;
-> * You can try to use high cardinality dimensions like seller id as Tableau Filter, but the engine will only return limited seller id in Tableau's filter now.
-
-### For Tableau 9.x User
-Please refer to [Tableau 9.x Tutorial](./tableau_91.html) for detail guide.
-
-### Step 1. Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-
-### Step 2. Connect to Kylin Server
-> We recommend using Connect Using Driver instead of Using DSN.
-
-Connect Using Driver: Select "Other Database(ODBC)" in the left panel and choose KylinODBCDriver in the pop-up window. 
-
-![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
-
-Enter your server location and credentials: server host, port, username and password.
-
-![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
-
-Click "Connect" to get the list of projects that you have permission to access. See details about permission in [Kylin Cube Permission Grant Tutorial](./acl.html). Then choose the project you want to connect in the drop down list. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
-
-Click "Done" to complete the connection.
-
-![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
-
-### Step 3. Using Single Table or Multiple Tables
-> Limitation
-> 
->    * Must select FACT table first
->    * Selecting from a lookup table only is not supported
->    * The join condition must match the cube definition
-
-**Select Fact Table**
-
-Select `Multiple Tables`.
-
-![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
-
-Then click `Add Table...` to add a fact table.
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
-
-![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
-
-**Select Look-up Table**
-
-Click `Add Table...` to add a look-up table. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
-
-Set up the join clause carefully. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
-
-Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
-
-**Using Connect Live**
-
-There are three types of `Data Connection`. Choose the `Connect Live` option. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
-
-Then you can enjoy analyzing with Tableau.
-
-![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
-
-**Add additional look-up Tables**
-
-Click `Data` in the top menu bar, select `Edit Tables...` to update the look-up table information.
-
-![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
-
-### Step 4. Using Customized SQL
-Using customized SQL resembles using a single table/multiple tables, except that you just need to paste your SQL in the `Custom SQL` tab and follow the same instructions as above.
-
-![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
-
-### Step 5. Publish to Tableau Server
-Once you have finished making a dashboard with Tableau, you can publish it to the Tableau Server.
-Click `Server` in the top menu bar, select `Publish Workbook...`. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
-
-Then sign in to your Tableau Server and prepare to publish. 
-
-![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
-
-If you're using Connect Using Driver instead of a DSN connection, you'll additionally need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
-
-![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
-
-### Tips
-* Hide Table name in Tableau
-
-    * Tableau displays columns grouped by source table name, but users may want to organize the columns with a different structure. Use "Group by Folder" in Tableau and create folders to group different columns.
-
-     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
diff --git a/website/_docs16/tutorial/tableau_91.cn.md b/website/_docs16/tutorial/tableau_91.cn.md
deleted file mode 100644
index fddc464..0000000
--- a/website/_docs16/tutorial/tableau_91.cn.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-layout: docs16-cn
-title:  Tableau 9 Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/tableau_91.html
-version: v1.2
-since: v1.2
----
-
-Tableau 9 has been out for a while, and many users in the community have asked for Apache Kylin support for this version. With the updated Kylin ODBC driver, you can now interact with the Kylin service through Tableau 9.
-
-
-### For Tableau 8.x Users
-Please refer to the [Tableau Tutorial](./tableau.html) for a more detailed guide.
-
-### Install ODBC Driver
-Refer to the [Kylin ODBC Driver Tutorial](./odbc.html). Please make sure to download and install Kylin ODBC Driver __v1.5__; if you installed an earlier version, uninstall it first. 
-
-### Connect to Kylin Server
-Create a new data connection in Tableau 9.1: click `Other Database(ODBC)` in the left panel and select `KylinODBCDriver` in the pop-up window. 
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Enter your server address, port, project, username and password; click `Connect` to get the list of all projects you have permission to access. See details about permissions in the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping Data Model
-In the left panel, select the database `defaultCatalog` and click the "Search" button; all queryable tables will be listed. Drag tables into the right region to add them as data sources, and set up the join relationships between the tables.
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two connection types for a data source in Tableau 9.1; choose the `Live` option to make sure the 'Connect Live' mode is used.
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use custom SQL, click `New Custom SQL` on the left and enter the SQL statement in the pop-up dialog; it will be added as a data source.
-![](/images/tutorial/odbc/tableau_91/5.png)
-
-### Visualization
-Now you can go ahead with visual analysis in Tableau:
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-To publish to a Tableau Server, click the `Server` menu and select `Publish Workbook`.
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- Please refer to the [Tableau Tutorial](./tableau.html) for more information
-- You can also check the guide shared by community user Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs16/tutorial/tableau_91.md b/website/_docs16/tutorial/tableau_91.md
deleted file mode 100644
index 39d70ff..0000000
--- a/website/_docs16/tutorial/tableau_91.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-layout: docs16
-title:  Tableau 9
-categories: tutorial
-permalink: /docs16/tutorial/tableau_91.html
----
-
-Tableau 9.x has been out for a while, and many users are asking about Apache Kylin support for this version. With the updated Kylin ODBC driver, users can now interact with the Kylin service through Tableau 9.x.
-
-
-### For Tableau 8.x User
-Please refer to [Kylin and Tableau Tutorial](./tableau.html) for detail guide.
-
-### Install Kylin ODBC Driver
-Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
-Please make sure to download and install Kylin ODBC Driver __v1.5__. If you already installed an ODBC driver in your system, please uninstall it first. 
-
-### Connect to Kylin Server
-Connect Using Driver: Start Tableau 9.1 desktop, click `Other Database(ODBC)` in the left panel and choose KylinODBCDriver in the pop-up window. 
-![](/images/tutorial/odbc/tableau_91/1.png)
-
-Provide your server location, credentials and project. Click the `Connect` button to get the list of projects that you have permission to access; see details in the [Kylin Cube Permission Grant Tutorial](./acl.html).
-![](/images/tutorial/odbc/tableau_91/2.png)
-
-### Mapping Data Model
-In the left panel, select `defaultCatalog` as the database and click the `Search` button in the table search box; all tables will be listed. Drag and drop tables into the right region to make them data sources. Make sure the JOINs are configured correctly.
-![](/images/tutorial/odbc/tableau_91/3.png)
-
-### Connect Live
-There are two types of `Connection`; choose the `Live` option to make sure Connect Live mode is used.
-![](/images/tutorial/odbc/tableau_91/4.png)
-
-### Custom SQL
-To use custom SQL, click `New Custom SQL` in the left panel and type the SQL statement in the pop-up dialog.
-![](/images/tutorial/odbc/tableau_91/5.png)
-
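-For example, a small aggregate query along the following lines could be used as a custom SQL data source (a sketch only: `KYLIN_SALES` and its `PART_DT`/`PRICE` columns are assumed from Kylin's bundled sample cube; substitute your own fact table and columns):
-
-```sql
--- hypothetical custom SQL, assuming the KYLIN_SALES table from Kylin's sample cube
--- aggregates daily revenue and transaction count
-SELECT PART_DT, SUM(PRICE) AS TOTAL_PRICE, COUNT(*) AS TRANS_CNT
-FROM KYLIN_SALES
-GROUP BY PART_DT
-```
-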
-### Visualization
-Now you can start to enjoy analyzing with Tableau 9.1.
-![](/images/tutorial/odbc/tableau_91/6.png)
-
-### Publish to Tableau Server
-If you want to publish a local dashboard to a Tableau Server, just expand the `Server` menu and select `Publish Workbook`.
-![](/images/tutorial/odbc/tableau_91/7.png)
-
-### More
-
-- You can refer to [Kylin and Tableau Tutorial](./tableau.html) for more details.
-- Here is a good tutorial written by Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
-
-
diff --git a/website/_docs16/tutorial/web.cn.md b/website/_docs16/tutorial/web.cn.md
deleted file mode 100644
index 7f5e82c..0000000
--- a/website/_docs16/tutorial/web.cn.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-layout: docs16-cn
-title:  Kylin Web Interface Tutorial
-categories: tutorial
-permalink: /cn/docs16/tutorial/web.html
-version: v1.2
----
-
-> **Supported Browsers**
-> 
-> Windows: Google Chrome, FireFox
-> 
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![]( /images/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Hive Tables Available in Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, because so far it is a pre-built OLAP (MOLAP) system. To make a table available in Kylin, use the "Sync" function to conveniently sync tables from Hive.
-
-![]( /images/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-computed datasets derived from star-schema Hive tables. This is the web page for users to explore and manage all cubes. Go to the `Cubes` page from the menu bar, and all cubes available in the system will be listed.
-
-![]( /images/Kylin-Web-Tutorial/3 cubes.png)
-
-To explore more details about a cube:
-
-* Form view:
-
-   ![]( /images/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL view (the Hive query that reads data to generate the cube):
-
-   ![]( /images/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Visualization (shows the star schema behind this cube):
-
-   ![]( /images/Kylin-Web-Tutorial/6 visualization.png)
-
-* Access (grant user/role privileges; in the beta version the grant operation is only open to admins):
-
-   ![]( /images/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and Run SQL on the Web
-Kylin's web interface provides a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set with the pivot analysis and visualization described in section 5.
-
-> **Query Limits**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. To avoid huge network traffic from the server to the client, the scan range threshold is set to 1,000,000 in the beta version.
-> 
-> 3. In the beta version, SQL that cannot find its data in a cube will not be redirected to Hive
-
-Go to the "Query" page from the menu bar:
-
-![]( /images/Kylin-Web-Tutorial/8 query.png)
-
-* Source tables:
-
-   Browse the currently available tables (same structure and metadata as Hive):
-  
-   ![]( /images/Kylin-Web-Tutorial/9 query-table.png)
-
-* New query:
-
-   You can write and run your query and explore the result. A query is provided here for your reference (another sample appears after this list):
-
-   ![]( /images/Kylin-Web-Tutorial/10 query-result.png)
-
-* Saved queries:
-
-   Saved queries are associated with your user account, so you can retrieve them from different browsers and even different machines.
-   Click "Save" in the result area, and a dialog will pop up asking for a name and description to save the current query:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to browse all your saved queries; you can directly resubmit one to run it again, or remove it:
-
-   ![]( /images/Kylin-Web-Tutorial/11 save-query-2.png)
-
-* Query history:
-
-   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history will be lost if you clear the browser cache. Click the "Query History" tab, and you can directly resubmit any entry to run it again.
-
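-As a reference, a minimal SELECT along the following lines can be typed into the query box (a sketch only, assuming the `KYLIN_SALES` table from Kylin's sample cube has been synced and a cube built on it; otherwise use your own tables):
-
-```sql
--- hypothetical query against Kylin's sample cube
--- groups sales by listing format
-SELECT LSTG_FORMAT_NAME, SUM(PRICE) AS GMV, COUNT(*) AS TRANS_CNT
-FROM KYLIN_SALES
-GROUP BY LSTG_FORMAT_NAME
-```
-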
-## 5. Pivot Analysis and Visualization
-Kylin's web interface provides a simple pivot and visualization tool for users to explore their query results:
-
-* General information:
-
-   When a query runs successfully, a success indicator and the name of the cube that was hit are shown.
-   It also shows how long the query ran in the backend engine (not including network traffic from the Kylin server to the browser):
-
-   ![]( /images/Kylin-Web-Tutorial/12 general.png)
-
-* Query result:
-
-   It is easy to sort on a column.
-
-   ![]( /images/Kylin-Web-Tutorial/13 results.png)
-
-* Export to CSV file
-
-   Click the "Export" button to save the current result as a CSV file.
-
-* Pivot table:
-
-   Drag and drop one or more columns onto the header, and the result will be grouped by those columns' values:
-
-   ![]( /images/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   The result set can also easily be shown in different charts under "Visualization":
-
-   Note: the line chart is only available when at least one dimension comes from a Hive column with a real "Date" data type.
-
-   * Bar chart:
-
-   ![]( /images/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie chart:
-
-   ![]( /images/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line chart:
-
-   ![]( /images/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/_docs16/tutorial/web.md b/website/_docs16/tutorial/web.md
deleted file mode 100644
index 314ff48..0000000
--- a/website/_docs16/tutorial/web.md
+++ /dev/null
@@ -1,123 +0,0 @@
----
-layout: docs16
-title:  Kylin Web Interface
-categories: tutorial
-permalink: /docs16/tutorial/web.html
----
-
-> **Supported Browsers**
-> Windows: Google Chrome, FireFox
-> Mac: Google Chrome, FireFox, Safari
-
-## 1. Access & Login
-Host to access: http://hostname:7070
-Login with username/password: ADMIN/KYLIN
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/1 login.png)
-
-## 2. Sync Hive Table into Kylin
-Although Kylin uses SQL as its query interface and leverages Hive metadata, Kylin does not let users query all Hive tables, since so far it is a pre-built OLAP (MOLAP) system. To make a table available in Kylin, it is easy to use the "Sync" function to sync up tables from Hive.
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/2 tables.png)
-
-## 3. Kylin OLAP Cube
-Kylin's OLAP cubes are pre-calculated datasets built from star-schema tables. Here is the web interface for users to explore and manage all cubes. Go to the `Model` menu, and it will list all cubes available in the system:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/3 cubes.png)
-
-To explore more details about a cube:
-
-* Form View:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/4 form-view.png)
-
-* SQL View (the Hive query that reads data to generate the cube):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/5 sql-view.png)
-
-* Access (grant user/role privileges; the grant operation is only open to admins):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/7 access.png)
-
-## 4. Write and Execute SQL on the Web
-Kylin's web interface offers a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set with the pivot analysis and visualization described in section 5.
-
-> **Query Limit**
-> 
-> 1. Only SELECT queries are supported
-> 
-> 2. SQL will not be redirected to Hive
-
-Go to "Insight" menu:
-
-![](/images/tutorial/1.5/Kylin-Web-Tutorial/8 query.png)
-
-* Source Tables:
-
-   Browse the currently available tables (same structure and metadata as Hive):
-  
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/9 query-table.png)
-
-* New Query:
-
-   You can write and execute your query and explore the result (a sample query appears after this list).
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/10 query-result.png)
-
-* Saved Query (only works after LDAP security is enabled):
-
-   Saved queries are associated with your user account, so you can retrieve them from different browsers and even different machines.
-   Click "Save" in the result area, and a dialog will pop up asking for a name and description to save the current query:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/11 save-query.png)
-
-   Click "Saved Queries" to browser all your saved queries, you could direct submit it or remove it.
-
-* Query History:
-
-   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history will be lost if you clean up the browser's cache. Click the "Query History" tab, and you can directly resubmit any entry to execute it again.
-
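-As a reference, a minimal SELECT along the following lines can be typed into the query box (a sketch only, assuming the `KYLIN_SALES` table from Kylin's sample cube has been synced and a cube built on it; otherwise use your own tables):
-
-```sql
--- hypothetical query against Kylin's sample cube
--- groups sales by listing format
-SELECT LSTG_FORMAT_NAME, SUM(PRICE) AS GMV, COUNT(*) AS TRANS_CNT
-FROM KYLIN_SALES
-GROUP BY LSTG_FORMAT_NAME
-```
-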
-## 5. Pivot Analysis and Visualization
-Kylin's web interface includes a simple pivot and visualization analysis tool for users to explore their query results:
-
-* General Information:
-
-   When a query executes successfully, a success indicator and the name of the cube that was hit are shown.
-   It also shows how long the query ran in the backend engine (not covering network traffic from the Kylin server to the browser):
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/12 general.png)
-
-* Query Result:
-
-   It is easy to sort on a column.
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/13 results.png)
-
-* Export to CSV File
-
-   Click "Export" button to save current result as CSV file.
-
-* Pivot Table:
-
-   Drag and drop one or more columns onto the header, and the result will be grouped by those columns' values:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/14 drag.png)
-
-* Visualization:
-
-   The result set can also easily be shown in different charts under "Visualization":
-
-   Note: the line chart is only available when at least one dimension comes from a Hive column with a real "Date" data type.
-
-   * Bar Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/15 bar-chart.png)
-   
-   * Pie Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/16 pie-chart.png)
-
-   * Line Chart:
-
-   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/17 line-chart.png)
-
diff --git a/website/archive/docs16.tar.gz b/website/archive/docs16.tar.gz
new file mode 100644
index 0000000..4862a79
Binary files /dev/null and b/website/archive/docs16.tar.gz differ
diff --git a/website/download/index.cn.md b/website/download/index.cn.md
index d7e88f9..bd4dc42 100644
--- a/website/download/index.cn.md
+++ b/website/download/index.cn.md
@@ -5,6 +5,18 @@ title: 下载
 
 You can verify the download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/kylin/KEYS).
 
+#### v2.5.0
+- This is a major release after 2.4, with 96 bug fixes and various enhancements. For the detailed list please check the release notes.
+- [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
+- Source download: [apache-kylin-2.5.0-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.sha256)\]
+- Binary download:
+  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.5.0-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.sha256)\]
+  - for CDH 5.7+ - [apache-kylin-2.5.0-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.sha256)\]
+
+- Hadoop 3 API binary packages (beta):
+  - for Hadoop 3.1 + HBase 2.0 (includes Hortonworks HDP 3.0) - [apache-kylin-2.5.0-bin-hadoop3.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.sha256)\]
+  - for CDH 6.0 - [apache-kylin-2.5.0-bin-cdh60.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.sha256)\]
+
 #### v2.4.1
 - This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. For the detailed list please check the release notes.
 - [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
diff --git a/website/download/index.md b/website/download/index.md
index 7774e91..5fc0efa 100644
--- a/website/download/index.md
+++ b/website/download/index.md
@@ -6,6 +6,17 @@ permalink: /download/index.html
 
 You can verify the download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/kylin/KEYS).
 
+#### v2.5.0
+- This is a major release after 2.4, with 96 bug fixes and enhancements. For the detailed list please check the release notes.
+- [Release notes](/docs/release_notes.html) and [upgrade guide](/docs/howto/howto_upgrade.html)
+- Source download: [apache-kylin-2.5.0-source-release.zip](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-source-release.zip.sha256)\]
+- Binary download:
+  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.5.0-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-hbase1x.tar.gz.sha256)\]
+  - for CDH 5.7+ - [apache-kylin-2.5.0-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.5.0/apache-kylin-2.5.0-bin-cdh57.tar.gz.sha256)\]
+
+- Hadoop 3 API binary packages (beta):
+  - for Hadoop 3.1 + HBase 2.0 (includes Hortonworks HDP 3.0) - [apache-kylin-2.5.0-bin-hadoop3.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-hadoop3.tar.gz.sha256)\]
+  - for CDH 6.0 - [apache-kylin-2.5.0-bin-cdh60.tar.gz](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz) \[[asc](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.asc)\] \[[sha256](https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.5.0-rc2/apache-kylin-2.5.0-bin-cdh60.tar.gz.sha256)\]
 
 #### v2.4.1
 - This is a bug fix release after 2.4.0, with 22 bug fixes and enhancements. For the detailed list please check the release notes.
@@ -15,13 +26,7 @@ You can verify the download by following these [procedures](https://www.apache.o
   - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.4.1-bin-hbase1x.tar.gz](http://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-hbase1x.tar.gz.sha256)\]
   - for CDH 5.7+ - [apache-kylin-2.4.1-bin-cdh57.tar.gz](http://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz.asc)\] \[[sha256](https://www.apache.org/dist/kylin/apache-kylin-2.4.1/apache-kylin-2.4.1-bin-cdh57.tar.gz.sha256)\]
 
-#### v2.3.2
-- This is a bug fix release after 2.3.1, with 12 bug fixes and enhancements. For the detailed list please check the release notes.
-- [Release notes](/docs23/release_notes.html) and [upgrade guide](/docs23/howto/howto_upgrade.html)
-- Source download: [apache-kylin-2.3.2-src.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip) \[[asc](https://www.apache.org/dist/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip.asc)\] \[[sha1](https://www.apache.org/dist/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-source-release.zip.sha1)\]
-- Binary download:
-  - for HBase 1.x (includes HDP 2.3+, AWS EMR 5.0+, Azure HDInsight 3.4 - 3.6) - [apache-kylin-2.3.2-bin-hbase1x.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz.asc)\] \[[md5](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-hbase1x.tar.gz.md5)\]
-  - for CDH 5.7+ - [apache-kylin-2.3.2-bin-cdh57.tar.gz](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz) \[[asc](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz.asc)\] \[[md5](https://www.apache.org/dyn/closer.cgi/kylin/apache-kylin-2.3.2/apache-kylin-2.3.2-bin-cdh57.tar.gz.md5)\]
+
 
 #### JDBC Driver