Posted to commits@kylin.apache.org by bi...@apache.org on 2018/01/29 04:14:54 UTC

[6/6] kylin git commit: KYLIN-3202 update doc directory for Kylin 2.3.0

KYLIN-3202 update doc directory for Kylin 2.3.0


Project: http://git-wip-us.apache.org/repos/asf/kylin/repo
Commit: http://git-wip-us.apache.org/repos/asf/kylin/commit/40a53fe3
Tree: http://git-wip-us.apache.org/repos/asf/kylin/tree/40a53fe3
Diff: http://git-wip-us.apache.org/repos/asf/kylin/diff/40a53fe3

Branch: refs/heads/document
Commit: 40a53fe34d6709fe59da2132c109d6d68ddb5e7f
Parents: 1d2e168
Author: Billy Liu <bi...@apache.org>
Authored: Mon Jan 29 12:14:16 2018 +0800
Committer: Billy Liu <bi...@apache.org>
Committed: Mon Jan 29 12:14:16 2018 +0800

----------------------------------------------------------------------
 website/_config.yml                             |    6 +-
 website/_data/docs-cn.yml                       |   14 +
 website/_data/docs15-cn.yml                     |   14 +
 website/_data/docs16-cn.yml                     |   14 +
 website/_data/docs20-cn.yml                     |   14 +
 website/_data/docs21-cn.yml                     |   14 +
 website/_data/docs21.yml                        |    4 -
 website/_data/docs23-cn.yml                     |   41 +
 website/_data/docs23.yml                        |   78 +
 .../_docs23/gettingstarted/best_practices.md    |   27 +
 website/_docs23/gettingstarted/concepts.md      |   64 +
 website/_docs23/gettingstarted/events.md        |   24 +
 website/_docs23/gettingstarted/faq.md           |  119 ++
 website/_docs23/gettingstarted/terminology.md   |   25 +
 .../_docs23/howto/howto_backup_metadata.cn.md   |   59 +
 website/_docs23/howto/howto_backup_metadata.md  |   60 +
 .../howto/howto_build_cube_with_restapi.cn.md   |   54 +
 .../howto/howto_build_cube_with_restapi.md      |   53 +
 .../_docs23/howto/howto_cleanup_storage.cn.md   |   21 +
 website/_docs23/howto/howto_cleanup_storage.md  |   22 +
 .../_docs23/howto/howto_enable_zookeeper_acl.md |   20 +
 .../howto/howto_install_ranger_kylin_plugin.md  |    8 +
 website/_docs23/howto/howto_jdbc.cn.md          |   92 +
 website/_docs23/howto/howto_jdbc.md             |   92 +
 website/_docs23/howto/howto_ldap_and_sso.md     |  128 ++
 .../_docs23/howto/howto_optimize_build.cn.md    |  166 ++
 website/_docs23/howto/howto_optimize_build.md   |  190 ++
 website/_docs23/howto/howto_optimize_cubes.md   |  212 +++
 website/_docs23/howto/howto_setup_systemcube.md |  437 +++++
 .../_docs23/howto/howto_update_coprocessor.md   |   14 +
 website/_docs23/howto/howto_upgrade.md          |  105 +
 website/_docs23/howto/howto_use_beeline.md      |   14 +
 website/_docs23/howto/howto_use_cube_planner.md |  133 ++
 website/_docs23/howto/howto_use_dashboard.md    |  110 ++
 .../howto/howto_use_distributed_scheduler.md    |   16 +
 website/_docs23/howto/howto_use_restapi.md      | 1200 ++++++++++++
 .../_docs23/howto/howto_use_restapi_in_js.md    |   46 +
 website/_docs23/index.cn.md                     |   29 +
 website/_docs23/index.md                        |   70 +
 website/_docs23/install/advance_settings.md     |  102 +
 website/_docs23/install/hadoop_evn.md           |   36 +
 website/_docs23/install/index.cn.md             |   46 +
 website/_docs23/install/index.md                |   35 +
 website/_docs23/install/kylin_aws_emr.md        |  167 ++
 website/_docs23/install/kylin_cluster.md        |   32 +
 website/_docs23/install/kylin_docker.md         |   10 +
 .../_docs23/install/manual_install_guide.cn.md  |   29 +
 website/_docs23/release_notes.md                | 1792 ++++++++++++++++++
 website/_docs23/tutorial/Qlik.cn.md             |  153 ++
 website/_docs23/tutorial/Qlik.md                |  156 ++
 website/_docs23/tutorial/acl.cn.md              |   35 +
 website/_docs23/tutorial/acl.md                 |   37 +
 website/_docs23/tutorial/create_cube.cn.md      |  129 ++
 website/_docs23/tutorial/create_cube.md         |  198 ++
 website/_docs23/tutorial/cube_build_job.cn.md   |   66 +
 website/_docs23/tutorial/cube_build_job.md      |   67 +
 .../_docs23/tutorial/cube_build_performance.md  |  266 +++
 website/_docs23/tutorial/cube_spark.md          |  169 ++
 website/_docs23/tutorial/cube_streaming.md      |  219 +++
 website/_docs23/tutorial/flink.md               |  249 +++
 website/_docs23/tutorial/hue.md                 |  264 +++
 .../_docs23/tutorial/kylin_client_tool.cn.md    |   97 +
 website/_docs23/tutorial/kylin_sample.md        |   34 +
 website/_docs23/tutorial/microstrategy.md       |   84 +
 website/_docs23/tutorial/odbc.cn.md             |   34 +
 website/_docs23/tutorial/odbc.md                |   49 +
 website/_docs23/tutorial/powerbi.cn.md          |   56 +
 website/_docs23/tutorial/powerbi.md             |   54 +
 website/_docs23/tutorial/project_level_acl.md   |   63 +
 website/_docs23/tutorial/query_pushdown.cn.md   |   50 +
 website/_docs23/tutorial/query_pushdown.md      |   61 +
 website/_docs23/tutorial/squirrel.md            |  112 ++
 website/_docs23/tutorial/tableau.cn.md          |  116 ++
 website/_docs23/tutorial/tableau.md             |  113 ++
 website/_docs23/tutorial/tableau_91.cn.md       |   51 +
 website/_docs23/tutorial/tableau_91.md          |   50 +
 website/_docs23/tutorial/web.cn.md              |  134 ++
 website/_docs23/tutorial/web.md                 |  123 ++
 website/_includes/docs23_nav.cn.html            |   33 +
 website/_includes/docs23_nav.html               |   33 +
 website/_includes/docs23_ul.cn.html             |   28 +
 website/_includes/docs23_ul.html                |   29 +
 website/_layouts/docs23-cn.html                 |   46 +
 website/_layouts/docs23.html                    |   50 +
 84 files changed, 9561 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_config.yml
----------------------------------------------------------------------
diff --git a/website/_config.yml b/website/_config.yml
index 5c92bfd..2c37d27 100644
--- a/website/_config.yml
+++ b/website/_config.yml
@@ -27,7 +27,7 @@ encoding: UTF-8
 timezone: America/Dawson 
 
 exclude: ["README.md", "Rakefile", "*.scss", "*.haml", "*.sh"]
-include: [_docs,_docs15,_docs16,_dev]
+include: [_docs,_docs15,_docs16,_docs20,_docs21,_docs23,_dev]
 
 # Build settings
 markdown: kramdown
@@ -74,5 +74,9 @@ collections:
     output: true
   docs21-cn:
     output: true
+  docs23:
+    output: true
+  docs23-cn:
+    output: true
   dev:
     output: true

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs-cn.yml b/website/_data/docs-cn.yml
index eeaca00..52791d2 100644
--- a/website/_data/docs-cn.yml
+++ b/website/_data/docs-cn.yml
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 - title: 开始
   docs:
   - index

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs15-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs15-cn.yml b/website/_data/docs15-cn.yml
index f69fbe5..156f5fd 100644
--- a/website/_data/docs15-cn.yml
+++ b/website/_data/docs15-cn.yml
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 - title: 开始
   docs:
   - index

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs16-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs16-cn.yml b/website/_data/docs16-cn.yml
index f69fbe5..156f5fd 100644
--- a/website/_data/docs16-cn.yml
+++ b/website/_data/docs16-cn.yml
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 - title: 开始
   docs:
   - index

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs20-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs20-cn.yml b/website/_data/docs20-cn.yml
index f69fbe5..156f5fd 100644
--- a/website/_data/docs20-cn.yml
+++ b/website/_data/docs20-cn.yml
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 - title: 开始
   docs:
   - index

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs21-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs21-cn.yml b/website/_data/docs21-cn.yml
index 516333b..51ad75d 100644
--- a/website/_data/docs21-cn.yml
+++ b/website/_data/docs21-cn.yml
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 - title: 开始
   docs:
   - index

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs21.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs21.yml b/website/_data/docs21.yml
index 1d5e8db..c05b5e5 100644
--- a/website/_data/docs21.yml
+++ b/website/_data/docs21.yml
@@ -49,7 +49,6 @@
 - title: Integration
   docs:
   - tutorial/odbc
-  - howto/howto_jdbc
   - tutorial/tableau
   - tutorial/tableau_91
   - tutorial/powerbi
@@ -74,6 +73,3 @@
   - howto/howto_update_coprocessor
   - howto/howto_install_ranger_kylin_plugin
   - howto/howto_enable_zookeeper_acl
-  - howto/howto_setup_systemcube
-  - howto/howto_use_cube_planner
-  - howto/howto_use_dashboard

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs23-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs23-cn.yml b/website/_data/docs23-cn.yml
new file mode 100644
index 0000000..51ad75d
--- /dev/null
+++ b/website/_data/docs23-cn.yml
@@ -0,0 +1,41 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+- title: 开始
+  docs:
+  - index
+
+- title: 安装
+  docs:
+  - install/manual_install_guide
+
+- title: 教程
+  docs:
+  - tutorial/create_cube
+  - tutorial/cube_build_job
+  - tutorial/acl
+  - tutorial/web
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/odbc
+  - tutorial/Qlik
+
+- title: 帮助
+  docs:
+  - howto/howto_backup_metadata
+  - howto/howto_build_cube_with_restapi
+  - howto/howto_cleanup_storage
+  - howto/howto_jdbc
+  - howto/howto_optimize_build

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_data/docs23.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs23.yml b/website/_data/docs23.yml
new file mode 100644
index 0000000..cf0550a
--- /dev/null
+++ b/website/_data/docs23.yml
@@ -0,0 +1,78 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Docs menu items, for the English site; docs23-cn.yml is for the Chinese one
+# The docs menu is constructed in docs23_nav.html with these data
+- title: Getting Started
+  docs:
+  - index
+  - release_notes
+  - gettingstarted/concepts
+  - gettingstarted/terminology
+  - gettingstarted/faq
+  - gettingstarted/events
+  - gettingstarted/best_practices
+
+- title: Installation
+  docs:
+  - install/index
+  - install/hadoop_env
+  - install/manual_install_guide
+  - install/kylin_cluster
+  - install/advance_settings
+  - install/kylin_docker
+  - install/kylin_aws_emr
+
+- title: Tutorial
+  docs:
+  - tutorial/kylin_sample
+  - tutorial/web
+  - tutorial/create_cube
+  - tutorial/cube_build_job
+  - tutorial/acl
+  - tutorial/project_level_acl
+  - tutorial/cube_spark
+  - tutorial/cube_build_performance
+
+- title: Integration
+  docs:
+  - tutorial/odbc
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/microstrategy
+  - tutorial/squirrel
+  - tutorial/flink
+  - tutorial/hue
+  - tutorial/Qlik
+
+- title: How To
+  docs:
+  - howto/howto_use_restapi
+  - howto/howto_build_cube_with_restapi
+  - howto/howto_optimize_cubes
+  - howto/howto_optimize_build
+  - howto/howto_backup_metadata
+  - howto/howto_cleanup_storage
+  - howto/howto_jdbc
+  - howto/howto_upgrade
+  - howto/howto_ldap_and_sso
+  - howto/howto_use_beeline
+  - howto/howto_update_coprocessor
+  - howto/howto_install_ranger_kylin_plugin
+  - howto/howto_enable_zookeeper_acl
+  - howto/howto_setup_systemcube
+  - howto/howto_use_cube_planner
+  - howto/howto_use_dashboard

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/gettingstarted/best_practices.md
----------------------------------------------------------------------
diff --git a/website/_docs23/gettingstarted/best_practices.md b/website/_docs23/gettingstarted/best_practices.md
new file mode 100644
index 0000000..07da3e4
--- /dev/null
+++ b/website/_docs23/gettingstarted/best_practices.md
@@ -0,0 +1,27 @@
+---
+layout: docs23
+title:  "Community Best Practices"
+categories: gettingstarted
+permalink: /docs23/gettingstarted/best_practices.html
+since: v1.3.x
+---
+
+List of articles about Kylin best practices contributed by the community. Some of them are from the Chinese community. Many thanks!
+
+* [Apache Kylin在百度地图的实践](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
+
+* [Apache Kylin 大数据时代的OLAP利器](http://www.bitstech.net/2016/01/04/kylin-olap/)(网易案例)
+
+* [Apache Kylin在云海的实践](http://www.csdn.net/article/2015-11-27/2826343)(京东案例)
+
+* [Kylin, Mondrian, Saiku系统的整合](http://tech.youzan.com/kylin-mondrian-saiku/)(有赞案例)
+
+* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
+
+* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
+
+* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/gettingstarted/concepts.md
----------------------------------------------------------------------
diff --git a/website/_docs23/gettingstarted/concepts.md b/website/_docs23/gettingstarted/concepts.md
new file mode 100644
index 0000000..226546f
--- /dev/null
+++ b/website/_docs23/gettingstarted/concepts.md
@@ -0,0 +1,64 @@
+---
+layout: docs23
+title:  "Technical Concepts"
+categories: gettingstarted
+permalink: /docs23/gettingstarted/concepts.html
+since: v1.2
+---
+ 
+Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
+For domain terminology, please refer to: [Terminology](terminology.html)
+
+## CUBE
+* __Table__ - This is the definition of the Hive tables that serve as the source of cubes; tables must be synced before building cubes.
+![](/images/docs/concepts/DataSource.png)
+
+* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and filter conditions.
+![](/images/docs/concepts/DataModel.png)
+
+* __Cube Descriptor__ - This describes the definition and settings of a cube instance: which data model to use, what dimensions and measures it has, how to partition into segments, how to handle auto-merge, etc.
+![](/images/docs/concepts/CubeDesc.png)
+
+* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor and consisting of one or more cube segments according to the partition settings.
+![](/images/docs/concepts/CubeInstance.png)
+
+* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments covering different date periods.
+![](/images/docs/concepts/Partition.png)
+
+* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. Once data changes in a given period, we can refresh the related segments to avoid rebuilding the whole cube.
+![](/images/docs/concepts/CubeSegment.png)
+
+* __Aggregation Group__ - Each aggregation group is a subset of dimensions; cuboids are built from the combinations inside it. It aims at pruning for optimization.
+![](/images/docs/concepts/AggregationGroup.png)
+
+## DIMENSION & MEASURE
+* __Mandatory__ - This dimension type is used for cuboid pruning; if a dimension is specified as “mandatory”, then all combinations without this dimension are pruned.
+* __Hierarchy__ - This dimension type is used for cuboid pruning; if dimensions A, B, C form a “hierarchy” relation, then only combinations with A, AB or ABC shall remain.
+* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK of the fact table. Such dimensions are DERIVED and don't participate in cuboid generation.
+![](/images/docs/concepts/Dimension.png)
+
+* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is hard to calculate, so an approximate algorithm, [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), is introduced to keep the error rate at a low level.
+* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int and bigint are supported.
+* __Top N__ - With this measure type, users can easily get, for example, the top N sellers/buyers, etc.
+![](/images/docs/concepts/Measure.png)
+
+## CUBE ACTIONS
+* __BUILD__ - Given an interval of partition column, this action is to build a new cube segment.
+* __REFRESH__ - This action rebuilds a cube segment of a given partition period; it is used when the source data of that period has changed.
+* __MERGE__ - This action merges multiple contiguous cube segments into a single one. It can be automated with the auto-merge settings in the cube descriptor.
+* __PURGE__ - Clear the segments under a cube instance. This only updates metadata and won't delete the cube data from HBase.
+![](/images/docs/concepts/CubeAction.png)
+
+## JOB STATUS
+* __NEW__ - The job has just been created.
+* __PENDING__ - The job is paused by the job scheduler and waiting for resources.
+* __RUNNING__ - The job is in progress.
+* __FINISHED__ - The job finished successfully.
+* __ERROR__ - The job aborted with errors.
+* __DISCARDED__ - The job was cancelled by the end user.
+![](/images/docs/concepts/Job.png)
+
+## JOB ACTION
+* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
+* __DISCARD__ - No matter what status a job is in, the user can end it and release resources with the DISCARD action.
+![](/images/docs/concepts/JobAction.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/gettingstarted/events.md
----------------------------------------------------------------------
diff --git a/website/_docs23/gettingstarted/events.md b/website/_docs23/gettingstarted/events.md
new file mode 100644
index 0000000..35d02fb
--- /dev/null
+++ b/website/_docs23/gettingstarted/events.md
@@ -0,0 +1,24 @@
+---
+layout: docs23
+title:  "Events and Conferences"
+categories: gettingstarted
+permalink: /docs23/gettingstarted/events.html
+---
+
+__Conferences__
+
+* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
+* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
+* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015 in San Jose, US, 2015-06-09
+* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
+* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
+* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
+* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
+* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
+* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
+* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
+
+__Meetup__
+
+* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/gettingstarted/faq.md
----------------------------------------------------------------------
diff --git a/website/_docs23/gettingstarted/faq.md b/website/_docs23/gettingstarted/faq.md
new file mode 100644
index 0000000..e02acfe
--- /dev/null
+++ b/website/_docs23/gettingstarted/faq.md
@@ -0,0 +1,119 @@
+---
+layout: docs23
+title:  "FAQ"
+categories: gettingstarted
+permalink: /docs23/gettingstarted/faq.html
+since: v0.6.x
+---
+
+#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars locally, but Kylin reports errors like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat" or "java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState"
+
+  * Kylin needs many dependency jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but it doesn't ship them. It seeks these jars on your local machine by running commands like `hbase classpath`, `hive -e set` etc. The paths of the found jars are appended to the environment variable *HBASE_CLASSPATH* (Kylin uses the `hbase` shell command to start up, which reads this variable). But in some Hadoop distributions (like AWS EMR 5.0), the `hbase` shell doesn't keep the original `HBASE_CLASSPATH` value, which causes the "NoClassDefFoundError".
+
+  * To fix this, find the hbase shell script (in the hbase/bin folder) and search for *HBASE_CLASSPATH*; check whether it overwrites the value like:
+
+  {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
+  {% endhighlight %}
+
+  * If true, change it to keep the original value like:
+
+   {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
+  {% endhighlight %}
+
+#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
+
+  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)). Usually a dimension's cardinality is below the millions, so the "dict" encoding is good to use. As the dictionary needs to be persisted and loaded into memory, a dimension with very high cardinality would have a tremendous memory footprint, so Kylin adds a check on this. If you see this error, first identify the ultra-high-cardinality (UHC) dimension and re-evaluate the design (does it really need to be a dimension?). If you must keep it, you can bypass this error in a couple of ways: 1) change to another encoding (like `fixed_length`, `integer`), or 2) set a bigger value for `kylin.dictionary.max.cardinality` in `conf/kylin.properties`, as sketched below.
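+
+  For instance, a minimal sketch of option 2 (the value below is only an illustration; choose one based on your actual cardinality):
+
+  {% highlight Groff markup %}
+  # conf/kylin.properties
+  kylin.dictionary.max.cardinality=10000000
+  {% endhighlight %}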
+
+#### 3. Build cube failed due to "error check status"
+
+  * Check if `kylin.log` contains *yarn.resourcemanager.webapp.address:http://0.0.0.0:8088* and *java.net.ConnectException: Connection refused*
+  * If yes, the problem is that the resource manager address is not available in yarn-site.xml
+  * A workaround is to update `kylin.properties` and set `kylin.job.yarn.app.rest.check.status.url=http://YOUR_RM_NODE:8088/ws/v1/cluster/apps/${job_id}?anonymous=true`
+
+#### 4. HBase cannot get master address from ZooKeeper on Hortonworks Sandbox
+   
+  * By default Hortonworks disables HBase; you'll have to start HBase from the Ambari homepage first.
+
+#### 5. Map Reduce Job information cannot display on Hortonworks Sandbox
+   
+  * Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
+
+#### 6. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
+
+  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
+
+  {% highlight Groff markup %}
+  I was able to deploy Kylin with following option in POM.
+  <hadoop2.version>2.5.0</hadoop2.version>
+  <yarn.version>2.5.0</yarn.version>
+  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
+  <zookeeper.version>3.4.5</zookeeper.version>
+  <hive.version>0.13.1</hive.version>
+  My Cluster is running on Cloudera Distribution CDH 5.2.0.
+  {% endhighlight %}
+
+
+#### 7. SUM(field) returns a negative result while all the numbers in this field are > 0
+  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type of "SUM(field)", while the aggregated value on this field may exceed the range of integer; in that case the cast causes a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive and then sync the table schema to Kylin (the cube doesn't need rebuilding); see the sketch below. Keep in mind: always declare a column as BIGINT in Hive if it will be used as a measure in Kylin. See Hive numeric types: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes)
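+
+  A minimal sketch of the workaround, run from the Hadoop CLI (the table and column names are hypothetical):
+
+  {% highlight Groff markup %}
+  # widen the measure column in Hive, then re-sync the table schema in Kylin's web UI
+  hive -e "ALTER TABLE my_fact_table CHANGE gmv gmv BIGINT;"
+  {% endhighlight %}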
+
+#### 8. Why does Kylin need to extract the distinct columns from the fact table before building the cube?
+  * Kylin uses a dictionary to encode the values in each column, which greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values of each column.
+
+#### 9. Why does Kylin calculate the Hive table cardinality?
+  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
+
+#### 10. How to add a new user or change the default password?
+  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
+
+   {% highlight Groff markup %}
+   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+   {% endhighlight %}
+
+  * The password hashes of the pre-defined test users can be found in the "sandbox,testing" profile part. To change the default password, you need to generate a new hash and then update it there; please refer to the code snippet in [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input), or the command-line sketch below.
+  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
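+
+  If the Apache `htpasswd` utility is available (an assumption; it ships with httpd/apache2-utils, not with Kylin), one way to generate a BCrypt hash from the command line is:
+
+   {% highlight Groff markup %}
+   # -n prints the result, -B selects bcrypt, -C 10 sets the cost factor
+   htpasswd -bnBC 10 "" MyNewPassword | tr -d ':\n'
+   # note: Spring Security expects the "$2a$" prefix; replace a leading "$2y$" if needed
+   {% endhighlight %}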
+
+#### 11. Using a sub-query for unsupported SQL
+
+{% highlight Groff markup %}
+Original SQL:
+select fact.slr_sgmt,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from ih_daily_fact fact
+inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+group by fact.slr_sgmt
+{% endhighlight %}
+
+{% highlight Groff markup %}
+Using sub-query
+select a.slr_sgmt,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from (
+    select fact.slr_sgmt as slr_sgmt,
+    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
+    sum(gmv) as gmv36,
+    sum(gmv) as gmv35
+    from ih_daily_fact fact
+    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
+) a
+group by a.slr_sgmt
+{% endhighlight %}
+
+#### 12. NPM errors when building Kylin (users in mainland China, please pay special attention to this issue)
+
+  * Please add proxy for your NPM:  
+  `npm config set proxy http://YOUR_PROXY_IP`
+
+  * Please update your local NPM repository to use a mirror of npmjs.org, like Taobao NPM (see the example below):  
+  [http://npm.taobao.org](http://npm.taobao.org)
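+
+  For example, pointing NPM at the Taobao mirror (the registry URL is derived from the link above and may change over time):
+
+  {% highlight Groff markup %}
+  npm config set registry http://registry.npm.taobao.org
+  {% endhighlight %}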
+
+#### 13. Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
+  * Users may get this error the first time they run the hbase client; please check the error trace to see whether it complains about accessing a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/gettingstarted/terminology.md
----------------------------------------------------------------------
diff --git a/website/_docs23/gettingstarted/terminology.md b/website/_docs23/gettingstarted/terminology.md
new file mode 100644
index 0000000..31c0ce6
--- /dev/null
+++ b/website/_docs23/gettingstarted/terminology.md
@@ -0,0 +1,25 @@
+---
+layout: docs23
+title:  "Terminology"
+categories: gettingstarted
+permalink: /docs23/gettingstarted/terminology.html
+since: v0.5.x
+---
+ 
+
+Here are some domain terms we are using in Apache Kylin; please check them for your reference.
+They are basic knowledge of Apache Kylin and will also help you understand the concepts, terms, and theory of Data Warehousing and Business Intelligence analytics.
+
+* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
+* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
+* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
+* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
+* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
+* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
+* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
+* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
+* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
+* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_backup_metadata.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_backup_metadata.cn.md b/website/_docs23/howto/howto_backup_metadata.cn.md
new file mode 100644
index 0000000..07ec135
--- /dev/null
+++ b/website/_docs23/howto/howto_backup_metadata.cn.md
@@ -0,0 +1,59 @@
+---
+layout: docs23-cn
+title:  备份元数据
+categories: 帮助
+permalink: /cn/docs23/howto/howto_backup_metadata.html
+---
+
+Kylin将它全部的元数据(包括cube描述和实例、项目、倒排索引描述和实例、任务、表和字典)组织成层级文件系统的形式。然而,Kylin使用hbase来存储元数据,而不是一个普通的文件系统。如果你查看过Kylin的配置文件(kylin.properties),你会发现这样一行:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+这表明元数据会被保存在一个叫作“kylin_metadata”的htable里。你可以在hbase shell里scan该htable来获取它。
+
+## 使用二进制包来备份Metadata Store
+
+有时你需要将Kylin的Metadata Store从hbase备份到磁盘文件系统。在这种情况下,假设你在部署Kylin的hadoop命令行(或沙盒)里,你可以到KYLIN_HOME并运行:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+来将你的元数据导出到本地目录,这个目录在KYLIN_HOME/meta_backups下,它的命名规则使用了当前时间作为参数:KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second 。
+
+## 使用二进制包来恢复Metadata Store
+
+万一你发现你的元数据被搞得一团糟,想要恢复先前的备份:
+
+首先,重置Metadata Store(这个会清理Kylin在hbase的Metadata Store的所有信息,请确保先备份):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+然后上传备份的元数据到Kylin的Metadata Store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## 在开发环境备份/恢复元数据(0.7.3版本以上可用)
+
+在开发调试Kylin时,典型的环境是一台装有IDE的开发机和一个后台的沙盒,通常你会写代码并在开发机上运行测试案例,但每次都需要将二进制包放到沙盒里以检查元数据是很麻烦的。这时有一个名为SandboxMetastoreCLI的工具类,可以帮助你在开发机本地下载/上传元数据。
+
+## 从Metadata Store清理无用的资源(0.7.3版本以上可用)
+随着运行时间增长,类似字典、表快照的资源变得没有用(cube segment被丢弃或者合并了),但是它们依旧占用空间,你可以运行命令来找到并清除它们:
+
+首先,运行一个检查,这是安全的因为它不会改变任何东西:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+将要被删除的资源会被列出来:
+
+接下来,增加“--delete true”参数来清理这些资源;在这之前,你应该确保已经备份metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_backup_metadata.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_backup_metadata.md b/website/_docs23/howto/howto_backup_metadata.md
new file mode 100644
index 0000000..e2e9850
--- /dev/null
+++ b/website/_docs23/howto/howto_backup_metadata.md
@@ -0,0 +1,60 @@
+---
+layout: docs23
+title:  Backup Metadata
+categories: howto
+permalink: /docs23/howto/howto_backup_metadata.html
+---
+
+Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties) you will find such a line:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+This indicates that the metadata will be saved in an HTable called `kylin_metadata`. You can scan the HTable in hbase shell to check it out, as sketched below.
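+
+For a quick peek (a sketch; the LIMIT clause just keeps the output short):
+
+{% highlight Groff markup %}
+hbase shell
+# then, inside the shell:
+scan 'kylin_metadata', {LIMIT => 5}
+{% endhighlight %}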
+
+## Backup Metadata Store with binary package
+
+Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
+In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
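+
+For example, a backup folder might look like this (the timestamp is illustrative):
+
+{% highlight Groff markup %}
+ls $KYLIN_HOME/meta_backups/
+# meta_2018_01_29_12_14_16
+{% endhighlight %}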
+
+## Restore Metadata Store with binary package
+
+In case you find your metadata store messed up and you want to restore a previous backup:
+
+Firstly, reset the metadata store (this will clean everything of the Kylin metadata store in HBase; make sure to back up first):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+Then upload the backup metadata to Kylin's metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## Backup/restore metadata in development env (available since 0.7.3)
+
+When developing/debugging Kylin, typically you have a dev machine with an IDE and a backend sandbox. Usually you'll write code and run test cases on the dev machine; it would be troublesome if you always had to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow its usage information and run it in your IDE.
+
+## Cleanup unused resources from Metadata Store (available since 0.7.3)
+As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take space; you can run a command to find and clean them up from the metadata store:
+
+Firstly, run a check; this is safe as it will not change anything:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+The resources that will be dropped will be listed;
+
+Next, add the "--delete true" parameter to clean up those resources; before doing this, make sure you have made a backup of the metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_build_cube_with_restapi.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_build_cube_with_restapi.cn.md b/website/_docs23/howto/howto_build_cube_with_restapi.cn.md
new file mode 100644
index 0000000..c5e8fc3
--- /dev/null
+++ b/website/_docs23/howto/howto_build_cube_with_restapi.cn.md
@@ -0,0 +1,54 @@
+---
+layout: docs23-cn
+title:  用API构建cube
+categories: 帮助
+permalink: /cn/docs23/howto/howto_build_cube_with_restapi.html
+---
+
+### 1. 认证
+*   目前Kylin使用[basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication)。
+*   给第一个请求加上用于认证的 Authorization 头部。
+*   或者进行一个特定的请求: POST http://localhost:7070/kylin/api/user/authentication 。
+*   完成认证后, 客户端可以在接下来的请求里带上cookie。
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2. 获取Cube的详细信息
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   用户可以在返回的cube详细信息里找到cube的segment日期范围。
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 3.	然后提交cube构建任务
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   关于 put 的请求体细节请参考 Build Cube API
+    *   `startTime` 和 `endTime` 应该是utc时间。
+    *   `buildType` 可以是 `BUILD` 、 `MERGE` 或 `REFRESH`。 `BUILD` 用于构建一个新的segment, `REFRESH` 用于刷新一个已有的segment, `MERGE` 用于合并多个已有的segment生成一个较大的segment。
+*   这个方法会返回一个新建的任务实例,它的uuid是任务的唯一id,用于追踪任务状态。
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	跟踪任务状态 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   返回的 `job_status` 代表job的当前状态。
+
+### 5.	如果构建任务出现错误,可以重新开始它
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_build_cube_with_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_build_cube_with_restapi.md b/website/_docs23/howto/howto_build_cube_with_restapi.md
new file mode 100644
index 0000000..3619e8c
--- /dev/null
+++ b/website/_docs23/howto/howto_build_cube_with_restapi.md
@@ -0,0 +1,53 @@
+---
+layout: docs23
+title:  Build Cube with API
+categories: howto
+permalink: /docs23/howto/howto_build_cube_with_restapi.html
+---
+
+### 1.	Authentication
+*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
+*   Add an `Authorization` header to the first request for authentication.
+*   Or you can do a specific request: `POST http://localhost:7070/kylin/api/user/authentication`
+*   Once authenticated, the client can issue subsequent requests with cookies.
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+    
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2.	Get details of the cube.
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   The client can find the cube segment date ranges in the returned cube detail.
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+### 3.	Then submit a build job of the cube.
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   For PUT request body details please refer to [Build Cube API](howto_use_restapi.html#build-cube).
+    *   `startTime` and `endTime` should be UTC timestamps.
+    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` for merging multiple existing segments into one bigger segment.
+*   This method returns the newly created job instance, whose uuid is the unique job id used to track job status.
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	Track job status. 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   The returned `job_status` represents the current status of the job.
+
+### 5.	If the job got errors, you can resume it. 
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
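+
+As a convenience, the whole flow can be scripted with curl; below is a minimal sketch (host, credentials and cube name are the sample values used above; `{job_uuid}` must be filled in from the build response):
+{% highlight Groff markup %}
+# authenticate once and keep the session cookie
+curl -c cookiefile.txt -X POST -H "Authorization: Basic $(echo -n ADMIN:KYLIN | base64)" \
+    -H "Content-Type: application/json;charset=UTF-8" http://localhost:7070/kylin/api/user/authentication
+
+# submit a build job; the response contains the job uuid
+curl -b cookiefile.txt -X PUT -H "Content-Type: application/json;charset=UTF-8" \
+    -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
+    http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+# poll the job status
+curl -b cookiefile.txt http://localhost:7070/kylin/api/jobs/{job_uuid}
+{% endhighlight %}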

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_cleanup_storage.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_cleanup_storage.cn.md b/website/_docs23/howto/howto_cleanup_storage.cn.md
new file mode 100644
index 0000000..b56ff54
--- /dev/null
+++ b/website/_docs23/howto/howto_cleanup_storage.cn.md
@@ -0,0 +1,21 @@
+---
+layout: docs23-cn
+title:  清理存储
+categories: 帮助
+permalink: /cn/docs23/howto/howto_cleanup_storage.html
+---
+
+Kylin在构建cube期间会在HDFS上生成中间文件;除此之外,当清理/删除/合并cube时,一些HBase表可能被遗留在HBase却以后再也不会被查询;虽然Kylin已经开始做自动化的垃圾回收,但不一定能覆盖到所有的情况;你可以定期做离线的存储清理:
+
+步骤:
+1. 检查哪些资源可以清理,这一步不会删除任何东西:
+{% highlight Groff markup %}
+export KYLIN_HOME=/path/to/kylin_home
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
+{% endhighlight %}
+2. 你可以抽查一两个资源来检查它们是否已经没有被引用了;然后加上“--delete true”选项进行清理。
+{% highlight Groff markup %}
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
+{% endhighlight %}
+完成后,HDFS上的中间文件和HTable会被移除。

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_cleanup_storage.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_cleanup_storage.md b/website/_docs23/howto/howto_cleanup_storage.md
new file mode 100644
index 0000000..fc89a79
--- /dev/null
+++ b/website/_docs23/howto/howto_cleanup_storage.md
@@ -0,0 +1,22 @@
+---
+layout: docs23
+title:  Cleanup Storage
+categories: howto
+permalink: /docs23/howto/howto_cleanup_storage.html
+---
+
+Kylin generates intermediate files in HDFS during cube building; besides, when you purge/drop/merge cubes, some HBase tables may be left in HBase and will no longer be queried. Although Kylin has started to do some
+automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically (see the cron sketch at the end of this page):
+
+Steps:
+1. Check which resources can be cleaned up; this will not remove anything:
+{% highlight Groff markup %}
+export KYLIN_HOME=/path/to/kylin_home
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false
+{% endhighlight %}
+2. You can pick one or two resources to check whether they are no longer referenced; then add the "--delete true" option to start the cleanup:
+{% highlight Groff markup %}
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
+{% endhighlight %}
+On finish, the intermediate HDFS files and the HTables will be dropped.
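+
+To run the check periodically, one option is a cron entry like the sketch below (the schedule and log path are assumptions; KYLIN_HOME must be set inside the crontab):
+{% highlight Groff markup %}
+# every Sunday at 3am, report (but do not delete) leftover resources
+0 3 * * 0 ${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false >> /tmp/kylin_cleanup.log 2>&1
+{% endhighlight %}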

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_enable_zookeeper_acl.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_enable_zookeeper_acl.md b/website/_docs23/howto/howto_enable_zookeeper_acl.md
new file mode 100644
index 0000000..7387813
--- /dev/null
+++ b/website/_docs23/howto/howto_enable_zookeeper_acl.md
@@ -0,0 +1,20 @@
+---
+layout: docs23
+title:  Enable zookeeper acl
+categories: howto
+permalink: /docs23/howto/howto_enable_zookeeper_acl.html
+---
+
+Edit $KYLIN_HOME/conf/kylin.properties to add the following configuration items:
+
+* Add "kylin.env.zookeeper.zk-auth". It is the configuration item you can specify the zookeeper authenticated information. Its formats is "scheme:id". The value of scheme that the zookeeper supports is "world", "auth", "digest", "ip" or "super". The "id" is the authenticated information of the scheme. For example:
+
+    `kylin.env.zookeeper.zk-auth=digest:ADMIN:KYLIN`
+
+    The scheme is "digest". The id is "ADMIN:KYLIN", which expresses "username:password".
+
+* Add "kylin.env.zookeeper.zk-acl". It is the configuration item you can set access permission. Its formats is "scheme:id:permissions". The value of permissions that the zookeeper supports is "READ", "WRITE", "CREATE", "DELETE" or "ADMIN". For example, we configure that everyone has all the permissions:
+
+    `kylin.env.zookeeper.zk-acl=world:anyone:rwcda`
+
+    The scheme is "world", the id is "anyone", and the permissions are "rwcda".

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_install_ranger_kylin_plugin.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_install_ranger_kylin_plugin.md b/website/_docs23/howto/howto_install_ranger_kylin_plugin.md
new file mode 100644
index 0000000..43d836a
--- /dev/null
+++ b/website/_docs23/howto/howto_install_ranger_kylin_plugin.md
@@ -0,0 +1,8 @@
+---
+layout: docs23
+title:  The Ranger Kylin Plugin Installation Guide
+categories: howto
+permalink: /docs23/howto/howto_install_ranger_kylin_plugin.html
+---
+
+Please refer to [https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin](https://cwiki.apache.org/confluence/display/RANGER/Kylin+Plugin).

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_jdbc.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_jdbc.cn.md b/website/_docs23/howto/howto_jdbc.cn.md
new file mode 100644
index 0000000..e40114f
--- /dev/null
+++ b/website/_docs23/howto/howto_jdbc.cn.md
@@ -0,0 +1,92 @@
+---
+layout: docs23-cn
+title:  Kylin JDBC Driver
+categories: 帮助
+permalink: /cn/docs23/howto/howto_jdbc.html
+---
+
+### 认证
+
+###### 基于Apache Kylin认证RESTFUL服务。支持的参数:
+* user : 用户名
+* password : 密码
+* ssl: true或false。默认为false;如果为true,所有的服务调用都会使用https。
+
+### 连接url格式:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* 如果“ssl”为true,“port”应该是Kylin server的HTTPS端口。
+* 如果“port”未被指定,driver会使用默认的端口:HTTP 80,HTTPS 443。
+* 必须指定“kylin_project_name”并且用户需要确保它在Kylin server上存在。
+
+### 1. 使用Statement查询
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported PreparedStatement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports the metadata listing methods:
+list catalogs, schemas, tables and columns with SQL pattern filters (such as "%").
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
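+
+Similarly, you can list columns with pattern filters. A minimal sketch reusing the connection above (the "TEST_%" pattern is a placeholder; getColumns and its result columns are standard JDBC):
+
+{% highlight Groff markup %}
+// List every column of tables whose names start with "TEST_", in any schema.
+ResultSet columns = conn.getMetaData().getColumns(null, "%", "TEST_%", "%");
+while (columns.next()) {
+    System.out.println(columns.getString("TABLE_NAME") + "." + columns.getString("COLUMN_NAME"));
+}
+{% endhighlight %}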

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_jdbc.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_jdbc.md b/website/_docs23/howto/howto_jdbc.md
new file mode 100644
index 0000000..7243436
--- /dev/null
+++ b/website/_docs23/howto/howto_jdbc.md
@@ -0,0 +1,92 @@
+---
+layout: docs23
+title:  Kylin JDBC Driver
+categories: howto
+permalink: /docs23/howto/howto_jdbc.html
+---
+
+### Authentication
+
+###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
+* user : username
+* password : password
+* ssl : true or false. Defaults to false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported PreparedStatement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports the metadata listing methods:
+list catalogs, schemas, tables and columns with SQL pattern filters (such as "%").
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}
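+
+Similarly, you can list columns with pattern filters. A minimal sketch reusing the connection above (the "TEST_%" pattern is a placeholder; getColumns and its result columns are standard JDBC):
+
+{% highlight Groff markup %}
+// List every column of tables whose names start with "TEST_", in any schema.
+ResultSet columns = conn.getMetaData().getColumns(null, "%", "TEST_%", "%");
+while (columns.next()) {
+    System.out.println(columns.getString("TABLE_NAME") + "." + columns.getString("COLUMN_NAME"));
+}
+{% endhighlight %}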

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_ldap_and_sso.md b/website/_docs23/howto/howto_ldap_and_sso.md
new file mode 100644
index 0000000..4083da5
--- /dev/null
+++ b/website/_docs23/howto/howto_ldap_and_sso.md
@@ -0,0 +1,128 @@
+---
+layout: docs23
+title: Secure with LDAP and SSO
+categories: howto
+permalink: /docs23/howto/howto_ldap_and_sso.html
+---
+
+## Enable LDAP authentication
+
+Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, such as the LDAP server URL, username/password, and search patterns.
+
+#### Configure LDAP server info
+
+Firstly, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value:
+
+```
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+java -classpath kylin-server-base-<version>.jar:spring-beans-3.2.17.RELEASE.jar:spring-core-3.2.17.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
+```
+
+Configure them in conf/kylin.properties:
+
+```
+ldap.server=ldap://<your_ldap_host>:<port>
+ldap.username=<your_user_name>
+ldap.password=<your_password_encrypted>
+```
+
+Secondly, provide the user search patterns; these depend on your LDAP design, so here is just a sample:
+
+```
+ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
+ldap.user.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
+ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```
+
+If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
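+
+A sketch, assuming the ldap.service.* keys mirror the ldap.user.* naming above (verify the exact keys against your kylin.properties template):
+
+```
+ldap.service.searchBase=OU=ServiceAccounts,DC=mycompany,DC=com
+ldap.service.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-SERVICES,DC=mycompany,DC=com))
+ldap.service.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```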
+
+#### Configure the administrator group and default role
+
+To map an LDAP group to the admin group in Kylin, set "kylin.security.acl.admin-role" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
+
+```
+kylin.security.acl.admin-role=ROLE_KYLIN-ADMIN-GROUP
+kylin.security.acl.default-role=ROLE_ANALYST,ROLE_MODELER
+```
+
+The "kylin.security.acl.default-role" is a list of the default roles that grant to everyone, keep it as-is.
+
+#### Enable LDAP
+
+Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
+
+## Enable SSO authentication
+
+From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
+
+Before trying this, you should have successfully enabled LDAP and managed users with it; as the SSO server may only do authentication, Kylin needs to search LDAP to get the user's detailed information.
+
+### Generate IDP metadata xml
+Contact your IDP (identity provider) to generate the SSO metadata file; usually you need to provide three pieces of information:
+
+  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
+  2. App callback endpoint, to which the SAML assertion will be posted; it needs to be: https://host-name/kylin/saml/SSO
+  3. Public certificate of the Kylin server; the SSO server will encrypt the messages with it.
+
+### Generate JKS keystore for Kylin
+As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
+
+Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
+
+```
+$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
+Enter Export Password: <export_pwd>
+Verifying - Enter Export Password: <export_pwd>
+
+
+$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
+
+Enter destination keystore password:  changeit
+Re-enter new password: changeit
+```
+
+This puts the keys into "samlKeystore.jks" under the alias "kylin".
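+
+To double-check the result, you can list the keystore content (a sketch; keytool prompts for the password if -storepass is omitted):
+
+```
+$ keytool -list -keystore samlKeystore.jks -storepass changeit
+```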
+
+### Enable Higher Ciphers
+
+Make sure your environment is ready to handle higher-strength crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, then copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
+
+### Deploy IDP xml file and keystore to Kylin
+
+The IDP metadata and keystore file need to be deployed on the Kylin web app's classpath, at $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes:
+
+  1. Name the IDP metadata file "sso_metadata.xml" and copy it to Kylin's classpath;
+  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
+  3. If you used another alias or password, remember to update kylinSecurity.xml accordingly:
+
+```
+<!-- Central storage of cryptographic keys -->
+<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
+	<constructor-arg value="classpath:samlKeystore.jks"/>
+	<constructor-arg type="java.lang.String" value="changeit"/>
+	<constructor-arg>
+		<map>
+			<entry key="kylin" value="changeit"/>
+		</map>
+	</constructor-arg>
+	<constructor-arg type="java.lang.String" value="kylin"/>
+</bean>
+
+```
+
+### Other configurations
+In conf/kylin.properties, add the following properties with your server information:
+
+```
+saml.metadata.entityBaseURL=https://host-name/kylin
+saml.context.scheme=https
+saml.context.serverName=host-name
+saml.context.serverPort=443
+saml.context.contextPath=/kylin
+```
+
+Please note, Kylin assumes there is an "email" attribute in the SAML message representing the login user, and the name before the @ will be used to search in LDAP. 
+
+### Enable SSO
+Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/40a53fe3/website/_docs23/howto/howto_optimize_build.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs23/howto/howto_optimize_build.cn.md b/website/_docs23/howto/howto_optimize_build.cn.md
new file mode 100644
index 0000000..6103acf
--- /dev/null
+++ b/website/_docs23/howto/howto_optimize_build.cn.md
@@ -0,0 +1,166 @@
+---
+layout: docs23-cn
+title:  Optimize Cube Build
+categories: 帮助
+permalink: /cn/docs23/howto/howto_optimize_build.html
+---
+
+Kylin decomposes a cube build task into several steps that run in sequence, including Hive operations, MapReduce operations and operations of other types. If you have many cube build jobs to run every day, you will surely want to cut the time they consume. Below are some optimization tips, in the order of the cube build steps.
+
+## Create Intermediate Flat Hive Table
+
+This step extracts data from the source Hive table (together with all the joined tables) and inserts it into an intermediate flat table. If the cube is partitioned, Kylin adds a time condition so that only data within the time range is extracted. You can check the related Hive command in the log of this step, e.g.:
+
+```
+hive -e "USE default;
+DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
+
+CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
+(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
+STORED AS SEQUENCEFILE
+LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
+
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
+AIRLINE.FLIGHTDATE
+,AIRLINE.YEAR
+,AIRLINE.QUARTER
+,...
+,AIRLINE.ARRDELAYMINUTES
+FROM AIRLINE.AIRLINE as AIRLINE
+WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
+
+```
+
+While the Hive command is running, Kylin applies the configurations in `conf/kylin_hive_conf.xml`, e.g., keeping fewer redundant replications and enabling Hive's mapper side join. If needed, you can add other configurations there to suit your cluster.
+
+If the cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, filtering on it lets Hive smartly skip the non-matching partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the cube's partition column. This is almost a must for tables with huge data volumes; otherwise Hive has to scan all the files in this step every time, which takes a very long time.
+
+If file merge is enabled in Hive, you can disable it in `conf/kylin_hive_conf.xml`, as Kylin has its own way of merging files (see the next section):
+
+    <property>
+        <name>hive.merge.mapfiles</name>
+        <value>false</value>
+        <description>Disable Hive's auto merge</description>
+    </property>
+
+## Redistribute Intermediate Table
+
+After the previous step, Hive has generated the data files in a folder on HDFS: some files are big, while some are small or even empty. The imbalanced file distribution leads to data skew in the subsequent MR jobs: some mappers finish quickly while others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
+
+```
+total input rows = 159869711
+expected input rows per mapper = 1000000
+num reducers for RedistributeFlatHiveTableStep = 160
+
+```
+
+The redistribute command:
+
+```
+hive -e "USE default;
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+set mapreduce.job.reduces=160;
+set hive.merge.mapredfiles=false;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
+"
+```
+
+Firstly, Kylin gets the row count of the intermediate table; then, based on the row count, it calculates the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this sample there are 160 million rows and 160 reducers, so each reducer writes one file. In the subsequent MR steps over this table, Hadoop launches as many mappers as there are files to process the data (usually 1 million rows are smaller than one HDFS block). If your daily data volume is not that large, or your Hadoop cluster has enough resources, you may want more concurrency; set `kylin.job.mapreduce.mapper.input.rows` in `conf/kylin.properties` to a smaller value, e.g.:
+
+`kylin.job.mapreduce.mapper.input.rows=500000`
+
+Secondly, Kylin runs a HiveQL of the form *"INSERT OVERWRITE TABLE ... DISTRIBUTE BY "* to distribute the rows among the specified number of reducers.
+
+In most cases, Kylin asks Hive to distribute the rows among the reducers randomly so that the files are of similar size; the distribute clause is "DISTRIBUTE BY RAND()".
+
+If your cube specifies a high-cardinality column, say "USER_ID", as the "shard by" dimension (on the cube's "Advanced Settings" page), Kylin asks Hive to redistribute the data by that column's value, so rows with the same value go into the same file. This is much better than random distribution, because the data is not only redistributed but also pre-sorted without extra cost, which the subsequent cube build benefits from. In typical scenarios this optimization cuts the build time by 40%. In this case the distribute clause would be "DISTRIBUTE BY USER_ID"; a sketch follows (the table name is a placeholder, as the sample airline table has no USER_ID column):
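+
+```
+INSERT OVERWRITE TABLE <intermediate_table> SELECT * FROM <intermediate_table> DISTRIBUTE BY USER_ID;
+```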
+
+Please note: 1) The "shard by" column should be a high-cardinality dimension column, and it should appear in many cuboids (not just a few of them). Distributing by it gets an even distribution in every time range; otherwise it causes data skew and slows the build down. Typical good candidates are "USER_ID", "SELLER_ID", "PRODUCT", "CELL_NUMBER" and so on, whose cardinality should be greater than one thousand (far more than the number of reducers). 2) "Shard by" also benefits cube storage, but that is beyond the scope of this article.
+
+## Extract Fact Table Distinct Columns
+
+In this step Kylin runs an MR job to extract the distinct values of the dimension columns that use dictionary encoding.
+
+Actually this step does something more: it collects the cube statistics with HyperLogLog counters, which are used to estimate the row count of each cuboid. If you find the mappers work slowly, it usually means the cube design is too complex; please refer to
+[optimize cube design](howto_optimize_cubes.html) to make the cube thinner. If the reducers get OutOfMemory errors, it means either the cuboid combinations are really too many, or the YARN memory allocation cannot meet the need. If this step cannot finish in a reasonable time by any measure, you can give up the job and consider re-designing the cube, as continuing will cost even longer.
+
+You can speed this step up by reducing the sampling percentage (`kylin.job.cubing.inmem.sampling.percent`), but it may not help much and it impairs the accuracy of the cube statistics, so we do not recommend it.
+
+## Build Dimension Dictionary
+
+With the distinct values extracted in the previous step, Kylin builds the dictionaries in memory (this will be moved to MapReduce in the next release). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For such ultra-high-cardinality (UHC) columns, please use another encoding, such as "fixed_length", "integer" and so on.
+
+## Save Cuboid Statistics and Create HTable
+
+These two steps are lightweight and fast.
+
+## Build Base Cuboid
+
+This step builds the base cuboid from the intermediate Hive table; it is the first round of MR of the "by-layer" cubing algorithm. The number of mappers equals the number of reducers of step 2 (the redistribute step); the number of reducers is estimated from the cube statistics: by default one reducer per 500 MB of output. If you observe only a few reducers, you can set `kylin.job.mapreduce.default.reduce.input.mb` in kylin.properties to a smaller value to get more resources, e.g.:
+
+`kylin.job.mapreduce.default.reduce.input.mb=200`
+
+## Build N-Dimension Cuboid
+
+These steps are the "by-layer" cubing process: each step takes the output of the previous step as input, and removes one dimension to aggregate a child cuboid. For example, from cuboid ABCD, removing A gets cuboid BCD, and removing B gets cuboid ACD.
+
+Some cuboids can be aggregated from more than one parent cuboid; in that case, Kylin picks the smallest parent. For example, AB can be generated from ABC (id: 1110) or ABD (id: 1101); ABD is selected because its id is smaller than ABC's. Building on that, if D has small cardinality, the aggregation is cheap. So, when designing the rowkey sequence, remember to put dimensions of small cardinality at the end. This benefits not only cube building, but also cube querying, since post-aggregation follows the same rule.
+
+Usually the steps from the N-dimension layer down to the (N/2)-dimension layer are slow, because this is where the cuboid count explodes: the N-dimension layer has 1 cuboid, the (N-1)-dimension layer has N cuboids, the (N-2)-dimension layer has N*(N-1)/2 cuboids, and so on. After the (N/2)-dimension layer, the build gradually gets faster.
+
+## Build Cube
+
+This step builds the cube with a new algorithm: the "by-split" cubing (also called "in-mem" cubing). It computes all the cuboids in a single round of MR, but needs more memory than usual; the configuration file `conf/kylin_job_conf_inmem.xml` is just for this step. By default it requests 3 GB of memory for each mapper. If your cluster has enough memory, you can allocate more to the mappers in that file, so they cache as much data as possible for better performance, e.g.:
+
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>6144</value>
+        <description></description>
+    </property>
+    
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx5632m</value>
+        <description></description>
+    </property>
+
+
+Please note that Kylin automatically selects the better algorithm based on the data distribution (gathered from the cube statistics), and the steps of the algorithm not selected are skipped. You do not need to select the algorithm explicitly.
+
+## Convert Cuboid Data to HFile
+
+This step starts an MR job to convert the cuboid files (in sequence file format) into HBase's HFile format. Kylin calculates the number of HBase regions from the cube statistics, by default one region per 5 GB of data. The more regions, the more reducers the MR job uses. If you observe only a few reducers and poor performance, you can set the following parameters in `conf/kylin.properties` to smaller values, e.g.:
+
+```
+kylin.hbase.region.cut=2
+kylin.hbase.hfile.size.gb=1
+```
+
+If you are not sure how big a region should be, consult your HBase administrator.
+
+## Load HFile to HBase Table
+
+This step uses the HBase API to load the HFiles into the region servers; it is lightweight and fast.
+
+## Update Cube Info
+
+After the data is loaded into HBase, Kylin marks the corresponding cube segment as ready in the metadata.
+
+## Cleanup
+
+This step drops the intermediate wide table from Hive. It does not block anything, since the segment has already been marked ready in the previous step. If this step errors out, do not worry: the garbage can be collected later via Kylin's [StorageCleanupJob](howto_cleanup_storage.html), as sketched below.
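+
+A minimal sketch of invoking the cleanup job from the command line (the class name is the one used by Kylin 2.x; verify against the linked page for your version):
+
+```
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
+```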
+
+## Summary
+There are many other ways to improve Kylin's performance; if you have experience to share, you are welcome to discuss it on [dev@kylin.apache.org](mailto:dev@kylin.apache.org).