Posted to commits@kylin.apache.org by li...@apache.org on 2016/03/12 10:57:22 UTC

[6/6] kylin git commit: rename 2.x to 1.5 in documents

rename 2.x to 1.5 in documents


Project: http://git-wip-us.apache.org/repos/asf/kylin/repo
Commit: http://git-wip-us.apache.org/repos/asf/kylin/commit/516c1f16
Tree: http://git-wip-us.apache.org/repos/asf/kylin/tree/516c1f16
Diff: http://git-wip-us.apache.org/repos/asf/kylin/diff/516c1f16

Branch: refs/heads/document
Commit: 516c1f169fc3eed484bd074738f113bb3e1e4835
Parents: 5ca4c39
Author: Yang Li <li...@apache.org>
Authored: Sat Mar 12 17:09:08 2016 +0800
Committer: Yang Li <li...@apache.org>
Committed: Sat Mar 12 17:56:57 2016 +0800

----------------------------------------------------------------------
 website/_config.yml                             |    4 +-
 website/_data/docs15.yml                        |   58 +
 website/_data/docs2.yml                         |   58 -
 website/_dev/howto_test.md                      |   16 +-
 website/_docs/gettingstarted/concepts.md        |    2 +-
 website/_docs/howto/howto_ldap_and_sso.md       |   81 +-
 website/_docs/index.md                          |    2 +-
 website/_docs/release_notes.md                  |  303 +-----
 website/_docs15/gettingstarted/concepts.md      |   64 ++
 website/_docs15/gettingstarted/events.md        |   27 +
 website/_docs15/gettingstarted/faq.md           |   89 ++
 website/_docs15/gettingstarted/terminology.md   |   25 +
 website/_docs15/howto/howto_backup_hbase.md     |   28 +
 website/_docs15/howto/howto_backup_metadata.md  |   61 ++
 .../howto/howto_build_cube_with_restapi.md      |   54 +
 website/_docs15/howto/howto_cleanup_storage.md  |   22 +
 website/_docs15/howto/howto_jdbc.md             |   93 ++
 website/_docs15/howto/howto_ldap_and_sso.md     |  122 +++
 website/_docs15/howto/howto_optimize_cubes.md   |  213 ++++
 website/_docs15/howto/howto_upgrade.md          |   92 ++
 website/_docs15/howto/howto_use_restapi.md      | 1005 +++++++++++++++++
 .../_docs15/howto/howto_use_restapi_in_js.md    |   47 +
 website/_docs15/index.md                        |   54 +
 website/_docs15/install/advance_settings.md     |   44 +
 website/_docs15/install/hadoop_evn.md           |   34 +
 website/_docs15/install/index.md                |   46 +
 website/_docs15/install/kylin_cluster.md        |   29 +
 website/_docs15/install/kylin_docker.md         |   45 +
 website/_docs15/install/manual_install_guide.md |   47 +
 website/_docs15/release_notes.md                |  704 ++++++++++++
 website/_docs15/tutorial/acl.md                 |   34 +
 website/_docs15/tutorial/create_cube.md         |  128 +++
 website/_docs15/tutorial/cube_build_job.md      |   65 ++
 website/_docs15/tutorial/kylin_sample.md        |   22 +
 website/_docs15/tutorial/odbc.md                |   49 +
 website/_docs15/tutorial/powerbi.md             |   54 +
 website/_docs15/tutorial/tableau.md             |  114 ++
 website/_docs15/tutorial/tableau_91.md          |   50 +
 website/_docs15/tutorial/web.md                 |  138 +++
 website/_docs2/gettingstarted/concepts.md       |   65 --
 website/_docs2/gettingstarted/events.md         |   27 -
 website/_docs2/gettingstarted/faq.md            |   90 --
 website/_docs2/gettingstarted/terminology.md    |   26 -
 website/_docs2/howto/howto_backup_hbase.md      |   29 -
 website/_docs2/howto/howto_backup_metadata.md   |   62 --
 .../howto/howto_build_cube_with_restapi.md      |   55 -
 website/_docs2/howto/howto_cleanup_storage.md   |   23 -
 website/_docs2/howto/howto_jdbc.md              |   94 --
 website/_docs2/howto/howto_ldap_and_sso.md      |  124 ---
 website/_docs2/howto/howto_optimize_cubes.md    |  214 ----
 website/_docs2/howto/howto_upgrade.md           |   95 --
 website/_docs2/howto/howto_use_restapi.md       | 1006 ------------------
 website/_docs2/howto/howto_use_restapi_in_js.md |   48 -
 website/_docs2/index.md                         |   54 -
 website/_docs2/install/advance_settings.md      |   45 -
 website/_docs2/install/hadoop_evn.md            |   35 -
 website/_docs2/install/index.md                 |   47 -
 website/_docs2/install/kylin_cluster.md         |   30 -
 website/_docs2/install/kylin_docker.md          |   46 -
 website/_docs2/install/manual_install_guide.md  |   48 -
 website/_docs2/release_notes.md                 |  706 ------------
 website/_docs2/tutorial/acl.md                  |   35 -
 website/_docs2/tutorial/create_cube.md          |  129 ---
 website/_docs2/tutorial/cube_build_job.md       |   66 --
 website/_docs2/tutorial/kylin_sample.md         |   23 -
 website/_docs2/tutorial/odbc.md                 |   50 -
 website/_docs2/tutorial/powerbi.md              |   55 -
 website/_docs2/tutorial/tableau.md              |  115 --
 website/_docs2/tutorial/tableau_91.md           |   51 -
 website/_docs2/tutorial/web.md                  |  139 ---
 website/_includes/docs15_nav.html               |   33 +
 website/_includes/docs15_ul.html                |   29 +
 website/_includes/docs2_nav.html                |   33 -
 website/_includes/docs2_ul.html                 |   29 -
 website/_layouts/docs15.html                    |   50 +
 website/_layouts/docs2.html                     |   50 -
 .../_posts/blog/2016-02-03-streaming-cubing.md  |    4 +-
 .../blog/2016-02-18-new-aggregation-group.md    |   16 +-
 78 files changed, 3794 insertions(+), 4205 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_config.yml
----------------------------------------------------------------------
diff --git a/website/_config.yml b/website/_config.yml
index d9b9c89..c5f374a 100644
--- a/website/_config.yml
+++ b/website/_config.yml
@@ -27,7 +27,7 @@ encoding: UTF-8
 timezone: America/Dawson 
 
 exclude: ["README.md", "Rakefile", "*.scss", "*.haml", "*.sh"]
-include: [_docs,_docs2,_dev]
+include: [_docs,_docs15,_dev]
 
 # Build settings
 markdown: kramdown
@@ -56,7 +56,7 @@ language_default: 'en'
 collections:
   docs:
     output: true
-  docs2:
+  docs15:
     output: true
   docs-cn:
     output: true    

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_data/docs15.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs15.yml b/website/_data/docs15.yml
new file mode 100644
index 0000000..7f75946
--- /dev/null
+++ b/website/_data/docs15.yml
@@ -0,0 +1,58 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Docs menu items, for English one, docs15-cn.yml is for Chinese one
+# The docs menu is constructed in docs15_nav.html with these data
+- title: Getting Started
+  docs:
+  - index
+  - release_notes
+  - gettingstarted/faq
+  - gettingstarted/events
+  - gettingstarted/terminology
+  - gettingstarted/concepts
+
+- title: Installation
+  docs:
+  - install/index
+  - install/hadoop_env
+  - install/manual_install_guide
+  - install/kylin_cluster
+  - install/advance_settings
+  - install/kylin_docker
+
+- title: Tutorial
+  docs:
+  - tutorial/kylin_sample
+  - tutorial/create_cube
+  - tutorial/cube_build_job
+  - tutorial/acl
+  - tutorial/web
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/odbc
+
+- title: How To
+  docs:
+  - howto/howto_build_cube_with_restapi
+  - howto/howto_use_restapi_in_js
+  - howto/howto_use_restapi
+  - howto/howto_optimize_cubes
+  - howto/howto_backup_metadata
+  - howto/howto_cleanup_storage
+  - howto/howto_jdbc
+  - howto/howto_upgrade
+  - howto/howto_ldap_and_sso

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_data/docs2.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs2.yml b/website/_data/docs2.yml
deleted file mode 100644
index 70fdc1c..0000000
--- a/website/_data/docs2.yml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to you under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Docs menu items, for English one, docs2-cn.yml is for Chinese one
-# The docs menu is constructed in docs2_nav.html with these data
-- title: Getting Started
-  docs:
-  - index
-  - release_notes
-  - gettingstarted/faq
-  - gettingstarted/events
-  - gettingstarted/terminology
-  - gettingstarted/concepts
-
-- title: Installation
-  docs:
-  - install/index
-  - install/hadoop_env
-  - install/manual_install_guide
-  - install/kylin_cluster
-  - install/advance_settings
-  - install/kylin_docker
-
-- title: Tutorial
-  docs:
-  - tutorial/kylin_sample
-  - tutorial/create_cube
-  - tutorial/cube_build_job
-  - tutorial/acl
-  - tutorial/web
-  - tutorial/tableau
-  - tutorial/tableau_91
-  - tutorial/powerbi
-  - tutorial/odbc
-
-- title: How To
-  docs:
-  - howto/howto_build_cube_with_restapi
-  - howto/howto_use_restapi_in_js
-  - howto/howto_use_restapi
-  - howto/howto_optimize_cubes
-  - howto/howto_backup_metadata
-  - howto/howto_cleanup_storage
-  - howto/howto_jdbc
-  - howto/howto_upgrade
-  - howto/howto_ldap_and_sso

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_dev/howto_test.md
----------------------------------------------------------------------
diff --git a/website/_dev/howto_test.md b/website/_dev/howto_test.md
index 6aaa056..62a4e4a 100644
--- a/website/_dev/howto_test.md
+++ b/website/_dev/howto_test.md
@@ -7,22 +7,22 @@ permalink: /development/howto_test.html
 
 In general, there should be unit tests to cover individual classes; there must be integration tests to cover end-to-end scenarios like build, merge, and query. Unit tests must run independently (they do not require an external sandbox).
 
-## Test 2.x branches
+## Test v1.5 and above
 
 * `mvn test` to run unit tests, which has a limited test coverage.
     * Unit tests have no external dependencies and can run on any machine.
     * The unit tests do not cover end-to-end scenarios like build, merge, and query.
     * The unit tests take a few minutes to complete.
 * `dev-support/test_all_against_hdp_2_2_4_2_2.sh` to run integration tests, which has the best test coverage.
-    * Integration tests __better be run on a Hadoop sandbox__. We suggest to checkout a copy of code in your sandbox and direct run the test_all_against_hdp_2_2_4_2_2.sh in it. If you don't want to put codes on sandbox, refer to __More on 2.x UT/IT separation__
+    * Integration tests are __better run on a Hadoop sandbox__. We suggest checking out a copy of the code in your sandbox and running test_all_against_hdp_2_2_4_2_2.sh directly in it. If you don't want to put code on the sandbox, refer to __More on v1.5 UT/IT separation__
     * As the name indicates, the script is only for hdp 2.2.4.2, but you get the idea of how the integration tests run from it.
     * The integration tests start by generating random data, then building the cube, merging the cube, and finally querying the result and comparing it with H2 DB.
     * The integration tests take one to two hours to complete.
 
-## Test 1.x branches
+## Test v1.3 and below
 
 * `mvn test` to run unit tests, which has a limited test coverage.
-    * What's special about 1.x is that a hadoop/hbase mini cluster is used to cover queries in unit test.
+    * What's special about v1.3 and below is that a hadoop/hbase mini cluster is used to cover queries in unit tests.
 * Run the following to run integration tests.
     * `mvn clean package -DskipTests`
     * `mvn test  -Dtest=org.apache.kylin.job.BuildCubeWithEngineTest -Dhdp.version=2.2.0.0-2041 -DfailIfNoTests=false -P sandbox`
@@ -30,9 +30,9 @@ In general, there should be unit tests to cover individual classes; there must b
     * `mvn test  -fae -P sandbox`
     * `mvn test  -fae  -Dtest=org.apache.kylin.query.test.IIQueryTest -Dhdp.version=2.2.0.0-2041 -DfailIfNoTests=false -P sandbox`
 
-## More on 2.x UT/IT separation
+## More on v1.5 UT/IT separation
 
-From Kylin 2.0 you can run UT(Unit test), environment cube provision and IT(Integration test) separately. 
+From Kylin v1.5 you can run UT (unit test), environment cube provision and IT (integration test) separately. 
 Running `mvn verify -Dhdp.version=2.2.4.2-2` (assuming you're on your sandbox) is all you need to run the complete set of test suites.
 
 It will execute the following steps sequentially:
@@ -54,9 +54,9 @@ If your sandbox is already provisioned and your code change will not affect the
 Environment cube provision is indeed running Kylin cubing jobs to prepare example cubes in the sandbox. These prepared cubes will be used by the ITs. Currently the provision step is bound to the maven pre-integration-test phase, and it includes running BuildCubeWithEngine (HBase required), BuildCubeWithStream (Kafka required) and BuildIIWithStream (Kafka required). You can run the mvn commands on your sandbox or your development computer. For the latter case you need to set kylin.job.run.as.remote.cmd=true in __$KYLIN_HOME/examples/test_case_data/sandbox/kylin.properties__. 
 Try appending `-DfastBuildMode=true` to the mvn verify command to speed up provision by skipping incremental cubing. 
 
-## More on 1.x Mini Cluster
+## More on v1.3 Mini Cluster
 
-Kylin 1.x used to move as many as possible unit test cases from sandbox to HBase mini cluster (not any more in 2.x), so that user can run tests easily in local without a hadoop sandbox. Two maven profiles are created in the root pom.xml, "default" and "sandbox". The default profile will startup a HBase Mini Cluster to prepare the test data and run the unit tests (the test cases that are not supported by Mini cluster have been added in the "exclude" list). If you want to keep using Sandbox to run test, just run `mvn test -P sandbox`
+Kylin v1.3 (and below) moved as many unit test cases as possible from the sandbox to an HBase mini cluster, so that users can run tests easily on a local machine without a hadoop sandbox. Two maven profiles are created in the root pom.xml, "default" and "sandbox". The default profile will start up an HBase Mini Cluster to prepare the test data and run the unit tests (the test cases that are not supported by the Mini Cluster have been added to the "exclude" list). If you want to keep using the sandbox to run tests, just run `mvn test -P sandbox`
 
 ### When using the "default" profile, Kylin will
 

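As a quick recap of the v1.5 UT/IT separation described in howto_test.md above, a typical sandbox session might look like the sketch below; all commands and the hdp.version value are taken from that page, only the ordering is illustrative.

    # unit tests only, no sandbox required
    mvn test

    # full suite on a sandbox: UT, environment cube provision, then IT
    mvn verify -Dhdp.version=2.2.4.2-2

    # same, but skip incremental cubing during provision to save time
    mvn verify -Dhdp.version=2.2.4.2-2 -DfastBuildMode=true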
http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs/gettingstarted/concepts.md
----------------------------------------------------------------------
diff --git a/website/_docs/gettingstarted/concepts.md b/website/_docs/gettingstarted/concepts.md
index 081ec54..248c10f 100644
--- a/website/_docs/gettingstarted/concepts.md
+++ b/website/_docs/gettingstarted/concepts.md
@@ -40,7 +40,7 @@ For terminology in domain, please refer to: [Terminology](terminology.md)
 
 * __Count Distinct(HyperLogLog)__ - Immediate (exact) COUNT DISTINCT is hard to calculate, so an approximate algorithm - [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) - is introduced to keep the error rate at a low level. 
 * __Count Distinct(Precise)__ - Precise COUNT DISTINCT will be pre-calculated based on RoaringBitmap; currently only int or bigint are supported.
-* __Top N__ - (Will release in 2.x) For example, with this measure type, user can easily get specified numbers of top sellers/buyers etc. 
+* __Top N__ - (Will be released in v1.5) For example, with this measure type, users can easily get the specified number of top sellers/buyers, etc. 
 ![](/images/docs/concepts/Measure.png)
 
 ## CUBE ACTIONS

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs/howto/howto_ldap_and_sso.md b/website/_docs/howto/howto_ldap_and_sso.md
index 5bfb97d..4711222 100644
--- a/website/_docs/howto/howto_ldap_and_sso.md
+++ b/website/_docs/howto/howto_ldap_and_sso.md
@@ -3,7 +3,7 @@ layout: docs
 title:  How to Enable Security with LDAP and SSO
 categories: howto
 permalink: /docs/howto/howto_ldap_and_sso.html
-version: v2.0
+version: v1.3
 since: v1.0
 ---
 
@@ -44,81 +44,4 @@ The "acl.defaultRole" is a list of the default roles that grant to everyone, kee
 
 #### Enable LDAP
 
-For Kylin v0.x and v1.x: set "kylin.sandbox=false" in conf/kylin.properties, then restart Kylin server; 
-For Kylin since v2.0: set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server; 
-
-## Enable SSO authentication
-
-From v2.0, Kylin provides SSO with SAML. The implementation is based on Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understand.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it, as SSO server may only do authentication, Kylin need search LDAP to get the user's detail information.
-
-### Generate IDP metadata xml
-Contact your IDP (ID provider), asking to generate the SSO metadata file; Usually you need provide three piece of info:
-
-  1. Partner entity ID, which is an unique ID of your app, e.g,: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion be posted, it need be: https://host-name/kylin/saml/SSO
-  3. Public certificate of Kylin server, the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin need send encrypted message (signed with Kylin's private key) to SSO server, a keystore (JKS) need be provided. There are a couple ways to generate the keystore, below is a sample.
-
-Assume kylin.crt is the public certificate file, kylin.key is the private certificate file; firstly create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-It will put the keys to "samlKeystore.jks" with alias "kylin";
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher level crypto keys, you may need to download Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security .
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore file need be deployed in Kylin web app's classpath in $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes 
-	
-  1. Name the IDP file to sso_metadata.xml and then copy to Kylin's classpath;
-  2. Name the keystore as "samlKeystore.jks" and then copy to Kylin's classpath;
-  3. If you use another alias or password, remember to update that kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note, Kylin assume in the SAML message there is a "email" attribute representing the login user, and the name before @ will be used to search LDAP. 
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
-
+Set "kylin.sandbox=false" in conf/kylin.properties, then restart Kylin server.

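For v1.3 and below, the LDAP switch retained above reduces to a small edit of conf/kylin.properties; a minimal sketch follows, with the ldap.* keys referenced only generically since the page does not list their exact names.

    # conf/kylin.properties (Kylin v1.3 and below)
    # turn off the sandbox profile so the LDAP settings take effect
    kylin.sandbox=false
    # also fill in the ldap.* connection properties in this same file,
    # then restart the Kylin server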
http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs/index.md
----------------------------------------------------------------------
diff --git a/website/_docs/index.md b/website/_docs/index.md
index a033134..9732d1b 100644
--- a/website/_docs/index.md
+++ b/website/_docs/index.md
@@ -11,7 +11,7 @@ Welcome to Apache Kylin™
 
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets, originally contributed by eBay Inc.
 
-Future documents: [v2.x](/docs2/)
+Future documents: [v1.5](/docs15/)
 
 Installation & Setup
 ------------  

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs/release_notes.md
----------------------------------------------------------------------
diff --git a/website/_docs/release_notes.md b/website/_docs/release_notes.md
index 3adea81..e2fee5a 100644
--- a/website/_docs/release_notes.md
+++ b/website/_docs/release_notes.md
@@ -3,7 +3,7 @@ layout: docs
 title:  Apache Kylin™ Release Notes
 categories: gettingstarted
 permalink: /docs/release_notes.html
-version: v2.0
+version: v1.3
 since: v0.7.1
 ---
 
@@ -17,307 +17,6 @@ or send to Apache Kylin mailing list:
 * Development relative: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
 
 
-## v2.0-alpha - 2016-02-09
-_Tag:_ [kylin-2.0-alpha](https://github.com/apache/kylin/tree/kylin-2.0-alpha)
-
-__Highlights__
-
-    * [KYLIN-875] - A plugin-able architecture, to allow alternative cube engine / storage engine / data source.
-    * [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster than 1.x by comparing hundreds of jobs.
-    * [KYLIN-942] - A better storage engine, makes query roughly 2 times faster (especially for slow queries) than 1.x by comparing tens of thousands sqls.
-    * [KYLIN-738] - Streaming cubing EXPERIMENTAL support, source from kafka, build cube in-mem at minutes interval
-    * [KYLIN-943] - TopN pre-calculation (more UDFs coming)
-    * [KYLIN-1065] - ODBC compatible with Tableau 9.1, MS Excel, MS PowerBI
-    * [KYLIN-1219] - Kylin support SSO with Spring SAML
-
-__Below generated from JIRA system, pending manual revision.__
-
-__New Feature__
-
-    * [KYLIN-196] - Support Job Priority
-    * [KYLIN-528] - Build job flow for Inverted Index building
-    * [KYLIN-596] - Support Excel and Power BI
-    * [KYLIN-599] - Near real-time support
-    * [KYLIN-603] - Add mem store for seconds data latency
-    * [KYLIN-606] - Block level index for Inverted-Index
-    * [KYLIN-607] - More efficient cube building
-    * [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
-    * [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
-    * [KYLIN-728] - IGTStore implementation which use disk when memory runs short
-    * [KYLIN-738] - StreamingOLAP
-    * [KYLIN-749] - support timestamp type in II and cube
-    * [KYLIN-774] - Automatically merge cube segments
-    * [KYLIN-868] - add a metadata backup/restore script in bin folder
-    * [KYLIN-886] - Data Retention for streaming data
-    * [KYLIN-906] - cube retention
-    * [KYLIN-943] - Approximate TopN supported by Cube
-    * [KYLIN-986] - Generalize Streaming scripts and put them into code repository 
-    * [KYLIN-1219] - Kylin support SSO with Spring SAML
-    * [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model 
-
-__Improvement__
-
-    * [KYLIN-225] - Support edit "cost" of cube
-    * [KYLIN-589] - Cleanup Intermediate hive table after cube build
-    * [KYLIN-623] - update Kylin UI Style to latest AdminLTE
-    * [KYLIN-633] - Support Timestamp for cube partition
-    * [KYLIN-649] -  move the cache layer from service tier back to storage tier
-    * [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
-    * [KYLIN-663] - Push time condition down to ii endpoint
-    * [KYLIN-668] - Out of memory in mapper when building cube in mem
-    * [KYLIN-671] - Implement fine grained cache for cube and ii
-    * [KYLIN-673] - Performance tuning for In-Mem cubing
-    * [KYLIN-674] - IIEndpoint return metrics as well
-    * [KYLIN-675] - cube&model designer refactor
-    * [KYLIN-678] - optimize RowKeyColumnIO
-    * [KYLIN-697] - Reorganize all test cases to unit test and integration tests
-    * [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
-    * [KYLIN-708] - replace BitSet for AggrKey
-    * [KYLIN-712] - some enhancement after code review
-    * [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
-    * [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
-    * [KYLIN-719] - bundle statistics info in endpoint response
-    * [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
-    * [KYLIN-721] - streaming cli support third-party streammessage parser
-    * [KYLIN-726] - add remote cli port configuration for KylinConfig
-    * [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
-    * [KYLIN-734] - Push cache layer to each storage engine
-    * [KYLIN-752] - Improved IN clause performance
-    * [KYLIN-753] - Make the dependency on hbase-common to "provided"
-    * [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
-    * [KYLIN-760] - Improve the hasing performance in Sampling cuboid size
-    * [KYLIN-772] - Continue cube job when hive query return empty resultset
-    * [KYLIN-773] - performance is slow list jobs
-    * [KYLIN-783] - update hdp version in test cases to 2.2.4
-    * [KYLIN-796] - Add REST API to trigger storage cleanup/GC
-    * [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
-    * [KYLIN-816] - Allow gap in cube segments, for streaming case
-    * [KYLIN-822] - list cube overview in one page
-    * [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
-    * [KYLIN-838] - improve performance of job query
-    * [KYLIN-844] - add backdoor toggles to control query behavior 
-    * [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
-    * [KYLIN-858] - add snappy compression support
-    * [KYLIN-866] - Confirm with user when he selects empty segments to merge
-    * [KYLIN-869] - Enhance mail notification
-    * [KYLIN-870] - Speed up hbase segments info by caching
-    * [KYLIN-871] - growing dictionary for streaming case
-    * [KYLIN-874] - script for fill streaming gap automatically
-    * [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
-    * [KYLIN-879] - add a tool to collect orphan hbases 
-    * [KYLIN-880] -  Kylin should change the default folder from /tmp to user configurable destination
-    * [KYLIN-881] - Upgrade Calcite to 1.3.0
-    * [KYLIN-882] - check access to kylin.hdfs.working.dir
-    * [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
-    * [KYLIN-893] - Remove the dependency on quartz and metrics
-    * [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
-    * [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
-    * [KYLIN-901] - Add tool for cleanup Kylin metadata storage
-    * [KYLIN-902] - move streaming related parameters into StreamingConfig
-    * [KYLIN-903] - automate metadata cleanup job
-    * [KYLIN-909] - Adapt GTStore to hbase endpoint
-    * [KYLIN-919] - more friendly UI for 0.8
-    * [KYLIN-922] - Enforce same code style for both intellij and eclipse user
-    * [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
-    * [KYLIN-927] - Real time cubes merging skipping gaps
-    * [KYLIN-933] - friendly UI to use data model
-    * [KYLIN-938] - add friendly tip to page when rest request failed
-    * [KYLIN-942] - Cube parallel scan on Hbase
-    * [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
-    * [KYLIN-957] - Support HBase in a separate cluster
-    * [KYLIN-960] - Split storage module to core-storage and storage-hbase
-    * [KYLIN-973] - add a tool to analyse streaming output logs
-    * [KYLIN-984] - Behavior change in streaming data consuming
-    * [KYLIN-987] - Rename 0.7-staging and 0.8 branch
-    * [KYLIN-1014] - Support kerberos authentication while getting status from RM
-    * [KYLIN-1018] - make TimedJsonStreamParser default parser 
-    * [KYLIN-1019] - Remove v1 cube model classes from code repository
-    * [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
-    * [KYLIN-1025] - Save cube change is very slow
-    * [KYLIN-1036] - Code Clean, remove code which never used at front end
-    * [KYLIN-1041] - ADD Streaming UI 
-    * [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
-    * [KYLIN-1058] - Remove "right join" during model creation
-    * [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
-    * [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
-    * [KYLIN-1065] - ODBC driver support tableau 9.1
-    * [KYLIN-1068] - Optimize the memory footprint for TopN counter
-    * [KYLIN-1069] - update tip for 'Partition Column' on UI
-    * [KYLIN-1095] - Update AdminLTE to latest version
-    * [KYLIN-1096] - Deprecate minicluster in 2.x staging
-    * [KYLIN-1099] - Support dictionary of cardinality over 10 millions
-    * [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
-    * [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
-    * [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
-    * [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
-    * [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
-    * [KYLIN-1135] - Pscan use share thread pool
-    * [KYLIN-1136] - Distinguish fast build mode and complete build mode
-    * [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
-    * [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
-    * [KYLIN-1154] - Load job page is very slow when there are a lot of history job
-    * [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
-    * [KYLIN-1160] - Set default logger appender of log4j for JDBC
-    * [KYLIN-1161] - Rest API /api/cubes?cubeName=  is doing fuzzy match instead of exact match
-    * [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
-    * [KYLIN-1190] - Make memory budget per query configurable
-    * [KYLIN-1234] - Cube ACL does not work
-    * [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
-    * [KYLIN-1237] - Revisit on cube size estimation
-    * [KYLIN-1239] - attribute each htable with team contact and owner name
-    * [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
-    * [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
-    * [KYLIN-1246] - get cubes API update - offset,limit not required
-    * [KYLIN-1251] - add toggle event for tree label
-    * [KYLIN-1259] - Change font/background color of job progress
-    * [KYLIN-1265] - Make sure 2.0 query is no slower than 1.0
-    * [KYLIN-1266] - Tune 2.0 release package size
-    * [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
-    * [KYLIN-1268] - Fix 2 kylin logs
-    * [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
-    * [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
-    * [KYLIN-1283] - Replace GTScanRequest's SerDer form Kryo to manual 
-    * [KYLIN-1287] - UI update for streaming build action
-    * [KYLIN-1297] - Diagnose query performance issues in 2.x versions
-    * [KYLIN-1301] - fix segment pruning failure in 2.x versions
-    * [KYLIN-1308] - query storage v2 enable parallel cube visiting
-    * [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
-    * [KYLIN-1318] - enable gc log for kylin server instance
-    * [KYLIN-1323] - Improve performance of converting data to hfile
-    * [KYLIN-1327] - Tool for batch updating host information of htables
-    * [KYLIN-1334] - allow truncating string for fixed length dimensions
-    * [KYLIN-1341] - Display JSON of Data Model in the dialog
-    * [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
-    * [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
-
-__Bug__
-
-    * [KYLIN-404] - Can't get cube source record size.
-    * [KYLIN-457] - log4j error and dup lines in kylin.log
-    * [KYLIN-521] - No verification even if join condition is invalid
-    * [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
-    * [KYLIN-635] - IN clause within CASE when is not working
-    * [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
-    * [KYLIN-660] - Make configurable of dictionary cardinality cap
-    * [KYLIN-665] - buffer error while in mem cubing
-    * [KYLIN-688] - possible memory leak for segmentIterator
-    * [KYLIN-731] - Parallel stream build will throw OOM
-    * [KYLIN-740] - Slowness with many IN() values
-    * [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
-    * [KYLIN-748] - II returned result not correct when decimal omits precision and scal
-    * [KYLIN-751] - Max on negative double values is not working
-    * [KYLIN-766] - round BigDecimal according to the DataType scale
-    * [KYLIN-769] - empty segment build fail due to no dictionary 
-    * [KYLIN-771] - query cache is not evicted when metadata changes
-    * [KYLIN-778] - can't build cube after package to binary 
-    * [KYLIN-780] - Upgrade Calcite to 1.0
-    * [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
-    * [KYLIN-801] - fix remaining issues on query cache and storage cache
-    * [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
-    * [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
-    * [KYLIN-817] - Support Extract() on timestamp column
-    * [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
-    * [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
-    * [KYLIN-834] - optimize StreamingUtil binary search perf
-    * [KYLIN-837] - fix submit build type when refresh cube
-    * [KYLIN-873] - cancel button does not work when [resume][discard] job
-    * [KYLIN-889] - Support more than one HDFS files of lookup table
-    * [KYLIN-897] - Update CubeMigrationCLI to copy data model info
-    * [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
-    * [KYLIN-905] - Boolean type not supported
-    * [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
-    * [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
-    * [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
-    * [KYLIN-914] - Scripts shebang should use /bin/bash
-    * [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
-    * [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
-    * [KYLIN-930] - can't see realizations under each project at project list page
-    * [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
-    * [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
-    * [KYLIN-936] - can not see job step log 
-    * [KYLIN-944] - update doc about how to consume kylin API in javascript
-    * [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
-    * [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
-    * [KYLIN-951] - Drop RowBlock concept from GridTable general API
-    * [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
-    * [KYLIN-967] - Dump running queries on memory shortage
-    * [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
-    * [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
-    * [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
-    * [KYLIN-983] - Query sql offset keyword bug
-    * [KYLIN-985] - Don't suppoprt aggregation AVG while executing SQL
-    * [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
-    * [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
-    * [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
-    * [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
-    * [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million 
-    * [KYLIN-1026] - Error message for git check is not correct in package.sh
-    * [KYLIN-1027] - HBase Token not added after KYLIN-1007
-    * [KYLIN-1033] - Error when joining two sub-queries
-    * [KYLIN-1039] - Filter like (A or false) yields wrong result
-    * [KYLIN-1047] - Upgrade to Calcite 1.4
-    * [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
-    * [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
-    * [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
-    * [KYLIN-1078] - UI - Cannot have comments in the end of New Query textbox
-    * [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
-    * [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
-    * [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
-    * [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
-    * [KYLIN-1115] - Clean up ODBC driver code
-    * [KYLIN-1121] - ResourceTool download/upload does not work in binary package
-    * [KYLIN-1127] - Refactor CacheService
-    * [KYLIN-1137] - TopN measure need support dictionary merge
-    * [KYLIN-1138] - Bad CubeDesc signature cause segment be delete when enable a cube
-    * [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
-    * [KYLIN-1151] - Menu items should be aligned when create new model
-    * [KYLIN-1152] - ResourceStore should read content and timestamp in one go
-    * [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.x to 2.0
-    * [KYLIN-1171] - KylinConfig truncate bug
-    * [KYLIN-1179] - Cannot use String as partition column
-    * [KYLIN-1180] - Some NPE in Dictionary
-    * [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
-    * [KYLIN-1192] - Cannot edit data model desc without name change
-    * [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
-    * [KYLIN-1211] - Add 'Enable Cache' button in System page
-    * [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
-    * [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
-    * [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
-    * [KYLIN-1225] - Only 15 cubes listed in the /models page
-    * [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
-    * [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
-    * [KYLIN-1236] - redirect to home page when input invalid url
-    * [KYLIN-1250] - Got NPE when discarding a job
-    * [KYLIN-1260] - Job status labels are not in same style
-    * [KYLIN-1269] - Can not get last error message in email
-    * [KYLIN-1271] - Create streaming table layer will disappear if click on outside
-    * [KYLIN-1274] - Query from JDBC is partial results by default
-    * [KYLIN-1282] - Comparison filter on Date/Time column not work for query
-    * [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
-    * [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
-    * [KYLIN-1306] - Null strings are not applied during fast cubing
-    * [KYLIN-1314] - Display issue for aggression groups 
-    * [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube 
-    * [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
-    * [KYLIN-1317] - Kill underlying running hadoop job while discard a job
-    * [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
-    * [KYLIN-1330] - UI create model: Press enter will go back to pre step
-    * [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
-    * [KYLIN-1337] - Sort cube name doesn't work well 
-    * [KYLIN-1346] - IllegalStateException happens in SparkCubing
-    * [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
-    * [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
-    * [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
-    * [KYLIN-1356] - use exec-maven-plugin for IT environment provision
-    * [KYLIN-1357] - Cloned cube has build time information
-    * [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
-    * [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
-    * [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
-    * [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
-    * [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
-    * [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
-
-
 ## v1.2 - 2015-12-15
 _Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
 

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/gettingstarted/concepts.md
----------------------------------------------------------------------
diff --git a/website/_docs15/gettingstarted/concepts.md b/website/_docs15/gettingstarted/concepts.md
new file mode 100644
index 0000000..af46c6c
--- /dev/null
+++ b/website/_docs15/gettingstarted/concepts.md
@@ -0,0 +1,64 @@
+---
+layout: docs15
+title:  "Technical Concepts"
+categories: gettingstarted
+permalink: /docs15/gettingstarted/concepts.html
+since: v1.2
+---
+ 
+Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
+For domain terminology, please refer to: [Terminology](terminology.md)
+
+## CUBE
+* __Table__ - This is the definition of the hive tables that serve as the source of cubes; tables must be synced before building cubes.
+![](/images/docs/concepts/DataSource.png)
+
+* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines fact/lookup tables and filter condition.
+![](/images/docs/concepts/DataModel.png)
+
+* __Cube Descriptor__ - This describes definition and settings for a cube instance, defining which data model to use, what dimensions and measures to have, how to partition to segments and how to handle auto-merge etc.
+![](/images/docs/concepts/CubeDesc.png)
+
+* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor, and consists of one or more cube segments according to partition settings.
+![](/images/docs/concepts/CubeInstance.png)
+
+* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments with different date periods.
+![](/images/docs/concepts/Partition.png)
+
+* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One building job creates one new segment for the cube instance. Once data changes in a specified data period, we can refresh the related segments to avoid rebuilding the whole cube.
+![](/images/docs/concepts/CubeSegment.png)
+
+* __Aggregation Group__ - Each aggregation group is a subset of dimensions, and builds cuboids from the combinations inside it. It aims at pruning for optimization.
+![](/images/docs/concepts/AggregationGroup.png)
+
+## DIMENSION & MEASURE
+* __Mandatory__ - This dimension type is used for cuboid pruning; if a dimension is specified as “mandatory”, then combinations without this dimension are pruned.
+* __Hierarchy__ - This dimension type is used for cuboid pruning; if dimensions A, B, C form a “hierarchy” relation, then only combinations with A, AB or ABC are kept. 
+* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK of the fact table. Those dimensions are DERIVED and don't participate in cuboid generation.
+![](/images/docs/concepts/Dimension.png)
+
+* __Count Distinct(HyperLogLog)__ - Immediate (exact) COUNT DISTINCT is hard to calculate, so an approximate algorithm - [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) - is introduced to keep the error rate at a low level. 
+* __Count Distinct(Precise)__ - Precise COUNT DISTINCT will be pre-calculated based on RoaringBitmap; currently only int or bigint are supported.
+* __Top N__ - For example, with this measure type, users can easily get the specified number of top sellers/buyers, etc. 
+![](/images/docs/concepts/Measure.png)
+
+## CUBE ACTIONS
+* __BUILD__ - Given an interval of partition column, this action is to build a new cube segment.
+* __REFRESH__ - This action will rebuild a cube segment for a given partition period, which is used in case the source data for that period has changed.
+* __MERGE__ - This action will merge multiple continuous cube segments into a single one. This can be automated with the auto-merge settings in the cube descriptor.
+* __PURGE__ - Clear segments under a cube instance. This will only update metadata, and won't delete cube data from HBase.
+![](/images/docs/concepts/CubeAction.png)
+
+## JOB STATUS
+* __NEW__ - This denotes one job has been just created.
+* __PENDING__ - This denotes one job is paused by job scheduler and waiting for resources.
+* __RUNNING__ - This denotes one job is running in progress.
+* __FINISHED__ - This denotes one job is successfully finished.
+* __ERROR__ - This denotes one job is aborted with errors.
+* __DISCARDED__ - This denotes one job is cancelled by end users.
+![](/images/docs/concepts/Job.png)
+
+## JOB ACTION
+* __RESUME__ - Once a job is in ERROR status, this action will try to resume it from the latest successful point.
+* __DISCARD__ - No matter what the status of a job is, users can end it and release resources with the DISCARD action.
+![](/images/docs/concepts/JobAction.png)
\ No newline at end of file

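The BUILD action described in the new concepts page is usually triggered through the REST API documented in howto_build_cube_with_restapi.md (also added in this commit). A hedged sketch is below; the cube name, time range and the ADMIN:KYLIN credentials are placeholders, so check that page for the authoritative request format.

    # build one segment of the sample cube for a given partition interval
    curl -X PUT \
      -H "Authorization: Basic QURNSU46S1lMSU4=" \
      -H "Content-Type: application/json" \
      -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
      http://localhost:7070/kylin/api/cubes/kylin_sales_cube/rebuild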
http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/gettingstarted/events.md
----------------------------------------------------------------------
diff --git a/website/_docs15/gettingstarted/events.md b/website/_docs15/gettingstarted/events.md
new file mode 100644
index 0000000..f57ca94
--- /dev/null
+++ b/website/_docs15/gettingstarted/events.md
@@ -0,0 +1,27 @@
+---
+layout: docs15
+title:  "Events and Conferences"
+categories: gettingstarted
+permalink: /docs15/gettingstarted/events.html
+---
+
+__Coming Events__
+
+* ApacheCon EU 2015
+
+__Conferences__
+
+* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
+* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015 in San Jose, US, 2015-06-09
+* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
+* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
+* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
+* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
+* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
+* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
+* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
+
+__Meetup__
+
+* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/gettingstarted/faq.md
----------------------------------------------------------------------
diff --git a/website/_docs15/gettingstarted/faq.md b/website/_docs15/gettingstarted/faq.md
new file mode 100644
index 0000000..bbb3343
--- /dev/null
+++ b/website/_docs15/gettingstarted/faq.md
@@ -0,0 +1,89 @@
+---
+layout: docs15
+title:  "FAQ"
+categories: gettingstarted
+permalink: /docs15/gettingstarted/faq.html
+since: v0.6.x
+---
+
+### Some NPM error causes ERROR exit (中国大陆地区用户请特别注意此问题)?  
+For people from China:  
+
+* Please add proxy for your NPM (请为NPM设置代理):  
+`npm config set proxy http://YOUR_PROXY_IP`
+
+* Please update your local NPM repository to using any mirror of npmjs.org, like Taobao NPM (请更新您本地的NPM仓库以使用国内的NPM镜像,例如淘宝NPM镜像) :  
+[http://npm.taobao.org](http://npm.taobao.org)
+
+### "Can't get master address from ZooKeeper" when installing Kylin on Hortonworks Sandbox
+Check out [https://github.com/KylinOLAP/Kylin/issues/9](https://github.com/KylinOLAP/Kylin/issues/9).
+
+### Map Reduce Job information can't display on sandbox deployment
+Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
+
+#### Install Kylin on CDH 5.2 or Hadoop 2.5.x
+Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
+{% highlight Groff markup %}
+I was able to deploy Kylin with following option in POM.
+<hadoop2.version>2.5.0</hadoop2.version>
+<yarn.version>2.5.0</yarn.version>
+<hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
+<zookeeper.version>3.4.5</zookeeper.version>
+<hive.version>0.13.1</hive.version>
+My Cluster is running on Cloudera Distribution CDH 5.2.0.
+{% endhighlight %}
+
+#### Unable to load a big cube as HTable, with java.lang.OutOfMemoryError: unable to create new native thread
+HBase (as of writing) allocates one thread per region when bulk loading an HTable. Try reducing the number of regions of your cube by setting its "capacity" to "MEDIUM" or "LARGE". Also, tweaking the OS & JVM can allow more threads; for example, see [this article](http://blog.egilh.com/2006/06/2811aspx.html).
+
+#### Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
+Users may get this error the first time they run the hbase client; please check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
+
+#### SUM(field) returns a negative result while all the numbers in this field are > 0
+If a column is declared as integer in Hive, the SQL engine (calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the scope of integer; in that case the cast will cause a negative value to be returned; The workaround is to alter that column's type to BIGINT in hive, and then sync the table schema to Kylin (the cube doesn't need to be rebuilt); Keep in mind to always declare an integer column as BIGINT in hive if it will be used as a measure in Kylin; See hive number types: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes)
+
+#### Why does Kylin need to extract the distinct columns from the fact table before building the cube?
+Kylin uses dictionaries to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values for each column.
+
+#### Why does Kylin calculate the Hive table cardinality?
+The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer to build and the slower to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
+
+#### How to add a new user or change the default password?
+Kylin's web security is implemented with the Spring Security framework, and kylinSecurity.xml is the main configuration file:
+{% highlight Groff markup %}
+${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+{% endhighlight %}
+The password hashes of the pre-defined test users can be found in the "sandbox,testing" profile section; to change the default password, generate a new hash and update it there. Please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
+When you deploy Kylin for more users, switching to LDAP authentication is recommended; to enable LDAP authentication, set "kylin.sandbox" to false in conf/kylin.properties, and configure the ldap.* properties in ${KYLIN_HOME}/conf/kylin.properties.
+
+#### Using a sub-query for unsupported SQL
+
+{% highlight Groff markup %}
+Original SQL:
+select fact.slr_sgmt,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from ih_daily_fact fact
+inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+group by fact.slr_sgmt
+{% endhighlight %}
+
+{% highlight Groff markup %}
+Using sub-query
+select a.slr_sgmt,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from (
+    select fact.slr_sgmt as slr_sgmt,
+    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
+    sum(gmv) as gmv
+    from ih_daily_fact fact
+    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
+) a
+group by a.slr_sgmt
+{% endhighlight %}
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/gettingstarted/terminology.md
----------------------------------------------------------------------
diff --git a/website/_docs15/gettingstarted/terminology.md b/website/_docs15/gettingstarted/terminology.md
new file mode 100644
index 0000000..3ff5394
--- /dev/null
+++ b/website/_docs15/gettingstarted/terminology.md
@@ -0,0 +1,25 @@
+---
+layout: docs15
+title:  "Terminology"
+categories: gettingstarted
+permalink: /docs15/gettingstarted/terminology.html
+since: v0.5.x
+---
+ 
+
+Here are some domain terms used in Apache Kylin; please check them for reference.  
+They are basic background for Apache Kylin, and will also help you understand related concepts and theory of Data Warehouse and Business Intelligence for analytics.
+
+* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
+* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
+* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
+* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
+* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
+* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
+* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
+* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
+* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
+* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_backup_hbase.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_backup_hbase.md b/website/_docs15/howto/howto_backup_hbase.md
new file mode 100644
index 0000000..0c81924
--- /dev/null
+++ b/website/_docs15/howto/howto_backup_hbase.md
@@ -0,0 +1,28 @@
+---
+layout: docs15
+title:  How to Clean/Backup HBase Tables
+categories: howto
+permalink: /docs15/howto/howto_backup_hbase.html
+since: v0.7.1
+---
+
+Kylin persists all data (metadata and cubes) in HBase; you may sometimes want to export the data for various purposes 
+(backup, migration, troubleshooting, etc.). This page describes the steps to do so; there is also a Java app to make it easier.
+
+Steps:
+
+1. Clean up unused cubes to save storage space (be cautious in production!); run the following command from the Hadoop CLI where the hbase command is available: 
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.job.hadoop.cube.StorageCleanupJob --delete true
+{% endhighlight %}
+2. List all HBase tables, then export each Kylin table to HDFS with the HBase Export tool (a sketch follows this list); 
+see [https://hbase.apache.org/book/ops_mgt.html#export](https://hbase.apache.org/book/ops_mgt.html#export)
+
+3. Copy the export folder from HDFS to the local file system, and then archive it;
+
+4. (optional) Download the archive from the Hadoop CLI machine to your local machine;
+
+5. Clean up the export folder on HDFS and on the local file system;
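+
+A sketch of step 2 for a single table, using HBase's built-in Export job (the table name and HDFS path are only examples; Kylin's cube HTables are typically prefixed with "KYLIN_", plus the "kylin_metadata" table):
+{% highlight Groff markup %}
+# export one HTable to an HDFS folder (example names)
+hbase org.apache.hadoop.hbase.mapreduce.Export KYLIN_XXXXXXXXXX /tmp/hbase_export/KYLIN_XXXXXXXXXX
+# repeat for each Kylin table, including the metadata table
+hbase org.apache.hadoop.hbase.mapreduce.Export kylin_metadata /tmp/hbase_export/kylin_metadata
+{% endhighlight %}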
+
+Kylin provides "ExportHBaseData.java" (currently only in the "minicluster" branch) to do 
+steps 2-5 in one run; please ensure the correct path of "kylin.properties" is set in the system environment; this Java app uses the sandbox config by default.

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_backup_metadata.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_backup_metadata.md b/website/_docs15/howto/howto_backup_metadata.md
new file mode 100644
index 0000000..e006eb0
--- /dev/null
+++ b/website/_docs15/howto/howto_backup_metadata.md
@@ -0,0 +1,61 @@
+---
+layout: docs15
+title:  How to Backup Metadata
+categories: howto
+permalink: /docs15/howto/howto_backup_metadata.html
+since: v0.7.1
+---
+
+Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it rather than a normal file system. If you check your Kylin configuration file (kylin.properties), you will find a line like this:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+This indicates that the metadata is saved in an HTable called `kylin_metadata`. You can scan the HTable in the HBase shell to check it out.
+
+## Backup Metadata Store with binary package
+
+Sometimes you need to back up Kylin's metadata store from HBase to your local file system.
+In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, go to KYLIN_HOME and run:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
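+
+As a sketch, you may then archive the dump and copy it off the node (the folder name only illustrates the timestamp syntax; the destination host is hypothetical):
+{% highlight Groff markup %}
+cd $KYLIN_HOME/meta_backups
+tar -czf meta_2016_03_12_17_00_00.tar.gz meta_2016_03_12_17_00_00
+scp meta_2016_03_12_17_00_00.tar.gz backup-host:/backups/kylin/
+{% endhighlight %}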
+
+## Restore Metadata Store with binary package
+
+If you find your metadata store messed up and want to restore a previous backup:
+
+First, reset the metadata store (this will clean everything in the Kylin metadata store in HBase, so make sure you have a backup):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+Then upload the backup metadata to Kylin's metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## Backup/restore metadata in development env (available since 0.7.3)
+
+When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases at dev machine. It would be troublesome if you always have to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally at your dev machine. Follow the Usage information and run it in your IDE.
+
+## Cleanup unused resources from Metadata Store (available since 0.7.3)
+As time goes on, some resources such as dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find them and clean them up from the metadata store:
+
+First, run a check; this is safe as it will not change anything:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+The resources that would be dropped will be listed;
+
+Next, add the "--delete true" option to clean up those resources; before doing this, make sure you have backed up the metadata store;
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_build_cube_with_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_build_cube_with_restapi.md b/website/_docs15/howto/howto_build_cube_with_restapi.md
new file mode 100644
index 0000000..42f96dc
--- /dev/null
+++ b/website/_docs15/howto/howto_build_cube_with_restapi.md
@@ -0,0 +1,54 @@
+---
+layout: docs15
+title:  How to Build Cube with Restful API
+categories: howto
+permalink: /docs15/howto/howto_build_cube_with_restapi.html
+since: v0.7.1
+---
+
+### 1.	Authentication
+*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
+*   Add the `Authorization` header to the first request for authentication
+*   Or you can issue a dedicated request: `POST http://localhost:7070/kylin/api/user/authentication`
+*   Once authenticated, the client can issue subsequent requests with cookies.
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+    
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2.	Get details of the cube. 
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   The client can find the cube segment date ranges in the returned cube details.
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+### 3.	Submit a build job for the cube. 
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   For details of the PUT request body, please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
+    *   `startTime` and `endTime` should be UTC timestamps in milliseconds.
+    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment, and `MERGE` for merging multiple existing segments into one bigger segment.
+*   This method returns a newly created job instance, whose `uuid` is the unique ID for tracking the job status.
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	Track job status. 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   The returned `job_status` field represents the current status of the job.
+
+### 5.	If the job encounters an error, you can resume it. 
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`
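+
+Putting the steps together, below is a minimal sketch using curl (it assumes a local Kylin instance with the default ADMIN/KYLIN account and the sample cube name used above; adjust them to your environment):
+{% highlight Groff markup %}
+# 1. authenticate and store the session cookie
+curl -c cookies.txt --user ADMIN:KYLIN -X POST http://localhost:7070/kylin/api/user/authentication
+
+# 2. check the cube and its existing segments
+curl -b cookies.txt "http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0"
+
+# 3. submit a build job; note the "uuid" field in the response
+curl -b cookies.txt -X PUT -H "Content-Type: application/json;charset=UTF-8" \
+  -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
+  http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+# 4. track the job status with the uuid returned above
+curl -b cookies.txt http://localhost:7070/kylin/api/jobs/{job_uuid}
+
+# 5. resume the job if it failed
+curl -b cookies.txt -X PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume
+{% endhighlight %}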

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_cleanup_storage.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_cleanup_storage.md b/website/_docs15/howto/howto_cleanup_storage.md
new file mode 100644
index 0000000..56cef19
--- /dev/null
+++ b/website/_docs15/howto/howto_cleanup_storage.md
@@ -0,0 +1,22 @@
+---
+layout: docs15
+title:  How to Cleanup Storage (HDFS & HBase Tables)
+categories: howto
+permalink: /docs15/howto/howto_cleanup_storage.html
+since: v1.5
+---
+
+Kylin generates intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left behind and will no longer be queried; although Kylin has started to do some 
+automated garbage collection, it might not cover all cases; you can run an offline storage cleanup periodically:
+
+Steps:
+1. Check which resources can be cleaned up; this will not remove anything:
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
+{% endhighlight %}
+Please replace (version) with the actual Kylin jar version in your installation;
+2. Pick one or two of the listed resources and verify that they are no longer referenced; then add the "--delete true" option to start the cleanup:
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
+{% endhighlight %}
+When it finishes, the intermediate HDFS locations and HTables will be dropped;

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_jdbc.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_jdbc.md b/website/_docs15/howto/howto_jdbc.md
new file mode 100644
index 0000000..d374685
--- /dev/null
+++ b/website/_docs15/howto/howto_jdbc.md
@@ -0,0 +1,93 @@
+---
+layout: docs15
+title:  How to Use Kylin Remote JDBC Driver
+categories: howto
+permalink: /docs15/howto/howto_jdbc.html
+since: v0.7.1
+---
+
+### Authentication
+
+###### Based on the Kylin authentication RESTful service. Supported parameters:
+* user : username 
+* password : password
+* ssl: true/false. Default is false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported prepared statement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports metadata listing methods:
+list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_ldap_and_sso.md b/website/_docs15/howto/howto_ldap_and_sso.md
new file mode 100644
index 0000000..1780729
--- /dev/null
+++ b/website/_docs15/howto/howto_ldap_and_sso.md
@@ -0,0 +1,122 @@
+---
+layout: docs15
+title:  How to Enable Security with LDAP and SSO
+categories: howto
+permalink: /docs15/howto/howto_ldap_and_sso.html
+since: v1.5
+---
+
+## Enable LDAP authentication
+
+Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, such as the LDAP server URL, username/password and search patterns;
+
+#### Configure LDAP server info
+
+First, provide the LDAP URL, and the username/password if the LDAP server is secured; the password in kylin.properties needs to be salted; you can Google "Generate a BCrypt Password" or run org.apache.kylin.rest.security.PasswordPlaceholderConfigurer to get a hash of your password.
+
+```
+ldap.server=ldap://<your_ldap_host>:<port>
+ldap.username=<your_user_name>
+ldap.password=<your_password_hash>
+```
+
+Second, provide the user search patterns; this depends on your LDAP design, and here is just a sample:
+
+```
+ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
+ldap.user.searchPattern=(&(AccountName={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
+ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```
+
+If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty;
+
+#### Configure the administrator group and default role
+
+To map an LDAP group to the admin group in Kylin, set "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
+
+```
+acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
+acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
+```
+
+The "acl.defaultRole" is a list of the default roles that grant to everyone, keep it as-is.
+
+#### Enable LDAP
+
+Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
+
+## Enable SSO authentication
+
+From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
+
+Before trying this, you should have successfully enabled LDAP and managed users with it; since the SSO server may only do authentication, Kylin needs to search LDAP to get the user's detailed information.
+
+### Generate IDP metadata xml
+Contact your IDP (identity provider) and ask it to generate the SSO metadata file; usually you need to provide three pieces of information:
+
+  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
+  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
+  3. Public certificate of the Kylin server; the SSO server will encrypt messages with it.
+
+### Generate JKS keystore for Kylin
+As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
+
+Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
+
+```
+$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
+Enter Export Password: <export_pwd>
+Verifying - Enter Export Password: <export_pwd>
+
+
+$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
+
+Enter destination keystore password:  changeit
+Re-enter new password: changeit
+```
+
+This puts the key into "samlKeystore.jks" under the alias "kylin";
+
+### Enable Higher Ciphers
+
+Make sure your environment is ready to handle higher-strength crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
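+
+For example (assuming the policy jars have been downloaded and unzipped into the current directory):
+
+```
+cp local_policy.jar US_export_policy.jar $JAVA_HOME/jre/lib/security/
+```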
+
+### Deploy IDP xml file and keystore to Kylin
+
+The IDP metadata and keystore files need to be deployed on the Kylin web app's classpath at $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes: 
+	
+  1. Name the IDP file sso_metadata.xml and copy it to Kylin's classpath;
+  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
+  3. If you use a different alias or password, remember to update kylinSecurity.xml accordingly:
+
+```
+<!-- Central storage of cryptographic keys -->
+<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
+	<constructor-arg value="classpath:samlKeystore.jks"/>
+	<constructor-arg type="java.lang.String" value="changeit"/>
+	<constructor-arg>
+		<map>
+			<entry key="kylin" value="changeit"/>
+		</map>
+	</constructor-arg>
+	<constructor-arg type="java.lang.String" value="kylin"/>
+</bean>
+
+```
+
+### Other configurations
+In conf/kylin.properties, add the following properties with your server information:
+
+```
+saml.metadata.entityBaseURL=https://host-name/kylin
+saml.context.scheme=https
+saml.context.serverName=host-name
+saml.context.serverPort=443
+saml.context.contextPath=/kylin
+```
+
+Please note that Kylin assumes there is an "email" attribute in the SAML message representing the login user, and the name before @ will be used to search LDAP. 
+
+### Enable SSO
+Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs15/howto/howto_optimize_cubes.md
----------------------------------------------------------------------
diff --git a/website/_docs15/howto/howto_optimize_cubes.md b/website/_docs15/howto/howto_optimize_cubes.md
new file mode 100644
index 0000000..5468347
--- /dev/null
+++ b/website/_docs15/howto/howto_optimize_cubes.md
@@ -0,0 +1,213 @@
+---
+layout: docs15
+title:  How to Optimize Cubes
+categories: howto
+permalink: /docs15/howto/howto_optimize_cubes.html
+since: v0.7.1
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, suppose you have three dimensions: continent, country, city (in a hierarchy, the "bigger" dimension comes first). You will only need the following three group-by combinations when doing drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
+
+If we denote the hierarchy dimensions as H1, H2, H3, typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case of scenario A, where the PK of the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A*, what you need is another optimization called "Derived Columns".
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table that is joined with the fact table on "where DimA = DimX". Note that in Kylin, if you choose an FK as a dimension, the corresponding PK becomes queryable automatically, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose only DimA, DimB and DimC.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA (the dimension representing FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC,AB,AC,BC,A,B,C
+
+combinations when deriving B from A:
+AC,A,C
+
+At runtime, for a query like "select count(*) from fact_table inner join lookup1 on fact_table.DimA = lookup1.DimX group by lookup1.dimB", a cuboid containing DimB would normally be expected to answer the query. However, DimB appears in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to group by DimA (its host column) first, and we'll get an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime; this is what is meant by "at the cost of extra runtime aggregation".