Posted to commits@kylin.apache.org by li...@apache.org on 2017/03/25 01:22:27 UTC

[1/5] kylin git commit: prepare docs for 2.0

Repository: kylin
Updated Branches:
  refs/heads/document 55b167095 -> 7ea64f38a


http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau_91.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau_91.cn.md b/website/_docs20/tutorial/tableau_91.cn.md
new file mode 100644
index 0000000..40b5aa2
--- /dev/null
+++ b/website/_docs20/tutorial/tableau_91.cn.md
@@ -0,0 +1,51 @@
+---
+layout: docs20-cn
+title:  Tableau 9 Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/tableau_91.html
+version: v1.2
+since: v1.2
+---
+
+Tableau 9 has been available for some time, and many community users have asked Apache Kylin to support this version. You can now interact with a Kylin service from Tableau 9 by updating the Kylin ODBC Driver.
+
+
+### For Tableau 8.x Users
+Please refer to the [Tableau Tutorial](./tableau.html) for more detailed guidance.
+
+### Install ODBC Driver
+See the [Kylin ODBC Driver Tutorial](./odbc.html) page, and make sure you download and install Kylin ODBC Driver __v1.5__. If an earlier version is installed, uninstall it first.
+
+### Connect to Kylin Server
+Create a new data connection in Tableau 9.1: click `Other Database(ODBC)` in the left panel and select `KylinODBCDriver` in the pop-up window.
+![](/images/tutorial/odbc/tableau_91/1.png)
+
+Enter your server address, port, project, username, and password, then click `Connect` to get the list of all projects you have permission to access. For details on permissions, refer to the [Kylin Cube Permission Grant Tutorial](./acl.html).
+![](/images/tutorial/odbc/tableau_91/2.png)
+
+### Mapping the Data Model
+In the list on the left, select the database `defaultCatalog` and click the Search button; all queryable tables will be listed. Drag tables to the region on the right to add them as data sources and set up the joins between them.
+![](/images/tutorial/odbc/tableau_91/3.png)
+
+### Connect Live
+Tableau 9.1 offers two data source connection types; choose the `Live` option to make sure 'Connect Live' mode is used.
+![](/images/tutorial/odbc/tableau_91/4.png)
+
+### Custom SQL
+To use custom SQL, click `New Custom SQL` on the left and enter the SQL statement in the pop-up dialog to add it as a data source.
+![](/images/tutorial/odbc/tableau_91/5.png)
+
+### Visualization
+Now you can go further and analyze your data visually with Tableau:
+![](/images/tutorial/odbc/tableau_91/6.png)
+
+### Publish to Tableau Server
+To publish to a Tableau Server, click the `Server` menu and select `Publish Workbook`.
+![](/images/tutorial/odbc/tableau_91/7.png)
+
+### More
+
+- Refer to the [Tableau Tutorial](./tableau.html) for more information
+- You can also check the write-up shared by community user Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau_91.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau_91.md b/website/_docs20/tutorial/tableau_91.md
new file mode 100644
index 0000000..dd94091
--- /dev/null
+++ b/website/_docs20/tutorial/tableau_91.md
@@ -0,0 +1,50 @@
+---
+layout: docs20
+title:  Tableau 9
+categories: tutorial
+permalink: /docs20/tutorial/tableau_91.html
+---
+
+Tableau 9.x has been out for a while, and many users have been asking about Apache Kylin support for this version. With the updated Kylin ODBC Driver, users can now interact with a Kylin service through Tableau 9.x.
+
+
+### For Tableau 8.x Users
+Please refer to the [Kylin and Tableau Tutorial](./tableau.html) for a detailed guide.
+
+### Install Kylin ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Make sure you download and install Kylin ODBC Driver __v1.5__. If an earlier version of the ODBC driver is already installed on your system, uninstall it first.
+
+### Connect to Kylin Server
+Connect using the driver: start Tableau 9.1 Desktop, click `Other Database(ODBC)` in the left panel, and choose `KylinODBCDriver` in the pop-up window.
+![](/images/tutorial/odbc/tableau_91/1.png)
+
+Provide your server location, credentials, and project. Click the `Connect` button to get the list of projects you have permission to access; see details in the [Kylin Cube Permission Grant Tutorial](./acl.html).
+![](/images/tutorial/odbc/tableau_91/2.png)
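
For scripted access through the same driver outside Tableau, those fields can be assembled into a DSN-less ODBC connection string. A minimal sketch, assuming Python with `pyodbc` available; the helper name, keyword names (SERVER, PORT, PROJECT, UID, PWD), and sample values are illustrative, so verify the keywords against your driver's documentation:

```python
# Sketch: build a DSN-less connection string for the Kylin ODBC driver.
# Keyword names follow common ODBC conventions and are assumptions here.

def kylin_odbc_conn_str(server, port, project, user, password):
    """Assemble an ODBC connection string from the fields entered in Tableau."""
    parts = {
        "DRIVER": "{KylinODBCDriver}",
        "SERVER": server,
        "PORT": str(port),
        "PROJECT": project,
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = kylin_odbc_conn_str("kylin-host", 7070, "learn_kylin", "ADMIN", "KYLIN")
# import pyodbc; conn = pyodbc.connect(conn_str)  # requires the driver installed
```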
+
+### Mapping Data Model
+In the left panel, select `defaultCatalog` as the database and click the `Search` button in the table search box; all tables will be listed. Drag and drop tables into the region on the right to add them as data sources. Make sure the JOINs are configured correctly.
+![](/images/tutorial/odbc/tableau_91/3.png)
+
+### Connect Live
+There are two types of `Connection`; choose the `Live` option to make sure Connect Live mode is used.
+![](/images/tutorial/odbc/tableau_91/4.png)
+
+### Custom SQL
+To use custom SQL, click `New Custom SQL` in the left panel and type the SQL statement in the pop-up dialog.
+![](/images/tutorial/odbc/tableau_91/5.png)
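
It can help to sanity-check a custom SQL statement against Kylin's REST query endpoint before pasting it into Tableau. A sketch assuming Python with the `requests` package; the host, the `learn_kylin` project, and the `kylin_sales` sample query are illustrative values only:

```python
import json

# Sketch: build the JSON body for Kylin's REST query API (POST /kylin/api/query)
# so a custom SQL statement can be verified before it becomes a Tableau source.
KYLIN_BASE = "http://kylin-host:7070/kylin/api"   # example host

def query_payload(sql, project, limit=100):
    """Build the request body for the query endpoint."""
    return {"sql": sql, "project": project, "offset": 0,
            "limit": limit, "acceptPartial": False}

payload = query_payload(
    "select part_dt, sum(price) from kylin_sales group by part_dt",
    project="learn_kylin")
body = json.dumps(payload)

# To actually run it (requires a live server and the `requests` package):
# import requests
# resp = requests.post(KYLIN_BASE + "/query", json=payload, auth=("ADMIN", "KYLIN"))
# print(resp.json()["results"][:5])
```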
+
+### Visualization
+Now you can start to enjoy analyzing with Tableau 9.1.
+![](/images/tutorial/odbc/tableau_91/6.png)
+
+### Publish to Tableau Server
+To publish your local dashboard to a Tableau Server, expand the `Server` menu and select `Publish Workbook`.
+![](/images/tutorial/odbc/tableau_91/7.png)
+
+### More
+
+- You can refer to the [Kylin and Tableau Tutorial](./tableau.html) for more details.
+- Here is a good tutorial written by Alberto Ramon Portoles (a.ramonportoles@gmail.com): [KylinWithTableau](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau)
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/web.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/web.cn.md b/website/_docs20/tutorial/web.cn.md
new file mode 100644
index 0000000..73ffbdd
--- /dev/null
+++ b/website/_docs20/tutorial/web.cn.md
@@ -0,0 +1,134 @@
+---
+layout: docs20-cn
+title:  Kylin Web Interface Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/web.html
+version: v1.2
+---
+
+> **Supported Browsers**
+> 
+> Windows: Google Chrome, FireFox
+> 
+> Mac: Google Chrome, FireFox, Safari
+
+## 1. Access & Login
+Host to access: http://hostname:7070
+Log in with username/password: ADMIN/KYLIN
+
+![]( /images/Kylin-Web-Tutorial/1 login.png)
+
+## 2. Hive Tables Available in Kylin
+Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, since so far it is a pre-built OLAP (MOLAP) system. To make a table available in Kylin, use the "Sync" function to conveniently sync tables from Hive.
+
+![]( /images/Kylin-Web-Tutorial/2 tables.png)
+
+## 3. Kylin OLAP Cube
+Kylin's OLAP cubes are pre-computed datasets derived from star-schema Hive tables. This is the web management page for users to explore and manage all cubes. Go to the `Cubes` page from the menu bar, and all cubes available in the system will be listed.
+
+![]( /images/Kylin-Web-Tutorial/3 cubes.png)
+
+Explore more details about a cube:
+
+* Form View:
+
+   ![]( /images/Kylin-Web-Tutorial/4 form-view.png)
+
+* SQL View (the Hive query that reads data to generate the cube):
+
+   ![]( /images/Kylin-Web-Tutorial/5 sql-view.png)
+
+* Visualization (shows the star schema behind this cube):
+
+   ![]( /images/Kylin-Web-Tutorial/6 visualization.png)
+
+* Access (grant user/role privileges; in the beta, the grant operation is only open to administrators):
+
+   ![]( /images/Kylin-Web-Tutorial/7 access.png)
+
+## 4. Write and Run SQL on the Web
+Kylin's web interface provides a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set using the pivot analysis and visualization described in section 5.
+
+> **Query Limits**
+> 
+> 1. Only SELECT queries are supported
+> 
+> 2. To avoid huge network traffic from server to client, the scan range threshold is set to 1,000,000 in the beta.
+> 
+> 3. In the beta, SQL whose data cannot be found in a cube will not be redirected to Hive
+
+Go to the "Query" page from the menu bar:
+
+![]( /images/Kylin-Web-Tutorial/8 query.png)
+
+* Source Tables:
+
+   Browse the currently available tables (same structure and metadata as in Hive):
+  
+   ![]( /images/Kylin-Web-Tutorial/9 query-table.png)
+
+* New Query:
+
+   You can write and run your queries and explore the results. Here is a sample query for reference:
+
+   ![]( /images/Kylin-Web-Tutorial/10 query-result.png)
+
+* Saved Queries:
+
+   Saved queries are associated with your user account, so you can retrieve them from different browsers and even different machines.
+   Click "Save" in the result area, and a dialog asking for a name and description will pop up to save the current query:
+
+   ![]( /images/Kylin-Web-Tutorial/11 save-query.png)
+
+   Click "Saved Queries" to browse all saved queries; you can resubmit one directly to run it, or delete it:
+
+   ![]( /images/Kylin-Web-Tutorial/11 save-query-2.png)
+
+* Query History:
+
+   Only the current user's query history in the current browser is kept. This requires cookies to be enabled, and the data will be lost if you clear the browser cache. On the "Query History" tab, you can resubmit any entry directly to run it again.
+
+## 5. Pivot Analysis and Visualization
+Kylin's web interface provides a simple pivot and visualization tool for users to explore their query results:
+
+* General Information:
+
+   When a query runs successfully, a success indicator is shown along with the name of the cube that was hit.
+   It also shows how long the query ran in the backend engine (excluding the network traffic from the Kylin server to the browser):
+
+   ![]( /images/Kylin-Web-Tutorial/12 general.png)
+
+* Query Results:
+
+   Results can easily be sorted on a column.
+
+   ![]( /images/Kylin-Web-Tutorial/13 results.png)
+
+* Export to a CSV File
+
+   Click the "Export" button to save the current results as a CSV file.
+
+* Pivot Table:
+
+   Drag one or more columns onto the header, and the results will be grouped by those columns' values:
+
+   ![]( /images/Kylin-Web-Tutorial/14 drag.png)
+
+* Visualization:
+
+   The result set can also conveniently be displayed as different charts under "Visualization":
+
+   Note: the line chart is only available when at least one dimension from the Hive table has a real "Date" column type.
+
+   * Bar Chart:
+
+   ![]( /images/Kylin-Web-Tutorial/15 bar-chart.png)
+   
+   * Pie Chart:
+
+   ![]( /images/Kylin-Web-Tutorial/16 pie-chart.png)
+
+   * Line Chart:
+
+   ![]( /images/Kylin-Web-Tutorial/17 line-chart.png)
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/web.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/web.md b/website/_docs20/tutorial/web.md
new file mode 100644
index 0000000..b3c29fe
--- /dev/null
+++ b/website/_docs20/tutorial/web.md
@@ -0,0 +1,123 @@
+---
+layout: docs20
+title:  Kylin Web Interface
+categories: tutorial
+permalink: /docs20/tutorial/web.html
+---
+
+> **Supported Browsers**
+> Windows: Google Chrome, FireFox
+> Mac: Google Chrome, FireFox, Safari
+
+## 1. Access & Login
+Host to access: http://hostname:7070
+Log in with username/password: ADMIN/KYLIN
+
+![](/images/tutorial/1.5/Kylin-Web-Tutorial/1 login.png)
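
The same default credentials are also accepted by Kylin's REST API, which takes them as HTTP Basic authentication. A small sketch of how the `Authorization` header is built; the helper name is illustrative, and the endpoint in the comment is Kylin's documented authentication path:

```python
import base64

# Sketch: build the HTTP Basic "Authorization" header that Kylin's REST API
# expects, using the same default credentials as the web login (ADMIN/KYLIN).
def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return "Basic " + token

header = basic_auth_header("ADMIN", "KYLIN")
# Example use (requires a live server and the `requests` package):
# import requests
# requests.post("http://hostname:7070/kylin/api/user/authentication",
#               headers={"Authorization": header})
```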
+
+## 2. Sync Hive Tables into Kylin
+Although Kylin uses SQL as its query interface and leverages Hive metadata, it does not let users query all Hive tables, since so far it is a pre-built OLAP (MOLAP) system. To make a table available in Kylin, use the "Sync" function to easily sync tables from Hive.
+
+![](/images/tutorial/1.5/Kylin-Web-Tutorial/2 tables.png)
+
+## 3. Kylin OLAP Cube
+Kylin's OLAP cubes are pre-computed datasets derived from star-schema tables. This is the web interface for users to explore and manage all cubes. Go to the `Model` menu; it will list all cubes available in the system:
+
+![](/images/tutorial/1.5/Kylin-Web-Tutorial/3 cubes.png)
+
+To explore more details about a cube:
+
+* Form View:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/4 form-view.png)
+
+* SQL View (the Hive query that reads data to generate the cube):
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/5 sql-view.png)
+
+* Access (grant user/role privileges; the grant operation is only open to Admin):
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/7 access.png)
+
+## 4. Write and Execute SQL on the Web
+Kylin's web interface offers a simple query tool for users to run SQL to explore existing cubes, verify results, and explore the result set using the pivot analysis and visualization described in section 5.
+
+> **Query Limit**
+> 
+> 1. Only SELECT queries are supported
+> 
+> 2. SQL will not be redirected to Hive
+
+Go to "Insight" menu:
+
+![](/images/tutorial/1.5/Kylin-Web-Tutorial/8 query.png)
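
Because only SELECT statements are accepted, a client-side script can mirror that check before submitting anything. A minimal sketch; the helper is illustrative and deliberately permissive about leading parentheses and `WITH` clauses:

```python
# Sketch: mirror the web UI's "SELECT only" limit on the client side so
# unsupported statements fail fast before ever reaching the server.
def is_select_query(sql: str) -> bool:
    """Return True if the statement starts with SELECT (or a WITH clause)."""
    first = sql.lstrip().lstrip("(").split(None, 1)
    return bool(first) and first[0].lower() in ("select", "with")

assert is_select_query("SELECT count(*) FROM kylin_sales")
assert not is_select_query("DROP TABLE kylin_sales")
```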
+
+* Source Tables:
+
+   Browse the currently available tables (same structure and metadata as in Hive):
+  
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/9 query-table.png)
+
+* New Query:
+
+   You can write and execute your query and explore the result.
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/10 query-result.png)
+
+* Saved Queries (only works after LDAP security is enabled):
+
+   Saved queries are associated with your user account, so you can retrieve them from different browsers and even different machines.
+   Click "Save" in the result area, and a dialog asking for a name and description will pop up to save the current query:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/11 save-query.png)
+
+   Click "Saved Queries" to browser all your saved queries, you could direct submit it or remove it.
+
+* Query History:
+
+   Only the current user's query history in the current browser is kept. This requires cookies to be enabled, and the history will be lost if you clear the browser's cache. On the "Query History" tab, you can resubmit any entry directly to execute it again.
+
+## 5. Pivot Analysis and Visualization
+There is a simple pivot and visualization analysis tool in Kylin's web interface for users to explore their query results:
+
+* General Information:
+
+   When a query executes successfully, a success indicator is shown along with the name of the cube that was hit.
+   It also shows how long the query ran in the backend engine (not covering the network traffic from the Kylin server to the browser):
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/12 general.png)
+
+* Query Result:
+
+   It is easy to sort on a column.
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/13 results.png)
+
+* Export to CSV File
+
+   Click "Export" button to save current result as CSV file.
+
+* Pivot Table:
+
+   Drag and drop one or more columns into the header, and the result will be grouped by those columns' values:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/14 drag.png)
+
+* Visualization:
+
+   Also, the result set can easily be shown with different charts under "Visualization":
+
+   Note: the line chart is only available when at least one dimension from the Hive table has a real "Date" column type.
+
+   * Bar Chart:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/15 bar-chart.png)
+   
+   * Pie Chart:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/16 pie-chart.png)
+
+   * Line Chart:
+
+   ![](/images/tutorial/1.5/Kylin-Web-Tutorial/17 line-chart.png)
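
What the Export button does can also be reproduced in a script: write the result header and rows out with Python's `csv` module. The column labels and rows below are made-up sample data for illustration; in practice they would come from a query result:

```python
import csv
import io

# Sketch: write a query result set to a CSV file, as the "Export" button does.
def export_csv(header, rows, fh):
    writer = csv.writer(fh)
    writer.writerow(header)   # column labels first
    writer.writerows(rows)    # then one row per result record

buf = io.StringIO()
export_csv(["PART_DT", "TOTAL_PRICE"],
           [["2012-01-01", "466.63"], ["2012-01-02", "970.24"]], buf)
print(buf.getvalue().splitlines()[0])   # PART_DT,TOTAL_PRICE
```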
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/docs20_nav.cn.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs20_nav.cn.html b/website/_includes/docs20_nav.cn.html
new file mode 100644
index 0000000..79f30e1
--- /dev/null
+++ b/website/_includes/docs20_nav.cn.html
@@ -0,0 +1,33 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<div class="col-md-3 col-lg-3 col-xs-4 aside1 visible-md visible-lg" id="nside1" style=" padding-top: 2em">
+    <ul class="nav nav-pills nav-stacked">    
+    {% for section in site.data.docs20-cn %}
+    <li><a href="#{{ section | first }}" data-toggle="collapse" id="navtitle">{{ section.title }}</a></li>
+    <div class="collapse in">
+  	<div class="list-group" id="list1">
+    <ul style="list-style-type:disc">
+    {% include docs20_ul.cn.html items=section.docs %}
+    </ul>
+  </div>
+</div>
+    {% endfor %}
+
+    </ul>
+</div>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/docs20_nav.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs20_nav.html b/website/_includes/docs20_nav.html
new file mode 100644
index 0000000..fbd7aab
--- /dev/null
+++ b/website/_includes/docs20_nav.html
@@ -0,0 +1,33 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<div class="col-md-3 col-lg-3 col-xs-4 aside1 visible-md visible-lg" id="nside1" style=" padding-top: 2em">
+    <ul class="nav nav-pills nav-stacked">
+    {% for section in site.data.docs20 %}
+    <li><a href="#{{ section | first }}" data-toggle="collapse" id="navtitle">{{ section.title }}</a></li>
+    <div class="collapse in">
+  	<div class="list-group" id="list1">
+    <ul style="list-style-type:disc">
+    {% include docs20_ul.html items=section.docs %}
+    </ul>
+  </div>
+</div>
+    {% endfor %}
+
+    </ul>
+</div>

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/docs20_ul.cn.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs20_ul.cn.html b/website/_includes/docs20_ul.cn.html
new file mode 100644
index 0000000..9bc37dc
--- /dev/null
+++ b/website/_includes/docs20_ul.cn.html
@@ -0,0 +1,28 @@
+{% assign items = include.items %}
+
+
+
+{% for item in items %}
+
+  {% assign item_url = item | prepend:"/cn/docs20/" | append:".html" %}
+
+  {% if item_url == page.url %}
+    {% assign c = "current" %}
+  {% else %}
+    {% assign c = "" %}
+  {% endif %}
+
+
+
+  {% for p in site.docs20 %}
+    {% if p.url == item_url %}
+      <li><a href="{{ p.url }}" class="list-group-item-lay pjaxlink" id="navlist">{{p.title}}</a></li>      
+      {% break %}
+    {% endif %}
+  {% endfor %}
+
+{% endfor %}
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/docs20_ul.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs20_ul.html b/website/_includes/docs20_ul.html
new file mode 100644
index 0000000..a3f83f5
--- /dev/null
+++ b/website/_includes/docs20_ul.html
@@ -0,0 +1,29 @@
+{% assign items = include.items %}
+
+
+
+{% for item in items %}
+
+  {% assign item_url = item | prepend:"/docs20/" | append:".html" %}
+      
+
+  {% if item_url == page.url %}
+    {% assign c = "current" %}
+  {% else %}
+    {% assign c = "" %}
+  {% endif %}
+
+
+
+  {% for p in site.docs20 %}
+    {% if p.url == item_url %}
+      <li><a href="{{ p.url }}" class="list-group-item-lay pjaxlink" id="navlist">{{p.title}}</a></li>      
+      {% break %}
+    {% endif %}
+  {% endfor %}
+
+{% endfor %}
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/header.cn.html
----------------------------------------------------------------------
diff --git a/website/_includes/header.cn.html b/website/_includes/header.cn.html
index 3f6bdbe..74e7627 100644
--- a/website/_includes/header.cn.html
+++ b/website/_includes/header.cn.html
@@ -40,7 +40,7 @@
     <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
       <ul class="nav navbar-nav">
      <li><a href="/cn">\u9996\u9875</a></li>
-          <li><a href="/cn/docs15" >\u6587\u6863</a></li>
+          <li><a href="/cn/docs20" >\u6587\u6863</a></li>
           <li><a href="/cn/download">\u4e0b\u8f7d</li>
           <li><a href="/community" >\u793e\u533a</a></li>
           <li><a href="/development" >\u5f00\u53d1</a></li>

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_includes/header.html
----------------------------------------------------------------------
diff --git a/website/_includes/header.html b/website/_includes/header.html
index 97c8776..bca3ada 100644
--- a/website/_includes/header.html
+++ b/website/_includes/header.html
@@ -45,7 +45,7 @@
     <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
       <ul class="nav navbar-nav">
      <li><a href="/">Home</a></li>
-          <li><a href="/docs16" >Docs</a></li>
+          <li><a href="/docs20" >Docs</a></li>
           <li><a href="/download">Download</li>
           <li><a href="/community" >Community</a></li>
           <li><a href="/development" >Development</a></li>

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_layouts/docs20-cn.html
----------------------------------------------------------------------
diff --git a/website/_layouts/docs20-cn.html b/website/_layouts/docs20-cn.html
new file mode 100644
index 0000000..52fb5ef
--- /dev/null
+++ b/website/_layouts/docs20-cn.html
@@ -0,0 +1,46 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<!doctype html>
+<html>
+	{% include head.cn.html %}
+	<body>
+		{% include header.cn.html %}
+		
+		<div class="container">
+			<div class="row">
+				{% include docs20_nav.cn.html %}
+				<div class="col-md-9 col-lg-9 col-xs-14 aside2">
+					<div id="container">
+						<div id="pjax">
+							<h1 class="post-title">{{ page.title }}</h1>
+							<article class="post-content" >
+							{{ content }}
+							</article>
+						</div>
+					</div>
+				</div>
+			</div>
+		</div>		
+		{% include footer.html %}
+
+	<script src="/assets/js/jquery-1.9.1.min.js"></script> 
+	<script src="/assets/js/bootstrap.min.js"></script> 
+	<script src="/assets/js/main.js"></script>
+	</body>
+</html>

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_layouts/docs20.html
----------------------------------------------------------------------
diff --git a/website/_layouts/docs20.html b/website/_layouts/docs20.html
new file mode 100644
index 0000000..7b4ac02
--- /dev/null
+++ b/website/_layouts/docs20.html
@@ -0,0 +1,50 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<!doctype html>
+<html>
+	{% include head.html %}
+	<body>
+		{% include header.html %}
+		
+		<div class="container">
+			<div class="row">
+				{% include docs20_nav.html %}
+				<div class="col-md-9 col-lg-9 col-xs-14 aside2">
+					<div id="container">
+						<div id="pjax">
+							<h1 class="post-title">{{ page.title }}</h1>
+							{% if page.version %}
+								<p>version: {{page.version}}, since: {{page.since}}</p>
+							{% endif %}
+							<article class="post-content" >	
+							{{ content }}
+							</article>
+						</div>
+					</div>
+				</div>
+			</div>
+		</div>		
+		{% include footer.html %}
+
+	<script src="/assets/js/jquery-1.9.1.min.js"></script> 
+	<script src="/assets/js/bootstrap.min.js"></script> 
+	<script src="/assets/js/main.js"></script>
+	</body>
+</html>


[5/5] kylin git commit: prepare docs for 2.0

Posted by li...@apache.org.
prepare docs for 2.0


Project: http://git-wip-us.apache.org/repos/asf/kylin/repo
Commit: http://git-wip-us.apache.org/repos/asf/kylin/commit/7ea64f38
Tree: http://git-wip-us.apache.org/repos/asf/kylin/tree/7ea64f38
Diff: http://git-wip-us.apache.org/repos/asf/kylin/diff/7ea64f38

Branch: refs/heads/document
Commit: 7ea64f38af437893648622dc86ed62d29bd1ca79
Parents: 55b1670
Author: Li Yang <li...@apache.org>
Authored: Sat Mar 25 09:22:16 2017 +0800
Committer: Li Yang <li...@apache.org>
Committed: Sat Mar 25 09:22:16 2017 +0800

----------------------------------------------------------------------
 website/_config.yml                             |   10 +-
 website/_data/docs20-cn.yml                     |   20 +
 website/_data/docs20.yml                        |   65 +
 website/_docs16/index.md                        |    1 -
 website/_docs16/tutorial/cube_spark.md          |  166 ---
 .../_docs20/gettingstarted/best_practices.md    |   27 +
 website/_docs20/gettingstarted/concepts.md      |   64 +
 website/_docs20/gettingstarted/events.md        |   24 +
 website/_docs20/gettingstarted/faq.md           |  119 ++
 website/_docs20/gettingstarted/terminology.md   |   25 +
 website/_docs20/howto/howto_backup_metadata.md  |   60 +
 .../howto/howto_build_cube_with_restapi.md      |   53 +
 website/_docs20/howto/howto_cleanup_storage.md  |   22 +
 website/_docs20/howto/howto_jdbc.md             |   92 ++
 website/_docs20/howto/howto_ldap_and_sso.md     |  128 ++
 website/_docs20/howto/howto_optimize_build.md   |  190 +++
 website/_docs20/howto/howto_optimize_cubes.md   |  212 +++
 .../_docs20/howto/howto_update_coprocessor.md   |   14 +
 website/_docs20/howto/howto_upgrade.md          |   66 +
 website/_docs20/howto/howto_use_beeline.md      |   14 +
 .../howto/howto_use_distributed_scheduler.md    |   16 +
 website/_docs20/howto/howto_use_restapi.md      | 1113 +++++++++++++++
 .../_docs20/howto/howto_use_restapi_in_js.md    |   46 +
 website/_docs20/index.cn.md                     |   26 +
 website/_docs20/index.md                        |   59 +
 website/_docs20/install/advance_settings.md     |   98 ++
 website/_docs20/install/hadoop_evn.md           |   40 +
 website/_docs20/install/index.cn.md             |   46 +
 website/_docs20/install/index.md                |   35 +
 website/_docs20/install/kylin_cluster.md        |   32 +
 website/_docs20/install/kylin_docker.md         |   10 +
 .../_docs20/install/manual_install_guide.cn.md  |   48 +
 website/_docs20/release_notes.md                | 1333 ++++++++++++++++++
 website/_docs20/tutorial/acl.cn.md              |   35 +
 website/_docs20/tutorial/acl.md                 |   32 +
 website/_docs20/tutorial/create_cube.cn.md      |  129 ++
 website/_docs20/tutorial/create_cube.md         |  198 +++
 website/_docs20/tutorial/cube_build_job.cn.md   |   66 +
 website/_docs20/tutorial/cube_build_job.md      |   67 +
 website/_docs20/tutorial/cube_spark.md          |  166 +++
 website/_docs20/tutorial/cube_streaming.md      |  219 +++
 website/_docs20/tutorial/flink.md               |  249 ++++
 .../_docs20/tutorial/kylin_client_tool.cn.md    |   97 ++
 website/_docs20/tutorial/kylin_sample.md        |   21 +
 website/_docs20/tutorial/odbc.cn.md             |   34 +
 website/_docs20/tutorial/odbc.md                |   49 +
 website/_docs20/tutorial/powerbi.cn.md          |   56 +
 website/_docs20/tutorial/powerbi.md             |   54 +
 website/_docs20/tutorial/squirrel.md            |  112 ++
 website/_docs20/tutorial/tableau.cn.md          |  116 ++
 website/_docs20/tutorial/tableau.md             |  113 ++
 website/_docs20/tutorial/tableau_91.cn.md       |   51 +
 website/_docs20/tutorial/tableau_91.md          |   50 +
 website/_docs20/tutorial/web.cn.md              |  134 ++
 website/_docs20/tutorial/web.md                 |  123 ++
 website/_includes/docs20_nav.cn.html            |   33 +
 website/_includes/docs20_nav.html               |   33 +
 website/_includes/docs20_ul.cn.html             |   28 +
 website/_includes/docs20_ul.html                |   29 +
 website/_includes/header.cn.html                |    2 +-
 website/_includes/header.html                   |    2 +-
 website/_layouts/docs20-cn.html                 |   46 +
 website/_layouts/docs20.html                    |   50 +
 63 files changed, 6496 insertions(+), 172 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_config.yml
----------------------------------------------------------------------
diff --git a/website/_config.yml b/website/_config.yml
index c58bc26..efeed85 100644
--- a/website/_config.yml
+++ b/website/_config.yml
@@ -61,10 +61,14 @@ collections:
   docs15:
     output: true
   docs15-cn:
-    output: true   
+    output: true
   docs16:
     output: true
   docs16-cn:
-    output: true     
+    output: true
+  docs20:
+    output: true
+  docs20-cn:
+    output: true
   dev:
-    output: true  
+    output: true

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_data/docs20-cn.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs20-cn.yml b/website/_data/docs20-cn.yml
new file mode 100644
index 0000000..f69fbe5
--- /dev/null
+++ b/website/_data/docs20-cn.yml
@@ -0,0 +1,20 @@
+- title: \u5f00\u59cb
+  docs:
+  - index
+
+- title: \u5b89\u88c5
+  docs:
+  - install/install_guide
+  - install/manual_install_guide
+
+- title: \u6559\u7a0b
+  docs:
+  - tutorial/create_cube_cn
+  - tutorial/cube_build_job
+  - tutorial/acl
+  - tutorial/web
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/odbc
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_data/docs20.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs20.yml b/website/_data/docs20.yml
new file mode 100644
index 0000000..1d4501d
--- /dev/null
+++ b/website/_data/docs20.yml
@@ -0,0 +1,65 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Docs menu items, for English one, docs20-cn.yml is for Chinese one
+# The docs menu is constructed in docs20_nav.html with these data
+- title: Getting Started
+  docs:
+  - index
+  - release_notes
+  - gettingstarted/faq
+  - gettingstarted/events
+  - gettingstarted/best_practices
+  - gettingstarted/terminology
+  - gettingstarted/concepts
+
+- title: Installation
+  docs:
+  - install/index
+  - install/hadoop_env
+  - install/manual_install_guide
+  - install/kylin_cluster
+  - install/advance_settings
+  - install/kylin_docker
+
+- title: Tutorial
+  docs:
+  - tutorial/kylin_sample
+  - tutorial/create_cube
+  - tutorial/cube_build_job
+  - tutorial/cube_spark
+  - tutorial/acl
+  - tutorial/web
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/odbc
+  - tutorial/flink
+  - tutorial/squirrel
+
+- title: How To
+  docs:
+  - howto/howto_build_cube_with_restapi
+  - howto/howto_use_restapi_in_js
+  - howto/howto_use_restapi
+  - howto/howto_optimize_cubes
+  - howto/howto_optimize_build
+  - howto/howto_backup_metadata
+  - howto/howto_cleanup_storage
+  - howto/howto_jdbc
+  - howto/howto_upgrade
+  - howto/howto_ldap_and_sso
+  - howto/howto_use_beeline
+  - howto/howto_update_coprocessor

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs16/index.md
----------------------------------------------------------------------
diff --git a/website/_docs16/index.md b/website/_docs16/index.md
index 87c97b4..b4eee3b 100644
--- a/website/_docs16/index.md
+++ b/website/_docs16/index.md
@@ -32,7 +32,6 @@ Tutorial
 4. [Web Interface](tutorial/web.html)
 5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
 6. [Build Cube with Streaming Data (beta)](tutorial/cube_streaming.html)
-6. [Build Cube with Spark engine (v2.0 beta)](tutorial/cube_spark.html)
 
 
 Connectivity and APIs

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs16/tutorial/cube_spark.md
----------------------------------------------------------------------
diff --git a/website/_docs16/tutorial/cube_spark.md b/website/_docs16/tutorial/cube_spark.md
deleted file mode 100644
index 743eb51..0000000
--- a/website/_docs16/tutorial/cube_spark.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-layout: docs16
-title:  Build Cube with Spark (beta)
-categories: tutorial
-permalink: /docs16/tutorial/cube_spark.html
----
-Kylin v2.0 introduces the Spark cube engine, which uses Apache Spark to replace MapReduce in the cube build step; you can check [this blog](/blog/2017/02/23/by-layer-spark-cubing/) for an overall picture. This document uses the sample cube to demonstrate how to try the new engine.
-
-## Preparation
-To finish this tutorial, you need a Hadoop environment with Kylin v2.0.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM, in which the Hadoop components as well as Hive/HBase have already been started. 
-
-## Install Kylin v2.0.0 beta
-
-Download the Kylin v2.0.0 beta for HBase 1.x from Kylin's download page, and then uncompress the tar ball into */usr/local/* folder:
-
-{% highlight Groff markup %}
-
-wget https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.0.0-beta/apache-kylin-2.0.0-beta-hbase1x.tar.gz -P /tmp
-
-tar -zxvf /tmp/apache-kylin-2.0.0-beta-hbase1x.tar.gz -C /usr/local/
-
-export KYLIN_HOME=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin
-{% endhighlight %}
-
-## Prepare "kylin.env.hadoop-conf-dir"
-
-To run Spark on Yarn, you need to specify the **HADOOP_CONF_DIR** environment variable, which is the directory that contains the (client side) configuration files for Hadoop. In many Hadoop distributions the directory is "/etc/hadoop/conf"; but Kylin needs to access not only HDFS, Yarn and Hive, but also HBase, so the default directory might not have all the necessary files. In this case, you need to create a new directory and then copy or link those client files (core-site.xml, yarn-site.xml, hive-site.xml and hbase-site.xml) there. In HDP 2.4, there is a conflict between hive-tez and Spark, so you need to change the default engine from "tez" to "mr" in the copy made for Kylin.
-
-{% highlight Groff markup %}
-
-mkdir $KYLIN_HOME/hadoop-conf
-ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
-ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
-ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
-cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
-vi $KYLIN_HOME/hadoop-conf/hive-site.xml (change "hive.execution.engine" value from "tez" to "mr")
-
-{% endhighlight %}
-
-Now, let Kylin know this directory with property "kylin.env.hadoop-conf-dir" in kylin.properties:
-
-{% highlight Groff markup %}
-kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf
-{% endhighlight %}
-
-If this property isn't set, Kylin will use the directory that "hive-site.xml" is located in; since that folder may have no "hbase-site.xml", you will get an HBase/ZK connection error in Spark.
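As a quick way to catch this misconfiguration, a small script can verify that the configured directory holds all four client files. This is just a sanity-check sketch; the path below is the one used in this tutorial and is an assumption about your install location.

```python
import os

# The four client config files Kylin's Spark job needs; a missing
# hbase-site.xml is what leads to the HBase/ZK connection error above.
REQUIRED = ["core-site.xml", "yarn-site.xml", "hive-site.xml", "hbase-site.xml"]

def missing_confs(conf_dir):
    """Return the required config files that are absent from conf_dir."""
    return [f for f in REQUIRED
            if not os.path.exists(os.path.join(conf_dir, f))]

print(missing_confs("/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf"))
```

An empty list means the directory named by "kylin.env.hadoop-conf-dir" is complete.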
-
-## Check Spark configuration
-
-Kylin embeds a Spark binary (v1.6.3) in $KYLIN_HOME/spark; all the Spark configurations can be managed in $KYLIN_HOME/conf/kylin.properties with the prefix *"kylin.engine.spark-conf."*. These properties will be extracted and applied when the Spark job is submitted; e.g., if you configure "kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf spark.executor.memory=4G" as a parameter when executing "spark-submit".
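The prefix-stripping rule can be sketched as follows; this is an illustration of the mapping, not Kylin's actual code.

```python
PREFIX = "kylin.engine.spark-conf."

def to_spark_submit_args(props):
    """Turn prefixed kylin.properties entries into spark-submit --conf pairs."""
    args = []
    for key in sorted(props):
        if key.startswith(PREFIX):
            # Drop the Kylin prefix; the remainder is a plain Spark property.
            args += ["--conf", "%s=%s" % (key[len(PREFIX):], props[key])]
    return args

print(to_spark_submit_args({"kylin.engine.spark-conf.spark.executor.memory": "4G"}))
# -> ['--conf', 'spark.executor.memory=4G']
```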
-
-Before you run Spark cubing, we suggest taking a look at these configurations and customizing them for your cluster. Below are the default configurations, which are also the minimal config for a sandbox (1 executor with 1GB memory); usually in a normal cluster you need many more executors, each with at least 4GB memory and 2 cores:
-
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.master=yarn
-kylin.engine.spark-conf.spark.submit.deployMode=cluster
-kylin.engine.spark-conf.spark.yarn.queue=default
-kylin.engine.spark-conf.spark.executor.memory=1G
-kylin.engine.spark-conf.spark.executor.cores=2
-kylin.engine.spark-conf.spark.executor.instances=1
-kylin.engine.spark-conf.spark.eventLog.enabled=true
-kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
-kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
-#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
-#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
-
-## uncomment for HDP
-#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
-#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
-#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
-
-{% endhighlight %}
-
-For running on the Hortonworks platform, you need to specify "hdp.version" in the Java options for Yarn containers, so please uncomment the last three lines in kylin.properties. 
-
-Besides, in order to avoid repeatedly uploading the Spark assembly jar to Yarn, you can manually upload it once and then configure the jar's HDFS location; please note, the HDFS location needs to be a fully qualified name.
-
-{% highlight Groff markup %}
-hadoop fs -mkdir -p /kylin/spark/
-hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
-{% endhighlight %}
-
-After doing that, the config in kylin.properties will be:
-{% highlight Groff markup %}
-kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
-kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
-kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
-kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
-{% endhighlight %}
-
-All the "kylin.engine.spark-conf.*" parameters can be overwritten at the Cube or Project level, which gives more flexibility to the user.
-
-## Create and modify sample cube
-
-Run sample.sh to create the sample cube, and then start the Kylin server:
-
-{% highlight Groff markup %}
-
-$KYLIN_HOME/bin/sample.sh
-$KYLIN_HOME/bin/kylin.sh start
-
-{% endhighlight %}
-
-After Kylin is started, access the Kylin web UI and edit the "kylin_sales" cube; on the "Advanced Setting" page, change the "Cube Engine" from "MapReduce" to "Spark (Beta)":
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
-
-Click "Next" to the "Configuration Overwrites" page, click "+Property" to add property "kylin.engine.spark.rdd-partition-cut-mb" with value "100" (reasons below):
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
-
-The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a "TOPN(100)". Their size estimation can be inaccurate when the source data is small: the estimated size is much larger than the real size, which causes many more RDD partitions to be split and slows down the build. Here 100 is a more reasonable number. Click "Next" and "Save" to save the cube.
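The effect of this override can be sketched with made-up numbers; this illustrates the idea of size-based partition splitting, not Kylin's exact formula.

```python
import math

def rdd_partitions(estimated_size_mb, cut_mb):
    # One RDD partition per cut_mb of the (possibly inflated) size estimate.
    return max(1, math.ceil(estimated_size_mb / cut_mb))

# With an inflated 2000 MB estimate, a small cut value produces many tiny
# partitions and slows the build; 100 keeps the partition count reasonable.
print(rdd_partitions(2000, 10))   # 200
print(rdd_partitions(2000, 100))  # 20
```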
-
-
-## Build Cube with Spark
-
-Click "Build", and select the current date as the build end date. Kylin generates a build job on the "Monitor" page, in which the 7th step is the Spark cubing. The job engine starts to execute the steps in sequence. 
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
-
-When Kylin executes this step, you can monitor the status in the Yarn resource manager. Clicking the "Application Master" link will open the Spark web UI, which shows the progress of each stage and the detailed information.
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
-
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
-
-
-After all steps are successfully executed, the Cube becomes "Ready" and you can query it as normal.
-
-## Troubleshooting
-
-When getting an error, you should check "logs/kylin.log" first. It contains the full Spark command that Kylin executes, e.g.:
-
-{% highlight Groff markup %}
-2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf && /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf spark.driver.extraJavaOptions=-Dhdp.version=current  --conf spark.master=yarn  --conf spark.executor.extraJavaOptions=-Dhdp.version=current  --conf spark.executor.memory=1G  --conf spark.eventLog.enabled=true  --conf spark.eventLog.dir=hdfs:///kylin/spark-history  --conf spark.executor.cores=2  --conf spark.submit.deployMode=cluster --files /etc/hbase/2.4.0.0-169/0/hbase-site.xml
  --jars /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-client-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-common-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-protocol-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/metrics-core-2.2.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/guava-12.0.1.jar, /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/lib/kylin-job-2.0.0-SNAPSHOT.jar -className org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable kylin_intermediate_kylin_sales_cube_555c4d32_40bb_457d_909a_1bb017bf2d9e -segmentId 555c4d32-40bb-457d-909a-1bb017bf2d9e -confPath /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/conf -output hdfs:///kylin/kylin_metadata/kylin-2d5c1178-c6f6-4b50-8937-8e5e3b39227e/kylin_sales_cube/cuboid/ -cubename kylin_sales_cube
-
-{% endhighlight %}
-
-You can copy the command to execute it manually in a shell and then tune the parameters quickly; during the execution, you can access the Yarn resource manager to check more. If the job has already finished, you can check the history info in the Spark history server. 
-
-By default Kylin outputs the history to "hdfs:///kylin/spark-history"; you need to start a Spark history server on that directory, or change it to your existing Spark history server's event directory via the parameters "kylin.engine.spark-conf.spark.eventLog.dir" and "kylin.engine.spark-conf.spark.history.fs.logDirectory" in conf/kylin.properties.
-
-The following command will start a Spark history server instance on Kylin's output directory; before running it, make sure you have stopped the existing Spark history server in the sandbox:
-
-{% highlight Groff markup %}
-$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
-{% endhighlight %}
-
-In a web browser, access "http://sandbox:18080"; it shows the job history:
-
-   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
-
-Click a specific job and you will see the detailed runtime information, which is very helpful for troubleshooting and performance tuning.
-
-## Go further
-
-If you're a Kylin administrator but new to Spark, we suggest you go through the [Spark documents](https://spark.apache.org/docs/1.6.3/), and don't forget to update the configurations accordingly. Spark's performance relies on the cluster's memory and CPU resources, while Kylin's cube build is a heavy task when there is a complex data model and a huge dataset to build at one time. If your cluster resources can't fulfill the demand, errors like "OutOfMemoryError" will be thrown in Spark executors, so please use it properly. For a cube which has a UHC dimension, many combinations (e.g., a full cube with more than 12 dimensions), or memory-hungry measures (Count Distinct, Top-N), we suggest using the MapReduce engine. If your cube model is simple, all measures are SUM/MIN/MAX/COUNT, and the source data is of small to medium scale, the Spark engine would be a good choice. Besides, streaming build isn't supported in this engine so far (KYLIN-2484).
-
-Now the Spark engine is in public beta; if you have any question, comment, or bug fix, you are welcome to discuss it on dev@kylin.apache.org.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/gettingstarted/best_practices.md
----------------------------------------------------------------------
diff --git a/website/_docs20/gettingstarted/best_practices.md b/website/_docs20/gettingstarted/best_practices.md
new file mode 100644
index 0000000..59e9005
--- /dev/null
+++ b/website/_docs20/gettingstarted/best_practices.md
@@ -0,0 +1,27 @@
+---
+layout: docs20
+title:  "Community Best Practices"
+categories: gettingstarted
+permalink: /docs20/gettingstarted/best_practices.html
+since: v1.3.x
+---
+
+A list of articles about Kylin best practices contributed by the community. Some of them are from the Chinese community. Many thanks!
+
+* [Apache Kylin\u5728\u767e\u5ea6\u5730\u56fe\u7684\u5b9e\u8df5](http://www.infoq.com/cn/articles/practis-of-apache-kylin-in-baidu-map)
+
+* [Apache Kylin \u5927\u6570\u636e\u65f6\u4ee3\u7684OLAP\u5229\u5668](http://www.bitstech.net/2016/01/04/kylin-olap/)(\u7f51\u6613\u6848\u4f8b)
+
+* [Apache Kylin\u5728\u4e91\u6d77\u7684\u5b9e\u8df5](http://www.csdn.net/article/2015-11-27/2826343)(\u4eac\u4e1c\u6848\u4f8b)
+
+* [Kylin, Mondrian, Saiku\u7cfb\u7edf\u7684\u6574\u5408](http://tech.youzan.com/kylin-mondrian-saiku/)(\u6709\u8d5e\u6848\u4f8b)
+
+* [Big Data MDX with Mondrian and Apache Kylin](https://www.inovex.de/fileadmin/files/Vortraege/2015/big-data-mdx-with-mondrian-and-apache-kylin-sebastien-jelsch-pcm-11-2015.pdf)
+
+* [Kylin and Mondrain Interaction](https://github.com/mustangore/kylin-mondrian-interaction) (Thanks to [mustangore](https://github.com/mustangore))
+
+* [Kylin And Tableau Tutorial](https://github.com/albertoRamon/Kylin/tree/master/KylinWithTableau) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [Kylin and Qlik Integration](https://github.com/albertoRamon/Kylin/tree/master/KylinWithQlik) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
+
+* [How to use Hue with Kylin](https://github.com/albertoRamon/Kylin/tree/master/KylinWithHue) (Thanks to [Ramón Portolés, Alberto](https://www.linkedin.com/in/alberto-ramon-portoles-a02b523b))
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/gettingstarted/concepts.md
----------------------------------------------------------------------
diff --git a/website/_docs20/gettingstarted/concepts.md b/website/_docs20/gettingstarted/concepts.md
new file mode 100644
index 0000000..138a7f1
--- /dev/null
+++ b/website/_docs20/gettingstarted/concepts.md
@@ -0,0 +1,64 @@
+---
+layout: docs20
+title:  "Technical Concepts"
+categories: gettingstarted
+permalink: /docs20/gettingstarted/concepts.html
+since: v1.2
+---
+ 
+Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
+For domain terminology, please refer to: [Terminology](terminology.html)
+
+## CUBE
+* __Table__ - This is the definition of Hive tables used as the source of cubes, which must be synced before building cubes.
+![](/images/docs/concepts/DataSource.png)
+
+* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and filter conditions.
+![](/images/docs/concepts/DataModel.png)
+
+* __Cube Descriptor__ - This describes the definition and settings of a cube instance: which data model to use, what dimensions and measures to have, how to partition into segments, how to handle auto-merge, etc.
+![](/images/docs/concepts/CubeDesc.png)
+
+* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor, and consists of one or more cube segments according to the partition settings.
+![](/images/docs/concepts/CubeInstance.png)
+
+* __Partition__ - The user can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments with different date periods.
+![](/images/docs/concepts/Partition.png)
+
+* __Cube Segment__ - This is the actual carrier of cube data, and maps to an HTable in HBase. One build job creates one new segment for the cube instance. Once data changes in a specified data period, we can refresh the related segments to avoid rebuilding the whole cube.
+![](/images/docs/concepts/CubeSegment.png)
+
+* __Aggregation Group__ - Each aggregation group is a subset of dimensions, and builds cuboids from the combinations inside it. It aims at pruning for optimization.
+![](/images/docs/concepts/AggregationGroup.png)
+
+## DIMENSION & MEASURE
+* __Mandatory__ - This dimension type is used for cuboid pruning: if a dimension is specified as "mandatory", then the combinations without this dimension are pruned.
+* __Hierarchy__ - This dimension type is used for cuboid pruning: if dimensions A, B, C form a "hierarchy" relation, then only the combinations with A, AB or ABC shall remain.
+* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK from the fact table. These dimensions are DERIVED and don't participate in cuboid generation.
+![](/images/docs/concepts/Dimension.png)
+
+* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is hard to calculate, so an approximate algorithm - [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) - is introduced, which keeps the error rate at a low level.
+* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int or bigint columns are supported.
+* __Top N__ - For example, with this measure type, the user can easily get the specified number of top sellers/buyers, etc. 
+![](/images/docs/concepts/Measure.png)
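Why the precise variant is restricted to integer columns can be illustrated with plain Python sets standing in for the bitmaps; this is a toy sketch of the idea, not Kylin's RoaringBitmap implementation.

```python
# Each integer value maps to one bit position, so per-segment bitmaps can be
# OR-merged across segments without ever double-counting a value.
seg1 = {1001, 1002, 1003}   # distinct user ids seen in segment 1
seg2 = {1002, 1003, 1004}   # distinct user ids seen in segment 2

merged = seg1 | seg2        # set union == bitwise OR of the two bitmaps
print(len(merged))          # exact distinct count across both segments: 4
```

A string column has no natural bit position, which is why approximate HyperLogLog is used there instead.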
+
+## CUBE ACTIONS
+* __BUILD__ - Given an interval of the partition column, this action builds a new cube segment.
+* __REFRESH__ - This action rebuilds a cube segment in a certain partition period, which is used when the source data for that period has changed.
+* __MERGE__ - This action merges multiple continuous cube segments into a single one. It can be automated with the auto-merge settings in the cube descriptor.
+* __PURGE__ - This clears the segments under a cube instance. It only updates metadata and won't delete the cube data from HBase.
+![](/images/docs/concepts/CubeAction.png)
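The MERGE action's contract can be sketched as follows; the dates and the contiguity check are illustrative assumptions, not Kylin's code.

```python
def merge_segments(segments):
    """Collapse contiguous (start, end) segments into one covering range."""
    # Merge only applies to continuous segments: each start must
    # equal the previous segment's end.
    assert all(a[1] == b[0] for a, b in zip(segments, segments[1:]))
    return (segments[0][0], segments[-1][1])

weekly = [("2017-01-01", "2017-01-08"),
          ("2017-01-08", "2017-01-15"),
          ("2017-01-15", "2017-01-22")]
print(merge_segments(weekly))  # ('2017-01-01', '2017-01-22')
```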
+
+## JOB STATUS
+* __NEW__ - This denotes that a job has just been created.
+* __PENDING__ - This denotes that a job is paused by the job scheduler and waiting for resources.
+* __RUNNING__ - This denotes that a job is in progress.
+* __FINISHED__ - This denotes that a job has successfully finished.
+* __ERROR__ - This denotes that a job was aborted with errors.
+* __DISCARDED__ - This denotes that a job was cancelled by the end user.
+![](/images/docs/concepts/Job.png)
+
+## JOB ACTION
+* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
+* __DISCARD__ - No matter what the status of a job is, the user can end it and release resources with the DISCARD action.
+![](/images/docs/concepts/JobAction.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/gettingstarted/events.md
----------------------------------------------------------------------
diff --git a/website/_docs20/gettingstarted/events.md b/website/_docs20/gettingstarted/events.md
new file mode 100644
index 0000000..db72c61
--- /dev/null
+++ b/website/_docs20/gettingstarted/events.md
@@ -0,0 +1,24 @@
+---
+layout: docs20
+title:  "Events and Conferences"
+categories: gettingstarted
+permalink: /docs20/gettingstarted/events.html
+---
+
+__Conferences__
+
+* [The Evolution of Apache Kylin: Realtime and Plugin Architecture in Kylin](https://www.youtube.com/watch?v=n74zvLmIgF0)([slides](http://www.slideshare.net/YangLi43/apache-kylin-15-updates)) by [Li Yang](https://github.com/liyang-gmt8), at [Hadoop Summit 2016 Dublin](http://hadoopsummit.org/dublin/agenda/), Ireland, 2016-04-14
+* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
+* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015 in San Jose, US, 2015-06-09
+* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
+* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
+* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
+* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
+* [Apache Kylin \u2013 Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
+* [Apache Kylin \uff0d Hadoop \u4e0a\u7684\u5927\u89c4\u6a21\u8054\u673a\u5206\u6790\u5e73\u53f0](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
+* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
+
+__Meetup__
+
+* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/gettingstarted/faq.md
----------------------------------------------------------------------
diff --git a/website/_docs20/gettingstarted/faq.md b/website/_docs20/gettingstarted/faq.md
new file mode 100644
index 0000000..d1455b4
--- /dev/null
+++ b/website/_docs20/gettingstarted/faq.md
@@ -0,0 +1,119 @@
+---
+layout: docs20
+title:  "FAQ"
+categories: gettingstarted
+permalink: /docs20/gettingstarted/faq.html
+since: v0.6.x
+---
+
+#### 1. "bin/find-hive-dependency.sh" can locate hive/hcat jars locally, but Kylin reports an error like "java.lang.NoClassDefFoundError: org/apache/hive/hcatalog/mapreduce/HCatInputFormat"
+
+  * Kylin needs many dependent jars (hadoop/hive/hcat/hbase/kafka) on the classpath to work, but it doesn't ship them. It seeks these jars on your local machine by running commands like `hbase classpath`, `hive -e set`, etc. The paths of the jars found are appended to the environment variable *HBASE_CLASSPATH* (Kylin starts up via the `hbase` shell command, which reads this variable). But in some Hadoop distributions (like EMR 5.0), the `hbase` shell doesn't keep the original `HBASE_CLASSPATH` value, which causes the "NoClassDefFoundError".
+
+  * To fix this, find the hbase shell script (in the hbase/bin folder), search for *HBASE_CLASSPATH*, and check whether it overwrites the value like:
+
+  {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*
+  {% endhighlight %}
+
+  * If so, change it to keep the original value like:
+
+   {% highlight Groff markup %}
+  export HBASE_CLASSPATH=$HADOOP_CONF:$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*:$HBASE_CLASSPATH
+  {% endhighlight %}
+
+#### 2. Get "java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 5220674" in "Build Dimension Dictionary" step
+
+  * Kylin uses "Dictionary" encoding to encode/decode the dimension values (check [this blog](/blog/2015/08/13/kylin-dictionary/)); Usually a dimension's cardinality is less than millions, so the "Dict" encoding is good to use. As dictionary need be persisted and loaded into memory, if a dimension's cardinality is very high, the memory footprint will be tremendous, so Kylin add a check on this. If you see this error, suggest to identify the UHC dimension first and then re-evaluate the design (whether need to make that as dimension?). If must keep it, you can by-pass this error with couple ways: 1) change to use other encoding (like `fixed_length`, `integer`) 2) or set a bigger value for `kylin.dictionary.max.cardinality` in `conf/kylin.properties`.
+
+#### 3. Build cube failed due to "error check status"
+
+  * Check if `kylin.log` contains *yarn.resourcemanager.webapp.address:http://0.0.0.0:8088* and *java.net.ConnectException: Connection refused*
+  * If yes, then the problem is that the resource manager address is not available in yarn-site.xml
+  * A workaround is to update `kylin.properties` and set `kylin.job.yarn.app.rest.check.status.url=http://YOUR_RM_NODE:8088/ws/v1/cluster/apps/${job_id}?anonymous=true`
+
+#### 4. HBase cannot get master address from ZooKeeper on Hortonworks Sandbox
+   
+  * By default Hortonworks disables HBase; you'll have to start HBase from the Ambari homepage first.
+
+#### 5. Map Reduce Job information cannot display on Hortonworks Sandbox
+   
+  * Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
+
+#### 6. How to Install Kylin on CDH 5.2 or Hadoop 2.5.x
+
+  * Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
+
+  {% highlight Groff markup %}
+  I was able to deploy Kylin with following option in POM.
+  <hadoop2.version>2.5.0</hadoop2.version>
+  <yarn.version>2.5.0</yarn.version>
+  <hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
+  <zookeeper.version>3.4.5</zookeeper.version>
+  <hive.version>0.13.1</hive.version>
+  My Cluster is running on Cloudera Distribution CDH 5.2.0.
+  {% endhighlight %}
+
+
+#### 7. SUM(field) returns a negative result while all the numbers in this field are > 0
+  * If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the range of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need to be rebuilt). Keep in mind: always declare an integer column as BIGINT in Hive if it will be used as a measure in Kylin. See Hive numeric types: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes)
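The wrap-around can be reproduced outside Hive. Below is a minimal Python sketch (not Kylin or Hive code) showing how a correct positive total goes negative once it is truncated back to a signed 32-bit integer:

```python
# Sketch: why SUM over an INT column can return a negative number.
# The accumulator is kept in the column's 32-bit type, so a correct
# total exceeding 2^31 - 1 wraps around when cast back to INT.

def cast_to_int32(value):
    """Truncate a Python int to a signed 32-bit value, like a SQL INT cast."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value >= 0x80000000 else value

row_values = [1_500_000_000, 1_500_000_000]  # all positive
true_total = sum(row_values)                 # 3_000_000_000, fits in BIGINT
int32_total = cast_to_int32(true_total)      # wraps to a negative number

print(true_total, int32_total)               # 3000000000 -1294967296
```

Declaring the column as BIGINT keeps the accumulator in 64 bits, which is why the ALTER TABLE workaround above fixes the symptom.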
+
+#### 8. Why does Kylin need to extract the distinct columns from the fact table before building a cube?
+  * Kylin uses a dictionary to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values of each column.
+
+#### 9. Why does Kylin calculate the Hive table cardinality?
+  * The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
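To see why cardinality drives cube size, consider a rough upper bound: a full cuboid has at most as many rows as the product of its dimension cardinalities. A small illustrative sketch (the cardinalities are made up):

```python
# Rough upper bound on a cuboid's row count: the product of the
# dimension cardinalities. Real counts are lower, but scale the same way.
from math import prod

low_card = [12, 50, 100]    # e.g. month, category, region
uhc = [12, 50, 1_000_000]   # swap region for a high-cardinality id

print(prod(low_card))       # at most 60,000 rows
print(prod(uhc))            # at most 600,000,000 rows -- the cube explodes
```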
+
+#### 10. How to add new user or change the default password?
+  * Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
+
+   {% highlight Groff markup %}
+   ${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+   {% endhighlight %}
+
+  * The password hashes for the pre-defined test users can be found in the "sandbox,testing" profile part. To change the default password, generate a new hash and update it there; please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
+  * When you deploy Kylin for more users, switching to LDAP authentication is recommended.
+
+#### 11. Using a sub-query for unsupported SQL
+
+{% highlight Groff markup %}
+Original SQL:
+select fact.slr_sgmt,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from ih_daily_fact fact
+inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+group by fact.slr_sgmt
+{% endhighlight %}
+
+{% highlight Groff markup %}
+Using a sub-query:
+select a.slr_sgmt,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then a.gmv else 0 end) as W36,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then a.gmv else 0 end) as W35
+from (
+    select fact.slr_sgmt as slr_sgmt,
+    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
+    sum(gmv) as gmv
+    from ih_daily_fact fact
+    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
+) a
+group by a.slr_sgmt
+{% endhighlight %}
+
+#### 12. Building Kylin meets NPM errors (users in mainland China, please pay special attention to this issue)
+
+  * Please add proxy for your NPM:  
+  `npm config set proxy http://YOUR_PROXY_IP`
+
+  * Please update your local NPM repository to use a mirror of npmjs.org, such as the Taobao NPM mirror:  
+  [http://npm.taobao.org](http://npm.taobao.org)
+
+#### 13. Failed to run BuildCubeWithEngineTest, saying failed to connect to HBase while HBase is active
+  * You may get this error the first time you run an HBase client. Check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/gettingstarted/terminology.md
----------------------------------------------------------------------
diff --git a/website/_docs20/gettingstarted/terminology.md b/website/_docs20/gettingstarted/terminology.md
new file mode 100644
index 0000000..5d7ecf6
--- /dev/null
+++ b/website/_docs20/gettingstarted/terminology.md
@@ -0,0 +1,25 @@
+---
+layout: docs20
+title:  "Terminology"
+categories: gettingstarted
+permalink: /docs20/gettingstarted/terminology.html
+since: v0.5.x
+---
+ 
+
+Here are some domain terms we are using in Apache Kylin; please check them for your reference.   
+They are basic knowledge of Apache Kylin and will also help you understand the concepts, terms, and theory of Data Warehouse and Business Intelligence for analytics. 
+
+* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
+* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
+* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
+* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
+* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
+* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
+* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
+* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
+* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
+* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_backup_metadata.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_backup_metadata.md b/website/_docs20/howto/howto_backup_metadata.md
new file mode 100644
index 0000000..f742e01
--- /dev/null
+++ b/website/_docs20/howto/howto_backup_metadata.md
@@ -0,0 +1,60 @@
+---
+layout: docs20
+title:  Backup Metadata
+categories: howto
+permalink: /docs20/howto/howto_backup_metadata.html
+---
+
+Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties) you will find such a line:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+This indicates that the metadata will be saved in an HTable called `kylin_metadata`. You can scan this HTable in the hbase shell to check it out.
+
+## Backup Metadata Store with binary package
+
+Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
+In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, go to KYLIN_HOME and run:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
+
+## Restore Metadata Store with binary package
+
+In case you find your metadata store messed up, and you want to restore to a previous backup:
+
+Firstly, reset the metadata store (this will clean everything in the Kylin metadata store in HBase, so make sure you have a backup):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+Then upload the backup metadata to Kylin's metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## Backup/restore metadata in development env (available since 0.7.3)
+
+When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases at dev machine. It would be troublesome if you always have to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally at your dev machine. Follow the Usage information and run it in your IDE.
+
+## Cleanup unused resources from Metadata Store (available since 0.7.3)
+As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space. You can run a command to find and clean them up from the metadata store:
+
+Firstly, run a check; this is safe as it will not change anything:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+The resources that can be dropped will be listed.
+
+Next, add the "--delete true" parameter to clean up those resources; before doing this, make sure you have backed up the metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_build_cube_with_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_build_cube_with_restapi.md b/website/_docs20/howto/howto_build_cube_with_restapi.md
new file mode 100644
index 0000000..42df9cd
--- /dev/null
+++ b/website/_docs20/howto/howto_build_cube_with_restapi.md
@@ -0,0 +1,53 @@
+---
+layout: docs20
+title:  Build Cube with RESTful API
+categories: howto
+permalink: /docs20/howto/howto_build_cube_with_restapi.html
+---
+
+### 1.	Authentication
+*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
+*   Add an `Authorization` header to the first request for authentication.
+*   Or you can do a specific request: `POST http://localhost:7070/kylin/api/user/authentication`
+*   Once authenticated, the client can make subsequent requests with cookies.
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+    
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2.	Get the details of a cube. 
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   The client can find the cube segment date ranges in the returned cube detail.
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+### 3.	Then submit a build job for the cube. 
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   For the PUT request body details, please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
+    *   `startTime` and `endTime` should be UTC timestamps.
+    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment. `MERGE` is for merging multiple existing segments into one bigger segment.
+*   This method returns the newly created job instance, whose uuid is the unique id used to track the job status.
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
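The same call can be scripted. Below is a hedged sketch using only the Python standard library; `ADMIN`/`KYLIN` and `localhost:7070` are the sandbox defaults shown on this page, so adjust them for your deployment:

```python
# Sketch: submit a cube build job via Kylin's REST API (stdlib only).
import base64
import json
import urllib.request

def build_rebuild_request(cube_name, start_time, end_time, build_type="BUILD",
                          host="http://localhost:7070",
                          user="ADMIN", password="KYLIN"):
    """Prepare the PUT /kylin/api/cubes/{cube_name}/rebuild request."""
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    body = json.dumps({"startTime": start_time,
                       "endTime": end_time,
                       "buildType": build_type}).encode()
    return urllib.request.Request(
        url="{}/kylin/api/cubes/{}/rebuild".format(host, cube_name),
        data=body, method="PUT",
        headers={"Authorization": "Basic " + token,
                 "Content-Type": "application/json;charset=UTF-8"})

req = build_rebuild_request("test_kylin_cube_with_slr", 0, 1388563200000)
# urllib.request.urlopen(req) would submit the job; the JSON response
# contains the job "uuid" used to track its status in the next step.
```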
+
+### 4.	Track the job status. 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   The returned `job_status` represents the current status of the job.
+
+### 5.	If the job has errors, you can resume it. 
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_cleanup_storage.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_cleanup_storage.md b/website/_docs20/howto/howto_cleanup_storage.md
new file mode 100644
index 0000000..badede1
--- /dev/null
+++ b/website/_docs20/howto/howto_cleanup_storage.md
@@ -0,0 +1,22 @@
+---
+layout: docs20
+title:  Cleanup Storage (HDFS & HBase)
+categories: howto
+permalink: /docs20/howto/howto_cleanup_storage.html
+---
+
+Kylin generates intermediate files in HDFS during cube building. Besides, when you purge/drop/merge cubes, some HBase tables may be left in HBase and will no longer be queried. Although Kylin has started to do some 
+automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
+
+Steps:
+1. Check which resources can be cleaned up; this will not remove anything:
+{% highlight Groff markup %}
+export KYLIN_HOME=/path/to/kylin_home
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
+{% endhighlight %}
+2. You can pick one or two resources to check whether they are indeed no longer referenced; then add the "--delete true" option to start the cleanup:
+{% highlight Groff markup %}
+${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
+{% endhighlight %}
+When it finishes, the intermediate HDFS locations and HTables should have been dropped.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_jdbc.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_jdbc.md b/website/_docs20/howto/howto_jdbc.md
new file mode 100644
index 0000000..9e6deeb
--- /dev/null
+++ b/website/_docs20/howto/howto_jdbc.md
@@ -0,0 +1,92 @@
+---
+layout: docs20
+title:  Use JDBC Driver
+categories: howto
+permalink: /docs20/howto/howto_jdbc.html
+---
+
+### Authentication
+
+###### Built on the Apache Kylin authentication RESTful service. Supported parameters:
+* user : username 
+* password : password
+* ssl : true/false. Default is false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported prepared statement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports metadata list methods:
+list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_ldap_and_sso.md b/website/_docs20/howto/howto_ldap_and_sso.md
new file mode 100644
index 0000000..8085f39
--- /dev/null
+++ b/website/_docs20/howto/howto_ldap_and_sso.md
@@ -0,0 +1,128 @@
+---
+layout: docs20
+title: Enable Security with LDAP and SSO
+categories: howto
+permalink: /docs20/howto/howto_ldap_and_sso.html
+---
+
+## Enable LDAP authentication
+
+Kylin supports LDAP authentication for enterprise or production deployments. This is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, like the LDAP server URL, username/password, and search patterns.
+
+#### Configure LDAP server info
+
+Firstly, provide the LDAP URL, plus a username/password if the LDAP server is secured. The password in kylin.properties needs to be encrypted; you can run the following command to get the encrypted value (please note, the password's length should be less than 16 characters, see [KYLIN-2416](https://issues.apache.org/jira/browse/KYLIN-2416)):
+
+```
+cd $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/lib
+java -classpath kylin-server-base-1.6.0.jar:spring-beans-3.2.17.RELEASE.jar:spring-core-3.2.17.RELEASE.jar:commons-codec-1.7.jar org.apache.kylin.rest.security.PasswordPlaceholderConfigurer AES <your_password>
+```
+
+Configure them in conf/kylin.properties:
+
+```
+ldap.server=ldap://<your_ldap_host>:<port>
+ldap.username=<your_user_name>
+ldap.password=<your_password_encrypted>
+```
+
+Secondly, provide the user search patterns; these are determined by your LDAP design. Here is just a sample:
+
+```
+ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
+ldap.user.searchPattern=(&(cn={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
+ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```
+
+If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
+
+### Configure the administrator group and default role
+
+To map an LDAP group to the admin group in Kylin, set "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
+
+```
+acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
+acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
+```
+
+The "acl.defaultRole" is a list of the default roles that grant to everyone, keep it as-is.
+
+#### Enable LDAP
+
+Set "kylin.security.profile=ldap" in conf/kylin.properties, then restart Kylin server.
+
+## Enable SSO authentication
+
+From v1.5, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
+
+Before trying this, you should have successfully enabled LDAP and managed users with it, as the SSO server may only do authentication; Kylin needs to search LDAP for the user's detailed information.
+
+### Generate IDP metadata xml
+Contact your IDP (identity provider), asking it to generate the SSO metadata file; usually you need to provide three pieces of information:
+
+  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
+  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
+  3. Public certificate of the Kylin server; the SSO server will encrypt the message with it.
+
+### Generate JKS keystore for Kylin
+As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
+
+Assume kylin.crt is the public certificate file and kylin.key is the private key file. Firstly create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
+
+```
+$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
+Enter Export Password: <export_pwd>
+Verifying - Enter Export Password: <export_pwd>
+
+
+$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
+
+Enter destination keystore password:  changeit
+Re-enter new password: changeit
+```
+
+This puts the keys into "samlKeystore.jks" with the alias "kylin";
+
+### Enable Higher Ciphers
+
+Make sure your environment is ready to handle higher-level crypto keys: you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, then copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
+
+### Deploy IDP xml file and keystore to Kylin
+
+The IDP metadata and keystore files need to be deployed in the Kylin web app's classpath at $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes: 
+	
+  1. Name the IDP file sso_metadata.xml and copy it to Kylin's classpath;
+  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
+  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
+
+```
+<!-- Central storage of cryptographic keys -->
+<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
+	<constructor-arg value="classpath:samlKeystore.jks"/>
+	<constructor-arg type="java.lang.String" value="changeit"/>
+	<constructor-arg>
+		<map>
+			<entry key="kylin" value="changeit"/>
+		</map>
+	</constructor-arg>
+	<constructor-arg type="java.lang.String" value="kylin"/>
+</bean>
+
+```
+
+### Other configurations
+In conf/kylin.properties, add the following properties with your server information:
+
+```
+saml.metadata.entityBaseURL=https://host-name/kylin
+saml.context.scheme=https
+saml.context.serverName=host-name
+saml.context.serverPort=443
+saml.context.contextPath=/kylin
+```
+
+Please note, Kylin assumes that the SAML message contains an "email" attribute representing the login user, and the name before the @ will be used to search LDAP. 
+
+### Enable SSO
+Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_optimize_build.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_optimize_build.md b/website/_docs20/howto/howto_optimize_build.md
new file mode 100644
index 0000000..8b1ff65
--- /dev/null
+++ b/website/_docs20/howto/howto_optimize_build.md
@@ -0,0 +1,190 @@
+---
+layout: docs20
+title:  Optimize Cube Build
+categories: howto
+permalink: /docs20/howto/howto_optimize_build.html
+---
+
+Kylin decomposes a cube build task into several steps and then executes them in sequence. These steps include Hive operations, MapReduce jobs, and other types of jobs. When you have many cubes to build daily, you definitely want to speed up this process. Here are some practices you probably want to know; they are organized in the same order as the step sequence.
+
+
+
+## Create Intermediate Flat Hive Table
+
+This step extracts data from the source Hive tables (with all tables joined) and inserts it into an intermediate flat table. If the cube is partitioned, Kylin will add a time condition so that only the data in the range is fetched. You can check the related Hive command in the log of this step, e.g.: 
+
+```
+hive -e "USE default;
+DROP TABLE IF EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34;
+
+CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34
+(AIRLINE_FLIGHTDATE date,AIRLINE_YEAR int,AIRLINE_QUARTER int,...,AIRLINE_ARRDELAYMINUTES int)
+STORED AS SEQUENCEFILE
+LOCATION 'hdfs:///kylin/kylin200instance/kylin-0a8d71e8-df77-495f-b501-03c06f785b6c/kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34';
+
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT
+AIRLINE.FLIGHTDATE
+,AIRLINE.YEAR
+,AIRLINE.QUARTER
+,...
+,AIRLINE.ARRDELAYMINUTES
+FROM AIRLINE.AIRLINE as AIRLINE
+WHERE (AIRLINE.FLIGHTDATE >= '1987-10-01' AND AIRLINE.FLIGHTDATE < '2017-01-01');
+"
+
+```
+
+Kylin applies the configuration in conf/kylin\_hive\_conf.xml while the Hive commands run, for instance, using less replication and enabling Hive's mapper-side join. If needed, you can add other configurations suitable for your cluster.
+
+If the cube's partition column ("FLIGHTDATE" in this case) is the same as the Hive table's partition column, then filtering on it will let Hive smartly skip the non-matched partitions. So it is highly recommended to use the Hive table's partition column (if it is a date column) as the cube's partition column. This is almost required for very large tables; otherwise Hive has to scan all files each time in this step, which takes terribly long.
+
+If your Hive enables file merge, you can disable it in "conf/kylin\_hive\_conf.xml", as Kylin has its own way to merge files (in the next step): 
+
+    <property>
+        <name>hive.merge.mapfiles</name>
+        <value>false</value>
+        <description>Disable Hive's auto merge</description>
+    </property>
+
+
+## Redistribute intermediate table
+
+After the previous step, Hive generates the data files in an HDFS folder: some files are large while some are small or even empty. The imbalanced file distribution would lead the subsequent MR jobs to be imbalanced as well: some mappers finish quickly while others are very slow. To balance them, Kylin adds this step to "redistribute" the data; here is a sample output:
+
+```
+total input rows = 159869711
+expected input rows per mapper = 1000000
+num reducers for RedistributeFlatHiveTableStep = 160
+
+```
+
+
+Redistribute table, cmd: 
+
+```
+hive -e "USE default;
+SET dfs.replication=2;
+SET hive.exec.compress.output=true;
+SET hive.auto.convert.join.noconditionaltask=true;
+SET hive.auto.convert.join.noconditionaltask.size=100000000;
+SET mapreduce.job.split.metainfo.maxsize=-1;
+set mapreduce.job.reduces=160;
+set hive.merge.mapredfiles=false;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY RAND();
+"
+
+```
+
+
+
+Firstly, Kylin gets the row count of this intermediate table; then, based on the row count, it calculates the number of files needed to redistribute the data. By default, Kylin allocates one file per 1 million rows. In this sample, there are about 160 million rows, so 160 reducers are used and each reducer writes 1 file. In the following MR steps over this table, Hadoop will start as many mappers as there are files to process (usually 1 million rows' data size is smaller than an HDFS block size). If your daily data scale isn't so large, or your Hadoop cluster has enough resources, you may want more concurrency. Setting "kylin.job.mapreduce.mapper.input.rows" in "conf/kylin.properties" to a smaller value will achieve that, e.g.:
+
+`kylin.job.mapreduce.mapper.input.rows=500000`
+
+
+Secondly, Kylin runs an *"INSERT OVERWRITE TABLE .... DISTRIBUTE BY "* HiveQL statement to distribute the rows among a specified number of reducers.
+
+In most cases, Kylin asks Hive to randomly distribute the rows among the reducers, which produces files of very similar size. The distribute clause is "DISTRIBUTE BY RAND()".
+
+If your Cube has specified a "shard by" dimension (in the Cube's "Advanced Setting" page), which is a high-cardinality column (like "USER\_ID"), Kylin will ask Hive to redistribute data by that column's value. Then the rows that share the same value in this column will go to the same file. This is much better than "by random", because the data is not only redistributed but also pre-categorized without additional cost, which benefits the subsequent Cube build process. Under a typical scenario, this optimization can cut 40% off the build time. In this case the distribute clause will be "DISTRIBUTE BY USER_ID":
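+
+For illustration only (reusing the intermediate table name from the sample above, with a hypothetical "USER_ID" shard-by column), the generated statement would look like:
+
+```
+hive -e "USE default;
+SET mapreduce.job.reduces=160;
+
+INSERT OVERWRITE TABLE kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 SELECT * FROM kylin_intermediate_airline_cube_v3610f668a3cdb437e8373c034430f6c34 DISTRIBUTE BY USER_ID;
+"
+```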
+
+**Please note:** 1) The "shard by" column should be a high-cardinality dimension column, and it should appear in many cuboids (not just in a few of them). Using it to distribute the data properly gives an even distribution in every time range; otherwise it will cause data skew, which slows down the build. Typical good candidates are "USER\_ID", "SELLER\_ID", "PRODUCT", "CELL\_NUMBER", and so forth, whose cardinality is higher than one thousand (and should be much larger than the reducer number). 2) Using "shard by" has other advantages in Cube storage, but that is out of this doc's scope.
+
+
+
+## Extract Fact Table Distinct Columns
+
+In this step Kylin runs an MR job to fetch the distinct values of the dimensions that use dictionary encoding.
+
+Actually this step does more: it collects the Cube statistics by using HyperLogLog counters to estimate the row count of each Cuboid. If you find that the mappers work incredibly slowly, it usually indicates that the Cube design is too complex; please check [optimize cube design](howto_optimize_cubes.html) to make the Cube thinner. If the reducers get an OutOfMemory error, it indicates that the Cuboid combinations explode or the default YARN memory allocation cannot meet the demand. If this step cannot finish in a reasonable time, you can give up and revisit the design, as the real build will take even longer.
+
+You can reduce the sampling percentage ("kylin.job.cubing.inmem.sampling.percent" in kylin.properties) to accelerate this step, but this may not help much and will impact the accuracy of the Cube statistics, so we don't recommend it.
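+
+If you do decide to lower it, it is a one-line change in "conf/kylin.properties" (the value below is illustrative):
+
+`kylin.job.cubing.inmem.sampling.percent=50`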
+
+
+
+## Build Dimension Dictionary
+
+With the distinct values fetched in the previous step, Kylin builds dictionaries in memory (in the next version this will be moved to MR). Usually this step is fast, but if the value set is large, Kylin may report an error like "Too high cardinality is not suitable for dictionary". For such ultra-high-cardinality (UHC) columns, please use another encoding method, such as "fixed_length" or "integer".
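+
+The encoding is set per column in the Cube's rowkey settings; in the Cube desc JSON this shows up roughly as below (a hypothetical sketch, column names and lengths are examples only):
+
+```
+"rowkey_columns": [
+  { "column": "USER_ID", "encoding": "integer:8" },
+  { "column": "SESSION_ID", "encoding": "fixed_length:24" }
+]
+```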
+
+
+
+## Save Cuboid Statistics and Create HTable
+
+These two steps are lightweight and fast.
+
+
+
+## Build Base Cuboid 
+
+This step builds the base cuboid from the intermediate table; it is the first round of MR in the "by-layer" cubing algorithm. The mapper number equals the reducer number of step 2; the reducer number is estimated from the cube statistics: by default it uses 1 reducer for every 500MB of output. If you observe that the reducer number is small, you can set "kylin.job.mapreduce.default.reduce.input.mb" in kylin.properties to a smaller value to get more resources, e.g.: `kylin.job.mapreduce.default.reduce.input.mb=200`
+
+
+## Build N-Dimension Cuboid 
+
+These steps are the "by-layer" cubing process; each step uses the output of the previous step as its input, and then cuts off one dimension to aggregate into one child cuboid. For example, from cuboid ABCD, cutting off A gets BCD, cutting off B gets ACD, etc.
+
+Some cuboids can be aggregated from more than one parent cuboid; in this case, Kylin selects the minimal parent cuboid. For example, AB can be generated from ABC (id: 1110) or ABD (id: 1101), so ABD is used as its id is smaller than ABC's. Based on this, if D's cardinality is small, the aggregation will be cost-efficient. So, when you design the Cube rowkey sequence, please remember to put low-cardinality dimensions at the tail. This benefits not only the Cube build but also Cube queries, as post-aggregation follows the same rule.
+
+Usually the build from N-D to (N/2)-D is slow, because it is the cuboid explosion process: N-D has 1 cuboid, (N-1)-D has N cuboids, (N-2)-D has N*(N-1)/2 cuboids, etc. After the (N/2)-D step, the build gradually gets faster.
+
+
+
+## Build Cube
+
+This step uses a new algorithm to build the Cube: "by-split" cubing (also called "in-mem" cubing). It uses one round of MR to calculate all the cuboids, but it requests more memory than a normal job. The "conf/kylin\_job\_conf\_inmem.xml" file is made for this step. By default it requests 3GB of memory for each mapper. If your cluster has enough memory, you can allocate more in "conf/kylin\_job\_conf\_inmem.xml" so it will hold as much data as possible in memory and gain better performance, e.g.:
+
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>6144</value>
+        <description></description>
+    </property>
+    
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx5632m</value>
+        <description></description>
+    </property>
+
+
+Please note that Kylin automatically selects the best algorithm based on the data distribution (obtained from the Cube statistics). The steps of the algorithm that is not selected will be skipped. You don't need to select the algorithm explicitly.
+
+
+
+## Convert Cuboid Data to HFile
+
+This step starts an MR job to convert the Cuboid files (sequence file format) into HBase's HFile format. Kylin calculates the HBase region number from the Cube statistics, by default 1 region per 5GB. The more regions there are, the more reducers are utilized. If you observe that the reducer number is small and the performance is poor, you can set the following parameters in "conf/kylin.properties" to smaller values, as follows:
+
+```
+kylin.hbase.region.cut=2
+kylin.hbase.hfile.size.gb=1
+```
+
+If you're not sure what size a region should be, contact your HBase administrator. 
+
+
+## Load HFile to HBase Table
+
+This step uses the HBase API to load the HFiles to the region servers; it is lightweight and fast.
+
+
+
+## Update Cube Info
+
+After loading data into HBase, Kylin marks this Cube segment as ready in metadata. This step is very fast.
+
+
+
+## Cleanup
+
+Drop the intermediate table from Hive. This step doesn't block anything, as the segment has been marked ready in the previous step. If this step fails, there is no need to worry; the garbage can be collected later when Kylin executes the [StorageCleanupJob](howto_cleanup_storage.html).
+
+
+## Summary
+There are also many other methods to boost performance. If you have practices to share, you are welcome to discuss them on [dev@kylin.apache.org](mailto:dev@kylin.apache.org).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_optimize_cubes.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_optimize_cubes.md b/website/_docs20/howto/howto_optimize_cubes.md
new file mode 100644
index 0000000..171149d
--- /dev/null
+++ b/website/_docs20/howto/howto_optimize_cubes.md
@@ -0,0 +1,212 @@
+---
+layout: docs20
+title:  Optimize Cube Design
+categories: howto
+permalink: /docs20/howto/howto_optimize_cubes.html
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
+
+If we denote the hierarchy dimensions as H1,H2,H3, typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case of scenario A, where the PK of the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A*, what you need is another optimization called "Derived Columns".
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table that we join with the fact table via "where DimA = DimX". Notice that in Kylin, if you choose the FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB and DimC only.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC,AB,AC,BC,A,B,C
+
+combinations when deriving B from A:
+AC,A,C
+
+At runtime, for a query like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB is expected to answer the query. However, DimB appears in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first; we'll get an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime, and it is what is meant by "at the cost of extra runtime aggregation".

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_update_coprocessor.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_update_coprocessor.md b/website/_docs20/howto/howto_update_coprocessor.md
new file mode 100644
index 0000000..8f83d70
--- /dev/null
+++ b/website/_docs20/howto/howto_update_coprocessor.md
@@ -0,0 +1,14 @@
+---
+layout: docs20
+title:  How to Update HBase Coprocessor
+categories: howto
+permalink: /docs20/howto/howto_update_coprocessor.html
+---
+
+Kylin leverages an HBase coprocessor to optimize query performance. After a new version is released, the RPC protocol may change, so users need to redeploy the coprocessor to the HTables.
+
+There's a CLI tool to update HBase Coprocessor:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI $KYLIN_HOME/lib/kylin-coprocessor-*.jar all
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_upgrade.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_upgrade.md b/website/_docs20/howto/howto_upgrade.md
new file mode 100644
index 0000000..811b6c6
--- /dev/null
+++ b/website/_docs20/howto/howto_upgrade.md
@@ -0,0 +1,66 @@
+---
+layout: docs20
+title:  Upgrade From Old Versions
+categories: howto
+permalink: /docs20/howto/howto_upgrade.html
+since: v1.5.1
+---
+
+Running as a Hadoop client, Apache Kylin's metadata and Cube data are persisted in Hadoop (HBase and HDFS), so the upgrade is relatively easy and users don't need to worry about data loss. The upgrade can be performed in the following steps:
+
+* Download the new Apache Kylin binary package for your Hadoop version from Kylin's download page;
+* Uncompress the new version Kylin package to a new folder, e.g, /usr/local/kylin/apache-kylin-1.6.0/ (directly overwriting the old instance is not recommended);
+* Copy the configuration files (`$KYLIN_HOME/conf/*`) from the old instance (e.g /usr/local/kylin/apache-kylin-1.5.4/) to the new instance's `conf` folder if you have customized configurations; it is recommended to compare and merge, since new parameters might have been introduced. If you have modified the tomcat configuration ($KYLIN_HOME/tomcat/conf/), remember to do the same.
+* Stop the current Kylin instance with `./bin/kylin.sh stop`;
+* Set the `KYLIN_HOME` env variable to the new instance's installation folder. If you have set `KYLIN_HOME` in `~/.bash_profile` or other scripts, remember to update them as well.
+* Start the new Kylin instance with `$KYLIN_HOME/bin/kylin.sh start`; after it starts, log in to the Kylin web UI to check whether your cubes are loaded correctly.
+* [Upgrade coprocessor](howto_update_coprocessor.html) to ensure the HBase region servers use the latest Kylin coprocessor.
+* Verify your SQL queries can be performed successfully.
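+
+The steps above can be sketched as shell commands (paths and versions follow the examples in this guide; adjust them to your environment):
+
+{% highlight Groff markup %}
+# unpack the new version next to the old one
+tar -zxvf apache-kylin-1.6.0-bin.tar.gz -C /usr/local/kylin/
+# copy customized configs, then compare and merge them manually
+cp /usr/local/kylin/apache-kylin-1.5.4/conf/* /usr/local/kylin/apache-kylin-1.6.0/conf/
+# stop the old instance, point KYLIN_HOME at the new one, start it
+/usr/local/kylin/apache-kylin-1.5.4/bin/kylin.sh stop
+export KYLIN_HOME=/usr/local/kylin/apache-kylin-1.6.0
+$KYLIN_HOME/bin/kylin.sh start
+{% endhighlight %}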
+
+Below are version-specific guides:
+
+## Upgrade from v1.5.4 to v1.6.0
+Kylin v1.5.4 and v1.6.0 are compatible in metadata; please follow the common upgrade steps above.
+
+## Upgrade from v1.5.3 to v1.5.4
+Kylin v1.5.3 and v1.5.4 are compatible in metadata; please follow the common upgrade steps above.
+
+## Upgrade from 1.5.2 to v1.5.3
+Kylin v1.5.3 metadata is compatible with v1.5.2, and your cubes don't need to be rebuilt; as usual, some actions need to be performed:
+
+#### 1. Update HBase coprocessor
+The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update them.
+
+#### 2. Update conf/kylin_hive_conf.xml
+From v1.5.3, Kylin no longer needs Hive to merge small files. For users who copied conf/ from a previous version, please remove the "merge" related properties in kylin_hive_conf.xml, including "hive.merge.mapfiles", "hive.merge.mapredfiles", and "hive.merge.size.per.task"; this will save time when extracting data from Hive.
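+
+The properties to remove look like the following in kylin_hive_conf.xml (the same applies to "hive.merge.mapredfiles" and "hive.merge.size.per.task"):
+
+{% highlight Groff markup %}
+<property>
+    <name>hive.merge.mapfiles</name>
+    <value>false</value>
+    <description>Disable Hive's auto merge</description>
+</property>
+{% endhighlight %}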
+
+
+## Upgrade from 1.5.1 to v1.5.2
+Kylin v1.5.2 metadata is compatible with v1.5.1, and your cubes don't need to be upgraded, while some actions need to be performed:
+
+#### 1. Update HBase coprocessor
+The HBase tables for existing cubes need to be updated to the latest coprocessor; follow [this guide](howto_update_coprocessor.html) to update them.
+
+#### 2. Update conf/kylin.properties
+In v1.5.2 several properties are deprecated, and several new ones are added:
+
+Deprecated:
+
+* kylin.hbase.region.cut.small=5
+* kylin.hbase.region.cut.medium=10
+* kylin.hbase.region.cut.large=50
+
+New:
+
+* kylin.hbase.region.cut=5
+* kylin.hbase.hfile.size.gb=2
+
+These new parameters determine how to split HBase regions; to use different sizes, you can overwrite these params at the Cube level.
+
+When copying from the old kylin.properties file, we suggest removing the deprecated ones and adding the new ones.
+
+#### 3. Add conf/kylin\_job\_conf\_inmem.xml
+A new job conf file named "kylin\_job\_conf\_inmem.xml" is added in the "conf" folder. As Kylin 1.5 introduced the "fast cubing" algorithm, which aims to leverage more memory to do the in-mem aggregation, Kylin will use this new conf file for submitting the in-mem cube build job, which requests different memory than a normal job; please update it properly according to your cluster capacity.
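+
+For example, to give each in-mem mapper more memory, adjust these two properties together in kylin\_job\_conf\_inmem.xml (values are illustrative; keep the -Xmx value somewhat below the container size):
+
+{% highlight Groff markup %}
+<property>
+    <name>mapreduce.map.memory.mb</name>
+    <value>6144</value>
+</property>
+
+<property>
+    <name>mapreduce.map.java.opts</name>
+    <value>-Xmx5632m</value>
+</property>
+{% endhighlight %}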
+
+Besides, if you have used separate config files for cubes of different capacities, for example "kylin\_job\_conf\_small.xml", "kylin\_job\_conf\_medium.xml" and "kylin\_job\_conf\_large.xml", please note that they are deprecated now; only "kylin\_job\_conf.xml" and "kylin\_job\_conf\_inmem.xml" will be used for submitting cube jobs. If you have cube-level job configurations (like using a different YARN job queue), you can customize them at the cube level; check [KYLIN-1706](https://issues.apache.org/jira/browse/KYLIN-1706).
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_use_beeline.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_use_beeline.md b/website/_docs20/howto/howto_use_beeline.md
new file mode 100644
index 0000000..1effdca
--- /dev/null
+++ b/website/_docs20/howto/howto_use_beeline.md
@@ -0,0 +1,14 @@
+---
+layout: docs20
+title:  Use Beeline for Hive Commands
+categories: howto
+permalink: /docs20/howto/howto_use_beeline.html
+---
+
+[Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is recommended by many vendors to replace the Hive CLI. By default Kylin uses the Hive CLI to synchronize Hive tables, create flattened intermediate tables, etc. With simple configuration changes you can set Kylin to use Beeline instead.
+
+Edit $KYLIN_HOME/conf/kylin.properties by:
+
+  1. change kylin.hive.client=cli to kylin.hive.client=beeline
+  2. add "kylin.hive.beeline.params", this is where you can specify beeline command parameters, such as username (-n), JDBC URL (-u), etc. There's a sample kylin.hive.beeline.params included in the default kylin.properties, but it is commented out. You can modify the sample based on your real environment.
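+
+For example (the username and JDBC URL below are hypothetical; use your own HiveServer2 endpoint):
+
+```
+kylin.hive.client=beeline
+kylin.hive.beeline.params=-n hive -u 'jdbc:hive2://localhost:10000'
+```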
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_use_distributed_scheduler.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_use_distributed_scheduler.md b/website/_docs20/howto/howto_use_distributed_scheduler.md
new file mode 100644
index 0000000..4cdac8a
--- /dev/null
+++ b/website/_docs20/howto/howto_use_distributed_scheduler.md
@@ -0,0 +1,16 @@
+---
+layout: docs20
+title:  Use distributed job scheduler
+categories: howto
+permalink: /docs20/howto/howto_use_distributed_scheduler.html
+---
+
+Since v2.0, Kylin supports a distributed job scheduler, which is more extensible, available and reliable than the default job scheduler.
+To enable the distributed job scheduler, you need to set or update three configs in kylin.properties:
+
+```
+1. kylin.job.scheduler.default=2
+2. kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
+3. add all job servers and query servers to the kylin.server.cluster-servers
+```
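+
+A concrete kylin.properties sketch of the three configs (hostnames are hypothetical; list every job and query server with its port):
+
+```
+kylin.job.scheduler.default=2
+kylin.job.lock=org.apache.kylin.storage.hbase.util.ZookeeperDistributedJobLock
+kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
+```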


http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/create_cube.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/create_cube.cn.md b/website/_docs20/tutorial/create_cube.cn.md
new file mode 100644
index 0000000..5c28e11
--- /dev/null
+++ b/website/_docs20/tutorial/create_cube.cn.md
@@ -0,0 +1,129 @@
+---
+layout: docs20-cn
+title:  Kylin Cube Creation Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/create_cube.html
+version: v1.2
+since: v0.7.1
+---
+  
+  
+### I. Create a New Project
+1. Go to the `Query` page from the top menu bar, then click `Manage Projects`.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
+
+3. Fill out the following form and click the `submit` button to send the request.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, a notification will show at the bottom.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up a Table
+1. Click `Tables` in the top menu bar, then click the `+ Sync` button to load hive table metadata.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/4 %2Btable.png)
+
+2. Enter the table name and click the `Sync` button to send the request.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+### III. Create a Cube
+First, click `Cubes` in the top menu bar. Then click the `+Cube` button to enter the cube designer page.
+
+![](/images/Kylin-Cube-Creation-Tutorial/6 %2Bcube.png)
+
+**Step 1. Cube Info**
+
+Fill in the basic cube information. Click `Next` to go to the next step.
+
+You can use letters, numbers and "_" to name your cube (note that spaces are not allowed in the name).
+
+![](/images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+
+**Step 2. Dimensions**
+
+1. Set up the fact table.
+
+    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
+
+2. Click the `+Dimension` button to add a new dimension.
+
+    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-%2Bdim.png)
+
+3. Different types of dimensions can be added to a cube. Here we list some of them for your reference.
+
+    * Get dimensions from the fact table.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
+
+    * Get dimensions from a lookup table.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
+
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
+   
+    * Get dimensions from a lookup table with a hierarchy.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
+
+    * Get dimensions from a lookup table with derived dimensions.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
+
+4. Users can edit a dimension after saving it.
+   ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
+
+**Step 3. Measures**
+
+1. Click the `+Measure` button to add a new measure.
+   ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-%2Bmeas.png)
+
+2. There are 5 different types of measures according to their expressions: `SUM`, `MAX`, `MIN`, `COUNT` and `COUNT_DISTINCT`. Please choose the return type carefully; it is related to the error rate of `COUNT(DISTINCT)`.
+   * SUM
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
+
+   * MIN
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
+
+   * MAX
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
+
+   * COUNT
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
+
+   * DISTINCT_COUNT
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
+
+**Step 4. Filter**
+
+This step is optional. You can add some condition filters in `SQL` format.
+
+![](/images/Kylin-Cube-Creation-Tutorial/10 filter.png)
+
+**Step 5. Refresh Setting**
+
+This step is designed for incremental cube builds.
+
+![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
+
+Choose the partition type, partition column and start date.
+
+![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
+
+**Step 6. Advanced Setting**
+
+![](/images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
+
+**Step 7. Overview & Save**
+
+You can review your cube and go back to previous steps to make modifications. Click the `Save` button to complete the cube creation.
+
+![](/images/Kylin-Cube-Creation-Tutorial/13 overview.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/create_cube.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/create_cube.md b/website/_docs20/tutorial/create_cube.md
new file mode 100644
index 0000000..ea2216b
--- /dev/null
+++ b/website/_docs20/tutorial/create_cube.md
@@ -0,0 +1,198 @@
+---
+layout: docs20
+title:  Kylin Cube Creation
+categories: tutorial
+permalink: /docs20/tutorial/create_cube.html
+---
+
+This tutorial will guide you through creating a cube. It requires that you have at least one sample table in Hive; if you don't have one, you can follow this to create some sample data.
+  
+### I. Create a Project
+1. Go to `Query` page in top menu bar, then click `Manage Projects`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/2 +project.png)
+
+3. Enter a project name, e.g, "Tutorial", with a description (optional), then click `submit` button to send the request.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, the project will show in the table.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up Hive Table
+1. Click `Model` in the top bar and then click the `Data Source` tab on the left, which lists all the tables loaded into Kylin; click the `Load Hive Table` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
+
+2. Enter the hive table names, separated with commas, and then click `Sync` to send the request.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+3. [Optional] If you want to browse the hive database to pick tables, click the `Load Hive Table From Tree` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
+
+4. [Optional] Expand the database node, click to select the table to load, and then click `Sync`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-tree.png)
+
+5. A success message will pop up. In the left `Tables` section, the newly loaded table is added. Clicking the table name will expand its columns.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-info.png)
+
+6. In the background, Kylin runs a MapReduce job to calculate the approximate cardinality of the newly synced table. After the job finishes, refresh the web page and click the table name; the cardinality will be shown in the table info.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table-cardinality.png)
+
+
+### III. Create Data Model
+Before creating a cube, you need to define a data model. The data model defines the star schema. One data model can be reused by multiple cubes.
+
+1. Click `Model` in top bar, and then click `Models` tab. Click `+New` button, in the drop-down list select `New Model`.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
+
+2. Enter a name for the model, with an optional description.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
+
+3. In the `Fact Table` box, select the fact table of this data model.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-fact-table.png)
+
+4. [Optional] Click the `Add Lookup Table` button to add a lookup table. Select the table name and join type (inner or left).
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-lookup-table.png)
+
+5. [Optional] Click the `New Join Condition` button, select the FK column of the fact table on the left, and the PK column of the lookup table on the right. Repeat this if there is more than one join column.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-join-condition.png)
+
+6. Click "OK", and repeat steps 4 and 5 to add more lookup tables if needed. When finished, click "Next".
+
+7. The "Dimensions" page allows you to select the columns that will be used as dimensions in the child cubes. Click the `Columns` cell of a table and select the columns from the drop-down list.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-dimensions.png)
+
+8. Click "Next" to go to the "Measures" page, and select the columns that will be used in measures/metrics. Measure columns can only come from the fact table.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-measures.png)
+
+9. Click "Next" to go to the "Settings" page. If the data in the fact table grows by day, select the corresponding date column as the `Partition Date Column` and select the date format; otherwise leave it blank.
+
+10. [Optional] Select `Cube Size`, which is an indicator of the scale of the cube; by default it is `MEDIUM`.
+
+11. [Optional] If some records should be excluded from the cube, such as dirty data, you can input the filter condition in `Filter`.
+
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-partition-column.png)
+
+12. Click `Save` and then select `Yes` to save the data model. After it is created, the data model will be shown in the `Models` list on the left.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
+
+### IV. Create Cube
+After the data model is created, you can start to create a cube.
+
+Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button, and in the drop-down list select `New Cube`.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
+
+
+**Step 1. Cube Info**
+
+Select the data model, enter the cube name; Click `Next` to enter the next step.
+
+You can use letters, numbers and '_' to name your cube (blank space in the name is not allowed). `Notification List` is a list of email addresses which will be notified on cube job success/failure.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+    
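To illustrate the naming rule, here is a small shell sketch (a hypothetical helper, not part of Kylin) that checks whether a proposed cube name uses only the allowed characters:

```shell
# Hypothetical helper (not part of Kylin): check that a cube name contains
# only letters, digits and '_' -- the characters Kylin accepts.
is_valid_cube_name() {
  case "$1" in
    ''|*[!A-Za-z0-9_]*) return 1 ;;  # empty, or contains an illegal character
    *) return 0 ;;
  esac
}

is_valid_cube_name "kylin_sales_cube" && echo "ok"
is_valid_cube_name "my cube" || echo "rejected: blank space not allowed"
```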
+
+**Step 2. Dimensions**
+
+1. Click `Add Dimension`; it pops up two options: "Normal" and "Derived". "Normal" is to add a normal independent dimension column; "Derived" is to add a derived dimension column. Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
+
+2. Click "Normal", then select a dimension column and give it a meaningful name.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-normal.png)
+    
+3. [Optional] Click "Derived" and then pick one or more columns from a lookup table, and give them a meaningful name.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-derived.png)
+
+4. Repeat steps 2 and 3 to add all dimension columns; you can do this in batch for "Normal" dimensions with the `Auto Generator` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-dimension-batch.png)
+
+5. Click "Next" after selecting all dimensions.
+
+**Step 3. Measures**
+
+1. Click the `+Measure` to add a new measure.
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
+
+2. There are six types of measures, according to their expression: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT` and `TOP_N`. Properly select the return type for `COUNT_DISTINCT` and `TOP_N`, as it will impact the cube size.
+   * SUM
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
+
+   * MIN
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
+
+   * MAX
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
+
+   * COUNT
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-count.png)
+
+   * DISTINCT_COUNT
+   This measure has two implementations: 
+   a) approximate implementation with HyperLogLog: select an acceptable error rate; a lower error rate takes more storage.
+   b) precise implementation with bitmap (see limitations in https://issues.apache.org/jira/browse/KYLIN-1186). 
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-distinct.png)
+
+   Please note: distinct count is a very heavy data type; it is slower to build and query compared to other measures.
+
+   * TOP_N
+   The approximate TopN measure pre-calculates the top records in each dimension combination, which provides higher query performance than having no pre-calculation. Two parameters need to be specified here: the first is the column that will be used as the metric for top records (aggregated with SUM and then sorted in descending order); the second is the literal ID, which represents the record, like seller_id.
+
+   Properly select the return type, depending on how many top records you need to inspect: top 10, top 100 or top 1000. 
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
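As a rough conceptual illustration of what TOP_N pre-calculates (the sample data below is made up; this is not Kylin code): sum the metric column per literal ID, sort the sums in descending order, and keep only the top records.

```shell
# Conceptual sketch only: sample rows are "seller_id,amount" pairs.
cat > /tmp/sample_sales.csv <<'EOF'
s1,10
s2,30
s1,25
s3,5
EOF

# Aggregate amount per seller with SUM, sort descending, keep the top 2.
awk -F, '{sum[$1] += $2} END {for (k in sum) print k "," sum[k]}' /tmp/sample_sales.csv \
  | sort -t, -k2 -nr | head -n 2
```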
+
+
+**Step 4. Refresh Setting**
+
+This step is designed for incremental cube build. 
+
+`Auto Merge Time Ranges (days)`: merge the small segments into medium and large segments automatically. If you don't want auto merge, remove the two default ranges.
+
+`Retention Range (days)`: only keep the segments whose data falls within the given number of past days; older segments will be automatically dropped from the head. 0 means this feature is disabled.
+
+`Partition Start Date`: the start date of this cube.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
+
+**Step 5. Advanced Setting**
+
+`Aggregation Groups`: by default Kylin puts all dimensions into one aggregation group; you can create multiple aggregation groups if you know your query patterns well. For the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions", read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
+
+`Rowkeys`: the rowkeys are composed of the dimensions' encoded values. "Dictionary" is the default encoding method; if a dimension is not suitable for a dictionary (e.g., cardinality > 10 million), select "false" and then enter a fixed length for that dimension, usually the max length of that column; if a value is longer than that size, it will be truncated. Please note that without dictionary encoding, the cube size might be much bigger.
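For example, the fixed-length truncation behaves like this (an illustrative shell sketch, not Kylin's actual encoder):

```shell
# Illustrative only: with a fixed length of 10, a longer value is
# truncated to its first 10 characters.
fixed_len=10
value="ABCDEFGHIJKLMNOP"
printf '%s\n' "$value" | cut -c1-"$fixed_len"   # prints: ABCDEFGHIJ
```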
+
+You can drag & drop a dimension column to adjust its position in the rowkey. Put the mandatory dimensions at the beginning, followed by the dimensions that are heavily involved in filters (where conditions). Put high cardinality dimensions ahead of low cardinality dimensions.
+
+
+**Step 6. Overview & Save**
+
+You can review your cube and go back to any previous step to modify it. Click the `Save` button to complete the cube creation.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 overview.png)
+
+Cheers! Now the cube is created; you can go ahead to build and play with it.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_build_job.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_build_job.cn.md b/website/_docs20/tutorial/cube_build_job.cn.md
new file mode 100644
index 0000000..a0b2a6b
--- /dev/null
+++ b/website/_docs20/tutorial/cube_build_job.cn.md
@@ -0,0 +1,66 @@
+---
+layout: docs20-cn
+title:  Kylin Cube Build and Job Monitoring Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/cube_build_job.html
+version: v1.2
+since: v0.7.1
+---
+
+### Cube Build
+First of all, make sure you have permission on the cube you want to build.
+
+1. In the `Cubes` page, click the `Action` drop-down button on the right side of the cube row and select the `Build` operation.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
+
+2. A pop-up window appears after the selection.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
+
+3. Click the `END DATE` input box to select the end date of this incremental cube build.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
+
+4. Click `Submit` to send the request.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
+
+   After the request is submitted successfully, you will see a new job in the `Jobs` page.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
+
+5. To discard this job, click the `Discard` button.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
+
+### Job Monitoring
+In the `Jobs` page, click the job detail button to see the detailed information shown on the right side.
+
+![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+The job detail provides a step-by-step record for tracing a job. You can hover over a step status icon to see its basic status and information.
+
+![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+Click the icon buttons shown on each step to see the details: `Parameters`, `Log`, `MRJob`, `EagleMonitoring`.
+
+* Parameters
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
+
+* Log
+        
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_build_job.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_build_job.md b/website/_docs20/tutorial/cube_build_job.md
new file mode 100644
index 0000000..0810c5b
--- /dev/null
+++ b/website/_docs20/tutorial/cube_build_job.md
@@ -0,0 +1,67 @@
+---
+layout: docs20
+title:  Kylin Cube Build and Job Monitoring
+categories: tutorial
+permalink: /docs20/tutorial/cube_build_job.html
+---
+
+### Cube Build
+First of all, make sure that you have permission on the cube you want to build.
+
+1. In the `Models` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
+
+2. A pop-up window appears after the selection; click the `END DATE` input box to select the end date of this incremental cube build.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
+
+3. Click `Submit` to send the build request. After success, you will see the new job in the `Monitor` page.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 jobs-page.png)
+
+4. The new job is in "pending" status; after a while, it will start to run and you can see the progress by refreshing the web page or clicking the refresh button.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 job-progress.png)
+
+
+5. Wait for the job to finish. In the meantime, if you want to discard it, click the `Actions` -> `Discard` button.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
+
+6. After the job is 100% finished, the cube's status becomes "Ready", which means it is ready to serve SQL queries. In the `Model` tab, find the cube and click the cube name to expand the section; the "HBase" tab lists the cube segments. Each segment has a start/end time, and its underlying HBase table information is also listed.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 cube-segment.png)
+
+If you have more source data, repeat the steps above to build it into the cube.
+
+### Job Monitoring
+In the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+The detail information of a job provides a step-by-step record to trace the job. You can hover over a step status icon to see its basic status and information.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+Click the icon buttons shown on each step to see the details: `Parameters`, `Log`, `MRJob`.
+
+* Parameters
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
+
+* Log
+        
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_spark.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_spark.md b/website/_docs20/tutorial/cube_spark.md
new file mode 100644
index 0000000..5f7893a
--- /dev/null
+++ b/website/_docs20/tutorial/cube_spark.md
@@ -0,0 +1,166 @@
+---
+layout: docs20
+title:  Build Cube with Spark (beta)
+categories: tutorial
+permalink: /docs20/tutorial/cube_spark.html
+---
+Kylin v2.0 introduces the Spark cube engine, which uses Apache Spark to replace MapReduce in the cube build step; you can check [this blog](/blog/2017/02/23/by-layer-spark-cubing/) for an overall picture. This document uses the sample cube to demonstrate how to try the new engine.
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment with Kylin v2.0.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM; the Hadoop components as well as Hive/HBase have already been started. 
+
+## Install Kylin v2.0.0 beta
+
+Download the Kylin v2.0.0 beta for HBase 1.x from Kylin's download page, and then uncompress the tar ball into the */usr/local/* folder:
+
+{% highlight Groff markup %}
+
+wget https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.0.0-beta/apache-kylin-2.0.0-beta-hbase1x.tar.gz -P /tmp
+
+tar -zxvf /tmp/apache-kylin-2.0.0-beta-hbase1x.tar.gz -C /usr/local/
+
+export KYLIN_HOME=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin
+{% endhighlight %}
+
+## Prepare "kylin.env.hadoop-conf-dir"
+
+To run Spark on Yarn, you need to specify the **HADOOP_CONF_DIR** environment variable, which is the directory that contains the (client side) configuration files for Hadoop. In many Hadoop distributions the directory is "/etc/hadoop/conf"; but Kylin needs to access not only HDFS, Yarn and Hive, but also HBase, so the default directory might not have all the necessary files. In this case, you need to create a new directory and then copy or link the client files (core-site.xml, yarn-site.xml, hive-site.xml and hbase-site.xml) there. In HDP 2.4 there is a conflict between hive-tez and Spark, so you need to change the default engine from "tez" to "mr" when copying hive-site.xml for Kylin.
+
+{% highlight Groff markup %}
+
+mkdir $KYLIN_HOME/hadoop-conf
+ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
+ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
+ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
+cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
+vi $KYLIN_HOME/hadoop-conf/hive-site.xml (change "hive.execution.engine" value from "tez" to "mr")
+
+{% endhighlight %}
+
+Now, let Kylin know this directory with property "kylin.env.hadoop-conf-dir" in kylin.properties:
+
+{% highlight Groff markup %}
+kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf
+{% endhighlight %}
+
+If this property isn't set, Kylin will use the directory that "hive-site.xml" is located in; since that folder may have no "hbase-site.xml", you will get an HBase/ZK connection error in Spark.
+
+## Check Spark configuration
+
+Kylin embeds a Spark binary (v1.6.3) in $KYLIN_HOME/spark; all the Spark configurations can be managed in $KYLIN_HOME/conf/kylin.properties with the prefix *"kylin.engine.spark-conf."*. These properties will be extracted and applied when submitting a Spark job; e.g., if you configure "kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf spark.executor.memory=4G" as a parameter when executing "spark-submit".
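The prefix-stripping behavior can be sketched in shell (a rough illustration with a throwaway properties file, not Kylin's actual implementation):

```shell
# Rough sketch of how the prefixed properties map to spark-submit options:
# strip the "kylin.engine.spark-conf." prefix and turn each remaining
# key=value pair into a "--conf" argument.
cat > /tmp/kylin-demo.properties <<'EOF'
kylin.engine.spark-conf.spark.executor.memory=4G
kylin.engine.spark-conf.spark.executor.cores=2
kylin.some.other.property=ignored
EOF

grep '^kylin\.engine\.spark-conf\.' /tmp/kylin-demo.properties \
  | sed 's/^kylin\.engine\.spark-conf\./--conf /'
# -> --conf spark.executor.memory=4G
# -> --conf spark.executor.cores=2
```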
+
+Before you run Spark cubing, it is suggested that you take a look at these configurations and customize them for your cluster. Below are the default configurations, which are also the minimal config for a sandbox (1 executor with 1 GB memory); usually in a normal cluster you need many more executors, each with at least 4 GB memory and 2 cores:
+
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.yarn.queue=default
+kylin.engine.spark-conf.spark.executor.memory=1G
+kylin.engine.spark-conf.spark.executor.cores=2
+kylin.engine.spark-conf.spark.executor.instances=1
+kylin.engine.spark-conf.spark.eventLog.enabled=true
+kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
+kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
+#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
+#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
+
+## uncomment for HDP
+#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+
+{% endhighlight %}
+
+For running on the Hortonworks platform, you need to specify "hdp.version" in the Java options for the Yarn containers, so please uncomment the last three lines in kylin.properties. 
+
+Besides, in order to avoid repeatedly uploading the Spark assembly jar to Yarn, you can manually upload it once and then configure the jar's HDFS location; please note that the HDFS location needs to be a fully qualified name.
+
+{% highlight Groff markup %}
+hadoop fs -mkdir -p /kylin/spark/
+hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
+{% endhighlight %}
+
+After doing that, the config in kylin.properties will be:
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
+kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+{% endhighlight %}
+
+All the "kylin.engine.spark-conf.*" parameters can be overwritten at the Cube or Project level, which gives more flexibility to the user.
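For example, a cube that needs more resources than the defaults might overwrite these properties in its "Configuration Overwrites" page (the values here are purely illustrative):

```properties
kylin.engine.spark-conf.spark.executor.memory=4G
kylin.engine.spark-conf.spark.executor.instances=10
```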
+
+## Create and modify sample cube
+
+Run the sample.sh script to create the sample cube, and then start the Kylin server:
+
+{% highlight Groff markup %}
+
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+
+{% endhighlight %}
+
+After Kylin is started, access the Kylin web GUI, edit the "kylin_sales" cube, and in the "Advanced Setting" page change the "Cube Engine" from "MapReduce" to "Spark (Beta)":
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
+
+Click "Next" to the "Configuration Overwrites" page, click "+Property" to add property "kylin.engine.spark.rdd-partition-cut-mb" with value "100" (reasons below):
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
+
+The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a "TOPN(100)". Their size estimation can be inaccurate when the source data is small: the estimated size is much larger than the real size, which causes many more RDD partitions to be split and slows down the build. 100 is a more reasonable number here. Click "Next" and "Save" to save the cube.
+
+
+## Build Cube with Spark
+
+Click "Build" and select the current date as the build end date. Kylin generates a build job in the "Monitor" page, in which the 7th step is the Spark cubing. The job engine starts to execute the steps in sequence. 
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
+
+When Kylin executes this step, you can monitor the status in the Yarn resource manager. Clicking the "Application Master" link will open the Spark web UI, which shows the progress of each stage and the detailed information.
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
+
+
+After all steps are successfully executed, the cube becomes "Ready" and you can query it as normal.
+
+## Troubleshooting
+
+When you get an error, check "logs/kylin.log" first. It contains the full Spark command that Kylin executes, e.g.:
+
+{% highlight Groff markup %}
+2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] spark.SparkExecutable:121 : cmd:export HADOOP_CONF_DIR=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf && /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  --conf spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar  --conf spark.yarn.queue=default  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf spark.driver.extraJavaOptions=-Dhdp.version=current  --conf spark.master=yarn  --conf spark.executor.extraJavaOptions=-Dhdp.version=current  --conf spark.executor.memory=1G  --conf spark.eventLog.enabled=true  --conf spark.eventLog.dir=hdfs:///kylin/spark-history  --conf spark.executor.cores=2  --conf spark.submit.deployMode=cluster --files /etc/hbase/2.4.0.0-169/0/hbase-site.xml
  --jars /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-client-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-common-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-protocol-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/metrics-core-2.2.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/guava-12.0.1.jar, /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/lib/kylin-job-2.0.0-SNAPSHOT.jar -className org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable kylin_intermediate_kylin_sales_cube_555c4d32_40bb_457d_909a_1bb017bf2d9e -segmentId 555c4d32-40bb-457d-909a-1bb017bf2d9e -confPath /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/conf -output hdfs:///kylin/kylin_metadata/kylin-2d5c1178-c6f6-4b50-8937-8e5e3b39227e/kylin_sales_cube/cuboid/ -cubename kylin_sales_cube
+
+{% endhighlight %}
+
+You can copy the command to execute it manually in a shell and then tune the parameters quickly; during execution, you can access the Yarn resource manager to check more. If the job has already finished, you can check the history info in the Spark history server. 
+
+By default Kylin outputs the history to "hdfs:///kylin/spark-history"; you need to start a Spark history server on that directory, or change it to your existing Spark history server's event directory in conf/kylin.properties with the parameters "kylin.engine.spark-conf.spark.eventLog.dir" and "kylin.engine.spark-conf.spark.history.fs.logDirectory".
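For example, to reuse an existing Spark history server, point both parameters at its event log directory (the path below is illustrative; substitute your own):

```properties
kylin.engine.spark-conf.spark.eventLog.dir=hdfs:///existing/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs:///existing/spark-history
```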
+
+The following command will start a Spark history server instance on Kylin's output directory; before running it, make sure you have stopped the existing Spark history server in the sandbox:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/spark/sbin/start-history-server.sh hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
+{% endhighlight %}
+
+In a web browser, access "http://sandbox:18080"; it shows the job history:
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
+
+Click a specific job; there you will see the detailed runtime information, which is very helpful for troubleshooting and performance tuning.
+
+## Go further
+
+If you're a Kylin administrator but new to Spark, it is suggested that you go through the [Spark documents](https://spark.apache.org/docs/1.6.3/), and don't forget to update the configurations accordingly. Spark's performance relies on the cluster's memory and CPU resources, while Kylin's cube build is a heavy task when a complex data model and a huge dataset are built at one time. If your cluster resources can't fulfill it, errors like "OutOfMemoryError" will be thrown in Spark executors, so please use it properly. For a cube which has UHC dimensions, many combinations (e.g., a full cube with more than 12 dimensions), or memory-hungry measures (Count Distinct, Top-N), it is suggested to use the MapReduce engine. If your cube model is simple, all measures are SUM/MIN/MAX/COUNT, and the source data is of small to medium scale, the Spark engine would be a good choice. Besides, streaming build isn't supported in this engine so far (KYLIN-2484).
+
+Now the Spark engine is in public beta; if you have any questions, comments, or bug fixes, you are welcome to discuss them on dev@kylin.apache.org.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_streaming.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_streaming.md b/website/_docs20/tutorial/cube_streaming.md
new file mode 100644
index 0000000..08e5bf9
--- /dev/null
+++ b/website/_docs20/tutorial/cube_streaming.md
@@ -0,0 +1,219 @@
+---
+layout: docs20
+title:  Scalable Cubing from Kafka (beta)
+categories: tutorial
+permalink: /docs20/tutorial/cube_streaming.html
+---
+Kylin v1.6 releases the scalable streaming cubing function; it leverages Hadoop to consume the data from Kafka to build the cube. You can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This doc is a step-by-step tutorial illustrating how to create and build a sample cube.
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment with Kylin v1.6.0 or above installed, and a Kafka (v0.10.0 or above) running; previous Kylin versions have a couple of issues, so please upgrade your Kylin instance first.
+
+In this tutorial, we will use Hortonworks HDP 2.2.4 Sandbox VM + Kafka v0.10.0(Scala 2.10) as the environment.
+
+## Install Kafka 0.10.0.0 and Kylin
+Don't use HDP 2.2.4's built-in Kafka as it is too old; stop it first if it is running.
+{% highlight Groff markup %}
+curl -s http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
+
+cd /usr/local/kafka_2.10-0.10.0.0/
+
+bin/kafka-server-start.sh config/server.properties &
+
+{% endhighlight %}
+
+Download Kylin v1.6 from the download page, and expand the tar ball in the /usr/local/ folder.
+
+## Create sample Kafka topic and populate data
+
+Create a sample topic "kylindemo", with 3 partitions:
+
+{% highlight Groff markup %}
+
+bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylindemo
+Created topic "kylindemo".
+{% endhighlight %}
+
+Put sample data into this topic; Kylin has a utility class which can do this:
+
+{% highlight Groff markup %}
+export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
+export KYLIN_HOME=/usr/local/apache-kylin-1.6.0-bin
+
+cd $KYLIN_HOME
+./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylindemo --broker localhost:9092
+{% endhighlight %}
+
+This tool will send 100 records to Kafka every second. Please keep it running during this tutorial. You can now check the sample messages with kafka-console-consumer.sh:
+
+{% highlight Groff markup %}
+cd $KAFKA_HOME
+bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server localhost:9092 --topic kylindemo --from-beginning
+{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
+{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
+
+{% endhighlight %}
+
+## Define a table from streaming
+Start the Kylin server with "$KYLIN_HOME/bin/kylin.sh start", log in to the Kylin web GUI at http://sandbox:7070/kylin/, and select an existing project or create a new one; click "Model" -> "Data Source", then click the "Add Streaming Table" icon;
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
+
+In the pop-up dialog, enter a sample record which you got from kafka-console-consumer, and click the ">>" button; Kylin parses the JSON message and lists all the properties.
+
+You need to give a logical table name for this streaming data source; the name will be used for SQL queries later; here enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
+
+You need to select a timestamp field which will be used to identify the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here check "order_time". You can deselect the properties which are not needed for the cube; here let's keep all fields.
+
+Notice that Kylin supports structured (or "embedded") messages from v1.6; it will convert them into a flat table structure, using "_" as the separator of the structured properties by default.
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
+
+
+Click "Next". On this page, provide the Kafka cluster information; enter "kylindemo" as the "Topic" name. The cluster has 1 broker, whose host name is "sandbox" and port is "9092"; click "Save".
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
+
+In the "Advanced setting" section, the "timeout" and "buffer size" are the configurations for connecting with Kafka; keep the defaults. 
+
+In "Parser Setting", by default Kylin assumes your message is in JSON format, and each record's timestamp column (specified by "tsColName") is a bigint (epoch time) value; in this case, you just need to set "tsColName" to "order_time"; 
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
+
+In a real case, if the timestamp value is a string-valued timestamp like "Jul 20, 2016 9:59:17 AM", you need to specify the parser class with "tsParser" and the time pattern with "tsPattern", like this:
+
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
+
+Click "Submit" to save the configurations. Now a "Streaming" table is created.
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
+
+## Define data model
+With the table defined in the previous step, now we can create the data model. The steps are almost the same as creating a normal data model, but there are two requirements:
+
+* A streaming cube doesn't support joining with lookup tables; when defining the data model, select only the fact table, no lookup tables;
+* A streaming cube must be partitioned; if you're going to build the cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column; if at the hour level, select "HOUR_START".
+
+Here we pick 13 dimension columns and 2 measure columns:
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
+Save the data model.
+
+## Create Cube
+
+The streaming cube is almost the same as a normal cube; a couple of points need your attention:
+
+* The partition time column should be a dimension of the Cube. In Streaming OLAP the time is always a query condition, and Kylin will leverage this to narrow down the scanned partitions.
+* Don't use "order\_time" as a dimension as it is very fine-grained; it is suggested to use "minute\_start", "hour\_start" or another derived time column, depending on how you will inspect the data.
+* Define "year\_start", "quarter\_start", "month\_start", "day\_start", "hour\_start", "minute\_start" as a hierarchy to reduce the combinations to calculate.
+* In the "refresh setting" step, create more merge ranges, like 0.5 hour, 4 hours, 1 day, and then 7 days; this will help to control the number of cube segments.
+* In the "rowkeys" section, drag&drop the "minute\_start" to the head position, as for streaming queries, the time condition is always appeared; putting it to head will help to narrow down the scan range.
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
+
+	![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
+
+Save the cube.
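To see why the hierarchy suggested above helps, here is a quick back-of-the-envelope count (illustrative arithmetic only):

```python
# Cuboid combinations contributed by the six time-derived columns,
# with and without a hierarchy over them.
time_dims = ["year_start", "quarter_start", "month_start",
             "day_start", "hour_start", "minute_start"]

independent = 2 ** len(time_dims)   # every subset is a valid combination
hierarchical = len(time_dims) + 1   # only prefixes (plus the empty set) are valid

print(independent, hierarchical)    # 64 vs 7
```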
+
+## Run a build
+
+You can trigger the build from the web GUI, by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+{% endhighlight %}
+
+Please note the API endpoint is different from that of a normal cube (this URL ends with "build2").
+
+Here 0 means building from the last position, and 9223372036854775807 (Long.MAX_VALUE) means building to the end position of the Kafka topic. If it is the first build (no previous segment), Kylin will seek to the beginning of the topic as the start position.
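The same trigger can be scripted. Below is a minimal, hedged sketch using only the Python standard library; the host, port, credentials and cube name are placeholders for your own deployment:

```python
import base64
import json
from urllib import request

def build_request(cube_name, host="localhost", port=7070,
                  user="ADMIN", password="KYLIN"):
    """Construct (but don't send) the PUT request that triggers a streaming build."""
    url = "http://%s:%d/kylin/api/cubes/%s/build2" % (host, port, cube_name)
    payload = {
        "sourceOffsetStart": 0,                    # 0 = continue from last position
        "sourceOffsetEnd": 9223372036854775807,    # Long.MAX_VALUE = up to topic end
        "buildType": "BUILD",
    }
    auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return request.Request(
        url, data=json.dumps(payload).encode(), method="PUT",
        headers={"Content-Type": "application/json;charset=utf-8",
                 "Authorization": "Basic " + auth})

req = build_request("my_streaming_cube")   # hypothetical cube name
print(req.get_full_url())
# Sending it is a one-liner once Kylin is reachable:
# response = request.urlopen(req)
```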
+
+In the "Monitor" page, a new job is generated; wait until it is 100% finished.
+
+## Click the "Insight" tab, compose a SQL query to run, for example:
+
+ {% highlight Groff markup %}
+select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
+ {% endhighlight %}
+
+The result looks like below.
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
+
+
+## Automate the build
+
+Once the first build and query have succeeded, you can schedule incremental builds at a certain frequency. Kylin records the offsets of each build; when receiving a build request, it will start from the last end position and then seek to the latest offsets in Kafka. With the REST API you can trigger it with any scheduler tool, like Linux cron:
+
+  {% highlight Groff markup %}
+crontab -e
+*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+ {% endhighlight %}
+
+Now you can sit down and watch the cube being automatically built from streaming. When the cube segments accumulate to a bigger time range, Kylin will automatically merge them into a bigger segment.
+
+## Troubleshooting
+
+ * You may encounter the following error when running "kylin.sh":
+{% highlight Groff markup %}
+Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
+	at java.lang.Class.getDeclaredMethods0(Native Method)
+	at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
+	at java.lang.Class.getMethod0(Class.java:2856)
+	at java.lang.Class.getMethod(Class.java:1668)
+	at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
+	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
+Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
+	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
+	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
+	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
+	... 6 more
+{% endhighlight %}
+
+The reason is that Kylin wasn't able to find the proper Kafka client jars; make sure you have properly set the "KAFKA_HOME" environment variable.
+
+ * Get "killed by admin" error in the "Build Cube" step
+
+ Within a sandbox VM, YARN may not allocate the requested memory resource to the MR job, as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml" and change the following two parameters like this:
+
+ {% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1072</value>
+        <description></description>
+    </property>
+
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx800m</value>
+        <description></description>
+    </property>
+ {% endhighlight %}
+
+ * If there is already a bunch of history messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
+{% endhighlight %}
+
+ * If a build job gets an error and you discard it, a hole (or gap) will be left in the Cube. Since Kylin always builds from the last position, you can't expect the hole to be filled by normal builds. Kylin provides an API to check and fill the holes.
+
+Check holes:
+ {% highlight Groff markup %}
+curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
+If the result is an empty array, there is no hole; otherwise, trigger Kylin to fill them:
+ {% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
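The check-then-fill sequence is easy to automate. Here is a small sketch of the decision logic; the two REST calls are passed in as callables so the logic can be demonstrated offline (an assumption for illustration, not part of Kylin itself):

```python
import json

def fill_holes_if_any(get_holes, fill_holes):
    """Call GET /holes; if the returned array is non-empty, call PUT /holes.

    `get_holes` and `fill_holes` stand in for the two REST calls shown above.
    Returns the list of holes that needed filling (empty list if none).
    """
    holes = json.loads(get_holes())
    if holes:          # an empty array means the cube has no gaps
        fill_holes()
    return holes

# Offline demonstration with stubbed responses:
filled = fill_holes_if_any(lambda: '[{"segment": "gap1"}]', lambda: None)
print(filled)
```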
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/flink.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/flink.md b/website/_docs20/tutorial/flink.md
new file mode 100644
index 0000000..d74f602
--- /dev/null
+++ b/website/_docs20/tutorial/flink.md
@@ -0,0 +1,249 @@
+---
+layout: docs20
+title:  Connect from Apache Flink
+categories: tutorial
+permalink: /docs20/tutorial/flink.html
+---
+
+
+### Introduction
+
+This document describes how to use Kylin as a data source in Apache Flink.
+
+There were several attempts to do this in Scala with JDBC, but none of them worked: 
+
+* [attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)  
+* [attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)  
+* [attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)  
+* [attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
+
+We will try to use createInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode to access Kylin via JDBC. But it isn't implemented in Scala, only in Java [MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html). This doc will go step by step through solving these problems.
+
+### Pre-requisites
+
+* An instance of Kylin with a Cube; the [Sample Cube](kylin_sample.html) will be good enough.
+* [Scala](http://www.scala-lang.org/) and [Apache Flink](http://flink.apache.org/) installed
+* [IntelliJ](https://www.jetbrains.com/idea/) installed and configured for Scala/Flink (see the [Flink IDE setup guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html) )
+
+### Used software:
+
+* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
+* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
+* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
+* [Scala](http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
+
+### Starting point:
+
+This can be our initial skeleton: 
+
+{% highlight Groff markup %}
+import org.apache.flink.api.scala._
+val env = ExecutionEnvironment.getExecutionEnvironment
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
+  .finish()
+  val dataset =env.createInput(inputFormat)
+{% endhighlight %}
+
+The first error is: ![alt text](/images/Flink-Tutorial/02.png)
+
+Add to Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
+{% endhighlight %}
+
+The next error is  ![alt text](/images/Flink-Tutorial/03.png)
+
+We can resolve this dependency [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); add this to your pom.xml:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-jdbc</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+## Solve dependencies of Row 
+
+Similar to the previous point, we need to resolve the dependencies of the Row class [(mvn repository: Table) ](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
+
+  ![](/images/Flink-Tutorial/03b.png)
+
+
+* In pom.xml add:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-table_2.10</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+* In Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.table.Row
+{% endhighlight %}
+
+## Solve RowTypeInfo property (and their new dependencies)
+
+This is the new error to solve:
+
+  ![](/images/Flink-Tutorial/04.png)
+
+
+* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69) (which is mandatory), added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750). See the Java manual of [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.html) v1.2.
+
+   Add the new Property: **setRowTypeInfo**
+   
+{% highlight Groff markup %}
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt")
+  .setRowTypeInfo(DB_ROWTYPE)
+  .finish()
+{% endhighlight %}
+
+* How can we configure this property in Scala? [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala) contains an incorrect solution.
+   
+   We can check the types using the intellisense: ![alt text](/images/Flink-Tutorial/05.png)
+   
+   Then we will need to add more dependencies; add to Scala:
+
+{% highlight Groff markup %}
+import org.apache.flink.api.table.typeutils.RowTypeInfo
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+{% endhighlight %}
+
+   Create an Array or Seq of TypeInformation[ ]
+
+  ![](/images/Flink-Tutorial/06.png)
+
+
+   Solution:
+   
+{% highlight Groff markup %}
+   var stringColum: TypeInformation[String] = createTypeInformation[String]
+   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
+{% endhighlight %}
+
+## Solve ClassNotFoundException
+
+  ![](/images/Flink-Tutorial/07.png)
+
+We need to find the kylin-jdbc-x.x.x.jar and then expose it to Flink:
+
+1. Find the Kylin JDBC jar
+
+   From Kylin [Download](http://kylin.apache.org/download/) choose **Binary** and the **correct version of Kylin and HBase**
+   
+   Download and unpack it; the jar is in ./lib: 
+   
+  ![](/images/Flink-Tutorial/08.png)
+
+
+2. Make this JAR accessible to Flink
+
+   If you run Flink as a service, you need to put this JAR in your Java class path, e.g. via your .bashrc:
+
+  ![](/images/Flink-Tutorial/09.png)
+
+
+  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
+  
+  Check the permissions of this file (it must be accessible to you):
+
+  ![](/images/Flink-Tutorial/11.png)
+
+ 
+  If you are executing from the IDE, you need to add it to your class path manually:
+  
+  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt text](/images/Flink-Tutorial/13.png) > ![alt text](/images/Flink-Tutorial/14.png) > ![alt text](/images/Flink-Tutorial/15.png)
+  
+  The result will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
+  
+## Solve the "Couldn't access resultSet" error
+
+  ![](/images/Flink-Tutorial/17.png)
+
+
+It is related to [Flink 4108](https://issues.apache.org/jira/browse/FLINK-4108) [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415), for which Timo Walther [made a PR](https://github.com/apache/flink/pull/2619).
+
+If you are running Flink <= 1.2, you will need to apply this patch and run `mvn clean install`.
+
+## Solve the casting error
+
+  ![](/images/Flink-Tutorial/18.png)
+
+The error message contains both the problem and the solution... nice ;)
+
+## The result
+
+The output should be similar to this, printing the result of the query to standard output:
+
+  ![](/images/Flink-Tutorial/19.png)
+
+
+## Now, more complex
+
+Try with a multi-column and multi-type query:
+
+{% highlight Groff markup %}
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
+from kylin_sales 
+group by part_dt 
+order by part_dt
+{% endhighlight %}
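The query itself is plain ANSI SQL. As a hedged illustration of the result shape (three columns of different types), the same statement runs here against a tiny in-memory SQLite stand-in for kylin_sales with made-up rows:

```python
import sqlite3

# In-memory stand-in for the kylin_sales fact table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("create table kylin_sales (part_dt text, price real, seller_id integer)")
conn.executemany("insert into kylin_sales values (?, ?, ?)",
                 [("2012-01-01", 10.5, 1), ("2012-01-01", 4.5, 2),
                  ("2012-01-02", 7.0, 1)])

rows = conn.execute(
    "select part_dt, sum(price) as total_selled, "
    "count(distinct seller_id) as sellers "
    "from kylin_sales group by part_dt order by part_dt").fetchall()
print(rows)   # [('2012-01-01', 15.0, 2), ('2012-01-02', 7.0, 1)]
```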
+
+This needs changes in DB_ROWTYPE:
+
+  ![](/images/Flink-Tutorial/20.png)
+
+
+And import the Java libs to work with Java data types ![alt text](/images/Flink-Tutorial/21.png)
+
+The new result will be: 
+
+  ![](/images/Flink-Tutorial/23.png)
+
+
+## Error:  Reused Connection
+
+
+  ![](/images/Flink-Tutorial/24.png)
+
+Check whether your HBase and Kylin are running. You can also use the Kylin UI for this.
+
+
+## Error: java.lang.AbstractMethodError: ...Avatica Connection
+
+See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
+
+It is a problem with the kylin-jdbc-1.x.x JAR; you need Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
+
+  ![](/images/Flink-Tutorial/25.png)
+
+
+
+## Error: can't expand macros compiled by previous versions of scala
+
+It is a problem with Scala versions; check your actual version with "scala -version" and choose the correct POM.
+
+Perhaps you will need IntelliJ > File > Invalidate Caches > Invalidate and Restart.
+
+I added a POM for Scala 2.11.
+
+
+## Final Words
+
+Now you can read Kylin's data from Apache Flink, great!
+
+[Full Code Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
+
+All integration problems are solved, and it has been tested with different types of data (Long, BigDecimal and Dates). The patch was committed on 15 Oct and will then be part of Flink 1.2.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/kylin_client_tool.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/kylin_client_tool.cn.md b/website/_docs20/tutorial/kylin_client_tool.cn.md
new file mode 100644
index 0000000..7100b19
--- /dev/null
+++ b/website/_docs20/tutorial/kylin_client_tool.cn.md
@@ -0,0 +1,97 @@
+---
+layout: docs20-cn
+title:  Kylin Client Tool Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/kylin_client_tool.html
+---
+  
+> Kylin-client-tool is a tool written in Python, based entirely on Kylin's REST API. It supports creating Kylin cubes, building cubes on schedule, and submitting, scheduling, viewing, cancelling and resuming jobs.
+  
+## Installation
+1. Make sure Python 2.6/2.7 is installed in the runtime environment
+
+2. This tool requires the third-party Python packages apscheduler and requests; run setup.sh to install them (Mac users run setup-mac.sh). You can also install them with setuptools
+
+## Configuration
+Edit the settings/settings.py file under the tool directory to configure it:
+
+`KYLIN_USER`  Kylin user name
+
+`KYLIN_PASSWORD`  Kylin password
+
+`KYLIN_REST_HOST`  Kylin server address
+
+`KYLIN_REST_PORT`  Kylin server port
+
+`KYLIN_JOB_MAX_COCURRENT`  Maximum number of jobs allowed to build concurrently
+
+`KYLIN_JOB_MAX_RETRY`  Number of job restarts allowed after a cube build error
+
+## Command line usage
+This tool uses optparse to execute operations from the command line; run `python kylin_client_tool.py -h` for detailed usage
+
+## Creating cubes
+This tool defines a simple hand-written text format for creating cubes quickly, as follows:
+
+`cube_name|fact_table_name|dim1,dim1_type;dim2,dim2_type...|measure1,measure1_expression,measure1_type...|settings|filter|`
+
+The settings field supports the following options:
+
+`no_dictionary`  Sets the Rowkeys dimensions for which no dictionary is built, and their lengths
+
+`mandatory_dimension`  Sets the mandatory dimensions in Rowkeys
+
+`aggregation_group`  Sets the aggregation groups
+
+`partition_date_column`  Sets the partition date column
+
+`partition_date_start`  Sets the partition start date
+
+See the cube_def.csv file for concrete examples; creating cubes with lookup tables is not supported yet
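As an illustration of how one definition line splits into fields (a sketch following the format above, not the tool's actual parser; the sample line is made up):

```python
def parse_cube_def(line):
    """Split one cube-definition line of the form
    cube_name|fact_table|dims|measures|settings|filter|"""
    fields = line.rstrip("|").split("|")
    cube_name, fact_table, dims, measures, settings, flt = (fields + [""] * 6)[:6]
    return {
        "cube_name": cube_name,
        "fact_table": fact_table,
        # dimensions: "name,type" pairs separated by ";"
        "dimensions": [d.split(",") for d in dims.split(";") if d],
        # measures: "name,expression,type" triples separated by ";"
        "measures": [m.split(",") for m in measures.split(";") if m],
        "settings": settings,
        "filter": flt,
    }

example = "test_cube|kylin_sales|part_dt,date;seller_id,bigint|gmv,sum(price),decimal||"
print(parse_cube_def(example)["dimensions"])
```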
+
+Use the `-c` command to create cubes, with `-F` specifying the cube definition file, for example
+
+`python kylin_client_tool.py -c -F cube_def.csv`
+
+## Build cubes
+### Build with a cube definition file
+Use the `-b` command with `-F` specifying the cube definition file. If a partition date column was specified, use `-T` to specify the end date (year-month-day format); if omitted, the current time is used as the end date. For example
+
+`python kylin_client_tool.py -b -F cube_def.csv -T 2016-03-01`
+
+### Build with a cube name file
+Use `-f` to specify a cube name file, with one cube name per line
+
+`python kylin_client_tool.py -b -f cube_names.csv -T 2016-03-01`
+
+### Build with cube names on the command line
+Use `-C` to specify cube names, separated by commas
+
+`python kylin_client_tool.py -b -C client_tool_test1,client_tool_test2 -T 2016-03-01`
+
+## Job management
+### Check job status
+Use the `-s` command to check status, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, the status of all cubes is shown. Use `-S` to filter by job status: R for `Running`, E for `Error`, F for `Finished`, D for `Discarded`. For example:
+
+`python kylin_client_tool.py -s -C kylin_sales_cube -f cube_names.csv -S F`
+
+### Resume jobs
+Use the `-r` command to resume jobs, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, all jobs in Error status are resumed. For example:
+
+`python kylin_client_tool.py -r -C kylin_sales_cube -f cube_names.csv`
+
+### Cancel jobs
+Use the `-k` command to cancel jobs, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, all jobs in Running or Error status are cancelled. For example:
+
+`python kylin_client_tool.py -k -C kylin_sales_cube -f cube_names.csv`
+
+## Scheduled cube builds
+### Build at a fixed interval
+On top of the cube build command, use `-B i` to build at a fixed interval and `-O` to specify the interval in hours, for example:
+
+`python kylin_client_tool.py -b -F cube_def.csv -B i -O 1`
+
+### Build at a set time
+Use `-B t` to build at a set time and `-O` to specify the build time, separated by commas
+
+`python kylin_client_tool.py -b -F cube_def.csv -T 2016-03-04 -B t -O 2016,3,1,0,0,0`

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/kylin_sample.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/kylin_sample.md b/website/_docs20/tutorial/kylin_sample.md
new file mode 100644
index 0000000..d083f10
--- /dev/null
+++ b/website/_docs20/tutorial/kylin_sample.md
@@ -0,0 +1,21 @@
+---
+layout: docs20
+title:  Quick Start with Sample Cube
+categories: tutorial
+permalink: /docs20/tutorial/kylin_sample.html
+---
+
+Kylin provides a script for you to create a sample Cube; the script will also create three sample hive tables:
+
+1. Run ${KYLIN_HOME}/bin/sample.sh; restart the Kylin server to flush the caches;
+2. Log on to the Kylin web UI with the default user ADMIN/KYLIN, and select project "learn_kylin" in the project dropdown list (upper left corner);
+3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick a date later than 2014-01-01 (to cover all 10000 sample records);
+4. Check the build progress in the "Monitor" tab until it reaches 100%;
+5. Execute SQL queries in the "Insight" tab, for example:
+	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+6. You can verify the query result and compare the response time with Hive;
+
+   
+## What's next
+
+You can create another cube with the sample tables, by following the tutorials.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/odbc.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/odbc.cn.md b/website/_docs20/tutorial/odbc.cn.md
new file mode 100644
index 0000000..665b824
--- /dev/null
+++ b/website/_docs20/tutorial/odbc.cn.md
@@ -0,0 +1,34 @@
+---
+layout: docs20-cn
+title:  Kylin ODBC Driver Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/odbc.html
+version: v1.2
+since: v0.7.1
+---
+
+> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit and 64-bit versions of the driver are available.
+> 
+> Tested operating systems: Windows 7, Windows Server 2008 R2
+> 
+> Tested applications: Tableau 8.0.4 and Tableau 8.1.3
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable
+   * For 32-bit Windows or 32-bit Tableau Desktop, download the [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
+   * For 64-bit Windows or 64-bit Tableau Desktop, download the [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+2. The ODBC driver internally gets results from a REST server; make sure you have access to one
+
+## Installation
+1. Uninstall any existing Kylin ODBC driver first, if you installed one before
+2. Download the driver installer from [download](../../download/) and run it.
+   * For 32-bit Tableau Desktop, please install KylinODBCDriver (x86).exe
+   * For 64-bit Tableau Desktop, please install KylinODBCDriver (x64).exe
+
+3. Both drivers also need to be installed on Tableau Server; then you should be able to publish there without issues
+
+## Bug Report
+If you hit any problem, please report it in the Apache Kylin JIRA, or send an email to the dev mailing list.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/odbc.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/odbc.md b/website/_docs20/tutorial/odbc.md
new file mode 100644
index 0000000..f386fd6
--- /dev/null
+++ b/website/_docs20/tutorial/odbc.md
@@ -0,0 +1,49 @@
+---
+layout: docs20
+title:  Kylin ODBC Driver
+categories: tutorial
+permalink: /docs20/tutorial/odbc.html
+since: v0.7.1
+---
+
+> We provide Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit and 64-bit versions of the driver are available.
+> 
+> Tested Operation System: Windows 7, Windows Server 2008 R2
+> 
+> Tested Application: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable 
+   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
+   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+
+2. ODBC driver internally gets results from a REST server, make sure you have access to one
+
+## Installation
+1. Uninstall any existing Kylin ODBC driver first, if you installed it before
+2. Download ODBC Driver from [download](../../download/).
+   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
+   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
+
+3. Both drivers also need to be installed on Tableau Server; then you should be able to publish there without issues
+
+## DSN configuration
+1. Open ODBCAD to configure DSN.
+	* For 32 bit driver, please use the 32bit version in C:\Windows\SysWOW64\odbcad32.exe
+	* For 64 bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrator Tools
+![]( /images/Kylin-ODBC-DSN/1.png)
+
+2. Open the "System DSN" tab and click "Add"; you will see KylinODBCDriver listed as an option. Click "Finish" to continue.
+![]( /images/Kylin-ODBC-DSN/2.png)
+
+3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is started.
+![]( /images/Kylin-ODBC-DSN/3.png)
+
+4. Click "Done", and you will see your new DSN listed in the "System Data Sources"; you can use this DSN afterwards.
+![]( /images/Kylin-ODBC-DSN/4.png)
+
+## Bug Report
+Please open a bug in the Apache Kylin JIRA, or send an email to the dev mailing list.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/powerbi.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/powerbi.cn.md b/website/_docs20/tutorial/powerbi.cn.md
new file mode 100644
index 0000000..9326a82
--- /dev/null
+++ b/website/_docs20/tutorial/powerbi.cn.md
@@ -0,0 +1,56 @@
+---
+layout: docs20-cn
+title:  Microsoft Excel and Power BI Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/powerbi.html
+version: v1.2
+since: v1.2
+---
+
+Microsoft Excel is one of the most popular data processing tools on the Windows platform today, supporting a variety of data processing features; with Power Query it can read data from an ODBC data source and return it to a spreadsheet.
+
+Microsoft Power BI is a professional business intelligence and analytics tool from Microsoft, providing users with simple yet rich data visualization and analysis features.
+
+> The current version of Apache Kylin does not support querying raw data; some queries will therefore fail and cause application exceptions. It is recommended to apply the KYLIN-1075 patch to improve the display of query results.
+
+
+> Power BI and Excel do not support "connect live" mode; please be aware of this, and add a where clause when querying very large data sets, to avoid pulling too much data from the server to the local machine, or even query failures in some cases.
+
+### Install ODBC Driver
+Refer to the [Kylin ODBC Driver Tutorial](./odbc.html); make sure you download and install Kylin ODBC Driver __v1.2__. If you have an earlier version installed, please uninstall it first. 
+
+### Connect Excel to Kylin
+1. Download and install Power Query from the Microsoft website. Once installed, you will see the Power Query fast tab in Excel; click the `From other sources` dropdown button and select `From ODBC`
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2. In the pop-up `From ODBC` data connection wizard, enter the connection string of the Apache Kylin server. You may also enter the SQL statement you want to execute in the `SQL` text box. Click `OK`, and the SQL result will be loaded into the Excel spreadsheet immediately
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> To simplify entering the connection string, it is recommended to create a DSN for Apache Kylin, which reduces the connection string to DSN=[YOUR_DSN_NAME]. For DSN creation, refer to: [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+
+ 
+3. If you choose not to enter a SQL statement, Power Query will list all database tables, and you can load the data of a whole table as needed. However, Apache Kylin does not yet support raw data queries, so loading some tables may fail
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4. Wait a moment, and the data is now loaded into Excel
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5.  Once the data on the server side is updated, the data in Excel needs to be synced: right-click the data source in the right panel and select `Refresh`, and the latest data will be loaded into the spreadsheet.
+
+6.  To improve performance, open the `Query Options` settings in Power Query and enable `Fast data load`; this speeds up data loading but may make the UI temporarily unresponsive
+
+### Power BI
+1.  Launch the Power BI Desktop application you installed, click the `Get Data` button, and select the ODBC data source.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2.  In the `From ODBC` data connection wizard, enter the database connection string of the Apache Kylin server; optionally, type the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the query result will be loaded into Power BI immediately
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3.  If you choose not to enter a SQL statement, Power BI lists all tables in the project, and you can load data from a whole table as needed. However, since Apache Kylin does not support queries on raw data yet, loading some tables may be limited
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  Now you can go further and do visual analysis with Power BI:
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5.  Click the `Refresh` button in the toolbar to reload the data and update the charts
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/powerbi.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/powerbi.md b/website/_docs20/tutorial/powerbi.md
new file mode 100644
index 0000000..5465c57
--- /dev/null
+++ b/website/_docs20/tutorial/powerbi.md
@@ -0,0 +1,54 @@
+---
+layout: docs20
+title:  MS Excel and Power BI
+categories: tutorial
+permalink: /docs20/tutorial/powerbi.html
+since: v1.2
+---
+
+Microsoft Excel is one of the most popular data tools on the Windows platform, with plenty of data analysis functions. With the Power Query plug-in installed, Excel can easily read data from an ODBC data source and fill spreadsheets. 
+
+Microsoft Power BI is a business intelligence tool that provides rich data visualization and processing functionality.
+
+> Apache Kylin doesn't support queries on raw data yet; such queries might fail and cause exceptions in the application. Applying patch KYLIN-1075 is recommended to get better-formatted query results.
+
+> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet, so be careful when you query a huge dataset: it may pull too much data into your client, which can take a long time or even fail in the end.
+
+### Install ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.2__. If an earlier version is already installed in your system, please uninstall it first. 
+
+### Kylin and Excel
+1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` fast tab, click the `From Other Sources` dropdown list, and select the `ODBC` item.
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2.  You'll see the `From ODBC` dialog; type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally, you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> Tips: to simplify the database connection string, a DSN is recommended, which shortens the connection string to `DSN=[YOUR_DSN_NAME]`. For details about DSNs, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+ 
+3. If you didn't input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. However, since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4.  Wait a moment, and the data is now in Excel.
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; you'll then see the latest data.
+
+6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this may make the UI unresponsive for a while. 
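The DSN tip above can be sketched programmatically. Below is a minimal, purely illustrative Python sketch of the two equivalent connection-string forms; note that the key names (`DRIVER`, `SERVER`, `PORT`, `PROJECT`, `UID`, `PWD`) are assumptions for illustration, so check the Kylin ODBC driver documentation for the exact keys your driver version accepts.

```python
# Sketch of the two equivalent ways to form an ODBC connection string.
# NOTE: the key names below (DRIVER, SERVER, PORT, PROJECT, UID, PWD) are
# illustrative assumptions -- check the Kylin ODBC driver docs for exact keys.

def full_connection_string(host, port, project, user, password):
    """Spell out every key=value pair explicitly."""
    parts = {
        "DRIVER": "{KylinODBCDriver}",
        "SERVER": host,
        "PORT": str(port),
        "PROJECT": project,
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

def dsn_connection_string(dsn_name):
    """With a DSN configured in the ODBC manager, one pair is enough."""
    return f"DSN={dsn_name}"

print(full_connection_string("kylin-host", 7070, "learn_kylin", "ADMIN", "KYLIN"))
print(dsn_connection_string("KylinDSN"))  # DSN=KylinDSN
```

Either string can then be pasted into the `Connection String` textbox; the DSN form keeps credentials out of the workbook.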
+
+### Power BI
+1.  Run Power BI Desktop, click the `Get Data` button, and select `ODBC` as the data source type.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2.  As with Excel, type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into Power BI as a new data source query.
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3.  If you didn't input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. However, since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  Now you can start to enjoy analyzing with Power BI.
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` fast tab.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/squirrel.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/squirrel.md b/website/_docs20/tutorial/squirrel.md
new file mode 100644
index 0000000..7d0c9d9
--- /dev/null
+++ b/website/_docs20/tutorial/squirrel.md
@@ -0,0 +1,112 @@
+---
+layout: docs20
+title:  Connect from SQuirreL
+categories: tutorial
+permalink: /docs20/tutorial/squirrel.html
+---
+
+### Introduction
+
+[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform universal SQL client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
+
+### Used Software
+
+* [Kylin v1.6.0](/download/) & ODBC 1.6
+* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
+
+## Pre-requisites
+
+* Find the Kylin JDBC driver jar:
+  from the Kylin download page, choose the binary package for the **correct version of Kylin and HBase**,
+	then download and unpack it; the driver jar is in **./lib**: 
+  ![](/images/SQuirreL-Tutorial/01.png)
+
+
+* You need a running Kylin instance with a cube; the [Sample Cube](kylin_sample.html) is enough.
+
+  ![](/images/SQuirreL-Tutorial/02.png)
+
+
+* [Download and install SQuirreL](http://www.squirrelsql.org/#installation)
+
+## Add Kylin JDBC Driver
+
+On the left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) >![alt text](/images/SQuirreL-Tutorial/04.png)  > ![alt text](/images/SQuirreL-Tutorial/05.png)  > ![alt text](/images/SQuirreL-Tutorial/06.png)
+
+And locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
+
+Configure these parameters:
+
+* Enter a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
+* Example URL ![alt text](/images/SQuirreL-Tutorial/09.png)
+
+  jdbc:kylin://172.17.0.2:7070/learn_kylin
+* Enter the class name: ![alt text](/images/SQuirreL-Tutorial/10.png)
+	Tip: if auto-complete does not work, type: org.apache.kylin.jdbc.Driver 
+	
+Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
+
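As a quick sanity check before configuring the driver, the example URL above decomposes into a host, a port, and a project name. The following small Python sketch is purely illustrative of the `jdbc:kylin://host:port/project` format:

```python
import re

# Illustrative parser for the JDBC URL format jdbc:kylin://host:port/project.
KYLIN_URL = re.compile(r"^jdbc:kylin://(?P<host>[^:/]+):(?P<port>\d+)/(?P<project>\w+)$")

def parse_kylin_url(url):
    """Split a Kylin JDBC URL into (host, port, project)."""
    m = KYLIN_URL.match(url)
    if not m:
        raise ValueError(f"not a Kylin JDBC URL: {url}")
    return m.group("host"), int(m.group("port")), m.group("project")

print(parse_kylin_url("jdbc:kylin://172.17.0.2:7070/learn_kylin"))
# → ('172.17.0.2', 7070, 'learn_kylin')
```

If your URL fails to parse here, double-check it before blaming the driver configuration.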
+## Add Aliases
+
+On the left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  > ![alt text](/images/SQuirreL-Tutorial/13.png) : (default login: ADMIN / KYLIN)
+
+  ![](/images/SQuirreL-Tutorial/14.png)
+
+
+The connection is launched automatically:
+
+  ![](/images/SQuirreL-Tutorial/15.png)
+
+
+## Connect and Execute
+
+The startup window when connected:
+
+  ![](/images/SQuirreL-Tutorial/16.png)
+
+
+Choose the SQL tab and write a query (we use Kylin's example cube):
+
+  ![](/images/SQuirreL-Tutorial/17.png)
+
+
+```
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers 
+from kylin_sales group by part_dt 
+order by part_dt
+```
+
+Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
+
+  ![](/images/SQuirreL-Tutorial/19.png)
+
+
+And it works!
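If you want to script the same query without a SQL client, Kylin also exposes a REST query endpoint. The sketch below only assembles the request for the assumed default `POST /kylin/api/query` path with Basic auth; sending it is left commented out since it requires a running server.

```python
import base64
import json

KYLIN_BASE = "http://localhost:7070/kylin/api"  # adjust host/port to your server

def query_request(sql, project, user="ADMIN", password="KYLIN"):
    """Build the (url, headers, body) triple for Kylin's POST /query endpoint."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"sql": sql, "project": project})
    return f"{KYLIN_BASE}/query", headers, body

url, headers, body = query_request(
    "select part_dt, sum(price) from kylin_sales group by part_dt",
    "learn_kylin",
)
print(url)  # http://localhost:7070/kylin/api/query
# To actually send it (requires a running Kylin server):
#   import urllib.request
#   req = urllib.request.Request(url, body.encode(), headers)
#   print(urllib.request.urlopen(req).read())
```

This is handy for connectivity checks from machines where installing a GUI client is not an option.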
+
+## Tips
+
+SQuirreL isn't the most stable SQL client, but it is very flexible and exposes a lot of information; it can be used for PoCs and for checking connectivity issues.
+
+List of tables: 
+
+  ![](/images/SQuirreL-Tutorial/21.png)
+
+
+List of columns of a table:
+
+  ![](/images/SQuirreL-Tutorial/22.png)
+
+
+List of columns of a query:
+
+  ![](/images/SQuirreL-Tutorial/23.png)
+
+
+Export the result of queries:
+
+  ![](/images/SQuirreL-Tutorial/24.png)
+
+
+ Info about query execution time:
+
+  ![](/images/SQuirreL-Tutorial/25.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau.cn.md b/website/_docs20/tutorial/tableau.cn.md
new file mode 100644
index 0000000..e185b38
--- /dev/null
+++ b/website/_docs20/tutorial/tableau.cn.md
@@ -0,0 +1,116 @@
+---
+layout: docs20-cn
+title:  Tableau Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/tableau.html
+version: v1.2
+since: v0.7.1
+---
+
+> The Kylin ODBC driver has some limitations with Tableau; please read these instructions carefully before you try it.
+> * Only the "managed" analysis path is supported; the Kylin engine will raise an error for unexpected dimensions or measures
+> * Always select the fact table first, then add lookup tables with the correct join conditions (the join types defined in the cube)
+> * Do not try to join between multiple fact tables or between multiple lookup tables;
+> * You can try to use a high-cardinality dimension such as seller id in a Tableau filter, but the engine will only return a limited number of seller ids in the filter for now.
+> 
+> For more details or any questions, please contact the Kylin team: `kylinolap@gmail.com`
+
+
+### For Tableau 9.x Users
+Please refer to the [Tableau 9 Tutorial](./tableau_91.html) for a more detailed guide.
+
+### Step 1. Install the Kylin ODBC Driver
+Refer to the [Kylin ODBC Driver Tutorial](./odbc.html) page.
+
+### Step 2. Connect to the Kylin Server
+> We recommend connecting using the driver rather than a DSN.
+
+Connect Using Driver: select "Other Database(ODBC)" in the left panel and "KylinODBCDriver" in the pop-up window.
+
+![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
+
+Enter your server location and credentials: server host, port, username and password.
+
+![](/images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
+
+Click "Connect" to get the list of projects you have permission to access. See details about permissions in the [Kylin Cube Permission Grant Tutorial](https://github.com/KylinOLAP/Kylin/wiki/Kylin-Cube-Permission-Grant-Tutorial). Then choose the project you want to connect to in the drop-down list.
+
+![](/images/Kylin-and-Tableau-Tutorial/3 project.jpg)
+
+Click "Done" to complete the connection.
+
+![](/images/Kylin-and-Tableau-Tutorial/4 done.jpg)
+
+### Step 3. Use a Single Table or Multiple Tables
+> Limitations
+>    * You must select the fact table first
+>    * Selecting from a lookup table only is not supported
+>    * The join conditions must match the cube definition
+
+**Select the Fact Table**
+
+Select `Multiple Tables`.
+
+![](/images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
+
+Then click `Add Table...` to add a fact table.
+
+![](/images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
+
+![](/images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
+
+**Select Lookup Tables**
+
+Click `Add Table...` to add a lookup table.
+
+![](/images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
+
+Set up the join clause carefully.
+
+![](/images/Kylin-and-Tableau-Tutorial/8 join.jpg)
+
+Keep adding tables by clicking `Add Table...` until all the lookup tables have been added properly. Give the connection a name for use in Tableau.
+
+![](/images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
+
+**Use Connect Live**
+
+There are three types of `Data Connection`. Choose the `Connect Live` option.
+
+![](/images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
+
+Then you can enjoy analyzing with Tableau.
+
+![](/images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
+
+**Add Additional Lookup Tables**
+
+Click `Data` in the top menu bar and select `Edit Tables...` to update the lookup table information.
+
+![](/images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
+
+### Step 4. Use Customized SQL
+Using customized SQL resembles using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and then follow the same instructions as above.
+
+![](/images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
+
+### Step 5. Publish to Tableau Server
+Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
+Click `Server` in the top menu bar and select `Publish Workbook...`.
+
+![](/images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
+
+Then sign in to your Tableau Server and prepare to publish.
+
+![](/images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
+
+If you are connecting using the driver rather than a DSN, you will also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
+
+![](/images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
+
+### Tips
+* Hide table names in Tableau
+
+    * Tableau displays columns grouped by source table name, but users may want to organize the columns differently. Use "Group by Folder" in Tableau and create folders to group different columns.
+
+     ![](/images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau.md b/website/_docs20/tutorial/tableau.md
new file mode 100644
index 0000000..e46b4e6
--- /dev/null
+++ b/website/_docs20/tutorial/tableau.md
@@ -0,0 +1,113 @@
+---
+layout: docs20
+title:  Tableau 8
+categories: tutorial
+permalink: /docs20/tutorial/tableau.html
+---
+
+> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
+> 
+> * Only the "managed" analysis path is supported; the Kylin engine will raise an exception for unexpected dimensions or metrics
+> * Always select the fact table first, then add lookup tables with the correct join conditions (the join types defined in the cube)
+> * Do not try to join between fact tables or between lookup tables;
+> * You can try to use a high cardinality dimension like seller id as a Tableau filter, but the engine will only return a limited number of seller ids in Tableau's filter for now.
+
+### For Tableau 9.x User
+Please refer to the [Tableau 9.x Tutorial](./tableau_91.html) for a detailed guide.
+
+### Step 1. Install Kylin ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+
+### Step 2. Connect to Kylin Server
+> We recommend connecting using the driver instead of a DSN.
+
+Connect Using Driver: Select "Other Database(ODBC)" in the left panel and choose KylinODBCDriver in the pop-up window. 
+
+![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
+
+Enter your server location and credentials: server host, port, username and password.
+
+![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
+
+Click "Connect" to get the list of projects that you have permission to access. See details about permissions in the [Kylin Cube Permission Grant Tutorial](./acl.html). Then choose the project you want to connect to in the drop-down list. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
+
+Click "Done" to complete the connection.
+
+![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
+
+### Step 3. Using Single Table or Multiple Tables
+> Limitations
+> 
+>    * You must select the FACT table first
+>    * Selecting from a lookup table only is not supported
+>    * The join conditions must match the cube definition
+
+**Select Fact Table**
+
+Select `Multiple Tables`.
+
+![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
+
+Then click `Add Table...` to add a fact table.
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
+
+**Select Look-up Table**
+
+Click `Add Table...` to add a look-up table. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
+
+Set up the join clause carefully. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
+
+Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
+
+**Using Connect Live**
+
+There are three types of `Data Connection`. Choose the `Connect Live` option. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
+
+Then you can enjoy analyzing with Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
+
+**Add additional look-up Tables**
+
+Click `Data` in the top menu bar, select `Edit Tables...` to update the look-up table information.
+
+![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
+
+### Step 4. Using Customized SQL
+Using customized SQL resembles using a single table/multiple tables, except that you paste your SQL in the `Custom SQL` tab and then follow the same instructions as above.
+
+![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
+
+### Step 5. Publish to Tableau Server
+Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
+Click `Server` in the top menu bar, select `Publish Workbook...`. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
+
+Then sign in to your Tableau Server and prepare to publish. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
+
+If you are connecting using the driver instead of a DSN, you'll also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
+
+![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
+
+### Tips
+* Hide Table name in Tableau
+
+    * Tableau displays columns grouped by source table name, but users may want to organize the columns differently. Use "Group by Folder" in Tableau and create folders to group different columns.
+
+     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)


[3/5] kylin git commit: prepare docs for 2.0

Posted by li...@apache.org.
http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/release_notes.md
----------------------------------------------------------------------
diff --git a/website/_docs20/release_notes.md b/website/_docs20/release_notes.md
new file mode 100644
index 0000000..7f8138f
--- /dev/null
+++ b/website/_docs20/release_notes.md
@@ -0,0 +1,1333 @@
+---
+layout: docs20
+title:  Apache Kylin Release Notes
+categories: gettingstarted
+permalink: /docs20/release_notes.html
+---
+
+To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/), 
+where the source code package, binary package, ODBC driver and installation guide are available.
+
+For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
+
+or send it to an Apache Kylin mailing list:
+
+* User related: [user@kylin.apache.org](mailto:user@kylin.apache.org)
+* Development related: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
+
+## v1.6.0 - 2016-11-26
+_Tag:_ [kylin-1.6.0](https://github.com/apache/kylin/tree/kylin-1.6.0)
+This is a major release with better support for using Apache Kafka as a data source. See [how to upgrade](/docs16/howto/howto_upgrade.html) for upgrade instructions.
+
+__New Feature__
+
+* [KYLIN-1726] - Scalable streaming cubing
+* [KYLIN-1919] - Support Embedded Structure when Parsing Streaming Message
+* [KYLIN-2055] - Add an encoder for Boolean type
+* [KYLIN-2067] - Add API to check and fill segment holes
+* [KYLIN-2079] - add explicit configuration knob for coprocessor timeout
+* [KYLIN-2088] - Support intersect count for calculation of retention or conversion rates
+* [KYLIN-2125] - Support using beeline to load hive table metadata
+
+__Bug__
+
+* [KYLIN-1565] - Read the kv max size from HBase config
+* [KYLIN-1820] - Column autocomplete should remove the user input in model designer
+* [KYLIN-1828] - java.lang.StringIndexOutOfBoundsException in org.apache.kylin.storage.hbase.util.StorageCleanupJob
+* [KYLIN-1967] - Dictionary rounding can cause IllegalArgumentException in GTScanRangePlanner
+* [KYLIN-1978] - kylin.sh compatible issue on Ubuntu
+* [KYLIN-1990] - The SweetAlert at the front page may out of the page if the content is too long.
+* [KYLIN-2007] - CUBOID_CACHE is not cleared when rebuilding ALL cache
+* [KYLIN-2012] - more robust approach to hive schema changes
+* [KYLIN-2024] - kylin TopN only support the first measure 
+* [KYLIN-2027] - Error "connection timed out" occurs when zookeeper's port is set in hbase.zookeeper.quorum of hbase-site.xml
+* [KYLIN-2028] - find-*-dependency script fail on Mac OS
+* [KYLIN-2035] - Auto Merge Submit Continuously
+* [KYLIN-2041] - Wrong parameter definition in Get Hive Tables REST API
+* [KYLIN-2043] - Rollback httpclient to 4.2.5 to align with Hadoop 2.6/2.7
+* [KYLIN-2044] - Unclosed DataInputByteBuffer in BitmapCounter#peekLength
+* [KYLIN-2045] - Wrong argument order in JobInstanceExtractor#executeExtract()
+* [KYLIN-2047] - Ineffective null check in MetadataManager
+* [KYLIN-2050] - Potentially ineffective call to close() in QueryCli
+* [KYLIN-2051] - Potentially ineffective call to IOUtils.closeQuietly()
+* [KYLIN-2052] - Edit "Top N" measure, the "group by" column wasn't displayed
+* [KYLIN-2059] - Concurrent build issue in CubeManager.calculateToBeSegments()
+* [KYLIN-2069] - NPE in LookupStringTable
+* [KYLIN-2078] - Can't see generated SQL at Web UI
+* [KYLIN-2084] - Unload sample table failed
+* [KYLIN-2085] - PrepareStatement return incorrect result in some cases
+* [KYLIN-2086] - Still report error when there is more than 12 dimensions in one agg group
+* [KYLIN-2093] - Clear cache in CubeMetaIngester
+* [KYLIN-2097] - Get 'Column does not exist in row key desc" on cube has TopN measure
+* [KYLIN-2099] - Import table error of sample table KYLIN_CAL_DT
+* [KYLIN-2106] - UI bug - Advanced Settings - Rowkeys - new Integer dictionary encoding - could possibly impact also cube metadata
+* [KYLIN-2109] - Deploy coprocessor only this server own the table
+* [KYLIN-2110] - Ineffective comparison in BooleanDimEnc#equals()
+* [KYLIN-2114] - WEB-Global-Dictionary bug fix and improve
+* [KYLIN-2115] - some extended column query returns wrong answer
+* [KYLIN-2116] - when hive field delimitor exists in table field values, fields order is wrong
+* [KYLIN-2119] - Wrong chart value and sort when process scientific notation 
+* [KYLIN-2120] - kylin1.5.4.1 with cdh5.7 cube sql Oops Faild to take action
+* [KYLIN-2121] - Failed to pull data to PowerBI or Excel on some query
+* [KYLIN-2127] - UI bug fix for Extend Column
+* [KYLIN-2130] - QueryMetrics concurrent bug fix
+* [KYLIN-2132] - Unable to pull data from Kylin Cube ( learn_kylin cube ) to Excel or Power BI for Visualization and some dimensions are not showing up.
+* [KYLIN-2134] - Kylin will treat empty string as NULL by mistake
+* [KYLIN-2137] - Failed to run mr job when user put a kafka jar in hive's lib folder
+* [KYLIN-2138] - Unclosed ResultSet in BeelineHiveClient
+* [KYLIN-2146] - "Streaming Cluster" page should remove "Margin" inputbox
+* [KYLIN-2152] - TopN group by column does not distinguish between NULL and ""
+* [KYLIN-2154] - source table rows will be skipped if TOPN's group column contains NULL values
+* [KYLIN-2158] - Delete joint dimension not right
+* [KYLIN-2159] - Redistribution Hive Table Step always requires row_count filename as 000000_0 
+* [KYLIN-2167] - FactDistinctColumnsReducer may get wrong max/min partition col value
+* [KYLIN-2173] - push down limit leads to wrong answer when filter is loosened
+* [KYLIN-2178] - CubeDescTest is unstable
+* [KYLIN-2201] - Cube desc and aggregation group rule combination max check fail
+* [KYLIN-2226] - Build Dimension Dictionary Error
+
+__Improvement__
+
+* [KYLIN-1042] - Horizontal scalable solution for streaming cubing
+* [KYLIN-1827] - Send mail notification when runtime exception throws during build/merge cube
+* [KYLIN-1839] - improvement set classpath before submitting mr job
+* [KYLIN-1917] - TopN counter merge performance improvement
+* [KYLIN-1962] - Split kylin.properties into two files
+* [KYLIN-1999] - Use some compression at UT/IT
+* [KYLIN-2019] - Add license checker into checkstyle rule
+* [KYLIN-2033] - Refactor broadcast of metadata change
+* [KYLIN-2042] - QueryController puts entry in Cache w/o checking QueryCacheEnabled
+* [KYLIN-2054] - TimedJsonStreamParser should support other time format
+* [KYLIN-2068] - Import hive comment when sync tables
+* [KYLIN-2070] - UI changes for allowing concurrent build/refresh/merge
+* [KYLIN-2073] - Need timestamp info for diagnose  
+* [KYLIN-2075] - TopN measure: need select "constant" + "1" as the SUM|ORDER parameter
+* [KYLIN-2076] - Improve sample cube and data
+* [KYLIN-2080] - UI: allow multiple building jobs for the same cube
+* [KYLIN-2082] - Support to change streaming configuration
+* [KYLIN-2089] - Make update HBase coprocessor concurrent
+* [KYLIN-2090] - Allow updating cube level config even the cube is ready
+* [KYLIN-2091] - Add API to init the start-point (of each parition) for streaming cube
+* [KYLIN-2095] - Hive mr job use overrided MR job configuration by cube properties
+* [KYLIN-2098] - TopN support query UHC column without sorting by sum value
+* [KYLIN-2100] - Allow cube to override HIVE job configuration by properties
+* [KYLIN-2108] - Support usage of schema name "default" in SQL
+* [KYLIN-2111] - only allow columns from Model dimensions when add group by column to TOP_N
+* [KYLIN-2112] - Allow a column be a dimension as well as "group by" column in TopN measure
+* [KYLIN-2113] - Need sort by columns in SQLDigest
+* [KYLIN-2118] - allow user view CubeInstance json even cube is ready
+* [KYLIN-2122] - Move the partition offset calculation before submitting job
+* [KYLIN-2126] - use column name as default dimension name when auto generate dimension for lookup table
+* [KYLIN-2140] - rename packaged js with different name when build
+* [KYLIN-2143] - allow more options from Extended Columns,COUNT_DISTINCT,RAW_TABLE
+* [KYLIN-2162] - Improve the cube validation error message
+* [KYLIN-2221] - rethink on KYLIN-1684
+* [KYLIN-2083] - more RAM estimation test for MeasureAggregator and GTAggregateScanner
+* [KYLIN-2105] - add QueryId
+* [KYLIN-1321] - Add derived checkbox for lookup table columns on Auto Generate Dimensions panel
+* [KYLIN-1995] - Upgrade MapReduce properties which are deprecated
+
+__Task__
+
+* [KYLIN-2072] - Cleanup old streaming code
+* [KYLIN-2081] - UI change to support embeded streaming message
+* [KYLIN-2171] - Release 1.6.0
+
+
+## v1.5.4.1 - 2016-09-28
+_Tag:_ [kylin-1.5.4.1](https://github.com/apache/kylin/tree/kylin-1.5.4.1)
+This version fixes major bugs introduced in 1.5.4; the metadata and HBase coprocessor are compatible with 1.5.4.
+
+__Bug__
+
+* [KYLIN-2010] - Date dictionary return wrong SQL result
+* [KYLIN-2026] - NPE occurs when build a cube without partition column
+* [KYLIN-2032] - Cube build failed when partition column isn't in dimension list
+
+## v1.5.4 - 2016-09-15
+_Tag:_ [kylin-1.5.4](https://github.com/apache/kylin/tree/kylin-1.5.4)
+This version includes bug fixes and enhancements as well as new features. It is backward compatible with v1.5.3, but after upgrading you still need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__New Feature__
+
+* [KYLIN-1732] - Support Window Function
+* [KYLIN-1767] - UI for TopN: specify encoding and multiple "group by"
+* [KYLIN-1849] - Search cube by name in Web UI
+* [KYLIN-1908] - Collect Metrics to JMX
+* [KYLIN-1921] - Support Grouping Functions
+* [KYLIN-1964] - Add a companion tool of CubeMetaExtractor for cube importing
+
+__Bug__
+
+* [KYLIN-962] - [UI] Cube Designer can't drag rowkey normally
+* [KYLIN-1194] - Filter(CubeName) on Jobs/Monitor page works only once
+* [KYLIN-1488] - When modifying a model, Save after deleting a lookup table. The internal error will pop up.
+* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
+* [KYLIN-1808] - unload non existing table cause NPE
+* [KYLIN-1834] - java.lang.IllegalArgumentException: Value not exists! - in Step 4 - Build Dimension Dictionary
+* [KYLIN-1883] - Consensus Problem when running the tool, MetadataCleanupJob
+* [KYLIN-1889] - Didn't deal with the failure of renaming folder in hdfs when running the tool CubeMigrationCLI
+* [KYLIN-1929] - Error to load slow query in "Monitor" page for non-admin user
+* [KYLIN-1933] - Deploy in cluster mode, the "query" node report "scheduler has not been started" every second
+* [KYLIN-1934] - 'Value not exist' During Cube Merging Caused by Empty Dict
+* [KYLIN-1939] - Linkage error while executing any queries
+* [KYLIN-1942] - Models are missing after change project's name
+* [KYLIN-1953] - Error handling for diagnosis
+* [KYLIN-1956] - Can't query from child cube of a hybrid cube after its status changed from disabled to enabled
+* [KYLIN-1961] - Project name is always constant instead of real project name in email notification
+* [KYLIN-1970] - System Menu UI ACL issue
+* [KYLIN-1972] - Access denied when query seek to hybrid
+* [KYLIN-1973] - java.lang.NegativeArraySizeException when Build Dimension Dictionary
+* [KYLIN-1982] - CubeMigrationCLI: associate model with project
+* [KYLIN-1986] - CubeMigrationCLI: make global dictionary unique
+* [KYLIN-1992] - Clear ThreadLocal Contexts when query failed before scaning HBase
+* [KYLIN-1996] - Keep original column order when designing cube
+* [KYLIN-1998] - Job engine lock is not release at shutdown
+* [KYLIN-2003] - error start time at query result page
+* [KYLIN-2005] - Move all storage side behavior hints to GTScanRequest
+
+__Improvement__
+
+* [KYLIN-672] - Add Env and Project Info in job email notification
+* [KYLIN-1702] - The Key of the Snapshot to the related lookup table may be not informative
+* [KYLIN-1855] - Should exclude those joins in whose related lookup tables no dimensions are used in cube
+* [KYLIN-1858] - Remove all InvertedIndex(Streaming purpose) related codes and tests
+* [KYLIN-1866] - Add tip for field at 'Add Streaming' table page.
+* [KYLIN-1867] - Upgrade dependency libraries
+* [KYLIN-1874] - Make roaring bitmap version determined
+* [KYLIN-1898] - Upgrade to Avatica 1.8 or higher
+* [KYLIN-1904] - WebUI for GlobalDictionary
+* [KYLIN-1906] - Add more comments and default value for kylin.properties
+* [KYLIN-1910] - Support Separate HBase Cluster with NN HA and Kerberos Authentication
+* [KYLIN-1920] - Add view CubeInstance json function
+* [KYLIN-1922] - Improve the logic to decide whether to pre aggregate on Region server
+* [KYLIN-1923] - Add access controller to query
+* [KYLIN-1924] - Region server metrics: replace int type for long type for scanned row count
+* [KYLIN-1925] - Do not allow cross project clone for cube
+* [KYLIN-1926] - Loosen the constraint on FK-PK data type matching
+* [KYLIN-1936] - Improve enable limit logic (exactAggregation is too strict)
+* [KYLIN-1940] - Add owner for DataModel
+* [KYLIN-1941] - Show submitter for slow query
+* [KYLIN-1954] - BuildInFunctionTransformer should be executed per CubeSegmentScanner
+* [KYLIN-1963] - Delegate the loading of certain package (like slf4j) to tomcat's parent classloader
+* [KYLIN-1965] - Check duplicated measure name
+* [KYLIN-1966] - Refactor IJoinedFlatTableDesc
+* [KYLIN-1979] - Move hackNoGroupByAggregation to cube-based storage implementations
+* [KYLIN-1984] - Don't use compression in packaging configuration
+* [KYLIN-1985] - SnapshotTable should only keep the columns described in tableDesc
+* [KYLIN-1997] - Add pivot feature back in query result page
+* [KYLIN-2004] - Make the creating intermediate hive table steps configurable (two options)
+
+## v1.5.3 - 2016-07-28
+_Tag:_ [kylin-1.5.3](https://github.com/apache/kylin/tree/kylin-1.5.3)
+This version includes many bug fixes and enhancements as well as new features. It is backward compatible with v1.5.2, but after upgrading you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
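+Updating the coprocessor after an upgrade is done with the `DeployCoprocessorCLI` tool shipped with Kylin. As a minimal sketch (the exact jar file name and paths depend on your installation; the linked how-to is authoritative):
+
+```shell
+# Redeploy the Kylin HBase coprocessor to all of Kylin's HTables.
+# Run on the Kylin server node; $KYLIN_HOME must point at the new installation.
+$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.DeployCoprocessorCLI \
+    $KYLIN_HOME/lib/kylin-coprocessor-*.jar all
+```
+
+The trailing `all` argument asks the tool to update every Kylin HTable rather than a single cube's table.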
+
+__New Feature__
+
+* [KYLIN-1478] - TopN measure should support non-dictionary encoding for ultra high cardinality
+* [KYLIN-1693] - Support multiple group-by columns for TOP_N measure
+* [KYLIN-1752] - Add an option to fail cube build job when source table is empty
+* [KYLIN-1756] - Allow user to run MR jobs against different Hadoop queues
+
+__Bug__
+
+* [KYLIN-1499] - Couldn't save query, error in backend
+* [KYLIN-1568] - Calculate row value buffer size instead of hard coded ROWVALUE_BUFFER_SIZE
+* [KYLIN-1645] - Exception inside coprocessor should report back to the query thread
+* [KYLIN-1646] - Column appeared twice if it was declared as both dimension and measure
+* [KYLIN-1676] - High CPU in TrieDictionary due to incorrect use of HashMap
+* [KYLIN-1679] - bin/get-properties.sh cannot get property which contains space or equals sign
+* [KYLIN-1684] - query on table "kylin_sales" return empty resultset after cube "kylin_sales_cube" which generated by sample.sh is ready
+* [KYLIN-1694] - make multiply coefficient configurable when estimating cuboid size
+* [KYLIN-1695] - Skip cardinality calculation job when loading hive table
+* [KYLIN-1703] - The not-thread-safe ToolRunner.run() will cause concurrency issue in job engine
+* [KYLIN-1704] - When load empty snapshot, NULL Pointer Exception occurs
+* [KYLIN-1723] - GTAggregateScanner$Dump.flush() must not write the WHOLE metrics buffer
+* [KYLIN-1738] - MRJob Id is not saved to kylin jobs if MR job is killed
+* [KYLIN-1742] - kylin.sh should always set KYLIN_HOME to an absolute path
+* [KYLIN-1755] - TopN Measure IndexOutOfBoundsException
+* [KYLIN-1760] - Save query hits org.apache.hadoop.hbase.TableNotFoundException: kylin_metadata_user
+* [KYLIN-1762] - Query threw NPE with 3 or more join conditions
+* [KYLIN-1769] - There is no response when click "Property" button at Cube Designer
+* [KYLIN-1777] - Streaming cube build shouldn't check working segment
+* [KYLIN-1780] - Potential issue in SnapshotTable.equals()
+* [KYLIN-1781] - kylin.properties encoding error while contain chinese prop key or value
+* [KYLIN-1783] - Can't add override property at cube design 'Configuration Overwrites' step.
+* [KYLIN-1785] - NoSuchElementException when Mandatory Dimensions contains all Dimensions
+* [KYLIN-1787] - Properly deal with limit clause in CubeHBaseEndpointRPC (SELECT * problem)
+* [KYLIN-1788] - Allow arbitrary number of mandatory dimensions in one aggregation group
+* [KYLIN-1789] - Couldn't use View as Lookup when join type is "inner"
+* [KYLIN-1795] - bin/sample.sh doesn't work when configured hive client is beeline
+* [KYLIN-1800] - IllegalArgumentExceptio: Too many digits for NumberDictionary: -0.009999999999877218. Expect 19 digits before decimal point at max.
+* [KYLIN-1803] - ExtendedColumn Measure Encoding with Non-ascii Characters
+* [KYLIN-1811] - Error step may be skipped sometimes when resume a cube job
+* [KYLIN-1816] - More than one base KylinConfig exist in spring JVM
+* [KYLIN-1817] - No result from JDBC with Date filter in prepareStatement
+* [KYLIN-1838] - Fix sample cube definition
+* [KYLIN-1848] - Can't sort cubes by any field in Web UI
+* [KYLIN-1862] - "table not found" in "Build Dimension Dictionary" step
+* [KYLIN-1879] - RestAPI /api/jobs always returns 0 for exec_start_time and exec_end_time fields
+* [KYLIN-1882] - it report can't find the intermediate table in '#4 Step Name: Build Dimension Dictionary' when use hive view as lookup table
+* [KYLIN-1896] - JDBC support mybatis
+* [KYLIN-1905] - Wrong Default Date in Cube Build Web UI
+* [KYLIN-1909] - Wrong access control to rest get cubes
+* [KYLIN-1911] - NPE when extended column has NULL value
+* [KYLIN-1912] - Create Intermediate Flat Hive Table failed when using beeline
+* [KYLIN-1913] - query log printed abnormally if the query contains "\r" (not "\r\n")
+* [KYLIN-1918] - java.lang.UnsupportedOperationException when unload hive table
+
+__Improvement__
+
+* [KYLIN-1319] - Find a better way to check hadoop job status
+* [KYLIN-1379] - More stable and functional precise count distinct implements after KYLIN-1186
+* [KYLIN-1656] - Improve performance of MRv2 engine by making each mapper handles a configured number of records
+* [KYLIN-1657] - Add new configuration kylin.job.mapreduce.min.reducer.number
+* [KYLIN-1669] - Deprecate the "Capacity" field from DataModel
+* [KYLIN-1677] - Distribute source data by certain columns when creating flat table
+* [KYLIN-1705] - Global (and more scalable) dictionary
+* [KYLIN-1706] - Allow cube to override MR job configuration by properties
+* [KYLIN-1714] - Make job/source/storage engines configurable from kylin.properties
+* [KYLIN-1717] - Make job engine scheduler configurable
+* [KYLIN-1718] - Grow ByteBuffer Dynamically in Cube Building and Query
+* [KYLIN-1719] - Add config in scan request to control compress the query result or not
+* [KYLIN-1724] - Support Amazon EMR
+* [KYLIN-1725] - Use KylinConfig inside coprocessor
+* [KYLIN-1728] - Introduce dictionary metadata
+* [KYLIN-1731] - allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
+* [KYLIN-1747] - Calculate all 0 (except mandatory) cuboids
+* [KYLIN-1749] - Allow mandatory only cuboid
+* [KYLIN-1751] - Make kylin log configurable
+* [KYLIN-1766] - CubeTupleConverter.translateResult() is slow due to date conversion
+* [KYLIN-1775] - Add Cube Migrate Support for Global Dictionary
+* [KYLIN-1782] - API redesign for CubeDesc
+* [KYLIN-1786] - Frontend work for KYLIN-1313 (extended columns as measure)
+* [KYLIN-1792] - behaviours for non-aggregated queries
+* [KYLIN-1805] - It's easily got stuck when deleting HTables during running the StorageCleanupJob
+* [KYLIN-1815] - Cleanup package size
+* [KYLIN-1818] - change kafka dependency to provided
+* [KYLIN-1821] - Reformat all of the java files and enable checkstyle to enforce code formatting
+* [KYLIN-1823] - refactor kylin-server packaging
+* [KYLIN-1846] - minimize dependencies of JDBC driver
+* [KYLIN-1884] - Reload metadata automatically after migrating cube
+* [KYLIN-1894] - GlobalDictionary may corrupt when server suddenly crash
+* [KYLIN-1744] - Separate concepts of source offset and date range on cube segments
+* [KYLIN-1654] - Upgrade httpclient dependency
+* [KYLIN-1774] - Update Kylin's tomcat version to 7.0.69
+* [KYLIN-1861] - Hive may fail to create flat table with "GC overhead error"
+
+## v1.5.2.1 - 2016-06-07
+_Tag:_ [kylin-1.5.2.1](https://github.com/apache/kylin/tree/kylin-1.5.2.1)
+
+This is a hot-fix release on top of v1.5.2; no new features are introduced. Please upgrade to this version.
+
+__Bug__
+
+* [KYLIN-1758] - createLookupHiveViewMaterializationStep will create intermediate table for fact table
+* [KYLIN-1739] - kylin_job_conf_inmem.xml can impact non-inmem MR job
+
+
+## v1.5.2 - 2016-05-26
+_Tag:_ [kylin-1.5.2](https://github.com/apache/kylin/tree/kylin-1.5.2)
+
+This version is backward compatible with v1.5.1, but after upgrading from v1.5.1 to v1.5.2 you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__Highlights__
+
+* [KYLIN-1077] - Support Hive View as Lookup Table
+* [KYLIN-1515] - Make Kylin run on MapR
+* [KYLIN-1600] - Download diagnosis zip from GUI
+* [KYLIN-1672] - support kylin on cdh 5.7
+
+__New Feature__
+
+* [KYLIN-1016] - Count distinct on any dimension should work even not a predefined measure
+* [KYLIN-1077] - Support Hive View as Lookup Table
+* [KYLIN-1441] - Display time column as partition column
+* [KYLIN-1515] - Make Kylin run on MapR
+* [KYLIN-1600] - Download diagnosis zip from GUI
+* [KYLIN-1672] - support kylin on cdh 5.7
+
+__Improvement__
+
+* [KYLIN-869] - Enhance mail notification
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-1313] - Enable deriving dimensions on non PK/FK
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1340] - Tools to extract all cube/hybrid/project related metadata to facilitate diagnosing/debugging/sharing
+* [KYLIN-1381] - change RealizationCapacity from three profiles to specific numbers
+* [KYLIN-1391] - quicker and better response to v2 storage engine's rpc timeout exception
+* [KYLIN-1418] - Memory hungry cube should select LAYER and INMEM cubing smartly
+* [KYLIN-1432] - For GUI, to add one option "yyyy-MM-dd HH:MM:ss" for Partition Date Column
+* [KYLIN-1453] - cuboid sharding based on specific column
+* [KYLIN-1487] - attach a hyperlink to introduce new aggregation group
+* [KYLIN-1526] - Move query cache back to query controller level
+* [KYLIN-1542] - Hfile owner is not hbase
+* [KYLIN-1544] - Make hbase encoding and block size configurable just like hbase compression
+* [KYLIN-1561] - Refactor storage engine(v2) to be extension friendly
+* [KYLIN-1566] - Add and use a separate kylin_job_conf.xml for in-mem cubing
+* [KYLIN-1567] - Front-end work for KYLIN-1557
+* [KYLIN-1578] - Coprocessor thread voluntarily stop itself when it reaches timeout
+* [KYLIN-1579] - IT preparation classes like BuildCubeWithEngine should exit with status code upon build exception
+* [KYLIN-1580] - Use 1 byte instead of 8 bytes as column indicator in fact distinct MR job
+* [KYLIN-1584] - Specify region cut size in cubedesc and leave the RealizationCapacity in model as a hint
+* [KYLIN-1585] - make MAX_HBASE_FUZZY_KEYS in GTScanRangePlanner configurable
+* [KYLIN-1587] - show cube level configuration overwrites properties in CubeDesigner
+* [KYLIN-1591] - enabling different block size setting for small column families
+* [KYLIN-1599] - Add "isShardBy" flag in rowkey panel
+* [KYLIN-1601] - Need not to shrink scan cache when hbase rows can be large
+* [KYLIN-1602] - User could dump hbase usage for diagnosis
+* [KYLIN-1614] - Bring more information in diagnosis tool
+* [KYLIN-1621] - Use deflate level 1 to enable compression "on the fly"
+* [KYLIN-1623] - Make the hll precision for data sampling configurable
+* [KYLIN-1624] - HyperLogLogPlusCounter will become inaccurate when there're billions of entries
+* [KYLIN-1625] - GC log overwrites old one after restart Kylin service
+* [KYLIN-1627] - add backdoor toggle to dump binary cube storage response for further analysis
+* [KYLIN-1731] - allow non-admin user to edit 'Advanced Setting' step in CubeDesigner
+
+__Bug__
+
+* [KYLIN-989] - column width is too narrow for timestamp field
+* [KYLIN-1197] - cube data not updated after purge
+* [KYLIN-1305] - Can not get more than one system admin email in config
+* [KYLIN-1551] - Should check and ensure TopN measure has two parameters specified
+* [KYLIN-1563] - Unsafe check of initiated in HybridInstance#init()
+* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
+* [KYLIN-1574] - Unclosed ResultSet in QueryService#getMetadata()
+* [KYLIN-1581] - NPE in Job engine when execute MR job
+* [KYLIN-1593] - Agg group info will be blank when trying to edit cube
+* [KYLIN-1595] - columns in metric could also be in filter/groupby
+* [KYLIN-1596] - UT fail, due to String encoding CharsetEncoder mismatch
+* [KYLIN-1598] - cannot run complete UT at windows dev machine
+* [KYLIN-1604] - Concurrent write issue on hdfs when deploy coprocessor
+* [KYLIN-1612] - Cube is ready but insight tables not result
+* [KYLIN-1615] - UT 'HiveCmdBuilderTest' fail on 'testBeeline'
+* [KYLIN-1619] - Can't find any realization coursed by Top-N measure
+* [KYLIN-1622] - sql not executed and report topN error
+* [KYLIN-1631] - Web UI of TopN, "group by" column couldn't be a dimension column
+* [KYLIN-1634] - Unclosed OutputStream in SSHClient#scpFileToLocal()
+* [KYLIN-1637] - Sample cube build error
+* [KYLIN-1638] - Unclosed HBaseAdmin in ToolUtil#getHBaseMetaStoreId()
+* [KYLIN-1639] - Wrong logging of JobID in MapReduceExecutable.java
+* [KYLIN-1643] - Kylin's hll counter count "NULL" as a value
+* [KYLIN-1647] - Purge a cube, and then build again, the start date is not updated
+* [KYLIN-1650] - java.io.IOException: Filesystem closed - in Cube Build Step 2 (MapR)
+* [KYLIN-1655] - function name 'getKylinPropertiesAsInputSteam' misspelt
+* [KYLIN-1660] - Streaming/kafka config not match with table name
+* [KYLIN-1662] - tableName got truncated during request mapping for /tables/tableName
+* [KYLIN-1666] - Should check project selection before add a stream table
+* [KYLIN-1667] - Streaming table name should allow enter "DB.TABLE" format
+* [KYLIN-1673] - make sure metadata in 1.5.2 compatible with 1.5.1
+* [KYLIN-1678] - Metadata clean only cleans FINISHED and DISCARDED jobs, but the correct job status is SUCCEED
+* [KYLIN-1685] - error happens while executing a SQL containing '?' using Statement
+* [KYLIN-1688] - Illegal char on result dataset table
+* [KYLIN-1721] - KylinConfigExt lost base properties when store into file
+* [KYLIN-1722] - IntegerDimEnc serialization exception inside coprocessor
+
+## v1.5.1 - 2016-04-13
+_Tag:_ [kylin-1.5.1](https://github.com/apache/kylin/tree/kylin-1.5.1)
+
+This version is backward compatible with v1.5.0, but after upgrading from v1.5.0 to v1.5.1 you need to update the coprocessor; refer to [How to update coprocessor](/docs15/howto/howto_update_coprocessor.html).
+
+__Highlights__
+
+* [KYLIN-1122] - Kylin support detail data query from fact table
+* [KYLIN-1492] - Custom dimension encoding
+* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
+* [KYLIN-1534] - Cube specific config, override global kylin.properties
+* [KYLIN-1546] - Tool to dump information for diagnosis
+
+__New Feature__
+
+* [KYLIN-1122] - Kylin support detail data query from fact table
+* [KYLIN-1378] - Add UI for TopN measure
+* [KYLIN-1492] - Custom dimension encoding
+* [KYLIN-1495] - Metadata upgrade from 1.0~1.3 to 1.5, including metadata correction, relevant tools, etc.
+* [KYLIN-1501] - Run some classes at the beginning of kylin server startup
+* [KYLIN-1503] - Print version information with kylin.sh
+* [KYLIN-1531] - Add smoke test scripts
+* [KYLIN-1534] - Cube specific config, override global kylin.properties
+* [KYLIN-1540] - REST API for deleting segment
+* [KYLIN-1541] - IntegerDimEnc, custom dimension encoding for integers
+* [KYLIN-1546] - Tool to dump information for diagnosis
+* [KYLIN-1550] - Persist some recent bad query
+
+__Improvement__
+
+* [KYLIN-1490] - Use InstallShield 2015 to generate ODBC Driver setup files
+* [KYLIN-1498] - cube desc signature not calculated correctly
+* [KYLIN-1500] - streaming_fillgap cause out of memory
+* [KYLIN-1502] - When cube is not empty, only signature consistent cube desc updates are allowed
+* [KYLIN-1504] - Use NavigableSet to store rowkey and use prefix filter to check resource path prefix instead String comparison on tomcat side
+* [KYLIN-1505] - Combine guava filters with Predicates.and
+* [KYLIN-1543] - GTFilterScanner performance tuning
+* [KYLIN-1557] - Enhance the check on aggregation group dimension number
+
+__Bug__
+
+* [KYLIN-1373] - need to encode export query url to get right result in query page
+* [KYLIN-1434] - Kylin Job Monitor API: /kylin/api/jobs is too slow in large kylin deployment
+* [KYLIN-1472] - Export csv get error when there is a plus sign in the sql
+* [KYLIN-1486] - java.lang.IllegalArgumentException: Too many digits for NumberDictionary
+* [KYLIN-1491] - Should return base cuboid as valid cuboid if no aggregation group matches
+* [KYLIN-1493] - make ExecutableManager.getInstance thread safe
+* [KYLIN-1497] - Make three <class>.getInstance thread safe
+* [KYLIN-1507] - Couldn't find hive dependency jar on some platform like CDH
+* [KYLIN-1513] - Time partitioning doesn't work across multiple days
+* [KYLIN-1514] - MD5 validation of Tomcat does not work when package tar
+* [KYLIN-1521] - Couldn't refresh a cube segment whose start time is before 1970-01-01
+* [KYLIN-1522] - HLLC is incorrect when result is feed from cache
+* [KYLIN-1524] - Get "java.lang.Double cannot be cast to java.lang.Long" error when Top-N metris data type is BigInt
+* [KYLIN-1527] - Columns with all NULL values can't be queried
+* [KYLIN-1537] - Failed to create flat hive table, when name is too long
+* [KYLIN-1538] - DoubleDeltaSerializer cause obvious error after deserialize and serialize
+* [KYLIN-1553] - Cannot find rowkey column "COL_NAME" in cube CubeDesc
+* [KYLIN-1564] - Unclosed table in BuildCubeWithEngine#checkHFilesInHBase()
+* [KYLIN-1569] - Select any column when adding a custom aggregation in GUI
+
+## v1.5.0 - 2016-03-12
+_Tag:_ [kylin-1.5.0](https://github.com/apache/kylin/tree/kylin-1.5.0)
+
+__This version is not backward compatible.__ The cube and metadata formats have been refactored to achieve a severalfold performance improvement. We recommend this version, but do not suggest upgrading directly from a previous deployment; a clean, fresh deployment of this version is strongly recommended. If you have to upgrade from a previous deployment, an upgrade guide will be provided by the community later.
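+Before attempting any upgrade or migration, backing up the metadata store is prudent. A minimal sketch using the `metastore.sh` script introduced in KYLIN-868 (the backup folder name below is illustrative; use the path the backup command actually prints):
+
+```shell
+# Dump all Kylin metadata from the metadata store to a local folder
+# under $KYLIN_HOME/meta_backups/.
+$KYLIN_HOME/bin/metastore.sh backup
+
+# Restore a previously taken backup if the upgrade must be rolled back.
+$KYLIN_HOME/bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_2016_03_12_00_00_00
+```
+
+Keeping a backup from before the upgrade makes it possible to return to the old deployment if the new version misbehaves.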
+
+__Highlights__
+
+* [KYLIN-875] - A plugin-able architecture, to allow alternative cube engine / storage engine / data source.
+* [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster by comparing hundreds of jobs.
+* [KYLIN-942] - A better storage engine, makes query roughly 2 times faster (especially for slow queries) by comparing tens of thousands sqls.
+* [KYLIN-738] - Streaming cubing EXPERIMENTAL support, source from kafka, build cube in-mem at minutes interval.
+* [KYLIN-242] - Redesign aggregation group, support of 20+ dimensions made easy.
+* [KYLIN-976] - Custom aggregation types (or UDF in other words).
+* [KYLIN-943] - TopN aggregation type.
+* [KYLIN-1065] - ODBC compatible with Tableau 9.1, MS Excel, MS PowerBI.
+* [KYLIN-1219] - Kylin support SSO with Spring SAML.
+
+__New Feature__
+
+* [KYLIN-528] - Build job flow for Inverted Index building
+* [KYLIN-579] - Unload table from Kylin
+* [KYLIN-596] - Support Excel and Power BI
+* [KYLIN-599] - Near real-time support
+* [KYLIN-607] - More efficient cube building
+* [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
+* [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
+* [KYLIN-728] - IGTStore implementation which use disk when memory runs short
+* [KYLIN-738] - StreamingOLAP
+* [KYLIN-749] - support timestamp type in II and cube
+* [KYLIN-774] - Automatically merge cube segments
+* [KYLIN-868] - add a metadata backup/restore script in bin folder
+* [KYLIN-886] - Data Retention for streaming data
+* [KYLIN-906] - cube retention
+* [KYLIN-943] - Approximate TopN supported by Cube
+* [KYLIN-986] - Generalize Streaming scripts and put them into code repository
+* [KYLIN-1219] - Kylin support SSO with Spring SAML
+* [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model
+* [KYLIN-1458] - Checking the consistency of cube segment host with the environment after cube migration
+* [KYLIN-976] - Support Custom Aggregation Types
+* [KYLIN-1054] - Support Hive client Beeline
+* [KYLIN-1128] - Clone Cube Metadata
+* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
+* [KYLIN-1483] - Command tool to visualize all cuboids in a cube/segment
+
+__Improvement__
+
+* [KYLIN-225] - Support edit "cost" of cube
+* [KYLIN-410] - table schema not expand when clicking the database text
+* [KYLIN-589] - Cleanup Intermediate hive table after cube build
+* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+* [KYLIN-633] - Support Timestamp for cube partition
+* [KYLIN-649] - move the cache layer from service tier back to storage tier
+* [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
+* [KYLIN-663] - Push time condition down to ii endpoint
+* [KYLIN-668] - Out of memory in mapper when building cube in mem
+* [KYLIN-671] - Implement fine grained cache for cube and ii
+* [KYLIN-674] - IIEndpoint return metrics as well
+* [KYLIN-675] - cube&model designer refactor
+* [KYLIN-678] - optimize RowKeyColumnIO
+* [KYLIN-697] - Reorganize all test cases to unit test and integration tests
+* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS
+* [KYLIN-708] - replace BitSet for AggrKey
+* [KYLIN-712] - some enhancement after code review
+* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+* [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
+* [KYLIN-719] - bundle statistics info in endpoint response
+* [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
+* [KYLIN-721] - streaming cli support third-party streammessage parser
+* [KYLIN-726] - add remote cli port configuration for KylinConfig
+* [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
+* [KYLIN-734] - Push cache layer to each storage engine
+* [KYLIN-752] - Improved IN clause performance
+* [KYLIN-753] - Make the dependency on hbase-common to "provided"
+* [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
+* [KYLIN-760] - Improve the hashing performance in Sampling cuboid size
+* [KYLIN-772] - Continue cube job when hive query return empty resultset
+* [KYLIN-773] - performance is slow list jobs
+* [KYLIN-783] - update hdp version in test cases to 2.2.4
+* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+* [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
+* [KYLIN-816] - Allow gap in cube segments, for streaming case
+* [KYLIN-822] - list cube overview in one page
+* [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
+* [KYLIN-838] - improve performance of job query
+* [KYLIN-844] - add backdoor toggles to control query behavior
+* [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
+* [KYLIN-858] - add snappy compression support
+* [KYLIN-866] - Confirm with user when he selects empty segments to merge
+* [KYLIN-869] - Enhance mail notification
+* [KYLIN-870] - Speed up hbase segments info by caching
+* [KYLIN-871] - growing dictionary for streaming case
+* [KYLIN-874] - script for fill streaming gap automatically
+* [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
+* [KYLIN-879] - add a tool to collect orphan hbases
+* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
+* [KYLIN-881] - Upgrade Calcite to 1.3.0
+* [KYLIN-882] - check access to kylin.hdfs.working.dir
+* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+* [KYLIN-893] - Remove the dependency on quartz and metrics
+* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+* [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
+* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+* [KYLIN-902] - move streaming related parameters into StreamingConfig
+* [KYLIN-909] - Adapt GTStore to hbase endpoint
+* [KYLIN-919] - more friendly UI for 0.8
+* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+* [KYLIN-927] - Real time cubes merging skipping gaps
+* [KYLIN-933] - friendly UI to use data model
+* [KYLIN-938] - add friendly tip to page when rest request failed
+* [KYLIN-942] - Cube parallel scan on Hbase
+* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+* [KYLIN-957] - Support HBase in a separate cluster
+* [KYLIN-960] - Split storage module to core-storage and storage-hbase
+* [KYLIN-973] - add a tool to analyse streaming output logs
+* [KYLIN-984] - Behavior change in streaming data consuming
+* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+* [KYLIN-1014] - Support kerberos authentication while getting status from RM
+* [KYLIN-1018] - make TimedJsonStreamParser default parser
+* [KYLIN-1019] - Remove v1 cube model classes from code repository
+* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+* [KYLIN-1025] - Save cube change is very slow
+* [KYLIN-1036] - Code Clean, remove code which never used at front end
+* [KYLIN-1041] - ADD Streaming UI
+* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+* [KYLIN-1058] - Remove "right join" during model creation
+* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+* [KYLIN-1065] - ODBC driver support tableau 9.1
+* [KYLIN-1068] - Optimize the memory footprint for TopN counter
+* [KYLIN-1069] - update tip for 'Partition Column' on UI
+* [KYLIN-1074] - Load hive tables with selecting mode
+* [KYLIN-1095] - Update AdminLTE to latest version
+* [KYLIN-1096] - Deprecate minicluster
+* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
+* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+* [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
+* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+* [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
+* [KYLIN-1135] - Pscan use share thread pool
+* [KYLIN-1136] - Distinguish fast build mode and complete build mode
+* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
+* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
+* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+* [KYLIN-1160] - Set default logger appender of log4j for JDBC
+* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
+* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+* [KYLIN-1190] - Make memory budget per query configurable
+* [KYLIN-1211] - Add 'Enable Cache' button in System page
+* [KYLIN-1234] - Cube ACL does not work
+* [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
+* [KYLIN-1237] - Revisit on cube size estimation
+* [KYLIN-1239] - attribute each htable with team contact and owner name
+* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
+* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
+* [KYLIN-1246] - get cubes API update - offset,limit not required
+* [KYLIN-1251] - add toggle event for tree label
+* [KYLIN-1259] - Change font/background color of job progress
+* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
+* [KYLIN-1266] - Tune release package size
+* [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
+* [KYLIN-1268] - Fix 2 kylin logs
+* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
+* [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
+* [KYLIN-1283] - Replace GTScanRequest's SerDer from Kryo to manual
+* [KYLIN-1287] - UI update for streaming build action
+* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
+* [KYLIN-1301] - fix segment pruning failure
+* [KYLIN-1308] - query storage v2 enable parallel cube visiting
+* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
+* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
+* [KYLIN-1318] - enable gc log for kylin server instance
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1327] - Tool for batch updating host information of htables
+* [KYLIN-1333] - Kylin Entity Permission Control
+* [KYLIN-1334] - allow truncating string for fixed length dimensions
+* [KYLIN-1341] - Display JSON of Data Model in the dialog
+* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
+* [KYLIN-1365] - Kylin ACL enhancement
+* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
+* [KYLIN-1424] - Should support multiple selection in picking up dimension/measure column step in data model wizard
+* [KYLIN-1438] - auto generate aggregation group
+* [KYLIN-1474] - expose list, remove and cat in metastore.sh
+* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
+
+* [KYLIN-242] - Redesign aggregation group
+* [KYLIN-770] - optimize memory usage for GTSimpleMemStore GTAggregationScanner
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-980] - FactDistinctColumnsJob to support high cardinality columns
+* [KYLIN-1079] - Manage large number of entries in metadata store
+* [KYLIN-1082] - Hive dependencies should be added to tmpjars
+* [KYLIN-1201] - Enhance project level ACL
+* [KYLIN-1222] - restore testing v1 query engine in case need it as a fallback for v2
+* [KYLIN-1232] - Refine ODBC Connection UI
+* [KYLIN-1237] - Revisit on cube size estimation
+* [KYLIN-1239] - attribute each htable with team contact and owner name
+* [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
+* [KYLIN-1265] - Make sure 1.4-rc query is no slower than 1.0
+* [KYLIN-1266] - Tune release package size
+* [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
+* [KYLIN-1283] - Replace GTScanRequest's SerDer from Kryo to manual
+* [KYLIN-1297] - Diagnose query performance issues in 1.4 branch
+* [KYLIN-1301] - fix segment pruning failure
+* [KYLIN-1308] - query storage v2 enable parallel cube visiting
+* [KYLIN-1318] - enable gc log for kylin server instance
+* [KYLIN-1327] - Tool for batch updating host information of htables
+* [KYLIN-1343] - Upgrade calcite version to 1.6
+* [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
+* [KYLIN-1366] - Bind metadata version with release version
+* [KYLIN-1389] - Formatting ODBC Drive C++ code
+* [KYLIN-1405] - Aggregation group validation
+* [KYLIN-1465] - Beautify kylin log to ease both production troubleshooting and CI debugging
+* [KYLIN-1475] - Inject ehcache manager for any test case that will touch ehcache manager
+
+__Bug__
+
+* [KYLIN-404] - Can't get cube source record size.
+* [KYLIN-457] - log4j error and dup lines in kylin.log
+* [KYLIN-521] - No verification even if join condition is invalid
+* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+* [KYLIN-635] - IN clause within CASE when is not working
+* [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
+* [KYLIN-660] - Make configurable of dictionary cardinality cap
+* [KYLIN-665] - buffer error while in mem cubing
+* [KYLIN-688] - possible memory leak for segmentIterator
+* [KYLIN-731] - Parallel stream build will throw OOM
+* [KYLIN-740] - Slowness with many IN() values
+* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+* [KYLIN-748] - II returned result not correct when decimal omits precision and scale
+* [KYLIN-751] - Max on negative double values is not working
+* [KYLIN-766] - round BigDecimal according to the DataType scale
+* [KYLIN-769] - empty segment build fail due to no dictionary
+* [KYLIN-771] - query cache is not evicted when metadata changes
+* [KYLIN-778] - can't build cube after package to binary
+* [KYLIN-780] - Upgrade Calcite to 1.0
+* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted
+* [KYLIN-801] - fix remaining issues on query cache and storage cache
+* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+* [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
+* [KYLIN-817] - Support Extract() on timestamp column
+* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+* [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
+* [KYLIN-834] - optimize StreamingUtil binary search perf
+* [KYLIN-837] - fix submit build type when refresh cube
+* [KYLIN-873] - cancel button does not work when [resume][discard] job
+* [KYLIN-889] - Support more than one HDFS files of lookup table
+* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+* [KYLIN-905] - Boolean type not supported
+* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-914] - Scripts shebang should use /bin/bash
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+* [KYLIN-930] - can't see realizations under each project at project list page
+* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+* [KYLIN-936] - can not see job step log
+* [KYLIN-944] - update doc about how to consume kylin API in javascript
+* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+* [KYLIN-951] - Drop RowBlock concept from GridTable general API
+* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+* [KYLIN-967] - Dump running queries on memory shortage
+* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
+* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+* [KYLIN-983] - Query sql offset keyword bug
+* [KYLIN-985] - Don't support aggregation AVG while executing SQL
+* [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
+* [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
+* [KYLIN-993] - implement substr support in kylin
+* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it is still restricted to less than 4 million
+* [KYLIN-1026] - Error message for git check is not correct in package.sh
+* [KYLIN-1027] - HBase Token not added after KYLIN-1007
+* [KYLIN-1033] - Error when joining two sub-queries
+* [KYLIN-1039] - Filter like (A or false) yields wrong result
+* [KYLIN-1047] - Upgrade to Calcite 1.4
+* [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
+* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
+* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+* [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
+* [KYLIN-1115] - Clean up ODBC driver code
+* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+* [KYLIN-1127] - Refactor CacheService
+* [KYLIN-1137] - TopN measure need support dictionary merge
+* [KYLIN-1138] - Bad CubeDesc signature cause segment be delete when enable a cube
+* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+* [KYLIN-1151] - Menu items should be aligned when create new model
+* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+* [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.3 to 1.4
+* [KYLIN-1171] - KylinConfig truncate bug
+* [KYLIN-1179] - Cannot use String as partition column
+* [KYLIN-1180] - Some NPE in Dictionary
+* [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
+* [KYLIN-1182] - DataModelDesc needs to be updated from v1.x to v2.0
+* [KYLIN-1192] - Cannot edit data model desc without name change
+* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+* [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
+* [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
+* [KYLIN-1225] - Only 15 cubes listed in the /models page
+* [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
+* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
+* [KYLIN-1236] - redirect to home page when input invalid url
+* [KYLIN-1250] - Got NPE when discarding a job
+* [KYLIN-1260] - Job status labels are not in same style
+* [KYLIN-1269] - Can not get last error message in email
+* [KYLIN-1271] - Create streaming table layer will disappear if click on outside
+* [KYLIN-1274] - Query from JDBC is partial results by default
+* [KYLIN-1282] - Comparison filter on Date/Time column not work for query
+* [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
+* [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
+* [KYLIN-1306] - Null strings are not applied during fast cubing
+* [KYLIN-1314] - Display issue for aggregation groups
+* [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube
+* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
+* [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
+* [KYLIN-1330] - UI create model: Press enter will go back to pre step
+* [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
+* [KYLIN-1337] - Sort cube name doesn't work well
+* [KYLIN-1346] - IllegalStateException happens in SparkCubing
+* [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
+* [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
+* [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
+* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
+* [KYLIN-1357] - Cloned cube has build time information
+* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
+* [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
+* [KYLIN-1387] - Streaming cubing doesn't generate cuboids files on HDFS, cause cube merge failure
+* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
+* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
+* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
+* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
+* [KYLIN-1413] - Row key column's sequence is wrong after saving the cube
+* [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
+* [KYLIN-1417] - TimedJsonStreamParser is case sensitive for message's property name
+* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
+* [KYLIN-1420] - Query returns empty result on partition column's boundary condition
+* [KYLIN-1421] - Cube "source record" is always zero for streaming
+* [KYLIN-1423] - HBase size precision issue
+* [KYLIN-1430] - Not add "STREAMING_" prefix when import a streaming table
+* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
+* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
+* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
+
+* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
+* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
+* [KYLIN-1344] - Bitmap measure defined after TopN measure can cause merge to fail
+* [KYLIN-1356] - use exec-maven-plugin for IT environment provision
+* [KYLIN-1386] - Duplicated projects appear in connection dialog after clicking CONNECT button multiple times
+* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale
+* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
+* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
+* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
+* [KYLIN-1469] - Hive dependency jars are hard coded in test
+* [KYLIN-1471] - LIMIT after having clause should not be pushed down to storage context
+* [KYLIN-1473] - Cannot have comments in the end of New Query textbox
+
+__Task__
+
+* [KYLIN-529] - Migrate ODBC source code to Apache Git
+* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
+* [KYLIN-762] - remove quartz dependency
+* [KYLIN-763] - remove author name
+* [KYLIN-820] - support streaming cube of exact timestamp range
+* [KYLIN-907] - Improve Kylin community development experience
+* [KYLIN-1112] - Reorganize InvertedIndex source codes into plug-in architecture
+
+* [KYLIN-808] - streaming cubing support split by data timestamp
+* [KYLIN-1427] - Enable partition date column to support date and hour as separate columns for increment cube build
+
+__Test__
+
+* [KYLIN-677] - benchmark for Endpoint without dictionary
+* [KYLIN-826] - create new test case for streaming building & queries
+
+
+## v1.3.0 - 2016-03-14
+_Tag:_ [kylin-1.3.0](https://github.com/apache/kylin/tree/kylin-1.3.0)
+
+__New Feature__
+
+* [KYLIN-579] - Unload table from Kylin
+* [KYLIN-976] - Support Custom Aggregation Types
+* [KYLIN-1054] - Support Hive client Beeline
+* [KYLIN-1128] - Clone Cube Metadata
+* [KYLIN-1186] - Support precise Count Distinct using bitmap (under limited conditions)
+
+__Improvement__
+
+* [KYLIN-955] - HiveColumnCardinalityJob should use configurations in conf/kylin_job_conf.xml
+* [KYLIN-1014] - Support kerberos authentication while getting status from RM
+* [KYLIN-1074] - Load hive tables with selecting mode
+* [KYLIN-1082] - Hive dependencies should be added to tmpjars
+* [KYLIN-1132] - make filtering input easier in creating cube
+* [KYLIN-1201] - Enhance project level ACL
+* [KYLIN-1211] - Add 'Enable Cache' button in System page
+* [KYLIN-1234] - Cube ACL does not work
+* [KYLIN-1240] - Fix link and typo in README
+* [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
+* [KYLIN-1246] - get cubes API update - offset,limit not required
+* [KYLIN-1251] - add toggle event for tree label
+* [KYLIN-1259] - Change font/background color of job progress
+* [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
+* [KYLIN-1317] - Kill underlying running hadoop job while discard a job
+* [KYLIN-1323] - Improve performance of converting data to hfile
+* [KYLIN-1333] - Kylin Entity Permission Control 
+* [KYLIN-1343] - Upgrade calcite version to 1.6
+* [KYLIN-1365] - Kylin ACL enhancement
+* [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
+
+__Bug__
+
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
+* [KYLIN-1078] - Cannot have comments in the end of New Query textbox
+* [KYLIN-1104] - Long dimension value cause ArrayIndexOutOfBoundsException
+* [KYLIN-1110] - can not see project options after clearing browser cookies and cache
+* [KYLIN-1159] - problem about kylin web UI
+* [KYLIN-1214] - Remove "Back to My Cubes" link in non-edit mode
+* [KYLIN-1215] - minor, update website member's info on community page
+* [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
+* [KYLIN-1236] - redirect to home page when input invalid url
+* [KYLIN-1250] - Got NPE when discarding a job
+* [KYLIN-1254] - cube model will be overridden while creating a new cube with the same name
+* [KYLIN-1260] - Job status labels are not in same style
+* [KYLIN-1274] - Query from JDBC is partial results by default
+* [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
+* [KYLIN-1330] - UI create model: Press enter will go back to pre step
+* [KYLIN-1331] - UI Delete Aggregation Groups: cursor disappeared after delete 1 dimension
+* [KYLIN-1342] - Typo in doc
+* [KYLIN-1354] - Couldn't edit a cube if it has no "partition date" set
+* [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
+* [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
+* [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
+* [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
+* [KYLIN-1412] - Widget width of "Partition date column" is too small to select
+* [KYLIN-1419] - NullPointerException occurs when query from subqueries with order by
+* [KYLIN-1423] - HBase size precision issue
+* [KYLIN-1443] - For setting Auto Merge Time Ranges, before sending them to backend, the related time ranges should be sorted increasingly
+* [KYLIN-1445] - Kylin should throw error if HIVE_CONF dir cannot be found
+* [KYLIN-1456] - Shouldn't use "1970-01-01" as the default end date
+* [KYLIN-1466] - Some environment variables are not used in bin/kylin.sh <RUNNABLE_CLASS_NAME>
+* [KYLIN-1469] - Hive dependency jars are hard coded in test
+
+__Test__
+
+* [KYLIN-1335] - Disable PrintResult in KylinQueryTest
+
+
+## v1.2 - 2015-12-15
+_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
+
+__New Feature__
+
+* [KYLIN-596] - Support Excel and Power BI
+
+__Improvement__
+
+* [KYLIN-389] - Can't edit cube name for existing cubes
+* [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
+* [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+* [KYLIN-1058] - Remove "right join" during model creation
+* [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+* [KYLIN-1065] - ODBC driver support tableau 9.1
+* [KYLIN-1069] - update tip for 'Partition Column' on UI
+* [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
+* [KYLIN-1095] - Update AdminLTE to latest version
+* [KYLIN-1099] - Support dictionary of cardinality over 10 millions
+* [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+* [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+* [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+* [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+* [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
+* [KYLIN-1154] - Load job page is very slow when there are a lot of history job
+* [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+* [KYLIN-1160] - Set default logger appender of log4j for JDBC
+* [KYLIN-1161] - Rest API /api/cubes?cubeName= is doing fuzzy match instead of exact match
+* [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+* [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after be migrated
+* [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
+* [KYLIN-1190] - Make memory budget per query configurable
+
+__Bug__
+
+* [KYLIN-693] - Couldn't change a cube's name after it be created
+* [KYLIN-930] - can't see realizations under each project at project list page
+* [KYLIN-966] - When user creates a cube, if enter a name which already exists, Kylin will throw an exception on the last step
+* [KYLIN-1033] - Error when joining two sub-queries
+* [KYLIN-1039] - Filter like (A or false) yields wrong result
+* [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+* [KYLIN-1070] - changing case in table name in model desc
+* [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+* [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
+* [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+* [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+* [KYLIN-1120] - MapReduce job read local meta issue
+* [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+* [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+* [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
+* [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+* [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
+* [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
+* [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+* [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+
+__Task__
+
+* [KYLIN-1170] - Update website and status files to TLP
+
+
+## v1.1.1-incubating - 2015-11-04
+_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
+
+__Improvement__
+
+* [KYLIN-999] - License check and cleanup for release
+
+## v1.1-incubating - 2015-10-25
+_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
+
+__New Feature__
+
+* [KYLIN-222] - Web UI to Display CubeInstance Information
+* [KYLIN-906] - cube retention
+* [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
+
+__Bug__
+
+* [KYLIN-457] - log4j error and dup lines in kylin.log
+* [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+* [KYLIN-740] - Slowness with many IN() values
+* [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+* [KYLIN-771] - query cache is not evicted when metadata changes
+* [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
+* [KYLIN-847] - "select * from fact" does not work on 0.7 branch
+* [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+* [KYLIN-944] - update doc about how to consume kylin API in javascript
+* [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+* [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+* [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
+* [KYLIN-961] - Can't get cube source record count.
+* [KYLIN-967] - Dump running queries on memory shortage
+* [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
+* [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
+* [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+* [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+* [KYLIN-983] - Query sql offset keyword bug
+* [KYLIN-985] - Don't support aggregation AVG while executing SQL
+* [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+* [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+* [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
+* [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
+* [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it is still restricted to less than 4 million
+* [KYLIN-1026] - Error message for git check is not correct in package.sh
+
+__Improvement__
+
+* [KYLIN-343] - Enable timeout on query 
+* [KYLIN-367] - automatically backup metadata everyday
+* [KYLIN-589] - Cleanup Intermediate hive table after cube build
+* [KYLIN-772] - Continue cube job when hive query return empty resultset
+* [KYLIN-858] - add snappy compression support
+* [KYLIN-882] - check access to kylin.hdfs.working.dir
+* [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+* [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+* [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+* [KYLIN-957] - Support HBase in a separate cluster
+* [KYLIN-965] - Allow user to configure the region split size for cube
+* [KYLIN-971] - kylin display timezone on UI
+* [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+* [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
+* [KYLIN-999] - License check and cleanup for release
+* [KYLIN-1013] - Make hbase client configurations like timeout configurable
+* [KYLIN-1025] - Save cube change is very slow
+* [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
+* [KYLIN-1035] - Validate [Project] before create Cube on UI
+* [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
+* [KYLIN-1047] - Upgrade to Calcite 1.4
+* [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+* [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+
+
+## v1.0-incubating - 2015-09-06
+_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
+
+__New Feature__
+
+* [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
+
+__Bug__
+
+* [KYLIN-404] - Can't get cube source record size.
+* [KYLIN-626] - JDBC error for float and double values
+* [KYLIN-751] - Max on negative double values is not working
+* [KYLIN-757] - Cache wasn't flushed in cluster mode
+* [KYLIN-780] - Upgrade Calcite to 1.0
+* [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+* [KYLIN-889] - Support more than one HDFS files of lookup table
+* [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+* [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+* [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+* [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+* [KYLIN-914] - Scripts shebang should use /bin/bash
+* [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
+* [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
+* [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
+* [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+* [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+* [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+* [KYLIN-936] - can not see job step log 
+* [KYLIN-940] - NPE when closing a null resource
+* [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
+* [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+* [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
+
+__Improvement__
+
+* [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
+* [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+* [KYLIN-792] - kylin performance insight [dashboard]
+* [KYLIN-838] - improve performance of job query
+* [KYLIN-842] - Add version and commit id into binary package
+* [KYLIN-844] - add backdoor toggles to control query behavior 
+* [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
+* [KYLIN-866] - Confirm with user when he selects empty segments to merge
+* [KYLIN-867] - Hybrid model for multiple realizations/cubes
+* [KYLIN-880] - Kylin should change the default folder from /tmp to user configurable destination
+* [KYLIN-881] - Upgrade Calcite to 1.3.0
+* [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+* [KYLIN-893] - Remove the dependency on quartz and metrics
+* [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+* [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+* [KYLIN-933] - friendly UI to use data model
+* [KYLIN-938] - add friendly tip to page when rest request failed
+
+__Task__
+
+* [KYLIN-884] - Restructure docs and website
+* [KYLIN-907] - Improve Kylin community development experience
+* [KYLIN-954] - Release v1.0 (formerly v0.7.3)
+* [KYLIN-863] - create empty segment when there is no data in one single streaming batch
+* [KYLIN-908] - Help community developer to setup develop/debug environment
+* [KYLIN-931] - Port KYLIN-921 to 0.8 branch
+
+## v0.7.2-incubating - 2015-07-21
+_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
+
+__Main Changes:__  
+Critical bug fixes after the v0.7.1 release; please use this version directly for new deployments, and upgrade existing deployments to it.
+
+__Bug__  
+
+* [KYLIN-514] - Error message is not helpful to user when doing something in JSON Editor window
+* [KYLIN-598] - Kylin detecting hive table delim failure
+* [KYLIN-660] - Make configurable of dictionary cardinality cap
+* [KYLIN-765] - When a cube job is failed, still be possible to submit a new job
+* [KYLIN-814] - Duplicate columns error for subqueries on fact table
+* [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optic)
+* [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+* [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
+* [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
+* [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
+* [KYLIN-840] - HBase table compress not enabled even LZO is installed
+* [KYLIN-848] - Couldn't resume or discard a cube job
+* [KYLIN-849] - Couldn't query metrics on lookup table PK
+* [KYLIN-865] - Cube has been built but couldn't query; in log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found"
+* [KYLIN-873] - cancel button does not work when [resume][discard] job
+* [KYLIN-888] - "Jobs" page only shows 15 job at max, the "Load more" button was disappeared
+
+__Improvement__
+
+* [KYLIN-159] - Metadata migrate tool 
+* [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
+* [KYLIN-207] - Support SQL pagination
+* [KYLIN-209] - Merge tail small MR jobs into one
+* [KYLIN-210] - Split heavy MR job to more small jobs
+* [KYLIN-221] - Convert cleanup and GC to job 
+* [KYLIN-284] - add log for all Rest API Request
+* [KYLIN-488] - Increase HDFS block size 1GB
+* [KYLIN-600] - measure return type update
+* [KYLIN-611] - Allow Implicit Joins
+* [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+* [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
+* [KYLIN-752] - Improved IN clause performance
+* [KYLIN-773] - performance is slow list jobs
+* [KYLIN-839] - Optimize Snapshot table memory usage 
+
+__New Feature__
+
+* [KYLIN-211] - Bitmap Inverted Index
+* [KYLIN-285] - Enhance alert program for whole system
+* [KYLIN-467] - Validation Rule: Check duplicate rows in lookup table
+* [KYLIN-471] - Support "Copy" on grid result
+
+__Task__
+
+* [KYLIN-7] - Enable maven checkstyle plugin
+* [KYLIN-885] - Release v0.7.2
+* [KYLIN-812] - Upgrade to Calcite 0.9.2
+
+## v0.7.1-incubating (First Apache Release) - 2015-06-10  
+_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
+
+Apache Kylin v0.7.1-incubating rolled out on June 10, 2015. This is also the first Apache release after joining the incubator.
+
+__Main Changes:__
+
+* Package renamed from com.kylinolap to org.apache.kylin
+* Code cleaned up to apply Apache License policy
+* Easy install and setup with a bunch of scripts and automation
+* Job engine refactored into a generic job manager for all jobs, with improved efficiency
+* Support Hive databases other than 'default'
+* JDBC driver available for clients to interact with the Kylin server
+* Binary package available for download
+
+__New Feature__
+
+* [KYLIN-327] - Binary distribution 
+* [KYLIN-368] - Move MailService to Common module
+* [KYLIN-540] - Data model upgrade for legacy cube descs
+* [KYLIN-576] - Refactor expansion rate expression
+
+__Task__
+
+* [KYLIN-361] - Rename package name with Apache Kylin
+* [KYLIN-531] - Rename package name to org.apache.kylin
+* [KYLIN-533] - Job Engine Refactoring
+* [KYLIN-585] - Simplify deployment
+* [KYLIN-586] - Add Apache License header in each source file
+* [KYLIN-587] - Remove hard copy of javascript libraries
+* [KYLIN-624] - Add dimension and metric info into DataModel
+* [KYLIN-650] - Move all document from github wiki to code repository (using md file)
+* [KYLIN-669] - Release v0.7.1 as first apache release
+* [KYLIN-670] - Update pom with "incubating" in version number
+* [KYLIN-737] - Generate and sign release package for review and vote
+* [KYLIN-795] - Release after success vote
+
+__Bug__
+
+* [KYLIN-132] - Job framework
+* [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
+* [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
+* [KYLIN-230] - Error when create HTable
+* [KYLIN-255] - Error when an aggregated function appears twice in select clause
+* [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
+* [KYLIN-399] - refreshed segment not correctly published to cube
+* [KYLIN-412] - No exception or message when sync up table which can't access
+* [KYLIN-421] - Hive table metadata issue
+* [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
+* [KYLIN-508] - Too high cardinality is not suitable for dictionary!
+* [KYLIN-509] - Order by on fact table not works correctly
+* [KYLIN-517] - Always delete the last one of Add Lookup page button even if deleting the first join condition
+* [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
+* [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
+* [KYLIN-556] - error tip enhance when cube detail return empty
+* [KYLIN-570] - Need not to call API before sending login request
+* [KYLIN-571] - Dimensions lost when creating cube through JSON Editor
+* [KYLIN-572] - HTable size is wrong
+* [KYLIN-581] - unable to build cube
+* [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
+* [KYLIN-588] - Error when run package.sh
+* [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
+* [KYLIN-594] - Making changes in build and packaging with respect to apache release process
+* [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
+* [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
+* [KYLIN-614] - find hive dependency shell file is unable to set the hive dependency correctly
+* [KYLIN-615] - Unable add measures in Kylin web UI
+* [KYLIN-619] - Cube build fails with hive+tez
+* [KYLIN-620] - Wrong duration number
+* [KYLIN-621] - SecurityException when running MR job
+* [KYLIN-627] - Hive tables' partition column was not sync into Kylin
+* [KYLIN-628] - Couldn't build a new created cube
+* [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
+* [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
+* [KYLIN-638] - kylin.sh stop not working
+* [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
+* [KYLIN-640] - sum of float not working
+* [KYLIN-642] - Couldn't refresh cube segment
+* [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
+* [KYLIN-644] - join table as null error when build the cube
+* [KYLIN-652] - Lookup table alias will be set to null
+* [KYLIN-657] - JDBC Driver not register into DriverManager
+* [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
+* [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
+* [KYLIN-666] - Select float type column got class cast exception
+* [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
+* [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
+* [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
+* [KYLIN-691] - update tomcat download address in download-tomcat.sh
+* [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
+* [KYLIN-703] - UT failed due to unknown host issue
+* [KYLIN-711] - UT failure in REST module
+* [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
+* [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
+
+__Improvement__
+
+* [KYLIN-168] - Installation fails if multiple ZK
+* [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
+* [KYLIN-204] - Kylin web not works properly in IE
+* [KYLIN-217] - Enhance coprocessor with endpoints 
+* [KYLIN-251] - job engine refactoring
+* [KYLIN-261] - derived column validate when create cube
+* [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
+* [KYLIN-324] - Refactor metadata to support InvertedIndex
+* [KYLIN-407] - Validation: There's should no Hive table column using "binary" data type
+* [KYLIN-445] - Rename cube_desc/cube folder
+* [KYLIN-452] - Automatically create local cluster for running tests
+* [KYLIN-498] - Merge metadata tables 
+* [KYLIN-532] - Refactor data model in kylin front end
+* [KYLIN-539] - use hbase command to launch tomcat
+* [KYLIN-542] - add project property feature for cube
+* [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
+* [KYLIN-563] - Wrap kylin start and stop with a script 
+* [KYLIN-567] - More flexible validation of new segments
+* [KYLIN-569] - Support increment+merge job
+* [KYLIN-578] - add more generic configuration for ssh
+* [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
+* [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
+* [KYLIN-634] - Script to import sample data and cube metadata
+* [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
+* [KYLIN-637] - add start&end date for hbase info in cubeDesigner
+* [KYLIN-714] - Add Apache RAT to pom.xml
+* [KYLIN-753] - Make the dependency on hbase-common to "provided"
+* [KYLIN-758] - Updating port forwarding issue Hadoop Installation on Hortonworks Sandbox.
+* [KYLIN-779] - [UI] jump to cube list after create cube
+* [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+
+__Wish__
+
+* [KYLIN-608] - Distinct count for ii storage
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/acl.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/acl.cn.md b/website/_docs20/tutorial/acl.cn.md
new file mode 100644
index 0000000..999a311
--- /dev/null
+++ b/website/_docs20/tutorial/acl.cn.md
@@ -0,0 +1,35 @@
+---
+layout: docs20-cn
+title:  Kylin Cube Permission Grant Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/acl.html
+version: v1.2
+since: v0.7.1
+---
+
+
+In the `Cubes` page, double-click a cube row to see its details. Here we focus on the `Access` tab.
+Click the `+Grant` button to grant permission.
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
+
+A cube has four different kinds of permissions. Move your mouse over the `?` icon to see the details.
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
+
+Permissions can be granted to two types of grantees: `User` and `Role`. A `Role` is a group of users who share the same permissions.
+
+### 1. Grant User Permission
+* Select the `User` type, enter the username of the user you want to grant access to, and select the corresponding permission.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
+
+* Then click the `Grant` button to submit the request. Once the operation succeeds, a new entry appears in the table. You can change a user's permission by selecting a different access level. Click the `Revoke` button to remove a granted user.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
+
+### 2. Grant Role Permission
+* Select the `Role` type, choose the user group you want to grant access to from the drop-down list, and select a permission.
+
+* Then click the `Grant` button to submit the request. Once the operation succeeds, a new entry appears in the table. You can change a group's permission by selecting a different access level. Click the `Revoke` button to remove a granted group.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/acl.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/acl.md b/website/_docs20/tutorial/acl.md
new file mode 100644
index 0000000..2bcaf2c
--- /dev/null
+++ b/website/_docs20/tutorial/acl.md
@@ -0,0 +1,32 @@
+---
+layout: docs20
+title:  Kylin Cube Permission
+categories: tutorial
+permalink: /docs20/tutorial/acl.html
+since: v0.7.1
+---
+
+In the `Cubes` page, double-click a cube row to see its details. Here we focus on the `Access` tab.
+Click the `+Grant` button to grant permission.
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
+
+There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see the details.
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
+
+Permissions can be granted to two types of grantees: `User` and `Role`. A `Role` is a group of users who share the same role.
+
+### 1. Grant User Permission
+* Select the `User` type, enter the username of the user you want to grant access to, and select the corresponding permission.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
+
+* Then click the `Grant` button to submit the request. Once the operation succeeds, a new entry appears in the table. You can change a user's permission by selecting a different access level. To revoke a user's permission, click the `Revoke` button.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
+
+### 2. Grant Role Permission
+* Select the `Role` type, choose the group of users you want to grant access to from the drop-down list, and select a permission.
+
+* Then click the `Grant` button to submit the request. Once the operation succeeds, a new entry appears in the table. You can change a group's permission by selecting a different access level. To revoke a group's permission, click the `Revoke` button.


[4/5] kylin git commit: prepare docs for 2.0

Posted by li...@apache.org.
http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_use_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_use_restapi.md b/website/_docs20/howto/howto_use_restapi.md
new file mode 100644
index 0000000..58ec55b
--- /dev/null
+++ b/website/_docs20/howto/howto_use_restapi.md
@@ -0,0 +1,1113 @@
+---
+layout: docs20
+title:  Use RESTful API
+categories: howto
+permalink: /docs20/howto/howto_use_restapi.html
+since: v0.7.1
+---
+
+This page lists the major RESTful APIs provided by Kylin.
+
+* Query
+   * [Authentication](#authentication)
+   * [Query](#query)
+   * [List queryable tables](#list-queryable-tables)
+* CUBE
+   * [List cubes](#list-cubes)
+   * [Get cube](#get-cube)
+   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
+   * [Get data model (fact and lookup table info)](#get-data-model)
+   * [Build cube](#build-cube)
+   * [Disable cube](#disable-cube)
+   * [Purge cube](#purge-cube)
+   * [Enable cube](#enable-cube)
+* JOB
+   * [Resume job](#resume-job)
+   * [Pause job](#pause-job)
+   * [Discard job](#discard-job)
+   * [Get job status](#get-job-status)
+   * [Get job step output](#get-job-step-output)
+* Metadata
+   * [Get Hive Table](#get-hive-table)
+   * [Get Hive Table (Extend Info)](#get-hive-table-extend-info)
+   * [Get Hive Tables](#get-hive-tables)
+   * [Load Hive Tables](#load-hive-tables)
+* Cache
+   * [Wipe cache](#wipe-cache)
+* Streaming
+   * [Initiate cube start position](#initiate-cube-start-position)
+   * [Build stream cube](#build-stream-cube)
+   * [Check segment holes](#check-segment-holes)
+   * [Fill segment holes](#fill-segment-holes)
+
+## Authentication
+`POST /kylin/api/user/authentication`
+
+#### Request Header
+HTTP Basic authentication data is needed in the request header, i.e. the Base64 encoding of `username:password`:
+`Authorization: Basic {data}`
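
For example, the `{data}` token is just the Base64 encoding of `username:password`. A minimal sketch using the sample `ADMIN:KYLIN` credentials that appear later on this page:

```shell
# Build the Basic auth token from username:password.
# printf avoids the trailing newline that echo would add to the encoding.
TOKEN=$(printf 'ADMIN:KYLIN' | base64)
echo "Authorization: Basic $TOKEN"
# -> Authorization: Basic QURNSU46S1lMSU4=
```

The resulting header can be passed to curl via `-H "Authorization: Basic $TOKEN"`, or equivalently curl can compute it for you with `--user ADMIN:KYLIN`.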
+
+#### Response Body
+* userDetails - Defined authorities and status of current user.
+
+#### Response Sample
+
+```json
+{  
+   "userDetails":{  
+      "password":null,
+      "username":"sample",
+      "authorities":[  
+         {  
+            "authority":"ROLE_ANALYST"
+         },
+         {  
+            "authority":"ROLE_MODELER"
+         }
+      ],
+      "accountNonExpired":true,
+      "accountNonLocked":true,
+      "credentialsNonExpired":true,
+      "enabled":true
+   }
+}
+```
+
+#### Curl Example
+
+```
+curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
+```
+
+If the login succeeds, the JSESSIONID is saved into the cookie file; attach the cookie in subsequent HTTP requests, for example:
+
+```
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime": 1423526400000, "endTime": 1423612800000, "buildType": "BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/build
+```
+
+Alternatively, you can provide the username/password with the `--user` option in each curl call; note that this risks leaking the password into your shell history:
+
+
+```
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "startTime": 820454400000, "endTime": 821318400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/kylin_sales/build
+```
+
+***
+
+## Query
+`POST /kylin/api/query`
+
+#### Request Body
+* sql - `required` `string` The text of sql statement.
+* offset - `optional` `int` Query offset. If an offset is set in the SQL itself, this parameter is ignored.
+* limit - `optional` `int` Query limit. If a limit is set in the SQL itself, this parameter is ignored.
+* acceptPartial - `optional` `bool` Whether to accept a partial result, default "false". Set it to "false" for production use.
+* project - `optional` `string` Project to perform query. Default value is 'DEFAULT'.
+
+#### Request Sample
+
+```json
+{  
+   "sql":"select * from TEST_KYLIN_FACT",
+   "offset":0,
+   "limit":50000,
+   "acceptPartial":false,
+   "project":"DEFAULT"
+}
+```
+
+#### Curl Example
+
+```
+curl -X POST -H "Authorization: Basic XXXXXXXXX" -H "Content-Type: application/json" -d '{ "sql":"select count(*) from TEST_KYLIN_FACT", "project":"learn_kylin" }' http://localhost:7070/kylin/api/query
+```
+
+#### Response Body
+* columnMetas - Column metadata information of result set.
+* results - Data set of result.
+* cube - Cube used for this query.
+* affectedRowCount - Count of rows affected by this SQL statement.
+* isException - Whether this response is an exception.
+* exceptionMessage - Message content of the exception.
+* duration - Time cost of this query.
+* partial - Whether the response is a partial result or not, decided by the `acceptPartial` field of the request.
+
+#### Response Sample
+
+```json
+{  
+   "columnMetas":[  
+      {  
+         "isNullable":1,
+         "displaySize":0,
+         "label":"CAL_DT",
+         "name":"CAL_DT",
+         "schemaName":null,
+         "catelogName":null,
+         "tableName":null,
+         "precision":0,
+         "scale":0,
+         "columnType":91,
+         "columnTypeName":"DATE",
+         "readOnly":true,
+         "writable":false,
+         "caseSensitive":true,
+         "searchable":false,
+         "currency":false,
+         "signed":true,
+         "autoIncrement":false,
+         "definitelyWritable":false
+      },
+      {  
+         "isNullable":1,
+         "displaySize":10,
+         "label":"LEAF_CATEG_ID",
+         "name":"LEAF_CATEG_ID",
+         "schemaName":null,
+         "catelogName":null,
+         "tableName":null,
+         "precision":10,
+         "scale":0,
+         "columnType":4,
+         "columnTypeName":"INTEGER",
+         "readOnly":true,
+         "writable":false,
+         "caseSensitive":true,
+         "searchable":false,
+         "currency":false,
+         "signed":true,
+         "autoIncrement":false,
+         "definitelyWritable":false
+      }
+   ],
+   "results":[  
+      [  
+         "2013-08-07",
+         "32996",
+         "15",
+         "15",
+         "Auction",
+         "10000000",
+         "49.048952730908745",
+         "49.048952730908745",
+         "49.048952730908745",
+         "1"
+      ],
+      [  
+         "2013-08-07",
+         "43398",
+         "0",
+         "14",
+         "ABIN",
+         "10000633",
+         "85.78317064220418",
+         "85.78317064220418",
+         "85.78317064220418",
+         "1"
+      ]
+   ],
+   "cube":"test_kylin_cube_with_slr_desc",
+   "affectedRowCount":0,
+   "isException":false,
+   "exceptionMessage":null,
+   "duration":3451,
+   "partial":false
+}
+```
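
A caller should check the status fields before consuming `results`. A minimal client-side sketch (assuming `python3` is available; the response below is truncated from the sample above):

```shell
# Guard on isException/partial before trusting the data.
RESPONSE='{"isException":false,"exceptionMessage":null,"duration":3451,"partial":false,"results":[["2013-08-07","32996"]]}'
export RESPONSE
python3 -c '
import json, os
resp = json.loads(os.environ["RESPONSE"])
assert not resp["isException"], resp["exceptionMessage"]
if resp["partial"]:
    print("warning: partial result; set acceptPartial to false and retry")
print("rows:", len(resp["results"]), "duration:", resp["duration"])
'
# -> rows: 1 duration: 3451
```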
+
+
+## List queryable tables
+`GET /kylin/api/tables_and_columns`
+
+#### Request Parameters
+* project - `required` `string` The project to load tables
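
For example, a sketch of the request (assuming the server runs on localhost:7070 and the sample project `learn_kylin`; the cookie file comes from the authentication step above). The request parameter goes into the URL query string:

```shell
# GET parameters are passed in the URL query string.
PROJECT=learn_kylin
URL="http://localhost:7070/kylin/api/tables_and_columns?project=${PROJECT}"
echo "$URL"
# curl -b /path/to/cookiefile.txt -X GET "$URL"
```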
+
+#### Response Sample
+```json
+[  
+   {  
+      "columns":[  
+         {  
+            "table_NAME":"TEST_CAL_DT",
+            "table_SCHEM":"EDW",
+            "column_NAME":"CAL_DT",
+            "data_TYPE":91,
+            "nullable":1,
+            "column_SIZE":-1,
+            "buffer_LENGTH":-1,
+            "decimal_DIGITS":0,
+            "num_PREC_RADIX":10,
+            "column_DEF":null,
+            "sql_DATA_TYPE":-1,
+            "sql_DATETIME_SUB":-1,
+            "char_OCTET_LENGTH":-1,
+            "ordinal_POSITION":1,
+            "is_NULLABLE":"YES",
+            "scope_CATLOG":null,
+            "scope_SCHEMA":null,
+            "scope_TABLE":null,
+            "source_DATA_TYPE":-1,
+            "iS_AUTOINCREMENT":null,
+            "table_CAT":"defaultCatalog",
+            "remarks":null,
+            "type_NAME":"DATE"
+         },
+         {  
+            "table_NAME":"TEST_CAL_DT",
+            "table_SCHEM":"EDW",
+            "column_NAME":"WEEK_BEG_DT",
+            "data_TYPE":91,
+            "nullable":1,
+            "column_SIZE":-1,
+            "buffer_LENGTH":-1,
+            "decimal_DIGITS":0,
+            "num_PREC_RADIX":10,
+            "column_DEF":null,
+            "sql_DATA_TYPE":-1,
+            "sql_DATETIME_SUB":-1,
+            "char_OCTET_LENGTH":-1,
+            "ordinal_POSITION":2,
+            "is_NULLABLE":"YES",
+            "scope_CATLOG":null,
+            "scope_SCHEMA":null,
+            "scope_TABLE":null,
+            "source_DATA_TYPE":-1,
+            "iS_AUTOINCREMENT":null,
+            "table_CAT":"defaultCatalog",
+            "remarks":null,
+            "type_NAME":"DATE"
+         }
+      ],
+      "table_NAME":"TEST_CAL_DT",
+      "table_SCHEM":"EDW",
+      "ref_GENERATION":null,
+      "self_REFERENCING_COL_NAME":null,
+      "type_SCHEM":null,
+      "table_TYPE":"TABLE",
+      "table_CAT":"defaultCatalog",
+      "remarks":null,
+      "type_CAT":null,
+      "type_NAME":null
+   }
+]
+```
+
+***
+
+## List cubes
+`GET /kylin/api/cubes`
+
+#### Request Parameters
+* offset - `required` `int` Offset used by pagination
+* limit - `required` `int` Number of cubes per page.
+* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
+* projectName - `optional` `string` Project name.
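
As a sketch, a paginated listing request can be assembled like this (localhost:7070 assumed; the optional filters are appended the same way as the required parameters):

```shell
# offset and limit are required; cubeName/projectName are optional filters.
OFFSET=0
LIMIT=15
URL="http://localhost:7070/kylin/api/cubes?offset=${OFFSET}&limit=${LIMIT}&projectName=learn_kylin"
echo "$URL"
# curl -X GET -H "Authorization: Basic XXXXXXXXX" "$URL"
```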
+
+#### Response Sample
+```json
+[  
+   {  
+      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
+      "last_modified":1407831634847,
+      "name":"test_kylin_cube_with_slr_empty",
+      "owner":null,
+      "version":null,
+      "descriptor":"test_kylin_cube_with_slr_desc",
+      "cost":50,
+      "status":"DISABLED",
+      "segments":[  
+      ],
+      "create_time":null,
+      "source_records_count":0,
+      "source_records_size":0,
+      "size_kb":0
+   }
+]
+```
+
+## Get cube
+`GET /kylin/api/cubes/{cubeName}`
+
+#### Path Variable
+* cubeName - `required` `string` Cube name to find.
+
+## Get cube descriptor
+`GET /kylin/api/cube_desc/{cubeName}`
+Get the descriptor of the specified cube instance.
+
+#### Path Variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+```json
+[
+    {
+        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
+        "name": "test_kylin_cube_with_slr_desc", 
+        "description": null, 
+        "dimensions": [
+            {
+                "id": 0, 
+                "name": "CAL_DT", 
+                "table": "EDW.TEST_CAL_DT", 
+                "column": null, 
+                "derived": [
+                    "WEEK_BEG_DT"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 1, 
+                "name": "CATEGORY", 
+                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+                "column": null, 
+                "derived": [
+                    "USER_DEFINED_FIELD1", 
+                    "USER_DEFINED_FIELD3", 
+                    "UPD_DATE", 
+                    "UPD_USER"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 2, 
+                "name": "CATEGORY_HIERARCHY", 
+                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+                "column": [
+                    "META_CATEG_NAME", 
+                    "CATEG_LVL2_NAME", 
+                    "CATEG_LVL3_NAME"
+                ], 
+                "derived": null, 
+                "hierarchy": true
+            }, 
+            {
+                "id": 3, 
+                "name": "LSTG_FORMAT_NAME", 
+                "table": "DEFAULT.TEST_KYLIN_FACT", 
+                "column": [
+                    "LSTG_FORMAT_NAME"
+                ], 
+                "derived": null, 
+                "hierarchy": false
+            }, 
+            {
+                "id": 4, 
+                "name": "SITE_ID", 
+                "table": "EDW.TEST_SITES", 
+                "column": null, 
+                "derived": [
+                    "SITE_NAME", 
+                    "CRE_USER"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 5, 
+                "name": "SELLER_TYPE_CD", 
+                "table": "EDW.TEST_SELLER_TYPE_DIM", 
+                "column": null, 
+                "derived": [
+                    "SELLER_TYPE_DESC"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 6, 
+                "name": "SELLER_ID", 
+                "table": "DEFAULT.TEST_KYLIN_FACT", 
+                "column": [
+                    "SELLER_ID"
+                ], 
+                "derived": null, 
+                "hierarchy": false
+            }
+        ], 
+        "measures": [
+            {
+                "id": 1, 
+                "name": "GMV_SUM", 
+                "function": {
+                    "expression": "SUM", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 2, 
+                "name": "GMV_MIN", 
+                "function": {
+                    "expression": "MIN", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 3, 
+                "name": "GMV_MAX", 
+                "function": {
+                    "expression": "MAX", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 4, 
+                "name": "TRANS_CNT", 
+                "function": {
+                    "expression": "COUNT", 
+                    "parameter": {
+                        "type": "constant", 
+                        "value": "1", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "bigint"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 5, 
+                "name": "ITEM_COUNT_SUM", 
+                "function": {
+                    "expression": "SUM", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "ITEM_COUNT", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "bigint"
+                }, 
+                "dependent_measure_ref": null
+            }
+        ], 
+        "rowkey": {
+            "rowkey_columns": [
+                {
+                    "column": "SELLER_ID", 
+                    "length": 18, 
+                    "dictionary": null, 
+                    "mandatory": true
+                }, 
+                {
+                    "column": "CAL_DT", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LEAF_CATEG_ID", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "META_CATEG_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "CATEG_LVL2_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "CATEG_LVL3_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LSTG_FORMAT_NAME", 
+                    "length": 12, 
+                    "dictionary": null, 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LSTG_SITE_ID", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "SLR_SEGMENT_CD", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }
+            ], 
+            "aggregation_groups": [
+                [
+                    "LEAF_CATEG_ID", 
+                    "META_CATEG_NAME", 
+                    "CATEG_LVL2_NAME", 
+                    "CATEG_LVL3_NAME", 
+                    "CAL_DT"
+                ]
+            ]
+        }, 
+        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
+        "last_modified": 1445850327000, 
+        "model_name": "test_kylin_with_slr_model_desc", 
+        "null_string": null, 
+        "hbase_mapping": {
+            "column_family": [
+                {
+                    "name": "F1", 
+                    "columns": [
+                        {
+                            "qualifier": "M", 
+                            "measure_refs": [
+                                "GMV_SUM", 
+                                "GMV_MIN", 
+                                "GMV_MAX", 
+                                "TRANS_CNT", 
+                                "ITEM_COUNT_SUM"
+                            ]
+                        }
+                    ]
+                }
+            ]
+        }, 
+        "notify_list": null, 
+        "auto_merge_time_ranges": null, 
+        "retention_range": 0
+    }
+]
+```
+
+## Get data model
+`GET /kylin/api/model/{modelName}`
+
+#### Path Variable
+* modelName - `required` `string` Data model name; by default it is the same as the cube name.
+
+#### Response Sample
+```json
+{
+    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
+    "name": "test_kylin_with_slr_model_desc", 
+    "lookups": [
+        {
+            "table": "EDW.TEST_CAL_DT", 
+            "join": {
+                "type": "inner", 
+                "primary_key": [
+                    "CAL_DT"
+                ], 
+                "foreign_key": [
+                    "CAL_DT"
+                ]
+            }
+        }, 
+        {
+            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+            "join": {
+                "type": "inner", 
+                "primary_key": [
+                    "LEAF_CATEG_ID", 
+                    "SITE_ID"
+                ], 
+                "foreign_key": [
+                    "LEAF_CATEG_ID", 
+                    "LSTG_SITE_ID"
+                ]
+            }
+        }
+    ], 
+    "capacity": "MEDIUM", 
+    "last_modified": 1442372116000, 
+    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
+    "filter_condition": null, 
+    "partition_desc": {
+        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
+        "partition_date_start": 0, 
+        "partition_date_format": "yyyy-MM-dd", 
+        "partition_type": "APPEND", 
+        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
+    }
+}
+```
+
+## Build cube
+`PUT /kylin/api/cubes/{cubeName}/build`
+
+#### Path Variable
+* cubeName - `required` `string` Cube name.
+
+#### Request Body
+* startTime - `required` `long` Start timestamp of the data to build, e.g. 1388563200000 for 2014-01-01
+* endTime - `required` `long` End timestamp of the data to build
+* buildType - `required` `string` Supported build types: 'BUILD', 'MERGE', 'REFRESH'
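Both timestamps are epoch milliseconds. A minimal Python sketch for deriving them from calendar dates (this assumes UTC midnight segment boundaries; note that the sample value 1388563200000 above corresponds to 2014-01-01 00:00 in a UTC-8 server time zone, so adjust for your deployment):

```python
from datetime import datetime, timezone

def to_epoch_ms(year, month, day):
    """Convert a calendar date (midnight UTC) to epoch milliseconds."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp() * 1000)

start_time = to_epoch_ms(2014, 1, 1)  # start of the data to build
end_time = to_epoch_ms(2014, 2, 1)    # exclusive end of the segment
print(start_time, end_time)
```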
+
+#### Curl Example
+```
+curl -X PUT -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' -d '{"startTime":1423526400000, "endTime":1423612800000, "buildType":"BUILD"}' http://<host>:<port>/kylin/api/cubes/{cubeName}/build
+```
+
+#### Response Sample
+```
+{  
+   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
+   "last_modified":1407908916705,
+   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
+   "type":"BUILD",
+   "duration":0,
+   "related_cube":"test_kylin_cube_with_slr_empty",
+   "related_segment":"19700101000000_20140731160000",
+   "exec_start_time":0,
+   "exec_end_time":0,
+   "mr_waiting":0,
+   "steps":[  
+      {  
+         "interruptCmd":null,
+         "name":"Create Intermediate Flat Hive Table",
+         "sequence_id":0,
+         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"SHELL_CMD_HADOOP",
+         "info":null,
+         "run_async":false
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Extract Fact Table Distinct Columns",
+         "sequence_id":1,
+         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
+         "info":null,
+         "run_async":true
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Load HFile to HBase Table",
+         "sequence_id":12,
+         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
+         "info":null,
+         "run_async":false
+      }
+   ],
+   "job_status":"PENDING",
+   "progress":0.0
+}
+```
+
+## Enable Cube
+`PUT /kylin/api/cubes/{cubeName}/enable`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+```sh
+{  
+   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
+   "last_modified":1407909046305,
+   "name":"test_kylin_cube_with_slr_ready",
+   "owner":null,
+   "version":null,
+   "descriptor":"test_kylin_cube_with_slr_desc",
+   "cost":50,
+   "status":"ACTIVE",
+   "segments":[  
+      {  
+         "name":"19700101000000_20140531160000",
+         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
+         "date_range_start":0,
+         "date_range_end":1401552000000,
+         "status":"READY",
+         "size_kb":4758,
+         "source_records":6000,
+         "source_records_size":620356,
+         "last_build_time":1407832663227,
+         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
+         "binary_signature":null,
+         "dictionaries":{  
+            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
+            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
+            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
+            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
+            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
+            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
+            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
+         },
+         "snapshots":{  
+            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
+            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
+            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
+            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
+         }
+      }
+   ],
+   "create_time":null,
+   "source_records_count":6000,
+   "source_records_size":0,
+   "size_kb":4758
+}
+```
+
+## Disable Cube
+`PUT /kylin/api/cubes/{cubeName}/disable`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+(Same as "Enable Cube")
+
+## Purge Cube
+`PUT /kylin/api/cubes/{cubeName}/purge`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+(Same as "Enable Cube")
+
+***
+
+## Resume Job
+`PUT /kylin/api/jobs/{jobId}/resume`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+#### Response Sample
+```
+{  
+   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
+   "last_modified":1407908916705,
+   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
+   "type":"BUILD",
+   "duration":0,
+   "related_cube":"test_kylin_cube_with_slr_empty",
+   "related_segment":"19700101000000_20140731160000",
+   "exec_start_time":0,
+   "exec_end_time":0,
+   "mr_waiting":0,
+   "steps":[  
+      {  
+         "interruptCmd":null,
+         "name":"Create Intermediate Flat Hive Table",
+         "sequence_id":0,
+         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"SHELL_CMD_HADOOP",
+         "info":null,
+         "run_async":false
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Extract Fact Table Distinct Columns",
+         "sequence_id":1,
+         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
+         "info":null,
+         "run_async":true
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Load HFile to HBase Table",
+         "sequence_id":12,
+         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
+         "info":null,
+         "run_async":false
+      }
+   ],
+   "job_status":"PENDING",
+   "progress":0.0
+}
+```
+## Pause Job
+`PUT /kylin/api/jobs/{jobId}/pause`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+## Discard Job
+`PUT /kylin/api/jobs/{jobId}/cancel`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+## Get Job Status
+`GET /kylin/api/jobs/{jobId}`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+#### Response Sample
+(Same as "Resume Job")
+
+## Get job step output
+`GET /kylin/api/jobs/{jobId}/steps/{stepId}/output`
+
+#### Path Variable
+* jobId - `required` `string` Job id.
+* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id. For example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
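The composition rule can be sketched as a tiny helper (illustrative only, not part of any Kylin client library):

```python
def step_id(job_id: str, seq: int) -> str:
    """Compose a step id from a job id and a step sequence id."""
    return f"{job_id}-{seq}"

print(step_id("fb479e54-837f-49a2-b457-651fc50be110", 3))
# fb479e54-837f-49a2-b457-651fc50be110-3
```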
+
+#### Response Sample
+```
+{  
+   "cmd_output":"log string"
+}
+```
+
+***
+
+## Get Hive Table
+`GET /kylin/api/tables/{tableName}`
+
+#### Request Parameters
+* tableName - `required` `string` table name to find.
+
+#### Response Sample
+```sh
+{
+    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
+    name: "SAMPLE_07",
+    columns: [{
+        id: "1",
+        name: "CODE",
+        datatype: "string"
+    }, {
+        id: "2",
+        name: "DESCRIPTION",
+        datatype: "string"
+    }, {
+        id: "3",
+        name: "TOTAL_EMP",
+        datatype: "int"
+    }, {
+        id: "4",
+        name: "SALARY",
+        datatype: "int"
+    }],
+    database: "DEFAULT",
+    last_modified: 1419330476755
+}
+```
+
+## Get Hive Table (Extend Info)
+`GET /kylin/api/tables/{tableName}/exd-map`
+
+#### Request Parameters
+* tableName - `required` `string` Table name to find.
+
+#### Response Sample
+```
+{
+    "minFileSize": "46055",
+    "totalNumberFiles": "1",
+    "location": "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_07",
+    "lastAccessTime": "1418374103365",
+    "lastUpdateTime": "1398176493340",
+    "columns": "struct columns { string code, string description, i32 total_emp, i32 salary}",
+    "partitionColumns": "",
+    "EXD_STATUS": "true",
+    "maxFileSize": "46055",
+    "inputformat": "org.apache.hadoop.mapred.TextInputFormat",
+    "partitioned": "false",
+    "tableName": "sample_07",
+    "owner": "hue",
+    "totalFileSize": "46055",
+    "outputformat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
+}
+```
+
+## Get Hive Tables
+`GET /kylin/api/tables`
+
+#### Request Parameters
+* project - `required` `string` Project name; lists all tables in the project.
+* ext - `optional` `boolean` Set true to get extended info of the tables.
+
+#### Response Sample
+```sh
+[
+ {
+    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
+    name: "SAMPLE_08",
+    columns: [{
+        id: "1",
+        name: "CODE",
+        datatype: "string"
+    }, {
+        id: "2",
+        name: "DESCRIPTION",
+        datatype: "string"
+    }, {
+        id: "3",
+        name: "TOTAL_EMP",
+        datatype: "int"
+    }, {
+        id: "4",
+        name: "SALARY",
+        datatype: "int"
+    }],
+    database: "DEFAULT",
+    cardinality: {},
+    last_modified: 0,
+    exd: {
+        minFileSize: "46069",
+        totalNumberFiles: "1",
+        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
+        lastAccessTime: "1398176495945",
+        lastUpdateTime: "1398176495981",
+        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
+        partitionColumns: "",
+        EXD_STATUS: "true",
+        maxFileSize: "46069",
+        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
+        partitioned: "false",
+        tableName: "sample_08",
+        owner: "hue",
+        totalFileSize: "46069",
+        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
+    }
+  }
+]
+```
+
+## Load Hive Tables
+`POST /kylin/api/tables/{tables}/{project}`
+
+#### Request Parameters
+* tables - `required` `string` Table names to load from Hive, separated by commas.
+* project - `required` `string` The project the tables will be loaded into.
+
+#### Response Sample
+```
+{
+    "result.loaded": ["DEFAULT.SAMPLE_07"],
+    "result.unloaded": ["sample_08"]
+}
+```
+
+***
+
+## Wipe cache
+`PUT /kylin/api/cache/{type}/{name}/{action}`
+
+#### Path variable
+* type - `required` `string` 'METADATA' or 'CUBE'
+* name - `required` `string` Cache key, e.g. the cube name.
+* action - `required` `string` 'create', 'update' or 'drop'
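Putting the three path variables together (a hypothetical client-side helper; host and port are placeholders):

```python
def wipe_cache_url(base: str, type_: str, name: str, action: str) -> str:
    """Build the cache-wiping endpoint URL from its three path variables."""
    assert type_ in ("METADATA", "CUBE")
    assert action in ("create", "update", "drop")
    return f"{base}/kylin/api/cache/{type_}/{name}/{action}"

print(wipe_cache_url("http://localhost:7070", "CUBE", "kylin_sales_cube", "update"))
```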
+
+***
+
+## Initiate cube start position
+Set the stream cube's start position to the current latest offsets. This avoids building from the earliest position of the Kafka topic (for example, when the topic has a long retention time).
+
+`PUT /kylin/api/cubes/{cubeName}/init_start_offsets`
+
+#### Path variable
+* cubeName - `required` `string` Cube name
+
+#### Response Sample
+```sh
+{
+    "result": "success", 
+    "offsets": "{0=246059529, 1=253547684, 2=253023895, 3=172996803, 4=165503476, 5=173513896, 6=19200473, 7=26691891, 8=26699895, 9=26694021, 10=19204164, 11=26694597}"
+}
+```
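The `offsets` field is a Java map rendered as a string. If you need the per-partition values client-side, a small parsing sketch (assuming exactly the `{k=v, ...}` format shown above; not part of Kylin):

```python
def parse_offsets(s: str) -> dict:
    """Parse a Java-style '{0=246059529, 1=253547684}' map string into a dict."""
    body = s.strip().strip("{}")
    if not body:
        return {}
    pairs = (item.split("=") for item in body.split(","))
    return {int(k): int(v) for k, v in pairs}

print(parse_offsets("{0=246059529, 1=253547684}"))
```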
+
+## Build stream cube
+`PUT /kylin/api/cubes/{cubeName}/build2`
+
+This API is specific to the building of streaming cubes.
+
+#### Path variable
+* cubeName - `required` `string` Cube name
+
+#### Request Body
+
+* sourceOffsetStart - `required` `long` The start offset; 0 means resuming from the previous position
+* sourceOffsetEnd - `required` `long` The end offset; 9223372036854775807 means the end position of the current stream data
+* buildType - `required` `string` Build type: "BUILD", "MERGE" or "REFRESH"
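The end-offset sentinel is simply Java's `Long.MAX_VALUE`, i.e. "consume to the end of whatever the topic currently holds":

```python
# Java's Long.MAX_VALUE (2**63 - 1), the "to the end of the stream" sentinel
LONG_MAX = 2 ** 63 - 1
print(LONG_MAX)  # 9223372036854775807
```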
+
+#### Request Sample
+
+```sh
+{  
+   "sourceOffsetStart": 0, 
+   "sourceOffsetEnd": 9223372036854775807, 
+   "buildType": "BUILD"
+}
+```
+
+#### Response Sample
+```sh
+{
+    "uuid": "3afd6e75-f921-41e1-8c68-cb60bc72a601", 
+    "last_modified": 1480402541240, 
+    "version": "1.6.0", 
+    "name": "embedded_cube_clone - 1409830324_1409849348 - BUILD - PST 2016-11-28 22:55:41", 
+    "type": "BUILD", 
+    "duration": 0, 
+    "related_cube": "embedded_cube_clone", 
+    "related_segment": "42ebcdea-cbe9-4905-84db-31cb25f11515", 
+    "exec_start_time": 0, 
+    "exec_end_time": 0, 
+    "mr_waiting": 0, 
+ ...
+}
+```
+
+## Check segment holes
+`GET /kylin/api/cubes/{cubeName}/holes`
+
+#### Path variable
+* cubeName - `required` `string` Cube name
+
+## Fill segment holes
+`PUT /kylin/api/cubes/{cubeName}/holes`
+
+#### Path variable
+* cubeName - `required` `string` Cube name

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/howto/howto_use_restapi_in_js.md
----------------------------------------------------------------------
diff --git a/website/_docs20/howto/howto_use_restapi_in_js.md b/website/_docs20/howto/howto_use_restapi_in_js.md
new file mode 100644
index 0000000..6bdfae4
--- /dev/null
+++ b/website/_docs20/howto/howto_use_restapi_in_js.md
@@ -0,0 +1,46 @@
+---
+layout: docs20
+title:  Use RESTful API in Javascript
+categories: howto
+permalink: /docs20/howto/howto_use_restapi_in_js.html
+---
+Kylin security is based on Basic access authentication. If you want to use the API in your JavaScript, you need to add the authorization info to the HTTP headers.
+
+## Example on Query API
+```
+$.ajaxSetup({
+      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
+    });
+    var request = $.ajax({
+       url: "http://hostname/kylin/api/query",
+       type: "POST",
+       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
+       dataType: "json"
+    });
+    request.done(function( msg ) {
+       alert(msg);
+    }); 
+    request.fail(function( jqXHR, textStatus ) {
+       alert( "Request failed: " + textStatus );
+  });
+
+```
+
+## Keypoints
+1. Add the Basic access authentication info to the HTTP headers.
+2. Use the right AJAX request type and data syntax.
+
+## Basic access authentication
+For what Basic access authentication is, refer to the [Wikipedia Page](http://en.wikipedia.org/wiki/Basic_access_authentication).
+To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
+
+```
+var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
+ 
+$.ajaxSetup({
+   headers: { 
+    'Authorization': "Basic " + authorizationCode, 
+    'Content-Type': 'application/json;charset=utf-8' 
+   }
+});
+```
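Outside the browser, the header value is just the Base64 encoding of `user:password`; a minimal Python sketch (ADMIN/KYLIN are Kylin's shipped default credentials, used here only for illustration):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the value for the HTTP 'Authorization' header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return "Basic " + token

print(basic_auth_header("ADMIN", "KYLIN"))
# Basic QURNSU46S1lMSU4=
```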

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/index.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/index.cn.md b/website/_docs20/index.cn.md
new file mode 100644
index 0000000..83b6f55
--- /dev/null
+++ b/website/_docs20/index.cn.md
@@ -0,0 +1,26 @@
+---
+layout: docs20-cn
+title: Overview
+categories: docs
+permalink: /cn/docs20/index.html
+---
+
+Welcome to Apache Kylin\u2122
+------------  
+> Extreme OLAP Engine for Big Data
+
+Apache Kylin\u2122 is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop to support extremely large datasets. It was originally developed by eBay Inc. and contributed to the open source community.
+
+Documents of prior versions: 
+* [v1.5](/cn/docs15/)
+* [v1.3](/cn/docs/) 
+
+Installation 
+------------  
+Please refer to the installation documents to install Apache Kylin: [Installation Guide](/cn/docs20/install/)
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/index.md
----------------------------------------------------------------------
diff --git a/website/_docs20/index.md b/website/_docs20/index.md
new file mode 100644
index 0000000..f34112c
--- /dev/null
+++ b/website/_docs20/index.md
@@ -0,0 +1,59 @@
+---
+layout: docs20
+title: Overview
+categories: docs
+permalink: /docs20/index.html
+---
+
+Welcome to Apache Kylin\u2122: Extreme OLAP Engine for Big Data
+------------  
+
+Apache Kylin\u2122 is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets.
+
+Document of prior versions: 
+
+* [v1.6.x document](/docs16/)
+* [v1.5.x document](/docs15/)
+* [v1.3.x document](/docs/) 
+
+Installation & Setup
+------------  
+1. [Hadoop Env](install/hadoop_env.html)
+2. [Installation Guide](install/index.html)
+3. [Advanced settings](install/advance_settings.html)
+4. [Deploy in cluster mode](install/kylin_cluster.html)
+5. [Run Kylin with Docker](install/kylin_docker.html)
+
+
+Tutorial
+------------  
+1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
+2. [Cube Creation](tutorial/create_cube.html)
+3. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
+4. [Web Interface](tutorial/web.html)
+5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
+6. [Build Cube with Streaming Data](tutorial/cube_streaming.html)
+7. [Build Cube with Spark Engine (beta)](tutorial/cube_spark.html)
+
+
+Connectivity and APIs
+------------  
+1. [ODBC driver](tutorial/odbc.html)
+2. [JDBC driver](howto/howto_jdbc.html)
+3. [RESTful API list](howto/howto_use_restapi.html)
+4. [Build cube with RESTful API](howto/howto_build_cube_with_restapi.html)
+5. [Call RESTful API in Javascript](howto/howto_use_restapi_in_js.html)
+6. [Connect from MS Excel and PowerBI](tutorial/powerbi.html)
+7. [Connect from Tableau 8](tutorial/tableau.html)
+8. [Connect from Tableau 9](tutorial/tableau_91.html)
+9. [Connect from SQuirreL](tutorial/squirrel.html)
+10. [Connect from Apache Flink](tutorial/flink.html)
+
+Operations
+------------  
+1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)
+2. [Cleanup storage (HDFS & HBase)](howto/howto_cleanup_storage.html)
+3. [Upgrade from old version](howto/howto_upgrade.html)
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/advance_settings.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/advance_settings.md b/website/_docs20/install/advance_settings.md
new file mode 100644
index 0000000..f76d39a
--- /dev/null
+++ b/website/_docs20/install/advance_settings.md
@@ -0,0 +1,98 @@
+---
+layout: docs20
+title:  "Advanced Settings"
+categories: install
+permalink: /docs20/install/advance_settings.html
+---
+
+## Overwrite default kylin.properties at Cube level
+In `conf/kylin.properties` there are many parameters that control or impact Kylin's behavior. Most are global configs, such as security or job related settings, while some are Cube related. The Cube related parameters can be customized at each Cube level, so you can control the behavior more flexibly. The GUI for this is the "Configuration Overwrites" step of the Cube wizard, as in the screenshot below.
+
+![]( /images/install/overwrite_config.png)
+
+Here are two examples: 
+
+ * `kylin.cube.algorithm`: defines the cubing algorithm the job engine will select. Its default value is "auto", meaning the engine dynamically picks an algorithm ("layer" or "inmem") by sampling the data. If you know Kylin and your data/cluster well, you can set your preferred algorithm directly (usually "inmem" performs better but requires more memory).   
+
+ * `kylin.hbase.region.cut`: defines how big a region is when creating the HBase table. The default value is "5" (GB) per region. This may be too big for a small or medium cube, so you can set a smaller value to get more regions created and thus better query performance.
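For instance, for a small cube you might overwrite both parameters in the "Configuration Overwrites" step (the values here are illustrative, not recommendations):

{% highlight Groff markup %}
kylin.cube.algorithm=inmem
kylin.hbase.region.cut=2
{% endhighlight %}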
+
+## Overwrite default Hadoop job conf at Cube level
+The `conf/kylin_job_conf.xml` and `conf/kylin_job_conf_inmem.xml` files manage the default configurations for Hadoop jobs. If you need to customize these configs per cube, you can do it in a similar way as above, but with the prefix `kylin.job.mr.config.override.`. These configs will be parsed out and applied when submitting jobs. See the two examples below:
+
+ * If you want a cube's jobs to get more memory from YARN, define: `kylin.job.mr.config.override.mapreduce.map.java.opts=-Xmx7g` and `kylin.job.mr.config.override.mapreduce.map.memory.mb=8192`
+ * If you want a cube's jobs to go to a different YARN resource queue, define: `kylin.job.mr.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
+
+## Overwrite default Hive job conf at Cube level
+The `conf/kylin_hive_conf.xml` file manages the default configurations for Hive jobs (like creating the intermediate flat Hive table). If you need to customize these configs per cube, you can do it in a similar way as above, but with the prefix `kylin.hive.config.override.`. These configs will be parsed out and applied when running the "hive -e" or "beeline" commands. See the example below:
+
+ * If you want Hive to go to a different YARN resource queue, define: `kylin.hive.config.override.mapreduce.job.queuename=myQueue` (note: "myQueue" is just a sample)
+
+
+## Enable compression
+
+By default, Kylin does not enable compression. This is not the recommended setting for a production environment, but a tradeoff for new Kylin users. A suitable compression algorithm will reduce the storage overhead, but an unsupported algorithm will break the Kylin job build. There are three kinds of compression used in Kylin: HBase table compression, Hive output compression and MR job output compression. 
+
+* HBase table compression
+The compression setting is defined in `kylin.properties` by `kylin.hbase.default.compression.codec`; the default value is *none*. Valid values include *none*, *snappy*, *lzo*, *gzip* and *lz4*. Before changing the compression algorithm, please make sure the selected algorithm is supported on your HBase cluster. Especially for snappy, lzo and lz4, not all Hadoop distributions include these. 
+
+* Hive output compression
+The compression settings are defined in `kylin_hive_conf.xml`. The default setting is empty, which leverages the Hive default configuration. If you want to override the settings, add (or replace) the following properties in `kylin_hive_conf.xml`, taking snappy compression as an example:
+{% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.output.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+    <property>
+        <name>mapreduce.output.fileoutputformat.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+{% endhighlight %}
+
+* MR jobs output compression
+The compression settings are defined in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`. The default setting is empty, which leverages the MR default configuration. If you want to override the settings, add (or replace) the following properties in `kylin_job_conf.xml` and `kylin_job_conf_inmem.xml`, taking snappy compression as an example:
+{% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.output.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+    <property>
+        <name>mapreduce.output.fileoutputformat.compress.codec</name>
+        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
+        <description></description>
+    </property>
+{% endhighlight %}
+
+Compression settings only take effect after restarting the Kylin server instance.
+
+## Allocate more memory to Kylin instance
+
+Open `bin/setenv.sh`, which has two sample settings for the `KYLIN_JVM_SETTINGS` environment variable. The default setting is small (4 GB at max); you can comment it out and then un-comment the next line to allocate 16 GB:
+
+{% highlight Groff markup %}
+export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
+# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"
+{% endhighlight %}
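+
+After restarting Kylin, you can verify that the new heap settings took effect by inspecting the running JVM's arguments, for example with the JDK's `jps` tool (look for the `-Xmx` value):
+{% highlight Groff markup %}
+jps -lvm | grep -- -Xmx
+{% endhighlight %}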
+
+## Enable LDAP or SSO authentication
+
+Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)
+
+
+## Enable email notification
+
+Kylin can send email notifications on job completion or failure. To enable this, edit `conf/kylin.properties` and set the following parameters:
+{% highlight Groff markup %}
+mail.enabled=true
+mail.host=your-smtp-server
+mail.username=your-smtp-account
+mail.password=your-smtp-pwd
+mail.sender=your-sender-address
+kylin.job.admin.dls=administrator-address
+{% endhighlight %}
+
+Restart the Kylin server for the change to take effect. To disable, set `mail.enabled` back to `false`.
+
+The administrator will get notifications for all jobs. Modelers and analysts need to enter their email addresses into the "Notification List" on the first page of the cube wizard, and will then get notified for that cube.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/hadoop_evn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/hadoop_evn.md b/website/_docs20/install/hadoop_evn.md
new file mode 100644
index 0000000..2c300df
--- /dev/null
+++ b/website/_docs20/install/hadoop_evn.md
@@ -0,0 +1,40 @@
+---
+layout: docs20
+title:  "Hadoop Environment"
+categories: install
+permalink: /docs20/install/hadoop_env.html
+---
+
+Kylin needs to run on a Hadoop node. For better stability, we suggest you deploy it on a pure Hadoop client machine, on which command lines like `hive`, `hbase`, `hadoop` and `hdfs` are already installed and configured. The Linux account running Kylin must have permission to use the Hadoop cluster, including creating/writing HDFS files, Hive tables and HBase tables, and submitting MR jobs. 
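+
+A quick way to sanity-check the client machine is to run each command line under the account that will run Kylin; all of them should succeed without extra configuration (output varies by distribution):
+
+```
+hadoop version
+hdfs dfs -ls /
+hive -e "show databases;"
+hbase version
+```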
+
+## Recommended Hadoop Versions
+
+* Hadoop: 2.6 - 2.7
+* Hive: 0.13 - 1.2.1
+* HBase: 0.98 - 0.99, 1.x
+* JDK: 1.7+
+
+_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1. Windows and MacOS have known issues._
+
+To make things easier we strongly recommend you try Kylin with an all-in-one sandbox VM, like [HDP sandbox](http://hortonworks.com/products/hortonworks-sandbox/), and give it 10 GB memory. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**. 
+
+To avoid permission issues in the sandbox, you can use its `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
+
+We also suggest using bridged mode instead of NAT mode in the VirtualBox settings. Bridged mode assigns your sandbox an independent IP address so that you can avoid issues like [this](https://github.com/KylinOLAP/Kylin/issues/12).
+
+### Start Hadoop
+Use Ambari to launch Hadoop:
+
+```
+ambari-agent start
+ambari-server start
+```
+
+Once both commands have run successfully, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check the status of everything. **By default Hortonworks Ambari disables HBase; you need to manually start the `HBase` service on the Ambari homepage.**
+
+![start hbase in ambari](https://raw.githubusercontent.com/KylinOLAP/kylinolap.github.io/master/docs/installation/starthbase.png)
+
+**Additional Info for setting up Hortonworks Sandbox on VirtualBox**
+
+	Please make sure the HBase Master port (default 60000) and the ZooKeeper port (default 2181) are forwarded to the host OS.
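+
+On VirtualBox with NAT networking, the forwarding rules can be added from the host while the VM is powered off; for example (the VM name "Hortonworks Sandbox" is illustrative, substitute your own):
+
+```
+VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "hbase-master,tcp,,60000,,60000"
+VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "zookeeper,tcp,,2181,,2181"
+```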
+ 

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/index.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/index.cn.md b/website/_docs20/install/index.cn.md
new file mode 100644
index 0000000..68b5aec
--- /dev/null
+++ b/website/_docs20/install/index.cn.md
@@ -0,0 +1,46 @@
+---
+layout: docs20
+title:  "Installation Guide"
+categories: install
+permalink: /cn/docs20/install/index.html
+version: v0.7.2
+since: v0.7.1
+---
+
+### Environment
+
+Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
+
+## Prerequisites on Hadoop
+
+* Hadoop: 2.4+
+* Hive: 0.13+
+* HBase: 0.98+, 1.x
+* JDK: 1.7+  
+_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
+
+
+It is most common to install Kylin on a Hadoop client machine. It can be used for demos, or by those who want to host their own web site to provide the Kylin service. The scenario is depicted as:
+
+![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
+
+For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
+
+Apart from some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
+
+### Install Kylin
+
+1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
+2. Export KYLIN_HOME pointing to the extracted Kylin folder
+3. Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, run **bin/check-env.sh**; it will print out detailed information if you have any environment issues.
+4. To start Kylin, simply run **bin/kylin.sh start**
+5. To stop Kylin, simply run **bin/kylin.sh stop**
+
+> If you want to have multiple Kylin nodes please refer to [this](kylin_cluster.html)
+
+After Kylin starts you can visit <http://your_hostname:7070/kylin>. The username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in there. To start, you can:
+
+1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
+2. [Create and Build your own cube](../tutorial/create_cube.html)
+3. [Kylin Web Tutorial](../tutorial/web.html)
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/index.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/index.md b/website/_docs20/install/index.md
new file mode 100644
index 0000000..77794e1
--- /dev/null
+++ b/website/_docs20/install/index.md
@@ -0,0 +1,35 @@
+---
+layout: docs20
+title:  "Installation Guide"
+categories: install
+permalink: /docs20/install/index.html
+---
+
+### Environment
+
+Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check [Hadoop Environment](hadoop_env.html).
+
+It is most common to install Kylin on a Hadoop client machine, from which Kylin can talk with the Hadoop cluster via command lines including `hive`, `hbase`, `hadoop`, etc. The scenario is depicted as:
+
+![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
+
+For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
+
+Apart from some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
+
+### Install Kylin
+
+1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
+2. Export KYLIN_HOME pointing to the extracted Kylin folder
+3. Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, run **bin/check-env.sh**; it will print out detailed information if you have any environment issues.
+4. To start Kylin, run **bin/kylin.sh start**; after the server starts, you can watch `logs/kylin.log` for runtime logs
+5. To stop Kylin, run **bin/kylin.sh stop**
+
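+A typical session for the steps above might look like this (the release version in the file name is just an example; use the version you downloaded):
+
+{% highlight Groff markup %}
+tar -zxvf apache-kylin-2.0.0-bin.tar.gz
+cd apache-kylin-2.0.0-bin
+export KYLIN_HOME=`pwd`
+bin/check-env.sh
+bin/kylin.sh start
+{% endhighlight %}
+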
+> If you want to have multiple Kylin nodes running to provide high availability, please refer to [this](kylin_cluster.html)
+
+After Kylin starts you can visit <http://hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. It's a clean Kylin homepage with nothing in there. To start, you can:
+
+1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
+2. [Create and Build a cube](../tutorial/create_cube.html)
+3. [Kylin Web Tutorial](../tutorial/web.html)
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/kylin_cluster.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/kylin_cluster.md b/website/_docs20/install/kylin_cluster.md
new file mode 100644
index 0000000..d7fec7e
--- /dev/null
+++ b/website/_docs20/install/kylin_cluster.md
@@ -0,0 +1,32 @@
+---
+layout: docs20
+title:  "Deploy in Cluster Mode"
+categories: install
+permalink: /docs20/install/kylin_cluster.html
+---
+
+
+### Kylin Server modes
+
+Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (see the kylin.metadata.url config in conf/kylin.properties). For load balancing it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.)
+
+Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying its runtime mode. It has three options: "job" for running the job engine only, "query" for running the query engine only, and "all" for running both. Notice that only one instance may run the job engine ("all" or "job" mode); all the others must run in "query" mode.
+
+A typical scenario is depicted in the following chart:
+
+![]( /images/install/kylin_server_modes.png)
+
+### Setting up Multiple Kylin REST servers
+
+If you are running Kylin in a cluster with multiple Kylin REST server instances, please make sure the following properties are correctly configured in ${KYLIN_HOME}/conf/kylin.properties for EVERY server instance.
+
+1. kylin.rest.servers 
+	The list of web servers in use; this enables one web server instance to sync up with the others. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
+  
+2. kylin.server.mode
+	Make sure there is only one instance whose "kylin.server.mode" is set to "all" (or "job"); all the others should be "query".
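+
+For example, in a two-instance deployment (the hostnames sandbox1/sandbox2 are illustrative), the relevant entries would be:
+
+{% highlight Groff markup %}
+# on sandbox1 -- runs the job engine and the query engine
+kylin.rest.servers=sandbox1:7070,sandbox2:7070
+kylin.server.mode=all
+
+# on sandbox2 -- query engine only
+kylin.rest.servers=sandbox1:7070,sandbox2:7070
+kylin.server.mode=query
+{% endhighlight %}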
+	
+## Set up load balancer 
+
+To enable Kylin high availability, you need to set up a load balancer in front of these servers and let it route incoming requests to the cluster. Clients send all requests to the load balancer instead of talking to a specific instance. 
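+
+As a sketch, an Nginx configuration fronting two instances might look like this (Nginx is only one option; any HTTP load balancer works, and the hostnames are illustrative):
+
+{% highlight Groff markup %}
+upstream kylin_servers {
+    server sandbox1:7070;
+    server sandbox2:7070;
+}
+server {
+    listen 80;
+    location /kylin {
+        proxy_pass http://kylin_servers;
+    }
+}
+{% endhighlight %}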
+	

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/kylin_docker.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/kylin_docker.md b/website/_docs20/install/kylin_docker.md
new file mode 100644
index 0000000..a0a09eb
--- /dev/null
+++ b/website/_docs20/install/kylin_docker.md
@@ -0,0 +1,10 @@
+---
+layout: docs20
+title:  "Run Kylin with Docker"
+categories: install
+permalink: /docs20/install/kylin_docker.html
+version: v1.5.3
+since: v1.5.2
+---
+
+Apache Kylin runs as a client of the Hadoop cluster, so it is reasonable to run it within a Docker container; please check [this project](https://github.com/Kyligence/kylin-docker/) on GitHub.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/install/manual_install_guide.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/install/manual_install_guide.cn.md b/website/_docs20/install/manual_install_guide.cn.md
new file mode 100644
index 0000000..b369568
--- /dev/null
+++ b/website/_docs20/install/manual_install_guide.cn.md
@@ -0,0 +1,48 @@
+---
+layout: docs20-cn
+title:  "Manual Installation Guide"
+categories: install
+permalink: /cn/docs20/install/manual_install_guide.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## Introduction
+
+In most cases, our automated script ([Installation Guide](./index.html)) can help you launch Kylin on your Hadoop sandbox or even your Hadoop cluster. However, in case the deployment script goes wrong, we wrote this document as a reference guide to fix your problem.
+
+Basically this document explains every step in the automated script. We assume you are already very familiar with Hadoop operations on Linux.
+
+## Prerequisites
+* Tomcat installed, with CATALINA_HOME exported. 
+* Kylin binary package copied to the local machine and extracted, then referenced as $KYLIN_HOME
+
+## Steps
+
+### Prepare jars
+
+Kylin requires two jars; the two jars and their paths are configured in the default kylin.properties:
+
+```
+kylin.job.jar=/tmp/kylin/kylin-job-latest.jar
+
+```
+
+This is the job jar that Kylin uses for MR jobs. You need to copy $KYLIN_HOME/job/target/kylin-job-latest.jar to /tmp/kylin/
+
+```
+kylin.coprocessor.local.jar=/tmp/kylin/kylin-coprocessor-latest.jar
+
+```
+
+This is the HBase coprocessor jar that Kylin will deploy to HBase; it is used to improve performance. You need to copy $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar to /tmp/kylin/
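+
+Taken together, the two copy steps look like this (creating /tmp/kylin first, in case it does not exist yet):
+
+```
+mkdir -p /tmp/kylin
+cp $KYLIN_HOME/job/target/kylin-job-latest.jar /tmp/kylin/
+cp $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar /tmp/kylin/
+```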
+
+### Start Kylin
+
+Start Kylin with:
+
+`./kylin.sh start`
+
+and stop Kylin with:
+
+`./kylin.sh stop`