Posted to commits@kylin.apache.org by li...@apache.org on 2016/03/12 10:57:19 UTC

[3/6] kylin git commit: rename 2.x to 1.5 in documents

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/gettingstarted/faq.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/faq.md b/website/_docs2/gettingstarted/faq.md
deleted file mode 100644
index 3087f37..0000000
--- a/website/_docs2/gettingstarted/faq.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-layout: docs2
-title:  "FAQ"
-categories: gettingstarted
-permalink: /docs2/gettingstarted/faq.html
-version: v0.7.2
-since: v0.6.x
----
-
-### Some NPM errors cause an ERROR exit (this mainly affects users in mainland China)
-For people from China:
-
-* Please add a proxy for your NPM:
-`npm config set proxy http://YOUR_PROXY_IP`
-
-* Please update your local NPM repository to use a mirror of npmjs.org, such as Taobao NPM:
-[http://npm.taobao.org](http://npm.taobao.org)
-
-### "Can't get master address from ZooKeeper" when installing Kylin on Hortonworks Sandbox
-Check out [https://github.com/KylinOLAP/Kylin/issues/9](https://github.com/KylinOLAP/Kylin/issues/9).
-
-### MapReduce job information doesn't display on a sandbox deployment
-Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40).
-
-#### Install Kylin on CDH 5.2 or Hadoop 2.5.x
-Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
-{% highlight Groff markup %}
-I was able to deploy Kylin with following option in POM.
-<hadoop2.version>2.5.0</hadoop2.version>
-<yarn.version>2.5.0</yarn.version>
-<hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
-<zookeeper.version>3.4.5</zookeeper.version>
-<hive.version>0.13.1</hive.version>
-My Cluster is running on Cloudera Distribution CDH 5.2.0.
-{% endhighlight %}
-
-#### Unable to load a big cube as HTable, with java.lang.OutOfMemoryError: unable to create new native thread
-HBase (as of writing) allocates one thread per region when bulk loading an HTable. Try reducing the number of regions of your cube by setting its "capacity" to "MEDIUM" or "LARGE". Tweaking the OS and JVM can also allow more threads; for example, see [this article](http://blog.egilh.com/2006/06/2811aspx.html).
-
-#### Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
-Users may get this error the first time they run an HBase client. Check the error trace to see whether it reports a failure to access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
-
-#### SUM(field) returns a negative result while all the numbers in this field are > 0
-If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type for "SUM(field)", while the aggregated value on this field may exceed the range of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need rebuilding). Keep in mind: always declare an integer column as BIGINT in Hive if it will be used as a measure in Kylin. See Hive numeric types: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes)
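This overflow is easy to reproduce outside Kylin. A minimal sketch (Python integers are arbitrary-precision, so we emulate the 32-bit cast that causes the problem):

```python
def to_int32(n):
    """Emulate a cast to a signed 32-bit integer, as happens when
    SUM over an INT-typed column overflows the integer range."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# Each row's value fits in a 32-bit INT, but the aggregate does not.
rows = [2_000_000_000, 2_000_000_000]  # all values > 0
total = to_int32(sum(rows))
print(total)  # negative due to wraparound; a 64-bit BIGINT avoids this
```

The true sum (4,000,000,000) exceeds 2^31 - 1, so the cast wraps it into a negative number, exactly the symptom described above.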
-
-#### Why does Kylin need to extract the distinct columns from the fact table before building a cube?
-Kylin uses a dictionary to encode the values in each column; this greatly reduces the cube's storage size. To build the dictionary, Kylin needs to fetch the distinct values for each column.
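To illustrate why the distinct values are needed first, here is a toy sketch of dictionary encoding (illustrative only, not Kylin's actual implementation): the distinct values form the dictionary, and each cell is then stored as a compact integer ID.

```python
def build_dictionary(values):
    """Map each distinct value to a compact integer ID.
    The dictionary can only be built once all distinct values are known."""
    return {v: i for i, v in enumerate(sorted(set(values)))}

column = ["US", "CN", "US", "UK", "CN", "US"]
dictionary = build_dictionary(column)          # needs the distinct values up front
encoded = [dictionary[v] for v in column]      # store small ints instead of strings
print(dictionary)  # {'CN': 0, 'UK': 1, 'US': 2}
print(encoded)     # [2, 0, 2, 1, 0, 2]
```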
-
-#### Why does Kylin calculate the Hive table cardinality?
-The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention, and > 1,000,000 should be avoided as much as possible. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
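As a rough illustration of those thresholds (the function and labels here are made up, not part of Kylin), a cardinality check might look like:

```python
def classify_cardinality(values):
    """Flag a dimension column by distinct-value count, using the rule of
    thumb above: > 1,000 is worth attention, > 1,000,000 should be avoided."""
    card = len(set(values))
    if card > 1_000_000:
        return card, "avoid"
    if card > 1_000:
        return card, "attention"
    return card, "ok"

print(classify_cardinality(["US", "CN", "UK"]))  # (3, 'ok')
print(classify_cardinality(range(5_000)))        # (5000, 'attention')
```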
-
-#### How to add new user or change the default password?
-Kylin web's security is implemented with Spring security framework, where the kylinSecurity.xml is the main configuration file:
-{% highlight Groff markup %}
-${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
-{% endhighlight %}
-The password hashes for the pre-defined test users can be found in the profile "sandbox,testing" part. To change the default password, you need to generate a new hash and then update it here; please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
-When you deploy Kylin for more users, switching to LDAP authentication is recommended. To enable LDAP authentication, set "kylin.sandbox" to false in conf/kylin.properties, and also configure the ldap.* properties in ${KYLIN_HOME}/conf/kylin.properties.
-
-#### Using sub-query for un-supported SQL
-
-{% highlight Groff markup %}
-Original SQL:
-select fact.slr_sgmt,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from ih_daily_fact fact
-inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-group by fact.slr_sgmt
-{% endhighlight %}
-
-{% highlight Groff markup %}
-Using a sub-query:
-select a.slr_sgmt,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
-sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
-from (
-    select fact.slr_sgmt as slr_sgmt,
-    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
-    sum(gmv) as gmv
-    from ih_daily_fact fact
-    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
-    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
-) a
-group by a.slr_sgmt
-{% endhighlight %}
-
-
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/gettingstarted/terminology.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/terminology.md b/website/_docs2/gettingstarted/terminology.md
deleted file mode 100644
index f6c615d..0000000
--- a/website/_docs2/gettingstarted/terminology.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-layout: docs2
-title:  "Terminology"
-categories: gettingstarted
-permalink: /docs2/gettingstarted/terminology.html
-version: v1.0
-since: v0.5.x
----
- 
-
-Here are some domain terms we use in Apache Kylin; please check them for your reference.
-They are basic knowledge of Apache Kylin and will also help you understand related concepts, terms and theory around data warehousing and business intelligence for analytics.
-
-* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
-* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
-* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
-* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
-* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
-* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
-* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
-* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
-* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
-* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
-
-
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_backup_hbase.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_backup_hbase.md b/website/_docs2/howto/howto_backup_hbase.md
deleted file mode 100644
index 17bc51a..0000000
--- a/website/_docs2/howto/howto_backup_hbase.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-layout: docs2
-title:  How to Clean/Backup HBase Tables
-categories: howto
-permalink: /docs2/howto/howto_backup_hbase.html
-version: v1.0
-since: v0.7.1
----
-
-Kylin persists all data (metadata and cube) in HBase. You may sometimes want to export the data for various purposes
-(backup, migration, troubleshooting, etc.). This page describes the steps to do this, and there is also a Java app to do it easily.
-
-Steps:
-
-1. Clean up unused cubes to save storage space (be cautious on production!) by running the following command: 
-{% highlight Groff markup %}
-hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.job.hadoop.cube.StorageCleanupJob --delete true
-{% endhighlight %}
-2. List all HBase tables, then iterate over them and export each Kylin table to HDFS; 
-See [https://hbase.apache.org/book/ops_mgt.html#export](https://hbase.apache.org/book/ops_mgt.html#export)
-
-3. Copy the export folder from HDFS to local file system, and then archive it;
-
-4. (optional) Download the archive from Hadoop CLI to local;
-
-5. Clean up the export folder in HDFS and the local file system;
-
-Kylin provides "ExportHBaseData.java" (currently only in the "minicluster" branch) to do
-steps 2-5 in one run. Please ensure the correct path of "kylin.properties" is set in the system environment; this Java app uses the sandbox config by default.

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_backup_metadata.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_backup_metadata.md b/website/_docs2/howto/howto_backup_metadata.md
deleted file mode 100644
index 7e5e439..0000000
--- a/website/_docs2/howto/howto_backup_metadata.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-layout: docs2
-title:  How to Backup Metadata
-categories: howto
-permalink: /docs2/howto/howto_backup_metadata.html
-version: v1.0
-since: v0.7.1
----
-
-Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin stores it in HBase rather than a normal file system. If you check your Kylin configuration file (kylin.properties), you will find such a line:
-
-{% highlight Groff markup %}
-## The metadata store in hbase
-kylin.metadata.url=kylin_metadata@hbase
-{% endhighlight %}
-
-This indicates that the metadata will be saved in an HTable called `kylin_metadata`. You can scan the HTable in the hbase shell to check it out.
-
-## Backup Metadata Store with binary package
-
-Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
-In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
-
-{% highlight Groff markup %}
-./bin/metastore.sh backup
-{% endhighlight %}
-
-to dump your metadata to a local folder under KYLIN_HOME/meta_backups. The folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
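The folder name follows the timestamp syntax above; a sketch of how such a name is formed (illustrative only, not the script's actual code):

```python
from datetime import datetime

def backup_folder_name(now=None):
    """Build a meta_year_month_day_hour_minute_second folder name."""
    now = now or datetime.now()
    return now.strftime("meta_%Y_%m_%d_%H_%M_%S")

print(backup_folder_name(datetime(2016, 3, 12, 10, 57, 19)))
# meta_2016_03_12_10_57_19
```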
-
-## Restore Metadata Store with binary package
-
-In case you find your metadata store messed up, and you want to restore to a previous backup:
-
-First, reset the metadata store (this will clean everything in the Kylin metadata store in HBase, so make sure to back up first):
-
-{% highlight Groff markup %}
-./bin/metastore.sh reset
-{% endhighlight %}
-
-Then upload the backup metadata to Kylin's metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
-{% endhighlight %}
-
-## Backup/restore metadata in development env (available since 0.7.3)
-
-When developing/debugging Kylin, typically you have a dev machine with an IDE, and a backend sandbox. Usually you'll write code and run test cases at dev machine. It would be troublesome if you always have to put a binary package in the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally at your dev machine. Follow the Usage information and run it in your IDE.
-
-## Cleanup unused resources from Metadata Store (available since 0.7.3)
-As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space. You can run a command to find and clean them up from the metadata store:
-
-First, run a check; this is safe, as it will not change anything:
-{% highlight Groff markup %}
-./bin/metastore.sh clean
-{% endhighlight %}
-
-The resources that would be dropped will be listed.
-
-Next, add the "--delete true" parameter to clean up those resources; before doing this, make sure you have made a backup of the metadata store:
-{% highlight Groff markup %}
-./bin/metastore.sh clean --delete true
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_build_cube_with_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_build_cube_with_restapi.md b/website/_docs2/howto/howto_build_cube_with_restapi.md
deleted file mode 100644
index 0bae7bf..0000000
--- a/website/_docs2/howto/howto_build_cube_with_restapi.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-layout: docs2
-title:  How to Build Cube with Restful API
-categories: howto
-permalink: /docs2/howto/howto_build_cube_with_restapi.html
-version: v1.2
-since: v0.7.1
----
-
-### 1.	Authentication
-*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
-*   Add an `Authorization` header to the first request for authentication.
-*   Or you can do a specific request via `POST http://localhost:7070/kylin/api/user/authentication`.
-*   Once authenticated, the client can make subsequent requests with cookies.
-{% highlight Groff markup %}
-POST http://localhost:7070/kylin/api/user/authentication
-    
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-
-### 2.	Get details of the cube. 
-*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
-*   The client can find cube segment date ranges in the returned cube detail.
-{% highlight Groff markup %}
-GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-{% endhighlight %}
-### 3.	Submit a build job for the cube. 
-*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
-*   For PUT request body details, please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
-    *   `startTime` and `endTime` should be UTC timestamps in milliseconds.
-    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment. `MERGE` is for merging multiple existing segments into one bigger segment.
-*   This method will return a newly created job instance, whose uuid is the unique ID used to track job status.
-{% highlight Groff markup %}
-PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
-
-Authorization:Basic xxxxJD124xxxGFxxxSDF
-Content-Type: application/json;charset=UTF-8
-    
-{
-    "startTime": 0,
-    "endTime": 1388563200000,
-    "buildType": "BUILD"
-}
-{% endhighlight %}
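Since `startTime` and `endTime` are UTC timestamps in milliseconds, a small helper like the following can build the request body (a sketch, not part of Kylin; the helper name is made up):

```python
import json
from datetime import datetime, timezone

def build_rebuild_body(start, end, build_type="BUILD"):
    """Build the JSON body for the rebuild API; times become UTC millis."""
    to_millis = lambda dt: int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)
    return json.dumps({
        "startTime": to_millis(start),
        "endTime": to_millis(end),
        "buildType": build_type,
    })

# Epoch start to midnight 2014-01-01 UTC
body = build_rebuild_body(datetime(1970, 1, 1), datetime(2014, 1, 1))
print(body)  # {"startTime": 0, "endTime": 1388534400000, "buildType": "BUILD"}
```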
-
-### 4.	Track job status. 
-*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
-*   The returned `job_status` represents the current status of the job.
-
-### 5.	If the job encounters errors, you can resume it. 
-*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_cleanup_storage.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_cleanup_storage.md b/website/_docs2/howto/howto_cleanup_storage.md
deleted file mode 100644
index 8ccab53..0000000
--- a/website/_docs2/howto/howto_cleanup_storage.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-layout: docs2
-title:  How to Cleanup Storage (HDFS & HBase Tables)
-categories: howto
-permalink: /docs2/howto/howto_cleanup_storage.html
-version: v2
-since: v2
----
-
-Kylin generates intermediate files in HDFS during cube building. Besides, when you purge/drop/merge cubes, some HBase tables may be left in HBase and will no longer be queried. Although Kylin has started to do some 
-automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
-
-Steps:
-1. Check which resources can be cleaned up; this will not remove anything:
-{% highlight Groff markup %}
-hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
-{% endhighlight %}
-Here please replace (version) with the specific Kylin jar version in your installation.
-2. You can pick one or two resources to check whether they are still referenced; then add the "--delete true" option to start the cleanup:
-{% highlight Groff markup %}
-hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
-{% endhighlight %}
-When it finishes, the intermediate HDFS locations and HTables will be dropped.

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_jdbc.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_jdbc.md b/website/_docs2/howto/howto_jdbc.md
deleted file mode 100644
index 871b75a..0000000
--- a/website/_docs2/howto/howto_jdbc.md
+++ /dev/null
@@ -1,94 +0,0 @@
----
-layout: docs2
-title:  How to Use Kylin Remote JDBC Driver
-categories: howto
-permalink: /docs2/howto/howto_jdbc.html
-version: v1.2
-since: v0.7.1
----
-
-### Authentication
-
-###### Built on the Kylin authentication RESTful service. Supported parameters:
-* user : username 
-* password : password
-* ssl : true/false. Default is false; if true, all service calls will use HTTPS.
-
-### Connection URL format:
-{% highlight Groff markup %}
-jdbc:kylin://<hostname>:<port>/<kylin_project_name>
-{% endhighlight %}
-* If "ssl" = true, the "port" should be the Kylin server's HTTPS port; 
-* If "port" is not specified, the driver will use the default port: HTTP 80, HTTPS 443;
-* The "kylin_project_name" must be specified, and the user must ensure it exists on the Kylin server;
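The default-port rules above can be sketched as follows (illustrative parsing only; the real driver handles this internally):

```python
def resolve_endpoint(url, ssl=False):
    """Parse jdbc:kylin://<host>[:<port>]/<project>, applying the
    defaults described above: HTTP 80, HTTPS 443."""
    rest = url[len("jdbc:kylin://"):]
    hostport, project = rest.split("/", 1)
    if ":" in hostport:
        host, port = hostport.split(":")
        port = int(port)
    else:
        host, port = hostport, (443 if ssl else 80)
    return host, port, project

print(resolve_endpoint("jdbc:kylin://localhost:7070/kylin_project_name"))
# ('localhost', 7070, 'kylin_project_name')
print(resolve_endpoint("jdbc:kylin://kylin.example.com/my_project", ssl=True))
# ('kylin.example.com', 443, 'my_project')
```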
-
-### 1. Query with Statement
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 2. Query with PreparedStatement
-
-###### Supported prepared statement parameters:
-* setString
-* setInt
-* setShort
-* setLong
-* setFloat
-* setDouble
-* setBoolean
-* setByte
-* setDate
-* setTime
-* setTimestamp
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
-state.setInt(1, 10);
-ResultSet resultSet = state.executeQuery();
-
-while (resultSet.next()) {
-    assertEquals("foo", resultSet.getString(1));
-    assertEquals("bar", resultSet.getString(2));
-    assertEquals("tool", resultSet.getString(3));
-}
-{% endhighlight %}
-
-### 3. Get query result set metadata
-The Kylin JDBC driver supports metadata list methods:
-list catalog, schema, table and column with SQL pattern filters (such as %).
-
-{% highlight Groff markup %}
-Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
-Properties info = new Properties();
-info.put("user", "ADMIN");
-info.put("password", "KYLIN");
-Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
-Statement state = conn.createStatement();
-ResultSet resultSet = state.executeQuery("select * from test_table");
-
-ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
-while (tables.next()) {
-    for (int i = 0; i < 10; i++) {
-        assertEquals("dummy", tables.getString(i + 1));
-    }
-}
-{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_ldap_and_sso.md b/website/_docs2/howto/howto_ldap_and_sso.md
deleted file mode 100644
index 835e50c..0000000
--- a/website/_docs2/howto/howto_ldap_and_sso.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-layout: docs2
-title:  How to Enable Security with LDAP and SSO
-categories: howto
-permalink: /docs2/howto/howto_ldap_and_sso.html
-version: v2.0
-since: v1.0
----
-
-## Enable LDAP authentication
-
-Kylin supports LDAP authentication for enterprise or production deployments. This is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, such as the LDAP server URL, username/password, and search patterns.
-
-#### Configure LDAP server info
-
-First, provide the LDAP URL, and a username/password if the LDAP server is secured. The password in kylin.properties needs to be salted; you can Google "Generate a BCrypt Password" or run org.apache.kylin.rest.security.PasswordPlaceholderConfigurer to get a hash of your password.
-
-```
-ldap.server=ldap://<your_ldap_host>:<port>
-ldap.username=<your_user_name>
-ldap.password=<your_password_hash>
-```
-
-Second, provide the user search patterns; these depend on your LDAP design, so here is just a sample:
-
-```
-ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
-ldap.user.searchPattern=(&(AccountName={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
-ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
-```
-
-If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
-
-#### Configure the administrator group and default role
-
-To map an LDAP group to the admin group in Kylin, set "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, if in LDAP the group "KYLIN-ADMIN-GROUP" is the list of administrators, set it as:
-
-```
-acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
-acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
-```
-
-The "acl.defaultRole" is a list of the default roles granted to everyone; keep it as-is.
-
-#### Enable LDAP
-
-For Kylin v0.x and v1.x: set "kylin.sandbox=false" in conf/kylin.properties, then restart the Kylin server. 
-For Kylin since v2.0: set "kylin.security.profile=ldap" in conf/kylin.properties, then restart the Kylin server. 
-
-## Enable SSO authentication
-
-From v2.0, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
-
-Before trying this, you should have successfully enabled LDAP and managed users with it; as the SSO server may only do authentication, Kylin needs to search LDAP to get the user's detailed information.
-
-### Generate IDP metadata xml
-Contact your IDP (identity provider) and ask to generate the SSO metadata file. Usually you need to provide three pieces of information:
-
-  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata 
-  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
-  3. Public certificate of the Kylin server; the SSO server will encrypt the message with it.
-
-### Generate JKS keystore for Kylin
-As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
-
-Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool: 
-
-```
-$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
-Enter Export Password: <export_pwd>
-Verifying - Enter Export Password: <export_pwd>
-
-
-$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
-
-Enter destination keystore password:  changeit
-Re-enter new password: changeit
-```
-
-It will put the keys into "samlKeystore.jks" with the alias "kylin".
-
-### Enable Higher Ciphers
-
-Make sure your environment is ready to handle higher-strength crypto keys. You may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files, then copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
-
-### Deploy IDP xml file and keystore to Kylin
-
-The IDP metadata and keystore file need to be deployed on the Kylin web app's classpath in $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes:
-	
-  1. Name the IDP file sso_metadata.xml and copy it to Kylin's classpath;
-  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
-  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
-
-```
-<!-- Central storage of cryptographic keys -->
-<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
-	<constructor-arg value="classpath:samlKeystore.jks"/>
-	<constructor-arg type="java.lang.String" value="changeit"/>
-	<constructor-arg>
-		<map>
-			<entry key="kylin" value="changeit"/>
-		</map>
-	</constructor-arg>
-	<constructor-arg type="java.lang.String" value="kylin"/>
-</bean>
-
-```
-
-### Other configurations
-In conf/kylin.properties, add the following properties with your server information:
-
-```
-saml.metadata.entityBaseURL=https://host-name/kylin
-saml.context.scheme=https
-saml.context.serverName=host-name
-saml.context.serverPort=443
-saml.context.contextPath=/kylin
-```
-
-Please note: Kylin assumes the SAML message contains an "email" attribute representing the login user, and the name before the @ will be used to search LDAP. 
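That is, given the "email" attribute, the LDAP search key is derived roughly like this (a sketch of the described behavior; the function name is made up):

```python
def ldap_search_name(email):
    """Take the part before '@' as the LDAP lookup name."""
    return email.split("@", 1)[0]

print(ldap_search_name("analyst@mycompany.com"))  # analyst
```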
-
-### Enable SSO
-Set "kylin.security.profile=saml" in conf/kylin.properties, then restart the Kylin server. After that, a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login and jump back after authorization. Login with LDAP is still available; you can go to "/kylin/login" to use the original way. The REST API (/kylin/api/*) still uses LDAP + basic authentication, with no impact.
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_optimize_cubes.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_optimize_cubes.md b/website/_docs2/howto/howto_optimize_cubes.md
deleted file mode 100644
index 2e51c63..0000000
--- a/website/_docs2/howto/howto_optimize_cubes.md
+++ /dev/null
@@ -1,214 +0,0 @@
----
-layout: docs2
-title:  How to Optimize Cubes
-categories: howto
-permalink: /docs2/howto/howto_optimize_cubes.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Hierarchies:
-
-Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions: continent, country, city (in a hierarchy, the "bigger" dimension comes first), you will only need the following three group-by combinations when you do drill-down analysis:
-
-group by continent
-group by continent, country
-group by continent, country, city
-
-In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
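The reduction can be checked by enumeration: under a hierarchy, a group-by set is valid only if it is a prefix of the hierarchy. A sketch (counting non-empty combinations, so 7 rather than 8):

```python
from itertools import combinations

hierarchy = ("continent", "country", "city")

# All non-empty group-by combinations of the three dimensions: 2^3 - 1 = 7
all_combos = [set(c) for r in range(1, 4) for c in combinations(hierarchy, r)]

# Valid combinations are prefixes of the hierarchy: only 3 remain
prefixes = [set(hierarchy[:i]) for i in range(1, 4)]
valid = [c for c in all_combos if c in prefixes]

print(len(all_combos), len(valid))  # 7 3
```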
-
-If we denote the hierarchy dimensions as H1, H2, H3, typical scenarios would be:
-
-
-A. Hierarchies on lookup table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, FK</td>
-    <td></td>
-    <td>PK,,H1,H2,H3,,,,</td>
-  </tr>
-</table>
-
----
-
-B. Hierarchies on fact table
-
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
-  </tr>
-</table>
-
----
-
-
-There is a special case for scenario A, where the PK on the lookup table is accidentally part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
-
-A*. Hierarchies on lookup table over its primary key
-
-
-<table>
-  <tr>
-    <td align="center">Lookup Table(Calendar)</td>
-  </tr>
-  <tr>
-    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
-  </tr>
-</table>
-
----
-
-
-For cases like A*, what you need is another optimization called "Derived Columns".
-
-## Derived Columns:
-
-A derived column is used when one or more dimensions (they must be dimensions on the lookup table; these columns are called "derived") can be deduced from another (usually the corresponding FK; this is called the "host column").
-
-For example, suppose we have a lookup table joined with the fact table on "DimA = DimX". Note that in Kylin, if you choose the FK as a dimension, the corresponding PK will be automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA(FK), DimX(PK), DimB and DimC in our cube, we can safely choose DimA, DimB and DimC only.
-
-<table>
-  <tr>
-    <td align="center">Fact table</td>
-    <td align="center">(joins)</td>
-    <td align="center">Lookup Table</td>
-  </tr>
-  <tr>
-    <td>column1,column2,,,,,, DimA(FK) </td>
-    <td></td>
-    <td>DimX(PK),,DimB, DimC</td>
-  </tr>
-</table>
-
----
-
-
-Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
-
-
-<table>
-  <tr>
-    <th>dimA</th>
-    <th>dimB</th>
-    <th>dimC</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>b</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>c</td>
-    <td>?</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>a</td>
-    <td>?</td>
-  </tr>
-</table>
-
-
-In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
-
-original combinations:
-ABC,AB,AC,BC,A,B,C
-
-combinations when deriving B from A:
-AC,A,C
-
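The same pruning can be sketched in Python (illustrative only, not Kylin's internal code):

```python
from itertools import combinations

dims = ["A", "B", "C"]
# original combinations: ABC, AB, AC, BC, A, B, C
cuboids = [frozenset(c) for r in range(1, len(dims) + 1)
           for c in combinations(dims, r)]

# when B is derived from its host column A, every cuboid containing B
# is dropped; queries on B are answered through A instead
pruned = [c for c in cuboids if "B" not in c]
print(sorted("".join(sorted(c)) for c in pruned))  # ['A', 'AC', 'C']
```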
-At runtime, for a query like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB would be expected to answer the query. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to make it group by DimA (its host column) first, and we'll get an intermediate answer like:
-
-
-<table>
-  <tr>
-    <th>DimA</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>1</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>2</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>3</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>4</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
-
-
-<table>
-  <tr>
-    <th>DimB</th>
-    <th>count(*)</th>
-  </tr>
-  <tr>
-    <td>a</td>
-    <td>2</td>
-  </tr>
-  <tr>
-    <td>b</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>c</td>
-    <td>1</td>
-  </tr>
-</table>
-
-
-This step happens at query runtime, which is what is meant by "at the cost of extra runtime aggregation".
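The whole host-to-derived replacement described above can be sketched as follows (a simplified model, assuming the lookup table fits in memory as the text describes):

```python
from collections import Counter

# mapping built from the in-memory lookup table:
# host column DimA -> derived column DimB
dim_a_to_b = {1: "a", 2: "b", 3: "c", 4: "a"}

# intermediate result returned by the cuboid: group by DimA (the host column)
group_by_a = {1: 1, 2: 1, 3: 1, 4: 1}

# replace DimA values with DimB values; the SQL engine then re-aggregates
group_by_b = Counter()
for a, count in group_by_a.items():
    group_by_b[dim_a_to_b[a]] += count

print(dict(group_by_b))  # {'a': 2, 'b': 1, 'c': 1}
```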

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_upgrade.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_upgrade.md b/website/_docs2/howto/howto_upgrade.md
deleted file mode 100644
index 32ac0cf..0000000
--- a/website/_docs2/howto/howto_upgrade.md
+++ /dev/null
@@ -1,95 +0,0 @@
----
-layout: docs2
-title:  How to Upgrade
-categories: howto
-permalink: /docs2/howto/howto_upgrade.html
-version: v2.0
-since: v2.0
----
-
-## Upgrade from v1.x to v2.0 
-
-Here Kylin v2.0 is built from the 2.0-rc branch.
-
-From v1.x to v2.0, Kylin's cube data is backward compatible, but the metadata has been refactored into a new schema to support new features in cubing and query enhancement. So if you want to deploy v2.0 on your v1.x base, you need to upgrade the metadata with the following steps:
-
-#### 1. Backup metadata on v1.x
-To avoid data loss during the upgrade, a backup at the very beginning is always suggested. In case of upgrade failure, you can roll back to the original state with the backup.
-
-```
-$KYLIN_HOME/bin/metastore.sh backup
-``` 
-It will print the backup folder; note it down and make sure it will not be deleted before the upgrade is finished. If there is no "metastore.sh", you can use HBase's snapshot command to do the backup:
-
-```
-hbase shell
-snapshot 'kylin_metadata', 'kylin_metadata_backup20160101'
-```
-Here 'kylin_metadata' is the default Kylin metadata table name; replace it with the right table name of your Kylin metastore.
-
-#### 2. Stop Kylin v1.x instance
-Before deploying the Kylin v2.0 instance, you need to stop the old instance. Note that end users cannot access the Kylin service from this point on.
-
-```
-$KYLIN_HOME/bin/kylin.sh stop
-```
-#### 3. Install Kylin v2.0 and copy back "conf"
-Download the new Kylin v2.0 binary package from Kylin's download page; extract it to a different folder than the current KYLIN_HOME. Before copying back the "conf" folder, compare and merge the old and new kylin.properties to ensure newly introduced properties are kept.
-
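One way to spot newly introduced properties during the merge is to compare the key sets of the two files. A small sketch (the sample content below is illustrative; in a real upgrade, read the actual kylin.properties from the old and new conf folders, and `kylin.sample.new.property` is a made-up name):

```python
# properties file contents as they might appear in the old and new versions
old_conf = """kylin.metadata.url=kylin_metadata@hbase
kylin.server.mode=all
"""
new_conf = """kylin.metadata.url=kylin_metadata@hbase
kylin.server.mode=all
kylin.sample.new.property=true
"""

def keys(text):
    # collect property names, skipping blank lines and comments
    return {line.split("=", 1)[0] for line in text.splitlines()
            if line and not line.startswith("#")}

# properties present only in the new file should be merged in by hand
print(sorted(keys(new_conf) - keys(old_conf)))  # ['kylin.sample.new.property']
```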
-#### 4. (Optional) Enable v2.0 features on existing cubes
-Upgrading the metadata will not bring the new features of v2.0 to existing cubes built with the v1.x engine. If you want to leverage those features, please refer to the [Highlights]() section.
-
-#### 5. Automatically upgrade metadata
-The Kylin v2.0 package provides a script to automatically upgrade the metadata. In this upgrade, empty cubes will be updated to the v2.0 version and all new features will be enabled for them, but non-empty cubes will not be able to use those new features.
-
-```
-export KYLIN_HOME="<path_of_new_installation>"
-$KYLIN_HOME/bin/upgrade_v2.sh
-```
-After this, the metadata in the HBase table has been migrated to the new schema.
-
-#### 6. Start Kylin v2.0 instance
-```
-$KYLIN_HOME/bin/kylin.sh start
-```
-Check the log and open web UI to see if the upgrade succeeded.
-
-## Rollback if the upgrade failed
-If the new version cannot start up normally, you need to roll back to the original v1.x version. The steps are as follows:
-
-#### 1. Stop Kylin v2.0 instance
-
-```
-$KYLIN_HOME/bin/kylin.sh stop
-```
-#### 2. Restore v1.x metadata from the backup folder
-
-```
-export KYLIN_HOME="<path_of_1.x_installation>"
-$KYLIN_HOME/bin/metastore.sh restore <backup_folder>
-``` 
-#### 3. Deploy coprocessor of v1.x
-Since the coprocessors of the HTables in use were upgraded to v2.0, you need to manually downgrade them with this command.
-
-```
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.job.tools.DeployCoprocessorCLI $KYLIN_HOME/lib/kylin-coprocessor*.jar -all
-```
-
-#### 4. Start Kylin v1.x instance
- 
-```
-$KYLIN_HOME/bin/kylin.sh start
-```
-
-## Highlights
-Old cubes built with v1.x cannot leverage the new features of v2.0. If you must have those features on your cubes, you can choose one of these solutions:
-#### 1. Rebuild cubes
-If the cost of rebuilding is acceptable, purge the cube before running the upgrade script (Step 5). After the upgrade is done, you need to manually rebuild those segments.
-#### 2. Use hybrid model
-If you don't want to rebuild any cube but want to leverage the new features for new data, you can use a hybrid model, which contains not only your old cube but also an empty cube that has the same model as the old one. For the empty cube, you can do incremental building with v2 features; for the old cube, you can refresh existing segments only.
-
-Here is the command to create hybrid model:
-
-```
-export KYLIN_HOME="<path_of_v2.0_installation>"
-$KYLIN_HOME/bin/kylin.sh org.apache.kylin.storage.hbase.util.ExtendCubeToHybridCLI <project_name> <cube_name>
-```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_use_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_use_restapi.md b/website/_docs2/howto/howto_use_restapi.md
deleted file mode 100644
index 3adaf66..0000000
--- a/website/_docs2/howto/howto_use_restapi.md
+++ /dev/null
@@ -1,1006 +0,0 @@
----
-layout: docs2
-title:  How to Use Restful API
-categories: howto
-permalink: /docs2/howto/howto_use_restapi.html
-version: v1.2
-since: v0.7.1
----
-
-This page lists all the REST APIs provided by Kylin. The base of the URL is `/kylin/api`, so don't forget to add it before a certain API's path. For example, to get all cube instances, send an HTTP GET request to "/kylin/api/cubes".
-
-* Query
-   * [Authentication](#authentication)
-   * [Query](#query)
-   * [List queryable tables](#list-queryable-tables)
-* CUBE
-   * [List cubes](#list-cubes)
-   * [Get cube](#get-cube)
-   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
-   * [Get data model (fact and lookup table info)](#get-data-model)
-   * [Build cube](#build-cube)
-   * [Disable cube](#disable-cube)
-   * [Purge cube](#purge-cube)
-   * [Enable cube](#enable-cube)
-* JOB
-   * [Resume job](#resume-job)
-   * [Discard job](#discard-job)
-   * [Get job step output](#get-job-step-output)
-* Metadata
-   * [Get Hive Table](#get-hive-table)
-   * [Get Hive Table (Extend Info)](#get-hive-table-extend-info)
-   * [Get Hive Tables](#get-hive-tables)
-   * [Load Hive Tables](#load-hive-tables)
-* Cache
-   * [Wipe cache](#wipe-cache)
-
-## Authentication
-`POST /user/authentication`
-
-#### Request Header
-Authorization data encoded by basic auth is needed in the header, such as:
-Authorization:Basic {data}
-
-#### Response Body
-* userDetails - Defined authorities and status of current user.
-
-#### Response Sample
-
-```sh
-{  
-   "userDetails":{  
-      "password":null,
-      "username":"sample",
-      "authorities":[  
-         {  
-            "authority":"ROLE_ANALYST"
-         },
-         {  
-            "authority":"ROLE_MODELER"
-         }
-      ],
-      "accountNonExpired":true,
-      "accountNonLocked":true,
-      "credentialsNonExpired":true,
-      "enabled":true
-   }
-}
-```
-
-Example with `curl`: 
-
-```
-curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
-```
-
-If the login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
-
-```
-curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime": 1423526400000, "endTime": 1423612800000, "buildType": "BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/rebuild
-```
-
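For scripting, the Basic auth header can also be prepared programmatically. A minimal Python sketch (ADMIN/KYLIN are shown only as example credentials — the well-known sandbox defaults; replace them with your own):

```python
import base64

def basic_auth_header(username, password):
    # Kylin's /user/authentication endpoint expects standard HTTP Basic auth:
    # "Basic " + base64("username:password")
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("ADMIN", "KYLIN"))  # Basic QURNSU46S1lMSU4=
```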
-***
-
-## Query
-`POST /query`
-
-#### Request Body
-* sql - `required` `string` The text of sql statement.
-* offset - `optional` `int` Query offset. If an offset is set in the SQL, this parameter will be ignored.
-* limit - `optional` `int` Query limit. If a limit is set in the SQL, this parameter will be ignored.
-* acceptPartial - `optional` `bool` Whether to accept a partial result or not; the default is "false". Set to "false" for production use.
-* project - `optional` `string` Project to perform query. Default value is 'DEFAULT'.
-
-#### Request Sample
-
-```sh
-{  
-   "sql":"select * from TEST_KYLIN_FACT",
-   "offset":0,
-   "limit":50000,
-   "acceptPartial":false,
-   "project":"DEFAULT"
-}
-```
-
-#### Response Body
-* columnMetas - Column metadata information of result set.
-* results - Data set of result.
-* cube - Cube used for this query.
-* affectedRowCount - Count of affected row by this sql statement.
-* isException - Whether this response is an exception.
-* ExceptionMessage - Message content of the exception.
-* Duration - Time cost of this query
-* Partial - Whether the response is a partial result or not; decided by the `acceptPartial` field of the request.
-
-#### Response Sample
-
-```sh
-{  
-   "columnMetas":[  
-      {  
-         "isNullable":1,
-         "displaySize":0,
-         "label":"CAL_DT",
-         "name":"CAL_DT",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":0,
-         "scale":0,
-         "columnType":91,
-         "columnTypeName":"DATE",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      },
-      {  
-         "isNullable":1,
-         "displaySize":10,
-         "label":"LEAF_CATEG_ID",
-         "name":"LEAF_CATEG_ID",
-         "schemaName":null,
-         "catelogName":null,
-         "tableName":null,
-         "precision":10,
-         "scale":0,
-         "columnType":4,
-         "columnTypeName":"INTEGER",
-         "readOnly":true,
-         "writable":false,
-         "caseSensitive":true,
-         "searchable":false,
-         "currency":false,
-         "signed":true,
-         "autoIncrement":false,
-         "definitelyWritable":false
-      }
-   ],
-   "results":[  
-      [  
-         "2013-08-07",
-         "32996",
-         "15",
-         "15",
-         "Auction",
-         "10000000",
-         "49.048952730908745",
-         "49.048952730908745",
-         "49.048952730908745",
-         "1"
-      ],
-      [  
-         "2013-08-07",
-         "43398",
-         "0",
-         "14",
-         "ABIN",
-         "10000633",
-         "85.78317064220418",
-         "85.78317064220418",
-         "85.78317064220418",
-         "1"
-      ]
-   ],
-   "cube":"test_kylin_cube_with_slr_desc",
-   "affectedRowCount":0,
-   "isException":false,
-   "exceptionMessage":null,
-   "duration":3451,
-   "partial":false
-}
-```
-
-## List queryable tables
-`GET /tables_and_columns`
-
-#### Request Parameters
-* project - `required` `string` The project to load tables
-
-#### Response Sample
-```sh
-[  
-   {  
-      "columns":[  
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"CAL_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":1,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         },
-         {  
-            "table_NAME":"TEST_CAL_DT",
-            "table_SCHEM":"EDW",
-            "column_NAME":"WEEK_BEG_DT",
-            "data_TYPE":91,
-            "nullable":1,
-            "column_SIZE":-1,
-            "buffer_LENGTH":-1,
-            "decimal_DIGITS":0,
-            "num_PREC_RADIX":10,
-            "column_DEF":null,
-            "sql_DATA_TYPE":-1,
-            "sql_DATETIME_SUB":-1,
-            "char_OCTET_LENGTH":-1,
-            "ordinal_POSITION":2,
-            "is_NULLABLE":"YES",
-            "scope_CATLOG":null,
-            "scope_SCHEMA":null,
-            "scope_TABLE":null,
-            "source_DATA_TYPE":-1,
-            "iS_AUTOINCREMENT":null,
-            "table_CAT":"defaultCatalog",
-            "remarks":null,
-            "type_NAME":"DATE"
-         }
-      ],
-      "table_NAME":"TEST_CAL_DT",
-      "table_SCHEM":"EDW",
-      "ref_GENERATION":null,
-      "self_REFERENCING_COL_NAME":null,
-      "type_SCHEM":null,
-      "table_TYPE":"TABLE",
-      "table_CAT":"defaultCatalog",
-      "remarks":null,
-      "type_CAT":null,
-      "type_NAME":null
-   }
-]
-```
-
-***
-
-## List cubes
-`GET /cubes`
-
-#### Request Parameters
-* offset - `required` `int` Offset used by pagination
-* limit - `required` `int ` Cubes per page.
-* cubeName - `optional` `string` Keyword for cube names. To find cubes whose name contains this keyword.
-* projectName - `optional` `string` Project name.
-
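A small sketch of how a client might assemble the request URL from these parameters (the host and port are placeholders; 7070 is Kylin's usual default port):

```python
from urllib.parse import urlencode

def cubes_url(base, offset, limit, cube_name=None, project_name=None):
    # offset/limit are required; cubeName/projectName are optional filters
    params = {"offset": offset, "limit": limit}
    if cube_name is not None:
        params["cubeName"] = cube_name
    if project_name is not None:
        params["projectName"] = project_name
    return base + "/kylin/api/cubes?" + urlencode(params)

print(cubes_url("http://localhost:7070", 0, 15, project_name="DEFAULT"))
# http://localhost:7070/kylin/api/cubes?offset=0&limit=15&projectName=DEFAULT
```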
-#### Response Sample
-```sh
-[  
-   {  
-      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-      "last_modified":1407831634847,
-      "name":"test_kylin_cube_with_slr_empty",
-      "owner":null,
-      "version":null,
-      "descriptor":"test_kylin_cube_with_slr_desc",
-      "cost":50,
-      "status":"DISABLED",
-      "segments":[  
-      ],
-      "create_time":null,
-      "source_records_count":0,
-      "source_records_size":0,
-      "size_kb":0
-   }
-]
-```
-
-## Get cube
-`GET /cubes/{cubeName}`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name to find.
-
-## Get cube descriptor
-`GET /cube_desc/{cubeName}`
-
-Get the descriptor for the specified cube instance.
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-[
-    {
-        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
-        "name": "test_kylin_cube_with_slr_desc", 
-        "description": null, 
-        "dimensions": [
-            {
-                "id": 0, 
-                "name": "CAL_DT", 
-                "table": "EDW.TEST_CAL_DT", 
-                "column": null, 
-                "derived": [
-                    "WEEK_BEG_DT"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 1, 
-                "name": "CATEGORY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": null, 
-                "derived": [
-                    "USER_DEFINED_FIELD1", 
-                    "USER_DEFINED_FIELD3", 
-                    "UPD_DATE", 
-                    "UPD_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 2, 
-                "name": "CATEGORY_HIERARCHY", 
-                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-                "column": [
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": true
-            }, 
-            {
-                "id": 3, 
-                "name": "LSTG_FORMAT_NAME", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "LSTG_FORMAT_NAME"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }, 
-            {
-                "id": 4, 
-                "name": "SITE_ID", 
-                "table": "EDW.TEST_SITES", 
-                "column": null, 
-                "derived": [
-                    "SITE_NAME", 
-                    "CRE_USER"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 5, 
-                "name": "SELLER_TYPE_CD", 
-                "table": "EDW.TEST_SELLER_TYPE_DIM", 
-                "column": null, 
-                "derived": [
-                    "SELLER_TYPE_DESC"
-                ], 
-                "hierarchy": false
-            }, 
-            {
-                "id": 6, 
-                "name": "SELLER_ID", 
-                "table": "DEFAULT.TEST_KYLIN_FACT", 
-                "column": [
-                    "SELLER_ID"
-                ], 
-                "derived": null, 
-                "hierarchy": false
-            }
-        ], 
-        "measures": [
-            {
-                "id": 1, 
-                "name": "GMV_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 2, 
-                "name": "GMV_MIN", 
-                "function": {
-                    "expression": "MIN", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 3, 
-                "name": "GMV_MAX", 
-                "function": {
-                    "expression": "MAX", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "PRICE", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "decimal(19,4)"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 4, 
-                "name": "TRANS_CNT", 
-                "function": {
-                    "expression": "COUNT", 
-                    "parameter": {
-                        "type": "constant", 
-                        "value": "1", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }, 
-            {
-                "id": 5, 
-                "name": "ITEM_COUNT_SUM", 
-                "function": {
-                    "expression": "SUM", 
-                    "parameter": {
-                        "type": "column", 
-                        "value": "ITEM_COUNT", 
-                        "next_parameter": null
-                    }, 
-                    "returntype": "bigint"
-                }, 
-                "dependent_measure_ref": null
-            }
-        ], 
-        "rowkey": {
-            "rowkey_columns": [
-                {
-                    "column": "SELLER_ID", 
-                    "length": 18, 
-                    "dictionary": null, 
-                    "mandatory": true
-                }, 
-                {
-                    "column": "CAL_DT", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LEAF_CATEG_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "META_CATEG_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL2_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "CATEG_LVL3_NAME", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_FORMAT_NAME", 
-                    "length": 12, 
-                    "dictionary": null, 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "LSTG_SITE_ID", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }, 
-                {
-                    "column": "SLR_SEGMENT_CD", 
-                    "length": 0, 
-                    "dictionary": "true", 
-                    "mandatory": false
-                }
-            ], 
-            "aggregation_groups": [
-                [
-                    "LEAF_CATEG_ID", 
-                    "META_CATEG_NAME", 
-                    "CATEG_LVL2_NAME", 
-                    "CATEG_LVL3_NAME", 
-                    "CAL_DT"
-                ]
-            ]
-        }, 
-        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
-        "last_modified": 1445850327000, 
-        "model_name": "test_kylin_with_slr_model_desc", 
-        "null_string": null, 
-        "hbase_mapping": {
-            "column_family": [
-                {
-                    "name": "F1", 
-                    "columns": [
-                        {
-                            "qualifier": "M", 
-                            "measure_refs": [
-                                "GMV_SUM", 
-                                "GMV_MIN", 
-                                "GMV_MAX", 
-                                "TRANS_CNT", 
-                                "ITEM_COUNT_SUM"
-                            ]
-                        }
-                    ]
-                }
-            ]
-        }, 
-        "notify_list": null, 
-        "auto_merge_time_ranges": null, 
-        "retention_range": 0
-    }
-]
-```
-
-## Get data model
-`GET /model/{modelName}`
-
-#### Path Variable
-* modelName - `required` `string` Data model name; by default it should be the same as the cube name.
-
-#### Response Sample
-```sh
-{
-    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
-    "name": "test_kylin_with_slr_model_desc", 
-    "lookups": [
-        {
-            "table": "EDW.TEST_CAL_DT", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "CAL_DT"
-                ], 
-                "foreign_key": [
-                    "CAL_DT"
-                ]
-            }
-        }, 
-        {
-            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
-            "join": {
-                "type": "inner", 
-                "primary_key": [
-                    "LEAF_CATEG_ID", 
-                    "SITE_ID"
-                ], 
-                "foreign_key": [
-                    "LEAF_CATEG_ID", 
-                    "LSTG_SITE_ID"
-                ]
-            }
-        }
-    ], 
-    "capacity": "MEDIUM", 
-    "last_modified": 1442372116000, 
-    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
-    "filter_condition": null, 
-    "partition_desc": {
-        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
-        "partition_date_start": 0, 
-        "partition_date_format": "yyyy-MM-dd", 
-        "partition_type": "APPEND", 
-        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    }
-}
-```
-
-## Build cube
-`PUT /cubes/{cubeName}/rebuild`
-
-#### Path Variable
-* cubeName - `required` `string` Cube name.
-
-#### Request Body
-* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-1-1
-* endTime - `required` `long` End timestamp of data to build
-* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
-
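The timestamps are epoch milliseconds; a small helper can compute them from calendar dates. This sketch assumes UTC midnight — adjust to the timezone your cluster uses, since the epoch value for a given date is timezone dependent:

```python
from datetime import datetime, timezone

def to_millis(date_str):
    # interpret the date as midnight UTC and convert to epoch milliseconds
    dt = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

payload = {
    "startTime": to_millis("2014-01-01"),  # 1388534400000 (UTC midnight)
    "endTime": to_millis("2014-02-01"),
    "buildType": "BUILD",
}
print(payload["startTime"], payload["endTime"])
```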
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Enable Cube
-`PUT /cubes/{cubeName}/enable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-```sh
-{  
-   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
-   "last_modified":1407909046305,
-   "name":"test_kylin_cube_with_slr_ready",
-   "owner":null,
-   "version":null,
-   "descriptor":"test_kylin_cube_with_slr_desc",
-   "cost":50,
-   "status":"ACTIVE",
-   "segments":[  
-      {  
-         "name":"19700101000000_20140531160000",
-         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
-         "date_range_start":0,
-         "date_range_end":1401552000000,
-         "status":"READY",
-         "size_kb":4758,
-         "source_records":6000,
-         "source_records_size":620356,
-         "last_build_time":1407832663227,
-         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
-         "binary_signature":null,
-         "dictionaries":{  
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
-            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
-            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
-            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
-            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
-            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
-            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
-         },
-         "snapshots":{  
-            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
-            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
-            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
-            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
-         }
-      }
-   ],
-   "create_time":null,
-   "source_records_count":6000,
-   "source_records_size":0,
-   "size_kb":4758
-}
-```
-
-## Disable Cube
-`PUT /cubes/{cubeName}/disable`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-## Purge Cube
-`PUT /cubes/{cubeName}/purge`
-
-#### Path variable
-* cubeName - `required` `string` Cube name.
-
-#### Response Sample
-(Same as "Enable Cube")
-
-***
-
-## Resume Job
-`PUT /jobs/{jobId}/resume`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-```
-{  
-   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
-   "last_modified":1407908916705,
-   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
-   "type":"BUILD",
-   "duration":0,
-   "related_cube":"test_kylin_cube_with_slr_empty",
-   "related_segment":"19700101000000_20140731160000",
-   "exec_start_time":0,
-   "exec_end_time":0,
-   "mr_waiting":0,
-   "steps":[  
-      {  
-         "interruptCmd":null,
-         "name":"Create Intermediate Flat Hive Table",
-         "sequence_id":0,
-         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"SHELL_CMD_HADOOP",
-         "info":null,
-         "run_async":false
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Extract Fact Table Distinct Columns",
-         "sequence_id":1,
-         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
-         "info":null,
-         "run_async":true
-      },
-      {  
-         "interruptCmd":null,
-         "name":"Load HFile to HBase Table",
-         "sequence_id":12,
-         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
-         "interrupt_cmd":null,
-         "exec_start_time":0,
-         "exec_end_time":0,
-         "exec_wait_time":0,
-         "step_status":"PENDING",
-         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
-         "info":null,
-         "run_async":false
-      }
-   ],
-   "job_status":"PENDING",
-   "progress":0.0
-}
-```
-
-## Discard Job
-`PUT /jobs/{jobId}/cancel`
-
-#### Path variable
-* jobId - `required` `string` Job id.
-
-#### Response Sample
-(Same as "Resume job")
-
-## Get job step output
-`GET /jobs/{jobId}/steps/{stepId}/output`
-
-#### Path Variable
-* jobId - `required` `string` Job id.
-* stepId - `required` `string` Step id; a step id is composed of the jobId followed by "-" and the step's sequence id. For example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
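To make the composition rule concrete, here is a small hypothetical helper (not part of the Kylin API itself):

```python
def step_id(job_id: str, sequence_id: int) -> str:
    # A step id is the job id, a hyphen, and the step's sequence id.
    return f"{job_id}-{sequence_id}"

print(step_id("fb479e54-837f-49a2-b457-651fc50be110", 3))
# fb479e54-837f-49a2-b457-651fc50be110-3
```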
-
-#### Response Sample
-```
-{  
-   "cmd_output":"log string"
-}
-```
-
-***
-
-## Get Hive Table
-`GET /tables/{tableName}`
-
-#### Request Parameters
-* tableName - `required` `string` table name to find.
-
-#### Response Sample
-```sh
-{
-    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
-    name: "SAMPLE_07",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    last_modified: 1419330476755
-}
-```
-
-## Get Hive Table (Extend Info)
-`GET /tables/{tableName}/exd-map`
-
-#### Request Parameters
-* tableName - `required` `string` table name to find.
-
-#### Response Sample
-```
-{
-    "minFileSize": "46055",
-    "totalNumberFiles": "1",
-    "location": "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_07",
-    "lastAccessTime": "1418374103365",
-    "lastUpdateTime": "1398176493340",
-    "columns": "struct columns { string code, string description, i32 total_emp, i32 salary}",
-    "partitionColumns": "",
-    "EXD_STATUS": "true",
-    "maxFileSize": "46055",
-    "inputformat": "org.apache.hadoop.mapred.TextInputFormat",
-    "partitioned": "false",
-    "tableName": "sample_07",
-    "owner": "hue",
-    "totalFileSize": "46055",
-    "outputformat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-}
-```
-
-## Get Hive Tables
-`GET /tables`
-
-#### Request Parameters
-* project - `required` `string` will list all tables in the project.
-* ext - `optional` `boolean` set to true to get the extended info of the tables.
-
-#### Response Sample
-```sh
-[
- {
-    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
-    name: "SAMPLE_08",
-    columns: [{
-        id: "1",
-        name: "CODE",
-        datatype: "string"
-    }, {
-        id: "2",
-        name: "DESCRIPTION",
-        datatype: "string"
-    }, {
-        id: "3",
-        name: "TOTAL_EMP",
-        datatype: "int"
-    }, {
-        id: "4",
-        name: "SALARY",
-        datatype: "int"
-    }],
-    database: "DEFAULT",
-    cardinality: {},
-    last_modified: 0,
-    exd: {
-        minFileSize: "46069",
-        totalNumberFiles: "1",
-        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
-        lastAccessTime: "1398176495945",
-        lastUpdateTime: "1398176495981",
-        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
-        partitionColumns: "",
-        EXD_STATUS: "true",
-        maxFileSize: "46069",
-        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
-        partitioned: "false",
-        tableName: "sample_08",
-        owner: "hue",
-        totalFileSize: "46069",
-        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
-    }
-  }
-]
-```
-
-## Load Hive Tables
-`POST /tables/{tables}/{project}`
-
-#### Request Parameters
-* tables - `required` `string` table names you want to load from hive, separated by commas.
-* project - `required` `string` the project which the tables will be loaded into.
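As a sketch, building the request URL for a couple of tables (hostname, table names, and project are placeholders):

```python
def load_tables_url(host: str, tables: list, project: str) -> str:
    # Table names go into the path as a single comma-separated segment.
    return f"http://{host}/kylin/api/tables/{','.join(tables)}/{project}"

print(load_tables_url("sandbox:7070", ["SAMPLE_07", "SAMPLE_08"], "learn_kylin"))
# http://sandbox:7070/kylin/api/tables/SAMPLE_07,SAMPLE_08/learn_kylin
```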
-
-#### Response Sample
-```
-{
-    "result.loaded": ["DEFAULT.SAMPLE_07"],
-    "result.unloaded": ["sapmle_08"]
-}
-```
-
-***
-
-## Wipe cache
-`GET /cache/{type}/{name}/{action}`
-
-#### Path variable
-* type - `required` `string` 'METADATA' or 'CUBE'
-* name - `required` `string` Cache key, e.g. the cube name.
-* action - `required` `string` 'create', 'update' or 'drop'
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/howto/howto_use_restapi_in_js.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_use_restapi_in_js.md b/website/_docs2/howto/howto_use_restapi_in_js.md
deleted file mode 100644
index 14cb7c9..0000000
--- a/website/_docs2/howto/howto_use_restapi_in_js.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-layout: docs2
-title:  How to Use Restful API in Javascript
-categories: howto
-permalink: /docs2/howto/howto_use_restapi_in_js.html
-version: v1.2
-since: v0.7.1
----
-Kylin security is based on basic access authorization. If you want to use the API in your JavaScript, you need to add the authorization info to the HTTP headers.
-
-## Example on the Query API
-```
-$.ajaxSetup({
-      headers: { 'Authorization': "Basic eWFu**********X***ZA==", 'Content-Type': 'application/json;charset=utf-8' } // use your own authorization code here
-    });
-    var request = $.ajax({
-       url: "http://hostname/kylin/api/query",
-       type: "POST",
-       data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
-       dataType: "json"
-    });
-    request.done(function( msg ) {
-       alert(msg);
-    }); 
-    request.fail(function( jqXHR, textStatus ) {
-       alert( "Request failed: " + textStatus );
-  });
-
-```
-
-## Keypoints
-1. Add basic access authorization info to the HTTP headers.
-2. Use the right AJAX type and data syntax.
-
-## Basic access authorization
-For an introduction to basic access authorization, refer to the [Wikipedia page](http://en.wikipedia.org/wiki/Basic_access_authentication).
-To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js):
-
-```
-var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
- 
-$.ajaxSetup({
-   headers: { 
-    'Authorization': "Basic " + authorizationCode, 
-    'Content-Type': 'application/json;charset=utf-8' 
-   }
-});
-```
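If you want to sanity-check the generated authorization code outside the browser, the same Base64 encoding can be reproduced with Python's standard library (ADMIN/KYLIN below are just example credentials):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Same encoding as $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD') above.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("ADMIN", "KYLIN"))
# Basic QURNSU46S1lMSU4=
```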

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/index.md
----------------------------------------------------------------------
diff --git a/website/_docs2/index.md b/website/_docs2/index.md
deleted file mode 100644
index c7bfe96..0000000
--- a/website/_docs2/index.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-layout: docs2
-title: Overview
-categories: docs
-permalink: /docs2/index.html
----
-
-Welcome to Apache Kylin™
-------------  
-> Extreme OLAP Engine for Big Data
-
-Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets. It was originally contributed by eBay Inc.
-
-Prior documents: [v1.x](/docs/)
-
-Installation & Setup
-------------  
-
-Please follow installation & tutorial in the navigation panel.
-
-Advanced Topics
--------  
-
-#### Connectivity
-
-1. [How to use Kylin remote JDBC driver](howto/howto_jdbc.html)
-2. [SQL reference](http://calcite.apache.org/)
-
----
-
-#### REST APIs
-
-1. [Kylin Restful API list](howto/howto_use_restapi.html)
-2. [Build cube with Restful API](howto/howto_build_cube_with_restapi.html)
-3. [How to consume Kylin REST API in javascript](howto/howto_use_restapi_in_js.html)
-
----
-
-#### Operations
-
-1. [Backup/restore Kylin metadata store](howto/howto_backup_metadata.html)
-2. [Cleanup storage (HDFS & HBase tables)](howto/howto_cleanup_storage.html)
-3. [Advanced env configurations](install/advance_settings.html)
-4. [How to upgrade](howto/howto_upgrade.html)
-
----
-
-#### Technical Details
-
-1. [New meta data model structure](/development/new_metadata.html)
-
-
-
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/install/advance_settings.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/advance_settings.md b/website/_docs2/install/advance_settings.md
deleted file mode 100644
index 06c73ef..0000000
--- a/website/_docs2/install/advance_settings.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-layout: docs2
-title:  "Advance Settings of Kylin Environment"
-categories: install
-permalink: /docs2/install/advance_settings.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Enable LZO compression
-
-By default Kylin uses Snappy compression for the output of MR jobs, as well as for HBase table storage, reducing the storage overhead. Kylin does not choose LZO compression by default because Hadoop vendors tend not to include LZO in their distributions due to its GPL license. To enable LZO in Kylin, follow these steps:
-
-#### Make sure LZO is working in your environment
-
-First make sure LZO is properly installed on EVERY SERVER in the HBase cluster (see http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.4/bk_installing_manually_book/content/ch_install_hdfs_yarn_chapter.html#install-snappy-man-install ), then restart the cluster.
-To test it on the hadoop CLI where you deployed Kylin, just run:
-
-{% highlight Groff markup %}
-hbase org.apache.hadoop.hbase.util.CompressionTest file:///PATH-TO-A-LOCAL-TMP-FILE lzo
-{% endhighlight %}
-
-If no exception is printed, you're good to go. Otherwise you'll need to properly install LZO on this server first.
-To test whether the HBase cluster is ready to create LZO compressed tables, try the following HBase shell command:
-
-{% highlight Groff markup %}
-create 'lzoTable', {NAME => 'colFam',COMPRESSION => 'LZO'}
-{% endhighlight %}
-
-#### Use LZO for HBase compression
-
-You'll need to stop Kylin first by running `./kylin.sh stop`. Then go to $KYLIN_HOME/conf/kylin.properties and change `kylin.hbase.default.compression.codec=snappy` to `kylin.hbase.default.compression.codec=lzo`.
-After this, run `./kylin.sh start` to start Kylin again. Newly built cube segments will now be stored in LZO compressed HBase tables.
-
-#### Use LZO for MR jobs
-
-Modify $KYLIN_HOME/conf/kylin_job_conf.xml by changing all occurrences of org.apache.hadoop.io.compress.SnappyCodec to com.hadoop.compression.lzo.LzoCodec.
-
-Start Kylin again. Now Kylin will use LZO to compress MR outputs.
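For reference, the kind of entry to look for in kylin_job_conf.xml looks roughly like this (these are standard Hadoop MR property names; the exact set of entries in your file may differ):

{% highlight Groff markup %}
<property>
    <name>mapred.output.compression.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
{% endhighlight %}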
-
-## Enable LDAP or SSO authentication
-
-Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/install/hadoop_evn.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/hadoop_evn.md b/website/_docs2/install/hadoop_evn.md
deleted file mode 100644
index 9694863..0000000
--- a/website/_docs2/install/hadoop_evn.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: docs2
-title:  "Hadoop Environment"
-categories: install
-permalink: /docs2/install/hadoop_env.html
-version: v0.7.2
-since: v0.7.1
----
-
-## Hadoop Environment
-
-Kylin requires access to a hadoop CLI machine where you have full permissions to HDFS, Hive, HBase and MapReduce. To make things easier we strongly recommend starting out by running Kylin on a hadoop sandbox, like <http://hortonworks.com/products/hortonworks-sandbox/>. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**. 
-
-To avoid permission issues, we suggest using the `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
-
-We also suggest using bridged mode instead of NAT mode in your VirtualBox settings. Bridged mode assigns your sandbox an independent IP address, so that you can avoid issues like https://github.com/KylinOLAP/Kylin/issues/12
-
-### Start Hadoop
-
-Please make sure Hive, HDFS and HBase are available on your CLI machine.
-If you are not sure how, here is a simple tutorial for the Hortonworks sandbox:
-
-Use Ambari to launch hadoop:
-
-	ambari-agent start
-	ambari-server start
-	
-Once both commands run successfully, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check the status of everything. **By default Hortonworks Ambari disables HBase; you'll need to manually start the `HBase` service on the Ambari homepage.**
-
-![start hbase in ambari](https://raw.githubusercontent.com/KylinOLAP/kylinolap.github.io/master/docs/installation/starthbase.png)
-
-**Additional Info for setting up the Hortonworks Sandbox on VirtualBox**
-
-	Please make sure the HBase Master port (default 60000) and the ZooKeeper port (default 2181) are forwarded to the host OS.
- 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/install/index.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/index.md b/website/_docs2/install/index.md
deleted file mode 100644
index b0e154e..0000000
--- a/website/_docs2/install/index.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-layout: docs2
-title:  "Installation Guide"
-categories: install
-permalink: /docs2/install/index.html
-version: v0.7.2
-since: v0.7.1
----
-
-### Environment
-
-Kylin requires a properly set up hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
-
-## Recommended Hadoop Versions
-
-* Hadoop: 2.4 - 2.7
-* Hive: 0.13 - 0.14
-* HBase: 0.98 - 0.99
-* JDK: 1.7+
-
-_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
-
-
-It is most common to install Kylin on a Hadoop client machine. This setup can be used for demos, or for those who want to host their own website to provide a Kylin service. The scenario is depicted as:
-
-![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
-
-For normal use cases, the application in the above picture means Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like hive and hbase.
-
-Apart from some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build a sample cube and query the tables behind the cubes via a unified web interface.
-
-### Install Kylin
-
-1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
-2. Export KYLIN_HOME pointing to the extracted Kylin folder
-3. Make sure the user has the privileges to run hadoop, hive and hbase commands in the shell. If you are not sure, run **bin/check-env.sh**; it will print out detailed information if there are any environment issues.
-4. To start Kylin, simply run **bin/kylin.sh start**
-5. To stop Kylin, simply run **bin/kylin.sh stop**
-
-> If you want to run multiple Kylin nodes, please refer to [this guide](kylin_cluster.html)
-
-After Kylin has started you can visit <http://your_hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. At first it's a clean Kylin homepage with nothing in it. To get started you can:
-
-1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
-2. [Create and Build your own cube](../tutorial/create_cube.html)
-3. [Kylin Web Tutorial](../tutorial/web.html)
-

http://git-wip-us.apache.org/repos/asf/kylin/blob/516c1f16/website/_docs2/install/kylin_cluster.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/kylin_cluster.md b/website/_docs2/install/kylin_cluster.md
deleted file mode 100644
index 5200643..0000000
--- a/website/_docs2/install/kylin_cluster.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-layout: docs2
-title:  "Multiple Kylin REST servers"
-categories: install
-permalink: /docs2/install/kylin_cluster.html
-version: v0.7.2
-since: v0.7.1
----
-
-
-### Kylin Server modes
-
-Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (the kylin.metadata.url config in conf/kylin.properties). For load balancing it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.)
-
-Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying its runtime mode. It has three options: 1. "job" for running the job engine only; 2. "query" for running the query engine only; and 3. "all" for running both. Notice that only one server can run the job engine ("all" mode or "job" mode); all the others must be in "query" mode.
-
-A typical scenario is depicted in the following chart:
-
-![]( /images/install/kylin_server_modes.png)
-
-### Setting up Multiple Kylin REST servers
-
-If you are running Kylin in a cluster, or you have multiple Kylin REST server instances, please make sure the following properties are correctly configured in ${KYLIN_HOME}/conf/kylin.properties:
-
-1. kylin.rest.servers 
-	List of web servers in use; this enables one web server instance to sync up with the others. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
-  
-2. kylin.server.mode
-	Make sure there is only one instance whose "kylin.server.mode" is set to "all" if there are multiple instances.
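For example, a hypothetical two-instance setup (hostnames are placeholders) could be configured as:

	# conf/kylin.properties on sandbox1 (runs the job engine and serves queries)
	kylin.server.mode=all
	kylin.rest.servers=sandbox1:7070,sandbox2:7070

	# conf/kylin.properties on sandbox2 (serves queries only)
	kylin.server.mode=query
	kylin.rest.servers=sandbox1:7070,sandbox2:7070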
-	
\ No newline at end of file