Posted to commits@kylin.apache.org by li...@apache.org on 2016/02/17 14:42:18 UTC

[1/3] kylin git commit: KYLIN-1375 Init 2.x doc by copying from 1.x

Repository: kylin
Updated Branches:
  refs/heads/document ed810ebea -> 0fb16aa2e


http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/release_notes.md
----------------------------------------------------------------------
diff --git a/website/_docs2/release_notes.md b/website/_docs2/release_notes.md
new file mode 100644
index 0000000..c4ffd74
--- /dev/null
+++ b/website/_docs2/release_notes.md
@@ -0,0 +1,706 @@
+---
+layout: docs2
+title:  Apache Kylin™ Release Notes
+categories: gettingstarted
+permalink: /docs2/release_notes.html
+version: v2.0
+since: v0.7.1
+---
+
+To download the latest release, please visit [http://kylin.apache.org/download/](http://kylin.apache.org/download/); 
+source code packages, binary packages, the ODBC driver and an installation guide are available there.
+
+For any problem or issue, please report it to the Apache Kylin JIRA project: [https://issues.apache.org/jira/browse/KYLIN](https://issues.apache.org/jira/browse/KYLIN)
+
+or send it to the Apache Kylin mailing lists:   
+* User list: [user@kylin.apache.org](mailto:user@kylin.apache.org)
+* Development list: [dev@kylin.apache.org](mailto:dev@kylin.apache.org)
+
+
+## v2.0-alpha - 2016-02-09
+_Tag:_ [kylin-2.0-alpha](https://github.com/apache/kylin/tree/kylin-2.0-alpha)
+
+__Highlights__
+
+    * [KYLIN-875] - A pluggable architecture that allows alternative cube engines, storage engines and data sources.
+    * [KYLIN-1245] - A better MR cubing algorithm, about 1.5 times faster than 1.x based on a comparison of hundreds of jobs.
+    * [KYLIN-942] - A better storage engine that makes queries roughly 2 times faster than 1.x (especially slow queries), based on a comparison of tens of thousands of SQL statements.
+    * [KYLIN-738] - EXPERIMENTAL support for streaming cubing: sources data from Kafka and builds cubes in memory at minute-level intervals.
+    * [KYLIN-943] - Approximate TopN pre-calculation (more UDFs coming).
+    * [KYLIN-1065] - ODBC driver compatible with Tableau 9.1, MS Excel and MS Power BI.
+    * [KYLIN-1219] - Kylin supports SSO with Spring SAML.
+
+__The lists below are generated from the JIRA system and are pending manual revision.__
+
+__New Feature__
+
+    * [KYLIN-196] - Support Job Priority
+    * [KYLIN-528] - Build job flow for Inverted Index building
+    * [KYLIN-596] - Support Excel and Power BI
+    * [KYLIN-599] - Near real-time support
+    * [KYLIN-603] - Add mem store for seconds data latency
+    * [KYLIN-606] - Block level index for Inverted-Index
+    * [KYLIN-607] - More efficient cube building
+    * [KYLIN-609] - Add Hybrid as a federation of Cube and Inverted-index realization
+    * [KYLIN-625] - Create GridTable, a data structure that abstracts vertical and horizontal partition of a table
+    * [KYLIN-728] - IGTStore implementation which use disk when memory runs short
+    * [KYLIN-738] - StreamingOLAP
+    * [KYLIN-749] - support timestamp type in II and cube
+    * [KYLIN-774] - Automatically merge cube segments
+    * [KYLIN-868] - add a metadata backup/restore script in bin folder
+    * [KYLIN-886] - Data Retention for streaming data
+    * [KYLIN-906] - cube retention
+    * [KYLIN-943] - Approximate TopN supported by Cube
+    * [KYLIN-986] - Generalize Streaming scripts and put them into code repository 
+    * [KYLIN-1219] - Kylin support SSO with Spring SAML
+    * [KYLIN-1277] - Upgrade tool to put old-version cube and new-version cube into a hybrid model 
+
+__Improvement__
+
+    * [KYLIN-225] - Support edit "cost" of cube
+    * [KYLIN-589] - Cleanup Intermediate hive table after cube build
+    * [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+    * [KYLIN-633] - Support Timestamp for cube partition
+    * [KYLIN-649] -  move the cache layer from service tier back to storage tier
+    * [KYLIN-655] - Migrate cube storage (query side) to use GridTable API
+    * [KYLIN-663] - Push time condition down to ii endpoint
+    * [KYLIN-668] - Out of memory in mapper when building cube in mem
+    * [KYLIN-671] - Implement fine grained cache for cube and ii
+    * [KYLIN-673] - Performance tuning for In-Mem cubing
+    * [KYLIN-674] - IIEndpoint return metrics as well
+    * [KYLIN-675] - cube&model designer refactor
+    * [KYLIN-678] - optimize RowKeyColumnIO
+    * [KYLIN-697] - Reorganize all test cases to unit test and integration tests
+    * [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
+    * [KYLIN-708] - replace BitSet for AggrKey
+    * [KYLIN-712] - some enhancement after code review
+    * [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+    * [KYLIN-718] - replace aliasMap in storage context with a clear specified return column list
+    * [KYLIN-719] - bundle statistics info in endpoint response
+    * [KYLIN-720] - Optimize endpoint's response structure to suit with no-dictionary data
+    * [KYLIN-721] - streaming cli support third-party streammessage parser
+    * [KYLIN-726] - add remote cli port configuration for KylinConfig
+    * [KYLIN-729] - IIEndpoint eliminate the non-aggregate routine
+    * [KYLIN-734] - Push cache layer to each storage engine
+    * [KYLIN-752] - Improved IN clause performance
+    * [KYLIN-753] - Make the dependency on hbase-common to "provided"
+    * [KYLIN-755] - extract copying libs from prepare.sh so that it can be reused
+    * [KYLIN-760] - Improve the hasing performance in Sampling cuboid size
+    * [KYLIN-772] - Continue cube job when hive query return empty resultset
+    * [KYLIN-773] - performance is slow list jobs
+    * [KYLIN-783] - update hdp version in test cases to 2.2.4
+    * [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+    * [KYLIN-809] - Streaming cubing allow multiple kafka clusters/topics
+    * [KYLIN-816] - Allow gap in cube segments, for streaming case
+    * [KYLIN-822] - list cube overview in one page
+    * [KYLIN-823] - replace fk on fact table on rowkey & aggregation group generate
+    * [KYLIN-838] - improve performance of job query
+    * [KYLIN-844] - add backdoor toggles to control query behavior 
+    * [KYLIN-845] - Enable coprocessor even when there is memory hungry distinct count
+    * [KYLIN-858] - add snappy compression support
+    * [KYLIN-866] - Confirm with user when he selects empty segments to merge
+    * [KYLIN-869] - Enhance mail notification
+    * [KYLIN-870] - Speed up hbase segments info by caching
+    * [KYLIN-871] - growing dictionary for streaming case
+    * [KYLIN-874] - script for fill streaming gap automatically
+    * [KYLIN-875] - Decouple with Hadoop to allow alternative Input / Build Engine / Storage
+    * [KYLIN-879] - add a tool to collect orphan hbases 
+    * [KYLIN-880] -  Kylin should change the default folder from /tmp to user configurable destination
+    * [KYLIN-881] - Upgrade Calcite to 1.3.0
+    * [KYLIN-882] - check access to kylin.hdfs.working.dir
+    * [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+    * [KYLIN-893] - Remove the dependency on quartz and metrics
+    * [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+    * [KYLIN-896] - Clean ODBC code, add them into main repository and write docs to help compiling
+    * [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+    * [KYLIN-902] - move streaming related parameters into StreamingConfig
+    * [KYLIN-903] - automate metadata cleanup job
+    * [KYLIN-909] - Adapt GTStore to hbase endpoint
+    * [KYLIN-919] - more friendly UI for 0.8
+    * [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+    * [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+    * [KYLIN-927] - Real time cubes merging skipping gaps
+    * [KYLIN-933] - friendly UI to use data model
+    * [KYLIN-938] - add friendly tip to page when rest request failed
+    * [KYLIN-942] - Cube parallel scan on Hbase
+    * [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+    * [KYLIN-957] - Support HBase in a separate cluster
+    * [KYLIN-960] - Split storage module to core-storage and storage-hbase
+    * [KYLIN-973] - add a tool to analyse streaming output logs
+    * [KYLIN-984] - Behavior change in streaming data consuming
+    * [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+    * [KYLIN-1014] - Support kerberos authentication while getting status from RM
+    * [KYLIN-1018] - make TimedJsonStreamParser default parser 
+    * [KYLIN-1019] - Remove v1 cube model classes from code repository
+    * [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+    * [KYLIN-1025] - Save cube change is very slow
+    * [KYLIN-1036] - Code Clean, remove code which never used at front end
+    * [KYLIN-1041] - ADD Streaming UI 
+    * [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+    * [KYLIN-1058] - Remove "right join" during model creation
+    * [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+    * [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+    * [KYLIN-1065] - ODBC driver support tableau 9.1
+    * [KYLIN-1068] - Optimize the memory footprint for TopN counter
+    * [KYLIN-1069] - update tip for 'Partition Column' on UI
+    * [KYLIN-1095] - Update AdminLTE to latest version
+    * [KYLIN-1096] - Deprecate minicluster in 2.x staging
+    * [KYLIN-1099] - Support dictionary of cardinality over 10 millions
+    * [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+    * [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+    * [KYLIN-1116] - Use local dictionary for InvertedIndex batch building
+    * [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+    * [KYLIN-1126] - v2 storage(for parallel scan) backward compatibility with v1 storage
+    * [KYLIN-1135] - Pscan use share thread pool
+    * [KYLIN-1136] - Distinguish fast build mode and complete build mode
+    * [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+    * [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
+    * [KYLIN-1154] - Load job page is very slow when there are a lot of history job
+    * [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+    * [KYLIN-1160] - Set default logger appender of log4j for JDBC
+    * [KYLIN-1161] - Rest API /api/cubes?cubeName=  is doing fuzzy match instead of exact match
+    * [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+    * [KYLIN-1190] - Make memory budget per query configurable
+    * [KYLIN-1234] - Cube ACL does not work
+    * [KYLIN-1235] - allow user to select dimension column as options when edit COUNT_DISTINCT measure
+    * [KYLIN-1237] - Revisit on cube size estimation
+    * [KYLIN-1239] - attribute each htable with team contact and owner name
+    * [KYLIN-1244] - In query window, enable fast copy&paste by double clicking tables/columns' names.
+    * [KYLIN-1245] - Switch between layer cubing and in-mem cubing according to stats
+    * [KYLIN-1246] - get cubes API update - offset,limit not required
+    * [KYLIN-1251] - add toggle event for tree label
+    * [KYLIN-1259] - Change font/background color of job progress
+    * [KYLIN-1265] - Make sure 2.0 query is no slower than 1.0
+    * [KYLIN-1266] - Tune 2.0 release package size
+    * [KYLIN-1267] - Check Kryo performance when spilling aggregation cache
+    * [KYLIN-1268] - Fix 2 kylin logs
+    * [KYLIN-1270] - improve TimedJsonStreamParser to support month_start,quarter_start,year_start
+    * [KYLIN-1281] - Add "partition_date_end", and move "partition_date_start" into cube descriptor
+    * [KYLIN-1283] - Replace GTScanRequest's SerDer form Kryo to manual 
+    * [KYLIN-1287] - UI update for streaming build action
+    * [KYLIN-1297] - Diagnose query performance issues in 2.x versions
+    * [KYLIN-1301] - fix segment pruning failure in 2.x versions
+    * [KYLIN-1308] - query storage v2 enable parallel cube visiting
+    * [KYLIN-1312] - Enhance DeployCoprocessorCLI to support Cube level filter
+    * [KYLIN-1318] - enable gc log for kylin server instance
+    * [KYLIN-1323] - Improve performance of converting data to hfile
+    * [KYLIN-1327] - Tool for batch updating host information of htables
+    * [KYLIN-1334] - allow truncating string for fixed length dimensions
+    * [KYLIN-1341] - Display JSON of Data Model in the dialog
+    * [KYLIN-1350] - hbase Result.binarySearch is found to be problematic in concurrent environments
+    * [KYLIN-1368] - JDBC Driver is not generic to restAPI json result
+
+__Bug__
+
+    * [KYLIN-404] - Can't get cube source record size.
+    * [KYLIN-457] - log4j error and dup lines in kylin.log
+    * [KYLIN-521] - No verification even if join condition is invalid
+    * [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+    * [KYLIN-635] - IN clause within CASE when is not working
+    * [KYLIN-656] - REST API get cube desc NullPointerException when cube is not exists
+    * [KYLIN-660] - Make configurable of dictionary cardinality cap
+    * [KYLIN-665] - buffer error while in mem cubing
+    * [KYLIN-688] - possible memory leak for segmentIterator
+    * [KYLIN-731] - Parallel stream build will throw OOM
+    * [KYLIN-740] - Slowness with many IN() values
+    * [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+    * [KYLIN-748] - II returned result not correct when decimal omits precision and scal
+    * [KYLIN-751] - Max on negative double values is not working
+    * [KYLIN-766] - round BigDecimal according to the DataType scale
+    * [KYLIN-769] - empty segment build fail due to no dictionary 
+    * [KYLIN-771] - query cache is not evicted when metadata changes
+    * [KYLIN-778] - can't build cube after package to binary 
+    * [KYLIN-780] - Upgrade Calcite to 1.0
+    * [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
+    * [KYLIN-801] - fix remaining issues on query cache and storage cache
+    * [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+    * [KYLIN-807] - Avoid write conflict between job engine and stream cube builder
+    * [KYLIN-817] - Support Extract() on timestamp column
+    * [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+    * [KYLIN-828] - kylin still use ldap profile when comment the line "kylin.sandbox=false" in kylin.properties
+    * [KYLIN-834] - optimize StreamingUtil binary search perf
+    * [KYLIN-837] - fix submit build type when refresh cube
+    * [KYLIN-873] - cancel button does not work when [resume][discard] job
+    * [KYLIN-889] - Support more than one HDFS files of lookup table
+    * [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+    * [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+    * [KYLIN-905] - Boolean type not supported
+    * [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+    * [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+    * [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+    * [KYLIN-914] - Scripts shebang should use /bin/bash
+    * [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+    * [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+    * [KYLIN-930] - can't see realizations under each project at project list page
+    * [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+    * [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+    * [KYLIN-936] - can not see job step log 
+    * [KYLIN-944] - update doc about how to consume kylin API in javascript
+    * [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+    * [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+    * [KYLIN-951] - Drop RowBlock concept from GridTable general API
+    * [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+    * [KYLIN-967] - Dump running queries on memory shortage
+    * [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
+    * [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+    * [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+    * [KYLIN-983] - Query sql offset keyword bug
+    * [KYLIN-985] - Don't suppoprt aggregation AVG while executing SQL
+    * [KYLIN-991] - StorageCleanupJob may clean a newly created HTable in streaming cube building
+    * [KYLIN-992] - ConcurrentModificationException when initializing ResourceStore
+    * [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+    * [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+    * [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million 
+    * [KYLIN-1026] - Error message for git check is not correct in package.sh
+    * [KYLIN-1027] - HBase Token not added after KYLIN-1007
+    * [KYLIN-1033] - Error when joining two sub-queries
+    * [KYLIN-1039] - Filter like (A or false) yields wrong result
+    * [KYLIN-1047] - Upgrade to Calcite 1.4
+    * [KYLIN-1066] - Only 1 reducer is started in the "Build cube" step of MR_Engine_V2
+    * [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+    * [KYLIN-1075] - select [MeasureCol] from [FactTbl] is not supported
+    * [KYLIN-1078] - UI - Cannot have comments in the end of New Query textbox
+    * [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+    * [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+    * [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+    * [KYLIN-1113] - Support TopN query in v2/CubeStorageQuery.java
+    * [KYLIN-1115] - Clean up ODBC driver code
+    * [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+    * [KYLIN-1127] - Refactor CacheService
+    * [KYLIN-1137] - TopN measure need support dictionary merge
+    * [KYLIN-1138] - Bad CubeDesc signature cause segment be delete when enable a cube
+    * [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+    * [KYLIN-1151] - Menu items should be aligned when create new model
+    * [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+    * [KYLIN-1153] - Upgrade is needed for cubedesc metadata from 1.x to 2.0
+    * [KYLIN-1171] - KylinConfig truncate bug
+    * [KYLIN-1179] - Cannot use String as partition column
+    * [KYLIN-1180] - Some NPE in Dictionary
+    * [KYLIN-1181] - Split metadata size exceeded when data got huge in one segment
+    * [KYLIN-1192] - Cannot edit data model desc without name change
+    * [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+    * [KYLIN-1211] - Add 'Enable Cache' button in System page
+    * [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+    * [KYLIN-1218] - java.lang.NullPointerException in MeasureTypeFactory when sync hive table
+    * [KYLIN-1220] - JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY
+    * [KYLIN-1225] - Only 15 cubes listed in the /models page
+    * [KYLIN-1226] - InMemCubeBuilder throw OOM for multiple HLLC measures
+    * [KYLIN-1230] - When CubeMigrationCLI copied ACL from one env to another, it may not work
+    * [KYLIN-1236] - redirect to home page when input invalid url
+    * [KYLIN-1250] - Got NPE when discarding a job
+    * [KYLIN-1260] - Job status labels are not in same style
+    * [KYLIN-1269] - Can not get last error message in email
+    * [KYLIN-1271] - Create streaming table layer will disappear if click on outside
+    * [KYLIN-1274] - Query from JDBC is partial results by default
+    * [KYLIN-1282] - Comparison filter on Date/Time column not work for query
+    * [KYLIN-1289] - Click on subsequent wizard steps doesn't work when editing existing cube or model
+    * [KYLIN-1303] - Error when in-mem cubing on empty data source which has boolean columns
+    * [KYLIN-1306] - Null strings are not applied during fast cubing
+    * [KYLIN-1314] - Display issue for aggression groups 
+    * [KYLIN-1315] - UI: Cannot add normal dimension when creating new cube 
+    * [KYLIN-1316] - Wrong label in Dialog CUBE REFRESH CONFIRM
+    * [KYLIN-1317] - Kill underlying running hadoop job while discard a job
+    * [KYLIN-1328] - "UnsupportedOperationException" is thrown when remove a data model
+    * [KYLIN-1330] - UI create model: Press enter will go back to pre step
+    * [KYLIN-1336] - 404 errors of model page and api 'access/DataModelDesc' in console
+    * [KYLIN-1337] - Sort cube name doesn't work well 
+    * [KYLIN-1346] - IllegalStateException happens in SparkCubing
+    * [KYLIN-1347] - UI: cannot place cursor in front of the last dimension
+    * [KYLIN-1349] - 'undefined' is logged in console when adding lookup table
+    * [KYLIN-1352] - 'Cache already exists' exception in high-concurrency query situation
+    * [KYLIN-1356] - use exec-maven-plugin for IT environment provision
+    * [KYLIN-1357] - Cloned cube has build time information
+    * [KYLIN-1372] - Query using PrepareStatement failed with multi OR clause
+    * [KYLIN-1382] - CubeMigrationCLI reports error when migrate cube
+    * [KYLIN-1396] - minor bug in BigDecimalSerializer - avoidVerbose should be incremented each time when input scale is larger than given scale 
+    * [KYLIN-1400] - kylin.metadata.url with hbase namespace problem
+    * [KYLIN-1402] - StringIndexOutOfBoundsException in Kylin Hive Column Cardinality Job
+    * [KYLIN-1414] - Couldn't drag and drop rowkey, js error is thrown in browser console
+
+
+## v1.2 - 2015-12-15
+_Tag:_ [kylin-1.2](https://github.com/apache/kylin/tree/kylin-1.2)
+
+__New Feature__
+
+    * [KYLIN-596] - Support Excel and Power BI
+    
+__Improvement__
+
+    * [KYLIN-389] - Can't edit cube name for existing cubes
+    * [KYLIN-702] - When Kylin create the flat hive table, it generates large number of small files in HDFS 
+    * [KYLIN-1021] - upload dependent jars of kylin to HDFS and set tmpjars
+    * [KYLIN-1058] - Remove "right join" during model creation
+    * [KYLIN-1064] - restore disabled queries in KylinQueryTest.testVerifyQuery
+    * [KYLIN-1065] - ODBC driver support tableau 9.1
+    * [KYLIN-1069] - update tip for 'Partition Column' on UI
+    * [KYLIN-1081] - ./bin/find-hive-dependency.sh may not find hive-hcatalog-core.jar
+    * [KYLIN-1095] - Update AdminLTE to latest version
+    * [KYLIN-1099] - Support dictionary of cardinality over 10 millions
+    * [KYLIN-1101] - Allow "YYYYMMDD" as a date partition column
+    * [KYLIN-1105] - Cache in AbstractRowKeyEncoder.createInstance() is useless
+    * [KYLIN-1119] - refine find-hive-dependency.sh to correctly get hcatalog path
+    * [KYLIN-1139] - Hive job not starting due to error "conflicting lock present for default mode EXCLUSIVE "
+    * [KYLIN-1149] - When yarn return an incomplete job tracking URL, Kylin will fail to get job status
+    * [KYLIN-1154] - Load job page is very slow when there are a lot of history job
+    * [KYLIN-1157] - CubeMigrationCLI doesn't copy ACL
+    * [KYLIN-1160] - Set default logger appender of log4j for JDBC
+    * [KYLIN-1161] - Rest API /api/cubes?cubeName=  is doing fuzzy match instead of exact match
+    * [KYLIN-1162] - Enhance HadoopStatusGetter to be compatible with YARN-2605
+    * [KYLIN-1166] - CubeMigrationCLI should disable and purge the cube in source store after be migrated
+    * [KYLIN-1168] - Couldn't save cube after doing some modification, get "Update data model is not allowed! Please create a new cube if needed" error
+    * [KYLIN-1190] - Make memory budget per query configurable
+
+__Bug__
+
+    * [KYLIN-693] - Couldn't change a cube's name after it be created
+    * [KYLIN-930] - can't see realizations under each project at project list page
+    * [KYLIN-966] - When user creates a cube, if enter a name which already exists, Kylin will thrown expection on last step
+    * [KYLIN-1033] - Error when joining two sub-queries
+    * [KYLIN-1039] - Filter like (A or false) yields wrong result
+    * [KYLIN-1067] - Support get MapReduce Job status for ResourceManager HA Env
+    * [KYLIN-1070] - changing  case in table name in  model desc
+    * [KYLIN-1093] - Consolidate getCurrentHBaseConfiguration() and newHBaseConfiguration() in HadoopUtil
+    * [KYLIN-1098] - two "kylin.hbase.region.count.min" in conf/kylin.properties
+    * [KYLIN-1106] - Can not send email caused by Build Base Cuboid Data step failed
+    * [KYLIN-1108] - Return Type Empty When Measure-> Count In Cube Design
+    * [KYLIN-1120] - MapReduce job read local meta issue
+    * [KYLIN-1121] - ResourceTool download/upload does not work in binary package
+    * [KYLIN-1140] - Kylin's sample cube "kylin_sales_cube" couldn't be saved.
+    * [KYLIN-1148] - Edit project's name and cancel edit, project's name still modified
+    * [KYLIN-1152] - ResourceStore should read content and timestamp in one go
+    * [KYLIN-1155] - unit test with minicluster doesn't work on 1.x
+    * [KYLIN-1203] - Cannot save cube after correcting the configuration mistake
+    * [KYLIN-1205] - hbase RpcClient java.io.IOException: Unexpected closed connection
+    * [KYLIN-1216] - Can't parse DateFormat like 'YYYYMMDD' correctly in query
+
+__Task__
+
+    * [KYLIN-1170] - Update website and status files to TLP
+
+
+## v1.1.1-incubating - 2015-11-04
+_Tag:_ [kylin-1.1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1.1-incubating)
+
+__Improvement__
+
+    * [KYLIN-999] - License check and cleanup for release
+
+## v1.1-incubating - 2015-10-25
+_Tag:_ [kylin-1.1-incubating](https://github.com/apache/kylin/tree/kylin-1.1-incubating)
+
+__New Feature__
+
+    * [KYLIN-222] - Web UI to Display CubeInstance Information
+    * [KYLIN-906] - cube retention
+    * [KYLIN-910] - Allow user to enter "retention range" in days on Cube UI
+
+__Bug__
+
+    * [KYLIN-457] - log4j error and dup lines in kylin.log
+    * [KYLIN-632] - "kylin.sh stop" doesn't check whether KYLIN_HOME was set
+    * [KYLIN-740] - Slowness with many IN() values
+    * [KYLIN-747] - bad query performance when IN clause contains a value doesn't exist in the dictionary
+    * [KYLIN-771] - query cache is not evicted when metadata changes
+    * [KYLIN-797] - Cuboid cache will cache massive invalid cuboid if existed many cubes which already be deleted 
+    * [KYLIN-847] - "select * from fact" does not work on 0.7 branch
+    * [KYLIN-913] - Cannot find rowkey column XXX in cube CubeDesc
+    * [KYLIN-918] - Calcite throws "java.lang.Float cannot be cast to java.lang.Double" error while executing SQL
+    * [KYLIN-944] - update doc about how to consume kylin API in javascript
+    * [KYLIN-950] - Web UI "Jobs" tab view the job reduplicated
+    * [KYLIN-952] - User can trigger a Refresh job on an non-existing cube segment via REST API
+    * [KYLIN-958] - update cube data model may fail and leave metadata in inconsistent state
+    * [KYLIN-961] - Can't get cube  source record count.
+    * [KYLIN-967] - Dump running queries on memory shortage
+    * [KYLIN-968] - CubeSegment.lastBuildJobID is null in new instance but used for rowkey_stats path
+    * [KYLIN-975] - change kylin.job.hive.database.for.intermediatetable cause job to fail
+    * [KYLIN-978] - GarbageCollectionStep dropped Hive Intermediate Table but didn't drop external hdfs path
+    * [KYLIN-982] - package.sh should grep out "Download*" messages when determining version
+    * [KYLIN-983] - Query sql offset keyword bug
+    * [KYLIN-985] - Don't suppoprt aggregation AVG while executing SQL
+    * [KYLIN-1001] - Kylin generates wrong HDFS path in creating intermediate table
+    * [KYLIN-1004] - Dictionary with '' value cause cube merge to fail
+    * [KYLIN-1005] - fail to acquire ZookeeperJobLock when hbase.zookeeper.property.clientPort is configured other than 2181
+    * [KYLIN-1015] - Hive dependency jars appeared twice on job configuration
+    * [KYLIN-1020] - Although "kylin.query.scan.threshold" is set, it still be restricted to less than 4 million 
+    * [KYLIN-1026] - Error message for git check is not correct in package.sh
+
+__Improvement__
+
+    * [KYLIN-343] - Enable timeout on query 
+    * [KYLIN-367] - automatically backup metadata everyday
+    * [KYLIN-589] - Cleanup Intermediate hive table after cube build
+    * [KYLIN-772] - Continue cube job when hive query return empty resultset
+    * [KYLIN-858] - add snappy compression support
+    * [KYLIN-882] - check access to kylin.hdfs.working.dir
+    * [KYLIN-895] - Add "retention_range" attribute for cube instance, and automatically drop the oldest segment when exceeds retention
+    * [KYLIN-901] - Add tool for cleanup Kylin metadata storage
+    * [KYLIN-956] - Allow users to configure hbase compression algorithm in kylin.properties
+    * [KYLIN-957] - Support HBase in a separate cluster
+    * [KYLIN-965] - Allow user to configure the region split size for cube
+    * [KYLIN-971] - kylin display timezone on UI
+    * [KYLIN-987] - Rename 0.7-staging and 0.8 branch
+    * [KYLIN-998] - Finish the hive intermediate table clean up job in org.apache.kylin.job.hadoop.cube.StorageCleanupJob
+    * [KYLIN-999] - License check and cleanup for release
+    * [KYLIN-1013] - Make hbase client configurations like timeout configurable
+    * [KYLIN-1025] - Save cube change is very slow
+    * [KYLIN-1034] - Faster bitmap indexes with Roaring bitmaps
+    * [KYLIN-1035] - Validate [Project] before create Cube on UI
+    * [KYLIN-1037] - Remove hardcoded "hdp.version" from regression tests
+    * [KYLIN-1047] - Upgrade to Calcite 1.4
+    * [KYLIN-1048] - CPU and memory killer in Cuboid.findById()
+    * [KYLIN-1061] - "kylin.sh start" should check whether kylin has already been running
+
+
+## v1.0-incubating - 2015-09-06
+_Tag:_ [kylin-1.0-incubating](https://github.com/apache/kylin/tree/kylin-1.0-incubating)
+
+__New Feature__
+
+    * [KYLIN-591] - Leverage Zeppelin to interactive with Kylin
+
+__Bug__
+
+    * [KYLIN-404] - Can't get cube source record size.
+    * [KYLIN-626] - JDBC error for float and double values
+    * [KYLIN-751] - Max on negative double values is not working
+    * [KYLIN-757] - Cache wasn't flushed in cluster mode
+    * [KYLIN-780] - Upgrade Calcite to 1.0
+    * [KYLIN-805] - Drop useless Hive intermediate table and HBase tables in the last step of cube build/merge
+    * [KYLIN-889] - Support more than one HDFS files of lookup table
+    * [KYLIN-897] - Update CubeMigrationCLI to copy data model info
+    * [KYLIN-898] - "CUBOID_CACHE" in Cuboid.java never flushes
+    * [KYLIN-911] - NEW segments not DELETED when cancel BuildAndMerge Job
+    * [KYLIN-912] - $KYLIN_HOME/tomcat/temp folder takes much disk space after long run
+    * [KYLIN-914] - Scripts shebang should use /bin/bash
+    * [KYLIN-915] - appendDBName in CubeMetadataUpgrade will return null
+    * [KYLIN-921] - Dimension with all nulls cause BuildDimensionDictionary failed due to FileNotFoundException
+    * [KYLIN-923] - FetcherRunner will never run again if encountered exception during running
+    * [KYLIN-929] - can not sort cubes by [Source Records] at cubes list page
+    * [KYLIN-934] - Negative number in SUM result and Kylin results not matching exactly Hive results
+    * [KYLIN-935] - always loading when try to view the log of the sub-step of cube build job
+    * [KYLIN-936] - can not see job step log 
+    * [KYLIN-940] - NPE when close the null resouce
+    * [KYLIN-945] - Kylin JDBC - Get Connection from DataSource results in NullPointerException
+    * [KYLIN-946] - [UI] refresh page show no results when Project selected as [--Select All--]
+    * [KYLIN-949] - Query cache doesn't work properly for prepareStatement queries
+
+__Improvement__
+
+    * [KYLIN-568] - job support stop/suspend function so that users can manually resume a job
+    * [KYLIN-717] - optimize OLAPEnumerator.convertCurrentRow()
+    * [KYLIN-792] - kylin performance insight [dashboard]
+    * [KYLIN-838] - improve performance of job query
+    * [KYLIN-842] - Add version and commit id into binary package
+    * [KYLIN-844] - add backdoor toggles to control query behavior 
+    * [KYLIN-857] - backport coprocessor improvement in 0.8 to 0.7
+    * [KYLIN-866] - Confirm with user when he selects empty segments to merge
+    * [KYLIN-867] - Hybrid model for multiple realizations/cubes
+    * [KYLIN-880] -  Kylin should change the default folder from /tmp to user configurable destination
+    * [KYLIN-881] - Upgrade Calcite to 1.3.0
+    * [KYLIN-883] - Using configurable option for Hive intermediate tables created by Kylin job
+    * [KYLIN-893] - Remove the dependency on quartz and metrics
+    * [KYLIN-922] - Enforce same code style for both intellij and eclipse user
+    * [KYLIN-926] - Make sure Kylin leaves no garbage files in local OS and HDFS/HBASE
+    * [KYLIN-933] - friendly UI to use data model
+    * [KYLIN-938] - add friendly tip to page when rest request failed
+
+__Task__
+
+    * [KYLIN-884] - Restructure docs and website
+    * [KYLIN-907] - Improve Kylin community development experience
+    * [KYLIN-954] - Release v1.0 (formerly v0.7.3)
+    * [KYLIN-863] - create empty segment when there is no data in one single streaming batch
+    * [KYLIN-908] - Help community developer to setup develop/debug environment
+    * [KYLIN-931] - Port KYLIN-921 to 0.8 branch
+
+## v0.7.2-incubating - 2015-07-21
+_Tag:_ [kylin-0.7.2-incubating](https://github.com/apache/kylin/tree/kylin-0.7.2-incubating)
+
+__Main Changes:__  
+Critical bug fixes after the v0.7.1 release; please use this version directly for new deployments and upgrade to it for existing deployments.
+
+__Bug__  
+
+    * [KYLIN-514] - Error message is not helpful to user when doing something in Jason Editor window
+    * [KYLIN-598] - Kylin detecting hive table delim failure
+    * [KYLIN-660] - Make configurable of dictionary cardinality cap
+    * [KYLIN-765] - When a cube job is failed, still be possible to submit a new job
+    * [KYLIN-814] - Duplicate columns error for subqueries on fact table
+    * [KYLIN-819] - Fix necessary ColumnMetaData order for Calcite (Optic)
+    * [KYLIN-824] - Cube Build fails if lookup table doesn't have any files under HDFS location
+    * [KYLIN-829] - Cube "Actions" shows "NA"; but after expand the "access" tab, the button shows up
+    * [KYLIN-830] - Cube merge failed after migrating from v0.6 to v0.7
+    * [KYLIN-831] - Kylin report "Column 'ABC' not found in table 'TABLE' while executing SQL", when that column is FK but not define as a dimension
+    * [KYLIN-840] - HBase table compress not enabled even LZO is installed
+    * [KYLIN-848] - Couldn't resume or discard a cube job
+    * [KYLIN-849] - Couldn't query metrics on lookup table PK
+    * [KYLIN-865] - Cube has been built but couldn't query; In log it said "Realization 'CUBE.CUBE_NAME' defined under project PROJECT_NAME is not found
+    * [KYLIN-873] - cancel button does not work when [resume][discard] job
+    * [KYLIN-888] - "Jobs" page only shows 15 job at max, the "Load more" button was disappeared
+
+__Improvement__
+
+    * [KYLIN-159] - Metadata migrate tool 
+    * [KYLIN-199] - Validation Rule: Unique value of Lookup table's key columns
+    * [KYLIN-207] - Support SQL pagination
+    * [KYLIN-209] - Merge tail small MR jobs into one
+    * [KYLIN-210] - Split heavy MR job to more small jobs
+    * [KYLIN-221] - Convert cleanup and GC to job 
+    * [KYLIN-284] - add log for all Rest API Request
+    * [KYLIN-488] - Increase HDFS block size 1GB
+    * [KYLIN-600] - measure return type update
+    * [KYLIN-611] - Allow Implicit Joins
+    * [KYLIN-623] - update Kylin UI Style to latest AdminLTE
+    * [KYLIN-727] - Cube build in BuildCubeWithEngine does not cover incremental build/cube merge
+    * [KYLIN-752] - Improved IN clause performance
+    * [KYLIN-773] - performance is slow list jobs
+    * [KYLIN-839] - Optimize Snapshot table memory usage 
+
+__New Feature__
+
+    * [KYLIN-211] - Bitmap Inverted Index
+    * [KYLIN-285] - Enhance alert program for whole system
+    * [KYLIN-467] - Validataion Rule: Check duplicate rows in lookup table
+    * [KYLIN-471] - Support "Copy" on grid result
+
+__Task__
+
+    * [KYLIN-7] - Enable maven checkstyle plugin
+    * [KYLIN-885] - Release v0.7.2
+    * [KYLIN-812] - Upgrade to Calcite 0.9.2
+
+## v0.7.1-incubating (First Apache Release) - 2015-06-10  
+_Tag:_ [kylin-0.7.1-incubating](https://github.com/apache/kylin/tree/kylin-0.7.1-incubating)
+
+Apache Kylin v0.7.1-incubating was rolled out on June 10, 2015. This is also the first Apache release since joining the Incubator. 
+
+__Main Changes:__
+
+* Package renamed from com.kylinolap to org.apache.kylin
+* Code cleaned up to comply with the Apache License policy
+* Easy install and setup with a bunch of scripts and automation
+* Job engine refactored into a generic job manager for all jobs, with improved efficiency
+* Support for Hive databases other than 'default'
+* JDBC driver available for clients to interact with the Kylin server
+* Binary package available for download 
+
+__New Feature__
+
+    * [KYLIN-327] - Binary distribution 
+    * [KYLIN-368] - Move MailService to Common module
+    * [KYLIN-540] - Data model upgrade for legacy cube descs
+    * [KYLIN-576] - Refactor expansion rate expression
+
+__Task__
+
+    * [KYLIN-361] - Rename package name with Apache Kylin
+    * [KYLIN-531] - Rename package name to org.apache.kylin
+    * [KYLIN-533] - Job Engine Refactoring
+    * [KYLIN-585] - Simplify deployment
+    * [KYLIN-586] - Add Apache License header in each source file
+    * [KYLIN-587] - Remove hard copy of javascript libraries
+    * [KYLIN-624] - Add dimension and metric info into DataModel
+    * [KYLIN-650] - Move all document from github wiki to code repository (using md file)
+    * [KYLIN-669] - Release v0.7.1 as first apache release
+    * [KYLIN-670] - Update pom with "incubating" in version number
+    * [KYLIN-737] - Generate and sign release package for review and vote
+    * [KYLIN-795] - Release after success vote
+
+__Bug__
+
+    * [KYLIN-132] - Job framework
+    * [KYLIN-194] - Dict & ColumnValueContainer does not support number comparison, they do string comparison right now
+    * [KYLIN-220] - Enable swap column of Rowkeys in Cube Designer
+    * [KYLIN-230] - Error when create HTable
+    * [KYLIN-255] - Error when a aggregated function appear twice in select clause
+    * [KYLIN-383] - Sample Hive EDW database name should be replaced by "default" in the sample
+    * [KYLIN-399] - refreshed segment not correctly published to cube
+    * [KYLIN-412] - No exception or message when sync up table which can't access
+    * [KYLIN-421] - Hive table metadata issue
+    * [KYLIN-436] - Can't sync Hive table metadata from other database rather than "default"
+    * [KYLIN-508] - Too high cardinality is not suitable for dictionary!
+    * [KYLIN-509] - Order by on fact table not works correctly
+    * [KYLIN-517] - Always delete the last one of Add Lookup page buttom even if deleting the first join condition
+    * [KYLIN-524] - Exception will throw out if dimension is created on a lookup table, then deleting the lookup table.
+    * [KYLIN-547] - Create cube failed if column dictionary sets false and column length value greater than 0
+    * [KYLIN-556] - error tip enhance when cube detail return empty
+    * [KYLIN-570] - Need not to call API before sending login request
+    * [KYLIN-571] - Dimensions lost when creating cube though Joson Editor
+    * [KYLIN-572] - HTable size is wrong
+    * [KYLIN-581] - unable to build cube
+    * [KYLIN-583] - Dependency of Hive conf/jar in II branch will affect auto deploy
+    * [KYLIN-588] - Error when run package.sh
+    * [KYLIN-593] - angular.min.js.map and angular-resource.min.js.map are missing in kylin.war
+    * [KYLIN-594] - Making changes in build and packaging with respect to apache release process
+    * [KYLIN-595] - Kylin JDBC driver should not assume Kylin server listen on either 80 or 443
+    * [KYLIN-605] - Issue when install Kylin on a CLI which does not have yarn Resource Manager
+    * [KYLIN-614] - find hive dependency shell fine is unable to set the hive dependency correctly
+    * [KYLIN-615] - Unable add measures in Kylin web UI
+    * [KYLIN-619] - Cube build fails with hive+tez
+    * [KYLIN-620] - Wrong duration number
+    * [KYLIN-621] - SecurityException when running MR job
+    * [KYLIN-627] - Hive tables' partition column was not sync into Kylin
+    * [KYLIN-628] - Couldn't build a new created cube
+    * [KYLIN-629] - Kylin failed to run mapreduce job if there is no mapreduce.application.classpath in mapred-site.xml
+    * [KYLIN-630] - ArrayIndexOutOfBoundsException when merge cube segments 
+    * [KYLIN-638] - kylin.sh stop not working
+    * [KYLIN-639] - Get "Table 'xxxx' not found while executing SQL" error after a cube be successfully built
+    * [KYLIN-640] - sum of float not working
+    * [KYLIN-642] - Couldn't refresh cube segment
+    * [KYLIN-643] - JDBC couldn't connect to Kylin: "java.sql.SQLException: Authentication Failed"
+    * [KYLIN-644] - join table as null error when build the cube
+    * [KYLIN-652] - Lookup table alias will be set to null
+    * [KYLIN-657] - JDBC Driver not register into DriverManager
+    * [KYLIN-658] - java.lang.IllegalArgumentException: Cannot find rowkey column XXX in cube CubeDesc
+    * [KYLIN-659] - Couldn't adjust the rowkey sequence when create cube
+    * [KYLIN-666] - Select float type column got class cast exception
+    * [KYLIN-681] - Failed to build dictionary if the rowkey's dictionary property is "date(yyyy-mm-dd)"
+    * [KYLIN-682] - Got "No aggregator for func 'MIN' and return type 'decimal(19,4)'" error when build cube
+    * [KYLIN-684] - Remove holistic distinct count and multiple column distinct count from sample cube
+    * [KYLIN-691] - update tomcat download address in download-tomcat.sh
+    * [KYLIN-696] - Dictionary couldn't recognize a value and throw IllegalArgumentException: "Not a valid value"
+    * [KYLIN-703] - UT failed due to unknown host issue
+    * [KYLIN-711] - UT failure in REST module
+    * [KYLIN-739] - Dimension as metrics does not work with PK-FK derived column
+    * [KYLIN-761] - Tables are not shown in the "Query" tab, and couldn't run SQL query after cube be built
+
+__Improvement__
+
+    * [KYLIN-168] - Installation fails if multiple ZK
+    * [KYLIN-182] - Validation Rule: columns used in Join condition should have same datatype
+    * [KYLIN-204] - Kylin web not works properly in IE
+    * [KYLIN-217] - Enhance coprocessor with endpoints 
+    * [KYLIN-251] - job engine refactoring
+    * [KYLIN-261] - derived column validate when create cube
+    * [KYLIN-317] - note: grunt.json need to be configured when add new javascript or css file
+    * [KYLIN-324] - Refactor metadata to support InvertedIndex
+    * [KYLIN-407] - Validation: There's should no Hive table column using "binary" data type
+    * [KYLIN-445] - Rename cube_desc/cube folder
+    * [KYLIN-452] - Automatically create local cluster for running tests
+    * [KYLIN-498] - Merge metadata tables 
+    * [KYLIN-532] - Refactor data model in kylin front end
+    * [KYLIN-539] - use hbase command to launch tomcat
+    * [KYLIN-542] - add project property feature for cube
+    * [KYLIN-553] - From cube instance, couldn't easily find the project instance that it belongs to
+    * [KYLIN-563] - Wrap kylin start and stop with a script 
+    * [KYLIN-567] - More flexible validation of new segments
+    * [KYLIN-569] - Support increment+merge job
+    * [KYLIN-578] - add more generic configuration for ssh
+    * [KYLIN-601] - Extract content from kylin.tgz to "kylin" folder
+    * [KYLIN-616] - Validation Rule: partition date column should be in dimension columns
+    * [KYLIN-634] - Script to import sample data and cube metadata
+    * [KYLIN-636] - wiki/On-Hadoop-CLI-installation is not up to date
+    * [KYLIN-637] - add start&end date for hbase info in cubeDesigner
+    * [KYLIN-714] - Add Apache RAT to pom.xml
+    * [KYLIN-753] - Make the dependency on hbase-common to "provided"
+    * [KYLIN-758] - Updating port forwarding issue Hadoop Installation on Hortonworks Sandbox.
+    * [KYLIN-779] - [UI] jump to cube list after create cube
+    * [KYLIN-796] - Add REST API to trigger storage cleanup/GC
+
+__Wish__
+
+    * [KYLIN-608] - Distinct count for ii storage
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/acl.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/acl.md b/website/_docs2/tutorial/acl.md
new file mode 100644
index 0000000..caf00cf
--- /dev/null
+++ b/website/_docs2/tutorial/acl.md
@@ -0,0 +1,35 @@
+---
+layout: docs2
+title:  Kylin Cube Permission Grant Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/acl.html
+version: v1.2
+since: v0.7.1
+---
+
+   
+
+On the `Cubes` page, double-click a cube row to see its detailed information. Here we focus on the `Access` tab.
+Click the `+Grant` button to grant permission. 
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/14 +grant.png)
+
+There are four different kinds of permissions for a cube. Move your mouse over the `?` icon to see detailed information. 
+
+![](/images/Kylin-Cube-Permission-Grant-Tutorial/15 grantInfo.png)
+
+There are also two types of grantee that a permission can be granted to: `User` and `Role`. A `Role` is a group of users who share the same role.
+
+### 1. Grant User Permission
+* Select the `User` type, enter the username of the user you want to grant access to, and select the related permission. 
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 grant-user.png)
+
+* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry appear in the table. You can select a different access permission to change a user's permission. To revoke a user's permission, just click the `Revoke` button.
+
+     ![](/images/Kylin-Cube-Permission-Grant-Tutorial/16 user-update.png)
+
+### 2. Grant Role Permission
+* Select the `Role` type, choose the group of users that you want to grant access to from the drop-down list, and select a permission.
+
+* Then click the `Grant` button to send the request. After the operation succeeds, you will see a new entry appear in the table. You can select a different access permission to change a group's permission. To revoke a group's permission, just click the `Revoke` button. For a hypothetical scripted alternative, see the sketch below.
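+
+The web UI is the documented way to grant permissions. Purely as a hypothetical sketch, the same operation could be scripted against Kylin's access REST interface; the endpoint path, payload fields, port and `ADMIN:KYLIN` credentials below are assumptions, not confirmed by this tutorial, so verify them against the REST API reference of your Kylin version before use.
+
+    # Hypothetical sketch only: the endpoint path and payload fields are assumptions.
+    # <kylin-host> and <cube-uuid> are placeholders for your server and cube UUID.
+    curl -X POST \
+         -u ADMIN:KYLIN \
+         -H "Content-Type: application/json" \
+         -d '{"permission": "READ", "principal": true, "sid": "analyst_user"}' \
+         "http://<kylin-host>:7070/kylin/api/access/CubeInstance/<cube-uuid>"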

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/create_cube.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/create_cube.md b/website/_docs2/tutorial/create_cube.md
new file mode 100644
index 0000000..915f3b9
--- /dev/null
+++ b/website/_docs2/tutorial/create_cube.md
@@ -0,0 +1,129 @@
+---
+layout: docs2
+title:  Kylin Cube Creation Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/create_cube.html
+version: v1.2
+since: v0.7.1
+---
+  
+  
+### I. Create a Project
+1. Go to `Query` page in top menu bar, then click `Manage Projects`.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/2 +project.png)
+
+3. Fill in the following form and click the `Submit` button to send the request.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, a notification will show at the bottom.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up a Table
+1. Click `Tables` in the top bar and then click the `+ Sync` button to load Hive table metadata.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/4 +table.png)
+
+2. Enter the table names and click `Sync` to send a request.
+
+   ![]( /images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+### III. Create a Cube
+To start with, click `Cubes` in the top bar. Then click the `+Cube` button to enter the cube designer page.
+
+![]( /images/Kylin-Cube-Creation-Tutorial/6 +cube.png)
+
+**Step 1. Cube Info**
+
+Fill in the basic information of the cube. Click `Next` to go to the next step.
+
+You can use letters, numbers and '_' to name your cube (note that spaces are not allowed in the name).
+
+![]( /images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+
+**Step 2. Dimensions**
+
+1. Set up the fact table.
+
+    ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
+
+2. Click `+Dimension` to add a new dimension.
+
+    ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-+dim.png)
+
+3. There are different types of dimensions that might be added to a cube. Here we list some of them for your reference.
+
+    * Dimensions from fact table.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
+
+    * Dimensions from lookup table.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
+
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
+   
+    * Dimensions from lookup table with hierarchy.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
+
+    * Dimensions from lookup table with derived dimensions.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
+
+4. You can edit a dimension after saving it.
+   ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
+
+**Step 3. Measures**
+
+1. Click the `+Measure` button to add a new measure.
+   ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-+meas.png)
+
+2. There are 5 different types of measures according to their expression: `SUM`, `MAX`, `MIN`, `COUNT` and `COUNT_DISTINCT`. Please choose the return type carefully, since it is related to the error rate of `COUNT(DISTINCT)`. A query sketch that exercises these measures follows this list.
+   * SUM
+
+     ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
+
+   * MIN
+
+     ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
+
+   * MAX
+
+     ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
+
+   * COUNT
+
+     ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
+
+   * DISTINCT_COUNT
+
+     ![]( /images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
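+
+For reference, the measures defined here are used implicitly whenever a query aggregates the corresponding columns. The sketch below is only an illustration; it assumes the Kylin query REST endpoint, the default `ADMIN:KYLIN` account, the default port 7070 and the sample `kylin_sales` table from the Quick Start guide, so adjust these for your own cube and deployment.
+
+    # Illustrative sketch, assuming the query REST endpoint and the sample cube:
+    # a SUM measure on "price" and a COUNT_DISTINCT measure on "seller_id"
+    # answer the aggregations in this SQL.
+    curl -X POST \
+         -u ADMIN:KYLIN \
+         -H "Content-Type: application/json" \
+         -d '{"sql": "select part_dt, sum(price) as total_price, count(distinct seller_id) as sellers from kylin_sales group by part_dt", "project": "learn_kylin", "offset": 0, "limit": 50}' \
+         "http://<kylin-host>:7070/kylin/api/query"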
+
+**Step 4. Filter**
+
+This step is optional. You can add some filter conditions in `SQL` format.
+
+![]( /images/Kylin-Cube-Creation-Tutorial/10 filter.png)
+
+**Step 5. Refresh Setting**
+
+This step is designed for incremental cube builds. 
+
+![]( /images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
+
+Choose partition type, partition column and start date.
+
+![]( /images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
+
+**Step 6. Advanced Setting**
+
+![]( /images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
+
+**Step 7. Overview & Save**
+
+You can review your cube and go back to a previous step to modify it. Click the `Save` button to complete the cube creation.
+
+![]( /images/Kylin-Cube-Creation-Tutorial/13 overview.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/cube_build_job.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/cube_build_job.md b/website/_docs2/tutorial/cube_build_job.md
new file mode 100644
index 0000000..3a73697
--- /dev/null
+++ b/website/_docs2/tutorial/cube_build_job.md
@@ -0,0 +1,66 @@
+---
+layout: docs2
+title:  Kylin Cube Build and Job Monitoring Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/cube_build_job.html
+version: v1.2
+since: v0.7.1
+---
+
+### Cube Build
+First of all, make sure that you have permission on the cube you want to build.
+
+1. On the `Cubes` page, click the `Action` drop-down button on the right of a cube row and select the `Build` operation.
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 action-build.png)
+
+2. There is a pop-up window after the selection. 
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
+
+3. Click the `END DATE` input box to choose the end date of this incremental cube build.
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
+
+4. Click `Submit` to send request. 
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
+
+   After the request is submitted successfully, you will see the newly created job on the `Jobs` page. (A scripted alternative via the REST API is sketched after this list.)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
+
+5. To discard this job, just click the `Discard` button.
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
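+
+Builds can also be triggered programmatically instead of through the UI. The sketch below is an assumption-laden illustration: the endpoint path (`/build` vs `/rebuild`), the payload fields and the default `ADMIN:KYLIN` account vary by Kylin version, so check the REST API reference for your release before relying on it.
+
+    # Sketch only: endpoint path and payload are assumptions that differ between versions.
+    # <kylin-host> and <cube-name> are placeholders; timestamps are epoch milliseconds.
+    curl -X PUT \
+         -u ADMIN:KYLIN \
+         -H "Content-Type: application/json" \
+         -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' \
+         "http://<kylin-host>:7070/kylin/api/cubes/<cube-name>/rebuild"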
+
+### Job Monitoring
+On the `Jobs` page, click the job detail button to see detailed information shown on the right side.
+
+![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+The detailed information of a job provides a step-by-step record to trace it. You can hover over a step's status icon to see its basic status and information.
+
+![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+Click the icon buttons shown in each step to see the details: `Parameters`, `Log`, `MRJob`, `EagleMonitoring`.
+
+* Parameters
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters-d.png)
+
+* Log
+        
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![](/images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/kylin_sample.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/kylin_sample.md b/website/_docs2/tutorial/kylin_sample.md
new file mode 100644
index 0000000..281e2ea
--- /dev/null
+++ b/website/_docs2/tutorial/kylin_sample.md
@@ -0,0 +1,23 @@
+---
+layout: docs2
+title:  Quick Start with Sample Cube
+categories: tutorial
+permalink: /docs2/tutorial/kylin_sample.html
+version: v1.2
+since: v0.7.1
+---
+
+Kylin provides a script for you to create a sample cube; the script will also create three sample Hive tables:
+
+1. Run `${KYLIN_HOME}/bin/sample.sh`, then restart the Kylin server to flush the caches (see the command sketch after this list);
+2. Log in to the Kylin web UI and select the project "learn_kylin";
+3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick an end date later than 2014-01-01 (to cover all 10,000 sample records);
+4. Check the build progress in the "Jobs" tab until it reaches 100%;
+5. Execute SQL statements in the "Query" tab, for example:
+	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+6. You can verify the query result and compare the response time with Hive;
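+
+A minimal command sketch for step 1, assuming `$KYLIN_HOME` points at your Kylin installation and that `kylin.sh stop`/`kylin.sh start` is how your deployment restarts the server:
+
+    # Create the sample cube metadata and the three sample Hive tables.
+    ${KYLIN_HOME}/bin/sample.sh
+    # Restart the Kylin server so the metadata caches are flushed.
+    ${KYLIN_HOME}/bin/kylin.sh stop
+    ${KYLIN_HOME}/bin/kylin.sh start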
+
+   
+## What's next
+
+After the cube is built, please refer to the other documents in this tutorial for more detailed information.

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/odbc.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/odbc.md b/website/_docs2/tutorial/odbc.md
new file mode 100644
index 0000000..a800429
--- /dev/null
+++ b/website/_docs2/tutorial/odbc.md
@@ -0,0 +1,50 @@
+---
+layout: docs2
+title:  Kylin ODBC Driver Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/odbc.html
+version: v1.2
+since: v0.7.1
+---
+
+> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit and 64-bit versions of the driver are available.
+> 
+> Tested Operating Systems: Windows 7, Windows Server 2008 R2
+> 
+> Tested Applications: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable 
+   * For 32 bit Windows or 32 bit Tableau Desktop: Download: [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe) 
+   * For 64 bit Windows or 64 bit Tableau Desktop: Download: [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+
+2. The ODBC driver internally gets results from a Kylin REST server; make sure you have access to one
+
+## Installation
+1. Uninstall any existing Kylin ODBC driver first, if you have installed it before
+2. Download ODBC Driver from [download](../../download/).
+   * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
+   * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
+
+3. Both drivers are already installed on Tableau Server, so you should be able to publish there without issues
+
+## DSN configuration
+1. Open the ODBC Data Source Administrator (ODBCAD) to configure a DSN.
+	* For the 32-bit driver, please use the 32-bit version: C:\Windows\SysWOW64\odbcad32.exe
+	* For the 64-bit driver, please use the default "Data Sources (ODBC)" in Control Panel/Administrative Tools
+![]( /images/Kylin-ODBC-DSN/1.png)
+
+2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed as an option, Click "Finish" to continue.
+![]( /images/Kylin-ODBC-DSN/2.png)
+
+3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is running.
+![]( /images/Kylin-ODBC-DSN/3.png)
+
+4. Click "Done", and you will see your new DSN listed in the "System Data Sources", you can use this DSN afterwards.
+![]( /images/Kylin-ODBC-DSN/4.png)
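+
+Once the DSN is created, client tools can reference it with a very short connection string, as described in the Excel/Power BI tutorial of this doc set (a sketch; replace the name with the DSN you configured):
+
+{% highlight Groff markup %}
+DSN=[YOUR_DSN_NAME]
+{% endhighlight %}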
+
+## Bug Report
+Please open an Apache Kylin JIRA issue to report bugs, or send an email to the dev mailing list.

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/powerbi.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/powerbi.md b/website/_docs2/tutorial/powerbi.md
new file mode 100644
index 0000000..9828180
--- /dev/null
+++ b/website/_docs2/tutorial/powerbi.md
@@ -0,0 +1,55 @@
+---
+layout: docs2
+title:  MS Excel and Power BI Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/powerbi.html
+version: v1.2
+since: v1.2
+---
+
+Microsoft Excel is one of the most widely used data tools on the Windows platform and has plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets. 
+
+Microsoft Power BI is a business intelligence tool that provides rich functionality and experience for data visualization and processing.
+
+> Apache Kylin doesn't support queries on raw data yet, so some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better presentation of query results.
+
+> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet, so be careful when querying a huge dataset: it may pull too much data into your client, which can take a long time or even fail in the end.
+
+### Install ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.2__. If you already installed ODBC Driver in your system, please uninstall it first. 
+
+### Kylin and Excel
+1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` ribbon tab, click the `From Other Sources` dropdown list, and select the `ODBC` item.
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2.  You'll see the `From ODBC` dialog; type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally, you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will be loaded into your spreadsheet.
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> Tip: To simplify the database connection string, a DSN is recommended, which can shorten the connection string to something like `DSN=[YOUR_DSN_NAME]`. For details about DSNs, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+ 
+3. If you didn't input a SQL statement in the last step, Power Query will list all tables in the project, which means you can load data from a whole table. However, since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4.  Wait a moment, and the data will be loaded into Excel.
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5.  If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you'll see the latest data.
+
+6.  To improve data loading performance, you can enable `Fast data load` in Power Query, but this may make your UI unresponsive for a while. 
+
+### Power BI
+1.  Run Power BI Desktop, click the `Get Data` button, then select `ODBC` as the data source type.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2.  As with Excel, type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3.  If you didn't input a SQL statement in the last step, Power BI will list all tables in the project, which means you can load data from a whole table. However, since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  Now you can enjoy analyzing data with Power BI.
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5.  To reload the data and redraw the charts, just click the `Refresh` button in the `Home` ribbon tab.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/tableau.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/tableau.md b/website/_docs2/tutorial/tableau.md
new file mode 100644
index 0000000..53dcaa3
--- /dev/null
+++ b/website/_docs2/tutorial/tableau.md
@@ -0,0 +1,115 @@
+---
+layout: docs2
+title:  Tableau Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/tableau.html
+version: v1.2
+since: v0.7.1
+---
+
+> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
+> 
+> * Only the "managed" analysis path is supported; the Kylin engine will raise an exception for an unexpected dimension or metric
+> * Always select the fact table first, then add lookup tables with the correct join conditions (the join types defined in the cube)
+> * Do not try to join between fact tables or between lookup tables;
+> * You can try to use high-cardinality dimensions like seller id as a Tableau filter, but the engine will only return a limited number of seller ids in Tableau's filter for now.
+
+### For Tableau 9.x Users
+Please refer to the [Tableau 9.x Tutorial](./tableau_91.html) for a detailed guide.
+
+### Step 1. Install Kylin ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+
+### Step 2. Connect to Kylin Server
+> We recommend using Connect Using Driver instead of Using DSN.
+
+Connect Using Driver: Select "Other Database(ODBC)" in the left panel and choose KylinODBCDriver in the pop-up window. 
+
+![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
+
+Enter your server location and credentials: server host, port, username and password.
+
+![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
+
+Click "Connect" to get the list of projects that you have permission to access. See details about permission in [Kylin Cube Permission Grant Tutorial](./acl.html). Then choose the project you want to connect in the drop down list. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
+
+Click "Done" to complete the connection.
+
+![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
+
+### Step 3. Using Single Table or Multiple Tables
+> Limitation
+> 
+>    * You must select the FACT table first
+>    * Selecting from a lookup table only is not supported
+>    * The join conditions must match the cube definition
+
+**Select Fact Table**
+
+Select `Multiple Tables`.
+
+![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
+
+Then click `Add Table...` to add a fact table.
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
+
+**Select Look-up Table**
+
+Click `Add Table...` to add a look-up table. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
+
+Set up the join clause carefully. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
+
+Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
+
+**Using Connect Live**
+
+There are three types of `Data Connection`. Choose the `Connect Live` option. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
+
+Then you can enjoy analyzing with Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
+
+**Add additional look-up Tables**
+
+Click `Data` in the top menu bar, select `Edit Tables...` to update the look-up table information.
+
+![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
+
+### Step 4. Using Customized SQL
+Using customized SQL resembles using Single Table/Multiple Tables, except that you paste your SQL into the `Custom SQL` tab and follow the same instructions as above.
+
+![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
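+
+For example, a custom SQL statement might look like the following (a sketch, assuming the learn_kylin sample project and its kylin_sales table; adjust the table and columns to your own cube model):
+
+{% highlight Groff markup %}
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers
+from kylin_sales
+group by part_dt
+{% endhighlight %}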
+
+### Step 5. Publish to Tableau Server
+Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
+Click `Server` in the top menu bar and select `Publish Workbook...`. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
+
+Then sign in to your Tableau Server and prepare to publish. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
+
+If you're using Driver Connect instead of DSN Connect, you'll additionally need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
+
+![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
+
+### Tips
+* Hide Table name in Tableau
+
+    * Tableau will display columns grouped by source table name, but users may want to organize columns with a different structure. Use "Group by Folder" in Tableau and create folders to group different columns.
+
+     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/tableau_91.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/tableau_91.md b/website/_docs2/tutorial/tableau_91.md
new file mode 100644
index 0000000..0c6e559
--- /dev/null
+++ b/website/_docs2/tutorial/tableau_91.md
@@ -0,0 +1,51 @@
+---
+layout: docs2
+title:  Tableau 9 Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/tableau_91.html
+version: v1.2
+since: v1.2
+---
+
+Tableau 9.x has been released for a while, and many users have been asking about support for this version with Apache Kylin. With the updated Kylin ODBC driver, users can now interact with the Kylin service through Tableau 9.x.
+
+> Apache Kylin doesn't support queries on raw data yet, so some queries might fail and cause exceptions in the application. Patch [KYLIN-1075](https://issues.apache.org/jira/browse/KYLIN-1075) is recommended for a better presentation of query results.
+
+### For Tableau 8.x Users
+Please refer to [Kylin and Tableau Tutorial](./tableau.html) for a detailed guide.
+
+### Install Kylin ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.2__. If you already installed ODBC Driver in your system, please uninstall it first. 
+
+### Connect to Kylin Server
+Connect Using Driver: Start Tableau 9.1 desktop, click `Other Database(ODBC)` in the left panel and choose KylinODBCDriver in the pop-up window. 
+![](/images/tutorial/odbc/tableau_91/1.png)
+
+Provide your server location, credentials and project. Click the `Connect` button to get the list of projects that you have permission to access; see details at [Kylin Cube Permission Grant Tutorial](./acl.html).
+![](/images/tutorial/odbc/tableau_91/2.png)
+
+### Mapping Data Model
+In the left panel, select `defaultCatalog` as the database and click the `Search` button in the Table search box; all tables will be listed. Drag and drop tables to the right region to make them a data source. Make sure the JOINs are configured correctly.
+![](/images/tutorial/odbc/tableau_91/3.png)
+
+### Connect Live
+There are two types of `Connection`; choose the `Live` option to make sure Connect Live mode is used.
+![](/images/tutorial/odbc/tableau_91/4.png)
+
+### Custom SQL
+To use customized SQL, click `New Custom SQL` in the left panel and type the SQL statement in the pop-up dialog.
+![](/images/tutorial/odbc/tableau_91/5.png)
+
+### Visualization
+Now you can enjoy analyzing with Tableau 9.1.
+![](/images/tutorial/odbc/tableau_91/6.png)
+
+### Publish to Tableau Server
+If you want to publish a local dashboard to a Tableau Server, just expand the `Server` menu and select `Publish Workbook`.
+![](/images/tutorial/odbc/tableau_91/7.png)
+
+### More
+Please refer to [Kylin and Tableau Tutorial](./tableau.html) for more details.
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/tutorial/web.md
----------------------------------------------------------------------
diff --git a/website/_docs2/tutorial/web.md b/website/_docs2/tutorial/web.md
new file mode 100644
index 0000000..d876085
--- /dev/null
+++ b/website/_docs2/tutorial/web.md
@@ -0,0 +1,139 @@
+---
+layout: docs2
+title:  Kylin Web Tutorial
+categories: tutorial
+permalink: /docs2/tutorial/web.html
+version: v1.2
+since: v0.7.1
+---
+
+> **Supported Browsers**
+> 
+> Windows: Google Chrome, FireFox
+> 
+> Mac: Google Chrome, FireFox, Safari
+
+## 1. Access & Login
+Host to access: http://your_sandbox_ip:9080
+Login with username/password: ADMIN/KYLIN
+
+![](/images/Kylin-Web-Tutorial/1 login.png)
+
+## 2. Available Hive Tables in Kylin
+Although Kylin uses SQL as its query interface and leverages Hive metadata, Kylin does not let users query all Hive tables, since it is a pre-built OLAP (MOLAP) system so far. To enable a table in Kylin, simply use the "Sync" function to sync up tables from Hive.
+
+![](/images/Kylin-Web-Tutorial/2 tables.png)
+
+## 3. Kylin OLAP Cube
+Kylin's OLAP cubes are pre-calculated datasets built from star schema Hive tables. Here is the web management interface for users to explore and manage all cubes. Go to the `Cubes` menu; it will list all cubes available in the system:
+
+![](/images/Kylin-Web-Tutorial/3 cubes.png)
+
+To explore more details about a cube:
+
+* Form View:
+
+   ![](/images/Kylin-Web-Tutorial/4 form-view.png)
+
+* SQL View (the Hive query used to read data and generate the cube):
+
+   ![](/images/Kylin-Web-Tutorial/5 sql-view.png)
+
+* Visualization (showing the star schema behind this cube):
+
+   ![](/images/Kylin-Web-Tutorial/6 visualization.png)
+
+* Access (grant user/role privileges; the grant operation is only open to Admin in beta):
+
+   ![](/images/Kylin-Web-Tutorial/7 access.png)
+
+## 4. Write and Execute SQL on web
+Kylin's web UI offers a simple query tool for users to run SQL against existing cubes, verify the results and explore the result set with the pivot analysis and visualization described in section 5.
+
+> **Query Limit**
+> 
+> 1. Only SELECT queries are supported
+> 
+> 2. To avoid huge network traffic from server to client, the scan range threshold is set to 1,000,000 in beta.
+> 
+> 3. SQL that can't find data in a cube will not be redirected to Hive in beta
+
+Go to "Query" menu:
+
+![](/images/Kylin-Web-Tutorial/8 query.png)
+
+* Source Tables:
+
+   Browse the currently available tables (same structure and metadata as in Hive):
+  
+   ![](/images/Kylin-Web-Tutorial/9 query-table.png)
+
+* New Query:
+
+   You can write and execute your query and explore the result. One query for your reference (a similar example is sketched below):
+
+   ![](/images/Kylin-Web-Tutorial/10 query-result.png)
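+
+   For example, a query similar to the sample-cube query (a sketch, assuming the learn_kylin sample project is installed) is:
+
+   	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt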
+
+* Saved Query:
+
+   Saved queries are associated with your user account, so you can access them from different browsers and even different machines.
+   Click "Save" in the Result area; a pop-up will ask for a name and description to save the current query:
+
+   ![](/images/Kylin-Web-Tutorial/11 save-query.png)
+
+   Click "Saved Queries" to browser all your saved queries, you could direct resubmit it to run or remove it:
+
+   ![](/images/Kylin-Web-Tutorial/11 save-query-2.png)
+
+* Query History:
+
+   Only the current user's query history in the current browser is kept; this requires cookies to be enabled, and the history will be lost if you clean up the browser's cache. Click the "Query History" tab, and you can directly resubmit any of the queries to execute again.
+
+## 5. Pivot Analysis and Visualization
+There is a simple pivot and visualization analysis tool in Kylin's web UI for users to explore their query results:
+
+* General Information:
+
+   When a query executes successfully, it will present a success indicator and the name of the cube that was hit. 
+   It will also present how long the query took in the backend engine (not covering the network traffic from the Kylin server to the browser):
+
+   ![](/images/Kylin-Web-Tutorial/12 general.png)
+
+* Query Result:
+
+   It's easy to order the results by a column.
+
+   ![](/images/Kylin-Web-Tutorial/13 results.png)
+
+* Export to CSV File
+
+   Click "Export" button to save current result as CSV file.
+
+* Pivot Table:
+
+   Drag and drop one or more columns into the header; the result will be grouped by those columns' values:
+
+   ![](/images/Kylin-Web-Tutorial/14 drag.png)
+
+* Visualization:
+
+   Also, the result set can easily be shown with different charts in "Visualization":
+
+   Note: the line chart is only available when there is at least one dimension with a real "Date" data type column from the Hive table.
+
+   * Bar Chart:
+
+   ![](/images/Kylin-Web-Tutorial/15 bar-chart.png)
+   
+   * Pie Chart:
+
+   ![](/images/Kylin-Web-Tutorial/16 pie-chart.png)
+
+   * Line Chart
+
+   ![](/images/Kylin-Web-Tutorial/17 line-chart.png)
+
+## 6. Cube Build Job Monitoring
+Monitor and manage the cube build process, diagnose the details and even link directly to Hadoop's job information:
+
+![](/images/Kylin-Web-Tutorial/7 job-steps.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_includes/docs2_nav.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs2_nav.html b/website/_includes/docs2_nav.html
new file mode 100644
index 0000000..d19a3c3
--- /dev/null
+++ b/website/_includes/docs2_nav.html
@@ -0,0 +1,33 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<div class="col-md-3 col-lg-3 col-xs-4 aside1 visible-md visible-lg" id="nside1" style=" padding-top: 2em">
+    <ul class="nav nav-pills nav-stacked">
+    {% for section in site.data.docs2 %}
+    <li><a href="#{{ section | first }}" data-toggle="collapse" id="navtitle">{{ section.title }}</a></li>
+    <div class="collapse in">
+  	<div class="list-group" id="list1">
+    <ul style="list-style-type:disc">
+    {% include docs2_ul.html items=section.docs %}
+        </ul>
+  </div>
+</div>
+    {% endfor %}
+
+    </ul>
+</div>

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_includes/docs2_ul.html
----------------------------------------------------------------------
diff --git a/website/_includes/docs2_ul.html b/website/_includes/docs2_ul.html
new file mode 100644
index 0000000..e6d364d
--- /dev/null
+++ b/website/_includes/docs2_ul.html
@@ -0,0 +1,29 @@
+{% assign items = include.items %}
+
+
+
+{% for item in items %}
+
+  {% assign item_url = item | prepend:"/docs2/" | append:".html" %}
+      
+
+  {% if item_url == page.url %}
+    {% assign c = "current" %}
+  {% else %}
+    {% assign c = "" %}
+  {% endif %}
+
+
+
+  {% for p in site.docs2 %}
+    {% if p.url == item_url %}
+      <li><a href="{{ p.url }}" class="list-group-item-lay pjaxlink" id="navlist">{{p.title}}</a></li>      
+      {% break %}
+    {% endif %}
+  {% endfor %}
+
+{% endfor %}
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_layouts/docs2.html
----------------------------------------------------------------------
diff --git a/website/_layouts/docs2.html b/website/_layouts/docs2.html
new file mode 100644
index 0000000..5964d07
--- /dev/null
+++ b/website/_layouts/docs2.html
@@ -0,0 +1,50 @@
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+<!doctype html>
+<html>
+	{% include head.html %}
+	<body>
+		{% include header.html %}
+		
+		<div class="container">
+			<div class="row">
+				{% include docs2_nav.html %}
+				<div class="col-md-9 col-lg-9 col-xs-14 aside2">
+					<div id="container">
+						<div id="pjax">
+							<h1 class="post-title">{{ page.title }}</h1>
+							{% if page.version == NULL %}
+							{% else %}							
+								<p>version: {{page.version}}, since: {{page.since}}</p>
+							{% endif %}
+							<article class="post-content" >	
+							{{ content }}
+							</article>
+						</div>
+					</div>
+				</div>
+			</div>
+		</div>		
+		{% include footer.html %}
+
+	<script src="/assets/js/jquery-1.9.1.min.js"></script> 
+	<script src="/assets/js/bootstrap.min.js"></script> 
+	<script src="/assets/js/main.js"></script>
+	</body>
+</html>


[3/3] kylin git commit: KYLIN-1375 Init 2.x doc by copying from 1.x

Posted by li...@apache.org.
KYLIN-1375 Init 2.x doc by copying from 1.x


Project: http://git-wip-us.apache.org/repos/asf/kylin/repo
Commit: http://git-wip-us.apache.org/repos/asf/kylin/commit/0fb16aa2
Tree: http://git-wip-us.apache.org/repos/asf/kylin/tree/0fb16aa2
Diff: http://git-wip-us.apache.org/repos/asf/kylin/diff/0fb16aa2

Branch: refs/heads/document
Commit: 0fb16aa2ec25240514d7e0dda9d7f1eb04446130
Parents: ed810eb
Author: Yang Li <li...@apache.org>
Authored: Wed Feb 17 21:41:07 2016 +0800
Committer: Yang Li <li...@apache.org>
Committed: Wed Feb 17 21:41:58 2016 +0800

----------------------------------------------------------------------
 website/_config.yml                             |    4 +-
 website/_data/docs2.yml                         |   58 +
 website/_docs/index.md                          |    2 +
 website/_docs2/gettingstarted/concepts.md       |   65 ++
 website/_docs2/gettingstarted/events.md         |   27 +
 website/_docs2/gettingstarted/faq.md            |   90 ++
 website/_docs2/gettingstarted/terminology.md    |   26 +
 website/_docs2/howto/howto_backup_hbase.md      |   29 +
 website/_docs2/howto/howto_backup_metadata.md   |   62 ++
 .../howto/howto_build_cube_with_restapi.md      |   55 +
 website/_docs2/howto/howto_cleanup_storage.md   |   23 +
 website/_docs2/howto/howto_jdbc.md              |   94 ++
 website/_docs2/howto/howto_ldap_and_sso.md      |  124 +++
 website/_docs2/howto/howto_optimize_cubes.md    |  214 ++++
 website/_docs2/howto/howto_upgrade.md           |  103 ++
 website/_docs2/howto/howto_use_restapi.md       | 1006 ++++++++++++++++++
 website/_docs2/howto/howto_use_restapi_in_js.md |   48 +
 website/_docs2/index.md                         |   54 +
 website/_docs2/install/advance_settings.md      |   45 +
 website/_docs2/install/hadoop_evn.md            |   35 +
 website/_docs2/install/index.md                 |   47 +
 website/_docs2/install/kylin_cluster.md         |   30 +
 website/_docs2/install/kylin_docker.md          |   46 +
 website/_docs2/install/manual_install_guide.md  |   48 +
 website/_docs2/release_notes.md                 |  706 ++++++++++++
 website/_docs2/tutorial/acl.md                  |   35 +
 website/_docs2/tutorial/create_cube.md          |  129 +++
 website/_docs2/tutorial/cube_build_job.md       |   66 ++
 website/_docs2/tutorial/kylin_sample.md         |   23 +
 website/_docs2/tutorial/odbc.md                 |   50 +
 website/_docs2/tutorial/powerbi.md              |   55 +
 website/_docs2/tutorial/tableau.md              |  115 ++
 website/_docs2/tutorial/tableau_91.md           |   51 +
 website/_docs2/tutorial/web.md                  |  139 +++
 website/_includes/docs2_nav.html                |   33 +
 website/_includes/docs2_ul.html                 |   29 +
 website/_layouts/docs2.html                     |   50 +
 37 files changed, 3815 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_config.yml
----------------------------------------------------------------------
diff --git a/website/_config.yml b/website/_config.yml
index 7531ff1..d9b9c89 100644
--- a/website/_config.yml
+++ b/website/_config.yml
@@ -27,7 +27,7 @@ encoding: UTF-8
 timezone: America/Dawson 
 
 exclude: ["README.md", "Rakefile", "*.scss", "*.haml", "*.sh"]
-include: [_docs,_dev]
+include: [_docs,_docs2,_dev]
 
 # Build settings
 markdown: kramdown
@@ -56,6 +56,8 @@ language_default: 'en'
 collections:
   docs:
     output: true
+  docs2:
+    output: true
   docs-cn:
     output: true    
   dev:

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_data/docs2.yml
----------------------------------------------------------------------
diff --git a/website/_data/docs2.yml b/website/_data/docs2.yml
new file mode 100644
index 0000000..70fdc1c
--- /dev/null
+++ b/website/_data/docs2.yml
@@ -0,0 +1,58 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Docs menu items, for English one, docs2-cn.yml is for Chinese one
+# The docs menu is constructed in docs2_nav.html with these data
+- title: Getting Started
+  docs:
+  - index
+  - release_notes
+  - gettingstarted/faq
+  - gettingstarted/events
+  - gettingstarted/terminology
+  - gettingstarted/concepts
+
+- title: Installation
+  docs:
+  - install/index
+  - install/hadoop_env
+  - install/manual_install_guide
+  - install/kylin_cluster
+  - install/advance_settings
+  - install/kylin_docker
+
+- title: Tutorial
+  docs:
+  - tutorial/kylin_sample
+  - tutorial/create_cube
+  - tutorial/cube_build_job
+  - tutorial/acl
+  - tutorial/web
+  - tutorial/tableau
+  - tutorial/tableau_91
+  - tutorial/powerbi
+  - tutorial/odbc
+
+- title: How To
+  docs:
+  - howto/howto_build_cube_with_restapi
+  - howto/howto_use_restapi_in_js
+  - howto/howto_use_restapi
+  - howto/howto_optimize_cubes
+  - howto/howto_backup_metadata
+  - howto/howto_cleanup_storage
+  - howto/howto_jdbc
+  - howto/howto_upgrade
+  - howto/howto_ldap_and_sso

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs/index.md
----------------------------------------------------------------------
diff --git a/website/_docs/index.md b/website/_docs/index.md
index 89b5024..a033134 100644
--- a/website/_docs/index.md
+++ b/website/_docs/index.md
@@ -11,6 +11,8 @@ Welcome to Apache Kylin™
 
 Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop supporting extremely large datasets, original contributed from eBay Inc.
 
+Future documents: [v2.x](/docs2/)
+
 Installation & Setup
 ------------  
 

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/gettingstarted/concepts.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/concepts.md b/website/_docs2/gettingstarted/concepts.md
new file mode 100644
index 0000000..c2bac59
--- /dev/null
+++ b/website/_docs2/gettingstarted/concepts.md
@@ -0,0 +1,65 @@
+---
+layout: docs2
+title:  "Technical Concepts"
+categories: gettingstarted
+permalink: /docs2/gettingstarted/concepts.html
+version: v1.2
+since: v1.2
+---
+ 
+Here are some basic technical concepts used in Apache Kylin; please check them for your reference.
+For domain terminology, please refer to: [Terminology](terminology.md)
+
+## CUBE
+* __Table__ - This is the definition of the Hive tables that serve as the source of cubes; tables must be synced into Kylin before building cubes.
+![](/images/docs/concepts/DataSource.png)
+
+* __Data Model__ - This describes a [STAR SCHEMA](https://en.wikipedia.org/wiki/Star_schema) data model, which defines the fact/lookup tables and filter conditions.
+![](/images/docs/concepts/DataModel.png)
+
+* __Cube Descriptor__ - This describes the definition and settings of a cube instance, defining which data model to use, what dimensions and measures to include, how to partition into segments, how to handle auto-merge, etc.
+![](/images/docs/concepts/CubeDesc.png)
+
+* __Cube Instance__ - This is an instance of a cube, built from one cube descriptor and consisting of one or more cube segments according to the partition settings.
+![](/images/docs/concepts/CubeInstance.png)
+
+* __Partition__ - Users can define a DATE/STRING column as the partition column in the cube descriptor, to separate one cube into several segments with different date periods.
+![](/images/docs/concepts/Partition.png)
+
+* __Cube Segment__ - This is the actual carrier of cube data and maps to an HTable in HBase. One building job creates one new segment for the cube instance. When data changes in a specified period, we can refresh the related segments to avoid rebuilding the whole cube.
+![](/images/docs/concepts/CubeSegment.png)
+
+* __Aggregation Group__ - Each aggregation group is a subset of dimensions, and cuboids are built from the combinations inside it. It aims at pruning cuboids for optimization.
+![](/images/docs/concepts/AggregationGroup.png)
+
+## DIMENSION & MEASURE
+* __Mandatory__ - This dimension type is used for cuboid pruning; if a dimension is specified as "mandatory", then combinations without this dimension are pruned.
+* __Hierarchy__ - This dimension type is used for cuboid pruning; if dimensions A, B, C form a "hierarchy" relation, then only the combinations A, AB and ABC are retained. 
+* __Derived__ - On lookup tables, some dimensions can be derived from the PK, so there is a specific mapping between them and the FK of the fact table. Those dimensions are DERIVED and don't participate in cuboid generation.
+![](/images/docs/concepts/Dimension.png)
+
+* __Count Distinct (HyperLogLog)__ - Exact COUNT DISTINCT is hard to pre-calculate, so an approximate algorithm - [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) - is introduced, keeping the error rate at a low level. 
+* __Count Distinct (Precise)__ - Precise COUNT DISTINCT is pre-calculated based on RoaringBitmap; currently only int or bigint columns are supported.
+* __Top N__ - (Will be released in 2.x) For example, with this measure type, users can easily get a specified number of top sellers/buyers, etc. 
+![](/images/docs/concepts/Measure.png)
+
+## CUBE ACTIONS
+* __BUILD__ - Given an interval of the partition column, this action builds a new cube segment.
+* __REFRESH__ - This action rebuilds a cube segment for a given partition period, which is used when the source data of that period has changed.
+* __MERGE__ - This action merges multiple contiguous cube segments into a single one. This can be automated with the auto-merge settings in the cube descriptor.
+* __PURGE__ - Clear all segments under a cube instance. This only updates metadata and won't delete the cube data from HBase.
+![](/images/docs/concepts/CubeAction.png)
+
+## JOB STATUS
+* __NEW__ - This denotes the job has just been created.
+* __PENDING__ - This denotes the job is paused by the job scheduler and is waiting for resources.
+* __RUNNING__ - This denotes the job is in progress.
+* __FINISHED__ - This denotes the job has successfully finished.
+* __ERROR__ - This denotes the job was aborted with errors.
+* __DISCARDED__ - This denotes the job was cancelled by the end user.
+![](/images/docs/concepts/Job.png)
+
+## JOB ACTION
+* __RESUME__ - Once a job is in ERROR status, this action will try to restore it from the latest successful point.
+* __DISCARD__ - No matter what the status of a job is, users can end it and release resources with the DISCARD action.
+![](/images/docs/concepts/JobAction.png)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/gettingstarted/events.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/events.md b/website/_docs2/gettingstarted/events.md
new file mode 100644
index 0000000..c17691f
--- /dev/null
+++ b/website/_docs2/gettingstarted/events.md
@@ -0,0 +1,27 @@
+---
+layout: docs2
+title:  "Events and Conferences"
+categories: gettingstarted
+permalink: /docs2/gettingstarted/events.html
+---
+
+__Coming Events__
+
+* ApacheCon EU 2015
+
+__Conferences__
+
+* [Apache Kylin - Balance Between Space and Time](http://www.chinahadoop.com/2015/July/Shanghai/agenda.php) ([slides](http://www.slideshare.net/qhzhou/apache-kylin-china-hadoop-summit-2015-shanghai)) by [Qianhao Zhou](https://github.com/qhzhou), at Hadoop Summit 2015 in Shanghai, China, 2015-07-24
+* [Apache Kylin - Balance Between Space and Time](https://www.youtube.com/watch?v=jgvZSFaXPgI) ([video](https://www.youtube.com/watch?v=jgvZSFaXPgI), [slides](http://www.slideshare.net/DebashisSaha/apache-kylin-balance-between-space-and-time-hadop-summit-2015)) by [Debashis Saha](https://twitter.com/debashis_saha) & [Luke Han](https://twitter.com/lukehq), at Hadoop Summit 2015 in San Jose, US, 2015-06-09
+* [HBaseCon 2015: Apache Kylin; Extreme OLAP Engine for Hadoop](https://vimeo.com/128152444) ([video](https://vimeo.com/128152444), [slides](http://www.slideshare.net/HBaseCon/ecosystem-session-3b)) by [Seshu Adunuthula](https://twitter.com/SeshuAd) at HBaseCon 2015 in San Francisco, US, 2015-05-07
+* [Apache Kylin - Extreme OLAP Engine for Hadoop](http://strataconf.com/big-data-conference-uk-2015/public/schedule/detail/40029) ([slides](http://www.slideshare.net/lukehan/apache-kylin-extreme-olap-engine-for-big-data)) by [Luke Han](https://twitter.com/lukehq) & [Yang Li](https://github.com/liyang-gmt8), at Strata+Hadoop World in London, UK, 2015-05-06
+* [Apache Kylin Open Source Journey](http://www.infoq.com/cn/presentations/open-source-journey-of-apache-kylin) ([slides](http://www.slideshare.net/lukehan/apache-kylin-open-source-journey-for-qcon2015-beijing)) by [Luke Han](https://twitter.com/lukehq), at QCon Beijing in Beijing, China, 2015-04-23
+* [Apache Kylin - OLAP on Hadoop](http://cio.it168.com/a2015/0418/1721/000001721404.shtml) by [Yang Li](https://github.com/liyang-gmt8), at Database Technology Conference China 2015 in Beijing, China, 2015-04-18
+* [Apache Kylin – Cubes on Hadoop](https://www.youtube.com/watch?v=U0SbrVzuOe4) ([video](https://www.youtube.com/watch?v=U0SbrVzuOe4), [slides](http://www.slideshare.net/Hadoop_Summit/apache-kylin-cubes-on-hadoop)) by [Ted Dunning](https://twitter.com/ted_dunning), at Hadoop Summit 2015 Europe in Brussels, Belgium, 2015-04-16
+* [Apache Kylin - Hadoop 上的大规模联机分析平台](http://bdtc2014.hadooper.cn/m/zone/bdtc_2014/schedule3) ([slides](http://www.slideshare.net/lukehan/apache-kylin-big-data-technology-conference-2014-beijing-v2)) by [Luke Han](https://twitter.com/lukehq), at Big Data Technology Conference China in Beijing, China, 2014-12-14
+* [Apache Kylin: OLAP Engine on Hadoop - Tech Deep Dive](http://v.csdn.hudong.com/s/article.html?arcid=15820707) ([video](http://v.csdn.hudong.com/s/article.html?arcid=15820707), [slides](http://www.slideshare.net/XuJiang2/kylin-hadoop-olap-engine)) by [Jiang Xu](https://www.linkedin.com/pub/xu-jiang/4/5a8/230), at Shanghai Big Data Summit 2014 in Shanghai, China , 2014-10-25
+
+__Meetup__
+
+* [Apache Kylin Meetup @Bay Area](http://www.meetup.com/Cloud-at-ebayinc/events/218914395/), in San Jose, US, 6:00PM - 7:30PM, Thursday, 2014-12-04
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/gettingstarted/faq.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/faq.md b/website/_docs2/gettingstarted/faq.md
new file mode 100644
index 0000000..3087f37
--- /dev/null
+++ b/website/_docs2/gettingstarted/faq.md
@@ -0,0 +1,90 @@
+---
+layout: docs2
+title:  "FAQ"
+categories: gettingstarted
+permalink: /docs2/gettingstarted/faq.html
+version: v0.7.2
+since: v0.6.x
+---
+
+### Some NPM errors cause an ERROR exit (users in mainland China, please pay special attention to this issue)
+For people from China:  
+
+* Please add a proxy for your NPM:  
+`npm config set proxy http://YOUR_PROXY_IP`
+
+* Please update your local NPM repository to use a mirror of npmjs.org, such as the Taobao NPM mirror:  
+[http://npm.taobao.org](http://npm.taobao.org)
+
+### "Can't get master address from ZooKeeper" when installing Kylin on Hortonworks Sandbox
+Check out [https://github.com/KylinOLAP/Kylin/issues/9](https://github.com/KylinOLAP/Kylin/issues/9).
+
+### MapReduce job information can't be displayed on a sandbox deployment
+Check out [https://github.com/KylinOLAP/Kylin/issues/40](https://github.com/KylinOLAP/Kylin/issues/40)
+
+#### Install Kylin on CDH 5.2 or Hadoop 2.5.x
+Check out discussion: [https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kylin-olap/X0GZfsX1jLc/nzs6xAhNpLkJ)
+{% highlight Groff markup %}
+I was able to deploy Kylin with following option in POM.
+<hadoop2.version>2.5.0</hadoop2.version>
+<yarn.version>2.5.0</yarn.version>
+<hbase-hadoop2.version>0.98.6-hadoop2</hbase-hadoop2.version>
+<zookeeper.version>3.4.5</zookeeper.version>
+<hive.version>0.13.1</hive.version>
+My Cluster is running on Cloudera Distribution CDH 5.2.0.
+{% endhighlight %}
+
+#### Unable to load a big cube as HTable, with java.lang.OutOfMemoryError: unable to create new native thread
+HBase (as of this writing) allocates one thread per region when bulk loading an HTable. Try reducing the number of regions of your cube by setting its "capacity" to "MEDIUM" or "LARGE". Also, tweaking the OS & JVM can allow more threads; for example, see [this article](http://blog.egilh.com/2006/06/2811aspx.html).
+
+#### Failed to run BuildCubeWithEngineTest, saying failed to connect to hbase while hbase is active
+You may get this error the first time you run the HBase client; please check the error trace to see whether there is an error saying it couldn't access a folder like "/hadoop/hbase/local/jars"; if that folder doesn't exist, create it.
+
+#### SUM(field) returns a negative result while all the numbers in this field are > 0
+If a column is declared as integer in Hive, the SQL engine (Calcite) will use the column's type (integer) as the data type of "SUM(field)", while the aggregated value on this field may exceed the range of integer; in that case the cast will cause a negative value to be returned. The workaround is to alter that column's type to BIGINT in Hive, and then sync the table schema to Kylin (the cube doesn't need to be rebuilt). Keep in mind: always declare a column as BIGINT in Hive if it is an integer column that will be used as a measure in Kylin. See Hive numeric types: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-NumericTypes)
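+
+For example, a sketch of the Hive DDL (assuming a hypothetical fact table `my_fact` with an integer measure column `price_cnt`; replace the names with your own):
+{% highlight Groff markup %}
+-- change the column type in place; the column keeps its name
+ALTER TABLE my_fact CHANGE price_cnt price_cnt BIGINT;
+{% endhighlight %}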
+
+#### Why does Kylin need to extract the distinct columns from the fact table before building the cube?
+Kylin uses dictionaries to encode the values in each column, which greatly reduces the cube's storage size. To build the dictionaries, Kylin needs to fetch the distinct values of each column.
+
+#### Why does Kylin calculate the Hive table cardinality?
+The cardinality of dimensions is an important measure of cube complexity. The higher the cardinality, the bigger the cube, and thus the longer it takes to build and the slower it is to query. Cardinality > 1,000 is worth attention, and > 1,000,000 should be avoided at best effort. For optimal cube performance, try to reduce high cardinality by categorizing values or deriving features.
+
+#### How to add a new user or change the default password?
+Kylin web's security is implemented with the Spring Security framework, where kylinSecurity.xml is the main configuration file:
+{% highlight Groff markup %}
+${KYLIN_HOME}/tomcat/webapps/kylin/WEB-INF/classes/kylinSecurity.xml
+{% endhighlight %}
+The password hashes for the pre-defined test users can be found in the "sandbox,testing" profile section; to change the default password, you need to generate a new hash and then update it here. Please refer to the code snippet in: [https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input](https://stackoverflow.com/questions/25844419/spring-bcryptpasswordencoder-generate-different-password-for-same-input)
+When you deploy Kylin for more users, switching to LDAP authentication is recommended. To enable LDAP authentication, update "kylin.sandbox" in conf/kylin.properties to false, and also configure the ldap.* properties in ${KYLIN_HOME}/conf/kylin.properties
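+
+A minimal sketch of generating such a hash with Spring Security's BCryptPasswordEncoder (assuming the Spring Security crypto classes are on the classpath; the password below is only an illustration):
+{% highlight Groff markup %}
+import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
+
+public class GenerateKylinPasswordHash {
+    public static void main(String[] args) {
+        // Encode the new password and paste the printed hash into kylinSecurity.xml
+        String hash = new BCryptPasswordEncoder().encode("MY_NEW_PASSWORD");
+        System.out.println(hash);
+    }
+}
+{% endhighlight %}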
+
+#### Using a sub-query for unsupported SQL
+
+{% highlight Groff markup %}
+Original SQL:
+select fact.slr_sgmt,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when cal.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from ih_daily_fact fact
+inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+group by fact.slr_sgmt
+{% endhighlight %}
+
+{% highlight Groff markup %}
+Using sub-query
+select a.slr_sgmt,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-09-06' then gmv else 0 end) as W36,
+sum(case when a.RTL_WEEK_BEG_DT = '2015-08-30' then gmv else 0 end) as W35
+from (
+    select fact.slr_sgmt as slr_sgmt,
+    cal.RTL_WEEK_BEG_DT as RTL_WEEK_BEG_DT,
+    sum(gmv) as gmv36,
+    sum(gmv) as gmv35
+    from ih_daily_fact fact
+    inner join dw_cal_dt cal on fact.cal_dt = cal.cal_dt
+    group by fact.slr_sgmt, cal.RTL_WEEK_BEG_DT
+) a
+group by a.slr_sgmt
+{% endhighlight %}
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/gettingstarted/terminology.md
----------------------------------------------------------------------
diff --git a/website/_docs2/gettingstarted/terminology.md b/website/_docs2/gettingstarted/terminology.md
new file mode 100644
index 0000000..f6c615d
--- /dev/null
+++ b/website/_docs2/gettingstarted/terminology.md
@@ -0,0 +1,26 @@
+---
+layout: docs2
+title:  "Terminology"
+categories: gettingstarted
+permalink: /docs2/gettingstarted/terminology.html
+version: v1.0
+since: v0.5.x
+---
+ 
+
+Here are some domain terms we use in Apache Kylin; please check them for your reference.   
+They are basic knowledge of Apache Kylin, and will also help you understand the concepts, terms, and theory of data warehousing and business intelligence for analytics. 
+
+* __Data Warehouse__: a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
+* __Business Intelligence__: Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes, [wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
+* __OLAP__: OLAP is an acronym for [online analytical processing](https://en.wikipedia.org/wiki/Online_analytical_processing)
+* __OLAP Cube__: an OLAP cube is an array of data understood in terms of its 0 or more dimensions, [wikipedia](http://en.wikipedia.org/wiki/OLAP_cube)
+* __Star Schema__: the star schema consists of one or more fact tables referencing any number of dimension tables, [wikipedia](https://en.wikipedia.org/wiki/Star_schema)
+* __Fact Table__: a Fact table consists of the measurements, metrics or facts of a business process, [wikipedia](https://en.wikipedia.org/wiki/Fact_table)
+* __Lookup Table__: a lookup table is an array that replaces runtime computation with a simpler array indexing operation, [wikipedia](https://en.wikipedia.org/wiki/Lookup_table)
+* __Dimension__: A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time, [wikipedia](https://en.wikipedia.org/wiki/Dimension_(data_warehouse))
+* __Measure__: a measure is a property on which calculations (e.g., sum, count, average, minimum, maximum) can be made, [wikipedia](https://en.wikipedia.org/wiki/Measure_(data_warehouse))
+* __Join__: a SQL join clause combines records from two or more tables in a relational database, [wikipedia](https://en.wikipedia.org/wiki/Join_(SQL))
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_backup_hbase.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_backup_hbase.md b/website/_docs2/howto/howto_backup_hbase.md
new file mode 100644
index 0000000..17bc51a
--- /dev/null
+++ b/website/_docs2/howto/howto_backup_hbase.md
@@ -0,0 +1,29 @@
+---
+layout: docs2
+title:  How to Clean/Backup HBase Tables
+categories: howto
+permalink: /docs2/howto/howto_backup_hbase.html
+version: v1.0
+since: v0.7.1
+---
+
+Kylin persists all data (metadata and cube data) in HBase; you may sometimes want to export the data for various purposes 
+(backup, migration, troubleshooting, etc.). This page describes the steps to do this, and there is also a Java app to do it easily.
+
+Steps:
+
+1. Clean up unused cubes to save storage space (be cautious in production!): run the following command in the HBase CLI: 
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar /${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.job.hadoop.cube.StorageCleanupJob --delete true
+{% endhighlight %}
+2. List all HBase tables, iterate over them and export each Kylin table to HDFS, for example with the command sketched below; 
+see [https://hbase.apache.org/book/ops_mgt.html#export](https://hbase.apache.org/book/ops_mgt.html#export)
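+A sketch of this step using HBase's standard Export utility (the table name and output path below are placeholders; run it once per Kylin HTable):
+{% highlight Groff markup %}
+hbase org.apache.hadoop.hbase.mapreduce.Export KYLIN_TABLE_NAME /tmp/hbase_export/KYLIN_TABLE_NAME
+{% endhighlight %}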
+
+3. Copy the export folder from HDFS to the local file system, and then archive it;
+
+4. (Optional) Download the archive from the Hadoop CLI machine to your local machine;
+
+5. Clean up the export folder from HDFS and the CLI local file system;
+
+Kylin provide the "ExportHBaseData.java" (currently only exist in "minicluster" branch) for you to do the 
+step 2-5 in one run; Please ensure the correct path of "kylin.properties" has been set in the sys env; This Java uses the sandbox config by default;

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_backup_metadata.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_backup_metadata.md b/website/_docs2/howto/howto_backup_metadata.md
new file mode 100644
index 0000000..7e5e439
--- /dev/null
+++ b/website/_docs2/howto/howto_backup_metadata.md
@@ -0,0 +1,62 @@
+---
+layout: docs2
+title:  How to Backup Metadata
+categories: howto
+permalink: /docs2/howto/howto_backup_metadata.html
+version: v1.0
+since: v0.7.1
+---
+
+Kylin organizes all of its metadata (including cube descriptions and instances, projects, inverted index descriptions and instances, jobs, tables and dictionaries) as a hierarchical file system. However, Kylin uses HBase to store it, rather than a normal file system. If you check your Kylin configuration file (kylin.properties), you will find such a line:
+
+{% highlight Groff markup %}
+## The metadata store in hbase
+kylin.metadata.url=kylin_metadata@hbase
+{% endhighlight %}
+
+This indicates that the metadata will be saved in an HTable called `kylin_metadata`. You can scan the HTable in the HBase shell to check it out.
+
+## Backup Metadata Store with binary package
+
+Sometimes you need to back up Kylin's metadata store from HBase to your disk file system.
+In such cases, assuming you're on the Hadoop CLI (or sandbox) where you deployed Kylin, you can go to KYLIN_HOME and run:
+
+{% highlight Groff markup %}
+./bin/metastore.sh backup
+{% endhighlight %}
+
+to dump your metadata to a local folder under KYLIN_HOME/meta_backups; the folder is named after the current time with the syntax: KYLIN_HOME/meta_backups/meta_year_month_day_hour_minute_second
+
+## Restore Metadata Store with binary package
+
+In case you find your metadata store messed up and want to restore a previous backup:
+
+First, reset the metadata store (this will clean everything in the Kylin metadata store in HBase; make sure you have a backup):
+
+{% highlight Groff markup %}
+./bin/metastore.sh reset
+{% endhighlight %}
+
+Then upload the backup metadata to Kylin's metadata store:
+{% highlight Groff markup %}
+./bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx_xx_xx_xx
+{% endhighlight %}
+
+## Backup/restore metadata in development env (available since 0.7.3)
+
+When developing/debugging Kylin, you typically have a dev machine with an IDE and a backend sandbox. Usually you'll write code and run test cases on the dev machine. It would be troublesome if you always had to put a binary package on the sandbox to check the metadata. There is a helper class called SandboxMetastoreCLI to help you download/upload metadata locally on your dev machine. Follow its usage information and run it in your IDE.
+
+## Cleanup unused resources from Metadata Store (available since 0.7.3)
+As time goes on, some resources like dictionaries and table snapshots become useless (as cube segments are dropped or merged), but they still take up space; you can run a command to find and clean them up from the metadata store:
+
+First, run a check; this is safe as it will not change anything:
+{% highlight Groff markup %}
+./bin/metastore.sh clean
+{% endhighlight %}
+
+The resources that will be dropped will be listed;
+
+Next, add the "--delete true" parameter to cleanup those resources; before this, make sure you have made a backup of the metadata store;
+{% highlight Groff markup %}
+./bin/metastore.sh clean --delete true
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_build_cube_with_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_build_cube_with_restapi.md b/website/_docs2/howto/howto_build_cube_with_restapi.md
new file mode 100644
index 0000000..0bae7bf
--- /dev/null
+++ b/website/_docs2/howto/howto_build_cube_with_restapi.md
@@ -0,0 +1,55 @@
+---
+layout: docs2
+title:  How to Build Cube with Restful API
+categories: howto
+permalink: /docs2/howto/howto_build_cube_with_restapi.html
+version: v1.2
+since: v0.7.1
+---
+
+### 1.	Authentication
+*   Currently, Kylin uses [basic authentication](http://en.wikipedia.org/wiki/Basic_access_authentication).
+*   Add the `Authorization` header to the first request for authentication
+*   Or you can do a specific request to `POST http://localhost:7070/kylin/api/user/authentication`
+*   Once authenticated, the client can make subsequent requests with cookies.
+{% highlight Groff markup %}
+POST http://localhost:7070/kylin/api/user/authentication
+    
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+
+### 2.	Get details of cube. 
+*   `GET http://localhost:7070/kylin/api/cubes?cubeName={cube_name}&limit=15&offset=0`
+*   The client can find the cube segment date ranges in the returned cube detail.
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/cubes?cubeName=test_kylin_cube_with_slr&limit=15&offset=0
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}
+### 3.	Then submit a build job of the cube. 
+*   `PUT http://localhost:7070/kylin/api/cubes/{cube_name}/rebuild`
+*   For the PUT request body details, please refer to [Build Cube API](howto_use_restapi.html#build-cube). 
+    *   `startTime` and `endTime` should be UTC timestamps.
+    *   `buildType` can be `BUILD`, `MERGE` or `REFRESH`. `BUILD` is for building a new segment, `REFRESH` for refreshing an existing segment. `MERGE` is for merging multiple existing segments into one bigger segment.
+*   This method will return a newly created job instance, whose uuid is the unique id of the job, used to track the job status.
+{% highlight Groff markup %}
+PUT http://localhost:7070/kylin/api/cubes/test_kylin_cube_with_slr/rebuild
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+    
+{
+    "startTime": 0,
+    "endTime": 1388563200000,
+    "buildType": "BUILD"
+}
+{% endhighlight %}
+
+### 4.	Track job status. 
+*   `GET http://localhost:7070/kylin/api/jobs/{job_uuid}`
+*   The returned `job_status` represents the current status of the job.
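+
+For example, following the same pattern as the requests above (with `{job_uuid}` taken from the uuid returned in step 3):
+{% highlight Groff markup %}
+GET http://localhost:7070/kylin/api/jobs/{job_uuid}
+
+Authorization:Basic xxxxJD124xxxGFxxxSDF
+Content-Type: application/json;charset=UTF-8
+{% endhighlight %}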
+
+### 5.	If the job got errors, you can resume it. 
+*   `PUT http://localhost:7070/kylin/api/jobs/{job_uuid}/resume`

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_cleanup_storage.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_cleanup_storage.md b/website/_docs2/howto/howto_cleanup_storage.md
new file mode 100644
index 0000000..f440d8b
--- /dev/null
+++ b/website/_docs2/howto/howto_cleanup_storage.md
@@ -0,0 +1,23 @@
+---
+layout: docs2
+title:  How to Cleanup Storage (HDFS & HBase Tables)
+categories: howto
+permalink: /docs2/howto/howto_cleanup_storage.html
+version: v0.7.2
+since: v0.7.1
+---
+
+Kylin generates intermediate files in HDFS during cube building; besides, when purging/dropping/merging cubes, some HBase tables may be left over and will no longer be queried. Although Kylin has started to do some 
+automated garbage collection, it might not cover all cases; you can do an offline storage cleanup periodically:
+
+Steps:
+
+1. Check which resources can be cleaned up; this will not remove anything:
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.job.hadoop.cube.StorageCleanupJob --delete false
+{% endhighlight %}
+Here please replace (version) with the specific Kylin jar version in your installation;
+2. You can pick 1 or 2 resources to check whether they are indeed no longer referenced; then add the "--delete true" option to start the cleanup:
+{% highlight Groff markup %}
+hbase org.apache.hadoop.util.RunJar ${KYLIN_HOME}/lib/kylin-job-(version).jar org.apache.kylin.job.hadoop.cube.StorageCleanupJob --delete true
+{% endhighlight %}
+When it finishes, the intermediate HDFS locations and HTables will be dropped.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_jdbc.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_jdbc.md b/website/_docs2/howto/howto_jdbc.md
new file mode 100644
index 0000000..871b75a
--- /dev/null
+++ b/website/_docs2/howto/howto_jdbc.md
@@ -0,0 +1,94 @@
+---
+layout: docs2
+title:  How to Use Kylin Remote JDBC Driver
+categories: howto
+permalink: /docs2/howto/howto_jdbc.html
+version: v1.2
+since: v0.7.1
+---
+
+### Authentication
+
+###### Built on the Kylin authentication RESTful service. Supported parameters:
+* user : username
+* password : password
+* ssl : true/false. Default is false; if true, all service calls will use HTTPS.
+
+### Connection URL format:
+{% highlight Groff markup %}
+jdbc:kylin://<hostname>:<port>/<kylin_project_name>
+{% endhighlight %}
+* If "ssl" = true, the "port" should be Kylin server's HTTPS port; 
+* If "port" is not specified, the driver will use default port: HTTP 80, HTTPS 443;
+* The "kylin_project_name" must be specified and user need ensure it exists in Kylin server;
+
+### 1. Query with Statement
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 2. Query with PreparedStatement
+
+###### Supported prepared statement parameters:
+* setString
+* setInt
+* setShort
+* setLong
+* setFloat
+* setDouble
+* setBoolean
+* setByte
+* setDate
+* setTime
+* setTimestamp
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+PreparedStatement state = conn.prepareStatement("select * from test_table where id=?");
+state.setInt(1, 10);
+ResultSet resultSet = state.executeQuery();
+
+while (resultSet.next()) {
+    assertEquals("foo", resultSet.getString(1));
+    assertEquals("bar", resultSet.getString(2));
+    assertEquals("tool", resultSet.getString(3));
+}
+{% endhighlight %}
+
+### 3. Get query result set metadata
+The Kylin JDBC driver supports metadata listing methods:
+list catalogs, schemas, tables and columns with SQL pattern filters (such as %).
+
+{% highlight Groff markup %}
+Driver driver = (Driver) Class.forName("org.apache.kylin.jdbc.Driver").newInstance();
+Properties info = new Properties();
+info.put("user", "ADMIN");
+info.put("password", "KYLIN");
+Connection conn = driver.connect("jdbc:kylin://localhost:7070/kylin_project_name", info);
+Statement state = conn.createStatement();
+ResultSet resultSet = state.executeQuery("select * from test_table");
+
+ResultSet tables = conn.getMetaData().getTables(null, null, "dummy", null);
+while (tables.next()) {
+    for (int i = 0; i < 10; i++) {
+        assertEquals("dummy", tables.getString(i + 1));
+    }
+}
+{% endhighlight %}

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_ldap_and_sso.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_ldap_and_sso.md b/website/_docs2/howto/howto_ldap_and_sso.md
new file mode 100644
index 0000000..835e50c
--- /dev/null
+++ b/website/_docs2/howto/howto_ldap_and_sso.md
@@ -0,0 +1,124 @@
+---
+layout: docs2
+title:  How to Enable Security with LDAP and SSO
+categories: howto
+permalink: /docs2/howto/howto_ldap_and_sso.html
+version: v2.0
+since: v1.0
+---
+
+## Enable LDAP authentication
+
+Kylin supports LDAP authentication for enterprise or production deployments; this is implemented with the Spring Security framework. Before enabling LDAP, please contact your LDAP administrator to get the necessary information, such as the LDAP server URL, username/password and search patterns.
+
+#### Configure LDAP server info
+
+Firstly, provide the LDAP URL, plus a username/password if the LDAP server is secured. The password in kylin.properties needs to be hashed; you can Google "Generate a BCrypt Password" or run org.apache.kylin.rest.security.PasswordPlaceholderConfigurer to get a hash of your password.
+
+```
+ldap.server=ldap://<your_ldap_host>:<port>
+ldap.username=<your_user_name>
+ldap.password=<your_password_hash>
+```
+
+Secondly, provide the user search patterns; these depend on your LDAP design, so the following is just a sample:
+
+```
+ldap.user.searchBase=OU=UserAccounts,DC=mycompany,DC=com
+ldap.user.searchPattern=(&(AccountName={0})(memberOf=CN=MYCOMPANY-USERS,DC=mycompany,DC=com))
+ldap.user.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```
+
+If you have service accounts (e.g., for system integration) which also need to be authenticated, configure them in ldap.service.*; otherwise, leave them empty.
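+
+A minimal sketch of such settings (assuming the ldap.service.* properties follow the same naming as the ldap.user.* entries above; the actual search base and pattern depend on your LDAP setup):
+
+```
+ldap.service.searchBase=OU=ServiceAccounts,DC=mycompany,DC=com
+ldap.service.searchPattern=(&(cn={0})(objectClass=person))
+ldap.service.groupSearchBase=OU=Group,DC=mycompany,DC=com
+```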
+
+#### Configure the administrator group and default role
+
+To map an LDAP group to the admin group in Kylin, set "acl.adminRole" to "ROLE_" + GROUP_NAME. For example, if the LDAP group "KYLIN-ADMIN-GROUP" holds the administrators, set it as:
+
+```
+acl.adminRole=ROLE_KYLIN-ADMIN-GROUP
+acl.defaultRole=ROLE_ANALYST,ROLE_MODELER
+```
+
+The "acl.defaultRole" is a list of the default roles that grant to everyone, keep it as-is.
+
+#### Enable LDAP
+
+For Kylin v0.x and v1.x: set "kylin.sandbox=false" in conf/kylin.properties, then restart the Kylin server.
+For Kylin since v2.0: set "kylin.security.profile=ldap" in conf/kylin.properties, then restart the Kylin server.
+
+## Enable SSO authentication
+
+From v2.0, Kylin provides SSO with SAML. The implementation is based on the Spring Security SAML Extension. You can read [this reference](http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle/) to get an overall understanding.
+
+Before trying this, you should have successfully enabled LDAP and managed users with it: the SSO server may only do authentication, so Kylin needs to search LDAP to get the user's detailed information.
+
+### Generate IDP metadata xml
+Contact your IDP (identity provider) and ask it to generate the SSO metadata file; usually you need to provide three pieces of information:
+
+  1. Partner entity ID, which is a unique ID of your app, e.g.: https://host-name/kylin/saml/metadata
+  2. App callback endpoint, to which the SAML assertion is posted; it needs to be: https://host-name/kylin/saml/SSO
+  3. Public certificate of the Kylin server; the SSO server will encrypt messages with it.
+
+### Generate JKS keystore for Kylin
+As Kylin needs to send encrypted messages (signed with Kylin's private key) to the SSO server, a keystore (JKS) needs to be provided. There are a couple of ways to generate the keystore; below is a sample.
+
+Assume kylin.crt is the public certificate file and kylin.key is the private key file; first create a PKCS#12 file with openssl, then convert it to JKS with keytool:
+
+```
+$ openssl pkcs12 -export -in kylin.crt -inkey kylin.key -out kylin.p12
+Enter Export Password: <export_pwd>
+Verifying - Enter Export Password: <export_pwd>
+
+
+$ keytool -importkeystore -srckeystore kylin.p12 -srcstoretype PKCS12 -srcstorepass <export_pwd> -alias 1 -destkeystore samlKeystore.jks -destalias kylin -destkeypass changeit
+
+Enter destination keystore password:  changeit
+Re-enter new password: changeit
+```
+
+This puts the keys into "samlKeystore.jks" under the alias "kylin".
+
+### Enable Higher Ciphers
+
+Make sure your environment is ready to handle higher-strength crypto keys; you may need to download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files and copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security.
+
+### Deploy IDP xml file and keystore to Kylin
+
+The IDP metadata and keystore file need to be deployed on the Kylin web app's classpath, i.e. $KYLIN_HOME/tomcat/webapps/kylin/WEB-INF/classes:
+
+  1. Rename the IDP metadata file to sso_metadata.xml and copy it to Kylin's classpath;
+  2. Name the keystore "samlKeystore.jks" and copy it to Kylin's classpath;
+  3. If you use another alias or password, remember to update kylinSecurity.xml accordingly:
+
+```
+<!-- Central storage of cryptographic keys -->
+<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
+	<constructor-arg value="classpath:samlKeystore.jks"/>
+	<constructor-arg type="java.lang.String" value="changeit"/>
+	<constructor-arg>
+		<map>
+			<entry key="kylin" value="changeit"/>
+		</map>
+	</constructor-arg>
+	<constructor-arg type="java.lang.String" value="kylin"/>
+</bean>
+
+```
+
+### Other configurations
+In conf/kylin.properties, add the following properties with your server information:
+
+```
+saml.metadata.entityBaseURL=https://host-name/kylin
+saml.context.scheme=https
+saml.context.serverName=host-name
+saml.context.serverPort=443
+saml.context.contextPath=/kylin
+```
+
+Please note that Kylin assumes the SAML message contains an "email" attribute representing the login user, and the name before the @ will be used to search LDAP.
+
+### Enable SSO
+Set "kylin.security.profile=saml" in conf/kylin.properties, then restart Kylin server; After that, type a URL like "/kylin" or "/kylin/cubes" will redirect to SSO for login, and jump back after be authorized. While login with LDAP is still available, you can type "/kylin/login" to use original way. The Rest API (/kylin/api/*) still use LDAP + basic authentication, no impact.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_optimize_cubes.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_optimize_cubes.md b/website/_docs2/howto/howto_optimize_cubes.md
new file mode 100644
index 0000000..2e51c63
--- /dev/null
+++ b/website/_docs2/howto/howto_optimize_cubes.md
@@ -0,0 +1,214 @@
+---
+layout: docs2
+title:  How to Optimize Cubes
+categories: howto
+permalink: /docs2/howto/howto_optimize_cubes.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## Hierarchies:
+
+Theoretically, for N dimensions you'll end up with 2^N dimension combinations. However, for some groups of dimensions there is no need to create so many combinations. For example, if you have three dimensions continent, country, city (in hierarchies, the "bigger" dimension comes first), you will only need the following three group-by combinations when doing drill-down analysis:
+
+group by continent
+group by continent, country
+group by continent, country, city
+
+In such cases the combination count is reduced from 2^3=8 to 3, which is a great optimization. The same goes for the YEAR, QUARTER, MONTH, DATE case.
+
+If we denote the hierarchy dimensions as H1, H2, H3, typical scenarios would be:
+
+
+A. Hierarchies on lookup table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, FK</td>
+    <td></td>
+    <td>PK,,H1,H2,H3,,,,</td>
+  </tr>
+</table>
+
+---
+
+B. Hierarchies on fact table
+
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,H1,H2,H3,,,,,,, </td>
+  </tr>
+</table>
+
+---
+
+
+There is a special case of scenario A, where the PK of the lookup table happens to be part of the hierarchy. For example, we have a calendar lookup table where cal_dt is the primary key:
+
+A*. Hierarchies on lookup table over its primary key
+
+
+<table>
+  <tr>
+    <td align="center">Lookup Table(Calendar)</td>
+  </tr>
+  <tr>
+    <td>cal_dt(PK), week_beg_dt, month_beg_dt, quarter_beg_dt,,,</td>
+  </tr>
+</table>
+
+---
+
+
+For cases like A*, what you need is another optimization called "Derived Columns".
+
+## Derived Columns:
+
+A derived column is used when one or more dimensions (they must be dimensions on a lookup table; these columns are called "derived") can be deduced from another column (usually the corresponding FK; this is called the "host column").
+
+For example, suppose we have a lookup table which we join with the fact table on "where DimA = DimX". Notice that in Kylin, if you choose an FK as a dimension, the corresponding PK becomes automatically queryable, without any extra cost. The secret is that since FK and PK values are always identical, Kylin can apply filters/group-by on the FK first, and transparently replace them with the PK. This indicates that if we want DimA (FK), DimX (PK), DimB and DimC in our cube, we can safely choose only DimA, DimB and DimC.
+
+<table>
+  <tr>
+    <td align="center">Fact table</td>
+    <td align="center">(joins)</td>
+    <td align="center">Lookup Table</td>
+  </tr>
+  <tr>
+    <td>column1,column2,,,,,, DimA(FK) </td>
+    <td></td>
+    <td>DimX(PK),,DimB, DimC</td>
+  </tr>
+</table>
+
+---
+
+
+Let's say that DimA (the dimension representing the FK/PK) has a special mapping to DimB:
+
+
+<table>
+  <tr>
+    <th>dimA</th>
+    <th>dimB</th>
+    <th>dimC</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>b</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>c</td>
+    <td>?</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>a</td>
+    <td>?</td>
+  </tr>
+</table>
+
+
+In this case, given a value of DimA, the value of DimB is determined, so we say DimB can be derived from DimA. When we build a cube that contains both DimA and DimB, we simply include DimA and mark DimB as derived. The derived column (DimB) does not participate in cuboid generation:
+
+original combinations:
+ABC, AB, AC, BC, A, B, C
+
+combinations when deriving B from A:
+AC, A, C
+
+At runtime, for a query like "select count(*) from fact_table inner join lookup1 group by lookup1.dimB", a cuboid containing DimB would normally be expected to answer the query. However, DimB will appear in NONE of the cuboids due to the derived optimization. In this case, we modify the execution plan to group by DimA (its host column) first, and we get an intermediate answer like:
+
+
+<table>
+  <tr>
+    <th>DimA</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>1</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>2</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>3</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>4</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+Afterwards, Kylin will replace the DimA values with DimB values (since both of their values are in the lookup table, Kylin can load the whole lookup table into memory and build a mapping between them), and the intermediate result becomes:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+After this, the runtime SQL engine (Calcite) will further aggregate the intermediate result to:
+
+
+<table>
+  <tr>
+    <th>DimB</th>
+    <th>count(*)</th>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>b</td>
+    <td>1</td>
+  </tr>
+  <tr>
+    <td>c</td>
+    <td>1</td>
+  </tr>
+</table>
+
+
+This step happens at query runtime, and this is what is meant by "at the cost of extra runtime aggregation".

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_upgrade.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_upgrade.md b/website/_docs2/howto/howto_upgrade.md
new file mode 100644
index 0000000..709f94d
--- /dev/null
+++ b/website/_docs2/howto/howto_upgrade.md
@@ -0,0 +1,103 @@
+---
+layout: docs2
+title:  How to Upgrade
+categories: howto
+permalink: /docs2/howto/howto_upgrade.html
+version: v1.2
+since: v0.7.1
+---
+
+## Upgrade between v0.7.x and v1.x
+
+From v0.7.1 to the latest v1.2, Kylin's metadata is backward compatible, and the upgrade can be finished in a couple of minutes:
+
+#### 1. Backup metadata
+Backing up the Kylin metadata periodically is a good practice, and is highly suggested before an upgrade:
+
+```
+cd $KYLIN_HOME
+./bin/metastore.sh backup
+``` 
+It will print the backup folder; note it down and make sure it will not be deleted before the upgrade is finished. If there is no "metastore.sh", use HBase's snapshot command to do the backup:
+
+```
+hbase shell
+snapshot 'kylin_metadata', 'kylin_metadata_backup20150610'
+```
+Here 'kylin_metadata' is the default Kylin metadata table name; replace it with the actual table name of your Kylin instance.
+
+#### 2. Install new Kylin and copy back "conf"
+Download the new Kylin binary package from Kylin's download page and extract it to a folder other than the current KYLIN_HOME. Before copying back the "conf" folder, compare and merge the old and new kylin.properties to ensure newly introduced properties are kept.
+
+#### 3. Stop old and start new Kylin instance
+```
+cd $KYLIN_HOME
+./bin/kylin.sh stop
+export KYLIN_HOME="<path_of_new_installation>"
+cd $KYLIN_HOME
+./bin/kylin.sh start
+```
+
+#### 4. Roll back if the upgrade fails
+If the new version can't start up and you need to roll back, shut it down and then switch back to the old KYLIN_HOME to start. Ideally that will return to the original state. If the metadata is broken, restore it from the backup folder:
+
+```
+./bin/metastore.sh restore <path_of_metadata_backup>
+```
+
+## Upgrade from v0.6.x to v0.7.x 
+
+In v0.7, Kylin refactored the metadata structure for new features like inverted index and streaming. If you have cubes created with v0.6 and want to keep them in v0.7, a migration is needed. (Please skip v0.7.1 as
+it has several compatibility issues; the fixes are included in v0.7.2.) Below are the steps.
+
+#### 1. Backup v0.6 metadata
+To avoid data loss in the migration, a backup at the very beginning is always suggested; you can use HBase's backup or snapshot command to achieve this. Here is a sample with snapshot:
+
+```
+hbase shell
+snapshot 'kylin_metadata', 'kylin_metadata_backup20150610'
+```
+
+'kylin_metadata' is the default Kylin metadata table name; replace it with the actual table name of your Kylin instance.
+
+#### 2. Dump v0.6 metadata to local file
+This also serves as a backup; as the migration tool is only tested with the local file system, this step is a must. All metadata needs to be downloaded, including snapshots, dictionaries, etc.:
+
+```
+hbase  org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar  org.apache.kylin.common.persistence.ResourceTool  download  ./meta_dump
+```
+
+(./meta_dump is the local folder into which the metadata will be downloaded; change it to whatever name you prefer.)
+
+#### 3. Run CubeMetadataUpgrade to migrate the metadata
+This step runs the migration tool to parse the v0.6 metadata and convert it to the v0.7 format. A verification is performed at the end, and errors are reported if some cube couldn't be migrated:
+
+```
+hbase org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar org.apache.kylin.job.CubeMetadataUpgrade ./meta_dump
+```
+
+1. The tool will not overwrite the v0.6 metadata; it will create a new folder with a "_v2" suffix next to the original, so in this case "./meta_dump_v2" will be created.
+2. By default this tool will only migrate the job history of the last 30 days; if you want to keep older job history, please tweak the upgradeJobInstance() method yourself.
+3. If you see _No error or warning messages; The migration is success_, that's good; otherwise please check the error/warning messages carefully.
+4. For some problems you may need to manually update the JSON files; to check whether the problem is gone, you can run a verification against the new metadata:
+
+```
+hbase org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar org.apache.kylin.job.CubeMetadataUpgrade ./meta_dump_v2 verify
+```
+
+#### 4. Upload the new metadata to HBase
+Now the new-format metadata will be uploaded to HBase to replace the old format. Stop Kylin, and then:
+
+```
+hbase org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar  org.apache.kylin.common.persistence.ResourceTool  reset
+hbase org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar  org.apache.kylin.common.persistence.ResourceTool  upload  ./meta_dump_v2
+```
+
+#### 5. Update HTables to use new coprocessor
+Kylin uses an HBase coprocessor to do server-side aggregation. When a Kylin instance is upgraded to v0.7, the HTables created in v0.6 should also be updated to use the new coprocessor:
+
+```
+hbase org.apache.hadoop.util.RunJar  ${KYLIN_HOME}/lib/kylin-job-x.x.x-job.jar  org.apache.kylin.job.tools.DeployCoprocessorCLI ${KYLIN_HOME}/lib/kylin-coprocessor-x.x.x.jar
+```
+
+Done. Update your v0.7 Kylin configuration to point to the same metadata HBase table, then start the Kylin server and check whether all cubes and other information are kept.
\ No newline at end of file


http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_use_restapi.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_use_restapi.md b/website/_docs2/howto/howto_use_restapi.md
new file mode 100644
index 0000000..3adaf66
--- /dev/null
+++ b/website/_docs2/howto/howto_use_restapi.md
@@ -0,0 +1,1006 @@
+---
+layout: docs2
+title:  How to Use Restful API
+categories: howto
+permalink: /docs2/howto/howto_use_restapi.html
+version: v1.2
+since: v0.7.1
+---
+
+This page lists all the REST APIs provided by Kylin. The base of the URL is `/kylin/api`, so don't forget to prepend it to an API's path. For example, to get all cube instances, send an HTTP GET request to "/kylin/api/cubes".
+
+* Query
+   * [Authentication](#authentication)
+   * [Query](#query)
+   * [List queryable tables](#list-queryable-tables)
+* CUBE
+   * [List cubes](#list-cubes)
+   * [Get cube](#get-cube)
+   * [Get cube descriptor (dimension, measure info, etc)](#get-cube-descriptor)
+   * [Get data model (fact and lookup table info)](#get-data-model)
+   * [Build cube](#build-cube)
+   * [Disable cube](#disable-cube)
+   * [Purge cube](#purge-cube)
+   * [Enable cube](#enable-cube)
+* JOB
+   * [Resume job](#resume-job)
+   * [Discard job](#discard-job)
+   * [Get job step output](#get-job-step-output)
+* Metadata
+   * [Get Hive Table](#get-hive-table)
+   * [Get Hive Table (Extend Info)](#get-hive-table-extend-info)
+   * [Get Hive Tables](#get-hive-tables)
+   * [Load Hive Tables](#load-hive-tables)
+* Cache
+   * [Wipe cache](#wipe-cache)
+
+## Authentication
+`POST /user/authentication`
+
+#### Request Header
+Authorization data encoded with basic auth is needed in the header, such as:
+`Authorization: Basic {data}`
+
+#### Response Body
+* userDetails - Defined authorities and status of current user.
+
+#### Response Sample
+
+```sh
+{  
+   "userDetails":{  
+      "password":null,
+      "username":"sample",
+      "authorities":[  
+         {  
+            "authority":"ROLE_ANALYST"
+         },
+         {  
+            "authority":"ROLE_MODELER"
+         }
+      ],
+      "accountNonExpired":true,
+      "accountNonLocked":true,
+      "credentialsNonExpired":true,
+      "enabled":true
+   }
+}
+```
+
+Example with `curl`: 
+
+```
+curl -c /path/to/cookiefile.txt -X POST -H "Authorization: Basic XXXXXXXXX" -H 'Content-Type: application/json' http://<host>:<port>/kylin/api/user/authentication
+```
+
+If the login succeeds, the JSESSIONID will be saved into the cookie file; in subsequent HTTP requests, attach the cookie, for example:
+
+```
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime": 0, "endTime": 1423526400000, "buildType": "BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/rebuild
+```
+
+***
+
+## Query
+`POST /query`
+
+#### Request Body
+* sql - `required` `string` The text of sql statement.
+* offset - `optional` `int` Query offset. If an offset is set in the SQL, this parameter will be ignored.
+* limit - `optional` `int` Query limit. If a limit is set in the SQL, this parameter will be ignored.
+* acceptPartial - `optional` `bool` Whether to accept a partial result or not; default is "false". Set it to "false" for production use.
+* project - `optional` `string` Project to perform the query in. Default value is 'DEFAULT'.
+
+#### Request Sample
+
+```sh
+{  
+   "sql":"select * from TEST_KYLIN_FACT",
+   "offset":0,
+   "limit":50000,
+   "acceptPartial":false,
+   "project":"DEFAULT"
+}
+```
+
+#### Response Body
+* columnMetas - Column metadata information of the result set.
+* results - Data set of the result.
+* cube - Cube used for this query.
+* affectedRowCount - Count of rows affected by this sql statement.
+* isException - Whether this response is an exception.
+* exceptionMessage - Message content of the exception.
+* duration - Time cost of this query.
+* partial - Whether the response is a partial result or not. Decided by the `acceptPartial` field of the request.
+
+#### Response Sample
+
+```sh
+{  
+   "columnMetas":[  
+      {  
+         "isNullable":1,
+         "displaySize":0,
+         "label":"CAL_DT",
+         "name":"CAL_DT",
+         "schemaName":null,
+         "catelogName":null,
+         "tableName":null,
+         "precision":0,
+         "scale":0,
+         "columnType":91,
+         "columnTypeName":"DATE",
+         "readOnly":true,
+         "writable":false,
+         "caseSensitive":true,
+         "searchable":false,
+         "currency":false,
+         "signed":true,
+         "autoIncrement":false,
+         "definitelyWritable":false
+      },
+      {  
+         "isNullable":1,
+         "displaySize":10,
+         "label":"LEAF_CATEG_ID",
+         "name":"LEAF_CATEG_ID",
+         "schemaName":null,
+         "catelogName":null,
+         "tableName":null,
+         "precision":10,
+         "scale":0,
+         "columnType":4,
+         "columnTypeName":"INTEGER",
+         "readOnly":true,
+         "writable":false,
+         "caseSensitive":true,
+         "searchable":false,
+         "currency":false,
+         "signed":true,
+         "autoIncrement":false,
+         "definitelyWritable":false
+      }
+   ],
+   "results":[  
+      [  
+         "2013-08-07",
+         "32996",
+         "15",
+         "15",
+         "Auction",
+         "10000000",
+         "49.048952730908745",
+         "49.048952730908745",
+         "49.048952730908745",
+         "1"
+      ],
+      [  
+         "2013-08-07",
+         "43398",
+         "0",
+         "14",
+         "ABIN",
+         "10000633",
+         "85.78317064220418",
+         "85.78317064220418",
+         "85.78317064220418",
+         "1"
+      ]
+   ],
+   "cube":"test_kylin_cube_with_slr_desc",
+   "affectedRowCount":0,
+   "isException":false,
+   "exceptionMessage":null,
+   "duration":3451,
+   "partial":false
+}
+```
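+
+Example with `curl` (a minimal sketch, reusing the cookie file saved during authentication; the SQL is just a sample against the test fact table):
+
+```
+curl -b /path/to/cookiefile.txt -X POST -H 'Content-Type: application/json' -d '{"sql":"select count(*) from TEST_KYLIN_FACT", "project":"DEFAULT"}' http://<host>:<port>/kylin/api/query
+```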
+
+## List queryable tables
+`GET /tables_and_columns`
+
+#### Request Parameters
+* project - `required` `string` The project to load tables
+
+#### Response Sample
+```sh
+[  
+   {  
+      "columns":[  
+         {  
+            "table_NAME":"TEST_CAL_DT",
+            "table_SCHEM":"EDW",
+            "column_NAME":"CAL_DT",
+            "data_TYPE":91,
+            "nullable":1,
+            "column_SIZE":-1,
+            "buffer_LENGTH":-1,
+            "decimal_DIGITS":0,
+            "num_PREC_RADIX":10,
+            "column_DEF":null,
+            "sql_DATA_TYPE":-1,
+            "sql_DATETIME_SUB":-1,
+            "char_OCTET_LENGTH":-1,
+            "ordinal_POSITION":1,
+            "is_NULLABLE":"YES",
+            "scope_CATLOG":null,
+            "scope_SCHEMA":null,
+            "scope_TABLE":null,
+            "source_DATA_TYPE":-1,
+            "iS_AUTOINCREMENT":null,
+            "table_CAT":"defaultCatalog",
+            "remarks":null,
+            "type_NAME":"DATE"
+         },
+         {  
+            "table_NAME":"TEST_CAL_DT",
+            "table_SCHEM":"EDW",
+            "column_NAME":"WEEK_BEG_DT",
+            "data_TYPE":91,
+            "nullable":1,
+            "column_SIZE":-1,
+            "buffer_LENGTH":-1,
+            "decimal_DIGITS":0,
+            "num_PREC_RADIX":10,
+            "column_DEF":null,
+            "sql_DATA_TYPE":-1,
+            "sql_DATETIME_SUB":-1,
+            "char_OCTET_LENGTH":-1,
+            "ordinal_POSITION":2,
+            "is_NULLABLE":"YES",
+            "scope_CATLOG":null,
+            "scope_SCHEMA":null,
+            "scope_TABLE":null,
+            "source_DATA_TYPE":-1,
+            "iS_AUTOINCREMENT":null,
+            "table_CAT":"defaultCatalog",
+            "remarks":null,
+            "type_NAME":"DATE"
+         }
+      ],
+      "table_NAME":"TEST_CAL_DT",
+      "table_SCHEM":"EDW",
+      "ref_GENERATION":null,
+      "self_REFERENCING_COL_NAME":null,
+      "type_SCHEM":null,
+      "table_TYPE":"TABLE",
+      "table_CAT":"defaultCatalog",
+      "remarks":null,
+      "type_CAT":null,
+      "type_NAME":null
+   }
+]
+```
+
+***
+
+## List cubes
+`GET /cubes`
+
+#### Request Parameters
+* offset - `required` `int` Offset used for pagination.
+* limit - `required` `int` Number of cubes per page.
+* cubeName - `optional` `string` Keyword for cube names, to find cubes whose name contains this keyword.
+* projectName - `optional` `string` Project name.
+
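+Example with `curl` (a sketch, assuming the cookie-based session from the authentication step; the keyword "test_kylin" is just an example):
+
+```
+curl -b /path/to/cookiefile.txt -X GET 'http://<host>:<port>/kylin/api/cubes?cubeName=test_kylin&limit=15&offset=0'
+```
+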
+#### Response Sample
+```sh
+[  
+   {  
+      "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
+      "last_modified":1407831634847,
+      "name":"test_kylin_cube_with_slr_empty",
+      "owner":null,
+      "version":null,
+      "descriptor":"test_kylin_cube_with_slr_desc",
+      "cost":50,
+      "status":"DISABLED",
+      "segments":[  
+      ],
+      "create_time":null,
+      "source_records_count":0,
+      "source_records_size":0,
+      "size_kb":0
+   }
+]
+```
+
+## Get cube
+`GET /cubes/{cubeName}`
+
+#### Path Variable
+* cubeName - `required` `string` Cube name to find.
+
+## Get cube descriptor
+`GET /cube_desc/{cubeName}`
+
+Get the descriptor of the specified cube instance.
+
+#### Path Variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+```sh
+[
+    {
+        "uuid": "a24ca905-1fc6-4f67-985c-38fa5aeafd92", 
+        "name": "test_kylin_cube_with_slr_desc", 
+        "description": null, 
+        "dimensions": [
+            {
+                "id": 0, 
+                "name": "CAL_DT", 
+                "table": "EDW.TEST_CAL_DT", 
+                "column": null, 
+                "derived": [
+                    "WEEK_BEG_DT"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 1, 
+                "name": "CATEGORY", 
+                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+                "column": null, 
+                "derived": [
+                    "USER_DEFINED_FIELD1", 
+                    "USER_DEFINED_FIELD3", 
+                    "UPD_DATE", 
+                    "UPD_USER"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 2, 
+                "name": "CATEGORY_HIERARCHY", 
+                "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+                "column": [
+                    "META_CATEG_NAME", 
+                    "CATEG_LVL2_NAME", 
+                    "CATEG_LVL3_NAME"
+                ], 
+                "derived": null, 
+                "hierarchy": true
+            }, 
+            {
+                "id": 3, 
+                "name": "LSTG_FORMAT_NAME", 
+                "table": "DEFAULT.TEST_KYLIN_FACT", 
+                "column": [
+                    "LSTG_FORMAT_NAME"
+                ], 
+                "derived": null, 
+                "hierarchy": false
+            }, 
+            {
+                "id": 4, 
+                "name": "SITE_ID", 
+                "table": "EDW.TEST_SITES", 
+                "column": null, 
+                "derived": [
+                    "SITE_NAME", 
+                    "CRE_USER"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 5, 
+                "name": "SELLER_TYPE_CD", 
+                "table": "EDW.TEST_SELLER_TYPE_DIM", 
+                "column": null, 
+                "derived": [
+                    "SELLER_TYPE_DESC"
+                ], 
+                "hierarchy": false
+            }, 
+            {
+                "id": 6, 
+                "name": "SELLER_ID", 
+                "table": "DEFAULT.TEST_KYLIN_FACT", 
+                "column": [
+                    "SELLER_ID"
+                ], 
+                "derived": null, 
+                "hierarchy": false
+            }
+        ], 
+        "measures": [
+            {
+                "id": 1, 
+                "name": "GMV_SUM", 
+                "function": {
+                    "expression": "SUM", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 2, 
+                "name": "GMV_MIN", 
+                "function": {
+                    "expression": "MIN", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 3, 
+                "name": "GMV_MAX", 
+                "function": {
+                    "expression": "MAX", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "PRICE", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "decimal(19,4)"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 4, 
+                "name": "TRANS_CNT", 
+                "function": {
+                    "expression": "COUNT", 
+                    "parameter": {
+                        "type": "constant", 
+                        "value": "1", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "bigint"
+                }, 
+                "dependent_measure_ref": null
+            }, 
+            {
+                "id": 5, 
+                "name": "ITEM_COUNT_SUM", 
+                "function": {
+                    "expression": "SUM", 
+                    "parameter": {
+                        "type": "column", 
+                        "value": "ITEM_COUNT", 
+                        "next_parameter": null
+                    }, 
+                    "returntype": "bigint"
+                }, 
+                "dependent_measure_ref": null
+            }
+        ], 
+        "rowkey": {
+            "rowkey_columns": [
+                {
+                    "column": "SELLER_ID", 
+                    "length": 18, 
+                    "dictionary": null, 
+                    "mandatory": true
+                }, 
+                {
+                    "column": "CAL_DT", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LEAF_CATEG_ID", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "META_CATEG_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "CATEG_LVL2_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "CATEG_LVL3_NAME", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LSTG_FORMAT_NAME", 
+                    "length": 12, 
+                    "dictionary": null, 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "LSTG_SITE_ID", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }, 
+                {
+                    "column": "SLR_SEGMENT_CD", 
+                    "length": 0, 
+                    "dictionary": "true", 
+                    "mandatory": false
+                }
+            ], 
+            "aggregation_groups": [
+                [
+                    "LEAF_CATEG_ID", 
+                    "META_CATEG_NAME", 
+                    "CATEG_LVL2_NAME", 
+                    "CATEG_LVL3_NAME", 
+                    "CAL_DT"
+                ]
+            ]
+        }, 
+        "signature": "lsLAl2jL62ZApmOLZqWU3g==", 
+        "last_modified": 1445850327000, 
+        "model_name": "test_kylin_with_slr_model_desc", 
+        "null_string": null, 
+        "hbase_mapping": {
+            "column_family": [
+                {
+                    "name": "F1", 
+                    "columns": [
+                        {
+                            "qualifier": "M", 
+                            "measure_refs": [
+                                "GMV_SUM", 
+                                "GMV_MIN", 
+                                "GMV_MAX", 
+                                "TRANS_CNT", 
+                                "ITEM_COUNT_SUM"
+                            ]
+                        }
+                    ]
+                }
+            ]
+        }, 
+        "notify_list": null, 
+        "auto_merge_time_ranges": null, 
+        "retention_range": 0
+    }
+]
+```
+
+## Get data model
+`GET /model/{modelName}`
+
+#### Path Variable
+* modelName - `required` `string` Data model name; by default it is the same as the cube name.
+
+#### Response Sample
+```sh
+{
+    "uuid": "ff527b94-f860-44c3-8452-93b17774c647", 
+    "name": "test_kylin_with_slr_model_desc", 
+    "lookups": [
+        {
+            "table": "EDW.TEST_CAL_DT", 
+            "join": {
+                "type": "inner", 
+                "primary_key": [
+                    "CAL_DT"
+                ], 
+                "foreign_key": [
+                    "CAL_DT"
+                ]
+            }
+        }, 
+        {
+            "table": "DEFAULT.TEST_CATEGORY_GROUPINGS", 
+            "join": {
+                "type": "inner", 
+                "primary_key": [
+                    "LEAF_CATEG_ID", 
+                    "SITE_ID"
+                ], 
+                "foreign_key": [
+                    "LEAF_CATEG_ID", 
+                    "LSTG_SITE_ID"
+                ]
+            }
+        }
+    ], 
+    "capacity": "MEDIUM", 
+    "last_modified": 1442372116000, 
+    "fact_table": "DEFAULT.TEST_KYLIN_FACT", 
+    "filter_condition": null, 
+    "partition_desc": {
+        "partition_date_column": "DEFAULT.TEST_KYLIN_FACT.CAL_DT", 
+        "partition_date_start": 0, 
+        "partition_date_format": "yyyy-MM-dd", 
+        "partition_type": "APPEND", 
+        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
+    }
+}
+```
+
+## Build cube
+`PUT /cubes/{cubeName}/rebuild`
+
+#### Path Variable
+* cubeName - `required` `string` Cube name.
+
+#### Request Body
+* startTime - `required` `long` Start timestamp of data to build, e.g. 1388563200000 for 2014-1-1
+* endTime - `required` `long` End timestamp of data to build
+* buildType - `required` `string` Supported build type: 'BUILD', 'MERGE', 'REFRESH'
+
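+Example with `curl` (a sketch; the timestamps are the sample values above and "your_cube" is a placeholder cube name):
+
+```
+curl -b /path/to/cookiefile.txt -X PUT -H 'Content-Type: application/json' -d '{"startTime": 0, "endTime": 1388563200000, "buildType": "BUILD"}' http://<host>:<port>/kylin/api/cubes/your_cube/rebuild
+```
+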
+#### Response Sample
+```
+{  
+   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
+   "last_modified":1407908916705,
+   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
+   "type":"BUILD",
+   "duration":0,
+   "related_cube":"test_kylin_cube_with_slr_empty",
+   "related_segment":"19700101000000_20140731160000",
+   "exec_start_time":0,
+   "exec_end_time":0,
+   "mr_waiting":0,
+   "steps":[  
+      {  
+         "interruptCmd":null,
+         "name":"Create Intermediate Flat Hive Table",
+         "sequence_id":0,
+         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"SHELL_CMD_HADOOP",
+         "info":null,
+         "run_async":false
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Extract Fact Table Distinct Columns",
+         "sequence_id":1,
+         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
+         "info":null,
+         "run_async":true
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Load HFile to HBase Table",
+         "sequence_id":12,
+         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
+         "info":null,
+         "run_async":false
+      }
+   ],
+   "job_status":"PENDING",
+   "progress":0.0
+}
+```
+
+## Enable Cube
+`PUT /cubes/{cubeName}/enable`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+```sh
+{  
+   "uuid":"1eaca32a-a33e-4b69-83dd-0bb8b1f8c53b",
+   "last_modified":1407909046305,
+   "name":"test_kylin_cube_with_slr_ready",
+   "owner":null,
+   "version":null,
+   "descriptor":"test_kylin_cube_with_slr_desc",
+   "cost":50,
+   "status":"ACTIVE",
+   "segments":[  
+      {  
+         "name":"19700101000000_20140531160000",
+         "storage_location_identifier":"KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_READY-19700101000000_20140531160000_BF043D2D-9A4A-45E9-AA59-5A17D3F34A50",
+         "date_range_start":0,
+         "date_range_end":1401552000000,
+         "status":"READY",
+         "size_kb":4758,
+         "source_records":6000,
+         "source_records_size":620356,
+         "last_build_time":1407832663227,
+         "last_build_job_id":"2c7a2b63-b052-4a51-8b09-0c24b5792cda",
+         "binary_signature":null,
+         "dictionaries":{  
+            "TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL2_NAME/16d8185c-ee6b-4f8c-a919-756d9809f937.dict",
+            "TEST_KYLIN_FACT/LSTG_SITE_ID":"/dict/TEST_SITES/SITE_ID/0bec6bb3-1b0d-469c-8289-b8c4ca5d5001.dict",
+            "TEST_KYLIN_FACT/SLR_SEGMENT_CD":"/dict/TEST_SELLER_TYPE_DIM/SELLER_TYPE_CD/0c5d77ec-316b-47e0-ba9a-0616be890ad6.dict",
+            "TEST_KYLIN_FACT/CAL_DT":"/dict/PREDEFINED/date(yyyy-mm-dd)/64ac4f82-f2af-476e-85b9-f0805001014e.dict",
+            "TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME":"/dict/TEST_CATEGORY_GROUPINGS/CATEG_LVL3_NAME/270fbfb0-281c-4602-8413-2970a7439c47.dict",
+            "TEST_KYLIN_FACT/LEAF_CATEG_ID":"/dict/TEST_CATEGORY_GROUPINGS/LEAF_CATEG_ID/2602386c-debb-4968-8d2f-b52b8215e385.dict",
+            "TEST_CATEGORY_GROUPINGS/META_CATEG_NAME":"/dict/TEST_CATEGORY_GROUPINGS/META_CATEG_NAME/0410d2c4-4686-40bc-ba14-170042a2de94.dict"
+         },
+         "snapshots":{  
+            "TEST_CAL_DT":"/table_snapshot/TEST_CAL_DT.csv/8f7cfc8a-020d-4019-b419-3c6deb0ffaa0.snapshot",
+            "TEST_SELLER_TYPE_DIM":"/table_snapshot/TEST_SELLER_TYPE_DIM.csv/c60fd05e-ac94-4016-9255-96521b273b81.snapshot",
+            "TEST_CATEGORY_GROUPINGS":"/table_snapshot/TEST_CATEGORY_GROUPINGS.csv/363f4a59-b725-4459-826d-3188bde6a971.snapshot",
+            "TEST_SITES":"/table_snapshot/TEST_SITES.csv/78e0aecc-3ec6-4406-b86e-bac4b10ea63b.snapshot"
+         }
+      }
+   ],
+   "create_time":null,
+   "source_records_count":6000,
+   "source_records_size":0,
+   "size_kb":4758
+}
+```
+
+## Disable Cube
+`PUT /cubes/{cubeName}/disable`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+(Same as "Enable Cube")
+
+## Purge Cube
+`PUT /cubes/{cubeName}/purge`
+
+#### Path variable
+* cubeName - `required` `string` Cube name.
+
+#### Response Sample
+(Same as "Enable Cube")
+
+***
+
+## Resume Job
+`PUT /jobs/{jobId}/resume`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+#### Response Sample
+```
+{  
+   "uuid":"c143e0e4-ac5f-434d-acf3-46b0d15e3dc6",
+   "last_modified":1407908916705,
+   "name":"test_kylin_cube_with_slr_empty - 19700101000000_20140731160000 - BUILD - PDT 2014-08-12 22:48:36",
+   "type":"BUILD",
+   "duration":0,
+   "related_cube":"test_kylin_cube_with_slr_empty",
+   "related_segment":"19700101000000_20140731160000",
+   "exec_start_time":0,
+   "exec_end_time":0,
+   "mr_waiting":0,
+   "steps":[  
+      {  
+         "interruptCmd":null,
+         "name":"Create Intermediate Flat Hive Table",
+         "sequence_id":0,
+         "exec_cmd":"hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6;\nCREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\n(\nCAL_DT date\n,LEAF_CATEG_ID int\n,LSTG_SITE_ID int\n,META_CATEG_NAME string\n,CATEG_LVL2_NAME string\n,CATEG_LVL3_NAME string\n,LSTG_FORMAT_NAME string\n,SLR_SEGMENT_CD smallint\n,SELLER_ID bigint\n,PRICE decimal\n)\nROW FORMAT DELIMITED FIELDS TERMINATED BY '\\177'\nSTORED AS SEQUENCEFILE\nLOCATION '/tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6';\nSET mapreduce.job.split.metainfo.maxsize=-1;\nSET mapred.compress.map.output=true;\nSET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compress=true;\nSET ma
 pred.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;\nSET mapred.output.compression.type=BLOCK;\nSET mapreduce.job.max.split.locations=2000;\nSET hive.exec.compress.output=true;\nSET hive.auto.convert.join.noconditionaltask = true;\nSET hive.auto.convert.join.noconditionaltask.size = 300000000;\nINSERT OVERWRITE TABLE kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6\nSELECT\nTEST_KYLIN_FACT.CAL_DT\n,TEST_KYLIN_FACT.LEAF_CATEG_ID\n,TEST_KYLIN_FACT.LSTG_SITE_ID\n,TEST_CATEGORY_GROUPINGS.META_CATEG_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL2_NAME\n,TEST_CATEGORY_GROUPINGS.CATEG_LVL3_NAME\n,TEST_KYLIN_FACT.LSTG_FORMAT_NAME\n,TEST_KYLIN_FACT.SLR_SEGMENT_CD\n,TEST_KYLIN_FACT.SELLER_ID\n,TEST_KYLIN_FACT.PRICE\nFROM TEST_KYLIN_FACT\nINNER JOIN TEST_CAL_DT\nON TEST_KYLIN_FACT.CAL_DT = TEST_CAL_DT.CAL_DT\nINNER JOIN TEST_CATEGORY_GROUPINGS\nON TEST_KYLIN_FACT.LEAF_CATEG_ID = TEST_CATEGORY_GROUPINGS.LEAF_CATEG_ID AN
 D TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_CATEGORY_GROUPINGS.SITE_ID\nINNER JOIN TEST_SITES\nON TEST_KYLIN_FACT.LSTG_SITE_ID = TEST_SITES.SITE_ID\nINNER JOIN TEST_SELLER_TYPE_DIM\nON TEST_KYLIN_FACT.SLR_SEGMENT_CD = TEST_SELLER_TYPE_DIM.SELLER_TYPE_CD\nWHERE (test_kylin_fact.cal_dt < '2014-07-31 16:00:00')\n;\n\"",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"SHELL_CMD_HADOOP",
+         "info":null,
+         "run_async":false
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Extract Fact Table Distinct Columns",
+         "sequence_id":1,
+         "exec_cmd":" -conf C:/kylin/Kylin/server/src/main/resources/hadoop_job_conf_medium.xml -cubename test_kylin_cube_with_slr_empty -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/kylin_intermediate_test_kylin_cube_with_slr_desc_19700101000000_20140731160000_c143e0e4_ac5f_434d_acf3_46b0d15e3dc6 -output /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/fact_distinct_columns -jobname Kylin_Fact_Distinct_Columns_test_kylin_cube_with_slr_empty_Step_1",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_FACTDISTINCT",
+         "info":null,
+         "run_async":true
+      },
+      {  
+         "interruptCmd":null,
+         "name":"Load HFile to HBase Table",
+         "sequence_id":12,
+         "exec_cmd":" -input /tmp/kylin-c143e0e4-ac5f-434d-acf3-46b0d15e3dc6/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN-CUBE-TEST_KYLIN_CUBE_WITH_SLR_EMPTY-19700101000000_20140731160000_11BB4326-5975-4358-804C-70D53642E03A -cubename test_kylin_cube_with_slr_empty",
+         "interrupt_cmd":null,
+         "exec_start_time":0,
+         "exec_end_time":0,
+         "exec_wait_time":0,
+         "step_status":"PENDING",
+         "cmd_type":"JAVA_CMD_HADOOP_NO_MR_BULKLOAD",
+         "info":null,
+         "run_async":false
+      }
+   ],
+   "job_status":"PENDING",
+   "progress":0.0
+}
+```
+
+## Discard Job
+`PUT /jobs/{jobId}/cancel`
+
+#### Path variable
+* jobId - `required` `string` Job id.
+
+#### Response Sample
+(Same as "Resume job")
+
+## Get job step output
+`GET /{jobId}/steps/{stepId}/output`
+
+#### Path Variable
+* jobId - `required` `string` Job id.
+* stepId - `required` `string` Step id; the step id is composed of the jobId plus the step sequence id. For example, if the jobId is "fb479e54-837f-49a2-b457-651fc50be110", its 3rd step id is "fb479e54-837f-49a2-b457-651fc50be110-3".
+
+#### Response Sample
+```
+{  
+   "cmd_output":"log string"
+}
+```
+
+***
+
+## Get Hive Table
+`GET /tables/{tableName}`
+
+#### Request Parameters
+* tableName - `required` `string` table name to find.
+
+#### Response Sample
+```sh
+{
+    uuid: "69cc92c0-fc42-4bb9-893f-bd1141c91dbe",
+    name: "SAMPLE_07",
+    columns: [{
+        id: "1",
+        name: "CODE",
+        datatype: "string"
+    }, {
+        id: "2",
+        name: "DESCRIPTION",
+        datatype: "string"
+    }, {
+        id: "3",
+        name: "TOTAL_EMP",
+        datatype: "int"
+    }, {
+        id: "4",
+        name: "SALARY",
+        datatype: "int"
+    }],
+    database: "DEFAULT",
+    last_modified: 1419330476755
+}
+```
+
+## Get Hive Table (Extend Info)
+`GET /tables/{tableName}/exd-map`
+
+#### Request Parameters
+* tableName - `optional` `string` table name to find.
+
+#### Response Sample
+```
+{
+    "minFileSize": "46055",
+    "totalNumberFiles": "1",
+    "location": "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_07",
+    "lastAccessTime": "1418374103365",
+    "lastUpdateTime": "1398176493340",
+    "columns": "struct columns { string code, string description, i32 total_emp, i32 salary}",
+    "partitionColumns": "",
+    "EXD_STATUS": "true",
+    "maxFileSize": "46055",
+    "inputformat": "org.apache.hadoop.mapred.TextInputFormat",
+    "partitioned": "false",
+    "tableName": "sample_07",
+    "owner": "hue",
+    "totalFileSize": "46055",
+    "outputformat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
+}
+```
+
+## Get Hive Tables
+`GET /tables`
+
+#### Request Parameters
+* project - `required` `string` List all tables in this project.
+* ext - `optional` `boolean` Set true to get the extended info of the tables.
+
+#### Response Sample
+```sh
+[
+ {
+    uuid: "53856c96-fe4d-459e-a9dc-c339b1bc3310",
+    name: "SAMPLE_08",
+    columns: [{
+        id: "1",
+        name: "CODE",
+        datatype: "string"
+    }, {
+        id: "2",
+        name: "DESCRIPTION",
+        datatype: "string"
+    }, {
+        id: "3",
+        name: "TOTAL_EMP",
+        datatype: "int"
+    }, {
+        id: "4",
+        name: "SALARY",
+        datatype: "int"
+    }],
+    database: "DEFAULT",
+    cardinality: {},
+    last_modified: 0,
+    exd: {
+        minFileSize: "46069",
+        totalNumberFiles: "1",
+        location: "hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/sample_08",
+        lastAccessTime: "1398176495945",
+        lastUpdateTime: "1398176495981",
+        columns: "struct columns { string code, string description, i32 total_emp, i32 salary}",
+        partitionColumns: "",
+        EXD_STATUS: "true",
+        maxFileSize: "46069",
+        inputformat: "org.apache.hadoop.mapred.TextInputFormat",
+        partitioned: "false",
+        tableName: "sample_08",
+        owner: "hue",
+        totalFileSize: "46069",
+        outputformat: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
+    }
+  }
+]
+```
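+
+A curl sketch for listing tables with extended info (hostname, port, credentials and the project name are assumptions):
+
+```sh
+curl -X GET --user ADMIN:KYLIN \
+  "http://localhost:7070/kylin/api/tables?project=your_project&ext=true"
+```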
+
+## Load Hive Tables
+`POST /tables/{tables}/{project}`
+
+#### Request Parameters
+* tables - `required` `string` table names you want to load from Hive, separated by commas.
+* project - `required` `string` the project which the tables will be loaded into.
+
+#### Response Sample
+```
+{
+    "result.loaded": ["DEFAULT.SAMPLE_07"],
+    "result.unloaded": ["sapmle_08"]
+}
+```
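+
+A curl sketch for loading two sample tables into a project (hostname, port, credentials and the project name are assumptions):
+
+```sh
+curl -X POST --user ADMIN:KYLIN \
+  "http://localhost:7070/kylin/api/tables/SAMPLE_07,SAMPLE_08/your_project"
+```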
+
+***
+
+## Wipe cache
+`GET /cache/{type}/{name}/{action}`
+
+#### Path variable
+* type - `required` `string` 'METADATA' or 'CUBE'
+* name - `required` `string` Cache key, e.g. the cube name.
+* action - `required` `string` 'create', 'update' or 'drop'
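+
+A curl sketch, e.g. telling a Kylin instance that a cube has been updated (hostname, port, credentials and the cube name are placeholders):
+
+```sh
+curl -X GET --user ADMIN:KYLIN \
+  "http://localhost:7070/kylin/api/cache/CUBE/your_cube_name/update"
+```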
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/howto/howto_use_restapi_in_js.md
----------------------------------------------------------------------
diff --git a/website/_docs2/howto/howto_use_restapi_in_js.md b/website/_docs2/howto/howto_use_restapi_in_js.md
new file mode 100644
index 0000000..14cb7c9
--- /dev/null
+++ b/website/_docs2/howto/howto_use_restapi_in_js.md
@@ -0,0 +1,48 @@
+---
+layout: docs2
+title:  How to Use Restful API in Javascript
+categories: howto
+permalink: /docs2/howto/howto_use_restapi_in_js.html
+version: v1.2
+since: v0.7.1
+---
+Kylin security is based on basic access authorization. If you want to use the API in your JavaScript code, you need to add the authorization info to the HTTP headers.
+
+## Example on the Query API
+```
+// set the authorization header once for all subsequent requests
+$.ajaxSetup({
+    headers: {
+        'Authorization': "Basic eWFu**********X***ZA==", // use your own authorization code here
+        'Content-Type': 'application/json;charset=utf-8'
+    }
+});
+
+// issue a query against the Query API
+var request = $.ajax({
+    url: "http://hostname/kylin/api/query",
+    type: "POST",
+    data: '{"sql":"select count(*) from SUMMARY;","offset":0,"limit":50000,"acceptPartial":true,"project":"test"}',
+    dataType: "json"
+});
+
+request.done(function( msg ) {
+    alert(msg);
+});
+
+request.fail(function( jqXHR, textStatus ) {
+    alert( "Request failed: " + textStatus );
+});
+```
+
+## Keypoints
+1. Add the basic access authorization info in the HTTP headers.
+2. Use the right AJAX type and data syntax.
+
+## Basic access authorization
+For an introduction to basic access authorization, refer to the [Wikipedia page](http://en.wikipedia.org/wiki/Basic_access_authentication).
+To generate your authorization code, download and import "jquery.base64.js" from [https://github.com/yckart/jquery.base64.js](https://github.com/yckart/jquery.base64.js), then:
+
+```
+var authorizationCode = $.base64('encode', 'NT_USERNAME' + ":" + 'NT_PASSWORD');
+ 
+$.ajaxSetup({
+   headers: { 
+    'Authorization': "Basic " + authorizationCode, 
+    'Content-Type': 'application/json;charset=utf-8' 
+   }
+});
+```

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/index.md
----------------------------------------------------------------------
diff --git a/website/_docs2/index.md b/website/_docs2/index.md
new file mode 100644
index 0000000..c7bfe96
--- /dev/null
+++ b/website/_docs2/index.md
@@ -0,0 +1,54 @@
+---
+layout: docs2
+title: Overview
+categories: docs
+permalink: /docs2/index.html
+---
+
+Welcome to Apache Kylin™
+------------  
+> Extreme OLAP Engine for Big Data
+
+Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets. It was originally contributed from eBay Inc.
+
+Prior documents: [v1.x](/docs/)
+
+Installation & Setup
+------------  
+
+Please follow installation & tutorial in the navigation panel.
+
+Advanced Topics
+-------  
+
+#### Connectivity
+
+1. [How to use Kylin remote JDBC driver](howto/howto_jdbc.html)
+2. [SQL reference](http://calcite.apache.org/)
+
+---
+
+#### REST APIs
+
+1. [Kylin Restful API list](howto/howto_use_restapi.html)
+2. [Build cube with Restful API](howto/howto_build_cube_with_restapi.html)
+3. [How to consume Kylin REST API in javascript](howto/howto_use_restapi_in_js.html)
+
+---
+
+#### Operations
+
+1. [Backup/restore Kylin metadata store](howto/howto_backup_metadata.html)
+2. [Cleanup storage (HDFS & HBase tables)](howto/howto_cleanup_storage.html)
+3. [Advanced env configurations](install/advance_settings.html)
+4. [How to upgrade](howto/howto_upgrade.html)
+
+---
+
+#### Technical Details
+
+1. [New meta data model structure](/development/new_metadata.html)
+
+
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/advance_settings.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/advance_settings.md b/website/_docs2/install/advance_settings.md
new file mode 100644
index 0000000..06c73ef
--- /dev/null
+++ b/website/_docs2/install/advance_settings.md
@@ -0,0 +1,45 @@
+---
+layout: docs2
+title:  "Advance Settings of Kylin Environment"
+categories: install
+permalink: /docs2/install/advance_settings.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## Enable LZO compression
+
+By default Kylin leverages Snappy compression to compress the output of MR jobs, as well as HBase table storage, reducing the storage overhead. We do not choose LZO compression in Kylin because Hadoop vendors tend not to include LZO in their distributions due to license (GPL) issues. To enable LZO in Kylin, follow these steps:
+
+#### Make sure LZO is working in your environment
+
+First make sure LZO is properly installed on EVERY server in the HBase cluster (see http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.4/bk_installing_manually_book/content/ch_install_hdfs_yarn_chapter.html#install-snappy-man-install ), and restart the cluster.
+To test LZO on the Hadoop CLI machine where you deployed Kylin, just run:
+
+{% highlight Groff markup %}
+hbase org.apache.hadoop.hbase.util.CompressionTest file:///PATH-TO-A-LOCAL-TMP-FILE lzo
+{% endhighlight %}
+
+If no exception is printed, you're good to go. Otherwise you'll need to install LZO properly on this server first.
+To test whether the HBase cluster is ready to create LZO compressed tables, run the following command in the HBase shell:
+
+{% highlight Groff markup %}
+create 'lzoTable', {NAME => 'colFam',COMPRESSION => 'LZO'}
+{% endhighlight %}
+
+#### Use LZO for HBase compression
+
+Stop Kylin first by running `./kylin.sh stop`, then go to $KYLIN_HOME/conf/kylin.properties and change `kylin.hbase.default.compression.codec=snappy` to `kylin.hbase.default.compression.codec=lzo`.
+
+After this, run `./kylin.sh start` to start Kylin again. Newly created HBase tables will then be compressed with LZO. (The LZO-related entries in $KYLIN_HOME/conf/kylin_job_conf.xml control MR job compression and are covered in the next section.)
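+
+For reference, this is the single line that changes in kylin.properties (value taken from the text above):
+
+{% highlight Groff markup %}
+# was: kylin.hbase.default.compression.codec=snappy
+kylin.hbase.default.compression.codec=lzo
+{% endhighlight %}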
+
+#### Use LZO for MR jobs
+
+Stop Kylin, then modify $KYLIN_HOME/conf/kylin_job_conf.xml by changing every occurrence of `org.apache.hadoop.io.compress.SnappyCodec` to `com.hadoop.compression.lzo.LzoCodec`.
+
+Start Kylin again. Now Kylin will use LZO to compress MR job outputs.
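+
+One way to make the change from the command line, a sketch assuming GNU sed on the Kylin server (back up the file first):
+
+{% highlight Groff markup %}
+cp $KYLIN_HOME/conf/kylin_job_conf.xml $KYLIN_HOME/conf/kylin_job_conf.xml.bak
+sed -i 's/org.apache.hadoop.io.compress.SnappyCodec/com.hadoop.compression.lzo.LzoCodec/g' $KYLIN_HOME/conf/kylin_job_conf.xml
+{% endhighlight %}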
+
+## Enable LDAP or SSO authentication
+
+Check [How to Enable Security with LDAP and SSO](../howto/howto_ldap_and_sso.html)

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/hadoop_evn.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/hadoop_evn.md b/website/_docs2/install/hadoop_evn.md
new file mode 100644
index 0000000..9694863
--- /dev/null
+++ b/website/_docs2/install/hadoop_evn.md
@@ -0,0 +1,35 @@
+---
+layout: docs2
+title:  "Hadoop Environment"
+categories: install
+permalink: /docs2/install/hadoop_env.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## Hadoop Environment
+
+Kylin requires you to have access to a Hadoop CLI machine, where you have full permissions to HDFS, Hive, HBase and MapReduce. To make things easier we strongly recommend you start by running Kylin on a Hadoop sandbox, like <http://hortonworks.com/products/hortonworks-sandbox/>. In the following tutorial we'll go with **Hortonworks Sandbox 2.1** and **Cloudera QuickStart VM 5.1**.
+
+To avoid permission issues, we suggest you use the `root` account. The password for **Hortonworks Sandbox 2.1** is `hadoop`, and for **Cloudera QuickStart VM 5.1** it is `cloudera`.
+
+We also suggest using bridged mode instead of NAT mode in your VirtualBox settings. Bridged mode will assign your sandbox an independent IP address so that you can avoid issues like https://github.com/KylinOLAP/Kylin/issues/12
+
+### Start Hadoop
+
+Please make sure Hive, HDFS and HBase are available on your CLI machine.
+If you don't know how, here's a simple tutorial for the Hortonworks sandbox:
+
+Use Ambari to launch Hadoop:
+
+	ambari-agent start
+	ambari-server start
+
+Once both commands have run successfully, you can go to the Ambari homepage at <http://your_sandbox_ip:8080> (user: admin, password: admin) to check the status of everything. **By default Hortonworks Ambari disables HBase; you'll need to manually start the `HBase` service from the Ambari homepage.**
+
+![start hbase in ambari](https://raw.githubusercontent.com/KylinOLAP/kylinolap.github.io/master/docs/installation/starthbase.png)
+
+**Additional Info for setting up Hortonworks Sandbox on VirtualBox**
+
+	Please make sure the HBase Master port [default 60000] and the ZooKeeper port [default 2181] are forwarded to the host OS.
+ 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/index.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/index.md b/website/_docs2/install/index.md
new file mode 100644
index 0000000..b0e154e
--- /dev/null
+++ b/website/_docs2/install/index.md
@@ -0,0 +1,47 @@
+---
+layout: docs2
+title:  "Installation Guide"
+categories: install
+permalink: /docs2/install/index.html
+version: v0.7.2
+since: v0.7.1
+---
+
+### Environment
+
+Kylin requires a properly set up Hadoop environment to run. The following are the minimal requirements to run Kylin; for more detail, please check this reference: [Hadoop Environment](hadoop_env.html).
+
+## Recommended Hadoop Versions
+
+* Hadoop: 2.4 - 2.7
+* Hive: 0.13 - 0.14
+* HBase: 0.98 - 0.99
+* JDK: 1.7+
+
+_Tested with Hortonworks HDP 2.2 and Cloudera Quickstart VM 5.1_
+
+
+It is most common to install Kylin on a Hadoop client machine. It can be used for demos, or by those who want to host their own web site to provide the Kylin service. The scenario is depicted as:
+
+![On-Hadoop-CLI-installation](/images/install/on_cli_install_scene.png)
+
+For normal use cases, the application in the above picture refers to Kylin Web, which contains a web interface for cube building, querying and all sorts of management. Kylin Web launches a query engine for querying and a cube build engine for building cubes. These two engines interact with the Hadoop components, like Hive and HBase.
+
+Aside from some prerequisite software installations, the core of the Kylin installation is accomplished by running a single script. After running the script, you will be able to build the sample cube and query the tables behind the cubes via a unified web interface.
+
+### Install Kylin
+
+1. Download latest Kylin binaries at [http://kylin.apache.org/download](http://kylin.apache.org/download)
+2. Export KYLIN_HOME pointing to the extracted Kylin folder
+3. Make sure the user has the privilege to run hadoop, hive and hbase commands in the shell. If you are not sure, you can run **bin/check-env.sh**; it will print out detailed information if you have any environment issues.
+4. To start Kylin, simply run **bin/kylin.sh start**
+5. To stop Kylin, simply run **bin/kylin.sh stop** (see the consolidated command sketch after this list)
+
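+Taken together, the steps above look roughly like this (the extraction path is an assumption; use the folder where you unpacked the binary package):
+
+{% highlight Groff markup %}
+export KYLIN_HOME=/usr/local/apache-kylin   # assumed extraction folder
+$KYLIN_HOME/bin/check-env.sh                # verify hadoop/hive/hbase access
+$KYLIN_HOME/bin/kylin.sh start              # start Kylin
+$KYLIN_HOME/bin/kylin.sh stop               # stop Kylin
+{% endhighlight %}
+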
+> If you want to have multiple Kylin nodes please refer to [this](kylin_cluster.html)
+
+After Kylin has started you can visit <http://your_hostname:7070/kylin>. The default username/password is ADMIN/KYLIN. You will see a clean Kylin homepage with nothing in it. To get started you can:
+
+1. [Quick play with a sample cube](../tutorial/kylin_sample.html)
+2. [Create and Build your own cube](../tutorial/create_cube.html)
+3. [Kylin Web Tutorial](../tutorial/web.html)
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/kylin_cluster.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/kylin_cluster.md b/website/_docs2/install/kylin_cluster.md
new file mode 100644
index 0000000..5200643
--- /dev/null
+++ b/website/_docs2/install/kylin_cluster.md
@@ -0,0 +1,30 @@
+---
+layout: docs2
+title:  "Multiple Kylin REST servers"
+categories: install
+permalink: /docs2/install/kylin_cluster.html
+version: v0.7.2
+since: v0.7.1
+---
+
+
+### Kylin Server modes
+
+Kylin instances are stateless; the runtime state is saved in the "Metadata Store" in HBase (the kylin.metadata.url config in conf/kylin.properties). For load balancing it is possible to start multiple Kylin instances sharing the same metadata store (thus sharing the same state on table schemas, job status, cube status, etc.).
+
+Each Kylin instance has a kylin.server.mode entry in conf/kylin.properties specifying the runtime mode. It has three options: 1. "job" for running the job engine only; 2. "query" for running the query engine only; and 3. "all" for running both. Note that only one server can run the job engine ("all" mode or "job" mode); the others must all be in "query" mode.
+
+A typical scenario is depicted in the following chart:
+
+![]( /images/install/kylin_server_modes.png)
+
+### Setting up Multiple Kylin REST servers
+
+If you are running Kylin in a cluster or you have multiple Kylin REST server instances, please make sure the following properties are correctly configured in ${KYLIN_HOME}/conf/kylin.properties (a sample snippet follows the list):
+
+1. kylin.rest.servers 
+	List of web servers in use; this enables one web server instance to sync up with the other servers. For example: kylin.rest.servers=sandbox1:7070,sandbox2:7070
+  
+2. kylin.server.mode
+	Make sure there is only one instance whose "kylin.server.mode" is set to "all" if there are multiple instances.
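+
+A sample snippet, assuming two instances, sandbox1 and sandbox2, where only sandbox1 runs the job engine (hostnames are placeholders):
+
+{% highlight Groff markup %}
+# on both instances
+kylin.rest.servers=sandbox1:7070,sandbox2:7070
+
+# on sandbox1 only
+kylin.server.mode=all
+
+# on sandbox2 (and any additional query nodes)
+kylin.server.mode=query
+{% endhighlight %}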
+	
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/kylin_docker.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/kylin_docker.md b/website/_docs2/install/kylin_docker.md
new file mode 100644
index 0000000..c59ee89
--- /dev/null
+++ b/website/_docs2/install/kylin_docker.md
@@ -0,0 +1,46 @@
+---
+layout: docs2
+title:  "On Hadoop Kylin installation using Docker"
+categories: install
+permalink: /docs2/install/kylin_docker.html
+version: v0.6
+since: v0.6
+---
+
+With the help of SequenceIQ, we have put together a fully automated method of creating a Kylin cluster (along with Hadoop, HBase and Hive). The only thing you will need to do is to pull the container from the official Docker repository using the commands listed below:
+
+### Pre-Requisite
+
+1. Docker (If you don't have Docker installed, follow this [link](https://docs.docker.com/installation/#installation))
+2. Minimum RAM - 4 GB (we'll be running Kylin, Hadoop, HBase & Hive)
+
+### Installation
+{% highlight Groff markup %}
+docker pull sequenceiq/kylin:0.7.2
+{% endhighlight %}
+
+Once the container is pulled you are ready to start playing with Kylin. Get the following helper functions from our Kylin GitHub [repository](https://github.com/sequenceiq/docker-kylin/blob/master/ambari-functions) - _(make sure you source it)._
+
+{% highlight Groff markup %}
+ $ wget https://raw.githubusercontent.com/sequenceiq/docker-kylin/master/ambari-functions
+ $ source ambari-functions
+{% endhighlight %}
+{% highlight Groff markup %}
+ $ kylin-deploy-cluster 1
+{% endhighlight %}
+
+You can specify the number of nodes you'd like to have in your cluster (1 in this case). Once all the necessary Hadoop
+services are installed, Kylin is built on top of them, and then you can reach the UIs at:
+{% highlight Groff markup %}
+#Ambari Dashboard
+http://<container_ip>:8080
+{% endhighlight %}
+Use `admin/admin` to login. Make sure HBase is running. 
+
+{% highlight Groff markup %}
+#Kylin Dashboard
+http://<container_ip>:7070/kylin
+{% endhighlight %}
+The default credentials to login are: `ADMIN:KYLIN`. 
+The cluster is pre-populated with sample data and is ready to build cubes as shown [here](../tutorial/create_cube.html).
+  

http://git-wip-us.apache.org/repos/asf/kylin/blob/0fb16aa2/website/_docs2/install/manual_install_guide.md
----------------------------------------------------------------------
diff --git a/website/_docs2/install/manual_install_guide.md b/website/_docs2/install/manual_install_guide.md
new file mode 100644
index 0000000..5ec438c
--- /dev/null
+++ b/website/_docs2/install/manual_install_guide.md
@@ -0,0 +1,48 @@
+---
+layout: docs2
+title:  Manual Installation Guide
+categories: install
+permalink: /docs2/install/manual_install_guide.html
+version: v0.7.2
+since: v0.7.1
+---
+
+## INTRODUCTION
+
+In most cases our automated script ([Installation Guide](index.html)) can help you launch Kylin in your Hadoop sandbox and even in your Hadoop cluster. However, in case something goes wrong in the deploy script, this article serves as a reference guide to fix your issues.
+
+Basically this article explains every step in the automatic script. We assume that you are already very familiar with Hadoop operations on Linux. 
+
+## PREREQUISITES
+* Tomcat installed, with CATALINA_HOME exported. 
+* Kylin binary package copied to a local folder, with $KYLIN_HOME set accordingly
+
+## STEPS
+
+### 4. Prepare Jars
+
+There are two jars that Kylin needs to use; both are configured in the default kylin.properties:
+
+```
+kylin.job.jar=/tmp/kylin/kylin-job-latest.jar
+
+```
+
+This is the job jar that Kylin uses for MR jobs. You need to copy $KYLIN_HOME/job/target/kylin-job-latest.jar to /tmp/kylin/.
+
+```
+kylin.coprocessor.local.jar=/tmp/kylin/kylin-coprocessor-latest.jar
+
+```
+
+This is an HBase coprocessor jar that Kylin deploys to HBase for performance boosting. You need to copy $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar to /tmp/kylin/. The copy commands are sketched below.
+
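+A sketch of the copy steps described above (the paths come straight from the text; adjust if your layout differs):
+
+```
+mkdir -p /tmp/kylin
+cp $KYLIN_HOME/job/target/kylin-job-latest.jar /tmp/kylin/
+cp $KYLIN_HOME/storage/target/kylin-coprocessor-latest.jar /tmp/kylin/
+```
+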
+### 5. Start Kylin
+
+Start Kylin with
+
+`./kylin.sh start`
+
+and stop Kylin with
+
+`./kylin.sh stop`