Posted to commits@kylin.apache.org by sh...@apache.org on 2018/12/10 06:23:02 UTC

[kylin] branch kylin-on-parquet updated (4d79e0d -> e8f96bb)

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a change to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git.


 discard 4d79e0d  KYLIN-3625 code review
 discard 8bbb777  KYLIN-3626 Allow customization for storage path
 discard bbf9d25  KYLIN-3623 Convert cuboid to Parquet in MR
 discard c464a38  KYLIN-3625 Query engine for Parquet and apply CI for Parquet
 discard 5566c59  KYLIN-3624 Convert sequence files to Parquet in Spark
 discard 9954d36  KYLIN-3622,KYLIN-3624 Cube layout in Parquet; convert cube data to parquet in Spark
     add 45fb6a2  KYLIN-3689 When the startTime is equal to the endTime in build request, the segment will build all data.
     add c9d5d0d  KYLIN-3689 fix UT
     add dca3ee7  KYLIN-3631 Use Arrays#parallelSort instead of Arrays#sort
     add cede58b  KYLIN-3290 Leverage getDeclaredConstructor().newInstance() instead of newInstance()
     add 4e03407  KYLIN-3636 set default storage_type to 2
     add 5aea3d4  KYLIN-3697:check port availability when starts kylin instance
     add 7105e5e  KYLIN-3697 minor, only kylin.sh start check-env
     add 78a5f34  KYLIN-3669, add log to indicate the case when using GTStreamAggregateScanner.
     add 98cb504  KYLIN-3666 HDFS metadata url not be recognized
     add ac7a86a  KYLIN-3693 TopN incorrect in Spark engine
     add 920ac2f  KYLIN-3559 Use Splitter for splitting String
     add b046585  KYLIN-3559 Resolve UT issues
     add 733d574  #KYLIN-3684, HIVE_LIB is not set or not resolved correctly
     add 70f15d7  KYLIN-3704 upgrade calcite version to 1.16.0-kylin-r2 version
     add c4e58dc  Revert "KYLIN-3559 Resolve UT issues"
     add 2d8e0fc  Revert "KYLIN-3559 Use Splitter for splitting String"
     add 4e6a69b  KYLIN-3665 Partition time column may never be added
     add 7bcc698  KYLIN-3597 fix static code issues
     add 97ee98b  KYLIN-3559 Use Splitter for splitting String (#364)
     add 4158d7b  KYLIN-3700 Quote sql identities when creating flat table
     add be5df4a  minor, remove a deprecated configuration("kylin.job.lock") usage in the code.
     add 7f672b0  KYLIN-3695, fix lose decimal scale value in column type decimal(a, b)
     add b8ad201  KYLIN-1111 Ignore unsupported hive column types when sync hive table
     add 2979f40  minor, clean deprecated configuration "kylin.job.controller.lock" in kylin-backward-compatibility.properties.
     add 39878ba  Issue 3597 fix sonar issues (#375)
     new fa94830  KYLIN-3622,KYLIN-3624 Cube layout in Parquet; convert cube data to parquet in Spark
     new ff134be  KYLIN-3624 Convert sequence files to Parquet in Spark
     new a47b2d6  KYLIN-3625 Query engine for Parquet and apply CI for Parquet
     new 5ea041c  KYLIN-3623 Convert cuboid to Parquet in MR
     new 126ab09  KYLIN-3626 Allow customization for storage path
     new e8f96bb  KYLIN-3625 code review

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (4d79e0d)
            \
             N -- N -- N   refs/heads/kylin-on-parquet (e8f96bb)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 build/bin/check-env.sh                             |   5 +-
 ...ple-streaming.sh => check-port-availability.sh} |  14 +-
 build/bin/find-hive-dependency.sh                  |  53 ++++-
 build/bin/kylin.sh                                 |   5 +-
 .../kylin/cache/memcached/MemcachedCache.java      |  11 +-
 .../kylin/common/BackwardCompatibilityConfig.java  |   7 +-
 .../java/org/apache/kylin/common/util/Bytes.java   |  47 ++--
 .../org/apache/kylin/common/util/ClassUtil.java    |   2 +-
 .../apache/kylin/common/util/HiveCmdBuilder.java   |   5 +-
 .../org/apache/kylin/common/util/StringUtil.java   |  22 +-
 .../kylin-backward-compatibility.properties        |   1 -
 .../kylin/common/util/HiveCmdBuilderTest.java      |  10 +-
 .../apache/kylin/common/util/StringUtilTest.java   |  53 +++++
 .../java/org/apache/kylin/cube/CubeSegment.java    |   6 +
 .../kylin/cube/cli/CubeSignatureRefresher.java     |   3 +-
 .../java/org/apache/kylin/cube/model/CubeDesc.java |   2 +-
 .../validation/rule/AggregationGroupRule.java      |  28 ++-
 .../kylin/cube/cuboid/CuboidSchedulerTest.java     |   5 +-
 .../kylin/dict/DictionaryInfoSerializer.java       |   8 +-
 .../apache/kylin/dict/DictionarySerializer.java    |   2 +-
 .../org/apache/kylin/dict/NumberDictionary.java    |   2 +-
 .../org/apache/kylin/dict/NumberDictionary2.java   |   2 +-
 .../java/org/apache/kylin/dict/TrieDictionary.java |   2 +-
 .../apache/kylin/dict/TrieDictionaryForest.java    |   2 +-
 .../kylin/dict/global/GlobalDictHDFSStore.java     |   4 +-
 .../java/org/apache/kylin/job/JoinedFlatTable.java |  45 +++-
 .../kylin/job/execution/AbstractExecutable.java    |  27 +--
 .../job/impl/threadpool/DistributedScheduler.java  |   6 +-
 .../kylin/job/util/FlatTableSqlQuoteUtils.java     | 229 ++++++++++++++++++
 .../kylin/job/util/FlatTableSqlQuoteUtilsTest.java | 137 +++++++++++
 .../BitmapIntersectDistinctCountAggFunc.java       |   8 +-
 .../apache/kylin/measure/raw/RawAggregator.java    |   1 -
 .../org/apache/kylin/measure/topn/Counter.java     |  18 +-
 .../kylin/measure/topn/DoubleDeltaSerializer.java  |   2 +-
 .../kylin/measure/topn/TopNCounterSerializer.java  |   2 +-
 .../apache/kylin/measure/topn/TopNMeasureType.java |  41 ++--
 .../kylin/metadata/cachesync/Broadcaster.java      |  23 +-
 .../apache/kylin/metadata/datatype/DataType.java   |  13 +-
 .../apache/kylin/metadata/model/PartitionDesc.java |  39 ++--
 .../org/apache/kylin/metadata/model/Segments.java  |   2 +-
 .../org/apache/kylin/metadata/model/TableDesc.java |  14 +-
 .../org/apache/kylin/metadata/model/TableRef.java  |   4 +
 .../measure/topn/DoubleDeltaSerializerTest.java    |   6 +-
 .../kylin/metadata/datatype/DataTypeTest.java      |   7 +
 .../DefaultPartitionConditionBuilderTest.java      |   2 +-
 .../storage/gtrecord/SegmentCubeTupleIterator.java |   1 +
 .../datasource/adaptor/AbstractJdbcAdaptor.java    |  96 ++++++--
 .../sdk/datasource/adaptor/DefaultAdaptor.java     |  90 +++++---
 .../sdk/datasource/framework/JdbcConnector.java    |   8 +-
 .../framework/SourceConnectorFactory.java          |   2 +
 .../datasource/framework/conv/ConvSqlWriter.java   |  25 +-
 .../framework/conv/DefaultConfiguer.java           |  40 ++--
 .../datasource/framework/conv/SqlConverter.java    |  33 ++-
 .../framework/conv/GenericSqlConverterTest.java    |  33 +--
 .../framework/conv/SqlConverterTest.java           | 256 ++++++++++++++++++---
 .../apache/kylin/engine/mr/ByteArrayWritable.java  |  14 +-
 .../kylin/engine/mr/LookupMaterializeContext.java  |   5 +-
 .../kylin/engine/mr/common/AbstractHadoopJob.java  |  17 +-
 .../engine/mr/steps/MergeDictionaryMapper.java     |  16 +-
 .../engine/mr/steps/MergeDictionaryReducer.java    |   3 +-
 .../mr/steps/RowKeyDistributionCheckerJob.java     |  93 --------
 .../mr/steps/RowKeyDistributionCheckerMapper.java  | 112 ---------
 .../mr/steps/RowKeyDistributionCheckerReducer.java |  51 ----
 .../kylin/engine/mr/steps/SegmentReEncoder.java    |  11 +-
 .../engine/mr/steps/UpdateDictionaryStep.java      |   3 +-
 .../kylin/engine/spark/SparkCubingByLayer.java     |   8 +-
 .../kylin/engine/spark/SparkCubingMerge.java       |  17 +-
 .../kylin/engine/spark/SparkMergingDictionary.java |   3 +-
 .../org/apache/kylin/jdbc/KylinConnection.java     |   4 +-
 .../cube/cuboid/algorithm/ITAlgorithmTestBase.java |   3 +-
 .../cube/inmemcubing/ITInMemCubeBuilderTest.java   |   5 +-
 .../org/apache/kylin/jdbc/ITJDBCDriverTest.java    |   2 +-
 .../kylin/job/BaseTestDistributedScheduler.java    |   2 -
 .../org/apache/kylin/query/ITKylinQueryTest.java   |  18 +-
 .../query/sql_distinct_precisely/query03.sql       |   6 +-
 .../query/sql_distinct_precisely/query04.sql       |   6 +-
 .../query03.sql                                    |   2 +-
 .../query04.sql}                                   |   1 +
 pom.xml                                            |   2 +-
 .../org/apache/kylin/query/QueryConnection.java    |   5 +-
 .../kylin/query/relnode/OLAPAggregateRel.java      |   2 +-
 .../kylin/query/relnode/OLAPAuthentication.java    |   8 +-
 .../apache/kylin/query/relnode/OLAPTableScan.java  |  12 +-
 .../broadcaster/BroadcasterReceiveServlet.java     |   9 +-
 .../kylin/rest/controller/TableController.java     |   4 +-
 .../apache/kylin/rest/init/InitialTaskManager.java |   3 +-
 .../apache/kylin/rest/service/AdminService.java    |   3 +-
 .../apache/kylin/rest/service/QueryService.java    |   3 +-
 .../rest/signature/RealizationSetCalculator.java   |   3 +-
 .../org/apache/kylin/rest/bean/BeanValidator.java  |   2 +-
 .../kylin/source/hive/GarbageCollectionStep.java   |   2 +-
 .../apache/kylin/source/hive/HiveInputBase.java    |  10 +-
 .../kylin/source/hive/HiveMetadataExplorer.java    |  29 ++-
 .../HiveColumnCardinalityUpdateJob.java            |   3 +-
 .../apache/kylin/source/hive/HiveMRInputTest.java  |  18 +-
 .../apache/kylin/source/jdbc/JdbcHiveMRInput.java  |  24 +-
 .../kylin/source/jdbc/extensible/JdbcExplorer.java |  88 +++----
 .../source/jdbc/extensible/JdbcHiveMRInput.java    |  21 +-
 .../jdbc/extensible/JdbcHiveMRInputTest.java       |  14 +-
 .../source/kafka/util/KafkaSampleProducer.java     |  12 +-
 .../org/apache/kylin/tool/CubeMetaExtractor.java   |   7 +-
 .../apache/kylin/tool/CubeMigrationCheckCLI.java   |   9 +-
 .../org/apache/kylin/tool/HBaseUsageExtractor.java |   5 +-
 .../org/apache/kylin/tool/JobDiagnosisInfoCLI.java |  11 +-
 webapp/app/js/controllers/cubeMeasures.js          |   8 +-
 105 files changed, 1436 insertions(+), 794 deletions(-)
 copy build/bin/{sample-streaming.sh => check-port-availability.sh} (66%)
 mode change 100755 => 100644
 create mode 100644 core-common/src/test/java/org/apache/kylin/common/util/StringUtilTest.java
 create mode 100644 core-job/src/main/java/org/apache/kylin/job/util/FlatTableSqlQuoteUtils.java
 create mode 100644 core-job/src/test/java/org/apache/kylin/job/util/FlatTableSqlQuoteUtilsTest.java
 delete mode 100644 engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/RowKeyDistributionCheckerJob.java
 delete mode 100644 engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/RowKeyDistributionCheckerMapper.java
 delete mode 100644 engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/RowKeyDistributionCheckerReducer.java
 copy kylin-it/src/test/resources/query/{sql_distinct_precisely => sql_distinct_precisely_rollup}/query03.sql (97%)
 copy kylin-it/src/test/resources/query/{sql_distinct_precisely/query03.sql => sql_distinct_precisely_rollup/query04.sql} (97%)


[kylin] 01/06: KYLIN-3622,KYLIN-3624 Cube layout in Parquet; convert cube data to parquet in Spark

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit fa9483011c2a18e401def8d24e9d6248d13c8b86
Author: chao long <wa...@qq.com>
AuthorDate: Sat Sep 29 14:05:30 2018 +0800

    KYLIN-3622,KYLIN-3624 Cube layout in Parquet; convert cube data to parquet in Spark
---
 .../src/main/resources/kylin-defaults.properties   |  26 ++
 .../org/apache/kylin/cube/kv/RowKeyColumnIO.java   |   3 +
 .../apache/kylin/measure/BufferedMeasureCodec.java |   4 +
 .../apache/kylin/metadata/model/IStorageAware.java |   1 +
 .../kylin/engine/mr/common/CubeStatsReader.java    |   5 +
 .../engine/spark/SparkBatchCubingJobBuilder2.java  |  21 +-
 .../kylin/engine/spark/SparkCubingByLayer.java     |  20 +-
 .../engine/spark/SparkCubingByLayerParquet.java    | 421 +++++++++++++++++++++
 pom.xml                                            |   6 +
 server-base/pom.xml                                |   4 +
 server/pom.xml                                     |   6 +
 .../kylin/storage/parquet/ParquetStorage.java      |  49 +++
 .../storage/parquet/cube/CubeStorageQuery.java     |  33 +-
 .../storage/parquet/steps/ParquetSparkOutput.java  |  72 ++++
 webapp/app/js/controllers/cubeSchema.js            |   9 +-
 webapp/app/js/model/cubeConfig.js                  |   4 +
 .../partials/cubeDesigner/advanced_settings.html   |  15 +-
 17 files changed, 684 insertions(+), 15 deletions(-)

diff --git a/core-common/src/main/resources/kylin-defaults.properties b/core-common/src/main/resources/kylin-defaults.properties
index 6f2db9a..6238e44 100644
--- a/core-common/src/main/resources/kylin-defaults.properties
+++ b/core-common/src/main/resources/kylin-defaults.properties
@@ -61,6 +61,9 @@ kylin.restclient.connection.default-max-per-route=20
 #max connections of one rest-client
 kylin.restclient.connection.max-total=200
 
+## Parquet storage
+kylin.storage.provider.4=org.apache.kylin.storage.parquet.ParquetStorage
+
 ### PUBLIC CONFIG ###
 kylin.engine.default=2
 kylin.storage.default=2
@@ -351,3 +354,26 @@ kylin.engine.spark-conf-mergedict.spark.memory.fraction=0.2
 #kylin.source.jdbc.pass=
 #kylin.source.jdbc.sqoop-home=
 #kylin.source.jdbc.filed-delimiter=|
+
+kylin.storage.columnar.spark-env.HADOOP_CONF_DIR=${kylin_hadoop_conf_dir}
+## for any spark config entry in http://spark.apache.org/docs/latest/configuration.html#spark-properties, prefix it with "kylin.storage.columnar.spark-conf" and append here
+kylin.storage.columnar.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current -Dzipkin.collector-hostname=${ZIPKIN_HOSTNAME} -Dzipkin.collector-port=${ZIPKIN_SCRIBE_PORT} -DinfluxDB.address=${INFLUXDB_ADDRESS} -Dlog4j.configuration=spark-executor-log4j.properties -Dlog4j.debug -Dkap.spark.identifier=${KAP_SPARK_IDENTIFIER} -Dkap.hdfs.working.dir=${KAP_HDFS_WORKING_DIR} -Dkap.metadata.url=${KAP_METADATA_IDENTIFIER} -XX:MaxDirectMemorySize=896M -Dsparder.dict.cache.size=${SPARDER [...]
+kylin.storage.columnar.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+kylin.storage.columnar.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.storage.columnar.spark-conf.spark.serializer=org.apache.spark.serializer.JavaSerializer
+kylin.storage.columnar.spark-conf.spark.driver.memory=512m
+kylin.storage.columnar.spark-conf.spark.executor.memory=512m
+kylin.storage.columnar.spark-conf.spark.yarn.executor.memoryOverhead=512
+kylin.storage.columnar.spark-conf.yarn.am.memory=512m
+kylin.storage.columnar.spark-conf.spark.executor.cores=1
+kylin.storage.columnar.spark-conf.spark.executor.instances=1
+kylin.storage.columnar.spark-conf.spark.task.maxFailures=1
+kylin.storage.columnar.spark-conf.spark.ui.port=4041
+kylin.storage.columnar.spark-conf.spark.locality.wait=0s
+kylin.storage.columnar.spark-conf.spark.sql.dialect=hiveql
+kylin.storage.columnar.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
+kylin.storage.columnar.spark-conf.hive.execution.engine=MR
+kylin.storage.columnar.spark-conf.spark.scheduler.listenerbus.eventqueue.size=100000000
+kylin.storage.columnar.spark-conf.spark.master=yarn-client
+kylin.storage.columnar.spark-conf.spark.broadcast.compress=false
+
diff --git a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
index 65911a0..b0efc91 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
@@ -18,6 +18,7 @@
 
 package org.apache.kylin.cube.kv;
 
+import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.common.util.Dictionary;
 import org.apache.kylin.dimension.DictionaryDimEnc;
 import org.apache.kylin.dimension.DimensionEncoding;
@@ -57,6 +58,8 @@ public class RowKeyColumnIO implements java.io.Serializable {
 
     public String readColumnString(TblColRef col, byte[] bytes, int offset, int length) {
         DimensionEncoding dimEnc = dimEncMap.get(col);
+        if (dimEnc instanceof DictionaryDimEnc)
+            return String.valueOf(BytesUtil.readUnsigned(bytes, offset, length));
         return dimEnc.decode(bytes, offset, length);
     }
 
diff --git a/core-metadata/src/main/java/org/apache/kylin/measure/BufferedMeasureCodec.java b/core-metadata/src/main/java/org/apache/kylin/measure/BufferedMeasureCodec.java
index 44e5708..ec8069f 100644
--- a/core-metadata/src/main/java/org/apache/kylin/measure/BufferedMeasureCodec.java
+++ b/core-metadata/src/main/java/org/apache/kylin/measure/BufferedMeasureCodec.java
@@ -40,6 +40,10 @@ public class BufferedMeasureCodec implements java.io.Serializable {
     private transient ByteBuffer buf;
     final private int[] measureSizes;
 
+    public MeasureCodec getCodec() {
+        return codec;
+    }
+
     public BufferedMeasureCodec(Collection<MeasureDesc> measureDescs) {
         this.codec = new MeasureCodec(measureDescs);
         this.measureSizes = new int[codec.getMeasuresCount()];
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java b/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java
index e552574..3e7de24 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java
@@ -23,6 +23,7 @@ public interface IStorageAware {
     public static final int ID_HBASE = 0;
     public static final int ID_HYBRID = 1;
     public static final int ID_SHARDED_HBASE = 2;
+    public static final int ID_PARQUET = 4;
 
     int getStorageType();
 }
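
The new ID_PARQUET constant pairs with the kylin.storage.provider.4 entry added to kylin-defaults.properties earlier in this patch: the numeric storage type recorded on a cube selects which IStorage implementation serves it. The following is a minimal, self-contained sketch of that ID-to-class wiring in plain Java; the registry class, the Storage stand-in interface, and ParquetLikeStorage are illustrative stand-ins, not Kylin's actual factory code.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    // Sketch of an ID-keyed storage registry, analogous to pairing
    // "kylin.storage.provider.4=org.apache.kylin.storage.parquet.ParquetStorage"
    // with IStorageAware.ID_PARQUET = 4. Not Kylin's actual factory code.
    public class StorageProviderRegistryDemo {

        interface Storage { String name(); }                        // stand-in for IStorage

        public static class ParquetLikeStorage implements Storage { // stand-in for ParquetStorage
            public String name() { return "parquet"; }
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // In Kylin this entry comes from kylin-defaults.properties; here it points at the local stand-in.
            props.setProperty("kylin.storage.provider.4", ParquetLikeStorage.class.getName());

            Map<Integer, Storage> providers = new HashMap<>();
            for (String key : props.stringPropertyNames()) {
                if (key.startsWith("kylin.storage.provider.")) {
                    int id = Integer.parseInt(key.substring("kylin.storage.provider.".length()));
                    Object impl = Class.forName(props.getProperty(key)).getDeclaredConstructor().newInstance();
                    providers.put(id, (Storage) impl);
                }
            }

            int ID_PARQUET = 4; // mirrors the constant added above
            System.out.println("Storage for type " + ID_PARQUET + ": " + providers.get(ID_PARQUET).name());
        }
    }
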
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/CubeStatsReader.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/CubeStatsReader.java
index 102995e..f6579ef 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/CubeStatsReader.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/CubeStatsReader.java
@@ -307,6 +307,11 @@ public class CubeStatsReader {
         return ret;
     }
 
+    public double estimateCuboidSize(long cuboidId) {
+        Map<Long, Double> cuboidSizeMap = getCuboidSizeMap();
+        return cuboidSizeMap.get(cuboidId);
+    }
+
     public List<Long> getCuboidsByLayer(int level) {
         if (cuboidScheduler == null) {
             throw new UnsupportedOperationException("cuboid scheduler is null");
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
index 3f3c14d..41e4e49 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
@@ -35,6 +35,8 @@ import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.kylin.metadata.model.IStorageAware.ID_PARQUET;
+
 /**
  */
 public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
@@ -73,11 +75,16 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
         // add materialize lookup tables if needed
         LookupMaterializeContext lookupMaterializeContext = addMaterializeLookupTableSteps(result);
 
-        outputSide.addStepPhase2_BuildDictionary(result);
+        if (seg.getStorageType() != ID_PARQUET) {
+            outputSide.addStepPhase2_BuildDictionary(result);
+        }
 
         // Phase 3: Build Cube
         addLayerCubingSteps(result, jobId, cuboidRootPath); // layer cubing, only selected algorithm will execute
-        outputSide.addStepPhase3_BuildCube(result);
+
+        if (seg.getStorageType() != ID_PARQUET) {
+            outputSide.addStepPhase3_BuildCube(result);
+        }
 
         // Phase 4: Update Metadata & Cleanup
         result.addTask(createUpdateCubeInfoAfterBuildStep(jobId, lookupMaterializeContext));
@@ -116,7 +123,11 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
 
     protected void addLayerCubingSteps(final CubingJob result, final String jobId, final String cuboidRootPath) {
         final SparkExecutable sparkExecutable = new SparkExecutable();
-        sparkExecutable.setClassName(SparkCubingByLayer.class.getName());
+        if (seg.getStorageType() == ID_PARQUET) {
+            sparkExecutable.setClassName(SparkCubingByLayerParquet.class.getName());
+        } else {
+            sparkExecutable.setClassName(SparkCubingByLayer.class.getName());
+        }
         configureSparkJob(seg, sparkExecutable, jobId, cuboidRootPath);
         result.addTask(sparkExecutable);
     }
@@ -142,6 +153,10 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
         StringUtil.appendWithSeparator(jars, seg.getConfig().getSparkAdditionalJars());
         sparkExecutable.setJars(jars.toString());
         sparkExecutable.setName(ExecutableConstants.STEP_NAME_BUILD_SPARK_CUBE);
+
+        if (seg.getStorageType() == ID_PARQUET) {
+            sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOuputPath(jobId));
+        }
     }
 
     public String getSegmentMetadataUrl(KylinConfig kylinConfig, String jobId) {
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
index f3b0a13..5931c64 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
@@ -23,7 +23,9 @@ import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Locale;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.OptionBuilder;
 import org.apache.commons.cli.Options;
@@ -55,6 +57,7 @@ import org.apache.kylin.engine.mr.common.CubeStatsReader;
 import org.apache.kylin.engine.mr.common.NDCuboidBuilder;
 import org.apache.kylin.engine.mr.common.SerializableConfiguration;
 import org.apache.kylin.job.JoinedFlatTable;
+import org.apache.kylin.job.constant.ExecutableConstants;
 import org.apache.kylin.measure.BufferedMeasureCodec;
 import org.apache.kylin.measure.MeasureAggregators;
 import org.apache.kylin.measure.MeasureIngester;
@@ -66,6 +69,7 @@ import org.apache.spark.api.java.function.Function;
 import org.apache.spark.api.java.function.Function2;
 import org.apache.spark.api.java.function.PairFlatMapFunction;
 import org.apache.spark.api.java.function.PairFunction;
+import org.apache.spark.sql.SQLContext;
 import org.apache.spark.storage.StorageLevel;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -91,6 +95,8 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
             .withDescription("Hive Intermediate Table").create("hiveTable");
     public static final Option OPTION_INPUT_PATH = OptionBuilder.withArgName(BatchConstants.ARG_INPUT).hasArg()
             .isRequired(true).withDescription("Hive Intermediate Table PATH").create(BatchConstants.ARG_INPUT);
+    public static final Option OPTION_COUNTER_PATH = OptionBuilder.withArgName(BatchConstants.ARG_COUNTER_OUPUT).hasArg()
+            .isRequired(false).withDescription("counter output path").create(BatchConstants.ARG_COUNTER_OUPUT);
 
     private Options options;
 
@@ -102,6 +108,7 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
         options.addOption(OPTION_SEGMENT_ID);
         options.addOption(OPTION_META_URL);
         options.addOption(OPTION_OUTPUT_PATH);
+        options.addOption(OPTION_COUNTER_PATH);
     }
 
     @Override
@@ -117,6 +124,7 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
         String cubeName = optionsHelper.getOptionValue(OPTION_CUBE_NAME);
         String segmentId = optionsHelper.getOptionValue(OPTION_SEGMENT_ID);
         String outputPath = optionsHelper.getOptionValue(OPTION_OUTPUT_PATH);
+        String counterPath = optionsHelper.getOptionValue(OPTION_COUNTER_PATH);
 
         Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1") };
 
@@ -128,6 +136,7 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
 
         KylinSparkJobListener jobListener = new KylinSparkJobListener();
         JavaSparkContext sc = new JavaSparkContext(conf);
+
         sc.sc().addSparkListener(jobListener);
         HadoopUtil.deletePath(sc.hadoopConfiguration(), new Path(outputPath));
         SparkUtil.modifySparkHadoopConfiguration(sc.sc()); // set dfs.replication=2 and enable compress
@@ -205,8 +214,17 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
         }
         allRDDs[totalLevels].unpersist();
         logger.info("Finished on calculating all level cuboids.");
+
+        // only parquet storage work
         logger.info("HDFS: Number of bytes written=" + jobListener.metrics.getBytesWritten());
-        //HadoopUtil.deleteHDFSMeta(metaUrl);
+
+        if (counterPath != null) {
+            Map<String, String> counterMap = Maps.newHashMap();
+            counterMap.put(ExecutableConstants.HDFS_BYTES_WRITTEN, String.valueOf(jobListener.metrics.getBytesWritten()));
+
+            // save counter to hdfs
+            HadoopUtil.writeToSequenceFile(sc.hadoopConfiguration(), counterPath, counterMap);
+        }
     }
 
     protected JavaPairRDD<ByteArray, Object[]> prepareOutput(JavaPairRDD<ByteArray, Object[]> rdd, KylinConfig config,
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
new file mode 100644
index 0000000..d8fccf9
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
@@ -0,0 +1,421 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.engine.spark;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskID;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.ByteArray;
+import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.common.util.BytesUtil;
+import org.apache.kylin.common.util.JsonUtil;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.cube.kv.RowConstants;
+import org.apache.kylin.cube.kv.RowKeyDecoder;
+import org.apache.kylin.cube.model.CubeDesc;
+import org.apache.kylin.dimension.AbstractDateDimEnc;
+import org.apache.kylin.dimension.DimensionEncoding;
+import org.apache.kylin.dimension.FixedLenDimEnc;
+import org.apache.kylin.dimension.FixedLenHexDimEnc;
+import org.apache.kylin.dimension.IDimensionEncodingMap;
+import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
+import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
+import org.apache.kylin.engine.mr.common.CubeStatsReader;
+import org.apache.kylin.engine.mr.common.SerializableConfiguration;
+import org.apache.kylin.measure.BufferedMeasureCodec;
+import org.apache.kylin.measure.MeasureIngester;
+import org.apache.kylin.measure.MeasureType;
+import org.apache.kylin.measure.basic.BasicMeasureType;
+import org.apache.kylin.measure.basic.BigDecimalIngester;
+import org.apache.kylin.measure.basic.DoubleIngester;
+import org.apache.kylin.measure.basic.LongIngester;
+import org.apache.kylin.metadata.model.MeasureDesc;
+import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.GroupFactory;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.io.api.Binary;
+import org.apache.parquet.schema.MessageType;
+import org.apache.parquet.schema.OriginalType;
+import org.apache.parquet.schema.PrimitiveType;
+import org.apache.parquet.schema.Types;
+import org.apache.spark.Partitioner;
+import org.apache.spark.api.java.JavaPairRDD;
+import org.apache.spark.api.java.function.PairFunction;
+import scala.Tuple2;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+public class SparkCubingByLayerParquet extends SparkCubingByLayer {
+    @Override
+    protected void saveToHDFS(JavaPairRDD<ByteArray, Object[]> rdd, String metaUrl, String cubeName, CubeSegment cubeSeg, String hdfsBaseLocation, int level, Job job, KylinConfig kylinConfig) throws Exception {
+        final IDimensionEncodingMap dimEncMap = cubeSeg.getDimensionEncodingMap();
+
+        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSeg.getCubeDesc());
+
+        final Map<TblColRef, String> colTypeMap = Maps.newHashMap();
+        final Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
+
+        MessageType schema = cuboidToMessageType(baseCuboid, dimEncMap, cubeSeg.getCubeDesc(), colTypeMap, meaTypeMap);
+
+        logger.info("Schema: {}", schema.toString());
+
+        final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig, level);
+
+        logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping.toString());
+
+        JavaPairRDD<ByteArray, Object[]> repartitionedRDD = rdd.repartitionAndSortWithinPartitions(new CuboidPartitioner(cuboidToPartitionMapping));
+
+        String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(hdfsBaseLocation, level);
+
+        job.setOutputFormatClass(CustomParquetOutputFormat.class);
+        GroupWriteSupport.setSchema(schema, job.getConfiguration());
+        CustomParquetOutputFormat.setOutputPath(job, new Path(output));
+        CustomParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
+        CustomParquetOutputFormat.setCuboidToPartitionMapping(job, cuboidToPartitionMapping);
+
+        JavaPairRDD<Void, Group> groupRDD = repartitionedRDD.mapToPair(new GenerateGroupRDDFunction(cubeName, cubeSeg.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap));
+
+        groupRDD.saveAsNewAPIHadoopDataset(job.getConfiguration());
+    }
+
+    static class CuboidPartitioner extends Partitioner {
+
+        private CuboidToPartitionMapping mapping;
+
+        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping) {
+            this.mapping = cuboidToPartitionMapping;
+        }
+
+        @Override
+        public int numPartitions() {
+            return mapping.getNumPartitions();
+        }
+
+        @Override
+        public int getPartition(Object key) {
+            ByteArray byteArray = (ByteArray) key;
+            long cuboidId = Bytes.toLong(byteArray.array(), RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
+
+            return mapping.getRandomPartitionForCuboidId(cuboidId);
+        }
+    }
+
+    public static class CuboidToPartitionMapping implements Serializable {
+        private Map<Long, List<Integer>> cuboidPartitions;
+        private int partitionNum;
+
+        public CuboidToPartitionMapping(Map<Long, List<Integer>> cuboidPartitions) {
+            this.cuboidPartitions = cuboidPartitions;
+            int partitions = 0;
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                partitions = partitions + entry.getValue().size();
+            }
+            this.partitionNum = partitions;
+        }
+
+        public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
+            cuboidPartitions = Maps.newHashMap();
+
+            List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
+            CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
+
+            int position = 0;
+            for (Long cuboidId : layeredCuboids) {
+                int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
+                List<Integer> positions = Lists.newArrayListWithCapacity(partition);
+
+                for (int i = position; i < position + partition; i++) {
+                    positions.add(i);
+                }
+
+                cuboidPartitions.put(cuboidId, positions);
+                position = position + partition;
+            }
+
+            this.partitionNum = position;
+        }
+
+        public String serialize() throws JsonProcessingException {
+            return JsonUtil.writeValueAsString(cuboidPartitions);
+        }
+
+        public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
+            Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
+            return new CuboidToPartitionMapping(cuboidPartitions);
+        }
+
+        public int getNumPartitions() {
+            return this.partitionNum;
+        }
+
+        public long getCuboidIdByPartition(int partition) {
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                if (entry.getValue().contains(partition)) {
+                    return entry.getKey();
+                }
+            }
+
+            throw new IllegalArgumentException("No cuboidId for partition id: " + partition);
+        }
+
+        public int getRandomPartitionForCuboidId(long cuboidId) {
+            List<Integer> partitions = cuboidPartitions.get(cuboidId);
+            return partitions.get(new Random().nextInt(partitions.size()));
+        }
+
+        public int getPartitionNumForCuboidId(long cuboidId) {
+            return cuboidPartitions.get(cuboidId).size();
+        }
+
+        public String getPartitionFilePrefix(int partition) {
+            String prefix = "cuboid_";
+            long cuboid = getCuboidIdByPartition(partition);
+            int partNum = partition % getPartitionNumForCuboidId(cuboid);
+            prefix = prefix + cuboid + "_part" + partNum;
+
+            return prefix;
+        }
+
+        private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
+            double cuboidSize = cubeStatsReader.estimateCuboidSize(cuboidId);
+            float rddCut = kylinConfig.getSparkRDDPartitionCutMB();
+            int partition = (int) (cuboidSize / rddCut);
+            partition = Math.max(kylinConfig.getSparkMinPartition(), partition);
+            partition = Math.min(kylinConfig.getSparkMaxPartition(), partition);
+            return partition;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder sb = new StringBuilder();
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
+            }
+
+            return sb.toString();
+        }
+    }
+
+    public static class CustomParquetOutputFormat extends ParquetOutputFormat {
+        public static final String CUBOID_TO_PARTITION_MAPPING = "cuboidToPartitionMapping";
+
+        @Override
+        public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
+            FileOutputCommitter committer = (FileOutputCommitter)this.getOutputCommitter(context);
+            TaskID taskId = context.getTaskAttemptID().getTaskID();
+            int partition = taskId.getId();
+
+            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(CUBOID_TO_PARTITION_MAPPING));
+
+            return new Path(committer.getWorkPath(), getUniqueFile(context, mapping.getPartitionFilePrefix(partition)+ "-" + getOutputName(context), extension));
+        }
+
+        public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping) throws IOException {
+            String jsonStr = cuboidToPartitionMapping.serialize();
+
+            job.getConfiguration().set(CUBOID_TO_PARTITION_MAPPING, jsonStr);
+        }
+    }
+
+    static class GenerateGroupRDDFunction implements PairFunction<Tuple2<ByteArray, Object[]>, Void, Group> {
+        private volatile transient boolean initialized = false;
+        private String cubeName;
+        private String segmentId;
+        private String metaUrl;
+        private SerializableConfiguration conf;
+        private List<MeasureDesc> measureDescs;
+        private RowKeyDecoder decoder;
+        private Map<TblColRef, String> colTypeMap;
+        private Map<MeasureDesc, String> meaTypeMap;
+        private GroupFactory factory;
+        private BufferedMeasureCodec measureCodec;
+
+        public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
+            this.cubeName = cubeName;
+            this.segmentId = segmentId;
+            this.metaUrl = metaurl;
+            this.conf = conf;
+            this.colTypeMap = colTypeMap;
+            this.meaTypeMap = meaTypeMap;
+        }
+
+        private void init() {
+            KylinConfig kConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
+            KylinConfig.setAndUnsetThreadLocalConfig(kConfig);
+            CubeInstance cubeInstance = CubeManager.getInstance(kConfig).getCube(cubeName);
+            CubeDesc cubeDesc = cubeInstance.getDescriptor();
+            CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
+            measureDescs = cubeDesc.getMeasures();
+            decoder = new RowKeyDecoder(cubeSegment);
+            factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
+            measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
+        }
+
+        @Override
+        public Tuple2<Void, Group> call(Tuple2<ByteArray, Object[]> tuple) throws Exception {
+            if (initialized == false) {
+                synchronized (SparkCubingByLayer.class) {
+                    if (initialized == false) {
+                        init();
+                    }
+                }
+            }
+
+            long cuboid = decoder.decode(tuple._1.array());
+            List<String> values = decoder.getValues();
+            List<TblColRef> columns = decoder.getColumns();
+
+            Group group = factory.newGroup();
+
+            // for check
+            group.append("cuboidId", cuboid);
+
+            for (int i = 0; i < columns.size(); i++) {
+                TblColRef column = columns.get(i);
+                parseColValue(group, column, values.get(i));
+            }
+
+            ByteBuffer valueBuf = measureCodec.encode(tuple._2());
+            byte[] encodedBytes = new byte[valueBuf.position()];
+            System.arraycopy(valueBuf.array(), 0, encodedBytes, 0, valueBuf.position());
+
+            int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(encodedBytes));
+
+            int valueOffset = 0;
+            for (int i = 0; i < valueLengths.length; ++i) {
+                MeasureDesc measureDesc = measureDescs.get(i);
+                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i]);
+                valueOffset += valueLengths[i];
+            }
+
+            return new Tuple2<>(null, group);
+        }
+
+        private void parseColValue(final Group group, final TblColRef colRef, final String value) {
+            switch (colTypeMap.get(colRef)) {
+                case "int":
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Integer.valueOf(value));
+                    break;
+                case "long":
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Long.valueOf(value));
+                    break;
+                default:
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Binary.fromString(value));
+                    break;
+            }
+        }
+
+        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) {
+            switch (meaTypeMap.get(measureDesc)) {
+                case "long":
+                    group.append(measureDesc.getName(), BytesUtil.readLong(value, offset, length));
+                    break;
+                case "double":
+                    group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
+                    break;
+                default:
+                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
+                    break;
+            }
+        }
+    }
+
+    private MessageType cuboidToMessageType(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
+        Types.MessageTypeBuilder builder = Types.buildMessage();
+
+        List<TblColRef> colRefs = cuboid.getColumns();
+
+        builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named("cuboidId");
+
+        for (TblColRef colRef : colRefs) {
+            DimensionEncoding dimEnc = dimEncMap.get(colRef);
+
+            if (dimEnc instanceof AbstractDateDimEnc) {
+                builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+                colTypeMap.put(colRef, "long");
+            } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
+                org.apache.kylin.metadata.datatype.DataType colDataType = colRef.getType();
+                if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
+                    builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+                    colTypeMap.put(colRef, "long");
+                } else {
+                    // stringFamily && default
+                    builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
+                    colTypeMap.put(colRef, "string");
+                }
+            } else {
+                builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
+                colTypeMap.put(colRef, "int");
+            }
+        }
+
+        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
+
+        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
+            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
+            org.apache.kylin.metadata.datatype.DataType meaDataType = measureDesc.getFunction().getReturnDataType();
+            MeasureType measureType = measureDesc.getFunction().getMeasureType();
+
+            if (measureType instanceof BasicMeasureType) {
+                MeasureIngester measureIngester = aggrIngesters[i];
+                if (measureIngester instanceof LongIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.INT64).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "long");
+                } else if (measureIngester instanceof DoubleIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.DOUBLE).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "double");
+                } else if (measureIngester instanceof BigDecimalIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL).precision(meaDataType.getPrecision()).scale(meaDataType.getScale()).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "decimal");
+                } else {
+                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "binary");
+                }
+            } else {
+                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
+                meaTypeMap.put(measureDesc, "binary");
+            }
+        }
+
+        return builder.named(String.valueOf(cuboid.getId()));
+    }
+
+    private String getColName(TblColRef colRef) {
+        return colRef.getTableAlias() + "_" + colRef.getName();
+    }
+}
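
To make the partition layout above concrete, the following standalone sketch mirrors what CuboidToPartitionMapping computes: each cuboid in a layer receives a contiguous block of Spark partition indices, with the block size derived from the estimated cuboid size divided by the RDD cut size and clamped to the configured minimum and maximum. The cuboid IDs, size estimates, and cut value below are hypothetical, and the code uses only the JDK rather than Kylin classes.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Standalone illustration of the layout built by CuboidToPartitionMapping above:
    // every cuboid in a layer gets a contiguous block of partition indices, sized from
    // its estimated size divided by the RDD cut size and clamped to [min, max].
    public class CuboidPartitionMappingDemo {

        // Same shape as estimateCuboidPartitionNum: size / cut, then clamp.
        static int estimatePartitions(double cuboidSizeMB, double rddCutMB, int min, int max) {
            int partitions = (int) (cuboidSizeMB / rddCutMB);
            return Math.min(max, Math.max(min, partitions));
        }

        public static void main(String[] args) {
            // Hypothetical per-cuboid size estimates (MB) for one layer.
            Map<Long, Double> cuboidSizeMB = new LinkedHashMap<>();
            cuboidSizeMB.put(255L, 120.0);
            cuboidSizeMB.put(127L, 35.0);
            cuboidSizeMB.put(63L, 3.0);

            double rddCutMB = 10.0;   // stands in for kylinConfig.getSparkRDDPartitionCutMB()
            int minPartition = 1;     // stands in for kylinConfig.getSparkMinPartition()
            int maxPartition = 5000;  // stands in for kylinConfig.getSparkMaxPartition()

            Map<Long, List<Integer>> cuboidPartitions = new LinkedHashMap<>();
            int position = 0;
            for (Map.Entry<Long, Double> e : cuboidSizeMB.entrySet()) {
                int count = estimatePartitions(e.getValue(), rddCutMB, minPartition, maxPartition);
                List<Integer> ids = new ArrayList<>(count);
                for (int i = position; i < position + count; i++) {
                    ids.add(i);
                }
                cuboidPartitions.put(e.getKey(), ids);
                position += count;
            }

            System.out.println("total partitions = " + position);  // 12 + 3 + 1 = 16
            for (Map.Entry<Long, List<Integer>> e : cuboidPartitions.entrySet()) {
                System.out.println("cuboid " + e.getKey() + " -> " + e.getValue());
            }
        }
    }

CustomParquetOutputFormat then names each task's output file from this mapping via getPartitionFilePrefix, so the partition files come out as cuboid_<id>_part<n>, one group of files per cuboid.
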
diff --git a/pom.xml b/pom.xml
index 7474817..a2bf4ec 100644
--- a/pom.xml
+++ b/pom.xml
@@ -359,6 +359,12 @@
         <version>${project.version}</version>
         <type>test-jar</type>
       </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-storage-parquet</artifactId>
+            <version>${project.version}</version>
+            <type>test-jar</type>
+        </dependency>
       <dependency>
         <groupId>org.apache.kylin</groupId>
         <artifactId>kylin-server-base</artifactId>
diff --git a/server-base/pom.xml b/server-base/pom.xml
index 4cd3f76..29b193b 100644
--- a/server-base/pom.xml
+++ b/server-base/pom.xml
@@ -66,6 +66,10 @@
         </dependency>
         <dependency>
             <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-storage-parquet</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
             <artifactId>kylin-source-hive</artifactId>
         </dependency>
         <dependency>
diff --git a/server/pom.xml b/server/pom.xml
index b1365a7..a898eff 100644
--- a/server/pom.xml
+++ b/server/pom.xml
@@ -102,6 +102,12 @@
         </dependency>
         <dependency>
             <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-storage-parquet</artifactId>
+            <type>test-jar</type>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
             <artifactId>kylin-server-base</artifactId>
             <type>test-jar</type>
             <scope>test</scope>
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
new file mode 100644
index 0000000..cddce23
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet;
+
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.engine.spark.ISparkOutput;
+import org.apache.kylin.metadata.realization.IRealization;
+import org.apache.kylin.metadata.realization.RealizationType;
+import org.apache.kylin.storage.IStorage;
+import org.apache.kylin.storage.IStorageQuery;
+import org.apache.kylin.storage.parquet.cube.CubeStorageQuery;
+import org.apache.kylin.storage.parquet.steps.ParquetSparkOutput;
+
+public class ParquetStorage implements IStorage {
+    @Override
+    public IStorageQuery createQuery(IRealization realization) {
+        if (realization.getType() == RealizationType.CUBE) {
+            return new CubeStorageQuery((CubeInstance) realization);
+        } else {
+            throw new IllegalStateException(
+                    "Unsupported realization type for ParquetStorage: " + realization.getType());
+        }
+    }
+
+    @Override
+    public <I> I adaptToBuildEngine(Class<I> engineInterface) {
+        if (engineInterface == ISparkOutput.class) {
+            return (I) new ParquetSparkOutput();
+        } else {
+            throw new RuntimeException("Cannot adapt to " + engineInterface);
+        }
+    }
+}
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
similarity index 50%
copy from core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java
copy to storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
index e552574..6a3ad59 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/model/IStorageAware.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
@@ -6,23 +6,38 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * 
+ *
  *     http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
-*/
+ */
 
-package org.apache.kylin.metadata.model;
+package org.apache.kylin.storage.parquet.cube;
 
-public interface IStorageAware {
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.metadata.realization.SQLDigest;
+import org.apache.kylin.metadata.tuple.ITupleIterator;
+import org.apache.kylin.metadata.tuple.TupleInfo;
+import org.apache.kylin.storage.StorageContext;
+import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
 
-    public static final int ID_HBASE = 0;
-    public static final int ID_HYBRID = 1;
-    public static final int ID_SHARDED_HBASE = 2;
+public class CubeStorageQuery extends GTCubeStorageQueryBase {
 
-    int getStorageType();
+    public CubeStorageQuery(CubeInstance cube) {
+        super(cube);
+    }
+
+    @Override
+    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
+        return super.search(context, sqlDigest, returnTupleInfo);
+    }
+
+    @Override
+    protected String getGTStorage() {
+        return null;
+    }
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
new file mode 100644
index 0000000..6f82d8b
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.engine.spark.ISparkOutput;
+import org.apache.kylin.job.execution.DefaultChainedExecutable;
+
+import java.util.List;
+
+public class ParquetSparkOutput implements ISparkOutput {
+    @Override
+    public ISparkBatchCubingOutputSide getBatchCubingOutputSide(CubeSegment seg) {
+        return new ISparkBatchCubingOutputSide() {
+            @Override
+            public void addStepPhase2_BuildDictionary(DefaultChainedExecutable jobFlow) {
+
+            }
+
+            @Override
+            public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
+
+            }
+
+            @Override
+            public void addStepPhase4_Cleanup(DefaultChainedExecutable jobFlow) {
+
+            }
+        };
+    }
+
+    @Override
+    public ISparkBatchMergeOutputSide getBatchMergeOutputSide(CubeSegment seg) {
+        return new ISparkBatchMergeOutputSide() {
+            @Override
+            public void addStepPhase1_MergeDictionary(DefaultChainedExecutable jobFlow) {
+
+            }
+
+            @Override
+            public void addStepPhase2_BuildCube(CubeSegment set, List<CubeSegment> mergingSegments, DefaultChainedExecutable jobFlow) {
+
+            }
+
+            @Override
+            public void addStepPhase3_Cleanup(DefaultChainedExecutable jobFlow) {
+
+            }
+        };
+    }
+
+    @Override
+    public ISparkBatchOptimizeOutputSide getBatchOptimizeOutputSide(CubeSegment seg) {
+        return null;
+    }
+}
diff --git a/webapp/app/js/controllers/cubeSchema.js b/webapp/app/js/controllers/cubeSchema.js
index 5eed9b9..0ac8a96 100755
--- a/webapp/app/js/controllers/cubeSchema.js
+++ b/webapp/app/js/controllers/cubeSchema.js
@@ -18,7 +18,7 @@
 
 'use strict';
 
-KylinApp.controller('CubeSchemaCtrl', function ($scope, QueryService, UserService,modelsManager, ProjectService, AuthenticationService,$filter,ModelService,MetaModel,CubeDescModel,CubeList,TableModel,ProjectModel,ModelDescService,SweetAlert,cubesManager,StreamingService,CubeService,VdmUtil) {
+KylinApp.controller('CubeSchemaCtrl', function ($scope, QueryService, UserService,modelsManager, ProjectService, AuthenticationService,$filter,ModelService,MetaModel,CubeDescModel,CubeList,TableModel,ProjectModel,ModelDescService,SweetAlert,cubesManager,StreamingService,CubeService,VdmUtil, kylinConfig) {
     $scope.modelsManager = modelsManager;
     $scope.cubesManager = cubesManager;
     $scope.projects = [];
@@ -419,6 +419,13 @@ KylinApp.controller('CubeSchemaCtrl', function ($scope, QueryService, UserServic
             }
         });
 
+        // set default values when engine or storage type is null
+        if ($scope.cubeMetaFrame.engine_type === null) {
+            $scope.cubeMetaFrame.engine_type = kylinConfig.getCubeEng();
+        }
+        if ($scope.cubeMetaFrame.storage_type === null) {
+            $scope.cubeMetaFrame.storage_type = kylinConfig.getStorageEng();
+        }
 
         var errorInfo = "";
         angular.forEach(errors,function(item){
diff --git a/webapp/app/js/model/cubeConfig.js b/webapp/app/js/model/cubeConfig.js
index a83d4c9..42e4c34 100644
--- a/webapp/app/js/model/cubeConfig.js
+++ b/webapp/app/js/model/cubeConfig.js
@@ -27,6 +27,10 @@ KylinApp.constant('cubeConfig', {
     {name:'MapReduce',value: 2},
     {name:'Spark',value: 4}
   ],
+  storageTypes: [
+    {name: 'HBase', value: 2},
+    {name: 'Parquet', value: 4}
+  ],
   joinTypes: [
     {name: 'Left', value: 'left'},
     {name: 'Inner', value: 'inner'}
diff --git a/webapp/app/partials/cubeDesigner/advanced_settings.html b/webapp/app/partials/cubeDesigner/advanced_settings.html
index 89229d0..a6674da 100755
--- a/webapp/app/partials/cubeDesigner/advanced_settings.html
+++ b/webapp/app/partials/cubeDesigner/advanced_settings.html
@@ -371,7 +371,7 @@
         <!--Cube Engine-->
         <div class="form-group large-popover" style="margin-bottom:30px;">
           <h3 style="margin-left:42px;margin-bottom:30px;">Cube Engine  <i kylinpopover placement="right" title="Cube Engine" template="CubeEngineTip.html" class="fa fa-info-circle"></i></h3>
-          <div class="row" style="margin-left:42px">
+          <div class="row" style="margin-left:42px;margin-bottom:20px;">
             <label class="control-label col-xs-12 col-sm-3 no-padding-right font-color-default"><b>Engine Type :</b></label>
             <div class="col-xs-12 col-sm-6">
               <select style="width: 100%" chosen
@@ -384,6 +384,19 @@
               <span ng-if="state.mode=='view'&&cubeMetaFrame.engine_type==4">Spark</span>
             </div>
           </div>
+          <div class="row" style="margin-left:42px">
+            <label class="control-label col-xs-12 col-sm-3 no-padding-right font-color-default"><b>Storage Type :</b></label>
+            <div class="col-xs-12 col-sm-6">
+              <select style="width: 100%" chosen
+                      ng-model="cubeMetaFrame.storage_type"
+                      ng-if="state.mode=='edit'"
+                      ng-options="st.value as st.name for st in cubeConfig.storageTypes">
+                <option value="">--Select Storage Type--</option>
+              </select>
+              <span ng-if="state.mode=='view'&&cubeMetaFrame.storage_type==2">HBase</span>
+              <span ng-if="state.mode=='view'&&cubeMetaFrame.storage_type==4">Parquet</span>
+            </div>
+          </div>
         </div>
         <div class="form-group large-popover">
           <h3 style="margin-left:42px">Advanced Dictionaries  <i kylinpopover placement="right" title="Advanced Dictionaries" template="AdvancedDictionariesTip.html" class="fa fa-info-circle"></i></h3>


[kylin] 02/06: KYLIN-3624 Convert sequence files to Parquet in Spark

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit ff134bede6cfdc21cf5ef14f39f442d2a2996ea7
Author: Yichen Zhou <zh...@gmail.com>
AuthorDate: Thu Oct 11 21:01:53 2018 +0800

    KYLIN-3624 Convert sequence files to Parquet in Spark
---
 assembly/pom.xml                                   |   4 +
 .../kylin/job/constant/ExecutableConstants.java    |   1 +
 .../apache/kylin/engine/mr/JobBuilderSupport.java  |  10 +-
 .../engine/spark/SparkBatchCubingJobBuilder2.java  |  19 +-
 .../kylin/engine/spark/SparkCubingByLayer.java     |  10 +-
 examples/test_case_data/sandbox/kylin.properties   |   1 -
 .../kylin/storage/hbase/steps/HBaseSparkSteps.java |   2 +-
 .../kylin/storage/parquet/ParquetStorage.java      |   6 +-
 .../kylin/storage/parquet/cube/CubeSparkRPC.java   | 107 ++++
 .../storage/parquet/steps/ParquetJobSteps.java     |  37 ++
 .../storage/parquet/steps/ParquetMROutput.java     | 103 ++++
 .../storage/parquet/steps/ParquetSparkOutput.java  |   6 +-
 .../storage/parquet/steps/ParquetSparkSteps.java   |  66 +++
 .../storage/parquet/steps/SparkCubeParquet.java    | 543 +++++++++++++++++++++
 14 files changed, 891 insertions(+), 24 deletions(-)
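
To see the shape of this change without reading the whole diff: the new SparkCubeParquet step takes the cuboid files that the layer-cubing step already writes as SequenceFiles of (rowkey, measures) Text pairs, and rewrites each level through ParquetOutputFormat with a schema derived from the cube's dimensions and measures. The snippet below is only a minimal, self-contained sketch of that read-SequenceFile/write-Parquet pattern in Spark; the class name SeqToParquetSketch, the fixed two-column schema and the command-line arguments are illustrative assumptions, not part of this commit.

    // Minimal sketch (illustrative, not Kylin's SparkCubeParquet): read a
    // SequenceFile of (Text, Text) pairs and write it back out as Parquet Groups.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.parquet.example.data.Group;
    import org.apache.parquet.example.data.simple.SimpleGroupFactory;
    import org.apache.parquet.hadoop.ParquetOutputFormat;
    import org.apache.parquet.hadoop.example.GroupWriteSupport;
    import org.apache.parquet.schema.MessageTypeParser;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SeqToParquetSketch {
        // Hard-coded two-column schema; the real job derives one per cuboid
        // from the cube's dimension encodings and measure types.
        private static final String SCHEMA =
                "message row { required binary key (UTF8); required binary value (UTF8); }";

        public static void main(String[] args) throws Exception {
            String input = args[0];   // directory holding the SequenceFiles
            String output = args[1];  // Parquet output directory
            SparkConf sparkConf = new SparkConf().setAppName("seq-to-parquet-sketch");
            try (JavaSparkContext sc = new JavaSparkContext(sparkConf)) {
                Job job = Job.getInstance(sc.hadoopConfiguration());
                GroupWriteSupport.setSchema(MessageTypeParser.parseMessageType(SCHEMA), job.getConfiguration());
                ParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
                ParquetOutputFormat.setOutputPath(job, new Path(output));
                job.setOutputFormatClass(ParquetOutputFormat.class);

                JavaPairRDD<Text, Text> seq = sc.sequenceFile(input, Text.class, Text.class);
                // Convert each key/value pair into a Parquet Group; the schema string is
                // re-parsed inside the closure so nothing non-serializable is captured.
                JavaPairRDD<Void, Group> rows = seq.mapToPair(kv -> {
                    Group g = new SimpleGroupFactory(MessageTypeParser.parseMessageType(SCHEMA)).newGroup();
                    g.append("key", kv._1().toString());
                    g.append("value", kv._2().toString());
                    return new Tuple2<Void, Group>(null, g);
                });
                rows.saveAsNewAPIHadoopDataset(job.getConfiguration());
            }
        }
    }

The real job in this commit differs mainly in that it builds one schema per cuboid from the dimension encodings (INT32/INT64/BINARY columns rather than plain strings) and repartitions the RDD with a CuboidPartitioner so that every cuboid lands in its own set of Parquet files.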

diff --git a/assembly/pom.xml b/assembly/pom.xml
index dd3211a..c74f6c8 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -52,6 +52,10 @@
         </dependency>
         <dependency>
             <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-storage-parquet</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
             <artifactId>kylin-engine-mr</artifactId>
         </dependency>
         <dependency>
diff --git a/core-job/src/main/java/org/apache/kylin/job/constant/ExecutableConstants.java b/core-job/src/main/java/org/apache/kylin/job/constant/ExecutableConstants.java
index 5735a80..d405772 100644
--- a/core-job/src/main/java/org/apache/kylin/job/constant/ExecutableConstants.java
+++ b/core-job/src/main/java/org/apache/kylin/job/constant/ExecutableConstants.java
@@ -52,6 +52,7 @@ public final class ExecutableConstants {
     public static final String STEP_NAME_CREATE_HBASE_TABLE = "Create HTable";
     public static final String STEP_NAME_CONVERT_CUBOID_TO_HFILE = "Convert Cuboid Data to HFile";
     public static final String STEP_NAME_BULK_LOAD_HFILE = "Load HFile to HBase Table";
+    public static final String STEP_NAME_CONVERT_CUBOID_TO_PARQUET = "Convert Cuboid Data to Parquet";
     public static final String STEP_NAME_COPY_DICTIONARY = "Copy dictionary from Old Segment";
     public static final String STEP_NAME_MERGE_DICTIONARY = "Merge Cuboid Dictionary";
     public static final String STEP_NAME_MERGE_STATISTICS = "Merge Cuboid Statistics";
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
index 11c7d36..d21638a 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
@@ -338,10 +338,18 @@ public class JobBuilderSupport {
         return getJobWorkingDir(jobId) + "/hbase-conf.xml";
     }
 
-    public String getCounterOuputPath(String jobId) {
+    public String getCounterOutputPath(String jobId) {
         return getRealizationRootPath(jobId) + "/counter";
     }
 
+    public String getParquetOutputPath(String jobId) {
+        return getRealizationRootPath(jobId) + "/parquet/";
+    }
+
+    public String getParquetOutputPath() {
+        return getParquetOutputPath(seg.getLastBuildJobID());
+    }
+
     // ============================================================================
     // static methods also shared by other job flow participant
     // ----------------------------------------------------------------------------
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
index 41e4e49..62ccf03 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
@@ -75,16 +75,11 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
         // add materialize lookup tables if needed
         LookupMaterializeContext lookupMaterializeContext = addMaterializeLookupTableSteps(result);
 
-        if (seg.getStorageType() != ID_PARQUET) {
-            outputSide.addStepPhase2_BuildDictionary(result);
-        }
+        outputSide.addStepPhase2_BuildDictionary(result);
 
         // Phase 3: Build Cube
         addLayerCubingSteps(result, jobId, cuboidRootPath); // layer cubing, only selected algorithm will execute
-
-        if (seg.getStorageType() != ID_PARQUET) {
-            outputSide.addStepPhase3_BuildCube(result);
-        }
+        outputSide.addStepPhase3_BuildCube(result);
 
         // Phase 4: Update Metadata & Cleanup
         result.addTask(createUpdateCubeInfoAfterBuildStep(jobId, lookupMaterializeContext));
@@ -110,7 +105,7 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
 
         sparkExecutable.setJobId(jobId);
         sparkExecutable.setName(ExecutableConstants.STEP_NAME_FACT_DISTINCT_COLUMNS);
-        sparkExecutable.setCounterSaveAs(CubingJob.SOURCE_RECORD_COUNT + "," + CubingJob.SOURCE_SIZE_BYTES, getCounterOuputPath(jobId));
+        sparkExecutable.setCounterSaveAs(CubingJob.SOURCE_RECORD_COUNT + "," + CubingJob.SOURCE_SIZE_BYTES, getCounterOutputPath(jobId));
 
         StringBuilder jars = new StringBuilder();
 
@@ -123,11 +118,7 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
 
     protected void addLayerCubingSteps(final CubingJob result, final String jobId, final String cuboidRootPath) {
         final SparkExecutable sparkExecutable = new SparkExecutable();
-        if (seg.getStorageType() == ID_PARQUET) {
-            sparkExecutable.setClassName(SparkCubingByLayerParquet.class.getName());
-        } else {
-            sparkExecutable.setClassName(SparkCubingByLayer.class.getName());
-        }
+        sparkExecutable.setClassName(SparkCubingByLayer.class.getName());
         configureSparkJob(seg, sparkExecutable, jobId, cuboidRootPath);
         result.addTask(sparkExecutable);
     }
@@ -155,7 +146,7 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
         sparkExecutable.setName(ExecutableConstants.STEP_NAME_BUILD_SPARK_CUBE);
 
         if (seg.getStorageType() == ID_PARQUET) {
-            sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOuputPath(jobId));
+            sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOutputPath(jobId));
         }
     }
 
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
index 5931c64..288615c 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
@@ -30,7 +30,9 @@ import org.apache.commons.cli.Option;
 import org.apache.commons.cli.OptionBuilder;
 import org.apache.commons.cli.Options;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.AbstractApplication;
 import org.apache.kylin.common.util.ByteArray;
@@ -48,8 +50,6 @@ import org.apache.kylin.cube.model.CubeDesc;
 import org.apache.kylin.cube.model.CubeJoinedFlatTableEnrich;
 import org.apache.kylin.engine.EngineFactory;
 import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
-import org.apache.kylin.engine.mr.IMROutput2;
-import org.apache.kylin.engine.mr.MRUtil;
 import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
 import org.apache.kylin.engine.mr.common.BaseCuboidBuilder;
 import org.apache.kylin.engine.mr.common.BatchConstants;
@@ -126,7 +126,7 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
         String outputPath = optionsHelper.getOptionValue(OPTION_OUTPUT_PATH);
         String counterPath = optionsHelper.getOptionValue(OPTION_COUNTER_PATH);
 
-        Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1") };
+        Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1"), Class.forName("org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage"), Class.forName("scala.collection.immutable.Set$EmptySet$") };
 
         SparkConf conf = new SparkConf().setAppName("Cubing for:" + cubeName + " segment " + segmentId);
         //serialization conf
@@ -238,8 +238,8 @@ public class SparkCubingByLayer extends AbstractApplication implements Serializa
         final String cuboidOutputPath = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(hdfsBaseLocation, level);
         final SerializableConfiguration sConf = new SerializableConfiguration(job.getConfiguration());
 
-        IMROutput2.IMROutputFormat outputFormat = MRUtil.getBatchCubingOutputSide2(cubeSeg).getOuputFormat();
-        outputFormat.configureJobOutput(job, cuboidOutputPath, cubeSeg, cubeSeg.getCuboidScheduler(), level);
+        FileOutputFormat.setOutputPath(job, new Path(cuboidOutputPath));
+        job.setOutputFormatClass(SequenceFileOutputFormat.class);
 
         prepareOutput(rdd, kylinConfig, cubeSeg, level).mapToPair(
                 new PairFunction<Tuple2<ByteArray, Object[]>, org.apache.hadoop.io.Text, org.apache.hadoop.io.Text>() {
diff --git a/examples/test_case_data/sandbox/kylin.properties b/examples/test_case_data/sandbox/kylin.properties
index e6a6bd6..a89b4da 100644
--- a/examples/test_case_data/sandbox/kylin.properties
+++ b/examples/test_case_data/sandbox/kylin.properties
@@ -41,7 +41,6 @@ kylin.source.hive.client=cli
 # The metadata store in hbase
 kylin.metadata.url=kylin_metadata@hbase
 
-
 # The storage for final cube file in hbase
 kylin.storage.url=hbase
 
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HBaseSparkSteps.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HBaseSparkSteps.java
index ccab22f..ad83003 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HBaseSparkSteps.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HBaseSparkSteps.java
@@ -71,7 +71,7 @@ public class HBaseSparkSteps extends HBaseJobSteps {
         sparkExecutable.setJars(jars.toString());
 
         sparkExecutable.setName(ExecutableConstants.STEP_NAME_CONVERT_CUBOID_TO_HFILE);
-        sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOuputPath(jobId));
+        sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOutputPath(jobId));
 
         return sparkExecutable;
     }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
index cddce23..e05b433 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
@@ -19,12 +19,14 @@
 package org.apache.kylin.storage.parquet;
 
 import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.engine.mr.IMROutput2;
 import org.apache.kylin.engine.spark.ISparkOutput;
 import org.apache.kylin.metadata.realization.IRealization;
 import org.apache.kylin.metadata.realization.RealizationType;
 import org.apache.kylin.storage.IStorage;
 import org.apache.kylin.storage.IStorageQuery;
 import org.apache.kylin.storage.parquet.cube.CubeStorageQuery;
+import org.apache.kylin.storage.parquet.steps.ParquetMROutput;
 import org.apache.kylin.storage.parquet.steps.ParquetSparkOutput;
 
 public class ParquetStorage implements IStorage {
@@ -42,7 +44,9 @@ public class ParquetStorage implements IStorage {
     public <I> I adaptToBuildEngine(Class<I> engineInterface) {
         if (engineInterface == ISparkOutput.class) {
             return (I) new ParquetSparkOutput();
-        } else {
+        } else if (engineInterface == IMROutput2.class) {
+            return (I) new ParquetMROutput();
+        } else {
             throw new RuntimeException("Cannot adapt to " + engineInterface);
         }
     }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
new file mode 100644
index 0000000..1322da8
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.cube;
+
+import com.google.common.collect.Lists;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.QueryContext;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.engine.mr.JobBuilderSupport;
+import org.apache.kylin.gridtable.GTInfo;
+import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.gridtable.IGTScanner;
+import org.apache.kylin.gridtable.IGTStorage;
+import org.apache.kylin.metadata.model.ISegment;
+import org.apache.kylin.metadata.realization.RealizationType;
+import org.apache.kylin.storage.StorageContext;
+import org.apache.kylin.storage.parquet.spark.ParquetPayload;
+import org.apache.kylin.storage.parquet.spark.SparkSubmitter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+
+public class CubeSparkRPC implements IGTStorage {
+
+    public static final Logger logger = LoggerFactory.getLogger(CubeSparkRPC.class);
+
+    protected CubeSegment cubeSegment;
+    protected Cuboid cuboid;
+    protected GTInfo gtInfo;
+    protected StorageContext storageContext;
+
+    public CubeSparkRPC(ISegment segment, Cuboid cuboid, GTInfo gtInfo, StorageContext context) {
+        this.cubeSegment = (CubeSegment) segment;
+        this.cuboid = cuboid;
+        this.gtInfo = gtInfo;
+        this.storageContext = context;
+    }
+
+    protected List<Integer> getRequiredParquetColumns(GTScanRequest request) {
+        List<Integer> columnFamilies = Lists.newArrayList();
+
+        for (int i = 0; i < request.getSelectedColBlocks().trueBitCount(); i++) {
+            columnFamilies.add(request.getSelectedColBlocks().trueBitAt(i));
+        }
+
+        return columnFamilies;
+    }
+
+    @Override
+    public IGTScanner getGTScanner(GTScanRequest scanRequest) throws IOException {
+        String scanReqId = Integer.toHexString(System.identityHashCode(scanRequest));
+
+        ParquetPayload.ParquetPayloadBuilder builder = new ParquetPayload.ParquetPayloadBuilder();
+
+        JobBuilderSupport jobBuilderSupport = new JobBuilderSupport(cubeSegment, "");
+
+        List<List<Long>> layeredCuboids = cubeSegment.getCuboidScheduler().getCuboidsByLayer();
+        int level = 0;
+        for (List<Long> levelCuboids : layeredCuboids) {
+            if (levelCuboids.contains(cuboid.getId())) {
+                break;
+            }
+            level++;
+        }
+
+        String dataFolderName;
+        String parquetRootPath = jobBuilderSupport.getParquetOutputPath();
+        dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(parquetRootPath, level) + "/" + cuboid.getId();
+
+        builder.setGtScanRequest(scanRequest.toByteArray()).setGtScanRequestId(scanReqId)
+                .setKylinProperties(KylinConfig.getInstanceFromEnv().exportAllToString())
+                .setRealizationId(cubeSegment.getCubeInstance().getName()).setSegmentId(cubeSegment.getUuid())
+                .setDataFolderName(dataFolderName)
+                .setMaxRecordLength(scanRequest.getInfo().getMaxLength())
+                .setParquetColumns(getRequiredParquetColumns(scanRequest))
+                .setRealizationType(RealizationType.CUBE.toString()).setQueryId(QueryContext.current().getQueryId())
+                .setSpillEnabled(cubeSegment.getConfig().getQueryCoprocessorSpillEnabled())
+                .setMaxScanBytes(cubeSegment.getConfig().getPartitionMaxScanBytes())
+                .setStartTime(scanRequest.getStartTime()).setStorageType(cubeSegment.getStorageType());
+
+        ParquetPayload payload = builder.createParquetPayload();
+
+        logger.info("The scan {} for segment {} is ready to be submitted to spark client", scanReqId, cubeSegment);
+
+        return SparkSubmitter.submitParquetTask(scanRequest, payload);
+    }
+
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
new file mode 100644
index 0000000..b47a03a
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.engine.mr.JobBuilderSupport;
+import org.apache.kylin.job.execution.AbstractExecutable;
+
+
+/**
+ * Common steps for building a cube into Parquet
+ */
+public abstract class ParquetJobSteps extends JobBuilderSupport {
+
+    public ParquetJobSteps(CubeSegment seg) {
+        super(seg, null);
+    }
+
+
+    public abstract AbstractExecutable createConvertToParquetStep(String jobId);
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
new file mode 100644
index 0000000..fe85e24
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
+import org.apache.kylin.common.util.HadoopUtil;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.CuboidScheduler;
+import org.apache.kylin.engine.mr.IMROutput2;
+import org.apache.kylin.job.execution.DefaultChainedExecutable;
+import org.apache.kylin.metadata.model.IEngineAware;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * MR output side for Parquet storage. Created by Yichen on 10/16/18.
+ */
+public class ParquetMROutput implements IMROutput2 {
+
+    private static final Logger logger = LoggerFactory.getLogger(ParquetMROutput.class);
+
+    @Override
+    public IMRBatchCubingOutputSide2 getBatchCubingOutputSide(CubeSegment seg) {
+
+        boolean useSpark = seg.getCubeDesc().getEngineType() == IEngineAware.ID_SPARK;
+
+
+        // TODO: refactor; only the Spark engine can build Parquet output for now
+        if (!useSpark) {
+            throw new RuntimeException("Cannot adapt to MR engine");
+        }
+        final ParquetJobSteps steps = new ParquetSparkSteps(seg);
+
+        return new IMRBatchCubingOutputSide2() {
+
+            @Override
+            public void addStepPhase2_BuildDictionary(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public void addStepPhase4_Cleanup(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public IMROutputFormat getOuputFormat() {
+                return new ParquetMROutputFormat();
+            }
+        };
+
+    }
+
+    public static class ParquetMROutputFormat implements IMROutputFormat {
+
+        @Override
+        public void configureJobInput(Job job, String input) throws Exception {
+            job.setInputFormatClass(SequenceFileInputFormat.class);
+        }
+
+        @Override
+        public void configureJobOutput(Job job, String output, CubeSegment segment, CuboidScheduler cuboidScheduler,
+                                       int level) throws Exception {
+
+            Path outputPath = new Path(output);
+            FileOutputFormat.setOutputPath(job, outputPath);
+            job.setOutputFormatClass(SequenceFileOutputFormat.class);
+            HadoopUtil.deletePath(job.getConfiguration(), outputPath);
+        }
+    }
+
+    @Override
+    public IMRBatchMergeOutputSide2 getBatchMergeOutputSide(CubeSegment seg) {
+        return null;
+    }
+
+    @Override
+    public IMRBatchOptimizeOutputSide2 getBatchOptimizeOutputSide(CubeSegment seg) {
+        return null;
+    }
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
index 6f82d8b..176afd0 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
@@ -27,14 +27,16 @@ import java.util.List;
 public class ParquetSparkOutput implements ISparkOutput {
     @Override
     public ISparkBatchCubingOutputSide getBatchCubingOutputSide(CubeSegment seg) {
+        final ParquetJobSteps steps = new ParquetSparkSteps(seg);
+
         return new ISparkBatchCubingOutputSide() {
             @Override
             public void addStepPhase2_BuildDictionary(DefaultChainedExecutable jobFlow) {
-
             }
 
             @Override
             public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
+                jobFlow.addTask(steps.createConvertToParquetStep(jobFlow.getId()));
 
             }
 
@@ -47,6 +49,7 @@ public class ParquetSparkOutput implements ISparkOutput {
 
     @Override
     public ISparkBatchMergeOutputSide getBatchMergeOutputSide(CubeSegment seg) {
+        final ParquetJobSteps steps = new ParquetSparkSteps(seg);
         return new ISparkBatchMergeOutputSide() {
             @Override
             public void addStepPhase1_MergeDictionary(DefaultChainedExecutable jobFlow) {
@@ -55,6 +58,7 @@ public class ParquetSparkOutput implements ISparkOutput {
 
             @Override
             public void addStepPhase2_BuildCube(CubeSegment set, List<CubeSegment> mergingSegments, DefaultChainedExecutable jobFlow) {
+                jobFlow.addTask(steps.createConvertToParquetStep(jobFlow.getId()));
 
             }
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
new file mode 100644
index 0000000..296bc68
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.kylin.common.util.StringUtil;
+import org.apache.kylin.cube.CubeSegment;
+
+import org.apache.kylin.engine.mr.CubingJob;
+import org.apache.kylin.engine.spark.SparkBatchCubingJobBuilder2;
+import org.apache.kylin.engine.spark.SparkExecutable;
+import org.apache.kylin.job.constant.ExecutableConstants;
+import org.apache.kylin.job.execution.AbstractExecutable;
+
+
+public class ParquetSparkSteps extends ParquetJobSteps {
+
+    public ParquetSparkSteps(CubeSegment seg) {
+        super(seg);
+    }
+
+    @Override
+    public AbstractExecutable createConvertToParquetStep(String jobId) {
+
+        String cuboidRootPath = getCuboidRootPath(jobId);
+        String inputPath = cuboidRootPath + (cuboidRootPath.endsWith("/") ? "" : "/");
+
+        SparkBatchCubingJobBuilder2 jobBuilder2 = new SparkBatchCubingJobBuilder2(seg, null);
+        final SparkExecutable sparkExecutable = new SparkExecutable();
+        sparkExecutable.setClassName(SparkCubeParquet.class.getName());
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_CUBE_NAME.getOpt(), seg.getRealization().getName());
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_SEGMENT_ID.getOpt(), seg.getUuid());
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_INPUT_PATH.getOpt(), inputPath);
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_OUTPUT_PATH.getOpt(), getParquetOutputPath(jobId));
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_META_URL.getOpt(),
+                jobBuilder2.getSegmentMetadataUrl(seg.getConfig(), jobId));
+        sparkExecutable.setJobId(jobId);
+
+        StringBuilder jars = new StringBuilder();
+
+        StringUtil.appendWithSeparator(jars, seg.getConfig().getSparkAdditionalJars());
+        sparkExecutable.setJars(jars.toString());
+
+        sparkExecutable.setName(ExecutableConstants.STEP_NAME_CONVERT_CUBOID_TO_PARQUET);
+        sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOutputPath(jobId));
+
+        return sparkExecutable;
+    }
+
+
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
new file mode 100644
index 0000000..2a7c1ee
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
@@ -0,0 +1,543 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kylin.storage.parquet.steps;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskID;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.AbstractApplication;
+import org.apache.kylin.common.util.ByteArray;
+import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.common.util.BytesUtil;
+import org.apache.kylin.common.util.HadoopUtil;
+import org.apache.kylin.common.util.JsonUtil;
+import org.apache.kylin.common.util.OptionsHelper;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.cube.kv.RowConstants;
+import org.apache.kylin.cube.kv.RowKeyDecoder;
+import org.apache.kylin.cube.model.CubeDesc;
+import org.apache.kylin.dimension.AbstractDateDimEnc;
+import org.apache.kylin.dimension.DimensionEncoding;
+import org.apache.kylin.dimension.FixedLenDimEnc;
+import org.apache.kylin.dimension.FixedLenHexDimEnc;
+import org.apache.kylin.dimension.IDimensionEncodingMap;
+import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
+import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
+import org.apache.kylin.engine.mr.common.BatchConstants;
+import org.apache.kylin.engine.mr.common.CubeStatsReader;
+import org.apache.kylin.engine.mr.common.SerializableConfiguration;
+import org.apache.kylin.engine.spark.KylinSparkJobListener;
+import org.apache.kylin.engine.spark.SparkUtil;
+import org.apache.kylin.job.constant.ExecutableConstants;
+import org.apache.kylin.measure.BufferedMeasureCodec;
+import org.apache.kylin.measure.MeasureIngester;
+import org.apache.kylin.measure.MeasureType;
+import org.apache.kylin.measure.basic.BasicMeasureType;
+import org.apache.kylin.measure.basic.BigDecimalIngester;
+import org.apache.kylin.measure.basic.DoubleIngester;
+import org.apache.kylin.measure.basic.LongIngester;
+import org.apache.kylin.metadata.model.MeasureDesc;
+import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.GroupFactory;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.io.api.Binary;
+import org.apache.parquet.schema.MessageType;
+import org.apache.parquet.schema.OriginalType;
+import org.apache.parquet.schema.PrimitiveType;
+import org.apache.parquet.schema.Types;
+import org.apache.spark.Partitioner;
+import org.apache.spark.SparkConf;
+import org.apache.spark.api.java.JavaPairRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.api.java.function.PairFunction;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.Tuple2;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+
+public class SparkCubeParquet extends AbstractApplication implements Serializable {
+
+    protected static final Logger logger = LoggerFactory.getLogger(SparkCubeParquet.class);
+
+    public static final Option OPTION_CUBE_NAME = OptionBuilder.withArgName(BatchConstants.ARG_CUBE_NAME).hasArg()
+            .isRequired(true).withDescription("Cube Name").create(BatchConstants.ARG_CUBE_NAME);
+    public static final Option OPTION_SEGMENT_ID = OptionBuilder.withArgName("segment").hasArg().isRequired(true)
+            .withDescription("Cube Segment Id").create("segmentId");
+    public static final Option OPTION_META_URL = OptionBuilder.withArgName("metaUrl").hasArg().isRequired(true)
+            .withDescription("HDFS metadata url").create("metaUrl");
+    public static final Option OPTION_OUTPUT_PATH = OptionBuilder.withArgName(BatchConstants.ARG_OUTPUT).hasArg()
+            .isRequired(true).withDescription("Paqruet output path").create(BatchConstants.ARG_OUTPUT);
+    public static final Option OPTION_INPUT_PATH = OptionBuilder.withArgName(BatchConstants.ARG_INPUT).hasArg()
+            .isRequired(true).withDescription("Cuboid files PATH").create(BatchConstants.ARG_INPUT);
+    public static final Option OPTION_COUNTER_PATH = OptionBuilder.withArgName(BatchConstants.ARG_COUNTER_OUPUT).hasArg()
+            .isRequired(true).withDescription("Counter output path").create(BatchConstants.ARG_COUNTER_OUPUT);
+
+    private Options options;
+
+    public SparkCubeParquet() {
+        options = new Options();
+        options.addOption(OPTION_INPUT_PATH);
+        options.addOption(OPTION_CUBE_NAME);
+        options.addOption(OPTION_SEGMENT_ID);
+        options.addOption(OPTION_META_URL);
+        options.addOption(OPTION_OUTPUT_PATH);
+        options.addOption(OPTION_COUNTER_PATH);
+    }
+
+    @Override
+    protected Options getOptions() {
+        return options;
+    }
+
+    @Override
+    protected void execute(OptionsHelper optionsHelper) throws Exception {
+        final String metaUrl = optionsHelper.getOptionValue(OPTION_META_URL);
+        final String inputPath = optionsHelper.getOptionValue(OPTION_INPUT_PATH);
+        final String cubeName = optionsHelper.getOptionValue(OPTION_CUBE_NAME);
+        final String segmentId = optionsHelper.getOptionValue(OPTION_SEGMENT_ID);
+        final String outputPath = optionsHelper.getOptionValue(OPTION_OUTPUT_PATH);
+        final String counterPath = optionsHelper.getOptionValue(OPTION_COUNTER_PATH);
+
+        Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1"), Text.class, Group.class};
+
+        SparkConf conf = new SparkConf().setAppName("Converting Parquet File for: " + cubeName + " segment " + segmentId);
+        //serialization conf
+        conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
+        conf.set("spark.kryo.registrator", "org.apache.kylin.engine.spark.KylinKryoRegistrator");
+        conf.set("spark.kryo.registrationRequired", "true").registerKryoClasses(kryoClassArray);
+
+
+        KylinSparkJobListener jobListener = new KylinSparkJobListener();
+        try (JavaSparkContext sc = new JavaSparkContext(conf)){
+            sc.sc().addSparkListener(jobListener);
+            final SerializableConfiguration sConf = new SerializableConfiguration(sc.hadoopConfiguration());
+
+            final KylinConfig envConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(sConf, metaUrl);
+
+            final CubeInstance cubeInstance = CubeManager.getInstance(envConfig).getCube(cubeName);
+            final CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
+
+
+            final FileSystem fs = new Path(inputPath).getFileSystem(sc.hadoopConfiguration());
+
+            final int totalLevels = cubeSegment.getCuboidScheduler().getBuildLevel();
+            JavaPairRDD<Text, Text>[] allRDDs = new JavaPairRDD[totalLevels + 1];
+
+            final Job job = Job.getInstance(sConf.get());
+            logger.info("Input path: {}", inputPath);
+            logger.info("Output path: {}", outputPath);
+
+            // Read from cuboid and save to parquet
+            for (int level = 0; level <= totalLevels; level++) {
+                String cuboidPath = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(inputPath, level);
+                allRDDs[level] = SparkUtil.parseInputPath(cuboidPath, fs, sc, Text.class, Text.class);
+                saveToParquet(allRDDs[level], metaUrl, cubeName, cubeSegment, outputPath, level, job, envConfig);
+            }
+
+            Map<String, String> counterMap = Maps.newHashMap();
+            counterMap.put(ExecutableConstants.HDFS_BYTES_WRITTEN, String.valueOf(jobListener.metrics.getBytesWritten()));
+
+            // save counter to hdfs
+            HadoopUtil.writeToSequenceFile(sc.hadoopConfiguration(), counterPath, counterMap);
+        }
+
+    }
+
+    protected void saveToParquet(JavaPairRDD<Text, Text> rdd, String metaUrl, String cubeName, CubeSegment cubeSeg, String parquetOutput, int level, Job job, KylinConfig kylinConfig) throws Exception {
+        final IDimensionEncodingMap dimEncMap = cubeSeg.getDimensionEncodingMap();
+
+        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSeg.getCubeDesc());
+
+        final Map<TblColRef, String> colTypeMap = Maps.newHashMap();
+        final Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
+
+        MessageType schema = cuboidToMessageType(baseCuboid, dimEncMap, cubeSeg.getCubeDesc(), colTypeMap, meaTypeMap);
+
+        logger.info("Schema: {}", schema.toString());
+
+        final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig, level);
+
+        logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping.toString());
+
+        JavaPairRDD<Text, Text> repartitionedRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping));
+
+        String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(parquetOutput, level);
+
+        job.setOutputFormatClass(CustomParquetOutputFormat.class);
+        GroupWriteSupport.setSchema(schema, job.getConfiguration());
+        CustomParquetOutputFormat.setOutputPath(job, new Path(output));
+        CustomParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
+        CustomParquetOutputFormat.setCuboidToPartitionMapping(job, cuboidToPartitionMapping);
+
+        JavaPairRDD<Void, Group> groupRDD = repartitionedRDD
+                .mapToPair(new GenerateGroupRDDFunction(cubeName, cubeSeg.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap));
+
+        groupRDD.saveAsNewAPIHadoopDataset(job.getConfiguration());
+    }
+
+    static class CuboidPartitioner extends Partitioner {
+
+        private CuboidToPartitionMapping mapping;
+
+        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping) {
+            this.mapping = cuboidToPartitionMapping;
+        }
+
+        @Override
+        public int numPartitions() {
+            return mapping.getNumPartitions();
+        }
+
+        @Override
+        public int getPartition(Object key) {
+            Text textKey = (Text)key;
+            return mapping.getPartitionForCuboidId(textKey.getBytes());
+        }
+    }
+
+    public static class CuboidToPartitionMapping implements Serializable {
+        private Map<Long, List<Integer>> cuboidPartitions;
+        private int partitionNum;
+
+        public CuboidToPartitionMapping(Map<Long, List<Integer>> cuboidPartitions) {
+            this.cuboidPartitions = cuboidPartitions;
+            int partitions = 0;
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                partitions = partitions + entry.getValue().size();
+            }
+            this.partitionNum = partitions;
+        }
+
+        public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
+            cuboidPartitions = Maps.newHashMap();
+
+            List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
+            CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
+
+            int position = 0;
+            for (Long cuboidId : layeredCuboids) {
+                int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
+                List<Integer> positions = Lists.newArrayListWithCapacity(partition);
+
+                for (int i = position; i < position + partition; i++) {
+                    positions.add(i);
+                }
+
+                cuboidPartitions.put(cuboidId, positions);
+                position = position + partition;
+            }
+
+            this.partitionNum = position;
+        }
+
+        public String serialize() throws JsonProcessingException {
+            return JsonUtil.writeValueAsString(cuboidPartitions);
+        }
+
+        public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
+            Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
+            return new CuboidToPartitionMapping(cuboidPartitions);
+        }
+
+        public int getNumPartitions() {
+            return this.partitionNum;
+        }
+
+        public long getCuboidIdByPartition(int partition) {
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                if (entry.getValue().contains(partition)) {
+                    return entry.getKey();
+                }
+            }
+
+            throw new IllegalArgumentException("No cuboidId for partition id: " + partition);
+        }
+
+        public int getPartitionForCuboidId(byte[] key) {
+            long cuboidId = Bytes.toLong(key, RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
+            List<Integer> partitions = cuboidPartitions.get(cuboidId);
+            int partitionKey = mod(key, RowConstants.ROWKEY_COL_DEFAULT_LENGTH, key.length, partitions.size());
+
+            return partitions.get(partitionKey);
+        }
+
+        private int mod(byte[] src, int start, int end, int total) {
+            int sum = Bytes.hashBytes(src, start, end - start);
+            int mod = sum % total;
+            if (mod < 0)
+                mod += total;
+
+            return mod;
+        }
+
+        public int getPartitionNumForCuboidId(long cuboidId) {
+            return cuboidPartitions.get(cuboidId).size();
+        }
+
+        public String getPartitionFilePrefix(int partition) {
+            String prefix = "cuboid_";
+            long cuboid = getCuboidIdByPartition(partition);
+            int partNum = partition % getPartitionNumForCuboidId(cuboid);
+            prefix = prefix + cuboid + "_part" + partNum;
+
+            return prefix;
+        }
+
+        private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
+            double cuboidSize = cubeStatsReader.estimateCuboidSize(cuboidId);
+            float rddCut = kylinConfig.getSparkRDDPartitionCutMB();
+            int partition = (int) (cuboidSize / (rddCut * 10));
+            partition = Math.max(kylinConfig.getSparkMinPartition(), partition);
+            partition = Math.min(kylinConfig.getSparkMaxPartition(), partition);
+            return partition;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder sb = new StringBuilder();
+            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+                sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
+            }
+
+            return sb.toString();
+        }
+    }
+
+    public static class CustomParquetOutputFormat extends ParquetOutputFormat {
+        public static final String CUBOID_TO_PARTITION_MAPPING = "cuboidToPartitionMapping";
+
+        @Override
+        public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
+            FileOutputCommitter committer = (FileOutputCommitter)this.getOutputCommitter(context);
+            TaskID taskId = context.getTaskAttemptID().getTaskID();
+            int partition = taskId.getId();
+
+            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(CUBOID_TO_PARTITION_MAPPING));
+
+            return new Path(committer.getWorkPath(), getUniqueFile(context, mapping.getPartitionFilePrefix(partition)+ "-" + getOutputName(context), extension));
+        }
+
+        public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping) throws IOException {
+            String jsonStr = cuboidToPartitionMapping.serialize();
+
+            job.getConfiguration().set(CUBOID_TO_PARTITION_MAPPING, jsonStr);
+        }
+    }
+
+    static class GenerateGroupRDDFunction implements PairFunction<Tuple2<Text, Text>, Void, Group> {
+        private volatile transient boolean initialized = false;
+        private String cubeName;
+        private String segmentId;
+        private String metaUrl;
+        private SerializableConfiguration conf;
+        private List<MeasureDesc> measureDescs;
+        private RowKeyDecoder decoder;
+        private Map<TblColRef, String> colTypeMap;
+        private Map<MeasureDesc, String> meaTypeMap;
+        private GroupFactory factory;
+        private BufferedMeasureCodec measureCodec;
+
+        public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
+            this.cubeName = cubeName;
+            this.segmentId = segmentId;
+            this.metaUrl = metaurl;
+            this.conf = conf;
+            this.colTypeMap = colTypeMap;
+            this.meaTypeMap = meaTypeMap;
+        }
+
+        private void init() {
+            KylinConfig kConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
+            KylinConfig.setAndUnsetThreadLocalConfig(kConfig);
+            CubeInstance cubeInstance = CubeManager.getInstance(kConfig).getCube(cubeName);
+            CubeDesc cubeDesc = cubeInstance.getDescriptor();
+            CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
+            measureDescs = cubeDesc.getMeasures();
+            decoder = new RowKeyDecoder(cubeSegment);
+            factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
+            measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
+        }
+
+        @Override
+        public Tuple2<Void, Group> call(Tuple2<Text, Text> tuple) throws Exception {
+
+            logger.debug("call: transfer Text to byte[]");
+            if (initialized == false) {
+                synchronized (SparkCubeParquet.class) {
+                    if (initialized == false) {
+                        init();
+                        initialized = true;
+                    }
+                }
+            }
+
+            long cuboid = decoder.decode(tuple._1.getBytes());
+            List<String> values = decoder.getValues();
+            List<TblColRef> columns = decoder.getColumns();
+
+            Group group = factory.newGroup();
+
+            // for check
+            group.append("cuboidId", cuboid);
+
+            for (int i = 0; i < columns.size(); i++) {
+                TblColRef column = columns.get(i);
+                parseColValue(group, column, values.get(i));
+            }
+
+
+            byte[] encodedBytes = tuple._2().copyBytes();
+            int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(encodedBytes));
+
+            int valueOffset = 0;
+            for (int i = 0; i < valueLengths.length; ++i) {
+                MeasureDesc measureDesc = measureDescs.get(i);
+                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i]);
+                valueOffset += valueLengths[i];
+            }
+
+            return new Tuple2<>(null, group);
+        }
+
+        private void parseColValue(final Group group, final TblColRef colRef, final String value) {
+            if (value==null) {
+                logger.info("value is null");
+                return;
+            }
+            switch (colTypeMap.get(colRef)) {
+                case "int":
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Integer.valueOf(value));
+                    break;
+                case "long":
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Long.valueOf(value));
+                    break;
+                default:
+                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Binary.fromString(value));
+                    break;
+            }
+        }
+
+        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) throws IOException {
+            if (value==null) {
+                logger.info("value is null");
+                return;
+            }
+            switch (meaTypeMap.get(measureDesc)) {
+                case "long":
+                    group.append(measureDesc.getName(), BytesUtil.readLong(value, offset, length));
+                    break;
+                case "double":
+                    group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
+                    break;
+                default:
+                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
+                    break;
+            }
+        }
+    }
+
+    private MessageType cuboidToMessageType(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
+        Types.MessageTypeBuilder builder = Types.buildMessage();
+
+        List<TblColRef> colRefs = cuboid.getColumns();
+
+        builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named("cuboidId");
+
+        for (TblColRef colRef : colRefs) {
+            DimensionEncoding dimEnc = dimEncMap.get(colRef);
+
+            if (dimEnc instanceof AbstractDateDimEnc) {
+                builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+                colTypeMap.put(colRef, "long");
+            } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
+                org.apache.kylin.metadata.datatype.DataType colDataType = colRef.getType();
+                if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
+                    builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+                    colTypeMap.put(colRef, "long");
+                } else {
+                    // stringFamily && default
+                    builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
+                    colTypeMap.put(colRef, "string");
+                }
+            } else {
+                builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
+                colTypeMap.put(colRef, "int");
+            }
+        }
+
+        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
+
+        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
+            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
+            org.apache.kylin.metadata.datatype.DataType meaDataType = measureDesc.getFunction().getReturnDataType();
+            MeasureType measureType = measureDesc.getFunction().getMeasureType();
+
+            if (measureType instanceof BasicMeasureType) {
+                MeasureIngester measureIngester = aggrIngesters[i];
+                if (measureIngester instanceof LongIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.INT64).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "long");
+                } else if (measureIngester instanceof DoubleIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.DOUBLE).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "double");
+                } else if (measureIngester instanceof BigDecimalIngester) {
+                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL).precision(meaDataType.getPrecision()).scale(meaDataType.getScale()).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "decimal");
+                } else {
+                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
+                    meaTypeMap.put(measureDesc, "binary");
+                }
+            } else {
+                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
+                meaTypeMap.put(measureDesc, "binary");
+            }
+        }
+
+        return builder.named(String.valueOf(cuboid.getId()));
+    }
+
+    private String getColName(TblColRef colRef) {
+        return colRef.getTableAlias() + "_" + colRef.getName();
+    }
+}
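
For orientation, below is a minimal sketch (column and measure names are invented for illustration) of the kind of Parquet MessageType that the cuboidToMessageType helper above assembles: a cuboidId marker, dictionary-encoded dimensions as INT32, date/fixed-length numeric dimensions as INT64, string dimensions as UTF8 binary, and basic measures typed by their ingester. It uses the same parquet-mr Types builder calls as the code above.

    import org.apache.parquet.schema.MessageType;
    import org.apache.parquet.schema.OriginalType;
    import org.apache.parquet.schema.PrimitiveType;
    import org.apache.parquet.schema.Types;

    public class CuboidSchemaSketch {
        public static void main(String[] args) {
            // Hypothetical two-dimension, two-measure cuboid; all names are illustrative only.
            MessageType schema = Types.buildMessage()
                    .optional(PrimitiveType.PrimitiveTypeName.INT64).named("cuboidId")        // cuboid marker column
                    .optional(PrimitiveType.PrimitiveTypeName.INT32).named("F_SELLER_ID")      // dictionary-encoded dimension
                    .optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8)
                            .named("F_LSTG_FORMAT_NAME")                                       // fixed-length string dimension
                    .required(PrimitiveType.PrimitiveTypeName.INT64).named("_COUNT_")          // LongIngester measure
                    .required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL)
                            .precision(19).scale(4).named("GMV_SUM")                           // BigDecimalIngester measure
                    .named("14336");                                                            // schema named by cuboid id
            System.out.println(schema);
        }
    }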


[kylin] 06/06: KYLIN-3625 code review

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit e8f96bb2534e07f8647215c1e878ec5af19399d0
Author: shaofengshi <sh...@apache.org>
AuthorDate: Mon Dec 10 11:40:21 2018 +0800

    KYLIN-3625 code review
---
 .../org/apache/kylin/common/KylinConfigBase.java   |   2 +-
 .../src/main/resources/kylin-defaults.properties   |  12 +-
 .../java/org/apache/kylin/gridtable/GTRecord.java  |  45 ---
 .../java/org/apache/kylin/gridtable/GTUtil.java    |  36 ++
 .../filter/BuiltInFunctionTupleFilter.java         |  16 +-
 .../kylin/metadata/filter/CaseTupleFilter.java     |   6 +-
 .../kylin/metadata/filter/ColumnTupleFilter.java   |   2 +-
 .../kylin/metadata/filter/CompareTupleFilter.java  |   8 +-
 .../kylin/metadata/filter/ConstantTupleFilter.java |   2 +-
 .../kylin/metadata/filter/DynamicTupleFilter.java  |   2 +-
 .../kylin/metadata/filter/ExtractTupleFilter.java  |   2 +-
 .../kylin/metadata/filter/LogicalTupleFilter.java  |   4 +-
 .../apache/kylin/metadata/filter/TupleFilter.java  |   2 +-
 .../metadata/filter/UDF/MassInTupleFilter.java     |   2 +-
 .../metadata/filter/UnsupportedTupleFilter.java    |   2 +-
 .../storage/gtrecord/CubeScanRangePlanner.java     |   2 +-
 .../storage/path/DefaultStoragePathBuilder.java    |   1 -
 .../kylin/storage/path/IStoragePathBuilder.java    |   8 +-
 .../apache/kylin/engine/mr/JobBuilderSupport.java  |   4 +-
 .../engine/spark/SparkCubingByLayerParquet.java    | 433 ---------------------
 .../apache/kylin/rest/job/StorageCleanupJob.java   |   2 +-
 .../execute/d861b8b7-c773-47ab-bb1e-c8782ae8d930   |   2 +-
 .../org/apache/kylin/source/hive/HiveMRInput.java  |  10 +-
 .../apache/kylin/source/hive/HiveSparkInput.java   |   2 +-
 .../apache/kylin/source/hive/HiveMRInputTest.java  |   4 +-
 .../apache/kylin/source/kafka/KafkaMRInput.java    |   4 +-
 .../apache/kylin/source/kafka/KafkaSparkInput.java |   2 +-
 .../storage/hbase/lookup/HBaseLookupMRSteps.java   |   2 +-
 .../hbase/steps/HDFSPathGarbageCollectionStep.java |   2 +-
 .../kylin/storage/hbase/util/CubeMigrationCLI.java |   4 +-
 .../storage/hbase/util/StorageCleanupJob.java      |   2 +-
 .../kylin/storage/parquet/ParquetStorage.java      |   2 +-
 .../storage/parquet/cube/CubeStorageQuery.java     |   9 -
 .../storage/parquet/spark/ParquetPayload.java      |   5 +-
 .../kylin/storage/parquet/spark/ParquetTask.java   |  57 +--
 .../storage/parquet/spark/SparkSubmitter.java      |   9 +-
 .../spark/gtscanner/ParquetRecordGTScanner.java    |  54 ++-
 .../gtscanner/ParquetRecordGTScanner4Cube.java     |  34 +-
 .../parquet/steps/ConvertToParquetReducer.java     |  21 +-
 .../parquet/steps/CuboidToPartitionMapping.java    |  25 +-
 .../storage/parquet/steps/MRCubeParquetJob.java    |  12 +-
 .../storage/parquet/steps/ParquetConvertor.java    |   9 +-
 .../storage/parquet/steps/ParquetJobSteps.java     |   6 +-
 .../storage/parquet/steps/ParquetMROutput.java     |   6 -
 .../storage/parquet/steps/ParquetMRSteps.java      |   1 -
 .../storage/parquet/steps/SparkCubeParquet.java    |  61 +--
 .../apache/kylin/ext/DebugTomcatClassLoader.java   |   2 +-
 .../org/apache/kylin/ext/SparkClassLoader.java     |   4 +-
 .../org/apache/kylin/tool/CubeMigrationCLI.java    |   4 +-
 webapp/app/js/model/cubeConfig.js                  |   4 +-
 50 files changed, 239 insertions(+), 713 deletions(-)

diff --git a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
index 2633ddf..e7f7236 100644
--- a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
+++ b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
@@ -1864,7 +1864,7 @@ abstract public class KylinConfigBase implements Serializable {
         return getOptional("kylin.source.jdbc.adaptor");
     }
 
-    public String getStorageSystemPathBuilderClz() {
+    public String getStoragePathBuilder() {
         return getOptional("storage.path.builder", "org.apache.kylin.storage.path.DefaultStoragePathBuilder");
     }
 }
diff --git a/core-common/src/main/resources/kylin-defaults.properties b/core-common/src/main/resources/kylin-defaults.properties
index 8115a50..d0a4ae2 100644
--- a/core-common/src/main/resources/kylin-defaults.properties
+++ b/core-common/src/main/resources/kylin-defaults.properties
@@ -357,23 +357,21 @@ kylin.engine.spark-conf-mergedict.spark.memory.fraction=0.2
 
 kylin.storage.columnar.spark-env.HADOOP_CONF_DIR=${kylin_hadoop_conf_dir}
 ## for any spark config entry in http://spark.apache.org/docs/latest/configuration.html#spark-properties, prefix it with "kylin.storage.columnar.spark-conf" and append here
-kylin.storage.columnar.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current -Dzipkin.collector-hostname=${ZIPKIN_HOSTNAME} -Dzipkin.collector-port=${ZIPKIN_SCRIBE_PORT} -DinfluxDB.address=${INFLUXDB_ADDRESS} -Dlog4j.configuration=spark-executor-log4j.properties -Dlog4j.debug -Dkap.spark.identifier=${KAP_SPARK_IDENTIFIER} -Dkap.hdfs.working.dir=${KAP_HDFS_WORKING_DIR} -Dkap.metadata.url=${KAP_METADATA_IDENTIFIER} -XX:MaxDirectMemorySize=896M -Dsparder.dict.cache.size=${SPARDER [...]
+kylin.storage.columnar.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
 kylin.storage.columnar.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
 kylin.storage.columnar.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
-#kylin.storage.columnar.spark-conf.spark.serializer=org.apache.spark.serializer.JavaSerializer
+kylin.storage.columnar.spark-conf.yarn.am.memory=512m
 kylin.storage.columnar.spark-conf.spark.driver.memory=1g
 kylin.storage.columnar.spark-conf.spark.executor.memory=1g
-kylin.storage.columnar.spark-conf.spark.yarn.executor.memoryOverhead=512
-kylin.storage.columnar.spark-conf.yarn.am.memory=512m
 kylin.storage.columnar.spark-conf.spark.executor.cores=1
 kylin.storage.columnar.spark-conf.spark.executor.instances=1
+kylin.storage.columnar.spark-conf.spark.yarn.executor.memoryOverhead=378
+kylin.storage.columnar.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
 kylin.storage.columnar.spark-conf.spark.task.maxFailures=1
-kylin.storage.columnar.spark-conf.spark.ui.port=4041
 kylin.storage.columnar.spark-conf.spark.locality.wait=0s
 kylin.storage.columnar.spark-conf.spark.sql.dialect=hiveql
-kylin.storage.columnar.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
 kylin.storage.columnar.spark-conf.hive.execution.engine=MR
 kylin.storage.columnar.spark-conf.spark.scheduler.listenerbus.eventqueue.size=100000000
 kylin.storage.columnar.spark-conf.spark.master=yarn-client
 kylin.storage.columnar.spark-conf.spark.broadcast.compress=false
-
+#kylin.storage.columnar.spark-conf.spark.yarn.archive=hdfs://namenode:8020/kylin/spark/spark-libs.jar
\ No newline at end of file
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
index 24278c4..8f0e0fb 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
@@ -20,17 +20,7 @@ package org.apache.kylin.gridtable;
 
 import com.google.common.base.Preconditions;
 import org.apache.kylin.common.util.ByteArray;
-import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.common.util.ImmutableBitSet;
-import org.apache.kylin.dimension.DictionaryDimEnc;
-import org.apache.kylin.measure.bitmap.BitmapSerializer;
-import org.apache.kylin.measure.dim.DimCountDistincSerializer;
-import org.apache.kylin.measure.extendedcolumn.ExtendedColumnSerializer;
-import org.apache.kylin.measure.hllc.HLLCSerializer;
-import org.apache.kylin.measure.percentile.PercentileSerializer;
-import org.apache.kylin.measure.raw.RawSerializer;
-import org.apache.kylin.measure.topn.TopNCounterSerializer;
-import org.apache.kylin.metadata.datatype.DataTypeSerializer;
 
 import java.nio.ByteBuffer;
 import java.util.Arrays;
@@ -112,41 +102,6 @@ public class GTRecord implements Comparable<GTRecord> {
         return this;
     }
 
-    /** set record to the codes of specified values, reuse given space to hold the codes */
-    @SuppressWarnings("checkstyle:BooleanExpressionComplexity")
-    public GTRecord setValuesParquet(ImmutableBitSet selectedCols, ByteArray space, Object... values) {
-        assert selectedCols.cardinality() == values.length;
-
-        ByteBuffer buf = space.asBuffer();
-        int pos = buf.position();
-        for (int i = 0; i < selectedCols.trueBitCount(); i++) {
-            int c = selectedCols.trueBitAt(i);
-
-            DataTypeSerializer serializer = info.codeSystem.getSerializer(c);
-            if (serializer instanceof DictionaryDimEnc.DictionarySerializer) {
-                int len = serializer.peekLength(buf);
-                BytesUtil.writeUnsigned((Integer) values[i], len, buf);
-                int newPos = buf.position();
-                cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
-                pos = newPos;
-            } else if (serializer instanceof TopNCounterSerializer ||
-                    serializer instanceof HLLCSerializer ||
-                    serializer instanceof BitmapSerializer ||
-                    serializer instanceof ExtendedColumnSerializer ||
-                    serializer instanceof PercentileSerializer ||
-                    serializer instanceof DimCountDistincSerializer ||
-                    serializer instanceof RawSerializer) {
-                cols[c].reset((byte[]) values[i], 0, ((byte[]) values[i]).length);
-            } else {
-                info.codeSystem.encodeColumnValue(c, values[i], buf);
-                int newPos = buf.position();
-                cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
-                pos = newPos;
-            }
-        }
-        return this;
-    }
-
     /** decode and return the values of this record */
     public Object[] getValues() {
         return getValues(info.colAll, new Object[info.getColumnCount()]);
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
index 49c68c5..8821a8f 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
@@ -456,4 +456,40 @@ public class GTUtil {
             }
         }
     }
+
+    /** set record to the codes of specified values, reuse given space to hold the codes */
+    public static GTRecord setValuesParquet(GTRecord record, ByteBuffer buf, Map<Integer, Integer> dictCols,
+            Map<Integer, Integer> binaryCols, Map<Integer, Integer> otherCols, Object[] values) {
+
+        int pos = buf.position();
+        int i, c;
+        for (Map.Entry<Integer, Integer> entry : dictCols.entrySet()) {
+            i = entry.getKey();
+            c = entry.getValue();
+            DictionaryDimEnc.DictionarySerializer serializer = (DictionaryDimEnc.DictionarySerializer) record.info.codeSystem
+                    .getSerializer(c);
+            int len = serializer.peekLength(buf);
+            BytesUtil.writeUnsigned((Integer) values[i], len, buf);
+            int newPos = buf.position();
+            record.cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
+            pos = newPos;
+        }
+
+        for (Map.Entry<Integer, Integer> entry : binaryCols.entrySet()) {
+            i = entry.getKey();
+            c = entry.getValue();
+            record.cols[c].reset((byte[]) values[i], 0, ((byte[]) values[i]).length);
+        }
+
+        for (Map.Entry<Integer, Integer> entry : otherCols.entrySet()) {
+            i = entry.getKey();
+            c = entry.getValue();
+            record.info.codeSystem.encodeColumnValue(c, values[i], buf);
+            int newPos = buf.position();
+            record.cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
+            pos = newPos;
+        }
+
+        return record;
+    }
 }
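
As a rough sketch of how a caller could populate the three index maps that the new GTUtil.setValuesParquet expects, the classification below mirrors the serializer checks in the removed GTRecord method above. It assumes the helper lives in the org.apache.kylin.gridtable package so it can reach info.codeSystem the way GTRecord did; the class and method names are made up for illustration.

    package org.apache.kylin.gridtable;

    import java.util.Map;

    import org.apache.kylin.common.util.ImmutableBitSet;
    import org.apache.kylin.dimension.DictionaryDimEnc;
    import org.apache.kylin.measure.bitmap.BitmapSerializer;
    import org.apache.kylin.measure.dim.DimCountDistincSerializer;
    import org.apache.kylin.measure.extendedcolumn.ExtendedColumnSerializer;
    import org.apache.kylin.measure.hllc.HLLCSerializer;
    import org.apache.kylin.measure.percentile.PercentileSerializer;
    import org.apache.kylin.measure.raw.RawSerializer;
    import org.apache.kylin.measure.topn.TopNCounterSerializer;
    import org.apache.kylin.metadata.datatype.DataTypeSerializer;

    class ParquetColumnClassifier {
        // Buckets the selected columns by how their values are written:
        // dictionary codes, pre-serialized byte[] measures, or everything else.
        static void classify(GTInfo info, ImmutableBitSet selectedCols, Map<Integer, Integer> dictCols,
                Map<Integer, Integer> binaryCols, Map<Integer, Integer> otherCols) {
            for (int i = 0; i < selectedCols.trueBitCount(); i++) {
                int c = selectedCols.trueBitAt(i);
                DataTypeSerializer serializer = info.codeSystem.getSerializer(c);
                if (serializer instanceof DictionaryDimEnc.DictionarySerializer) {
                    dictCols.put(i, c);        // written as fixed-length unsigned ints
                } else if (serializer instanceof TopNCounterSerializer || serializer instanceof HLLCSerializer
                        || serializer instanceof BitmapSerializer || serializer instanceof ExtendedColumnSerializer
                        || serializer instanceof PercentileSerializer || serializer instanceof DimCountDistincSerializer
                        || serializer instanceof RawSerializer) {
                    binaryCols.put(i, c);      // complex measures already arrive as byte[]
                } else {
                    otherCols.put(i, c);       // encoded through the code system
                }
            }
        }
    }

The three maps can then be handed, together with the target GTRecord and a reusable ByteBuffer, to GTUtil.setValuesParquet.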
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
index 38cbd66..36b5369 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
@@ -177,18 +177,18 @@ public class BuiltInFunctionTupleFilter extends FunctionTupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         List<? extends TupleFilter> childFilter = this.getChildren();
         String op = this.getName();
         switch (op) {
             case "LIKE":
                 assert childFilter.size() == 2;
-                return childFilter.get(0).toSparkSqlFilter() + toSparkFuncMap.get(op) + childFilter.get(1).toSparkSqlFilter();
+                return childFilter.get(0).toSQL() + toSparkFuncMap.get(op) + childFilter.get(1).toSQL();
             case "||":
                 StringBuilder result = new StringBuilder().append(toSparkFuncMap.get(op)).append("(");
                 int index = 0;
                 for (TupleFilter filter : childFilter) {
-                    result.append(filter.toSparkSqlFilter());
+                    result.append(filter.toSQL());
                     if (index < childFilter.size() - 1) {
                         result.append(",");
                     }
@@ -200,17 +200,17 @@ public class BuiltInFunctionTupleFilter extends FunctionTupleFilter {
             case "UPPER":
             case "CHAR_LENGTH":
                 assert childFilter.size() == 1;
-                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSparkSqlFilter() + ")";
+                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSQL() + ")";
             case "SUBSTRING":
                 assert childFilter.size() == 3;
-                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSparkSqlFilter() + "," + childFilter.get(1).toSparkSqlFilter() + "," + childFilter.get(2).toSparkSqlFilter() + ")";
+                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSQL() + "," + childFilter.get(1).toSQL() + "," + childFilter.get(2).toSQL() + ")";
             default:
                 if (childFilter.size() == 1) {
-                    return op + "(" + childFilter.get(0).toSparkSqlFilter() + ")";
+                    return op + "(" + childFilter.get(0).toSQL() + ")";
                 } else if (childFilter.size() == 2) {
-                    return childFilter.get(0).toSparkSqlFilter() + op + childFilter.get(1).toSparkSqlFilter();
+                    return childFilter.get(0).toSQL() + op + childFilter.get(1).toSQL();
                 } else if (childFilter.size() == 3) {
-                    return op + "(" + childFilter.get(0).toSparkSqlFilter() + "," + childFilter.get(1).toSparkSqlFilter() + "," + childFilter.get(2).toSparkSqlFilter() + ")";
+                    return op + "(" + childFilter.get(0).toSQL() + "," + childFilter.get(1).toSQL() + "," + childFilter.get(2).toSQL() + ")";
                 }
                 throw new IllegalArgumentException("Operator " + op + " is not supported");
         }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
index 4305557..16e645b 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
@@ -134,16 +134,16 @@ public class CaseTupleFilter extends TupleFilter implements IOptimizeableTupleFi
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         String result = "(case ";
         TupleFilter whenFilter;
         TupleFilter thenFilter;
         for (int i = 0; i < this.getWhenFilters().size(); i++) {
             whenFilter = this.getWhenFilters().get(i);
             thenFilter = this.getThenFilters().get(i);
-            result += " when " + whenFilter.toSparkSqlFilter() + " then " + thenFilter.toSparkSqlFilter();
+            result += " when " + whenFilter.toSQL() + " then " + thenFilter.toSQL();
         }
-        result += " else " + this.getElseFilter().toSparkSqlFilter();
+        result += " else " + this.getElseFilter().toSQL();
         result += " end)";
         return result;
     }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
index 09a16f5..caab5a7 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
@@ -163,7 +163,7 @@ public class ColumnTupleFilter extends TupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         return this.columnRef.getTableAlias() + "_" + this.columnRef.getName();
     }
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
index b63ac0a..33e9819 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
@@ -280,7 +280,7 @@ public class CompareTupleFilter extends TupleFilter implements IOptimizeableTupl
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         List<? extends TupleFilter> childFilter = this.getChildren();
         switch (this.getOperator()) {
             case EQ:
@@ -290,15 +290,15 @@ public class CompareTupleFilter extends TupleFilter implements IOptimizeableTupl
             case GTE:
             case LTE:
                 assert childFilter.size() == 2;
-                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator()) + childFilter.get(1).toSparkSqlFilter();
+                return childFilter.get(0).toSQL() + toSparkOpMap.get(this.getOperator()) + childFilter.get(1).toSQL();
             case IN:
             case NOTIN:
                 assert childFilter.size() == 2;
-                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator()) + "(" + childFilter.get(1).toSparkSqlFilter() + ")";
+                return childFilter.get(0).toSQL() + toSparkOpMap.get(this.getOperator()) + "(" + childFilter.get(1).toSQL() + ")";
             case ISNULL:
             case ISNOTNULL:
                 assert childFilter.size() == 1;
-                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator());
+                return childFilter.get(0).toSQL() + toSparkOpMap.get(this.getOperator());
             default:
                 throw new IllegalStateException("operator " + this.getOperator() + " not supported: ");
         }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
index e9ecf16..61d7fb6 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
@@ -114,7 +114,7 @@ public class ConstantTupleFilter extends TupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         if (this.equals(TRUE)) {
             return "true";
         } else if (this.equals(FALSE)) {
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
index c4490e5..ffc1527 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
@@ -79,7 +79,7 @@ public class DynamicTupleFilter extends TupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         return "1=1";
     }
 
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
index 36ea021..59c3e5f 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
@@ -123,7 +123,7 @@ public class ExtractTupleFilter extends TupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         return "1=1";
     }
 
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
index 99cf3f3..ebea184 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
@@ -155,7 +155,7 @@ public class LogicalTupleFilter extends TupleFilter implements IOptimizeableTupl
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         StringBuilder result = new StringBuilder("");
         switch (this.getOperator()) {
             case AND:
@@ -164,7 +164,7 @@ public class LogicalTupleFilter extends TupleFilter implements IOptimizeableTupl
                 String op = toSparkOpMap.get(this.getOperator());
                 int index = 0;
                 for (TupleFilter filter : this.getChildren()) {
-                    result.append(filter.toSparkSqlFilter());
+                    result.append(filter.toSQL());
                     if (index < this.getChildren().size() - 1) {
                         result.append(op);
                     }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
index 28a8c6c..0d3fb3e 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
@@ -421,6 +421,6 @@ public abstract class TupleFilter {
         }
     }
 
-    public abstract String toSparkSqlFilter();
+    public abstract String toSQL();
 
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
index b153003..ab7f6e0 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
@@ -154,7 +154,7 @@ public class MassInTupleFilter extends FunctionTupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         return "1=1";
     }
 
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
index 143ee6d..3bd147c 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
@@ -58,7 +58,7 @@ public class UnsupportedTupleFilter extends TupleFilter {
     }
 
     @Override
-    public String toSparkSqlFilter() {
+    public String toSQL() {
         return "1=1";
     }
 }
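
A small, hedged example of the renamed API: a LogicalTupleFilter over two constant filters renders to a Spark SQL predicate string, which the CubeScanRangePlanner diff below stores as filterPushDownSQL. The exact operator text comes from toSparkOpMap, so the output noted in the comment is approximate.

    import org.apache.kylin.metadata.filter.ConstantTupleFilter;
    import org.apache.kylin.metadata.filter.LogicalTupleFilter;
    import org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum;

    public class ToSqlExample {
        public static void main(String[] args) {
            LogicalTupleFilter and = new LogicalTupleFilter(FilterOperatorEnum.AND);
            and.addChild(ConstantTupleFilter.TRUE);   // renders as "true"
            and.addChild(ConstantTupleFilter.FALSE);  // renders as "false"
            // Prints roughly "true and false"; the join token is whatever toSparkOpMap maps AND to.
            System.out.println(and.toSQL());
        }
    }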
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java b/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
index 60fe33f..9c21d82 100644
--- a/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
+++ b/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
@@ -111,7 +111,7 @@ public class CubeScanRangePlanner extends ScanRangePlannerBase {
         this.havingFilter = havingFilter;
 
         if (convertedFilter != null) {
-            this.filterPushDownSQL = convertedFilter.toSparkSqlFilter();
+            this.filterPushDownSQL = convertedFilter.toSQL();
             logger.info("--filterPushDownSQL--: {}", this.filterPushDownSQL);
         }
 
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java b/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java
index 1e50ba0..858a431 100644
--- a/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java
+++ b/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java
@@ -34,7 +34,6 @@ public class DefaultStoragePathBuilder implements IStoragePathBuilder {
     @Override
     public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId) {
         String jobWorkingDir = getJobWorkingDir(cubeSegment.getConfig().getHdfsWorkingDirectory(), jobId);
-
         return jobWorkingDir + SLASH + cubeSegment.getRealization().getName();
     }
 
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java b/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java
index 7a70f98..5ad20d1 100644
--- a/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java
+++ b/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java
@@ -21,11 +21,11 @@ package org.apache.kylin.storage.path;
 import org.apache.kylin.cube.CubeSegment;
 
 public interface IStoragePathBuilder {
-    public static final String SLASH = "/";
+    String SLASH = "/";
 
-    public String getJobWorkingDir(String workingDir, String jobId);
+    String getJobWorkingDir(String workingDir, String jobId);
 
-    public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId);
+    String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId);
 
-    public String getRealizationFinalDataPath(CubeSegment cubeSegment);
+    String getRealizationFinalDataPath(CubeSegment cubeSegment);
 }
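
This trimmed-down interface is the pluggable piece behind the storage.path.builder setting. A hedged sketch of a custom implementation follows; the package, class name, and directory layout are invented for illustration and are not Kylin's default.

    package com.example.kylin;  // hypothetical package

    import org.apache.kylin.cube.CubeSegment;
    import org.apache.kylin.storage.path.IStoragePathBuilder;

    // Illustrative only: groups job output under a per-job directory and keeps
    // final segment data under <workingDir>/parquet/<cube>/<segment-uuid>.
    public class FlatStoragePathBuilder implements IStoragePathBuilder {

        @Override
        public String getJobWorkingDir(String workingDir, String jobId) {
            return workingDir + SLASH + "jobs" + SLASH + jobId;
        }

        @Override
        public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId) {
            String workingDir = cubeSegment.getConfig().getHdfsWorkingDirectory();
            return getJobWorkingDir(workingDir, jobId) + SLASH + cubeSegment.getRealization().getName();
        }

        @Override
        public String getRealizationFinalDataPath(CubeSegment cubeSegment) {
            return cubeSegment.getConfig().getHdfsWorkingDirectory() + SLASH + "parquet"
                    + SLASH + cubeSegment.getRealization().getName() + SLASH + cubeSegment.getUuid();
        }
    }

Such a class would be wired in via storage.path.builder=com.example.kylin.FlatStoragePathBuilder and instantiated through ClassUtil.newInstance, as the JobBuilderSupport diff below shows.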
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
index 8525dcc..934c04e 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
@@ -77,8 +77,8 @@ public class JobBuilderSupport {
         this.config = new JobEngineConfig(seg.getConfig());
         this.seg = seg;
         this.submitter = submitter;
-        String pathBuilderClz = seg.getConfig().getStorageSystemPathBuilderClz();
-        this.storagePathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(pathBuilderClz);
+        String pathBuilder = seg.getConfig().getStoragePathBuilder();
+        this.storagePathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(pathBuilder);
     }
 
     public MapReduceExecutable createFactDistinctColumnsStep(String jobId) {
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
deleted file mode 100644
index 154c058..0000000
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
+++ /dev/null
@@ -1,433 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.kylin.engine.spark;
-
-import com.fasterxml.jackson.core.JsonProcessingException;
-import com.fasterxml.jackson.core.type.TypeReference;
-import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
-import org.apache.commons.lang3.StringUtils;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-import org.apache.hadoop.mapreduce.TaskID;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
-import org.apache.kylin.common.KylinConfig;
-import org.apache.kylin.common.util.ByteArray;
-import org.apache.kylin.common.util.Bytes;
-import org.apache.kylin.common.util.BytesUtil;
-import org.apache.kylin.common.util.JsonUtil;
-import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.cube.CubeManager;
-import org.apache.kylin.cube.CubeSegment;
-import org.apache.kylin.cube.cuboid.Cuboid;
-import org.apache.kylin.cube.kv.RowConstants;
-import org.apache.kylin.cube.kv.RowKeyDecoder;
-import org.apache.kylin.cube.model.CubeDesc;
-import org.apache.kylin.dimension.AbstractDateDimEnc;
-import org.apache.kylin.dimension.DimensionEncoding;
-import org.apache.kylin.dimension.FixedLenDimEnc;
-import org.apache.kylin.dimension.FixedLenHexDimEnc;
-import org.apache.kylin.dimension.IDimensionEncodingMap;
-import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
-import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
-import org.apache.kylin.engine.mr.common.CubeStatsReader;
-import org.apache.kylin.engine.mr.common.SerializableConfiguration;
-import org.apache.kylin.measure.BufferedMeasureCodec;
-import org.apache.kylin.measure.MeasureIngester;
-import org.apache.kylin.measure.MeasureType;
-import org.apache.kylin.measure.basic.BasicMeasureType;
-import org.apache.kylin.measure.basic.BigDecimalIngester;
-import org.apache.kylin.measure.basic.DoubleIngester;
-import org.apache.kylin.measure.basic.LongIngester;
-import org.apache.kylin.measure.percentile.PercentileSerializer;
-import org.apache.kylin.metadata.datatype.DataType;
-import org.apache.kylin.metadata.model.MeasureDesc;
-import org.apache.kylin.metadata.model.TblColRef;
-import org.apache.parquet.example.data.Group;
-import org.apache.parquet.example.data.GroupFactory;
-import org.apache.parquet.example.data.simple.SimpleGroupFactory;
-import org.apache.parquet.hadoop.ParquetOutputFormat;
-import org.apache.parquet.hadoop.example.GroupWriteSupport;
-import org.apache.parquet.io.api.Binary;
-import org.apache.parquet.schema.MessageType;
-import org.apache.parquet.schema.OriginalType;
-import org.apache.parquet.schema.PrimitiveType;
-import org.apache.parquet.schema.Types;
-import org.apache.spark.Partitioner;
-import org.apache.spark.api.java.JavaPairRDD;
-import org.apache.spark.api.java.function.PairFunction;
-import scala.Tuple2;
-
-import java.io.IOException;
-import java.io.Serializable;
-import java.math.BigDecimal;
-import java.nio.ByteBuffer;
-import java.util.List;
-import java.util.Map;
-import java.util.Random;
-
-public class SparkCubingByLayerParquet extends SparkCubingByLayer {
-    @Override
-    protected void saveToHDFS(JavaPairRDD<ByteArray, Object[]> rdd, String metaUrl, String cubeName, CubeSegment cubeSeg, String hdfsBaseLocation, int level, Job job, KylinConfig kylinConfig) throws Exception {
-        final IDimensionEncodingMap dimEncMap = cubeSeg.getDimensionEncodingMap();
-
-        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSeg.getCubeDesc());
-
-        final Map<TblColRef, String> colTypeMap = Maps.newHashMap();
-        final Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
-
-        MessageType schema = cuboidToMessageType(baseCuboid, dimEncMap, cubeSeg.getCubeDesc(), colTypeMap, meaTypeMap);
-
-        logger.info("Schema: {}", schema.toString());
-
-        final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig, level);
-
-        logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping.toString());
-
-        JavaPairRDD<ByteArray, Object[]> repartitionedRDD = rdd.repartitionAndSortWithinPartitions(new CuboidPartitioner(cuboidToPartitionMapping));
-
-        String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(hdfsBaseLocation, level);
-
-        job.setOutputFormatClass(CustomParquetOutputFormat.class);
-        GroupWriteSupport.setSchema(schema, job.getConfiguration());
-        CustomParquetOutputFormat.setOutputPath(job, new Path(output));
-        CustomParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
-        CustomParquetOutputFormat.setCuboidToPartitionMapping(job, cuboidToPartitionMapping);
-
-        JavaPairRDD<Void, Group> groupRDD = repartitionedRDD.mapToPair(new GenerateGroupRDDFunction(cubeName, cubeSeg.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap));
-
-        groupRDD.saveAsNewAPIHadoopDataset(job.getConfiguration());
-    }
-
-    static class CuboidPartitioner extends Partitioner {
-
-        private CuboidToPartitionMapping mapping;
-
-        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping) {
-            this.mapping = cuboidToPartitionMapping;
-        }
-
-        @Override
-        public int numPartitions() {
-            return mapping.getNumPartitions();
-        }
-
-        @Override
-        public int getPartition(Object key) {
-            ByteArray byteArray = (ByteArray) key;
-            long cuboidId = Bytes.toLong(byteArray.array(), RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
-
-            return mapping.getRandomPartitionForCuboidId(cuboidId);
-        }
-    }
-
-    public static class CuboidToPartitionMapping implements Serializable {
-        private Map<Long, List<Integer>> cuboidPartitions;
-        private int partitionNum;
-
-        public CuboidToPartitionMapping(Map<Long, List<Integer>> cuboidPartitions) {
-            this.cuboidPartitions = cuboidPartitions;
-            int partitions = 0;
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                partitions = partitions + entry.getValue().size();
-            }
-            this.partitionNum = partitions;
-        }
-
-        public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
-            cuboidPartitions = Maps.newHashMap();
-
-            List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
-            CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
-
-            int position = 0;
-            for (Long cuboidId : layeredCuboids) {
-                int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
-                List<Integer> positions = Lists.newArrayListWithCapacity(partition);
-
-                for (int i = position; i < position + partition; i++) {
-                    positions.add(i);
-                }
-
-                cuboidPartitions.put(cuboidId, positions);
-                position = position + partition;
-            }
-
-            this.partitionNum = position;
-        }
-
-        public String serialize() throws JsonProcessingException {
-            return JsonUtil.writeValueAsString(cuboidPartitions);
-        }
-
-        public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
-            Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
-            return new CuboidToPartitionMapping(cuboidPartitions);
-        }
-
-        public int getNumPartitions() {
-            return this.partitionNum;
-        }
-
-        public long getCuboidIdByPartition(int partition) {
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                if (entry.getValue().contains(partition)) {
-                    return entry.getKey();
-                }
-            }
-
-            throw new IllegalArgumentException("No cuboidId for partition id: " + partition);
-        }
-
-        public int getRandomPartitionForCuboidId(long cuboidId) {
-            List<Integer> partitions = cuboidPartitions.get(cuboidId);
-            return partitions.get(new Random().nextInt(partitions.size()));
-        }
-
-        public int getPartitionNumForCuboidId(long cuboidId) {
-            return cuboidPartitions.get(cuboidId).size();
-        }
-
-        public String getPartitionFilePrefix(int partition) {
-            String prefix = "cuboid_";
-            long cuboid = getCuboidIdByPartition(partition);
-            int partNum = partition % getPartitionNumForCuboidId(cuboid);
-            prefix = prefix + cuboid + "_part" + partNum;
-
-            return prefix;
-        }
-
-        private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
-            double cuboidSize = cubeStatsReader.estimateCuboidSize(cuboidId);
-            float rddCut = kylinConfig.getSparkRDDPartitionCutMB();
-            int partition = (int) (cuboidSize / rddCut);
-            partition = Math.max(kylinConfig.getSparkMinPartition(), partition);
-            partition = Math.min(kylinConfig.getSparkMaxPartition(), partition);
-            return partition;
-        }
-
-        @Override
-        public String toString() {
-            StringBuilder sb = new StringBuilder();
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
-            }
-
-            return sb.toString();
-        }
-    }
-
-    public static class CustomParquetOutputFormat extends ParquetOutputFormat {
-        public static final String CUBOID_TO_PARTITION_MAPPING = "cuboidToPartitionMapping";
-
-        @Override
-        public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
-            FileOutputCommitter committer = (FileOutputCommitter)this.getOutputCommitter(context);
-            TaskID taskId = context.getTaskAttemptID().getTaskID();
-            int partition = taskId.getId();
-
-            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(CUBOID_TO_PARTITION_MAPPING));
-
-            return new Path(committer.getWorkPath(), getUniqueFile(context, mapping.getPartitionFilePrefix(partition)+ "-" + getOutputName(context), extension));
-        }
-
-        public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping) throws IOException {
-            String jsonStr = cuboidToPartitionMapping.serialize();
-
-            job.getConfiguration().set(CUBOID_TO_PARTITION_MAPPING, jsonStr);
-        }
-    }
-
-    static class GenerateGroupRDDFunction implements PairFunction<Tuple2<ByteArray, Object[]>, Void, Group> {
-        private volatile transient boolean initialized = false;
-        private String cubeName;
-        private String segmentId;
-        private String metaUrl;
-        private SerializableConfiguration conf;
-        private List<MeasureDesc> measureDescs;
-        private RowKeyDecoder decoder;
-        private Map<TblColRef, String> colTypeMap;
-        private Map<MeasureDesc, String> meaTypeMap;
-        private GroupFactory factory;
-        private BufferedMeasureCodec measureCodec;
-        private PercentileSerializer serializer;
-        private ByteBuffer byteBuffer;
-
-        public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
-            this.cubeName = cubeName;
-            this.segmentId = segmentId;
-            this.metaUrl = metaurl;
-            this.conf = conf;
-            this.colTypeMap = colTypeMap;
-            this.meaTypeMap = meaTypeMap;
-        }
-
-        private void init() {
-            KylinConfig kConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
-            KylinConfig.setAndUnsetThreadLocalConfig(kConfig);
-            CubeInstance cubeInstance = CubeManager.getInstance(kConfig).getCube(cubeName);
-            CubeDesc cubeDesc = cubeInstance.getDescriptor();
-            CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
-            measureDescs = cubeDesc.getMeasures();
-            decoder = new RowKeyDecoder(cubeSegment);
-            factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
-            measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
-            serializer = new PercentileSerializer(DataType.getType("percentile(100)"));
-
-        }
-
-        @Override
-        public Tuple2<Void, Group> call(Tuple2<ByteArray, Object[]> tuple) throws Exception {
-            if (initialized == false) {
-                synchronized (SparkCubingByLayer.class) {
-                    if (initialized == false) {
-                        init();
-                    }
-                }
-            }
-
-            long cuboid = decoder.decode(tuple._1.array());
-            List<String> values = decoder.getValues();
-            List<TblColRef> columns = decoder.getColumns();
-
-            Group group = factory.newGroup();
-
-            // for check
-            group.append("cuboidId", cuboid);
-
-            for (int i = 0; i < columns.size(); i++) {
-                TblColRef column = columns.get(i);
-                parseColValue(group, column, values.get(i));
-            }
-
-            ByteBuffer valueBuf = measureCodec.encode(tuple._2());
-            byte[] encodedBytes = new byte[valueBuf.position()];
-            System.arraycopy(valueBuf.array(), 0, encodedBytes, 0, valueBuf.position());
-
-            int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(encodedBytes));
-
-            int valueOffset = 0;
-            for (int i = 0; i < valueLengths.length; ++i) {
-                MeasureDesc measureDesc = measureDescs.get(i);
-                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i], tuple._2[i]);
-                valueOffset += valueLengths[i];
-            }
-
-            return new Tuple2<>(null, group);
-        }
-
-        private void parseColValue(final Group group, final TblColRef colRef, final String value) {
-            switch (colTypeMap.get(colRef)) {
-                case "int":
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Integer.valueOf(value));
-                    break;
-                case "long":
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Long.valueOf(value));
-                    break;
-                default:
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Binary.fromString(value));
-                    break;
-            }
-        }
-
-        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length, final Object d) {
-            switch (meaTypeMap.get(measureDesc)) {
-                case "long":
-                    group.append(measureDesc.getName(), BytesUtil.readLong(value, offset, length));
-                    break;
-                case "double":
-                    group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
-                    break;
-                case "decimal":
-                    BigDecimal decimal = (BigDecimal)d;
-                    decimal = decimal.setScale(4);
-                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(decimal.unscaledValue().toByteArray()));
-                    break;
-                default:
-                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
-                    break;
-            }
-        }
-    }
-
-    private MessageType cuboidToMessageType(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
-        Types.MessageTypeBuilder builder = Types.buildMessage();
-
-        List<TblColRef> colRefs = cuboid.getColumns();
-
-        builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named("cuboidId");
-
-        for (TblColRef colRef : colRefs) {
-            DimensionEncoding dimEnc = dimEncMap.get(colRef);
-
-            if (dimEnc instanceof AbstractDateDimEnc) {
-                builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
-                colTypeMap.put(colRef, "long");
-            } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
-                org.apache.kylin.metadata.datatype.DataType colDataType = colRef.getType();
-                if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
-                    builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
-                    colTypeMap.put(colRef, "long");
-                } else {
-                    // stringFamily && default
-                    builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
-                    colTypeMap.put(colRef, "string");
-                }
-            } else {
-                builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
-                colTypeMap.put(colRef, "int");
-            }
-        }
-
-        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
-
-        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
-            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
-            org.apache.kylin.metadata.datatype.DataType meaDataType = measureDesc.getFunction().getReturnDataType();
-            MeasureType measureType = measureDesc.getFunction().getMeasureType();
-
-            if (measureType instanceof BasicMeasureType) {
-                MeasureIngester measureIngester = aggrIngesters[i];
-                if (measureIngester instanceof LongIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.INT64).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "long");
-                } else if (measureIngester instanceof DoubleIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.DOUBLE).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "double");
-                } else if (measureIngester instanceof BigDecimalIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL).precision(meaDataType.getPrecision()).scale(meaDataType.getScale()).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "decimal");
-                } else {
-                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "binary");
-                }
-            } else {
-                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
-                meaTypeMap.put(measureDesc, "binary");
-            }
-        }
-
-        return builder.named(String.valueOf(cuboid.getId()));
-    }
-
-    private String getColName(TblColRef colRef) {
-        return colRef.getTableAlias() + "_" + colRef.getName();
-    }
-}
diff --git a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
index 6900ca7..bfc0976 100755
--- a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
@@ -102,7 +102,7 @@ public class StorageCleanupJob extends AbstractApplication {
         this.defaultFs = defaultFs;
         this.hbaseFs = hbaseFs;
         this.executableManager = ExecutableManager.getInstance(config);
-        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
+        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStoragePathBuilder());
     }
 
     public void setDelete(boolean delete) {
diff --git a/server-base/src/test/resources/test_meta/execute/d861b8b7-c773-47ab-bb1e-c8782ae8d930 b/server-base/src/test/resources/test_meta/execute/d861b8b7-c773-47ab-bb1e-c8782ae8d930
index ed6d6fa..208a0c9e 100644
--- a/server-base/src/test/resources/test_meta/execute/d861b8b7-c773-47ab-bb1e-c8782ae8d930
+++ b/server-base/src/test/resources/test_meta/execute/d861b8b7-c773-47ab-bb1e-c8782ae8d930
@@ -21,7 +21,7 @@
     "version" : "2.3.0.20500",
     "name" : "Redistribute Flat Hive Table",
     "tasks" : null,
-    "type" : "org.apache.kylin.source.hive.HiveMRInput$RedistributeFlatHiveTableStep",
+    "type" : "zorg.apache.kylin.source.hive.HiveMRInput$RedistributeFlatHiveTableStep",
     "params" : {
       "HiveInit" : "USE default;\n",
       "HiveRedistributeData" : "INSERT OVERWRITE TABLE kylin_intermediate_ss_a110ac52_9a91_49fe_944a_346d61e54115 SELECT * FROM kylin_intermediate_ss_a110ac52_9a91_49fe_944a_346d61e54115 DISTRIBUTE BY RAND();\n",
diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
index 3c9434b..c2687a1 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
@@ -115,10 +115,10 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
 
     public static class BatchCubingInputSide implements IMRBatchCubingInputSide {
 
-        final protected IJoinedFlatTableDesc flatDesc;
-        final protected String flatTableDatabase;
-        final protected String hdfsWorkingDir;
-        final protected IStoragePathBuilder pathBuilder;
+        protected final IJoinedFlatTableDesc flatDesc;
+        protected final String flatTableDatabase;
+        protected final String hdfsWorkingDir;
+        protected final IStoragePathBuilder pathBuilder;
 
         List<String> hiveViewIntermediateTables = Lists.newArrayList();
 
@@ -127,7 +127,7 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
             this.flatDesc = flatDesc;
             this.flatTableDatabase = config.getHiveDatabaseForIntermediateTable();
             this.hdfsWorkingDir = config.getHdfsWorkingDirectory();
-            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStoragePathBuilder());
         }
 
         @Override
diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
index 031548c8..1364eed 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
@@ -73,7 +73,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
             this.flatDesc = flatDesc;
             this.flatTableDatabase = config.getHiveDatabaseForIntermediateTable();
             this.hdfsWorkingDir = config.getHdfsWorkingDirectory();
-            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStoragePathBuilder());
         }
 
         @Override
diff --git a/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java b/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
index 24e91ae..301b766 100644
--- a/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
+++ b/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
@@ -46,11 +46,11 @@ public class HiveMRInputTest {
         try (SetAndUnsetThreadLocalConfig autoUnset = KylinConfig.setAndUnsetThreadLocalConfig(kylinConfig)) {
             when(kylinConfig.getHiveTableDirCreateFirst()).thenReturn(true);
             when(kylinConfig.getHdfsWorkingDirectory()).thenReturn("/tmp/kylin/");
-            when(kylinConfig.getStorageSystemPathBuilderClz()).thenReturn("org.apache.kylin.storage.path.DefaultStoragePathBuilder");
+            when(kylinConfig.getStoragePathBuilder()).thenReturn("org.apache.kylin.storage.path.DefaultStoragePathBuilder");
             DefaultChainedExecutable defaultChainedExecutable = mock(DefaultChainedExecutable.class);
             defaultChainedExecutable.setId(RandomUtil.randomUUID().toString());
 
-            String jobWorkingDir = HiveInputBase.getJobWorkingDir(defaultChainedExecutable, KylinConfig.getInstanceFromEnv().getHdfsWorkingDirectory(), (IStoragePathBuilder) ClassUtil.newInstance(KylinConfig.getInstanceFromEnv().getStorageSystemPathBuilderClz()));
+            String jobWorkingDir = HiveInputBase.getJobWorkingDir(defaultChainedExecutable, KylinConfig.getInstanceFromEnv().getHdfsWorkingDirectory(), (IStoragePathBuilder) ClassUtil.newInstance(KylinConfig.getInstanceFromEnv().getStoragePathBuilder()));
             jobWorkDirPath = new Path(jobWorkingDir);
             Assert.assertTrue(fileSystem.exists(jobWorkDirPath));
         } finally {
diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
index 68f87ff..798d099 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
@@ -82,7 +82,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
         public KafkaTableInputFormat(CubeSegment cubeSegment, JobEngineConfig conf) {
             this.cubeSegment = cubeSegment;
             this.conf = conf;
-            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(conf.getConfig().getStorageSystemPathBuilderClz());
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(conf.getConfig().getStoragePathBuilder());
         }
 
         @Override
@@ -134,7 +134,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
             this.seg = seg;
             this.cubeDesc = seg.getCubeDesc();
             this.cubeName = seg.getCubeInstance().getName();
-            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStoragePathBuilder());
         }
 
         @Override
diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
index 9c14fd6..9a84612 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
@@ -73,7 +73,7 @@ public class KafkaSparkInput extends KafkaInputBase implements ISparkInput {
             this.seg = seg;
             this.cubeDesc = seg.getCubeDesc();
             this.cubeName = seg.getCubeInstance().getName();
-            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(this.config.getStorageSystemPathBuilderClz());
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(this.config.getStoragePathBuilder());
         }
 
         @Override
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
index f6d11bd..e8f7713 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
@@ -61,7 +61,7 @@ public class HBaseLookupMRSteps {
     public HBaseLookupMRSteps(CubeInstance cube) {
         this.cube = cube;
         this.config = new JobEngineConfig(cube.getConfig());
-        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(cube.getConfig().getStorageSystemPathBuilderClz());
+        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(cube.getConfig().getStoragePathBuilder());
     }
 
     public void addMaterializeLookupTablesSteps(LookupMaterializeContext context) {
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
index 05df80d..4c31c45 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
@@ -60,7 +60,7 @@ public class HDFSPathGarbageCollectionStep extends AbstractExecutable {
     protected ExecuteResult doWork(ExecutableContext context) throws ExecuteException {
         try {
             config = new JobEngineConfig(context.getConfig());
-            pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(context.getConfig().getStorageSystemPathBuilderClz());
+            pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(context.getConfig().getStoragePathBuilder());
             List<String> toDeletePaths = getDeletePaths();
             dropHdfsPathOnCluster(toDeletePaths, HadoopUtil.getWorkingFileSystem());
 
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
index ce43456..619c32c 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
@@ -186,8 +186,8 @@ public class CubeMigrationCLI {
     }
 
     private static void renameFoldersInHdfs(CubeInstance cube) {
-        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStorageSystemPathBuilderClz());
-        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStorageSystemPathBuilderClz());
+        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStoragePathBuilder());
+        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStoragePathBuilder());
         for (CubeSegment segment : cube.getSegments()) {
 
             String jobUuid = segment.getLastBuildJobID();
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
index fd6124b..496b2bb 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
@@ -187,7 +187,7 @@ public class StorageCleanupJob extends AbstractApplication {
     private void cleanUnusedHdfsFiles(Configuration conf) throws IOException {
         JobEngineConfig engineConfig = new JobEngineConfig(KylinConfig.getInstanceFromEnv());
         CubeManager cubeMgr = CubeManager.getInstance(KylinConfig.getInstanceFromEnv());
-        IStoragePathBuilder pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(engineConfig.getConfig().getStorageSystemPathBuilderClz());
+        IStoragePathBuilder pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(engineConfig.getConfig().getStoragePathBuilder());
 
         FileSystem fs = HadoopUtil.getWorkingFileSystem(conf);
         List<String> allHdfsPathsNeedToBeDeleted = new ArrayList<String>();
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
index e05b433..e878ec4 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/ParquetStorage.java
@@ -47,7 +47,7 @@ public class ParquetStorage implements IStorage {
         } else if (engineInterface == IMROutput2.class) {
             return (I) new ParquetMROutput();
         } else{
-            throw new RuntimeException("Cannot adapt to " + engineInterface);
+            throw new UnsupportedOperationException("Cannot adapt to " + engineInterface);
         }
     }
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
index a3ce76f..2fb5733 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
@@ -20,10 +20,6 @@ package org.apache.kylin.storage.parquet.cube;
 
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.metadata.realization.SQLDigest;
-import org.apache.kylin.metadata.tuple.ITupleIterator;
-import org.apache.kylin.metadata.tuple.TupleInfo;
-import org.apache.kylin.storage.StorageContext;
 import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
 
 public class CubeStorageQuery extends GTCubeStorageQueryBase {
@@ -33,11 +29,6 @@ public class CubeStorageQuery extends GTCubeStorageQueryBase {
     }
 
     @Override
-    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
-        return super.search(context, sqlDigest, returnTupleInfo);
-    }
-
-    @Override
     protected String getGTStorage() {
         return KylinConfig.getInstanceFromEnv().getSparkCubeGTStorage();
     }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
index 3129b8e..2f1aa17 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
@@ -119,7 +119,7 @@ public class ParquetPayload {
         return storageType;
     }
 
-    static public class ParquetPayloadBuilder {
+    public static class ParquetPayloadBuilder {
         private byte[] gtScanRequest;
         private String gtScanRequestId;
         private String kylinProperties;
@@ -136,9 +136,6 @@ public class ParquetPayload {
         private long startTime;
         private int storageType;
 
-        public ParquetPayloadBuilder() {
-        }
-
         public ParquetPayloadBuilder setGtScanRequest(byte[] gtScanRequest) {
             this.gtScanRequest = gtScanRequest;
             return this;
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java
index 611ee44..4b08c38 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java
@@ -31,7 +31,12 @@ import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.cuboid.Cuboid;
 import org.apache.kylin.cube.gridtable.CuboidToGridTableMapping;
 import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.measure.extendedcolumn.ExtendedColumnMeasureType;
+import org.apache.kylin.measure.percentile.PercentileMeasureType;
+import org.apache.kylin.measure.raw.RawMeasureType;
+import org.apache.kylin.measure.topn.TopNMeasureType;
 import org.apache.kylin.metadata.datatype.DataType;
+import org.apache.kylin.metadata.model.FunctionDesc;
 import org.apache.kylin.metadata.model.MeasureDesc;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.apache.spark.api.java.JavaRDD;
@@ -94,7 +99,7 @@ public class ParquetTask implements Serializable {
             String dataFolderName = request.getDataFolderName();
 
             String baseFolder = dataFolderName.substring(0, dataFolderName.lastIndexOf('/'));
-            String cuboidId = dataFolderName.substring(dataFolderName.lastIndexOf("/") + 1);
+            String cuboidId = dataFolderName.substring(dataFolderName.lastIndexOf('/') + 1);
             String prefix = "cuboid_" + cuboidId + "_";
 
             CubeInstance cubeInstance = CubeManager.getInstance(kylinConfig).getCube(request.getRealizationId());
@@ -117,11 +122,11 @@ public class ParquetTask implements Serializable {
                 pathBuilder.append(p.toString()).append(";");
             }
 
-            logger.info("Columnar path is " + pathBuilder.toString());
-            logger.info("Required Measures: " + StringUtils.join(request.getParquetColumns(), ","));
-            logger.info("Max GT length: " + request.getMaxRecordLength());
+            logger.info("Columnar path is {}", pathBuilder);
+            logger.info("Required Measures: {}", StringUtils.join(request.getParquetColumns(), ","));
+            logger.info("Max GT length: {}", request.getMaxRecordLength());
         } catch (IOException e) {
-            throw new RuntimeException(e);
+            throw new IllegalStateException(e);
         }
     }
 
@@ -138,14 +143,14 @@ public class ParquetTask implements Serializable {
             map.clear();
 
         } catch (IllegalAccessException | NoSuchFieldException e) {
-            throw new RuntimeException(e);
+            throw new IllegalStateException(e);
         }
     }
 
     public Iterator<Object[]> executeTask() {
-        logger.info("Start to visit cube data with Spark SQL <<<<<<");
+        logger.debug("Start to visit cube data with Spark <<<<<<");
 
-        SQLContext sqlContext = new SQLContext(SparderEnv.getSparkSession().sparkContext());
+        SQLContext sqlContext = SparderEnv.getSparkSession().sqlContext();
 
         Dataset<Row> dataset = sqlContext.read().parquet(parquetPaths);
         ImmutableBitSet dimensions = scanRequest.getDimensions();
@@ -175,21 +180,17 @@ public class ParquetTask implements Serializable {
         // sort
         dataset = dataset.sort(getSortColumn(groupBy, mapping));
 
-        JavaRDD<Row> rowRDD = dataset.javaRDD();
-
-        JavaRDD<Object[]> objRDD = rowRDD.map(new Function<Row, Object[]>() {
-            @Override
-            public Object[] call(Row row) throws Exception {
-                Object[] objects = new Object[row.length()];
-                for (int i = 0; i < row.length(); i++) {
-                    objects[i] = row.get(i);
-                }
-                return objects;
+        JavaRDD<Object[]> objRDD = dataset.javaRDD().map((Function<Row, Object[]>) row -> {
+            Object[] objects = new Object[row.length()];
+            for (int i = 0; i < row.length(); i++) {
+                objects[i] = row.get(i);
             }
+            return objects;
         });
 
-        logger.info("partitions: {}", objRDD.getNumPartitions());
+        logger.debug("partitions: {}", objRDD.getNumPartitions());
 
+        // TODO: optimize the way to collect data.
         List<Object[]> result = objRDD.collect();
         return result.iterator();
     }
@@ -216,23 +217,23 @@ public class ParquetTask implements Serializable {
     private Column getAggColumn(String metName, String func, DataType dataType) {
         Column column;
         switch (func) {
-            case "SUM":
+            case FunctionDesc.FUNC_SUM:
                 column = sum(metName);
                 break;
-            case "MIN":
+            case FunctionDesc.FUNC_MIN:
                 column = min(metName);
                 break;
-            case "MAX":
+            case FunctionDesc.FUNC_MAX:
                 column = max(metName);
                 break;
-            case "COUNT":
+            case FunctionDesc.FUNC_COUNT:
                 column = sum(metName);
                 break;
-            case "TOP_N":
-            case "COUNT_DISTINCT":
-            case "EXTENDED_COLUMN":
-            case "PERCENTILE_APPROX":
-            case "RAW":
+            case TopNMeasureType.FUNC_TOP_N:
+            case FunctionDesc.FUNC_COUNT_DISTINCT:
+            case ExtendedColumnMeasureType.FUNC_EXTENDED_COLUMN:
+            case PercentileMeasureType.FUNC_PERCENTILE_APPROX:
+            case RawMeasureType.FUNC_RAW:
                 String udf = UdfManager.register(dataType, func);
                 column = callUDF(udf, col(metName));
                 break;
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java
index 1c8c246..604cb6d 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java
@@ -21,7 +21,6 @@ package org.apache.kylin.storage.parquet.spark;
 import org.apache.kylin.ext.ClassLoaderUtils;
 import org.apache.kylin.gridtable.GTScanRequest;
 import org.apache.kylin.gridtable.IGTScanner;
-import org.apache.kylin.storage.parquet.spark.gtscanner.ParquetRecordGTScanner;
 import org.apache.kylin.storage.parquet.spark.gtscanner.ParquetRecordGTScanner4Cube;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -30,13 +29,9 @@ public class SparkSubmitter {
     public static final Logger logger = LoggerFactory.getLogger(SparkSubmitter.class);
 
     public static IGTScanner submitParquetTask(GTScanRequest scanRequest, ParquetPayload payload) {
-
         Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader());
         ParquetTask parquetTask = new ParquetTask(payload);
-
-        ParquetRecordGTScanner scanner = new ParquetRecordGTScanner4Cube(scanRequest.getInfo(), parquetTask.executeTask(), scanRequest,
-                payload.getMaxScanBytes());
-
-        return scanner;
+        return new ParquetRecordGTScanner4Cube(scanRequest.getInfo(),
+                parquetTask.executeTask(), scanRequest, payload.getMaxScanBytes());
     }
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java
index 322aec1..27f005a 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java
@@ -19,18 +19,30 @@
 package org.apache.kylin.storage.parquet.spark.gtscanner;
 
 import com.google.common.collect.Iterators;
+import com.google.common.collect.Maps;
 import org.apache.kylin.common.exceptions.KylinTimeoutException;
 import org.apache.kylin.common.exceptions.ResourceLimitExceededException;
-import org.apache.kylin.common.util.ByteArray;
 import org.apache.kylin.common.util.ImmutableBitSet;
+import org.apache.kylin.dimension.DictionaryDimEnc;
 import org.apache.kylin.gridtable.GTInfo;
 import org.apache.kylin.gridtable.GTRecord;
 import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.gridtable.GTUtil;
 import org.apache.kylin.gridtable.IGTScanner;
+import org.apache.kylin.measure.bitmap.BitmapSerializer;
+import org.apache.kylin.measure.dim.DimCountDistincSerializer;
+import org.apache.kylin.measure.extendedcolumn.ExtendedColumnSerializer;
+import org.apache.kylin.measure.hllc.HLLCSerializer;
+import org.apache.kylin.measure.percentile.PercentileSerializer;
+import org.apache.kylin.measure.raw.RawSerializer;
+import org.apache.kylin.measure.topn.TopNCounterSerializer;
+import org.apache.kylin.metadata.datatype.DataTypeSerializer;
 
 import javax.annotation.Nullable;
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.util.Iterator;
+import java.util.Map;
 
 /**
  * this class tracks resource 
@@ -43,20 +55,16 @@ public abstract class ParquetRecordGTScanner implements IGTScanner {
     private ImmutableBitSet columns;
 
     private long maxScannedBytes;
-
     private long scannedRows;
     private long scannedBytes;
 
-    private ImmutableBitSet[] columnBlocks;
-
     public ParquetRecordGTScanner(GTInfo info, Iterator<Object[]> iterator, GTScanRequest scanRequest,
-                                  long maxScannedBytes) {
+            long maxScannedBytes) {
         this.iterator = iterator;
         this.info = info;
         this.gtrecord = new GTRecord(info);
         this.columns = scanRequest.getColumns();
         this.maxScannedBytes = maxScannedBytes;
-        this.columnBlocks = getParquetCoveredColumnBlocks(scanRequest);
     }
 
     @Override
@@ -79,12 +87,40 @@ public abstract class ParquetRecordGTScanner implements IGTScanner {
     @Override
     public Iterator<GTRecord> iterator() {
         return Iterators.transform(iterator, new com.google.common.base.Function<Object[], GTRecord>() {
+            private int maxColumnLength = -1;
+            private ByteBuffer byteBuf = null;
+            private final Map<Integer, Integer> dictCols = Maps.newHashMap();
+            private final Map<Integer, Integer> binaryCols = Maps.newHashMap();
+            private final Map<Integer, Integer> otherCols = Maps.newHashMap();
             @Nullable
             @Override
+            @SuppressWarnings("checkstyle:BooleanExpressionComplexity")
             public GTRecord apply(@Nullable Object[] input) {
-                gtrecord.setValuesParquet(ParquetRecordGTScanner.this.columns, new ByteArray(info.getMaxColumnLength(ParquetRecordGTScanner.this.columns)), input);
+                if (maxColumnLength <= 0) {
+                    maxColumnLength = info.getMaxColumnLength(ParquetRecordGTScanner.this.columns);
+                    for (int i = 0; i < ParquetRecordGTScanner.this.columns.trueBitCount(); i++) {
+                        int c = ParquetRecordGTScanner.this.columns.trueBitAt(i);
+                        DataTypeSerializer serializer = info.getCodeSystem().getSerializer(c);
+                        if (serializer instanceof DictionaryDimEnc.DictionarySerializer) {
+                            dictCols.put(i, c);
+                        } else if (serializer instanceof TopNCounterSerializer || serializer instanceof HLLCSerializer
+                                || serializer instanceof BitmapSerializer
+                                || serializer instanceof ExtendedColumnSerializer
+                                || serializer instanceof PercentileSerializer
+                                || serializer instanceof DimCountDistincSerializer
+                                || serializer instanceof RawSerializer) {
+                            binaryCols.put(i, c);
+                        } else {
+                            otherCols.put(i, c);
+                        }
+                    }
+
+                    byteBuf = ByteBuffer.allocate(maxColumnLength);
+                }
+                byteBuf.clear();
+                GTUtil.setValuesParquet(gtrecord, byteBuf, dictCols, binaryCols, otherCols, input);
 
-                scannedBytes += info.getMaxColumnLength(ParquetRecordGTScanner.this.columns);
+                scannedBytes += maxColumnLength;
                 if ((++scannedRows % GTScanRequest.terminateCheckInterval == 1) && Thread.interrupted()) {
                     throw new KylinTimeoutException("Query timeout");
                 }
@@ -98,8 +134,6 @@ public abstract class ParquetRecordGTScanner implements IGTScanner {
         });
     }
 
-    abstract protected ImmutableBitSet getParquetCoveredColumns(GTScanRequest scanRequest);
 
-    abstract protected ImmutableBitSet[] getParquetCoveredColumnBlocks(GTScanRequest scanRequest);
 
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java
index 3bf670e..d97531f 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java
@@ -18,47 +18,15 @@
 
 package org.apache.kylin.storage.parquet.spark.gtscanner;
 
-import org.apache.kylin.common.util.ImmutableBitSet;
 import org.apache.kylin.gridtable.GTInfo;
 import org.apache.kylin.gridtable.GTScanRequest;
 
-import java.util.BitSet;
 import java.util.Iterator;
 
 public class ParquetRecordGTScanner4Cube extends ParquetRecordGTScanner {
     public ParquetRecordGTScanner4Cube(GTInfo info, Iterator<Object[]> iterator, GTScanRequest scanRequest,
-                                       long maxScannedBytes) {
+            long maxScannedBytes) {
         super(info, iterator, scanRequest, maxScannedBytes);
     }
 
-    protected ImmutableBitSet getParquetCoveredColumns(GTScanRequest scanRequest) {
-        BitSet bs = new BitSet();
-
-        ImmutableBitSet dimensions = scanRequest.getInfo().getPrimaryKey();
-        for (int i = 0; i < dimensions.trueBitCount(); ++i) {
-            bs.set(dimensions.trueBitAt(i));
-        }
-
-        ImmutableBitSet queriedColumns = scanRequest.getColumns();
-        for (int i = 0; i < queriedColumns.trueBitCount(); ++i) {
-            bs.set(queriedColumns.trueBitAt(i));
-        }
-        return new ImmutableBitSet(bs);
-    }
-
-    protected ImmutableBitSet[] getParquetCoveredColumnBlocks(GTScanRequest scanRequest) {
-        
-        ImmutableBitSet selectedColBlocksBitSet = scanRequest.getSelectedColBlocks();
-        
-        ImmutableBitSet[] selectedColBlocks = new ImmutableBitSet[selectedColBlocksBitSet.trueBitCount()];
-        
-        for(int i = 0; i < selectedColBlocksBitSet.trueBitCount(); i++) {
-            
-            selectedColBlocks[i] = scanRequest.getInfo().getColumnBlock(selectedColBlocksBitSet.trueBitAt(i));
-            
-        }
-
-        return selectedColBlocks;
-    }
-
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java
index c778ee0..a0c1a2d 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java
@@ -43,7 +43,6 @@ import java.io.IOException;
 import java.util.Map;
 
 /**
- * Created by Yichen on 11/14/18.
  */
 public class ConvertToParquetReducer extends KylinReducer<Text, Text, NullWritable, Group> {
     private ParquetConvertor convertor;
@@ -51,7 +50,7 @@ public class ConvertToParquetReducer extends KylinReducer<Text, Text, NullWritab
     private CubeSegment cubeSegment;
 
     @Override
-    protected void doSetup(Context context) throws IOException, InterruptedException {
+    protected void doSetup(Context context) throws IOException {
         Configuration conf = context.getConfiguration();
         super.bindCurrentConfiguration(conf);
         mos = new MultipleOutputs(context);
@@ -68,9 +67,9 @@ public class ConvertToParquetReducer extends KylinReducer<Text, Text, NullWritab
         SerializableConfiguration sConf = new SerializableConfiguration(conf);
 
         Map<TblColRef, String> colTypeMap = Maps.newHashMap();
-        Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
-        ParquetConvertor.generateTypeMap(baseCuboid, dimEncMap, cube.getDescriptor(), colTypeMap, meaTypeMap);
-        convertor = new ParquetConvertor(cubeName, segmentId, kylinConfig, sConf, colTypeMap, meaTypeMap);
+        Map<MeasureDesc, String> measureTypeMap = Maps.newHashMap();
+        ParquetConvertor.generateTypeMap(baseCuboid, dimEncMap, cube.getDescriptor(), colTypeMap, measureTypeMap);
+        convertor = new ParquetConvertor(cubeName, segmentId, kylinConfig, sConf, colTypeMap, measureTypeMap);
     }
 
     @Override
@@ -80,14 +79,10 @@ public class ConvertToParquetReducer extends KylinReducer<Text, Text, NullWritab
         int partitionId = context.getTaskAttemptID().getTaskID().getId();
 
         for (Text value : values) {
-            try {
-                Group group = convertor.parseValueToGroup(key, value);
-                String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel("", layerNumber)
-                        + "/" + ParquetJobSteps.getCuboidOutputFileName(cuboidId, partitionId);
-                mos.write(MRCubeParquetJob.BY_LAYER_OUTPUT, null, group, output);
-            } catch (IOException e){
-                throw new IOException(e);
-            }
+            Group group = convertor.parseValueToGroup(key, value);
+            String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel("", layerNumber) + "/"
+                    + ParquetJobSteps.getCuboidOutputFileName(cuboidId, partitionId);
+            mos.write(MRCubeParquetJob.BY_LAYER_OUTPUT, null, group, output);
         }
     }
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java
index 7fcf95d..97309f1 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java
@@ -39,7 +39,6 @@ import java.util.Map;
 import java.util.Set;
 
 /**
- * Created by Yichen on 11/12/18.
  */
 public class CuboidToPartitionMapping implements Serializable {
     private static final Logger logger = LoggerFactory.getLogger(CuboidToPartitionMapping.class);
@@ -58,31 +57,26 @@ public class CuboidToPartitionMapping implements Serializable {
 
     public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig) throws IOException {
         cuboidPartitions = Maps.newHashMap();
-
         Set<Long> allCuboidIds = cubeSeg.getCuboidScheduler().getAllCuboidIds();
-
-        CalculatePartitionId(cubeSeg, kylinConfig, allCuboidIds);
+        calculatePartitionId(cubeSeg, kylinConfig, allCuboidIds);
     }
 
     public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
         cuboidPartitions = Maps.newHashMap();
-
         List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
-
-        CalculatePartitionId(cubeSeg, kylinConfig, layeredCuboids);
+        calculatePartitionId(cubeSeg, kylinConfig, layeredCuboids);
     }
 
-    private void CalculatePartitionId(CubeSegment cubeSeg, KylinConfig kylinConfig, Collection<Long> cuboidIds) throws IOException {
+    private void calculatePartitionId(CubeSegment cubeSeg, KylinConfig kylinConfig, Collection<Long> cuboidIds)
+            throws IOException {
         int position = 0;
         CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
         for (Long cuboidId : cuboidIds) {
             int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
             List<Integer> positions = Lists.newArrayListWithCapacity(partition);
-
             for (int i = position; i < position + partition; i++) {
                 positions.add(i);
             }
-
             cuboidPartitions.put(cuboidId, positions);
             position = position + partition;
         }
@@ -95,7 +89,9 @@ public class CuboidToPartitionMapping implements Serializable {
     }
 
     public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
-        Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
+        Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping,
+                new TypeReference<Map<Long, List<Integer>>>() {
+                });
         return new CuboidToPartitionMapping(cuboidPartitions);
     }
 
@@ -137,9 +133,7 @@ public class CuboidToPartitionMapping implements Serializable {
     public String getPartitionFilePrefix(int partition) {
         long cuboid = getCuboidIdByPartition(partition);
         int partNum = partition % getPartitionNumForCuboidId(cuboid);
-        String prefix = ParquetJobSteps.getCuboidOutputFileName(cuboid, partNum);
-
-        return prefix;
+        return ParquetJobSteps.getCuboidOutputFileName(cuboid, partNum);
     }
 
     private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
@@ -157,7 +151,8 @@ public class CuboidToPartitionMapping implements Serializable {
     public String toString() {
         StringBuilder sb = new StringBuilder();
         for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-            sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
+            sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ","))
+                    .append("]\n");
         }
 
         return sb.toString();
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java
index 7113a7a..a1d74fe 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java
@@ -48,13 +48,12 @@ import java.io.IOException;
 
 
 /**
- * Created by Yichen on 11/9/18.
  */
 public class MRCubeParquetJob extends AbstractHadoopJob {
 
     protected static final Logger logger = LoggerFactory.getLogger(MRCubeParquetJob.class);
 
-    final static String BY_LAYER_OUTPUT = "ByLayer";
+    public static final String BY_LAYER_OUTPUT = "ByLayer";
     private Options options;
 
     public MRCubeParquetJob(){
@@ -73,7 +72,7 @@ public class MRCubeParquetJob extends AbstractHadoopJob {
         final Path inputPath = new Path(getOptionValue(OPTION_INPUT_PATH));
         final Path outputPath = new Path(getOptionValue(OPTION_OUTPUT_PATH));
         final String cubeName = getOptionValue(OPTION_CUBE_NAME);
-        logger.info("CubeName: ", cubeName);
+        logger.info("CubeName: {}", cubeName);
         final String segmentId = optionsHelper.getOptionValue(OPTION_SEGMENT_ID);
         KylinConfig kylinConfig = KylinConfig.getInstanceFromEnv();
         CubeManager cubeManager = CubeManager.getInstance(kylinConfig);
@@ -94,13 +93,10 @@ public class MRCubeParquetJob extends AbstractHadoopJob {
         Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSegment.getCubeDesc());
 
         MessageType schema = ParquetConvertor.cuboidToMessageType(baseCuboid, dimEncMap, cubeSegment.getCubeDesc());
-        logger.info("Schema: {}", schema.toString());
+        logger.info("Schema: {}", schema);
 
         try {
-
             job.getConfiguration().set(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING, jsonStr);
-
-
             addInputDirs(inputPath.toString(), job);
             FileOutputFormat.setOutputPath(job, outputPath);
 
@@ -145,7 +141,7 @@ public class MRCubeParquetJob extends AbstractHadoopJob {
             try {
                 mapping = CuboidToPartitionMapping.deserialize(conf.get(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING));
             } catch (IOException e) {
-                throw new RuntimeException(e);
+                throw new IllegalArgumentException(e);
             }
         }
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java
index 9b9578d..b3be22d 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java
@@ -64,7 +64,6 @@ import java.util.List;
 import java.util.Map;
 
 /**
- * Created by Yichen on 11/9/18.
  */
 public class ParquetConvertor {
     private static final Logger logger = LoggerFactory.getLogger(ParquetConvertor.class);
@@ -121,7 +120,7 @@ public class ParquetConvertor {
         int valueOffset = 0;
         for (int i = 0; i < valueLengths.length; ++i) {
             MeasureDesc measureDesc = measureDescs.get(i);
-            parseMeaValue(group, measureDesc, rawValue.getBytes(), valueOffset, valueLengths[i]);
+            parseMeasureValue(group, measureDesc, rawValue.getBytes(), valueOffset, valueLengths[i]);
             valueOffset += valueLengths[i];
         }
 
@@ -146,7 +145,7 @@ public class ParquetConvertor {
         }
     }
 
-    private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) throws IOException {
+    private void parseMeasureValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) {
         if (value==null) {
             logger.error("value is null");
             return;
@@ -161,10 +160,10 @@ public class ParquetConvertor {
             case DATATYPE_DECIMAL:
                 BigDecimal decimal = serializer.deserialize(ByteBuffer.wrap(value, offset, length));
                 decimal = decimal.setScale(4);
-                group.append(measureDesc.getName(), Binary.fromByteArray(decimal.unscaledValue().toByteArray()));
+                group.append(measureDesc.getName(), Binary.fromReusedByteArray(decimal.unscaledValue().toByteArray()));
                 break;
             default:
-                group.append(measureDesc.getName(), Binary.fromByteArray(value, offset, length));
+                group.append(measureDesc.getName(), Binary.fromReusedByteArray(value, offset, length));
                 break;
         }
     }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
index 625737a..f097a6a 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
@@ -22,7 +22,6 @@ import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.job.execution.AbstractExecutable;
 
-
 /**
  * Common steps for building cube into Parquet
  */
@@ -33,8 +32,7 @@ public abstract class ParquetJobSteps extends JobBuilderSupport {
     }
 
     public static String getCuboidOutputFileName(long cuboid, int partNum) {
-        String fileName = "cuboid_" + cuboid + "_part" + partNum;
-        return fileName;
+        return "cuboid_" + cuboid + "_part" + partNum;
     }
-    abstract public AbstractExecutable convertToParquetStep(String jobId);
+    public abstract AbstractExecutable convertToParquetStep(String jobId);
 }
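
The "cuboid_<id>_part<n>" names produced by getCuboidOutputFileName() above are the same convention the query-side ParquetTask relies on when it builds its "cuboid_" + cuboidId + "_" prefix (see the ParquetTask diff earlier in this mail). A minimal, self-contained sketch of that pairing is shown below; it is an illustration only, not code from this patch, and whether ParquetTask actually filters files with startsWith on that prefix is not visible in this mail, so treat the matching step as an assumption.

    // Illustrative sketch only (not part of this patch): how the
    // "cuboid_<id>_part<n>" output names line up with the "cuboid_<id>_"
    // prefix built on the read path.
    public class CuboidFileNameSketch {
        static String getCuboidOutputFileName(long cuboid, int partNum) {
            return "cuboid_" + cuboid + "_part" + partNum;   // same convention as ParquetJobSteps
        }

        public static void main(String[] args) {
            long cuboidId = 255L;
            String prefix = "cuboid_" + cuboidId + "_";      // as constructed in ParquetTask
            for (int part = 0; part < 3; part++) {
                String name = getCuboidOutputFileName(cuboidId, part);
                System.out.println(name + " -> startsWith(prefix) = " + name.startsWith(prefix));
            }
        }
    }
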
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
index f54a7a0..00f8e36 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
@@ -32,20 +32,14 @@ import org.apache.kylin.engine.mr.steps.HiveToBaseCuboidMapper;
 import org.apache.kylin.engine.mr.steps.InMemCuboidMapper;
 import org.apache.kylin.engine.mr.steps.NDCuboidMapper;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 import java.util.List;
 import java.util.Map;
 
-
 /**
- * Created by Yichen on 10/16/18.
  */
 public class ParquetMROutput implements IMROutput2 {
 
-    private static final Logger logger = LoggerFactory.getLogger(ParquetMROutput.class);
-
     @Override
     public IMRBatchCubingOutputSide2 getBatchCubingOutputSide(CubeSegment seg) {
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
index e1d0b0f..bfa5ce6 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
@@ -25,7 +25,6 @@ import org.apache.kylin.job.constant.ExecutableConstants;
 import org.apache.kylin.job.execution.AbstractExecutable;
 
 /**
- * Created by Yichen on 11/8/18.
  */
 public class ParquetMRSteps extends ParquetJobSteps{
     public ParquetMRSteps(CubeSegment seg) {
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
index 4ad7cb3..c6ea2ac 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
@@ -63,8 +63,7 @@ import java.io.IOException;
 import java.io.Serializable;
 import java.util.Map;
 
-
-public class SparkCubeParquet extends AbstractApplication implements Serializable{
+public class SparkCubeParquet extends AbstractApplication implements Serializable {
 
     protected static final Logger logger = LoggerFactory.getLogger(SparkCubeParquet.class);
 
@@ -78,12 +77,12 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             .isRequired(true).withDescription("Paqruet output path").create(BatchConstants.ARG_OUTPUT);
     public static final Option OPTION_INPUT_PATH = OptionBuilder.withArgName(BatchConstants.ARG_INPUT).hasArg()
             .isRequired(true).withDescription("Cuboid files PATH").create(BatchConstants.ARG_INPUT);
-    public static final Option OPTION_COUNTER_PATH = OptionBuilder.withArgName(BatchConstants.ARG_COUNTER_OUPUT).hasArg()
-            .isRequired(true).withDescription("Counter output path").create(BatchConstants.ARG_COUNTER_OUPUT);
+    public static final Option OPTION_COUNTER_PATH = OptionBuilder.withArgName(BatchConstants.ARG_COUNTER_OUPUT)
+            .hasArg().isRequired(true).withDescription("Counter output path").create(BatchConstants.ARG_COUNTER_OUPUT);
 
     private Options options;
 
-    public SparkCubeParquet(){
+    public SparkCubeParquet() {
         options = new Options();
         options.addOption(OPTION_INPUT_PATH);
         options.addOption(OPTION_CUBE_NAME);
@@ -107,17 +106,18 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         final String outputPath = optionsHelper.getOptionValue(OPTION_OUTPUT_PATH);
         final String counterPath = optionsHelper.getOptionValue(OPTION_COUNTER_PATH);
 
-        Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1"), Text.class, Group.class};
+        Class[] kryoClassArray = new Class[] { Class.forName("scala.reflect.ClassTag$$anon$1"), Text.class,
+                Group.class };
 
-        SparkConf conf = new SparkConf().setAppName("Converting Parquet File for: " + cubeName + " segment " + segmentId);
+        SparkConf conf = new SparkConf()
+                .setAppName("Converting Parquet File for: " + cubeName + " segment " + segmentId);
         //serialization conf
         conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
         conf.set("spark.kryo.registrator", "org.apache.kylin.engine.spark.KylinKryoRegistrator");
         conf.set("spark.kryo.registrationRequired", "true").registerKryoClasses(kryoClassArray);
 
-
         KylinSparkJobListener jobListener = new KylinSparkJobListener();
-        try (JavaSparkContext sc = new JavaSparkContext(conf)){
+        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
             sc.sc().addSparkListener(jobListener);
 
             HadoopUtil.deletePath(sc.hadoopConfiguration(), new Path(outputPath));
@@ -147,10 +147,10 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             ParquetConvertor.generateTypeMap(baseCuboid, dimEncMap, cubeSegment.getCubeDesc(), colTypeMap, meaTypeMap);
             GroupWriteSupport.setSchema(schema, job.getConfiguration());
 
-            GenerateGroupRDDFunction groupPairFunction = new GenerateGroupRDDFunction(cubeName, cubeSegment.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap);
-
+            GenerateGroupRDDFunction groupPairFunction = new GenerateGroupRDDFunction(cubeName, cubeSegment.getUuid(),
+                    metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap);
 
-            logger.info("Schema: {}", schema.toString());
+            logger.info("Schema: {}", schema);
 
             // Read from cuboid and save to parquet
             for (int level = 0; level <= totalLevels; level++) {
@@ -162,7 +162,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             logger.info("HDFS: Number of bytes written={}", jobListener.metrics.getBytesWritten());
 
             Map<String, String> counterMap = Maps.newHashMap();
-            counterMap.put(ExecutableConstants.HDFS_BYTES_WRITTEN, String.valueOf(jobListener.metrics.getBytesWritten()));
+            counterMap.put(ExecutableConstants.HDFS_BYTES_WRITTEN,
+                    String.valueOf(jobListener.metrics.getBytesWritten()));
 
             // save counter to hdfs
             HadoopUtil.writeToSequenceFile(sc.hadoopConfiguration(), counterPath, counterMap);
@@ -170,10 +171,12 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
     }
 
-    protected void saveToParquet(JavaPairRDD<Text, Text> rdd, GenerateGroupRDDFunction groupRDDFunction, CubeSegment cubeSeg, String parquetOutput, int level, Job job, KylinConfig kylinConfig) throws IOException {
-        final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig, level);
+    protected void saveToParquet(JavaPairRDD<Text, Text> rdd, GenerateGroupRDDFunction groupRDDFunction,
+            CubeSegment cubeSeg, String parquetOutput, int level, Job job, KylinConfig kylinConfig) throws IOException {
+        final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig,
+                level);
 
-        logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping.toString());
+        logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping);
 
         String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(parquetOutput, level);
 
@@ -182,7 +185,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         CustomParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
         CustomParquetOutputFormat.setCuboidToPartitionMapping(job, cuboidToPartitionMapping);
 
-        JavaPairRDD<Void, Group> groupRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping)).mapToPair(groupRDDFunction);
+        JavaPairRDD<Void, Group> groupRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping))
+                .mapToPair(groupRDDFunction);
 
         groupRDD.saveAsNewAPIHadoopDataset(job.getConfiguration());
     }
@@ -201,7 +205,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
         @Override
         public int getPartition(Object key) {
-            Text textKey = (Text)key;
+            Text textKey = (Text) key;
             return mapping.getPartitionByKey(textKey.getBytes());
         }
     }
@@ -210,16 +214,19 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
         @Override
         public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
-            FileOutputCommitter committer = (FileOutputCommitter)this.getOutputCommitter(context);
+            FileOutputCommitter committer = (FileOutputCommitter) this.getOutputCommitter(context);
             TaskID taskId = context.getTaskAttemptID().getTaskID();
             int partition = taskId.getId();
 
-            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING));
+            CuboidToPartitionMapping mapping = CuboidToPartitionMapping
+                    .deserialize(context.getConfiguration().get(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING));
 
-            return new Path(committer.getWorkPath(), getUniqueFile(context, mapping.getPartitionFilePrefix(partition)+ "-" + getOutputName(context), extension));
+            return new Path(committer.getWorkPath(), getUniqueFile(context,
+                    mapping.getPartitionFilePrefix(partition) + "-" + getOutputName(context), extension));
         }
 
-        public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping) throws IOException {
+        public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping)
+                throws IOException {
             String jsonStr = cuboidToPartitionMapping.serialize();
 
             job.getConfiguration().set(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING, jsonStr);
@@ -227,7 +234,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
     }
 
     static class GenerateGroupRDDFunction implements PairFunction<Tuple2<Text, Text>, Void, Group> {
-        private volatile transient boolean initialized = false;
+        private transient volatile boolean initialized = false;
         private String cubeName;
         private String segmentId;
         private String metaUrl;
@@ -237,7 +244,9 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
         private transient ParquetConvertor convertor;
 
-        public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
+        public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl,
+                SerializableConfiguration conf, Map<TblColRef, String> colTypeMap,
+                Map<MeasureDesc, String> meaTypeMap) {
             this.cubeName = cubeName;
             this.segmentId = segmentId;
             this.metaUrl = metaurl;
@@ -253,9 +262,9 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
         @Override
         public Tuple2<Void, Group> call(Tuple2<Text, Text> tuple) throws Exception {
-            if (initialized == false) {
+            if (!initialized) {
                 synchronized (SparkCubeParquet.class) {
-                    if (initialized == false) {
+                    if (!initialized) {
                         init();
                         initialized = true;
                     }
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java
index a0c212c..88012c8 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java
@@ -113,7 +113,7 @@ public class DebugTomcatClassLoader extends ParallelWebappClassLoader {
             return sparkClassLoader.loadClass(name);
         }
         if (isParentCLPrecedent(name)) {
-            logger.debug("Skipping exempt class " + name + " - delegating directly to parent");
+            logger.trace("Skipping exempt class " + name + " - delegating directly to parent");
             return parent.loadClass(name);
         }
         return super.loadClass(name, resolve);
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
index 8fe211e..1cdfbc6 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
@@ -159,11 +159,11 @@ public class SparkClassLoader extends URLClassLoader {
             // Check whether the class has already been loaded:
             Class<?> clasz = findLoadedClass(name);
             if (clasz != null) {
-                logger.debug("Class " + name + " already loaded");
+                logger.trace("Class " + name + " already loaded");
             } else {
                 try {
                     // Try to find this class using the URLs passed to this ClassLoader
-                    logger.debug("Finding class: " + name);
+                    logger.trace("Finding class: " + name);
                     clasz = super.findClass(name);
                     if (clasz == null) {
                         logger.debug("cannot find class" + name);
diff --git a/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java b/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
index 5c2a839..e01c3c7 100644
--- a/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
+++ b/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
@@ -230,8 +230,8 @@ public class CubeMigrationCLI extends AbstractApplication {
     }
 
     protected void renameFoldersInHdfs(CubeInstance cube) throws IOException {
-        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStorageSystemPathBuilderClz());
-        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStorageSystemPathBuilderClz());
+        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStoragePathBuilder());
+        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStoragePathBuilder());
 
         for (CubeSegment segment : cube.getSegments()) {
 
diff --git a/webapp/app/js/model/cubeConfig.js b/webapp/app/js/model/cubeConfig.js
index 42e4c34..34358d7 100644
--- a/webapp/app/js/model/cubeConfig.js
+++ b/webapp/app/js/model/cubeConfig.js
@@ -29,7 +29,7 @@ KylinApp.constant('cubeConfig', {
   ],
   storageTypes: [
     {name: 'HBase', value: 2},
-    {name: 'Parquet', value: 4}
+    {name: 'Parquet (alpha)', value: 4}
   ],
   joinTypes: [
     {name: 'Left', value: 'left'},
@@ -204,4 +204,4 @@ KylinApp.constant('cubeConfig', {
       'left': '-12px'
     }
   }
-});
\ No newline at end of file
+});


[kylin] 04/06: KYLIN-3623 Convert cuboid to Parquet in MR


shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 5ea041c91e825a744e246935cc165b1addf45f75
Author: Yichen Zhou <zh...@gmail.com>
AuthorDate: Tue Nov 13 14:51:35 2018 +0800

    KYLIN-3623 Convert cuboid to Parquet in MR
---
 core-common/pom.xml                                |   2 +-
 .../org/apache/kylin/common/KylinConfigBase.java   |  16 +
 .../apache/kylin/cube/cuboid/CuboidScheduler.java  |  29 +-
 .../java/org/apache/kylin/gridtable/GTRecord.java  |   1 +
 .../kylin/engine/mr/common/BatchConstants.java     |   1 +
 .../java/org/apache/spark/sql/KylinSession.scala   |   1 -
 .../java/org/apache/spark/sql/SparderEnv.scala     |   8 -
 storage-parquet/pom.xml                            |  12 +
 .../kylin/storage/parquet/cube/CubeSparkRPC.java   |  12 +-
 .../storage/parquet/spark/ParquetPayload.java      |   1 +
 .../parquet/steps/ConvertToParquetReducer.java     |  98 ++++++
 .../parquet/steps/CuboidToPartitionMapping.java    | 165 ++++++++++
 .../storage/parquet/steps/MRCubeParquetJob.java    | 168 ++++++++++
 .../storage/parquet/steps/ParquetConvertor.java    | 283 +++++++++++++++++
 .../storage/parquet/steps/ParquetJobSteps.java     |   7 +-
 .../storage/parquet/steps/ParquetMROutput.java     |  78 ++++-
 .../storage/parquet/steps/ParquetMRSteps.java      |  59 ++++
 .../storage/parquet/steps/ParquetSparkOutput.java  |   4 +-
 .../storage/parquet/steps/ParquetSparkSteps.java   |   2 +-
 .../storage/parquet/steps/SparkCubeParquet.java    | 338 ++-------------------
 .../java/org/apache/kylin/ext/ItClassLoader.java   |   4 +-
 .../org/apache/kylin/ext/ItSparkClassLoader.java   |  10 +-
 .../org/apache/kylin/ext/SparkClassLoader.java     |  25 +-
 .../org/apache/kylin/ext/TomcatClassLoader.java    |  19 +-
 24 files changed, 966 insertions(+), 377 deletions(-)

diff --git a/core-common/pom.xml b/core-common/pom.xml
index 594e39b..9e1e3ac 100644
--- a/core-common/pom.xml
+++ b/core-common/pom.xml
@@ -104,4 +104,4 @@
             <scope>test</scope>
         </dependency>
     </dependencies>
-</project>
+</project>
\ No newline at end of file
diff --git a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
index 6092834..e3e3e03 100644
--- a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
+++ b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
@@ -1121,6 +1121,22 @@ abstract public class KylinConfigBase implements Serializable {
     }
 
     // ============================================================================
+    // STORAGE.Parquet
+    // ============================================================================
+
+    public float getParquetFileSizeMB() {
+        return Integer.parseInt(getOptional("kylin.storage.parquet.file-size-mb", "100"));
+    }
+
+    public int getParquetMinPartitions() {
+        return Integer.parseInt(getOptional("kylin.storage.parquet.min-partitions", "1"));
+    }
+
+    public int getParquetMaxPartitions() {
+        return Integer.parseInt(getOptional("kylin.storage.parquet.max-partitions", "5000"));
+    }
+
+    // ============================================================================
     // ENGINE.MR
     // ============================================================================
 
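
For reference, a minimal, self-contained sketch of how the three new Parquet storage options above behave. It uses a plain java.util.Properties object in place of KylinConfigBase.getOptional(); the class name and the overridden value are illustrative only, while the keys and defaults match the code above.

    import java.util.Properties;

    // Minimal sketch: mirrors the parsing of the new Parquet storage options,
    // using a plain Properties object instead of KylinConfigBase.getOptional().
    public class ParquetStorageOptionsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();   // stands in for kylin.properties
            props.setProperty("kylin.storage.parquet.file-size-mb", "128");

            float fileSizeMB = Float.parseFloat(
                    props.getProperty("kylin.storage.parquet.file-size-mb", "100"));
            int minPartitions = Integer.parseInt(
                    props.getProperty("kylin.storage.parquet.min-partitions", "1"));
            int maxPartitions = Integer.parseInt(
                    props.getProperty("kylin.storage.parquet.max-partitions", "5000"));

            System.out.println(fileSizeMB + " MB per Parquet file, partitions in ["
                    + minPartitions + ", " + maxPartitions + "]");
        }
    }
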
diff --git a/core-cube/src/main/java/org/apache/kylin/cube/cuboid/CuboidScheduler.java b/core-cube/src/main/java/org/apache/kylin/cube/cuboid/CuboidScheduler.java
index 17096f6..c1791e1 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/cuboid/CuboidScheduler.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/cuboid/CuboidScheduler.java
@@ -20,8 +20,10 @@ package org.apache.kylin.cube.cuboid;
 
 import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
+import com.google.common.collect.Maps;
 import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.cube.model.AggregationGroup;
 import org.apache.kylin.cube.model.CubeDesc;
@@ -71,6 +73,8 @@ abstract public class CuboidScheduler {
     
     private transient List<List<Long>> cuboidsByLayer;
 
+    private transient Map<Long, Integer> cuboidLayerMap;
+
     public long getBaseCuboidId() {
         return Cuboid.getBaseCuboidId(cubeDesc);
     }
@@ -125,7 +129,30 @@ abstract public class CuboidScheduler {
     public int getBuildLevel() {
         return getCuboidsByLayer().size() - 1;
     }
-    
+
+    public Map<Long, Integer> getCuboidLayerMap() {
+        if (cuboidLayerMap != null){
+            return cuboidLayerMap;
+        }
+        cuboidLayerMap = Maps.newHashMap();
+
+        if (cuboidsByLayer == null) {
+            cuboidsByLayer = getCuboidsByLayer();
+        }
+
+        for (int layerIndex = 0; layerIndex <= getBuildLevel(); layerIndex++){
+            List<Long> layeredCuboids = cuboidsByLayer.get(layerIndex);
+            for (Long cuboidId : layeredCuboids){
+                cuboidLayerMap.put(cuboidId, layerIndex);
+            }
+        }
+
+        int size = getAllCuboidIds().size();
+        int totalNum = cuboidLayerMap.size();
+        Preconditions.checkState(totalNum == size, "total Num: " + totalNum + " actual size: " + size);
+        return cuboidLayerMap;
+    }
+
     /** Returns the key for what this cuboid scheduler responsible for. */
     public String getCuboidCacheKey() {
         return cubeDesc.getName();
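
The new getCuboidLayerMap() simply inverts the layer-indexed cuboid list into a per-cuboid layer lookup. A minimal, self-contained sketch of that inversion follows; the cuboid ids are made up for illustration and the class name is hypothetical.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Minimal sketch: invert a layer-indexed cuboid list (as returned by
    // getCuboidsByLayer()) into a cuboid-id -> layer-index map, as
    // getCuboidLayerMap() does above. Cuboid ids are illustrative.
    public class CuboidLayerMapSketch {
        public static void main(String[] args) {
            List<List<Long>> cuboidsByLayer = Arrays.asList(
                    Arrays.asList(15L),             // layer 0: base cuboid
                    Arrays.asList(7L, 11L, 13L),    // layer 1
                    Arrays.asList(3L, 5L));         // layer 2

            Map<Long, Integer> cuboidLayerMap = new TreeMap<>();
            for (int layer = 0; layer < cuboidsByLayer.size(); layer++) {
                for (Long cuboidId : cuboidsByLayer.get(layer)) {
                    cuboidLayerMap.put(cuboidId, layer);
                }
            }
            // prints {3=2, 5=2, 7=1, 11=1, 13=1, 15=0}
            System.out.println(cuboidLayerMap);
        }
    }
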
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
index d7be088..24278c4 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
@@ -113,6 +113,7 @@ public class GTRecord implements Comparable<GTRecord> {
     }
 
     /** set record to the codes of specified values, reuse given space to hold the codes */
+    @SuppressWarnings("checkstyle:BooleanExpressionComplexity")
     public GTRecord setValuesParquet(ImmutableBitSet selectedCols, ByteArray space, Object... values) {
         assert selectedCols.cardinality() == values.length;
 
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/BatchConstants.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/BatchConstants.java
index 66da1b2..33264c8 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/BatchConstants.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/BatchConstants.java
@@ -107,6 +107,7 @@ public interface BatchConstants {
     String ARG_HBASE_CONF_PATH = "hbaseConfPath";
     String ARG_SHRUNKEN_DICT_PATH = "shrunkenDictPath";
     String ARG_COUNTER_OUPUT = "counterOutput";
+    String ARG_CUBOID_TO_PARTITION_MAPPING = "cuboidToPartitionMapping";
 
     /**
      * logger and counter
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala b/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala
index 20f86e5..72c4ba5 100644
--- a/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala
+++ b/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala
@@ -107,7 +107,6 @@ object KylinSession extends Logging {
         sparkContext.addSparkListener(new SparkListener {
           override def onApplicationEnd(applicationEnd: SparkListenerApplicationEnd): Unit = {
             SparkSession.setDefaultSession(null)
-            SparkSession.sqlListener.set(null)
           }
         })
         UdfManager.create(session)
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala b/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala
index ecaa5c8..209576d 100644
--- a/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala
+++ b/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala
@@ -57,14 +57,6 @@ object SparderEnv extends Logging {
     getSparkSession.sparkContext.conf.get(key)
   }
 
-  def getActiveJobs(): Int = {
-    SparderEnv.getSparkSession.sparkContext.jobProgressListener.activeJobs.size
-  }
-
-  def getFailedJobs(): Int = {
-    SparderEnv.getSparkSession.sparkContext.jobProgressListener.failedJobs.size
-  }
-
   def getAsyncResultCore: Int = {
     val sparkConf = getSparkSession.sparkContext.getConf
     val instances = sparkConf.get("spark.executor.instances").toInt
diff --git a/storage-parquet/pom.xml b/storage-parquet/pom.xml
index 3778031..88ad6f9 100644
--- a/storage-parquet/pom.xml
+++ b/storage-parquet/pom.xml
@@ -103,6 +103,18 @@
             <scope>provided</scope>
         </dependency>
 
+        <dependency>
+            <groupId>org.apache.parquet</groupId>
+            <artifactId>parquet-column</artifactId>
+            <version>1.8.1</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.parquet</groupId>
+            <artifactId>parquet-hadoop</artifactId>
+            <version>1.8.1</version>
+        </dependency>
+
     </dependencies>
 
     <build>
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
index 5009a51..62c294d 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
@@ -73,11 +73,6 @@ public class CubeSparkRPC implements IGTStorage {
 
         JobBuilderSupport jobBuilderSupport = new JobBuilderSupport(cubeSegment, "");
 
-<<<<<<< HEAD
-=======
-        String cubooidRootPath = jobBuilderSupport.getCuboidRootPath();
-
->>>>>>> 198041d63... KYLIN-3625 Init query
         List<List<Long>> layeredCuboids = cubeSegment.getCuboidScheduler().getCuboidsByLayer();
         int level = 0;
         for (List<Long> levelCuboids : layeredCuboids) {
@@ -87,13 +82,8 @@ public class CubeSparkRPC implements IGTStorage {
             level++;
         }
 
-<<<<<<< HEAD
-        String dataFolderName;
         String parquetRootPath = jobBuilderSupport.getParquetOutputPath();
-        dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(parquetRootPath, level) + "/" + cuboid.getId();
-=======
-        String dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(cubooidRootPath, level) + "/" + cuboid.getId();
->>>>>>> 198041d63... KYLIN-3625 Init query
+        String dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(parquetRootPath, level) + "/" + cuboid.getId();
 
         builder.setGtScanRequest(scanRequest.toByteArray()).setGtScanRequestId(scanReqId)
                 .setKylinProperties(KylinConfig.getInstanceFromEnv().exportAllToString())
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
index 9096679..3129b8e 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
@@ -37,6 +37,7 @@ public class ParquetPayload {
     private long startTime;
     private int storageType;
 
+    @SuppressWarnings("checkstyle:ParameterNumber")
     private ParquetPayload(byte[] gtScanRequest, String gtScanRequestId, String kylinProperties, String realizationId,
                            String segmentId, String dataFolderName, int maxRecordLength, List<Integer> parquetColumns,
                            boolean isUseII, String realizationType, String queryId, boolean spillEnabled, long maxScanBytes,
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java
new file mode 100644
index 0000000..c778ee0
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ConvertToParquetReducer.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+package org.apache.kylin.storage.parquet.steps;
+
+import com.google.common.collect.Maps;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.cube.kv.RowConstants;
+import org.apache.kylin.dimension.IDimensionEncodingMap;
+import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
+import org.apache.kylin.engine.mr.KylinReducer;
+import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
+import org.apache.kylin.engine.mr.common.BatchConstants;
+import org.apache.kylin.engine.mr.common.SerializableConfiguration;
+import org.apache.kylin.metadata.model.MeasureDesc;
+import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.parquet.example.data.Group;
+
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Created by Yichen on 11/14/18.
+ */
+public class ConvertToParquetReducer extends KylinReducer<Text, Text, NullWritable, Group> {
+    private ParquetConvertor convertor;
+    private MultipleOutputs<NullWritable, Group> mos;
+    private CubeSegment cubeSegment;
+
+    @Override
+    protected void doSetup(Context context) throws IOException, InterruptedException {
+        Configuration conf = context.getConfiguration();
+        super.bindCurrentConfiguration(conf);
+        mos = new MultipleOutputs(context);
+
+        KylinConfig kylinConfig = AbstractHadoopJob.loadKylinPropsAndMetadata();
+
+        String cubeName = conf.get(BatchConstants.CFG_CUBE_NAME);
+        String segmentId = conf.get(BatchConstants.CFG_CUBE_SEGMENT_ID);
+        CubeManager cubeManager = CubeManager.getInstance(kylinConfig);
+        CubeInstance cube = cubeManager.getCube(cubeName);
+        cubeSegment = cube.getSegmentById(segmentId);
+        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSegment.getCubeDesc());
+        final IDimensionEncodingMap dimEncMap = cubeSegment.getDimensionEncodingMap();
+        SerializableConfiguration sConf = new SerializableConfiguration(conf);
+
+        Map<TblColRef, String> colTypeMap = Maps.newHashMap();
+        Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
+        ParquetConvertor.generateTypeMap(baseCuboid, dimEncMap, cube.getDescriptor(), colTypeMap, meaTypeMap);
+        convertor = new ParquetConvertor(cubeName, segmentId, kylinConfig, sConf, colTypeMap, meaTypeMap);
+    }
+
+    @Override
+    protected void doReduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
+        long cuboidId = Bytes.toLong(key.getBytes(), RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
+        int layerNumber = cubeSegment.getCuboidScheduler().getCuboidLayerMap().get(cuboidId);
+        int partitionId = context.getTaskAttemptID().getTaskID().getId();
+
+        for (Text value : values) {
+            try {
+                Group group = convertor.parseValueToGroup(key, value);
+                String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel("", layerNumber)
+                        + "/" + ParquetJobSteps.getCuboidOutputFileName(cuboidId, partitionId);
+                mos.write(MRCubeParquetJob.BY_LAYER_OUTPUT, null, group, output);
+            } catch (IOException e){
+                throw new IOException(e);
+            }
+        }
+    }
+
+    @Override
+    protected void doCleanup(Context context) throws IOException, InterruptedException {
+        mos.close();
+    }
+}
\ No newline at end of file
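
The reducer above routes each record to a per-cuboid Parquet file via MultipleOutputs. A small sketch of how the relative output name is composed; the "level_N" directory name is only a hypothetical stand-in for whatever BatchCubingJobBuilder2.getCuboidOutputPathsByLevel() returns, while the file-name convention matches ParquetJobSteps.getCuboidOutputFileName() later in this commit.

    // Minimal sketch: compose the relative output name used with MultipleOutputs.
    // "level_" + layer is a placeholder for getCuboidOutputPathsByLevel("", layer).
    public class ReducerOutputNameSketch {
        static String cuboidOutputFileName(long cuboid, int partNum) {
            return "cuboid_" + cuboid + "_part" + partNum;   // same convention as ParquetJobSteps
        }

        public static void main(String[] args) {
            long cuboidId = 11L;
            int layerNumber = 1;
            int partitionId = 3;
            String levelDir = "level_" + layerNumber;        // hypothetical directory name
            String output = levelDir + "/" + cuboidOutputFileName(cuboidId, partitionId);
            System.out.println(output);                      // level_1/cuboid_11_part3
        }
    }
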
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java
new file mode 100644
index 0000000..7fcf95d
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/CuboidToPartitionMapping.java
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+package org.apache.kylin.storage.parquet.steps;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.common.util.JsonUtil;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.kv.RowConstants;
+import org.apache.kylin.engine.mr.common.CubeStatsReader;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * Created by Yichen on 11/12/18.
+ */
+public class CuboidToPartitionMapping implements Serializable {
+    private static final Logger logger = LoggerFactory.getLogger(CuboidToPartitionMapping.class);
+
+    private Map<Long, List<Integer>> cuboidPartitions;
+    private int partitionNum;
+
+    public CuboidToPartitionMapping(Map<Long, List<Integer>> cuboidPartitions) {
+        this.cuboidPartitions = cuboidPartitions;
+        int partitions = 0;
+        for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+            partitions = partitions + entry.getValue().size();
+        }
+        this.partitionNum = partitions;
+    }
+
+    public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig) throws IOException {
+        cuboidPartitions = Maps.newHashMap();
+
+        Set<Long> allCuboidIds = cubeSeg.getCuboidScheduler().getAllCuboidIds();
+
+        calculatePartitionId(cubeSeg, kylinConfig, allCuboidIds);
+    }
+
+    public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
+        cuboidPartitions = Maps.newHashMap();
+
+        List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
+
+        calculatePartitionId(cubeSeg, kylinConfig, layeredCuboids);
+    }
+
+    private void calculatePartitionId(CubeSegment cubeSeg, KylinConfig kylinConfig, Collection<Long> cuboidIds) throws IOException {
+        int position = 0;
+        CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
+        for (Long cuboidId : cuboidIds) {
+            int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
+            List<Integer> positions = Lists.newArrayListWithCapacity(partition);
+
+            for (int i = position; i < position + partition; i++) {
+                positions.add(i);
+            }
+
+            cuboidPartitions.put(cuboidId, positions);
+            position = position + partition;
+        }
+
+        this.partitionNum = position;
+    }
+
+    public String serialize() throws JsonProcessingException {
+        return JsonUtil.writeValueAsString(cuboidPartitions);
+    }
+
+    public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
+        Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
+        return new CuboidToPartitionMapping(cuboidPartitions);
+    }
+
+    public int getNumPartitions() {
+        return this.partitionNum;
+    }
+
+    public long getCuboidIdByPartition(int partition) {
+        for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+            if (entry.getValue().contains(partition)) {
+                return entry.getKey();
+            }
+        }
+
+        throw new IllegalArgumentException("No cuboidId for partition id: " + partition);
+    }
+
+    public int getPartitionByKey(byte[] key) {
+        long cuboidId = Bytes.toLong(key, RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
+        List<Integer> partitions = cuboidPartitions.get(cuboidId);
+        int partitionKey = mod(key, RowConstants.ROWKEY_SHARD_AND_CUBOID_LEN, key.length, partitions.size());
+
+        return partitions.get(partitionKey);
+    }
+
+    private int mod(byte[] src, int start, int end, int total) {
+        int sum = Bytes.hashBytes(src, start, end - start);
+        int mod = sum % total;
+        if (mod < 0)
+            mod += total;
+
+        return mod;
+    }
+
+    public int getPartitionNumForCuboidId(long cuboidId) {
+        return cuboidPartitions.get(cuboidId).size();
+    }
+
+    public String getPartitionFilePrefix(int partition) {
+        long cuboid = getCuboidIdByPartition(partition);
+        int partNum = partition % getPartitionNumForCuboidId(cuboid);
+        String prefix = ParquetJobSteps.getCuboidOutputFileName(cuboid, partNum);
+
+        return prefix;
+    }
+
+    private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
+        double cuboidSize = cubeStatsReader.estimateCuboidSize(cuboidId);
+        float rddCut = kylinConfig.getParquetFileSizeMB();
+        int partition = (int) (cuboidSize / rddCut);
+        partition = Math.max(kylinConfig.getParquetMinPartitions(), partition);
+        partition = Math.min(kylinConfig.getParquetMaxPartitions(), partition);
+
+        logger.info("cuboid:{}, est_size:{}, partitions:{}", cuboidId, cuboidSize, partition);
+        return partition;
+    }
+
+    @Override
+    public String toString() {
+        StringBuilder sb = new StringBuilder();
+        for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
+            sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
+        }
+
+        return sb.toString();
+    }
+}
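
The partition count per cuboid comes from a simple size-based estimate. A self-contained sketch of that arithmetic follows; the sizes are example values, while in the real code the size comes from CubeStatsReader and the limits from the kylin.storage.parquet.* options added earlier in this commit.

    // Minimal sketch: the partition estimate from estimateCuboidPartitionNum(),
    // reduced to its arithmetic. Inputs are illustrative values.
    public class PartitionEstimateSketch {
        static int estimatePartitions(double cuboidSizeMB, float fileSizeMB, int min, int max) {
            int partitions = (int) (cuboidSizeMB / fileSizeMB);
            partitions = Math.max(min, partitions);   // at least min-partitions
            partitions = Math.min(max, partitions);   // at most max-partitions
            return partitions;
        }

        public static void main(String[] args) {
            System.out.println(estimatePartitions(450.0, 100f, 1, 5000));  // 4
            System.out.println(estimatePartitions(30.0, 100f, 1, 5000));   // clamped up to 1
        }
    }
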
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java
new file mode 100644
index 0000000..7113a7a
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/MRCubeParquetJob.java
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.commons.cli.Options;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Partitioner;
+import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.dimension.IDimensionEncodingMap;
+import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
+import org.apache.kylin.engine.mr.common.BatchConstants;
+import org.apache.parquet.example.data.Group;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.parquet.hadoop.example.ExampleOutputFormat;
+import org.apache.parquet.schema.MessageType;
+
+import java.io.IOException;
+
+
+/**
+ * Created by Yichen on 11/9/18.
+ */
+public class MRCubeParquetJob extends AbstractHadoopJob {
+
+    protected static final Logger logger = LoggerFactory.getLogger(MRCubeParquetJob.class);
+
+    final static String BY_LAYER_OUTPUT = "ByLayer";
+    private Options options;
+
+    public MRCubeParquetJob(){
+        options = new Options();
+        options.addOption(OPTION_JOB_NAME);
+        options.addOption(OPTION_CUBE_NAME);
+        options.addOption(OPTION_SEGMENT_ID);
+        options.addOption(OPTION_INPUT_PATH);
+        options.addOption(OPTION_OUTPUT_PATH);
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+        parseOptions(options, args);
+
+        final Path inputPath = new Path(getOptionValue(OPTION_INPUT_PATH));
+        final Path outputPath = new Path(getOptionValue(OPTION_OUTPUT_PATH));
+        final String cubeName = getOptionValue(OPTION_CUBE_NAME);
+        logger.info("CubeName: ", cubeName);
+        final String segmentId = optionsHelper.getOptionValue(OPTION_SEGMENT_ID);
+        KylinConfig kylinConfig = KylinConfig.getInstanceFromEnv();
+        CubeManager cubeManager = CubeManager.getInstance(kylinConfig);
+        CubeInstance cube = cubeManager.getCube(cubeName);
+
+        CubeSegment cubeSegment = cube.getSegmentById(segmentId);
+        logger.info("Input path: {}", inputPath);
+        logger.info("Output path: {}", outputPath);
+
+        job = Job.getInstance(getConf(), getOptionValue(OPTION_JOB_NAME));
+        setJobClasspath(job, cube.getConfig());
+        CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSegment, kylinConfig);
+        String jsonStr = cuboidToPartitionMapping.serialize();
+        logger.info("Total Partition: {}", cuboidToPartitionMapping.getNumPartitions());
+
+        final IDimensionEncodingMap dimEncMap = cubeSegment.getDimensionEncodingMap();
+
+        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSegment.getCubeDesc());
+
+        MessageType schema = ParquetConvertor.cuboidToMessageType(baseCuboid, dimEncMap, cubeSegment.getCubeDesc());
+        logger.info("Schema: {}", schema.toString());
+
+        try {
+
+            job.getConfiguration().set(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING, jsonStr);
+
+
+            addInputDirs(inputPath.toString(), job);
+            FileOutputFormat.setOutputPath(job, outputPath);
+
+            job.setJobName("Converting Parquet File for: " + cubeName + " segment " + segmentId);
+            job.setInputFormatClass(SequenceFileInputFormat.class);
+            job.setMapOutputKeyClass(Text.class);
+            job.setMapOutputValueClass(Text.class);
+
+            job.setPartitionerClass(CuboidPartitioner.class);
+
+            job.setOutputKeyClass(NullWritable.class);
+            job.setOutputValueClass(Group.class);
+            job.setReducerClass(ConvertToParquetReducer.class);
+            job.setNumReduceTasks(cuboidToPartitionMapping.getNumPartitions());
+
+            MultipleOutputs.addNamedOutput(job, BY_LAYER_OUTPUT, ExampleOutputFormat.class, NullWritable.class, Group.class);
+
+            job.getConfiguration().set(BatchConstants.CFG_CUBE_NAME, cubeName);
+            job.getConfiguration().set(BatchConstants.CFG_CUBE_SEGMENT_ID, segmentId);
+            ExampleOutputFormat.setSchema(job, schema);
+            LazyOutputFormat.setOutputFormatClass(job, ExampleOutputFormat.class);
+            attachCubeMetadataWithDict(cube, job.getConfiguration());
+
+            this.deletePath(job.getConfiguration(), outputPath);
+            FileOutputFormat.setOutputPath(job, outputPath);
+            return waitForCompletion(job);
+
+        } finally {
+            if (job != null) {
+                cleanupTempConfFile(job.getConfiguration());
+            }
+        }
+    }
+
+    public static class CuboidPartitioner extends Partitioner<Text, Text> implements Configurable {
+        private Configuration conf;
+        private CuboidToPartitionMapping mapping;
+
+        @Override
+        public void setConf(Configuration configuration) {
+            this.conf = configuration;
+            try {
+                mapping = CuboidToPartitionMapping.deserialize(conf.get(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING));
+            } catch (IOException e) {
+                throw new RuntimeException(e);
+            }
+        }
+
+        @Override
+        public int getPartition(Text key, Text value, int numPartitions) {
+            return mapping.getPartitionByKey(key.getBytes());
+        }
+
+        @Override
+        public Configuration getConf() {
+            return conf;
+        }
+    }
+
+    public static void main(String[] args) throws Exception {
+        MRCubeParquetJob job = new MRCubeParquetJob();
+        int exitCode = ToolRunner.run(job, args);
+        System.exit(exitCode);
+    }
+}
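
CuboidPartitioner sends every rowkey of a cuboid to one of the reducers reserved for that cuboid. A self-contained sketch of the routing follows; the cuboid-to-reducer mapping, key bytes and hash are simplified stand-ins for the deserialized CuboidToPartitionMapping, the RowConstants offsets and Bytes.hashBytes().

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch: route a rowkey to a global reducer id, as
    // CuboidToPartitionMapping.getPartitionByKey() does. All values are illustrative.
    public class CuboidPartitionRoutingSketch {
        public static void main(String[] args) {
            // cuboid id -> global reducer ids reserved for that cuboid
            Map<Long, List<Integer>> cuboidPartitions = new HashMap<>();
            cuboidPartitions.put(15L, Arrays.asList(0, 1, 2));
            cuboidPartitions.put(7L, Arrays.asList(3));

            long cuboidId = 15L;                     // would be decoded from the rowkey bytes
            byte[] keyBody = {42, 7, 99};            // rowkey bytes after shard id and cuboid id
            List<Integer> partitions = cuboidPartitions.get(cuboidId);

            int hash = Arrays.hashCode(keyBody);     // stand-in for Bytes.hashBytes()
            int mod = hash % partitions.size();
            if (mod < 0) {
                mod += partitions.size();
            }
            System.out.println("reducer = " + partitions.get(mod));
        }
    }
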
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java
new file mode 100644
index 0000000..9b9578d
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetConvertor.java
@@ -0,0 +1,283 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.hadoop.io.Text;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.BytesUtil;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.cube.kv.RowKeyDecoder;
+import org.apache.kylin.cube.kv.RowKeyDecoderParquet;
+import org.apache.kylin.cube.model.CubeDesc;
+import org.apache.kylin.dimension.AbstractDateDimEnc;
+import org.apache.kylin.dimension.DimensionEncoding;
+import org.apache.kylin.dimension.FixedLenDimEnc;
+import org.apache.kylin.dimension.FixedLenHexDimEnc;
+import org.apache.kylin.dimension.IDimensionEncodingMap;
+import org.apache.kylin.engine.mr.common.SerializableConfiguration;
+import org.apache.kylin.measure.BufferedMeasureCodec;
+import org.apache.kylin.measure.MeasureIngester;
+import org.apache.kylin.measure.MeasureType;
+import org.apache.kylin.measure.basic.BasicMeasureType;
+import org.apache.kylin.measure.basic.BigDecimalIngester;
+import org.apache.kylin.measure.basic.DoubleIngester;
+import org.apache.kylin.measure.basic.LongIngester;
+import org.apache.kylin.metadata.datatype.BigDecimalSerializer;
+import org.apache.kylin.metadata.datatype.DataType;
+import org.apache.kylin.metadata.model.MeasureDesc;
+import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.parquet.example.data.Group;
+import org.apache.parquet.example.data.GroupFactory;
+import org.apache.parquet.example.data.simple.SimpleGroupFactory;
+import org.apache.parquet.hadoop.example.GroupWriteSupport;
+import org.apache.parquet.io.api.Binary;
+
+import org.apache.parquet.schema.MessageType;
+import org.apache.parquet.schema.OriginalType;
+import org.apache.parquet.schema.PrimitiveType;
+import org.apache.parquet.schema.Types;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Created by Yichen on 11/9/18.
+ */
+public class ParquetConvertor {
+    private static final Logger logger = LoggerFactory.getLogger(ParquetConvertor.class);
+
+    public static final String FIELD_CUBOID_ID = "cuboidId";
+    public static final String DATATYPE_DECIMAL = "decimal";
+    public static final String DATATYPE_INT = "int";
+    public static final String DATATYPE_LONG = "long";
+    public static final String DATATYPE_DOUBLE = "double";
+    public static final String DATATYPE_STRING = "string";
+    public static final String DATATYPE_BINARY = "binary";
+
+    private RowKeyDecoder decoder;
+    private BufferedMeasureCodec measureCodec;
+    private Map<TblColRef, String> colTypeMap;
+    private Map<MeasureDesc, String> meaTypeMap;
+    private BigDecimalSerializer serializer;
+    private GroupFactory factory;
+    private List<MeasureDesc> measureDescs;
+
+    public ParquetConvertor(String cubeName, String segmentId, KylinConfig kConfig, SerializableConfiguration sConf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap){
+        KylinConfig.setAndUnsetThreadLocalConfig(kConfig);
+
+        this.colTypeMap = colTypeMap;
+        this.meaTypeMap = meaTypeMap;
+        serializer = new BigDecimalSerializer(DataType.getType(DATATYPE_DECIMAL));
+
+        CubeInstance cubeInstance = CubeManager.getInstance(kConfig).getCube(cubeName);
+        CubeDesc cubeDesc = cubeInstance.getDescriptor();
+        CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
+        measureDescs = cubeDesc.getMeasures();
+        decoder = new RowKeyDecoderParquet(cubeSegment);
+        factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(sConf.get()));
+        measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
+    }
+
+    protected Group parseValueToGroup(Text rawKey, Text rawValue) throws IOException{
+        Group group = factory.newGroup();
+
+        long cuboidId = decoder.decode(rawKey.getBytes());
+        List<String> values = decoder.getValues();
+        List<TblColRef> columns = decoder.getColumns();
+
+        // for check
+        group.append(FIELD_CUBOID_ID, cuboidId);
+
+        for (int i = 0; i < columns.size(); i++) {
+            TblColRef column = columns.get(i);
+            parseColValue(group, column, values.get(i));
+        }
+
+        int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(rawValue.getBytes()));
+
+        int valueOffset = 0;
+        for (int i = 0; i < valueLengths.length; ++i) {
+            MeasureDesc measureDesc = measureDescs.get(i);
+            parseMeaValue(group, measureDesc, rawValue.getBytes(), valueOffset, valueLengths[i]);
+            valueOffset += valueLengths[i];
+        }
+
+        return group;
+    }
+
+    private void parseColValue(final Group group, final TblColRef colRef, final String value) {
+        if (value==null) {
+            logger.error("value is null");
+            return;
+        }
+        switch (colTypeMap.get(colRef)) {
+            case DATATYPE_INT:
+                group.append(colRef.getTableAlias() + "_" + colRef.getName(), Integer.valueOf(value));
+                break;
+            case DATATYPE_LONG:
+                group.append(colRef.getTableAlias() + "_" + colRef.getName(), Long.valueOf(value));
+                break;
+            default:
+                group.append(colRef.getTableAlias() + "_" + colRef.getName(), Binary.fromString(value));
+                break;
+        }
+    }
+
+    private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) throws IOException {
+        if (value==null) {
+            logger.error("value is null");
+            return;
+        }
+        switch (meaTypeMap.get(measureDesc)) {
+            case DATATYPE_LONG:
+                group.append(measureDesc.getName(), BytesUtil.readVLong(ByteBuffer.wrap(value, offset, length)));
+                break;
+            case DATATYPE_DOUBLE:
+                group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
+                break;
+            case DATATYPE_DECIMAL:
+                BigDecimal decimal = serializer.deserialize(ByteBuffer.wrap(value, offset, length));
+                decimal = decimal.setScale(4);
+                group.append(measureDesc.getName(), Binary.fromByteArray(decimal.unscaledValue().toByteArray()));
+                break;
+            default:
+                group.append(measureDesc.getName(), Binary.fromByteArray(value, offset, length));
+                break;
+        }
+    }
+
+    protected static MessageType cuboidToMessageType(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc) {
+        Types.MessageTypeBuilder builder = Types.buildMessage();
+
+        List<TblColRef> colRefs = cuboid.getColumns();
+
+        builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(FIELD_CUBOID_ID);
+
+        for (TblColRef colRef : colRefs) {
+            DimensionEncoding dimEnc = dimEncMap.get(colRef);
+            colToMessageType(dimEnc, colRef, builder);
+        }
+
+        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
+
+        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
+            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
+            DataType meaDataType = measureDesc.getFunction().getReturnDataType();
+            MeasureType measureType = measureDesc.getFunction().getMeasureType();
+
+            meaColToMessageType(measureType, measureDesc.getName(), meaDataType, aggrIngesters[i], builder);
+        }
+
+        return builder.named(String.valueOf(cuboid.getId()));
+    }
+
+    protected static void generateTypeMap(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap){
+        List<TblColRef> colRefs = cuboid.getColumns();
+
+        for (TblColRef colRef : colRefs) {
+            DimensionEncoding dimEnc = dimEncMap.get(colRef);
+            addColTypeMap(dimEnc, colRef, colTypeMap);
+        }
+
+        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
+
+        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
+            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
+            MeasureType measureType = measureDesc.getFunction().getMeasureType();
+            addMeaColTypeMap(measureType, measureDesc, aggrIngesters[i], meaTypeMap);
+        }
+    }
+    private static String getColName(TblColRef colRef) {
+        return colRef.getTableAlias() + "_" + colRef.getName();
+    }
+
+    private static void addColTypeMap(DimensionEncoding dimEnc, TblColRef colRef, Map<TblColRef, String> colTypeMap) {
+        if (dimEnc instanceof AbstractDateDimEnc) {
+            colTypeMap.put(colRef, DATATYPE_LONG);
+        } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
+            DataType colDataType = colRef.getType();
+            if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
+                colTypeMap.put(colRef, DATATYPE_LONG);
+            } else {
+                // stringFamily && default
+                colTypeMap.put(colRef, DATATYPE_STRING);
+            }
+        } else {
+            colTypeMap.put(colRef, DATATYPE_INT);
+        }
+    }
+
+    private static Map<MeasureDesc, String> addMeaColTypeMap(MeasureType measureType, MeasureDesc measureDesc, MeasureIngester aggrIngester, Map<MeasureDesc, String> meaTypeMap) {
+        if (measureType instanceof BasicMeasureType) {
+            MeasureIngester measureIngester = aggrIngester;
+            if (measureIngester instanceof LongIngester) {
+                meaTypeMap.put(measureDesc, DATATYPE_LONG);
+            } else if (measureIngester instanceof DoubleIngester) {
+                meaTypeMap.put(measureDesc, DATATYPE_DOUBLE);
+            } else if (measureIngester instanceof BigDecimalIngester) {
+                meaTypeMap.put(measureDesc, DATATYPE_DECIMAL);
+            } else {
+                meaTypeMap.put(measureDesc, DATATYPE_BINARY);
+            }
+        } else {
+            meaTypeMap.put(measureDesc, DATATYPE_BINARY);
+        }
+        return meaTypeMap;
+    }
+
+    private static void colToMessageType(DimensionEncoding dimEnc, TblColRef colRef, Types.MessageTypeBuilder builder) {
+        if (dimEnc instanceof AbstractDateDimEnc) {
+            builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+        } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
+            DataType colDataType = colRef.getType();
+            if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
+                builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
+            } else {
+                // stringFamily && default
+                builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
+            }
+        } else {
+            builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
+        }
+    }
+
+    private static void meaColToMessageType(MeasureType measureType, String meaDescName, DataType meaDataType, MeasureIngester aggrIngester, Types.MessageTypeBuilder builder) {
+        if (measureType instanceof BasicMeasureType) {
+            MeasureIngester measureIngester = aggrIngester;
+            if (measureIngester instanceof LongIngester) {
+                builder.required(PrimitiveType.PrimitiveTypeName.INT64).named(meaDescName);
+            } else if (measureIngester instanceof DoubleIngester) {
+                builder.required(PrimitiveType.PrimitiveTypeName.DOUBLE).named(meaDescName);
+            } else if (measureIngester instanceof BigDecimalIngester) {
+                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL).precision(meaDataType.getPrecision()).scale(meaDataType.getScale()).named(meaDescName);
+            } else {
+                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(meaDescName);
+            }
+        } else {
+            builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(meaDescName);
+        }
+    }
+}
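
Decimal measures end up as BINARY columns annotated as DECIMAL: parseMeaValue() rescales the value to scale 4 and writes its unscaled integer as bytes, while cuboidToMessageType() declares precision and scale from the measure's return type. A small self-contained sketch of that encoding; the value is illustrative.

    import java.math.BigDecimal;

    // Minimal sketch: encode a decimal measure the way parseMeaValue() does --
    // rescale, then write the unscaled integer as big-endian bytes.
    public class DecimalEncodingSketch {
        public static void main(String[] args) {
            BigDecimal measureValue = new BigDecimal("12.3");
            BigDecimal rescaled = measureValue.setScale(4);                 // 12.3000, scale 4 as in parseMeaValue()
            byte[] parquetBytes = rescaled.unscaledValue().toByteArray();   // unscaled value 123000 as bytes
            System.out.println(rescaled + " -> " + parquetBytes.length
                    + " bytes, unscaled=" + rescaled.unscaledValue());
        }
    }
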
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
index b47a03a..625737a 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetJobSteps.java
@@ -32,6 +32,9 @@ public abstract class ParquetJobSteps extends JobBuilderSupport {
         super(seg, null);
     }
 
-
-    abstract public AbstractExecutable createConvertToParquetStep(String jobId);
+    public static String getCuboidOutputFileName(long cuboid, int partNum) {
+        String fileName = "cuboid_" + cuboid + "_part" + partNum;
+        return fileName;
+    }
+    abstract public AbstractExecutable convertToParquetStep(String jobId);
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
index fe85e24..f54a7a0 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMROutput.java
@@ -26,11 +26,18 @@ import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.cuboid.CuboidScheduler;
 import org.apache.kylin.engine.mr.IMROutput2;
+import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
+import org.apache.kylin.engine.mr.common.MapReduceUtil;
+import org.apache.kylin.engine.mr.steps.HiveToBaseCuboidMapper;
+import org.apache.kylin.engine.mr.steps.InMemCuboidMapper;
+import org.apache.kylin.engine.mr.steps.NDCuboidMapper;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
-import org.apache.kylin.metadata.model.IEngineAware;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.util.List;
+import java.util.Map;
+
 
 /**
  * Created by Yichen on 10/16/18.
@@ -42,14 +49,7 @@ public class ParquetMROutput implements IMROutput2 {
     @Override
     public IMRBatchCubingOutputSide2 getBatchCubingOutputSide(CubeSegment seg) {
 
-        boolean useSpark = seg.getCubeDesc().getEngineType() == IEngineAware.ID_SPARK;
-
-
-        // TODO need refactor
-        if (!useSpark){
-            throw new RuntimeException("Cannot adapt to MR engine");
-        }
-        final ParquetJobSteps steps = new ParquetSparkSteps(seg);
+        final ParquetJobSteps steps = new ParquetMRSteps(seg);
 
         return new IMRBatchCubingOutputSide2() {
 
@@ -59,6 +59,7 @@ public class ParquetMROutput implements IMROutput2 {
 
             @Override
             public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
+                jobFlow.addTask(steps.convertToParquetStep(jobFlow.getId()));
             }
 
             @Override
@@ -83,21 +84,76 @@ public class ParquetMROutput implements IMROutput2 {
         @Override
         public void configureJobOutput(Job job, String output, CubeSegment segment, CuboidScheduler cuboidScheduler,
                                        int level) throws Exception {
+            int reducerNum = 1;
+            Class mapperClass = job.getMapperClass();
+
+            //allow user specially set config for base cuboid step
+            if (mapperClass == HiveToBaseCuboidMapper.class) {
+                for (Map.Entry<String, String> entry : segment.getConfig().getBaseCuboidMRConfigOverride().entrySet()) {
+                    job.getConfiguration().set(entry.getKey(), entry.getValue());
+                }
+            }
 
+            if (mapperClass == HiveToBaseCuboidMapper.class || mapperClass == NDCuboidMapper.class) {
+                reducerNum = MapReduceUtil.getLayeredCubingReduceTaskNum(segment, cuboidScheduler,
+                        AbstractHadoopJob.getTotalMapInputMB(job), level);
+            } else if (mapperClass == InMemCuboidMapper.class) {
+                reducerNum = MapReduceUtil.getInmemCubingReduceTaskNum(segment, cuboidScheduler);
+            }
             Path outputPath = new Path(output);
             FileOutputFormat.setOutputPath(job, outputPath);
             job.setOutputFormatClass(SequenceFileOutputFormat.class);
+            job.setNumReduceTasks(reducerNum);
             HadoopUtil.deletePath(job.getConfiguration(), outputPath);
         }
     }
 
     @Override
     public IMRBatchMergeOutputSide2 getBatchMergeOutputSide(CubeSegment seg) {
-        return null;
+        final ParquetJobSteps steps = new ParquetMRSteps(seg);
+        return new IMRBatchMergeOutputSide2() {
+            @Override
+            public void addStepPhase1_MergeDictionary(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public void addStepPhase2_BuildCube(CubeSegment set, List<CubeSegment> mergingSegments, DefaultChainedExecutable jobFlow) {
+                jobFlow.addTask(steps.convertToParquetStep(jobFlow.getId()));
+            }
+
+            @Override
+            public void addStepPhase3_Cleanup(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public IMRMergeOutputFormat getOuputFormat() {
+                return null;
+            }
+        };
     }
 
     @Override
     public IMRBatchOptimizeOutputSide2 getBatchOptimizeOutputSide(CubeSegment seg) {
-        return null;
+        final ParquetJobSteps steps = new ParquetMRSteps(seg);
+
+        return new IMRBatchOptimizeOutputSide2() {
+            @Override
+            public void addStepPhase2_CreateHTable(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
+                jobFlow.addTask(steps.convertToParquetStep(jobFlow.getId()));
+            }
+
+            @Override
+            public void addStepPhase4_Cleanup(DefaultChainedExecutable jobFlow) {
+            }
+
+            @Override
+            public void addStepPhase5_Cleanup(DefaultChainedExecutable jobFlow) {
+            }
+
+        };
     }
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
new file mode 100644
index 0000000..829f074
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kylin.storage.parquet.steps;
+
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.engine.mr.CubingJob;
+import org.apache.kylin.engine.mr.common.BatchConstants;
+import org.apache.kylin.engine.mr.common.MapReduceExecutable;
+import org.apache.kylin.job.constant.ExecutableConstants;
+import org.apache.kylin.job.execution.AbstractExecutable;
+
+/**
+ * Created by Yichen on 11/8/18.
+ */
+public class ParquetMRSteps extends ParquetJobSteps {
+    public ParquetMRSteps(CubeSegment seg) {
+        super(seg);
+    }
+
+    @Override
+    public AbstractExecutable convertToParquetStep(String jobId) {
+        String cuboidRootPath = getCuboidRootPath(jobId);
+        String inputPath = cuboidRootPath + (cuboidRootPath.endsWith("/") ? "" : "/") + "*";
+
+        final MapReduceExecutable mrExecutable = new MapReduceExecutable();
+        mrExecutable.setName(ExecutableConstants.STEP_NAME_CONVERT_CUBOID_TO_PARQUET);
+        mrExecutable.setMapReduceJobClass(MRCubeParquetJob.class);
+        StringBuilder cmd = new StringBuilder();
+
+        appendMapReduceParameters(cmd);
+        appendExecCmdParameters(cmd, BatchConstants.ARG_CUBE_NAME, seg.getRealization().getName());
+        appendExecCmdParameters(cmd, BatchConstants.ARG_INPUT, inputPath);
+        appendExecCmdParameters(cmd, BatchConstants.ARG_OUTPUT, getParquetOutputPath(jobId));
+        appendExecCmdParameters(cmd, BatchConstants.ARG_SEGMENT_ID, seg.getUuid());
+        appendExecCmdParameters(cmd, BatchConstants.ARG_JOB_NAME, "Kylin_Parquet_Generator_" + seg.getRealization().getName()+ "_Step");
+
+        mrExecutable.setMapReduceParams(cmd.toString());
+        mrExecutable.setName(ExecutableConstants.STEP_NAME_CONVERT_CUBOID_TO_PARQUET);
+        mrExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES);
+
+        return mrExecutable;
+    }
+
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
index 176afd0..794ede4 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkOutput.java
@@ -36,7 +36,7 @@ public class ParquetSparkOutput implements ISparkOutput {
 
             @Override
             public void addStepPhase3_BuildCube(DefaultChainedExecutable jobFlow) {
-                jobFlow.addTask(steps.createConvertToParquetStep(jobFlow.getId()));
+                jobFlow.addTask(steps.convertToParquetStep(jobFlow.getId()));
 
             }
 
@@ -58,7 +58,7 @@ public class ParquetSparkOutput implements ISparkOutput {
 
             @Override
             public void addStepPhase2_BuildCube(CubeSegment set, List<CubeSegment> mergingSegments, DefaultChainedExecutable jobFlow) {
-                jobFlow.addTask(steps.createConvertToParquetStep(jobFlow.getId()));
+                jobFlow.addTask(steps.convertToParquetStep(jobFlow.getId()));
 
             }
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
index 296bc68..65bd30a 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
@@ -35,7 +35,7 @@ public class ParquetSparkSteps extends ParquetJobSteps {
     }
 
     @Override
-    public AbstractExecutable createConvertToParquetStep(String jobId) {
+    public AbstractExecutable convertToParquetStep(String jobId) {
 
         String cuboidRootPath = getCuboidRootPath(jobId);
         String inputPath = cuboidRootPath + (cuboidRootPath.endsWith("/") ? "" : "/");
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
index def4d8d..4ad7cb3 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
@@ -17,14 +17,10 @@
  */
 package org.apache.kylin.storage.parquet.steps;
 
-import com.fasterxml.jackson.core.JsonProcessingException;
-import com.fasterxml.jackson.core.type.TypeReference;
-import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.OptionBuilder;
 import org.apache.commons.cli.Options;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
@@ -34,54 +30,26 @@ import org.apache.hadoop.mapreduce.TaskID;
 import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.AbstractApplication;
-import org.apache.kylin.common.util.ByteArray;
-import org.apache.kylin.common.util.Bytes;
-import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.common.util.HadoopUtil;
-import org.apache.kylin.common.util.JsonUtil;
 import org.apache.kylin.common.util.OptionsHelper;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.cube.CubeManager;
 import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.cuboid.Cuboid;
-import org.apache.kylin.cube.kv.RowConstants;
-import org.apache.kylin.cube.kv.RowKeyDecoder;
-import org.apache.kylin.cube.kv.RowKeyDecoderParquet;
-import org.apache.kylin.cube.model.CubeDesc;
-import org.apache.kylin.dimension.AbstractDateDimEnc;
-import org.apache.kylin.dimension.DimensionEncoding;
-import org.apache.kylin.dimension.FixedLenDimEnc;
-import org.apache.kylin.dimension.FixedLenHexDimEnc;
 import org.apache.kylin.dimension.IDimensionEncodingMap;
 import org.apache.kylin.engine.mr.BatchCubingJobBuilder2;
 import org.apache.kylin.engine.mr.common.AbstractHadoopJob;
 import org.apache.kylin.engine.mr.common.BatchConstants;
-import org.apache.kylin.engine.mr.common.CubeStatsReader;
 import org.apache.kylin.engine.mr.common.SerializableConfiguration;
 import org.apache.kylin.engine.spark.KylinSparkJobListener;
 import org.apache.kylin.engine.spark.SparkUtil;
 import org.apache.kylin.job.constant.ExecutableConstants;
-import org.apache.kylin.measure.BufferedMeasureCodec;
-import org.apache.kylin.measure.MeasureIngester;
-import org.apache.kylin.measure.MeasureType;
-import org.apache.kylin.measure.basic.BasicMeasureType;
-import org.apache.kylin.measure.basic.BigDecimalIngester;
-import org.apache.kylin.measure.basic.DoubleIngester;
-import org.apache.kylin.measure.basic.LongIngester;
-import org.apache.kylin.metadata.datatype.BigDecimalSerializer;
-import org.apache.kylin.metadata.datatype.DataType;
 import org.apache.kylin.metadata.model.MeasureDesc;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.apache.parquet.example.data.Group;
-import org.apache.parquet.example.data.GroupFactory;
-import org.apache.parquet.example.data.simple.SimpleGroupFactory;
 import org.apache.parquet.hadoop.ParquetOutputFormat;
 import org.apache.parquet.hadoop.example.GroupWriteSupport;
-import org.apache.parquet.io.api.Binary;
 import org.apache.parquet.schema.MessageType;
-import org.apache.parquet.schema.OriginalType;
-import org.apache.parquet.schema.PrimitiveType;
-import org.apache.parquet.schema.Types;
 import org.apache.spark.Partitioner;
 import org.apache.spark.SparkConf;
 import org.apache.spark.api.java.JavaPairRDD;
@@ -93,9 +61,6 @@ import scala.Tuple2;
 
 import java.io.IOException;
 import java.io.Serializable;
-import java.math.BigDecimal;
-import java.nio.ByteBuffer;
-import java.util.List;
 import java.util.Map;
 
 
@@ -172,11 +137,26 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             logger.info("Input path: {}", inputPath);
             logger.info("Output path: {}", outputPath);
 
+            final Map<TblColRef, String> colTypeMap = Maps.newHashMap();
+            final Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
+
+            final IDimensionEncodingMap dimEncMap = cubeSegment.getDimensionEncodingMap();
+
+            Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSegment.getCubeDesc());
+            MessageType schema = ParquetConvertor.cuboidToMessageType(baseCuboid, dimEncMap, cubeSegment.getCubeDesc());
+            ParquetConvertor.generateTypeMap(baseCuboid, dimEncMap, cubeSegment.getCubeDesc(), colTypeMap, meaTypeMap);
+            GroupWriteSupport.setSchema(schema, job.getConfiguration());
+
+            GenerateGroupRDDFunction groupPairFunction = new GenerateGroupRDDFunction(cubeName, cubeSegment.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap);
+
+
+            logger.info("Schema: {}", schema.toString());
+
             // Read from cuboid and save to parquet
             for (int level = 0; level <= totalLevels; level++) {
                 String cuboidPath = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(inputPath, level);
                 allRDDs[level] = SparkUtil.parseInputPath(cuboidPath, fs, sc, Text.class, Text.class);
-                saveToParquet(allRDDs[level], metaUrl, cubeName, cubeSegment, outputPath, level, job, envConfig);
+                saveToParquet(allRDDs[level], groupPairFunction, cubeSegment, outputPath, level, job, envConfig);
             }
 
             logger.info("HDFS: Number of bytes written={}", jobListener.metrics.getBytesWritten());
@@ -190,45 +170,28 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
 
     }
 
-    protected void saveToParquet(JavaPairRDD<Text, Text> rdd, String metaUrl, String cubeName, CubeSegment cubeSeg, String parquetOutput, int level, Job job, KylinConfig kylinConfig) throws Exception {
-        final IDimensionEncodingMap dimEncMap = cubeSeg.getDimensionEncodingMap();
-
-        Cuboid baseCuboid = Cuboid.getBaseCuboid(cubeSeg.getCubeDesc());
-
-        final Map<TblColRef, String> colTypeMap = Maps.newHashMap();
-        final Map<MeasureDesc, String> meaTypeMap = Maps.newHashMap();
-
-        MessageType schema = cuboidToMessageType(baseCuboid, dimEncMap, cubeSeg.getCubeDesc(), colTypeMap, meaTypeMap);
-
-        logger.info("Schema: {}", schema.toString());
-
+    protected void saveToParquet(JavaPairRDD<Text, Text> rdd, GenerateGroupRDDFunction groupRDDFunction, CubeSegment cubeSeg, String parquetOutput, int level, Job job, KylinConfig kylinConfig) throws IOException {
         final CuboidToPartitionMapping cuboidToPartitionMapping = new CuboidToPartitionMapping(cubeSeg, kylinConfig, level);
 
         logger.info("CuboidToPartitionMapping: {}", cuboidToPartitionMapping.toString());
 
-        JavaPairRDD<Text, Text> repartitionedRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping));
-
         String output = BatchCubingJobBuilder2.getCuboidOutputPathsByLevel(parquetOutput, level);
 
         job.setOutputFormatClass(CustomParquetOutputFormat.class);
-        GroupWriteSupport.setSchema(schema, job.getConfiguration());
         CustomParquetOutputFormat.setOutputPath(job, new Path(output));
         CustomParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
         CustomParquetOutputFormat.setCuboidToPartitionMapping(job, cuboidToPartitionMapping);
 
-        JavaPairRDD<Void, Group> groupRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping))
-                                                .mapToPair(new GenerateGroupRDDFunction(cubeName, cubeSeg.getUuid(), metaUrl, new SerializableConfiguration(job.getConfiguration()), colTypeMap, meaTypeMap));
+        JavaPairRDD<Void, Group> groupRDD = rdd.partitionBy(new CuboidPartitioner(cuboidToPartitionMapping)).mapToPair(groupRDDFunction);
 
         groupRDD.saveAsNewAPIHadoopDataset(job.getConfiguration());
     }
 
     static class CuboidPartitioner extends Partitioner {
         private CuboidToPartitionMapping mapping;
-        private boolean enableSharding;
 
-        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping, boolean enableSharding) {
+        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping) {
             this.mapping = cuboidToPartitionMapping;
-            this.enableSharding =enableSharding;
         }
 
         @Override
@@ -239,121 +202,11 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         @Override
         public int getPartition(Object key) {
             Text textKey = (Text)key;
-            return mapping.getPartitionForCuboidId(textKey.getBytes());
-        }
-    }
-
-    public static class CuboidToPartitionMapping implements Serializable {
-        private Map<Long, List<Integer>> cuboidPartitions;
-        private int partitionNum;
-
-        public CuboidToPartitionMapping(Map<Long, List<Integer>> cuboidPartitions) {
-            this.cuboidPartitions = cuboidPartitions;
-            int partitions = 0;
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                partitions = partitions + entry.getValue().size();
-            }
-            this.partitionNum = partitions;
-        }
-
-        public CuboidToPartitionMapping(CubeSegment cubeSeg, KylinConfig kylinConfig, int level) throws IOException {
-            cuboidPartitions = Maps.newHashMap();
-
-            List<Long> layeredCuboids = cubeSeg.getCuboidScheduler().getCuboidsByLayer().get(level);
-            CubeStatsReader cubeStatsReader = new CubeStatsReader(cubeSeg, kylinConfig);
-
-            int position = 0;
-            for (Long cuboidId : layeredCuboids) {
-                int partition = estimateCuboidPartitionNum(cuboidId, cubeStatsReader, kylinConfig);
-                List<Integer> positions = Lists.newArrayListWithCapacity(partition);
-
-                for (int i = position; i < position + partition; i++) {
-                    positions.add(i);
-                }
-
-                cuboidPartitions.put(cuboidId, positions);
-                position = position + partition;
-            }
-
-            this.partitionNum = position;
-        }
-
-        public String serialize() throws JsonProcessingException {
-            return JsonUtil.writeValueAsString(cuboidPartitions);
-        }
-
-        public static CuboidToPartitionMapping deserialize(String jsonMapping) throws IOException {
-            Map<Long, List<Integer>> cuboidPartitions = JsonUtil.readValue(jsonMapping, new TypeReference<Map<Long, List<Integer>>>() {});
-            return new CuboidToPartitionMapping(cuboidPartitions);
-        }
-
-        public int getNumPartitions() {
-            return this.partitionNum;
-        }
-
-        public long getCuboidIdByPartition(int partition) {
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                if (entry.getValue().contains(partition)) {
-                    return entry.getKey();
-                }
-            }
-
-            throw new IllegalArgumentException("No cuboidId for partition id: " + partition);
-        }
-
-        public int getPartitionForCuboidId(byte[] key) {
-            long cuboidId = Bytes.toLong(key, RowConstants.ROWKEY_SHARDID_LEN, RowConstants.ROWKEY_CUBOIDID_LEN);
-            List<Integer> partitions = cuboidPartitions.get(cuboidId);
-            int partitionKey = mod(key, RowConstants.ROWKEY_COL_DEFAULT_LENGTH, key.length, partitions.size());
-
-            return partitions.get(partitionKey);
-        }
-
-        private int mod(byte[] src, int start, int end, int total) {
-            int sum = Bytes.hashBytes(src, start, end - start);
-            int mod = sum % total;
-            if (mod < 0)
-                mod += total;
-
-            return mod;
-        }
-
-        public int getPartitionNumForCuboidId(long cuboidId) {
-            return cuboidPartitions.get(cuboidId).size();
-        }
-
-        public String getPartitionFilePrefix(int partition) {
-            String prefix = "cuboid_";
-            long cuboid = getCuboidIdByPartition(partition);
-            int partNum = partition % getPartitionNumForCuboidId(cuboid);
-            prefix = prefix + cuboid + "_part" + partNum;
-
-            return prefix;
-        }
-
-        private int estimateCuboidPartitionNum(long cuboidId, CubeStatsReader cubeStatsReader, KylinConfig kylinConfig) {
-            double cuboidSize = cubeStatsReader.estimateCuboidSize(cuboidId);
-            float rddCut = kylinConfig.getSparkRDDPartitionCutMB();
-            int partition = (int) (cuboidSize / (rddCut * 10));
-            partition = Math.max(kylinConfig.getSparkMinPartition(), partition);
-            partition = Math.min(kylinConfig.getSparkMaxPartition(), partition);
-            logger.info("cuboid:{}, est_size:{}, partitions:{}", cuboidId, cuboidSize, partition);
-            return partition;
-        }
-
-        @Override
-        public String toString() {
-            StringBuilder sb = new StringBuilder();
-            for (Map.Entry<Long, List<Integer>> entry : cuboidPartitions.entrySet()) {
-                sb.append("cuboidId:").append(entry.getKey()).append(" [").append(StringUtils.join(entry.getValue(), ",")).append("]\n");
-            }
-
-            return sb.toString();
+            return mapping.getPartitionByKey(textKey.getBytes());
         }
     }
 
     public static class CustomParquetOutputFormat extends ParquetOutputFormat {
-        public static final String CUBOID_TO_PARTITION_MAPPING = "cuboidToPartitionMapping";
 
         @Override
         public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
@@ -361,7 +214,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             TaskID taskId = context.getTaskAttemptID().getTaskID();
             int partition = taskId.getId();
 
-            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(CUBOID_TO_PARTITION_MAPPING));
+            CuboidToPartitionMapping mapping = CuboidToPartitionMapping.deserialize(context.getConfiguration().get(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING));
 
             return new Path(committer.getWorkPath(), getUniqueFile(context, mapping.getPartitionFilePrefix(partition)+ "-" + getOutputName(context), extension));
         }
@@ -369,7 +222,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         public static void setCuboidToPartitionMapping(Job job, CuboidToPartitionMapping cuboidToPartitionMapping) throws IOException {
             String jsonStr = cuboidToPartitionMapping.serialize();
 
-            job.getConfiguration().set(CUBOID_TO_PARTITION_MAPPING, jsonStr);
+            job.getConfiguration().set(BatchConstants.ARG_CUBOID_TO_PARTITION_MAPPING, jsonStr);
         }
     }
 
@@ -379,14 +232,10 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         private String segmentId;
         private String metaUrl;
         private SerializableConfiguration conf;
-        private List<MeasureDesc> measureDescs;
-        private RowKeyDecoder decoder;
         private Map<TblColRef, String> colTypeMap;
         private Map<MeasureDesc, String> meaTypeMap;
-        private GroupFactory factory;
-        private BufferedMeasureCodec measureCodec;
-        private BigDecimalSerializer serializer;
-        private int count = 0;
+
+        private transient ParquetConvertor convertor;
 
         public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
             this.cubeName = cubeName;
@@ -398,17 +247,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         }
 
         private void init() {
-            KylinConfig kConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
-            KylinConfig.setAndUnsetThreadLocalConfig(kConfig);
-            CubeInstance cubeInstance = CubeManager.getInstance(kConfig).getCube(cubeName);
-            CubeDesc cubeDesc = cubeInstance.getDescriptor();
-            CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
-            measureDescs = cubeDesc.getMeasures();
-            decoder = new RowKeyDecoderParquet(cubeSegment);
-            factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
-            measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
-            serializer = new BigDecimalSerializer(DataType.getType("decimal"));
-            initialized = true;
+            KylinConfig kylinConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
+            convertor = new ParquetConvertor(cubeName, segmentId, kylinConfig, conf, colTypeMap, meaTypeMap);
         }
 
         @Override
@@ -421,133 +261,9 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
                     }
                 }
             }
-
-            long cuboid = decoder.decode4Parquet(tuple._1.getBytes());
-            List<String> values = decoder.getValues();
-            List<TblColRef> columns = decoder.getColumns();
-
-            Group group = factory.newGroup();
-
-            // for check
-            group.append("cuboidId", cuboid);
-
-            for (int i = 0; i < columns.size(); i++) {
-                TblColRef column = columns.get(i);
-                parseColValue(group, column, values.get(i));
-            }
-
-            count ++;
-
-            byte[] encodedBytes = tuple._2().getBytes();
-            int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(encodedBytes));
-
-            int valueOffset = 0;
-            for (int i = 0; i < valueLengths.length; ++i) {
-                MeasureDesc measureDesc = measureDescs.get(i);
-                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i]);
-                valueOffset += valueLengths[i];
-            }
+            Group group = convertor.parseValueToGroup(tuple._1(), tuple._2());
 
             return new Tuple2<>(null, group);
         }
-
-        private void parseColValue(final Group group, final TblColRef colRef, final String value) {
-            if (value==null) {
-                logger.info("value is null");
-                return;
-            }
-            switch (colTypeMap.get(colRef)) {
-                case "int":
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Integer.valueOf(value));
-                    break;
-                case "long":
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Long.valueOf(value));
-                    break;
-                default:
-                    group.append(colRef.getTableAlias() + "_" + colRef.getName(), Binary.fromString(value));
-                    break;
-            }
-        }
-
-        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) throws IOException {
-            if (value==null) {
-                logger.info("value is null");
-                return;
-            }
-            switch (meaTypeMap.get(measureDesc)) {
-                case "long":
-                    group.append(measureDesc.getName(), BytesUtil.readVLong(ByteBuffer.wrap(value, offset, length)));
-                    break;
-                case "double":
-                    group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
-                    break;
-                case "decimal":
-                    BigDecimal decimal = serializer.deserialize(ByteBuffer.wrap(value, offset, length));
-                    decimal = decimal.setScale(4);
-                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(decimal.unscaledValue().toByteArray()));
-                    break;
-                default:
-                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
-                    break;
-            }
-        }
-    }
-
-    private MessageType cuboidToMessageType(Cuboid cuboid, IDimensionEncodingMap dimEncMap, CubeDesc cubeDesc, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
-        Types.MessageTypeBuilder builder = Types.buildMessage();
-
-        List<TblColRef> colRefs = cuboid.getColumns();
-
-        builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named("cuboidId");
-
-        for (TblColRef colRef : colRefs) {
-            DimensionEncoding dimEnc = dimEncMap.get(colRef);
-
-            if (dimEnc instanceof AbstractDateDimEnc) {
-                builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
-                colTypeMap.put(colRef, "long");
-            } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
-                org.apache.kylin.metadata.datatype.DataType colDataType = colRef.getType();
-                builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
-                colTypeMap.put(colRef, "string");
-            } else {
-                builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
-                colTypeMap.put(colRef, "int");
-            }
-        }
-
-        MeasureIngester[] aggrIngesters = MeasureIngester.create(cubeDesc.getMeasures());
-
-        for (int i = 0; i < cubeDesc.getMeasures().size(); i++) {
-            MeasureDesc measureDesc = cubeDesc.getMeasures().get(i);
-            org.apache.kylin.metadata.datatype.DataType meaDataType = measureDesc.getFunction().getReturnDataType();
-            MeasureType measureType = measureDesc.getFunction().getMeasureType();
-
-            if (measureType instanceof BasicMeasureType) {
-                MeasureIngester measureIngester = aggrIngesters[i];
-                if (measureIngester instanceof LongIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.INT64).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "long");
-                } else if (measureIngester instanceof DoubleIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.DOUBLE).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "double");
-                } else if (measureIngester instanceof BigDecimalIngester) {
-                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.DECIMAL).precision(meaDataType.getPrecision()).scale(meaDataType.getScale()).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "decimal");
-                } else {
-                    builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
-                    meaTypeMap.put(measureDesc, "binary");
-                }
-            } else {
-                builder.required(PrimitiveType.PrimitiveTypeName.BINARY).named(measureDesc.getName());
-                meaTypeMap.put(measureDesc, "binary");
-            }
-        }
-
-        return builder.named(String.valueOf(cuboid.getId()));
-    }
-
-    private String getColName(TblColRef colRef) {
-        return colRef.getTableAlias() + "_" + colRef.getName();
     }
 }
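
Editor's note on the refactor above: GenerateGroupRDDFunction now keeps its ParquetConvertor in a transient field and builds it lazily inside call(), so only the lightweight constructor arguments travel with the Spark closure and the convertor is re-created on each executor. Below is a minimal sketch of that pattern; the field, constructor, and method names are taken from the hunks above, while the volatile "initialized" flag and the synchronized block are assumptions, since that part of the class is not shown in this diff. Imports are omitted; the types are the same ones SparkCubeParquet already imports.

    // Sketch only -- lazy, executor-side construction of the ParquetConvertor.
    // "initialized" and the locking are assumed; the rest mirrors the diff.
    static class GenerateGroupRDDFunction implements PairFunction<Tuple2<Text, Text>, Void, Group> {
        private String cubeName;
        private String segmentId;
        private String metaUrl;
        private SerializableConfiguration conf;
        private Map<TblColRef, String> colTypeMap;
        private Map<MeasureDesc, String> meaTypeMap;

        private transient volatile boolean initialized = false; // assumed, not shown in the hunk
        private transient ParquetConvertor convertor;           // rebuilt on each executor

        GenerateGroupRDDFunction(String cubeName, String segmentId, String metaUrl, SerializableConfiguration conf,
                                 Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
            this.cubeName = cubeName;
            this.segmentId = segmentId;
            this.metaUrl = metaUrl;
            this.conf = conf;
            this.colTypeMap = colTypeMap;
            this.meaTypeMap = meaTypeMap;
        }

        private void init() {
            KylinConfig kylinConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(conf, metaUrl);
            convertor = new ParquetConvertor(cubeName, segmentId, kylinConfig, conf, colTypeMap, meaTypeMap);
        }

        @Override
        public Tuple2<Void, Group> call(Tuple2<Text, Text> tuple) throws Exception {
            if (!initialized) {                  // first record seen by this executor
                synchronized (this) {
                    if (!initialized) {
                        init();
                        initialized = true;
                    }
                }
            }
            return new Tuple2<>(null, convertor.parseValueToGroup(tuple._1(), tuple._2()));
        }
    }
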
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java
index 0590999..e0a9508 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java
@@ -79,9 +79,9 @@ public class ItClassLoader extends URLClassLoader {
                 e.printStackTrace();
             }
         }
-        String spark_home = System.getenv("SPARK_HOME");
+        String sparkHome = System.getenv("SPARK_HOME");
         try {
-            File sparkJar = findFile(spark_home + "/jars", "spark-yarn_.*.jar");
+            File sparkJar = findFile(sparkHome + "/jars", "spark-yarn_.*.jar");
             addURL(sparkJar.toURI().toURL());
             addURL(new File("../examples/test_case_data/sandbox").toURI().toURL());
         } catch (MalformedURLException e) {
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java
index c69ef9c..afca915 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java
@@ -66,15 +66,15 @@ public class ItSparkClassLoader extends URLClassLoader {
     }
 
     public void init() throws MalformedURLException {
-        String spark_home = System.getenv("SPARK_HOME");
-        if (spark_home == null) {
-            spark_home = System.getProperty("SPARK_HOME");
-            if (spark_home == null) {
+        String sparkHome = System.getenv("SPARK_HOME");
+        if (sparkHome == null) {
+            sparkHome = System.getProperty("SPARK_HOME");
+            if (sparkHome == null) {
                 throw new RuntimeException(
                         "Spark home not found; set it explicitly or use the SPARK_HOME environment variable.");
             }
         }
-        File file = new File(spark_home + "/jars");
+        File file = new File(sparkHome + "/jars");
         File[] jars = file.listFiles();
         for (File jar : jars) {
             addURL(jar.toURI().toURL());
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
index dba782b..8fe211e 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
@@ -62,25 +62,25 @@ public class SparkClassLoader extends URLClassLoader {
     private static Logger logger = LoggerFactory.getLogger(SparkClassLoader.class);
 
     static {
-        String sparkclassloader_spark_cl_preempt_classes = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_CLASSES");
-        if (!StringUtils.isEmpty(sparkclassloader_spark_cl_preempt_classes)) {
-            SPARK_CL_PREEMPT_CLASSES = StringUtils.split(sparkclassloader_spark_cl_preempt_classes, ",");
+        String sparkclassloaderSparkClPreemptClasses = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_CLASSES");
+        if (!StringUtils.isEmpty(sparkclassloaderSparkClPreemptClasses)) {
+            SPARK_CL_PREEMPT_CLASSES = StringUtils.split(sparkclassloaderSparkClPreemptClasses, ",");
         }
 
-        String sparkclassloader_spark_cl_preempt_files = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_FILES");
-        if (!StringUtils.isEmpty(sparkclassloader_spark_cl_preempt_files)) {
-            SPARK_CL_PREEMPT_FILES = StringUtils.split(sparkclassloader_spark_cl_preempt_files, ",");
+        String sparkclassloaderSparkClPreemptFiles = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_FILES");
+        if (!StringUtils.isEmpty(sparkclassloaderSparkClPreemptFiles)) {
+            SPARK_CL_PREEMPT_FILES = StringUtils.split(sparkclassloaderSparkClPreemptFiles, ",");
         }
 
-        String sparkclassloader_this_cl_precedent_classes = System.getenv("SPARKCLASSLOADER_THIS_CL_PRECEDENT_CLASSES");
-        if (!StringUtils.isEmpty(sparkclassloader_this_cl_precedent_classes)) {
-            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloader_this_cl_precedent_classes, ",");
+        String sparkclassloaderThisClPrecedentClasses = System.getenv("SPARKCLASSLOADER_THIS_CL_PRECEDENT_CLASSES");
+        if (!StringUtils.isEmpty(sparkclassloaderThisClPrecedentClasses)) {
+            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloaderThisClPrecedentClasses, ",");
         }
 
-        String sparkclassloader_parent_cl_precedent_classes = System
+        String sparkclassloaderParentClPrecedentClasses = System
                 .getenv("SPARKCLASSLOADER_PARENT_CL_PRECEDENT_CLASSES");
-        if (!StringUtils.isEmpty(sparkclassloader_parent_cl_precedent_classes)) {
-            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloader_parent_cl_precedent_classes, ",");
+        if (!StringUtils.isEmpty(sparkclassloaderParentClPrecedentClasses)) {
+            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloaderParentClPrecedentClasses, ",");
         }
 
         try {
@@ -111,6 +111,7 @@ public class SparkClassLoader extends URLClassLoader {
         init();
     }
 
+    @SuppressWarnings("checkstyle:LocalVariableName")
     public void init() throws MalformedURLException {
         String spark_home = System.getenv("SPARK_HOME");
         if (spark_home == null) {
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java
index 89717ec..be0ecd9 100644
--- a/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java
@@ -48,21 +48,21 @@ public class TomcatClassLoader extends ParallelWebappClassLoader {
     private static final Set<String> wontFindClasses = new HashSet<>();
 
     static {
-        String tomcatclassloader_parent_cl_precedent_classes = System
+        String tomcatclassloaderParentClPrecedentClasses = System
                 .getenv("TOMCATCLASSLOADER_PARENT_CL_PRECEDENT_CLASSES");
-        if (!StringUtils.isEmpty(tomcatclassloader_parent_cl_precedent_classes)) {
-            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloader_parent_cl_precedent_classes, ",");
+        if (!StringUtils.isEmpty(tomcatclassloaderParentClPrecedentClasses)) {
+            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloaderParentClPrecedentClasses, ",");
         }
 
-        String tomcatclassloader_this_cl_precedent_classes = System
+        String tomcatclassloaderThisClPrecedentClasses = System
                 .getenv("TOMCATCLASSLOADER_THIS_CL_PRECEDENT_CLASSES");
-        if (!StringUtils.isEmpty(tomcatclassloader_this_cl_precedent_classes)) {
-            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloader_this_cl_precedent_classes, ",");
+        if (!StringUtils.isEmpty(tomcatclassloaderThisClPrecedentClasses)) {
+            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloaderThisClPrecedentClasses, ",");
         }
 
-        String tomcatclassloader_codegen_classes = System.getenv("TOMCATCLASSLOADER_CODEGEN_CLASSES");
-        if (!StringUtils.isEmpty(tomcatclassloader_codegen_classes)) {
-            CODEGEN_CLASSES = StringUtils.split(tomcatclassloader_codegen_classes, ",");
+        String tomcatclassloaderCodegenClasses = System.getenv("TOMCATCLASSLOADER_CODEGEN_CLASSES");
+        if (!StringUtils.isEmpty(tomcatclassloaderCodegenClasses)) {
+            CODEGEN_CLASSES = StringUtils.split(tomcatclassloaderCodegenClasses, ",");
         }
 
         wontFindClasses.add("Class");
@@ -98,6 +98,7 @@ public class TomcatClassLoader extends ParallelWebappClassLoader {
         init();
     }
 
+    @SuppressWarnings("checkstyle:LocalVariableName")
     public void init() {
         String spark_home = System.getenv("SPARK_HOME");
         try {


[kylin] 05/06: KYLIN-3626 Allow customization for storage path

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 126ab092ef059824f6e2e8ba2209adb1e2fcf9cc
Author: chao long <wa...@qq.com>
AuthorDate: Fri Nov 30 08:21:09 2018 +0800

    KYLIN-3626 Allow customization for storage path
---
 .../org/apache/kylin/common/KylinConfigBase.java   |  4 ++
 .../storage/path/DefaultStoragePathBuilder.java    | 49 ++++++++++++++++
 .../kylin/storage/path/IStoragePathBuilder.java    | 31 +++++++++++
 .../kylin/storage/path/S3ReversePathBuilder.java   | 65 ++++++++++++++++++++++
 .../apache/kylin/engine/mr/JobBuilderSupport.java  | 47 +++++++---------
 .../apache/kylin/rest/job/StorageCleanupJob.java   | 13 +++--
 .../apache/kylin/source/hive/HiveInputBase.java    | 11 ++--
 .../org/apache/kylin/source/hive/HiveMRInput.java  | 10 +++-
 .../apache/kylin/source/hive/HiveSparkInput.java   | 11 ++--
 .../apache/kylin/source/hive/HiveMRInputTest.java  |  6 +-
 .../apache/kylin/source/jdbc/JdbcHiveMRInput.java  |  2 +-
 .../apache/kylin/source/kafka/KafkaMRInput.java    | 11 +++-
 .../apache/kylin/source/kafka/KafkaSparkInput.java |  7 ++-
 .../storage/hbase/lookup/HBaseLookupMRSteps.java   |  6 +-
 .../hbase/steps/HDFSPathGarbageCollectionStep.java |  7 ++-
 .../kylin/storage/hbase/util/CubeMigrationCLI.java |  9 ++-
 .../storage/hbase/util/StorageCleanupJob.java      |  8 ++-
 .../storage/parquet/steps/ParquetMRSteps.java      |  2 +-
 .../storage/parquet/steps/ParquetSparkSteps.java   |  2 +-
 .../org/apache/kylin/tool/CubeMigrationCLI.java    | 10 +++-
 20 files changed, 245 insertions(+), 66 deletions(-)

diff --git a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
index e3e3e03..2633ddf 100644
--- a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
+++ b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
@@ -1863,4 +1863,8 @@ abstract public class KylinConfigBase implements Serializable {
     public String getJdbcSourceAdaptor() {
         return getOptional("kylin.source.jdbc.adaptor");
     }
+
+    public String getStorageSystemPathBuilderClz() {
+        return getOptional("storage.path.builder", "org.apache.kylin.storage.path.DefaultStoragePathBuilder");
+    }
 }
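
Editor's note: the path-builder implementation is selected through this new property, falling back to DefaultStoragePathBuilder when unset. For example, to switch to the S3-oriented layout added later in this commit, one would set in kylin.properties:

    storage.path.builder=org.apache.kylin.storage.path.S3ReversePathBuilder
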
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java b/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java
new file mode 100644
index 0000000..1e50ba0
--- /dev/null
+++ b/core-storage/src/main/java/org/apache/kylin/storage/path/DefaultStoragePathBuilder.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.path;
+
+import org.apache.kylin.cube.CubeSegment;
+
+public class DefaultStoragePathBuilder implements IStoragePathBuilder {
+
+    @Override
+    public String getJobWorkingDir(String workingDir, String jobId) {
+        if (!workingDir.endsWith(SLASH)) {
+            workingDir = workingDir + SLASH;
+        }
+
+        return workingDir + "kylin-" + jobId;
+    }
+
+    @Override
+    public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId) {
+        String jobWorkingDir = getJobWorkingDir(cubeSegment.getConfig().getHdfsWorkingDirectory(), jobId);
+
+        return jobWorkingDir + SLASH + cubeSegment.getRealization().getName();
+    }
+
+    @Override
+    public String getRealizationFinalDataPath(CubeSegment cubeSegment) {
+        String workingDir = cubeSegment.getConfig().getHdfsWorkingDirectory();
+        String cubeName = cubeSegment.getRealization().getName();
+        String segName = cubeSegment.getName();
+
+        return workingDir + SLASH + cubeName + SLASH + segName;
+    }
+}
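
Editor's note: to make the default layout concrete, with a hypothetical working directory of hdfs://cluster/kylin and job id a1b2c3, the builder above produces:

    getJobWorkingDir(...)            -> hdfs://cluster/kylin/kylin-a1b2c3
    getJobRealizationRootPath(...)   -> hdfs://cluster/kylin/kylin-a1b2c3/<cube_name>
    getRealizationFinalDataPath(...) -> hdfs://cluster/kylin/<cube_name>/<segment_name>

where <cube_name> and <segment_name> stand for cubeSegment.getRealization().getName() and cubeSegment.getName().
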
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java b/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java
new file mode 100644
index 0000000..7a70f98
--- /dev/null
+++ b/core-storage/src/main/java/org/apache/kylin/storage/path/IStoragePathBuilder.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.path;
+
+import org.apache.kylin.cube.CubeSegment;
+
+public interface IStoragePathBuilder {
+    public static final String SLASH = "/";
+
+    public String getJobWorkingDir(String workingDir, String jobId);
+
+    public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId);
+
+    public String getRealizationFinalDataPath(CubeSegment cubeSegment);
+}
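
Editor's note: any class implementing this interface can be plugged in through the storage.path.builder property shown earlier. Purely as an illustration (not part of this change set), a builder that groups job output under a per-date prefix might look like the sketch below; it reuses only calls already used by DefaultStoragePathBuilder.

    package org.apache.kylin.storage.path;

    import java.time.LocalDate;

    import org.apache.kylin.cube.CubeSegment;

    // Hypothetical example: prefixes job working directories with the current date.
    public class DatePrefixedStoragePathBuilder implements IStoragePathBuilder {

        @Override
        public String getJobWorkingDir(String workingDir, String jobId) {
            String dir = workingDir.endsWith(SLASH) ? workingDir : workingDir + SLASH;
            return dir + LocalDate.now() + SLASH + "kylin-" + jobId;
        }

        @Override
        public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId) {
            return getJobWorkingDir(cubeSegment.getConfig().getHdfsWorkingDirectory(), jobId)
                    + SLASH + cubeSegment.getRealization().getName();
        }

        @Override
        public String getRealizationFinalDataPath(CubeSegment cubeSegment) {
            return cubeSegment.getConfig().getHdfsWorkingDirectory()
                    + SLASH + cubeSegment.getRealization().getName()
                    + SLASH + cubeSegment.getName();
        }
    }
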
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/path/S3ReversePathBuilder.java b/core-storage/src/main/java/org/apache/kylin/storage/path/S3ReversePathBuilder.java
new file mode 100644
index 0000000..5b01ad3
--- /dev/null
+++ b/core-storage/src/main/java/org/apache/kylin/storage/path/S3ReversePathBuilder.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.path;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kylin.cube.CubeSegment;
+
+public class S3ReversePathBuilder implements IStoragePathBuilder {
+
+    protected String prefix;
+    protected String bucketName;
+
+    protected String getReverseMetaDir(String workingDir) {
+        int prefixIndex = workingDir.indexOf("//") + 2;
+        int dirIndex = workingDir.indexOf(SLASH, prefixIndex);
+
+        this.prefix = workingDir.substring(0, prefixIndex);
+        this.bucketName = workingDir.substring(prefixIndex, dirIndex);
+
+        String[] dirs = workingDir.substring(dirIndex + 1).split(SLASH);
+        ArrayUtils.reverse(dirs);
+
+        return StringUtils.join(dirs, SLASH);
+    }
+
+    @Override
+    public String getJobWorkingDir(String workingDir, String jobId) {
+        String reverseMetaDir = getReverseMetaDir(workingDir);
+
+        return this.prefix + this.bucketName + SLASH + "kylin-" + jobId + SLASH + reverseMetaDir;
+    }
+
+    @Override
+    public String getJobRealizationRootPath(CubeSegment cubeSegment, String jobId) {
+        String jobWorkingDir = getJobWorkingDir(cubeSegment.getConfig().getHdfsWorkingDirectory(), jobId);
+
+        return jobWorkingDir + SLASH + cubeSegment.getRealization().getName();
+    }
+
+    @Override
+    public String getRealizationFinalDataPath(CubeSegment cubeSegment) {
+        String reverseMetaDir = getReverseMetaDir(cubeSegment.getConfig().getHdfsWorkingDirectory());
+        String cubeName = cubeSegment.getRealization().getName();
+        String segName = cubeSegment.getName();
+
+        return this.prefix + this.bucketName + SLASH + segName + SLASH + cubeName + SLASH + reverseMetaDir;
+    }
+}
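
Editor's note: for comparison with the default builder, with the same hypothetical job id a1b2c3 and a working directory of s3://mybucket/kylin/metadata, the reverse builder yields:

    getJobWorkingDir(...)            -> s3://mybucket/kylin-a1b2c3/metadata/kylin
    getJobRealizationRootPath(...)   -> s3://mybucket/kylin-a1b2c3/metadata/kylin/<cube_name>
    getRealizationFinalDataPath(...) -> s3://mybucket/<segment_name>/<cube_name>/metadata/kylin

The variable parts of the path (job id, segment, cube) move to the front of the object key while the bucket name stays first so the URI remains valid; presumably this is meant to spread writes across S3 key prefixes.
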
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
index fd11c99..8525dcc 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
@@ -27,6 +27,7 @@ import java.util.regex.Pattern;
 
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.StorageURL;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.cuboid.CuboidModeEnum;
 import org.apache.kylin.cube.model.CubeDesc;
@@ -50,6 +51,9 @@ import org.apache.kylin.job.engine.JobEngineConfig;
 import org.apache.kylin.metadata.model.TblColRef;
 
 import com.google.common.base.Preconditions;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
+
+import static org.apache.kylin.storage.path.IStoragePathBuilder.SLASH;
 
 /**
  * Hold reusable steps for builders.
@@ -59,6 +63,7 @@ public class JobBuilderSupport {
     final protected JobEngineConfig config;
     final protected CubeSegment seg;
     final protected String submitter;
+    final protected IStoragePathBuilder storagePathBuilder;
 
     final public static String LayeredCuboidFolderPrefix = "level_";
 
@@ -72,6 +77,8 @@ public class JobBuilderSupport {
         this.config = new JobEngineConfig(seg.getConfig());
         this.seg = seg;
         this.submitter = submitter;
+        String pathBuilderClz = seg.getConfig().getStorageSystemPathBuilderClz();
+        this.storagePathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(pathBuilderClz);
     }
 
     public MapReduceExecutable createFactDistinctColumnsStep(String jobId) {
@@ -272,11 +279,11 @@ public class JobBuilderSupport {
     // ============================================================================
 
     public String getJobWorkingDir(String jobId) {
-        return getJobWorkingDir(config, jobId);
+        return storagePathBuilder.getJobWorkingDir(config.getConfig().getHdfsWorkingDirectory(), jobId);
     }
 
     public String getRealizationRootPath(String jobId) {
-        return getJobWorkingDir(jobId) + "/" + seg.getRealization().getName();
+        return storagePathBuilder.getJobRealizationRootPath(seg, jobId);
     }
 
     public String getCuboidRootPath(String jobId) {
@@ -311,7 +318,7 @@ public class JobBuilderSupport {
     }
 
     public String getStatisticsPath(String jobId) {
-        return getRealizationRootPath(jobId) + "/fact_distinct_columns/" + BatchConstants.CFG_OUTPUT_STATISTICS;
+        return getFactDistinctColumnsPath(jobId) + SLASH + BatchConstants.CFG_OUTPUT_STATISTICS;
     }
 
     public String getShrunkenDictionaryPath(String jobId) {
@@ -346,29 +353,24 @@ public class JobBuilderSupport {
         return getRealizationRootPath(jobId) + "/counter";
     }
 
-    public String getParquetOutputPath(String jobId) {
-        return getRealizationRootPath(jobId) + "/parquet/";
+    public String getParquetOutputPath() {
+        return storagePathBuilder.getRealizationFinalDataPath(seg) + "/";
     }
 
-    public String getParquetOutputPath() {
-        return getParquetOutputPath(seg.getLastBuildJobID());
+    public String getDumpMetadataPath(String jobId) {
+        return getRealizationRootPath(jobId) + "/metadata";
+    }
+
+    public String getSegmentMetadataUrl(KylinConfig kylinConfig, String jobId) {
+        Map<String, String> param = new HashMap<>();
+        param.put("path", getDumpMetadataPath(jobId));
+        return new StorageURL(kylinConfig.getMetadataUrl().getIdentifier(), "hdfs", param).toString();
     }
 
     // ============================================================================
     // static methods also shared by other job flow participant
     // ----------------------------------------------------------------------------
 
-    public static String getJobWorkingDir(JobEngineConfig conf, String jobId) {
-        return getJobWorkingDir(conf.getHdfsWorkingDirectory(), jobId);
-    }
-
-    public static String getJobWorkingDir(String hdfsDir, String jobId) {
-        if (!hdfsDir.endsWith("/")) {
-            hdfsDir = hdfsDir + "/";
-        }
-        return hdfsDir + "kylin-" + jobId;
-    }
-
     public static StringBuilder appendExecCmdParameters(StringBuilder buf, String paraName, String paraValue) {
         return buf.append(" -").append(paraName).append(" ").append(paraValue);
     }
@@ -389,10 +391,6 @@ public class JobBuilderSupport {
         return cuboidRootPath + PathNameCuboidInMem;
     }
 
-    public String getDumpMetadataPath(String jobId) {
-        return getRealizationRootPath(jobId) + "/metadata";
-    }
-
     public static String extractJobIDFromPath(String path) {
         Matcher matcher = JOB_NAME_PATTERN.matcher(path);
         // check the first occurrence
@@ -403,9 +401,4 @@ public class JobBuilderSupport {
         }
     }
 
-    public String getSegmentMetadataUrl(KylinConfig kylinConfig, String jobId) {
-        Map<String, String> param = new HashMap<>();
-        param.put("path", getDumpMetadataPath(jobId));
-        return new StorageURL(kylinConfig.getMetadataUrl().getIdentifier(), "hdfs", param).toString();
-    }
 }
diff --git a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
index b73e916..6900ca7 100755
--- a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.AbstractApplication;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.CliCommandExecutor;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.common.util.HiveCmdBuilder;
@@ -49,7 +50,6 @@ import org.apache.kylin.common.util.Pair;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.cube.CubeManager;
 import org.apache.kylin.cube.CubeSegment;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.job.engine.JobEngineConfig;
 import org.apache.kylin.job.execution.AbstractExecutable;
 import org.apache.kylin.job.execution.ExecutableManager;
@@ -57,6 +57,7 @@ import org.apache.kylin.job.execution.ExecutableState;
 import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.source.ISourceMetadataExplorer;
 import org.apache.kylin.source.SourceManager;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -81,7 +82,8 @@ public class StorageCleanupJob extends AbstractApplication {
     final protected FileSystem hbaseFs;
     final protected FileSystem defaultFs;
     final protected ExecutableManager executableManager;
-    
+    final protected IStoragePathBuilder pathBuilder;
+
     protected boolean delete = false;
     protected boolean force = false;
     
@@ -100,6 +102,7 @@ public class StorageCleanupJob extends AbstractApplication {
         this.defaultFs = defaultFs;
         this.hbaseFs = hbaseFs;
         this.executableManager = ExecutableManager.getInstance(config);
+        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
     }
 
     public void setDelete(boolean delete) {
@@ -248,7 +251,7 @@ public class StorageCleanupJob extends AbstractApplication {
         for (String jobId : allJobs) {
             final ExecutableState state = executableManager.getOutput(jobId).getState();
             if (!state.isFinalState()) {
-                String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobId);
+                String path = pathBuilder.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobId);
                 allHdfsPathsNeedToBeDeleted.remove(path);
                 logger.info("Skip " + path + " from deletion list, as the path belongs to job " + jobId
                         + " with status " + state);
@@ -260,7 +263,7 @@ public class StorageCleanupJob extends AbstractApplication {
             for (CubeSegment seg : cube.getSegments()) {
                 String jobUuid = seg.getLastBuildJobID();
                 if (jobUuid != null && jobUuid.equals("") == false) {
-                    String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobUuid);
+                    String path = pathBuilder.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobUuid);
                     allHdfsPathsNeedToBeDeleted.remove(path);
                     logger.info("Skip " + path + " from deletion list, as the path belongs to segment " + seg
                             + " of cube " + cube.getName());
@@ -415,7 +418,7 @@ public class StorageCleanupJob extends AbstractApplication {
             String segmentId = uuid.replace("_", "-");
 
             if (segmentId2JobId.containsKey(segmentId)) {
-                String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(),
+                String path = pathBuilder.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(),
                         segmentId2JobId.get(segmentId)) + "/" + tableToDelete;
                 Path externalDataPath = new Path(path);
                 if (defaultFs.exists(externalDataPath)) {
diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveInputBase.java b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveInputBase.java
index c55015b..081457a 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveInputBase.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveInputBase.java
@@ -29,7 +29,6 @@ import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.common.util.HiveCmdBuilder;
 import org.apache.kylin.cube.model.CubeDesc;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.engine.mr.steps.CubingExecutableUtil;
 import org.apache.kylin.job.JoinedFlatTable;
 import org.apache.kylin.job.common.ShellExecutable;
@@ -41,6 +40,7 @@ import org.apache.kylin.metadata.model.DataModelDesc;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.apache.kylin.metadata.model.JoinTableDesc;
 import org.apache.kylin.metadata.model.TableDesc;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -58,11 +58,11 @@ public class HiveInputBase {
         return String.format(Locale.ROOT, "%s.%s", database, tableName).toUpperCase(Locale.ROOT);
     }
 
-    protected void addStepPhase1_DoCreateFlatTable(DefaultChainedExecutable jobFlow, String hdfsWorkingDir,
+    protected void addStepPhase1_DoCreateFlatTable(DefaultChainedExecutable jobFlow, String hdfsWorkingDir, IStoragePathBuilder pathBuilder,
             IJoinedFlatTableDesc flatTableDesc, String flatTableDatabase) {
         final String cubeName = CubingExecutableUtil.getCubeName(jobFlow.getParams());
         final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
-        final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+        final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
         jobFlow.addTask(createFlatHiveTableStep(hiveInitStatements, jobWorkingDir, cubeName, flatTableDesc));
     }
@@ -142,9 +142,10 @@ public class HiveInputBase {
         return createIntermediateTableHql.toString();
     }
 
-    protected static String getJobWorkingDir(DefaultChainedExecutable jobFlow, String hdfsWorkingDir) {
+    protected static String getJobWorkingDir(DefaultChainedExecutable jobFlow, String hdfsWorkingDir, IStoragePathBuilder pathBuilder) {
+
+        String jobWorkingDir = pathBuilder.getJobWorkingDir(hdfsWorkingDir, jobFlow.getId());
 
-        String jobWorkingDir = JobBuilderSupport.getJobWorkingDir(hdfsWorkingDir, jobFlow.getId());
         if (KylinConfig.getInstanceFromEnv().getHiveTableDirCreateFirst()) {
             // Create work dir to avoid hive create it,
             // the difference is that the owners are different.
diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
index d6b85ed..3c9434b 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveMRInput.java
@@ -29,6 +29,7 @@ import org.apache.hive.hcatalog.data.HCatRecord;
 import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;
 import org.apache.hive.hcatalog.mapreduce.HCatSplit;
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.common.util.StringUtil;
 import org.apache.kylin.cube.CubeInstance;
@@ -43,6 +44,7 @@ import org.apache.kylin.job.execution.ExecutableManager;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.apache.kylin.metadata.model.ISegment;
 import org.apache.kylin.metadata.model.TableDesc;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -116,6 +118,7 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
         final protected IJoinedFlatTableDesc flatDesc;
         final protected String flatTableDatabase;
         final protected String hdfsWorkingDir;
+        final protected IStoragePathBuilder pathBuilder;
 
         List<String> hiveViewIntermediateTables = Lists.newArrayList();
 
@@ -124,6 +127,7 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
             this.flatDesc = flatDesc;
             this.flatTableDatabase = config.getHiveDatabaseForIntermediateTable();
             this.hdfsWorkingDir = config.getHdfsWorkingDirectory();
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
         }
 
         @Override
@@ -150,14 +154,14 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
         protected void addStepPhase1_DoCreateFlatTable(DefaultChainedExecutable jobFlow) {
             final String cubeName = CubingExecutableUtil.getCubeName(jobFlow.getParams());
             final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             jobFlow.addTask(createFlatHiveTableStep(hiveInitStatements, jobWorkingDir, cubeName, flatDesc));
         }
 
         protected void addStepPhase1_DoMaterializeLookupTable(DefaultChainedExecutable jobFlow) {
             final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             AbstractExecutable task = createLookupHiveViewMaterializationStep(hiveInitStatements, jobWorkingDir,
                     flatDesc, hiveViewIntermediateTables, jobFlow.getId());
@@ -168,7 +172,7 @@ public class HiveMRInput extends HiveInputBase implements IMRInput {
 
         @Override
         public void addStepPhase4_Cleanup(DefaultChainedExecutable jobFlow) {
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             org.apache.kylin.source.hive.GarbageCollectionStep step = new org.apache.kylin.source.hive.GarbageCollectionStep();
             step.setName(ExecutableConstants.STEP_NAME_HIVE_CLEANUP);
diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
index d710db7..031548c8 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/HiveSparkInput.java
@@ -22,6 +22,7 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.StringUtil;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.cube.CubeManager;
@@ -33,6 +34,7 @@ import org.apache.kylin.job.execution.AbstractExecutable;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.apache.kylin.metadata.model.ISegment;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -63,7 +65,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
         final protected IJoinedFlatTableDesc flatDesc;
         final protected String flatTableDatabase;
         final protected String hdfsWorkingDir;
-
+        final protected IStoragePathBuilder pathBuilder;
         List<String> hiveViewIntermediateTables = Lists.newArrayList();
 
         public BatchCubingInputSide(IJoinedFlatTableDesc flatDesc) {
@@ -71,6 +73,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
             this.flatDesc = flatDesc;
             this.flatTableDatabase = config.getHiveDatabaseForIntermediateTable();
             this.hdfsWorkingDir = config.getHdfsWorkingDirectory();
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
         }
 
         @Override
@@ -81,7 +84,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
             final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
 
             // create flat table first
-            addStepPhase1_DoCreateFlatTable(jobFlow, hdfsWorkingDir, flatDesc, flatTableDatabase);
+            addStepPhase1_DoCreateFlatTable(jobFlow, hdfsWorkingDir, pathBuilder, flatDesc, flatTableDatabase);
 
             // then count and redistribute
             if (cubeConfig.isHiveRedistributeEnabled()) {
@@ -95,7 +98,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
 
         protected void addStepPhase1_DoMaterializeLookupTable(DefaultChainedExecutable jobFlow) {
             final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             AbstractExecutable task = createLookupHiveViewMaterializationStep(hiveInitStatements, jobWorkingDir,
                     flatDesc, hiveViewIntermediateTables, jobFlow.getId());
@@ -106,7 +109,7 @@ public class HiveSparkInput extends HiveInputBase implements ISparkInput {
 
         @Override
         public void addStepPhase4_Cleanup(DefaultChainedExecutable jobFlow) {
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             GarbageCollectionStep step = new GarbageCollectionStep();
             step.setName(ExecutableConstants.STEP_NAME_HIVE_CLEANUP);
diff --git a/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java b/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
index cc1b5e1..24e91ae 100644
--- a/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
+++ b/source-hive/src/test/java/org/apache/kylin/source/hive/HiveMRInputTest.java
@@ -28,9 +28,11 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.KylinConfig.SetAndUnsetThreadLocalConfig;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.RandomUtil;
 import org.apache.kylin.common.util.StringUtil;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -44,11 +46,11 @@ public class HiveMRInputTest {
         try (SetAndUnsetThreadLocalConfig autoUnset = KylinConfig.setAndUnsetThreadLocalConfig(kylinConfig)) {
             when(kylinConfig.getHiveTableDirCreateFirst()).thenReturn(true);
             when(kylinConfig.getHdfsWorkingDirectory()).thenReturn("/tmp/kylin/");
+            when(kylinConfig.getStorageSystemPathBuilderClz()).thenReturn("org.apache.kylin.storage.path.DefaultStoragePathBuilder");
             DefaultChainedExecutable defaultChainedExecutable = mock(DefaultChainedExecutable.class);
             defaultChainedExecutable.setId(RandomUtil.randomUUID().toString());
 
-            String jobWorkingDir = HiveInputBase.getJobWorkingDir(defaultChainedExecutable,
-                    KylinConfig.getInstanceFromEnv().getHdfsWorkingDirectory());
+            String jobWorkingDir = HiveInputBase.getJobWorkingDir(defaultChainedExecutable, KylinConfig.getInstanceFromEnv().getHdfsWorkingDirectory(), (IStoragePathBuilder) ClassUtil.newInstance(KylinConfig.getInstanceFromEnv().getStorageSystemPathBuilderClz()));
             jobWorkDirPath = new Path(jobWorkingDir);
             Assert.assertTrue(fileSystem.exists(jobWorkDirPath));
         } finally {
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveMRInput.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveMRInput.java
index 3460dd2..9a2b7dc 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveMRInput.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveMRInput.java
@@ -67,7 +67,7 @@ public class JdbcHiveMRInput extends HiveMRInput {
         protected void addStepPhase1_DoCreateFlatTable(DefaultChainedExecutable jobFlow) {
             final String cubeName = CubingExecutableUtil.getCubeName(jobFlow.getParams());
             final String hiveInitStatements = JoinedFlatTable.generateHiveInitStatements(flatTableDatabase);
-            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir);
+            final String jobWorkingDir = getJobWorkingDir(jobFlow, hdfsWorkingDir, pathBuilder);
 
             jobFlow.addTask(createSqoopToFlatHiveStep(jobWorkingDir, cubeName));
             jobFlow.addTask(createFlatHiveTableFromFiles(hiveInitStatements, jobWorkingDir));
diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
index 1c94f9c..68f87ff 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaMRInput.java
@@ -32,11 +32,11 @@ import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
 import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.model.CubeDesc;
 import org.apache.kylin.cube.model.CubeJoinedFlatTableDesc;
 import org.apache.kylin.engine.mr.IMRInput;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.engine.mr.common.BatchConstants;
 import org.apache.kylin.job.JoinedFlatTable;
 import org.apache.kylin.job.engine.JobEngineConfig;
@@ -45,6 +45,7 @@ import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.apache.kylin.metadata.model.ISegment;
 import org.apache.kylin.metadata.model.TableDesc;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -76,10 +77,12 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
         private final CubeSegment cubeSegment;
         private final JobEngineConfig conf;
         private String delimiter = BatchConstants.SEQUENCE_FILE_DEFAULT_DELIMITER;
+        private final IStoragePathBuilder pathBuilder;
 
         public KafkaTableInputFormat(CubeSegment cubeSegment, JobEngineConfig conf) {
             this.cubeSegment = cubeSegment;
             this.conf = conf;
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(conf.getConfig().getStorageSystemPathBuilderClz());
         }
 
         @Override
@@ -88,7 +91,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
             String jobId = job.getConfiguration().get(BatchConstants.ARG_CUBING_JOB_ID);
             IJoinedFlatTableDesc flatHiveTableDesc = new CubeJoinedFlatTableDesc(cubeSegment);
             String inputPath = JoinedFlatTable.getTableDir(flatHiveTableDesc,
-                    JobBuilderSupport.getJobWorkingDir(conf, jobId));
+                    pathBuilder.getJobWorkingDir(conf.getHdfsWorkingDirectory(), jobId));
             try {
                 FileInputFormat.addInputPath(job, new Path(inputPath));
             } catch (IOException e) {
@@ -121,6 +124,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
         private List<String> intermediateTables = Lists.newArrayList();
         private List<String> intermediatePaths = Lists.newArrayList();
         private String cubeName;
+        private IStoragePathBuilder pathBuilder;
 
         public BatchCubingInputSide(CubeSegment seg, IJoinedFlatTableDesc flatDesc) {
             this.conf = new JobEngineConfig(KylinConfig.getInstanceFromEnv());
@@ -130,6 +134,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
             this.seg = seg;
             this.cubeDesc = seg.getCubeDesc();
             this.cubeName = seg.getCubeInstance().getName();
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(config.getStorageSystemPathBuilderClz());
         }
 
         @Override
@@ -153,7 +158,7 @@ public class KafkaMRInput extends KafkaInputBase implements IMRInput {
         }
 
         protected String getJobWorkingDir(DefaultChainedExecutable jobFlow) {
-            return JobBuilderSupport.getJobWorkingDir(config.getHdfsWorkingDirectory(), jobFlow.getId());
+            return pathBuilder.getJobWorkingDir(config.getHdfsWorkingDirectory(), jobFlow.getId());
         }
 
         @Override
diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
index 7db6c32..9c14fd6 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSparkInput.java
@@ -21,15 +21,16 @@ import java.util.List;
 import java.util.Locale;
 
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.model.CubeDesc;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.engine.spark.ISparkInput;
 import org.apache.kylin.job.engine.JobEngineConfig;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
 import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.apache.kylin.metadata.model.ISegment;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +63,7 @@ public class KafkaSparkInput extends KafkaInputBase implements ISparkInput {
         final private List<String> intermediateTables = Lists.newArrayList();
         final private List<String> intermediatePaths = Lists.newArrayList();
         private String cubeName;
+        private IStoragePathBuilder pathBuilder;
 
         public BatchCubingInputSide(CubeSegment seg, IJoinedFlatTableDesc flatDesc) {
             this.conf = new JobEngineConfig(KylinConfig.getInstanceFromEnv());
@@ -71,6 +73,7 @@ public class KafkaSparkInput extends KafkaInputBase implements ISparkInput {
             this.seg = seg;
             this.cubeDesc = seg.getCubeDesc();
             this.cubeName = seg.getCubeInstance().getName();
+            this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(this.config.getStorageSystemPathBuilderClz());
         }
 
         @Override
@@ -94,7 +97,7 @@ public class KafkaSparkInput extends KafkaInputBase implements ISparkInput {
         }
 
         protected String getJobWorkingDir(DefaultChainedExecutable jobFlow) {
-            return JobBuilderSupport.getJobWorkingDir(config.getHdfsWorkingDirectory(), jobFlow.getId());
+            return pathBuilder.getJobWorkingDir(config.getHdfsWorkingDirectory(), jobFlow.getId());
         }
 
         @Override
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
index 69e5e66..f6d11bd 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/lookup/HBaseLookupMRSteps.java
@@ -23,6 +23,7 @@ import java.util.List;
 import java.util.Set;
 
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.RandomUtil;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.cube.model.CubeDesc;
@@ -45,6 +46,7 @@ import org.apache.kylin.metadata.model.TableRef;
 import org.apache.kylin.source.IReadableTable;
 import org.apache.kylin.source.SourceManager;
 import org.apache.kylin.storage.hbase.HBaseConnection;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -54,10 +56,12 @@ public class HBaseLookupMRSteps {
     protected static final Logger logger = LoggerFactory.getLogger(HBaseLookupMRSteps.class);
     private CubeInstance cube;
     private JobEngineConfig config;
+    private IStoragePathBuilder pathBuilder;
 
     public HBaseLookupMRSteps(CubeInstance cube) {
         this.cube = cube;
         this.config = new JobEngineConfig(cube.getConfig());
+        this.pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(cube.getConfig().getStorageSystemPathBuilderClz());
     }
 
     public void addMaterializeLookupTablesSteps(LookupMaterializeContext context) {
@@ -158,7 +162,7 @@ public class HBaseLookupMRSteps {
     }
 
     private String getLookupTableHFilePath(String tableName, String jobId) {
-        return HBaseConnection.makeQualifiedPathInHBaseCluster(JobBuilderSupport.getJobWorkingDir(config, jobId) + "/"
+        return HBaseConnection.makeQualifiedPathInHBaseCluster(pathBuilder.getJobWorkingDir(config.getHdfsWorkingDirectory(), jobId) + "/"
                 + tableName + "/hfile/");
     }
 
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
index 3ec27d4..05df80d 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/steps/HDFSPathGarbageCollectionStep.java
@@ -25,14 +25,15 @@ import java.util.List;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.HadoopUtil;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.job.engine.JobEngineConfig;
 import org.apache.kylin.job.exception.ExecuteException;
 import org.apache.kylin.job.execution.AbstractExecutable;
 import org.apache.kylin.job.execution.ExecutableContext;
 import org.apache.kylin.job.execution.ExecuteResult;
 import org.apache.kylin.storage.hbase.HBaseConnection;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,6 +49,7 @@ public class HDFSPathGarbageCollectionStep extends AbstractExecutable {
     public static final String TO_DELETE_PATHS = "toDeletePaths";
     private StringBuffer output;
     private JobEngineConfig config;
+    private IStoragePathBuilder pathBuilder;
 
     public HDFSPathGarbageCollectionStep() {
         super();
@@ -58,6 +60,7 @@ public class HDFSPathGarbageCollectionStep extends AbstractExecutable {
     protected ExecuteResult doWork(ExecutableContext context) throws ExecuteException {
         try {
             config = new JobEngineConfig(context.getConfig());
+            pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(context.getConfig().getStorageSystemPathBuilderClz());
             List<String> toDeletePaths = getDeletePaths();
             dropHdfsPathOnCluster(toDeletePaths, HadoopUtil.getWorkingFileSystem());
 
@@ -93,7 +96,7 @@ public class HDFSPathGarbageCollectionStep extends AbstractExecutable {
                 // If hbase was deployed on another cluster, the job dir is empty and should be dropped,
                 // because of rowkey_stats and hfile dirs are both dropped.
                 if (fileSystem.listStatus(oldPath.getParent()).length == 0) {
-                    Path emptyJobPath = new Path(JobBuilderSupport.getJobWorkingDir(config, getJobId()));
+                    Path emptyJobPath = new Path(pathBuilder.getJobWorkingDir(config.getHdfsWorkingDirectory(), getJobId()));
                     emptyJobPath = Path.getPathWithoutSchemeAndAuthority(emptyJobPath);
                     if (fileSystem.exists(emptyJobPath)) {
                         fileSystem.delete(emptyJobPath, true);
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
index 00635ba..ce43456 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/CubeMigrationCLI.java
@@ -48,6 +48,7 @@ import org.apache.kylin.common.persistence.ResourceStore;
 import org.apache.kylin.common.persistence.Serializer;
 import org.apache.kylin.common.restclient.RestClient;
 import org.apache.kylin.common.util.Bytes;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.Dictionary;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.cube.CubeInstance;
@@ -58,7 +59,6 @@ import org.apache.kylin.dict.DictionaryInfo;
 import org.apache.kylin.dict.DictionaryManager;
 import org.apache.kylin.dict.lookup.SnapshotManager;
 import org.apache.kylin.dict.lookup.SnapshotTable;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.metadata.cachesync.Broadcaster;
 import org.apache.kylin.metadata.model.DataModelDesc;
 import org.apache.kylin.metadata.model.SegmentStatusEnum;
@@ -68,6 +68,7 @@ import org.apache.kylin.metadata.realization.IRealizationConstants;
 import org.apache.kylin.metadata.realization.RealizationStatusEnum;
 import org.apache.kylin.metadata.realization.RealizationType;
 import org.apache.kylin.storage.hbase.HBaseConnection;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -185,11 +186,13 @@ public class CubeMigrationCLI {
     }
 
     private static void renameFoldersInHdfs(CubeInstance cube) {
+        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStorageSystemPathBuilderClz());
+        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStorageSystemPathBuilderClz());
         for (CubeSegment segment : cube.getSegments()) {
 
             String jobUuid = segment.getLastBuildJobID();
-            String src = JobBuilderSupport.getJobWorkingDir(srcConfig.getHdfsWorkingDirectory(), jobUuid);
-            String tgt = JobBuilderSupport.getJobWorkingDir(dstConfig.getHdfsWorkingDirectory(), jobUuid);
+            String src = srcPathBuilder.getJobWorkingDir(srcConfig.getHdfsWorkingDirectory(), jobUuid);
+            String tgt = dstPathBuilder.getJobWorkingDir(dstConfig.getHdfsWorkingDirectory(), jobUuid);
 
             operations.add(new Opt(OptType.RENAME_FOLDER_IN_HDFS, new Object[] { src, tgt }));
         }
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
index 725d819..fd6124b 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/util/StorageCleanupJob.java
@@ -45,6 +45,7 @@ import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.AbstractApplication;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.CliCommandExecutor;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.common.util.HiveCmdBuilder;
@@ -53,7 +54,6 @@ import org.apache.kylin.common.util.Pair;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.cube.CubeManager;
 import org.apache.kylin.cube.CubeSegment;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.job.engine.JobEngineConfig;
 import org.apache.kylin.job.execution.AbstractExecutable;
 import org.apache.kylin.job.execution.ExecutableManager;
@@ -61,6 +61,7 @@ import org.apache.kylin.job.execution.ExecutableState;
 import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.metadata.realization.IRealizationConstants;
 import org.apache.kylin.storage.hbase.HBaseConnection;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -186,6 +187,7 @@ public class StorageCleanupJob extends AbstractApplication {
     private void cleanUnusedHdfsFiles(Configuration conf) throws IOException {
         JobEngineConfig engineConfig = new JobEngineConfig(KylinConfig.getInstanceFromEnv());
         CubeManager cubeMgr = CubeManager.getInstance(KylinConfig.getInstanceFromEnv());
+        IStoragePathBuilder pathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(engineConfig.getConfig().getStorageSystemPathBuilderClz());
 
         FileSystem fs = HadoopUtil.getWorkingFileSystem(conf);
         List<String> allHdfsPathsNeedToBeDeleted = new ArrayList<String>();
@@ -208,7 +210,7 @@ public class StorageCleanupJob extends AbstractApplication {
             // only remove FINISHED and DISCARDED job intermediate files
             final ExecutableState state = executableManager.getOutput(jobId).getState();
             if (!state.isFinalState()) {
-                String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobId);
+                String path = pathBuilder.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobId);
                 allHdfsPathsNeedToBeDeleted.remove(path);
                 logger.info("Skip " + path + " from deletion list, as the path belongs to job " + jobId + " with status " + state);
             }
@@ -219,7 +221,7 @@ public class StorageCleanupJob extends AbstractApplication {
             for (CubeSegment seg : cube.getSegments()) {
                 String jobUuid = seg.getLastBuildJobID();
                 if (jobUuid != null && jobUuid.equals("") == false) {
-                    String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobUuid);
+                    String path = pathBuilder.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobUuid);
                     allHdfsPathsNeedToBeDeleted.remove(path);
                     logger.info("Skip " + path + " from deletion list, as the path belongs to segment " + seg + " of cube " + cube.getName());
                 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
index 829f074..e1d0b0f 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetMRSteps.java
@@ -45,7 +45,7 @@ public class ParquetMRSteps extends ParquetJobSteps{
         appendMapReduceParameters(cmd);
         appendExecCmdParameters(cmd, BatchConstants.ARG_CUBE_NAME, seg.getRealization().getName());
         appendExecCmdParameters(cmd, BatchConstants.ARG_INPUT, inputPath);
-        appendExecCmdParameters(cmd, BatchConstants.ARG_OUTPUT, getParquetOutputPath(jobId));
+        appendExecCmdParameters(cmd, BatchConstants.ARG_OUTPUT, getParquetOutputPath());
         appendExecCmdParameters(cmd, BatchConstants.ARG_SEGMENT_ID, seg.getUuid());
         appendExecCmdParameters(cmd, BatchConstants.ARG_JOB_NAME, "Kylin_Parquet_Generator_" + seg.getRealization().getName()+ "_Step");
 
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
index 65bd30a..6486187 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/ParquetSparkSteps.java
@@ -46,7 +46,7 @@ public class ParquetSparkSteps extends ParquetJobSteps {
         sparkExecutable.setParam(SparkCubeParquet.OPTION_CUBE_NAME.getOpt(), seg.getRealization().getName());
         sparkExecutable.setParam(SparkCubeParquet.OPTION_SEGMENT_ID.getOpt(), seg.getUuid());
         sparkExecutable.setParam(SparkCubeParquet.OPTION_INPUT_PATH.getOpt(), inputPath);
-        sparkExecutable.setParam(SparkCubeParquet.OPTION_OUTPUT_PATH.getOpt(), getParquetOutputPath(jobId));
+        sparkExecutable.setParam(SparkCubeParquet.OPTION_OUTPUT_PATH.getOpt(), getParquetOutputPath());
         sparkExecutable.setParam(SparkCubeParquet.OPTION_META_URL.getOpt(),
                 jobBuilder2.getSegmentMetadataUrl(seg.getConfig(), jobId));
         sparkExecutable.setJobId(jobId);
diff --git a/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java b/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
index 6909b74..5c2a839 100644
--- a/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
+++ b/tool/src/main/java/org/apache/kylin/tool/CubeMigrationCLI.java
@@ -42,6 +42,7 @@ import org.apache.kylin.common.persistence.ResourceStore;
 import org.apache.kylin.common.persistence.Serializer;
 import org.apache.kylin.common.restclient.RestClient;
 import org.apache.kylin.common.util.AbstractApplication;
+import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.Dictionary;
 import org.apache.kylin.common.util.HadoopUtil;
 import org.apache.kylin.common.util.OptionsHelper;
@@ -53,7 +54,6 @@ import org.apache.kylin.dict.DictionaryInfo;
 import org.apache.kylin.dict.DictionaryManager;
 import org.apache.kylin.dict.lookup.SnapshotManager;
 import org.apache.kylin.dict.lookup.SnapshotTable;
-import org.apache.kylin.engine.mr.JobBuilderSupport;
 import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.metadata.model.DataModelDesc;
 import org.apache.kylin.metadata.model.IStorageAware;
@@ -66,6 +66,7 @@ import org.apache.kylin.metadata.realization.IRealizationConstants;
 import org.apache.kylin.metadata.realization.RealizationStatusEnum;
 import org.apache.kylin.metadata.realization.RealizationType;
 import org.apache.kylin.storage.hbase.HBaseConnection;
+import org.apache.kylin.storage.path.IStoragePathBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -229,11 +230,14 @@ public class CubeMigrationCLI extends AbstractApplication {
     }
 
     protected void renameFoldersInHdfs(CubeInstance cube) throws IOException {
+        IStoragePathBuilder srcPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(srcConfig.getStorageSystemPathBuilderClz());
+        IStoragePathBuilder dstPathBuilder = (IStoragePathBuilder)ClassUtil.newInstance(dstConfig.getStorageSystemPathBuilderClz());
+
         for (CubeSegment segment : cube.getSegments()) {
 
             String jobUuid = segment.getLastBuildJobID();
-            String src = JobBuilderSupport.getJobWorkingDir(srcConfig.getHdfsWorkingDirectory(), jobUuid);
-            String tgt = JobBuilderSupport.getJobWorkingDir(dstConfig.getHdfsWorkingDirectory(), jobUuid);
+            String src = srcPathBuilder.getJobWorkingDir(srcConfig.getHdfsWorkingDirectory(), jobUuid);
+            String tgt = dstPathBuilder.getJobWorkingDir(dstConfig.getHdfsWorkingDirectory(), jobUuid);
 
             operations.add(new Opt(OptType.RENAME_FOLDER_IN_HDFS, new Object[] { src, tgt }));
         }
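
The pattern repeated across this commit is uniform: every call site that used to go through
JobBuilderSupport.getJobWorkingDir() now loads an IStoragePathBuilder from
kylinConfig.getStorageSystemPathBuilderClz() (reflectively, via ClassUtil.newInstance()) and asks the
builder for the path instead. As a rough illustration only, assuming getJobWorkingDir(String, String)
is the one method these call sites need from the interface and using a made-up class name, a custom
builder could look like this:

    package org.apache.kylin.storage.path;

    // Hypothetical sketch: keeps every job directory directly under the HDFS working
    // directory, mirroring how the call sites above invoke the builder. If
    // IStoragePathBuilder declares further members, they would need overriding too.
    public class FlatStoragePathBuilder implements IStoragePathBuilder {
        @Override
        public String getJobWorkingDir(String hdfsWorkingDir, String jobId) {
            String base = hdfsWorkingDir.endsWith("/") ? hdfsWorkingDir : hdfsWorkingDir + "/";
            return base + "kylin-" + jobId;
        }
    }

Since the configured class is created through ClassUtil.newInstance(), it effectively needs a public
no-arg constructor.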


[kylin] 03/06: KYLIN-3625 Query engine for Parquet and apply CI for Parquet

Posted by sh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch kylin-on-parquet
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit a47b2d6808b8f10c662abced18a07faa51f7a32f
Author: chao long <wa...@qq.com>
AuthorDate: Sat Sep 29 16:17:58 2018 +0800

    KYLIN-3625 Query engine for Parquet and apply CI for Parquet
---
 build/script/prepare-libs.sh                       |   2 +
 .../java/org/apache/kylin/common/KylinConfig.java  |  60 ++-
 .../java/org/apache/kylin/common/QueryContext.java |  50 +++
 .../org/apache/kylin/common/util/FileUtils.java    |  31 +-
 .../src/main/resources/kylin-defaults.properties   |   6 +-
 .../cube/gridtable/CuboidToGridTableMapping.java   |   7 +
 .../org/apache/kylin/cube/kv/RowKeyColumnIO.java   |   6 +-
 .../org/apache/kylin/cube/kv/RowKeyDecoder.java    |  12 +-
 .../apache/kylin/cube/kv/RowKeyDecoderParquet.java |  27 +-
 .../java/org/apache/kylin/gridtable/GTRecord.java  |  53 ++-
 .../org/apache/kylin/gridtable/GTScanRequest.java  |  33 +-
 .../kylin/gridtable/GTScanRequestBuilder.java      |   8 +-
 .../java/org/apache/kylin/gridtable/GTUtil.java    | 100 ++++-
 .../apache/kylin/measure/MeasureTypeFactory.java   |  11 +-
 .../measure/percentile/PercentileMeasureType.java  |   2 +
 .../apache/kylin/measure/topn/TopNAggregator.java  |   5 +
 .../org/apache/kylin/measure/topn/TopNCounter.java |   5 +
 .../filter/BuiltInFunctionTupleFilter.java         |  40 ++
 .../kylin/metadata/filter/CaseTupleFilter.java     |  15 +
 .../kylin/metadata/filter/ColumnTupleFilter.java   |  11 +
 .../kylin/metadata/filter/CompareTupleFilter.java  |  26 ++
 .../kylin/metadata/filter/ConstantTupleFilter.java |  37 ++
 .../kylin/metadata/filter/DynamicTupleFilter.java  |   5 +
 .../kylin/metadata/filter/ExtractTupleFilter.java  |   5 +
 .../kylin/metadata/filter/LogicalTupleFilter.java  |  25 ++
 .../apache/kylin/metadata/filter/TupleFilter.java  |  45 ++
 .../metadata/filter/UDF/MassInTupleFilter.java     |   5 +
 .../metadata/filter/UnsupportedTupleFilter.java    |   5 +
 .../apache/kylin/metadata/model/ParameterDesc.java |   2 +-
 .../storage/gtrecord/CubeScanRangePlanner.java     |  15 +-
 .../apache/kylin/engine/mr/JobBuilderSupport.java  |   4 +
 .../kylin/engine/mr/steps/CuboidReducer.java       |   6 +-
 engine-spark/pom.xml                               |   5 +
 .../engine/spark/SparkBatchCubingJobBuilder2.java  |  10 +-
 .../kylin/engine/spark/SparkCubingByLayer.java     |  18 +-
 .../engine/spark/SparkCubingByLayerParquet.java    |  16 +-
 .../java/org/apache/spark/sql/KylinSession.scala   | 231 ++++++++++
 .../java/org/apache/spark/sql/SparderEnv.scala     | 114 +++++
 .../sql/hive/KylinHiveSessionStateBuilder.scala    |  64 +++
 .../org/apache/spark/sql/manager/UdfManager.scala  |  98 +++++
 .../org/apache/spark/sql/udf/SparderAggFun.scala   | 152 +++++++
 .../apache/spark/sql/util/SparderTypeUtil.scala    | 473 +++++++++++++++++++++
 .../org/apache/spark/util/KylinReflectUtils.scala  |  78 ++++
 .../main/java/org/apache/spark/util/XmlUtils.scala | 111 +++++
 .../test_case_data/webapps/META-INF/context.xml    |  38 ++
 kylin-it/jacoco-it.exec                            | Bin 0 -> 2432563 bytes
 kylin-it/pom.xml                                   | 246 ++++++-----
 .../org/apache/kylin/jdbc/ITJDBCDriver2Test.java   |  30 +-
 .../kylin/provision/BuildCubeWithEngine.java       |  83 ++--
 .../org/apache/kylin/query/ITCombination2Test.java |  80 ++++
 .../apache/kylin/query/ITFailfastQuery2Test.java   |  84 ++++
 .../org/apache/kylin/query/ITKylinQuery2Test.java  | 115 +++++
 .../org/apache/kylin/query/ITKylinQueryTest.java   |   8 +-
 kylin-test/pom.xml                                 |  32 ++
 .../main/java/org/apache/kylin/junit/EnvUtils.java |  64 +++
 .../org/apache/kylin/junit/SparkTestRunner.java    |  78 ++++
 .../junit/SparkTestRunnerRunnerWithParameters.java | 147 +++++++
 .../SparkTestRunnerWithParametersFactory.java      |  35 +-
 pom.xml                                            |  18 +
 .../apache/kylin/rest/init/InitialTaskManager.java |  26 ++
 server/pom.xml                                     |   7 +
 .../java/org/apache/kylin/rest/DebugTomcat.java    |  10 +
 server/src/main/webapp/META-INF/context.xml        |  38 ++
 storage-parquet/pom.xml                            | 159 +++++++
 .../kylin/storage/parquet/cube/CubeSparkRPC.java   |   9 +
 .../storage/parquet/cube/CubeStorageQuery.java     |   3 +-
 .../storage/parquet/spark/ParquetPayload.java      | 222 ++++++++++
 .../kylin/storage/parquet/spark/ParquetTask.java   | 308 ++++++++++++++
 .../storage/parquet/spark/SparkSubmitter.java      |  42 ++
 .../spark/gtscanner/ParquetRecordGTScanner.java    | 105 +++++
 .../gtscanner/ParquetRecordGTScanner4Cube.java     |  64 +++
 .../storage/parquet/steps/SparkCubeParquet.java    |  46 +-
 .../org/apache/kylin/ext/ClassLoaderUtils.java     |  89 ++++
 .../apache/kylin/ext/DebugTomcatClassLoader.java   | 152 +++++++
 .../java/org/apache/kylin/ext/ItClassLoader.java   | 175 ++++++++
 .../org/apache/kylin/ext/ItSparkClassLoader.java   | 189 ++++++++
 .../org/apache/kylin/ext/SparkClassLoader.java     | 236 ++++++++++
 .../org/apache/kylin/ext/TomcatClassLoader.java    | 190 +++++++++
 tool/pom.xml                                       |   4 +
 webapp/app/META-INF/context.xml                    |  38 ++
 80 files changed, 4873 insertions(+), 331 deletions(-)

diff --git a/build/script/prepare-libs.sh b/build/script/prepare-libs.sh
index 789a120..45c8150 100644
--- a/build/script/prepare-libs.sh
+++ b/build/script/prepare-libs.sh
@@ -32,6 +32,7 @@ rm -rf build/lib build/tool
 mkdir build/lib build/tool
 cp assembly/target/kylin-assembly-${version}-job.jar build/lib/kylin-job-${version}.jar
 cp storage-hbase/target/kylin-storage-hbase-${version}-coprocessor.jar build/lib/kylin-coprocessor-${version}.jar
+cp storage-parquet/target/kylin-storage-parquet-${version}-spark.jar build/lib/kylin-storage-parquet-${version}.jar
 cp jdbc/target/kylin-jdbc-${version}.jar build/lib/kylin-jdbc-${version}.jar
 cp tool-assembly/target/kylin-tool-assembly-${version}-assembly.jar build/tool/kylin-tool-${version}.jar
 cp datasource-sdk/target/kylin-datasource-sdk-${version}-lib.jar build/lib/kylin-datasource-sdk-${version}.jar
@@ -39,6 +40,7 @@ cp datasource-sdk/target/kylin-datasource-sdk-${version}-lib.jar build/lib/kylin
 # Copied file becomes 000 for some env (e.g. my Cygwin)
 chmod 644 build/lib/kylin-job-${version}.jar
 chmod 644 build/lib/kylin-coprocessor-${version}.jar
+chmod 644 build/lib/kylin-storage-parquet-${version}.jar
 chmod 644 build/lib/kylin-jdbc-${version}.jar
 chmod 644 build/tool/kylin-tool-${version}.jar
 chmod 644 build/lib/kylin-datasource-sdk-${version}.jar
diff --git a/core-common/src/main/java/org/apache/kylin/common/KylinConfig.java b/core-common/src/main/java/org/apache/kylin/common/KylinConfig.java
index 4a86b76..f3d3a29 100644
--- a/core-common/src/main/java/org/apache/kylin/common/KylinConfig.java
+++ b/core-common/src/main/java/org/apache/kylin/common/KylinConfig.java
@@ -18,6 +18,16 @@
 
 package org.apache.kylin.common;
 
+import com.google.common.base.Preconditions;
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.kylin.common.restclient.RestClient;
+import org.apache.kylin.common.util.ClassUtil;
+import org.apache.kylin.common.util.FileUtils;
+import org.apache.kylin.common.util.OrderedProperties;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import java.io.BufferedReader;
 import java.io.File;
 import java.io.FileInputStream;
@@ -35,16 +45,6 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.concurrent.ConcurrentHashMap;
 
-import org.apache.commons.io.IOUtils;
-import org.apache.commons.lang.StringUtils;
-import org.apache.kylin.common.restclient.RestClient;
-import org.apache.kylin.common.util.ClassUtil;
-import org.apache.kylin.common.util.OrderedProperties;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.base.Preconditions;
-
 /**
  */
 public class KylinConfig extends KylinConfigBase {
@@ -555,4 +555,44 @@ public class KylinConfig extends KylinConfigBase {
             return this.base() == ((KylinConfig) another).base();
     }
 
+    public Map<String, String> getSparkConf() {
+        return getPropertiesByPrefix("kylin.storage.columnar.spark-conf.");
+    }
+
+    public String getColumnarSparkEnv(String conf) {
+        return getPropertiesByPrefix("kylin.storage.columnar.spark-env.").get(conf);
+    }
+
+    public boolean isParquetSeparateFsEnabled() {
+        return Boolean.parseBoolean(getOptional("kylin.storage.columnar.separate-fs-enable", "false"));
+    }
+
+    public String getParquetSeparateOverrideFiles() {
+        return getOptional("kylin.storage.columnar.separate-override-files",
+                "core-site.xml,hdfs-site.xml,yarn-site.xml");
+    }
+
+    public String getSparkCubeGTStorage() {
+        return getOptional("kylin.storage.parquet.gtstorage",
+                "org.apache.kylin.storage.parquet.cube.CubeSparkRPC");
+    }
+
+    public boolean isParquetSparkCleanCachedRDDAfterUse() {
+        return Boolean.parseBoolean(getOptional("kylin.storage.parquet.clean-cached-rdd-after-use", "false"));
+    }
+
+    public String sparderJars() {
+        try {
+            File storageFile = FileUtils.findFile(KylinConfigBase.getKylinHome() + "/lib",
+                    "kylin-storage-parquet-.*.jar");
+            String path1 = "";
+            if (storageFile != null) {
+                path1 = storageFile.getCanonicalPath();
+            }
+
+            return getOptional("kylin.query.parquet-additional-jars", path1);
+        } catch (IOException e) {
+            return "";
+        }
+    }
 }
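
For reference, the new getters are plain prefixed-property lookups. A minimal sketch of how the
spark-conf block could be consumed (assuming getPropertiesByPrefix() strips the prefix, so the
remaining keys are ordinary Spark keys; the wrapper class here is hypothetical, not part of the patch):

    import java.util.Map;

    import org.apache.kylin.common.KylinConfig;
    import org.apache.spark.SparkConf;

    public class SparderConfSketch {
        static SparkConf buildSparkConf() {
            SparkConf conf = new SparkConf();
            // copy every kylin.storage.columnar.spark-conf.* entry onto the SparkConf
            for (Map.Entry<String, String> e : KylinConfig.getInstanceFromEnv().getSparkConf().entrySet()) {
                conf.set(e.getKey(), e.getValue());
            }
            return conf;
        }
    }

sparderJars() follows the same defensive style: it looks for the kylin-storage-parquet jar under
KYLIN_HOME/lib (the jar that prepare-libs.sh now copies into the binary package) and falls back to an
empty string on IO errors.
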
diff --git a/core-common/src/main/java/org/apache/kylin/common/QueryContext.java b/core-common/src/main/java/org/apache/kylin/common/QueryContext.java
index a065a13..49ffab1 100644
--- a/core-common/src/main/java/org/apache/kylin/common/QueryContext.java
+++ b/core-common/src/main/java/org/apache/kylin/common/QueryContext.java
@@ -19,10 +19,12 @@
 package org.apache.kylin.common;
 
 import java.io.Serializable;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.Future;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -45,6 +47,21 @@ public class QueryContext {
         void stop(QueryContext query);
     }
 
+    private static final ThreadLocal<QueryContext> contexts = new ThreadLocal<QueryContext>() {
+        @Override
+        protected QueryContext initialValue() {
+            return new QueryContext();
+        }
+    };
+
+    public static QueryContext current() {
+        return contexts.get();
+    }
+
+    public static void reset() {
+        contexts.remove();
+    }
+
     private long queryStartMillis;
 
     private final String queryId;
@@ -55,6 +72,10 @@ public class QueryContext {
     private AtomicLong scannedBytes = new AtomicLong();
     private Object calcitePlan;
 
+    private boolean isHighPriorityQuery = false;
+    private Set<Future> allRunningTasks = new HashSet<>();
+    private boolean isTimeout;
+
     private AtomicBoolean isRunning = new AtomicBoolean(true);
     private volatile Throwable throwable;
     private String stopReason;
@@ -82,6 +103,35 @@ public class QueryContext {
         }
     }
 
+    public boolean isHighPriorityQuery() {
+        return isHighPriorityQuery;
+    }
+
+    public void markHighPriorityQuery() {
+        isHighPriorityQuery = true;
+    }
+
+    public void addRunningTasks(Future task) {
+        this.allRunningTasks.add(task);
+    }
+
+    public Set<Future> getAllRunningTasks() {
+        return allRunningTasks;
+    }
+
+    public void removeRunningTask(Future task) {
+        this.allRunningTasks.remove(task);
+    }
+
+    public boolean isTimeout() {
+        return isTimeout;
+    }
+
+    public void setTimeout(boolean timeout) {
+        isTimeout = timeout;
+    }
+
+
     public String getQueryId() {
         return queryId == null ? "" : queryId;
     }
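
Taken together, the thread-local context plus the running-task set give each query thread one place to
register background work and cancel it on timeout. A sketch of the assumed lifecycle (not code from this
commit; executor, work and timeoutMs are placeholders):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    import org.apache.kylin.common.QueryContext;

    class QueryLifecycleSketch {
        static Object run(ExecutorService executor, Callable<Object> work, long timeoutMs) throws Exception {
            QueryContext ctx = QueryContext.current();      // lazily created for this thread
            Future<Object> task = executor.submit(work);
            ctx.addRunningTasks(task);
            try {
                return task.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                ctx.setTimeout(true);                        // flag the timeout for the query layer
                task.cancel(true);
                throw e;
            } finally {
                ctx.removeRunningTask(task);
                QueryContext.reset();                        // drop the per-thread context when the query ends
            }
        }
    }
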
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/core-common/src/main/java/org/apache/kylin/common/util/FileUtils.java
similarity index 52%
copy from storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
copy to core-common/src/main/java/org/apache/kylin/common/util/FileUtils.java
index 6a3ad59..dd9b9b5 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/core-common/src/main/java/org/apache/kylin/common/util/FileUtils.java
@@ -16,28 +16,19 @@
  * limitations under the License.
  */
 
-package org.apache.kylin.storage.parquet.cube;
+package org.apache.kylin.common.util;
 
-import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.metadata.realization.SQLDigest;
-import org.apache.kylin.metadata.tuple.ITupleIterator;
-import org.apache.kylin.metadata.tuple.TupleInfo;
-import org.apache.kylin.storage.StorageContext;
-import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
+import java.io.File;
 
-public class CubeStorageQuery extends GTCubeStorageQueryBase {
-
-    public CubeStorageQuery(CubeInstance cube) {
-        super(cube);
-    }
-
-    @Override
-    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
-        return super.search(context, sqlDigest, returnTupleInfo);
-    }
-
-    @Override
-    protected String getGTStorage() {
+public final class FileUtils {
+    public static File findFile(String dir, String ptn) {
+        File[] files = new File(dir).listFiles();
+        if (files != null) {
+            for (File f : files) {
+                if (f.getName().matches(ptn))
+                    return f;
+            }
+        }
         return null;
     }
 }
diff --git a/core-common/src/main/resources/kylin-defaults.properties b/core-common/src/main/resources/kylin-defaults.properties
index 6238e44..8115a50 100644
--- a/core-common/src/main/resources/kylin-defaults.properties
+++ b/core-common/src/main/resources/kylin-defaults.properties
@@ -146,7 +146,7 @@ kylin.storage.partition.aggr-spill-enabled=true
 
 # The maximum number of bytes each coprocessor is allowed to scan.
 # To allow arbitrary large scan, you can set it to 0.
-kylin.storage.partition.max-scan-bytes=3221225472
+kylin.storage.partition.max-scan-bytes=0
 
 # The default coprocessor timeout is (hbase.rpc.timeout * 0.9) / 1000 seconds,
 # You can set it to a smaller value. 0 means use default.
@@ -361,8 +361,8 @@ kylin.storage.columnar.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=
 kylin.storage.columnar.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
 kylin.storage.columnar.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
 #kylin.storage.columnar.spark-conf.spark.serializer=org.apache.spark.serializer.JavaSerializer
-kylin.storage.columnar.spark-conf.spark.driver.memory=512m
-kylin.storage.columnar.spark-conf.spark.executor.memory=512m
+kylin.storage.columnar.spark-conf.spark.driver.memory=1g
+kylin.storage.columnar.spark-conf.spark.executor.memory=1g
 kylin.storage.columnar.spark-conf.spark.yarn.executor.memoryOverhead=512
 kylin.storage.columnar.spark-conf.yarn.am.memory=512m
 kylin.storage.columnar.spark-conf.spark.executor.cores=1
diff --git a/core-cube/src/main/java/org/apache/kylin/cube/gridtable/CuboidToGridTableMapping.java b/core-cube/src/main/java/org/apache/kylin/cube/gridtable/CuboidToGridTableMapping.java
index 05256cc..9052e50 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/gridtable/CuboidToGridTableMapping.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/gridtable/CuboidToGridTableMapping.java
@@ -51,6 +51,7 @@ public class CuboidToGridTableMapping {
 
     private int nDimensions;
     private Map<TblColRef, Integer> dim2gt;
+    private Map<MeasureDesc, Integer> met2gt;
     private ImmutableBitSet gtPrimaryKey;
 
     private int nMetrics;
@@ -68,6 +69,7 @@ public class CuboidToGridTableMapping {
 
         // dimensions
         dim2gt = Maps.newHashMap();
+        met2gt = Maps.newHashMap();
         BitSet pk = new BitSet();
         for (TblColRef dimension : cuboid.getColumns()) {
             gtDataTypes.add(dimension.getType());
@@ -96,6 +98,7 @@ public class CuboidToGridTableMapping {
             // Ensure the holistic version if exists is always the first.
             FunctionDesc func = measure.getFunction();
             metrics2gt.put(func, gtColIdx);
+            met2gt.put(measure, gtColIdx);
             gtDataTypes.add(func.getReturnDataType());
 
             // map to column block
@@ -245,4 +248,8 @@ public class CuboidToGridTableMapping {
     public Map<TblColRef, Integer> getDim2gt() {
         return ImmutableMap.copyOf(dim2gt);
     }
+
+    public Map<MeasureDesc, Integer> getMet2gt() {
+        return ImmutableMap.copyOf(met2gt);
+    }
 }
diff --git a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
index b0efc91..79cc233 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyColumnIO.java
@@ -56,11 +56,15 @@ public class RowKeyColumnIO implements java.io.Serializable {
         dimEnc.encode(value, output, outputOffset);
     }
 
-    public String readColumnString(TblColRef col, byte[] bytes, int offset, int length) {
+    public String readColumnStringKeepDicValue(TblColRef col, byte[] bytes, int offset, int length) {
         DimensionEncoding dimEnc = dimEncMap.get(col);
         if (dimEnc instanceof DictionaryDimEnc)
             return String.valueOf(BytesUtil.readUnsigned(bytes, offset, length));
         return dimEnc.decode(bytes, offset, length);
     }
 
+    public String readColumnString(TblColRef col, byte[] bytes, int offset, int length) {
+        DimensionEncoding dimEnc = dimEncMap.get(col);
+        return dimEnc.decode(bytes, offset, length);
+    }
 }
diff --git a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoder.java b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoder.java
index 71ad4bf..fde7f33 100644
--- a/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoder.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoder.java
@@ -36,12 +36,12 @@ import org.apache.kylin.metadata.model.TblColRef;
  */
 public class RowKeyDecoder {
 
-    private final CubeDesc cubeDesc;
-    private final RowKeyColumnIO colIO;
-    private final RowKeySplitter rowKeySplitter;
+    protected final CubeDesc cubeDesc;
+    protected final RowKeyColumnIO colIO;
+    protected final RowKeySplitter rowKeySplitter;
 
-    private Cuboid cuboid;
-    private List<String> values;
+    protected Cuboid cuboid;
+    protected List<String> values;
 
     public RowKeyDecoder(CubeSegment cubeSegment) {
         this.cubeDesc = cubeSegment.getCubeDesc();
@@ -76,7 +76,7 @@ public class RowKeyDecoder {
         this.cuboid = Cuboid.findForMandatory(cubeDesc, cuboidID);
     }
 
-    private void collectValue(TblColRef col, byte[] valueBytes, int offset, int length) throws IOException {
+    protected void collectValue(TblColRef col, byte[] valueBytes, int offset, int length) throws IOException {
         String strValue = colIO.readColumnString(col, valueBytes, offset, length);
         values.add(strValue);
     }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoderParquet.java
similarity index 52%
copy from storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
copy to core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoderParquet.java
index 6a3ad59..9b1f1a5 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/kv/RowKeyDecoderParquet.java
@@ -16,28 +16,21 @@
  * limitations under the License.
  */
 
-package org.apache.kylin.storage.parquet.cube;
+package org.apache.kylin.cube.kv;
 
-import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.metadata.realization.SQLDigest;
-import org.apache.kylin.metadata.tuple.ITupleIterator;
-import org.apache.kylin.metadata.tuple.TupleInfo;
-import org.apache.kylin.storage.StorageContext;
-import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.metadata.model.TblColRef;
 
-public class CubeStorageQuery extends GTCubeStorageQueryBase {
+import java.io.IOException;
 
-    public CubeStorageQuery(CubeInstance cube) {
-        super(cube);
+public class RowKeyDecoderParquet extends RowKeyDecoder {
+    public RowKeyDecoderParquet(CubeSegment cubeSegment) {
+        super(cubeSegment);
     }
 
     @Override
-    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
-        return super.search(context, sqlDigest, returnTupleInfo);
-    }
-
-    @Override
-    protected String getGTStorage() {
-        return null;
+    protected void collectValue(TblColRef col, byte[] valueBytes, int offset, int length) throws IOException {
+        String strValue = colIO.readColumnStringKeepDicValue(col, valueBytes, offset, length);
+        values.add(strValue);
     }
 }
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
index b4a57c7..d7be088 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTRecord.java
@@ -18,14 +18,23 @@
 
 package org.apache.kylin.gridtable;
 
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.Comparator;
-
+import com.google.common.base.Preconditions;
 import org.apache.kylin.common.util.ByteArray;
+import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.common.util.ImmutableBitSet;
+import org.apache.kylin.dimension.DictionaryDimEnc;
+import org.apache.kylin.measure.bitmap.BitmapSerializer;
+import org.apache.kylin.measure.dim.DimCountDistincSerializer;
+import org.apache.kylin.measure.extendedcolumn.ExtendedColumnSerializer;
+import org.apache.kylin.measure.hllc.HLLCSerializer;
+import org.apache.kylin.measure.percentile.PercentileSerializer;
+import org.apache.kylin.measure.raw.RawSerializer;
+import org.apache.kylin.measure.topn.TopNCounterSerializer;
+import org.apache.kylin.metadata.datatype.DataTypeSerializer;
 
-import com.google.common.base.Preconditions;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.Comparator;
 
 public class GTRecord implements Comparable<GTRecord> {
 
@@ -103,6 +112,40 @@ public class GTRecord implements Comparable<GTRecord> {
         return this;
     }
 
+    /** set record to the codes of specified values, reuse given space to hold the codes */
+    public GTRecord setValuesParquet(ImmutableBitSet selectedCols, ByteArray space, Object... values) {
+        assert selectedCols.cardinality() == values.length;
+
+        ByteBuffer buf = space.asBuffer();
+        int pos = buf.position();
+        for (int i = 0; i < selectedCols.trueBitCount(); i++) {
+            int c = selectedCols.trueBitAt(i);
+
+            DataTypeSerializer serializer = info.codeSystem.getSerializer(c);
+            if (serializer instanceof DictionaryDimEnc.DictionarySerializer) {
+                int len = serializer.peekLength(buf);
+                BytesUtil.writeUnsigned((Integer) values[i], len, buf);
+                int newPos = buf.position();
+                cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
+                pos = newPos;
+            } else if (serializer instanceof TopNCounterSerializer ||
+                    serializer instanceof HLLCSerializer ||
+                    serializer instanceof BitmapSerializer ||
+                    serializer instanceof ExtendedColumnSerializer ||
+                    serializer instanceof PercentileSerializer ||
+                    serializer instanceof DimCountDistincSerializer ||
+                    serializer instanceof RawSerializer) {
+                cols[c].reset((byte[]) values[i], 0, ((byte[]) values[i]).length);
+            } else {
+                info.codeSystem.encodeColumnValue(c, values[i], buf);
+                int newPos = buf.position();
+                cols[c].reset(buf.array(), buf.arrayOffset() + pos, newPos - pos);
+                pos = newPos;
+            }
+        }
+        return this;
+    }
+
     /** decode and return the values of this record */
     public Object[] getValues() {
         return getValues(info.colAll, new Object[info.getColumnCount()]);
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequest.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequest.java
index e3a2234..b6cf250 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequest.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequest.java
@@ -18,14 +18,9 @@
 
 package org.apache.kylin.gridtable;
 
-import java.io.IOException;
-import java.nio.BufferOverflowException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
 import org.apache.commons.io.IOUtils;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.ByteArray;
@@ -45,9 +40,13 @@ import org.apache.kylin.metadata.model.TblColRef;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
-import com.google.common.collect.Sets;
+import java.io.IOException;
+import java.nio.BufferOverflowException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 
 public class GTScanRequest {
 
@@ -71,6 +70,7 @@ public class GTScanRequest {
 
     // optional filtering
     private TupleFilter filterPushDown;
+    private String filterPushDownSQL;
     private TupleFilter havingFilterPushDown;
 
     // optional aggregation
@@ -95,7 +95,7 @@ public class GTScanRequest {
     GTScanRequest(GTInfo info, List<GTScanRange> ranges, ImmutableBitSet dimensions, ImmutableBitSet aggrGroupBy, //
             ImmutableBitSet aggrMetrics, String[] aggrMetricsFuncs, ImmutableBitSet rtAggrMetrics, //
             ImmutableBitSet dynamicCols, Map<Integer, TupleExpression> tupleExpressionMap, //
-            TupleFilter filterPushDown, TupleFilter havingFilterPushDown, //
+            TupleFilter filterPushDown, String filterPushDownSQL, TupleFilter havingFilterPushDown, //
             boolean allowStorageAggregation, double aggCacheMemThreshold, int storageScanRowNumThreshold, //
             int storagePushDownLimit, StorageLimitLevel storageLimitLevel, String storageBehavior, long startTime,
             long timeout) {
@@ -107,6 +107,7 @@ public class GTScanRequest {
         }
         this.columns = dimensions;
         this.filterPushDown = filterPushDown;
+        this.filterPushDownSQL = filterPushDownSQL;
         this.havingFilterPushDown = havingFilterPushDown;
 
         this.aggrGroupBy = aggrGroupBy;
@@ -318,6 +319,10 @@ public class GTScanRequest {
         return filterPushDown;
     }
 
+    public String getFilterPushDownSQL() {
+        return filterPushDownSQL;
+    }
+
     public TupleFilter getHavingFilterPushDown() {
         return havingFilterPushDown;
     }
@@ -445,6 +450,7 @@ public class GTScanRequest {
             BytesUtil.writeVLong(value.startTime, out);
             BytesUtil.writeVLong(value.timeout, out);
             BytesUtil.writeUTFString(value.storageBehavior, out);
+            BytesUtil.writeUTFString(value.filterPushDownSQL, out);
 
             // for dynamic related info
             ImmutableBitSet.serializer.serialize(value.dynamicCols, out);
@@ -499,6 +505,7 @@ public class GTScanRequest {
             long startTime = BytesUtil.readVLong(in);
             long timeout = BytesUtil.readVLong(in);
             String storageBehavior = BytesUtil.readUTFString(in);
+            String filterPushDownSQL = BytesUtil.readUTFString(in);
 
             ImmutableBitSet aDynCols = ImmutableBitSet.serializer.deserialize(in);
 
@@ -516,7 +523,7 @@ public class GTScanRequest {
                     .setAggrGroupBy(sAggGroupBy).setAggrMetrics(sAggrMetrics).setAggrMetricsFuncs(sAggrMetricFuncs)
                     .setRtAggrMetrics(aRuntimeAggrMetrics).setDynamicColumns(aDynCols)
                     .setExprsPushDown(sTupleExpressionMap)
-                    .setFilterPushDown(sGTFilter).setHavingFilterPushDown(sGTHavingFilter)
+                    .setFilterPushDown(sGTFilter).setFilterPushDownSQL(filterPushDownSQL).setHavingFilterPushDown(sGTHavingFilter)
                     .setAllowStorageAggregation(sAllowPreAggr).setAggCacheMemThreshold(sAggrCacheGB)
                     .setStorageScanRowNumThreshold(storageScanRowNumThreshold)
                     .setStoragePushDownLimit(storagePushDownLimit).setStorageLimitLevel(storageLimitLevel)
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequestBuilder.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequestBuilder.java
index 94a89e6..daf766d 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequestBuilder.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTScanRequestBuilder.java
@@ -33,6 +33,7 @@ public class GTScanRequestBuilder {
     private GTInfo info;
     private List<GTScanRange> ranges;
     private TupleFilter filterPushDown;
+    private String filterPushDownSQL;
     private TupleFilter havingFilterPushDown;
     private ImmutableBitSet dimensions;
     private ImmutableBitSet aggrGroupBy = null;
@@ -80,6 +81,11 @@ public class GTScanRequestBuilder {
         return this;
     }
 
+    public GTScanRequestBuilder setFilterPushDownSQL(String filterPushDownSQL) {
+        this.filterPushDownSQL = filterPushDownSQL;
+        return this;
+    }
+
     public GTScanRequestBuilder setHavingFilterPushDown(TupleFilter havingFilterPushDown) {
         this.havingFilterPushDown = havingFilterPushDown;
         return this;
@@ -180,7 +186,7 @@ public class GTScanRequestBuilder {
         this.timeout = timeout == -1 ? 300000 : timeout;
 
         return new GTScanRequest(info, ranges, dimensions, aggrGroupBy, aggrMetrics, aggrMetricsFuncs, rtAggrMetrics,
-                dynamicColumns, exprsPushDown, filterPushDown, havingFilterPushDown, allowStorageAggregation,
+                dynamicColumns, exprsPushDown, filterPushDown, filterPushDownSQL, havingFilterPushDown, allowStorageAggregation,
                 aggCacheMemThreshold, storageScanRowNumThreshold, storagePushDownLimit, storageLimitLevel,
                 storageBehavior, startTime, timeout);
     }
diff --git a/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java b/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
index 298225f..49c68c5 100644
--- a/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
+++ b/core-cube/src/main/java/org/apache/kylin/gridtable/GTUtil.java
@@ -24,9 +24,17 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import com.google.common.collect.Lists;
 import org.apache.kylin.common.util.ByteArray;
 import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.cube.gridtable.CuboidToGridTableMapping;
+import org.apache.kylin.dimension.AbstractDateDimEnc;
+import org.apache.kylin.dimension.DictionaryDimEnc;
+import org.apache.kylin.dimension.DimensionEncoding;
+import org.apache.kylin.dimension.FixedLenDimEnc;
+import org.apache.kylin.dimension.IntDimEnc;
+import org.apache.kylin.dimension.IntegerDimEnc;
+import org.apache.kylin.metadata.datatype.DataTypeSerializer;
 import org.apache.kylin.metadata.expression.TupleExpression;
 import org.apache.kylin.metadata.expression.TupleExpressionSerializer;
 import org.apache.kylin.metadata.filter.ColumnTupleFilter;
@@ -44,6 +52,8 @@ import org.apache.kylin.metadata.model.TblColRef;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Sets;
 
+import static org.apache.kylin.metadata.filter.ConstantTupleFilter.TRUE;
+
 public class GTUtil {
 
     private GTUtil(){}
@@ -76,8 +86,22 @@ public class GTUtil {
     }
 
     public static TupleFilter convertFilterColumnsAndConstants(TupleFilter rootFilter, GTInfo info, //
-            Map<TblColRef, Integer> colMapping, Set<TblColRef> unevaluatableColumnCollector) {
-        TupleFilter filter = convertFilter(rootFilter, info, colMapping, true, unevaluatableColumnCollector);
+                                                               Map<TblColRef, Integer> colMapping, Set<TblColRef> unevaluatableColumnCollector) {
+        return convertFilterColumnsAndConstants(rootFilter, info, colMapping, unevaluatableColumnCollector, false);
+    }
+
+    public static TupleFilter convertFilterColumnsAndConstants(TupleFilter rootFilter, GTInfo info, //
+            Map<TblColRef, Integer> colMapping, Set<TblColRef> unevaluatableColumnCollector, boolean isParquet) {
+        if (rootFilter == null) {
+            return null;
+        }
+
+        TupleFilter filter;
+        if (isParquet) {
+            filter = convertFilter(rootFilter, new GTConvertDecoratorParquet(unevaluatableColumnCollector, colMapping, info, true));
+        } else {
+            filter = convertFilter(rootFilter, info, colMapping, true, unevaluatableColumnCollector);
+        }
 
         // optimize the filter: after translating with dictionary, some filters become determined
         // e.g.
@@ -110,6 +134,20 @@ public class GTUtil {
         return TupleFilterSerializer.deserialize(bytes, filterCodeSystem);
     }
 
+    private static TupleFilter convertFilter(TupleFilter rootFilter, GTConvertDecorator decorator) {
+        rootFilter = decorator.onSerialize(rootFilter);
+        if (rootFilter.hasChildren()) {
+            List<TupleFilter> newChildren = Lists.newArrayListWithCapacity(rootFilter.getChildren().size());
+            for (TupleFilter childFilter : rootFilter.getChildren()) {
+                newChildren.add(convertFilter(childFilter, decorator));
+            }
+            rootFilter.removeAllChildren();
+            rootFilter.addChildren(newChildren);
+        }
+        return rootFilter;
+    }
+
     public static TupleExpression convertFilterColumnsAndConstants(TupleExpression rootExpression, GTInfo info,
             CuboidToGridTableMapping mapping, Set<TblColRef> unevaluatableColumnCollector) {
         Map<TblColRef, FunctionDesc> innerFuncMap = Maps.newHashMap();
@@ -183,6 +221,43 @@ public class GTUtil {
         };
     }
 
+    private static class GTConvertDecoratorParquet extends GTConvertDecorator {
+        public GTConvertDecoratorParquet(Set<TblColRef> unevaluatableColumnCollector, Map<TblColRef, Integer> colMapping,
+                                         GTInfo info, boolean encodeConstants) {
+            super(unevaluatableColumnCollector, colMapping, info, encodeConstants);
+        }
+
+        @Override
+        protected TupleFilter convertColumnFilter(ColumnTupleFilter columnFilter) {
+            return columnFilter;
+        }
+
+        @Override
+        protected Object translate(int col, Object value, int roundingFlag) {
+            try {
+                buf.clear();
+                DimensionEncoding dimEnc = info.codeSystem.getDimEnc(col);
+                info.codeSystem.encodeColumnValue(col, value, roundingFlag, buf);
+                DataTypeSerializer serializer = dimEnc.asDataTypeSerializer();
+                buf.flip();
+                if (dimEnc instanceof DictionaryDimEnc) {
+                    int id = BytesUtil.readUnsigned(buf, dimEnc.getLengthOfEncoding());
+                    return id;
+                } else if (dimEnc instanceof AbstractDateDimEnc) {
+                    return Long.valueOf((String)serializer.deserialize(buf));
+                } else if (dimEnc instanceof FixedLenDimEnc) {
+                    return serializer.deserialize(buf);
+                } else if (dimEnc instanceof IntegerDimEnc || dimEnc instanceof IntDimEnc) {
+                    return Integer.valueOf((String)serializer.deserialize(buf));
+                } else {
+                    return value;
+                }
+            } catch (IllegalArgumentException ex) {
+                return null;
+            }
+        }
+    }
+
     protected static class GTConvertDecorator implements TupleFilterSerializer.Decorator {
         protected final Set<TblColRef> unevaluatableColumnCollector;
         protected final Map<TblColRef, Integer> colMapping;
@@ -190,7 +265,7 @@ public class GTUtil {
         protected final boolean useEncodeConstants;
 
         public GTConvertDecorator(Set<TblColRef> unevaluatableColumnCollector, Map<TblColRef, Integer> colMapping,
-                GTInfo info, boolean encodeConstants) {
+                                  GTInfo info, boolean encodeConstants) {
             this.unevaluatableColumnCollector = unevaluatableColumnCollector;
             this.colMapping = colMapping;
             this.info = info;
@@ -213,20 +288,18 @@ public class GTUtil {
             // will always return FALSE.
             if (filter.getOperator() == FilterOperatorEnum.NOT && !TupleFilter.isEvaluableRecursively(filter)) {
                 TupleFilter.collectColumns(filter, unevaluatableColumnCollector);
-                return ConstantTupleFilter.TRUE;
+                return TRUE;
             }
 
             // shortcut for unEvaluatable filter
             if (!filter.isEvaluable()) {
                 TupleFilter.collectColumns(filter, unevaluatableColumnCollector);
-                return ConstantTupleFilter.TRUE;
+                return TRUE;
             }
 
             // map to column onto grid table
             if (colMapping != null && filter instanceof ColumnTupleFilter) {
-                ColumnTupleFilter colFilter = (ColumnTupleFilter) filter;
-                int gtColIdx = mapCol(colFilter.getColumn());
-                return new ColumnTupleFilter(info.colRef(gtColIdx));
+                return convertColumnFilter((ColumnTupleFilter) filter);
             }
 
             // encode constants
@@ -237,6 +310,11 @@ public class GTUtil {
             return filter;
         }
 
+        protected TupleFilter convertColumnFilter(ColumnTupleFilter columnFilter) {
+            int gtColIdx = mapCol(columnFilter.getColumn());
+            return new ColumnTupleFilter(info.colRef(gtColIdx));
+        }
+
         protected TupleFilter encodeConstants(CompareTupleFilter oldCompareFilter) {
             // extract ColumnFilter & ConstantFilter
             TblColRef externalCol = oldCompareFilter.getColumn();
@@ -261,7 +339,7 @@ public class GTUtil {
             int col = colMapping == null ? externalCol.getColumnDesc().getZeroBasedIndex() : mapCol(externalCol);
 
             TupleFilter result;
-            ByteArray code;
+            Object code;
 
             // translate constant into code
             switch (newCompareFilter.getOperator()) {
@@ -353,7 +431,7 @@ public class GTUtil {
             return result;
         }
 
-        private TupleFilter newCompareFilter(FilterOperatorEnum op, TblColRef col, ByteArray code) {
+        private TupleFilter newCompareFilter(FilterOperatorEnum op, TblColRef col, Object code) {
             CompareTupleFilter r = new CompareTupleFilter(op);
             r.addChild(new ColumnTupleFilter(col));
             r.addChild(new ConstantTupleFilter(code));
@@ -368,7 +446,7 @@ public class GTUtil {
 
         transient ByteBuffer buf;
 
-        protected ByteArray translate(int col, Object value, int roundingFlag) {
+        protected Object translate(int col, Object value, int roundingFlag) {
             try {
                 buf.clear();
                 info.codeSystem.encodeColumnValue(col, value, roundingFlag, buf);
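
For orientation, the Parquet decorator above keeps filter constants in the value space that the Parquet files actually store: a dictionary-encoded dimension compares by dictionary id, date encodings by a numeric long form, integer encodings by the parsed number. Below is a minimal standalone sketch of that idea with toy stand-ins (not the Kylin encoders or their APIs):

    import java.time.LocalDate;
    import java.util.HashMap;
    import java.util.Map;

    // Illustration only: mirrors the branches of GTConvertDecoratorParquet.translate() with toy encodings.
    public class ParquetLiteralSketch {

        enum Encoding { DICTIONARY, DATE, INTEGER, OTHER }

        static Object toParquetLiteral(Encoding enc, String value, Map<String, Integer> dict) {
            switch (enc) {
                case DICTIONARY:
                    return dict.get(value);                     // compare by dictionary id
                case DATE:
                    return LocalDate.parse(value).toEpochDay(); // compare by a numeric day value (the real code uses the encoding's own long form)
                case INTEGER:
                    return Integer.valueOf(value);              // compare by the parsed number
                default:
                    return value;                               // fixed-length and others: keep the original literal
            }
        }

        public static void main(String[] args) {
            Map<String, Integer> dict = new HashMap<>();
            dict.put("FP-GTC", 0);
            dict.put("Auction", 1);
            System.out.println(toParquetLiteral(Encoding.DICTIONARY, "Auction", dict)); // 1
            System.out.println(toParquetLiteral(Encoding.DATE, "2018-12-10", dict));    // 17875
        }
    }
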
diff --git a/core-metadata/src/main/java/org/apache/kylin/measure/MeasureTypeFactory.java b/core-metadata/src/main/java/org/apache/kylin/measure/MeasureTypeFactory.java
index d16a705..621a13e 100644
--- a/core-metadata/src/main/java/org/apache/kylin/measure/MeasureTypeFactory.java
+++ b/core-metadata/src/main/java/org/apache/kylin/measure/MeasureTypeFactory.java
@@ -18,10 +18,8 @@
 
 package org.apache.kylin.measure;
 
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.KylinConfigCannotInitException;
 import org.apache.kylin.measure.basic.BasicMeasureType;
@@ -38,8 +36,9 @@ import org.apache.kylin.metadata.model.FunctionDesc;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
 
 /**
  * Factory for MeasureType.
diff --git a/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java b/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
index 60b3282..9534ecf 100644
--- a/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
+++ b/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
@@ -108,4 +108,6 @@ public class PercentileMeasureType extends MeasureType<PercentileCounter> {
     public Map<String, Class<?>> getRewriteCalciteAggrFunctions() {
         return UDAF_MAP;
     }
+
+
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNAggregator.java b/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNAggregator.java
index bc2bc36..ec8bca7 100644
--- a/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNAggregator.java
+++ b/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNAggregator.java
@@ -46,6 +46,11 @@ public class TopNAggregator extends MeasureAggregator<TopNCounter<ByteArray>> {
 
     @Override
     public TopNCounter<ByteArray> aggregate(TopNCounter<ByteArray> value1, TopNCounter<ByteArray> value2) {
+        if (value1 == null) {
+            return new TopNCounter<>(value2);
+        } else if (value2 == null) {
+            return new TopNCounter<>(value1);
+        }
         int thisCapacity = value1.getCapacity();
         TopNCounter<ByteArray> aggregated = new TopNCounter<>(thisCapacity * 2);
         aggregated.merge(value1);
diff --git a/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNCounter.java b/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNCounter.java
index 932248d..84debeb 100644
--- a/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNCounter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/measure/topn/TopNCounter.java
@@ -59,6 +59,11 @@ public class TopNCounter<T> implements Iterable<Counter<T>>, java.io.Serializabl
         counterList = Lists.newLinkedList();
     }
 
+    public TopNCounter(TopNCounter<T> another) {
+        this(another.capacity);
+        merge(another);
+    }
+
     public int getCapacity() {
         return capacity;
     }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
index 9082c1f..38cbd66 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/BuiltInFunctionTupleFilter.java
@@ -177,6 +177,46 @@ public class BuiltInFunctionTupleFilter extends FunctionTupleFilter {
     }
 
     @Override
+    public String toSparkSqlFilter() {
+        List<? extends TupleFilter> childFilter = this.getChildren();
+        String op = this.getName();
+        switch (op) {
+            case "LIKE":
+                assert childFilter.size() == 2;
+                return childFilter.get(0).toSparkSqlFilter() + toSparkFuncMap.get(op) + childFilter.get(1).toSparkSqlFilter();
+            case "||":
+                StringBuilder result = new StringBuilder().append(toSparkFuncMap.get(op)).append("(");
+                int index = 0;
+                for (TupleFilter filter : childFilter) {
+                    result.append(filter.toSparkSqlFilter());
+                    if (index < childFilter.size() - 1) {
+                        result.append(",");
+                    }
+                    index++;
+                }
+                result.append(")");
+                return result.toString();
+            case "LOWER":
+            case "UPPER":
+            case "CHAR_LENGTH":
+                assert childFilter.size() == 1;
+                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSparkSqlFilter() + ")";
+            case "SUBSTRING":
+                assert childFilter.size() == 3;
+                return toSparkFuncMap.get(op) + "(" + childFilter.get(0).toSparkSqlFilter() + "," + childFilter.get(1).toSparkSqlFilter() + "," + childFilter.get(2).toSparkSqlFilter() + ")";
+            default:
+                if (childFilter.size() == 1) {
+                    return op + "(" + childFilter.get(0).toSparkSqlFilter() + ")";
+                } else if (childFilter.size() == 2) {
+                    return childFilter.get(0).toSparkSqlFilter() + op + childFilter.get(1).toSparkSqlFilter();
+                } else if (childFilter.size() == 3) {
+                    return op + "(" + childFilter.get(0).toSparkSqlFilter() + "," + childFilter.get(1).toSparkSqlFilter() + "," + childFilter.get(2).toSparkSqlFilter() + ")";
+                }
+                throw new IllegalArgumentException("Operator " + op + " is not supported");
+        }
+    }
+
+    @Override
     public String toString() {
         StringBuilder sb = new StringBuilder();
         if (isReversed)
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
index 9083212..4305557 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CaseTupleFilter.java
@@ -134,6 +134,21 @@ public class CaseTupleFilter extends TupleFilter implements IOptimizeableTupleFi
     }
 
     @Override
+    public String toSparkSqlFilter() {
+        StringBuilder result = new StringBuilder("(case ");
+        for (int i = 0; i < this.getWhenFilters().size(); i++) {
+            TupleFilter whenFilter = this.getWhenFilters().get(i);
+            TupleFilter thenFilter = this.getThenFilters().get(i);
+            result.append(" when ").append(whenFilter.toSparkSqlFilter()).append(" then ").append(thenFilter.toSparkSqlFilter());
+        }
+        result.append(" else ").append(this.getElseFilter().toSparkSqlFilter());
+        result.append(" end)");
+        return result.toString();
+    }
+
+    @Override
     public TupleFilter acceptOptimizeTransformer(FilterOptimizeTransformer transformer) {
         List<TupleFilter> newChildren = Lists.newArrayList();
         for (TupleFilter child : this.getChildren()) {
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
index 6d3d541..09a16f5 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ColumnTupleFilter.java
@@ -46,12 +46,18 @@ public class ColumnTupleFilter extends TupleFilter {
     private TblColRef columnRef;
     private Object tupleValue;
     private List<Object> values;
+    private String colName;
 
     public ColumnTupleFilter(TblColRef column) {
+        this(column, null);
+    }
+
+    public ColumnTupleFilter(TblColRef column, String colName) {
         super(Collections.<TupleFilter> emptyList(), FilterOperatorEnum.COLUMN);
         this.columnRef = column;
         this.values = new ArrayList<Object>(1);
         this.values.add(null);
+        this.colName = colName;
     }
 
     public TblColRef getColumn() {
@@ -155,4 +161,9 @@ public class ColumnTupleFilter extends TupleFilter {
             this.columnRef = column.getRef();
         }
     }
+
+    @Override
+    public String toSparkSqlFilter() {
+        return this.columnRef.getTableAlias() + "_" + this.columnRef.getName();
+    }
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
index 1c1c409..b63ac0a 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/CompareTupleFilter.java
@@ -22,6 +22,7 @@ import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
@@ -279,6 +280,31 @@ public class CompareTupleFilter extends TupleFilter implements IOptimizeableTupl
     }
 
     @Override
+    public String toSparkSqlFilter() {
+        List<? extends TupleFilter> childFilter = this.getChildren();
+        switch (this.getOperator()) {
+            case EQ:
+            case NEQ:
+            case LT:
+            case GT:
+            case GTE:
+            case LTE:
+                assert childFilter.size() == 2;
+                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator()) + childFilter.get(1).toSparkSqlFilter();
+            case IN:
+            case NOTIN:
+                assert childFilter.size() == 2;
+                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator()) + "(" + childFilter.get(1).toSparkSqlFilter() + ")";
+            case ISNULL:
+            case ISNOTNULL:
+                assert childFilter.size() == 1;
+                return childFilter.get(0).toSparkSqlFilter() + toSparkOpMap.get(this.getOperator());
+            default:
+                throw new IllegalStateException("Operator " + this.getOperator() + " is not supported");
+        }
+    }
+
+    @Override
     public TupleFilter acceptOptimizeTransformer(FilterOptimizeTransformer transformer) {
         return transformer.visit(this);
     }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
index e4f8b2e..e9ecf16 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ConstantTupleFilter.java
@@ -21,6 +21,7 @@ package org.apache.kylin.metadata.filter;
 import java.nio.ByteBuffer;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.List;
 
 import org.apache.kylin.common.util.BytesUtil;
 import org.apache.kylin.metadata.tuple.IEvaluatableTuple;
@@ -88,6 +89,10 @@ public class ConstantTupleFilter extends TupleFilter {
         return this.constantValues;
     }
 
+    public void setValues(List<Object> constantValues) {
+        this.constantValues = constantValues;
+    }
+
     @SuppressWarnings({ "unchecked", "rawtypes" })
     @Override
     public void serialize(IFilterCodeSystem cs, ByteBuffer buffer) {
@@ -108,6 +113,38 @@ public class ConstantTupleFilter extends TupleFilter {
         }
     }
 
+    @Override
+    public String toSparkSqlFilter() {
+        if (this.equals(TRUE)) {
+            return "true";
+        } else if (this.equals(FALSE)) {
+            return "false";
+        }
+
+        StringBuilder sb = new StringBuilder("");
+
+        if (constantValues.isEmpty()) {
+            sb.append("null");
+        } else {
+            for (Object value : constantValues) {
+                if (value == null) {
+                    sb.append("null");
+                } else if (value instanceof String) {
+                    sb.append("'" + value + "'");
+                } else {
+                    sb.append(value);
+                }
+                sb.append(",");
+            }
+        }
+        String result = sb.toString();
+        if (result.endsWith(",")) {
+            result = result.substring(0, result.length() - 1);
+        }
+        return result;
+    }
+
     @Override public boolean equals(Object o) {
         if (this == o)
             return true;
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
index d9dc52a..c4490e5 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/DynamicTupleFilter.java
@@ -78,4 +78,9 @@ public class DynamicTupleFilter extends TupleFilter {
         this.variableName = BytesUtil.readUTFString(buffer);
     }
 
+    @Override
+    public String toSparkSqlFilter() {
+        return "1=1";
+    }
+
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
index 8c2ba94..36ea021 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/ExtractTupleFilter.java
@@ -122,4 +122,9 @@ public class ExtractTupleFilter extends TupleFilter {
     public void deserialize(IFilterCodeSystem<?> cs, ByteBuffer buffer) {
     }
 
+    @Override
+    public String toSparkSqlFilter() {
+        return "1=1";
+    }
+
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
index f0c825f..99cf3f3 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/LogicalTupleFilter.java
@@ -155,6 +155,31 @@ public class LogicalTupleFilter extends TupleFilter implements IOptimizeableTupl
     }
 
     @Override
+    public String toSparkSqlFilter() {
+        StringBuilder result = new StringBuilder("");
+        switch (this.getOperator()) {
+            case AND:
+            case OR:
+                result.append("(");
+                String op = toSparkOpMap.get(this.getOperator());
+                int index = 0;
+                for (TupleFilter filter : this.getChildren()) {
+                    result.append(filter.toSparkSqlFilter());
+                    if (index < this.getChildren().size() - 1) {
+                        result.append(op);
+                    }
+                    index++;
+                }
+                result.append(")");
+                break;
+            default:
+                throw new IllegalArgumentException("Operator " + this.getOperator() + " is not supported");
+        }
+
+        return result.toString();
+    }
+
+    @Override
     public TupleFilter acceptOptimizeTransformer(FilterOptimizeTransformer transformer) {
         List<TupleFilter> newChildren = Lists.newArrayList();
         for (TupleFilter child : this.getChildren()) {
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
index 672aba0..28a8c6c 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/TupleFilter.java
@@ -26,6 +26,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import com.google.common.collect.ImmutableMap;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.apache.kylin.metadata.tuple.IEvaluatableTuple;
 import org.slf4j.Logger;
@@ -34,6 +35,19 @@ import org.slf4j.LoggerFactory;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Sets;
 
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.AND;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.EQ;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.GT;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.GTE;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.IN;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.ISNOTNULL;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.ISNULL;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.LT;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.LTE;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.NEQ;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.NOTIN;
+import static org.apache.kylin.metadata.filter.TupleFilter.FilterOperatorEnum.OR;
+
 /**
  * 
  * @author xjiang
@@ -43,6 +57,35 @@ public abstract class TupleFilter {
 
     static final Logger logger = LoggerFactory.getLogger(TupleFilter.class);
 
+    public static final Map<TupleFilter.FilterOperatorEnum, String> toSparkOpMap = ImmutableMap.<TupleFilter.FilterOperatorEnum, String>builder()
+            .put(EQ, " = ")
+            .put(NEQ, " != ")
+            .put(LT, " < ")
+            .put(GT, " > ")
+            .put(GTE, " >= ")
+            .put(LTE, " <= ")
+            .put(ISNULL, " is null")
+            .put(ISNOTNULL, " is not null")
+            .put(IN, " in ")
+            .put(NOTIN, " not in ")
+            .put(AND, " and ")
+            .put(OR, " or ")
+            .build();
+
+    //TODO all function mapping
+    public static final Map<String, String> toSparkFuncMap = ImmutableMap.<String, String>builder()
+            .put("LOWER", "LOWER")
+            .put("UPPER", "UPPER")
+            .put("CHAR_LENGTH", "LENGTH")
+            .put("SUBSTRING", "SUBSTRING")
+            .put("LIKE", " LIKE ")
+            .put("||", "CONCAT")
+            .build();
+
+    public void removeAllChildren() {
+        this.children.clear();
+    }
+
     public enum FilterOperatorEnum {
         EQ(1), NEQ(2), GT(3), LT(4), GTE(5), LTE(6), ISNULL(7), ISNOTNULL(8), IN(9), NOTIN(10), AND(20), OR(21), NOT(22), COLUMN(30), CONSTANT(31), DYNAMIC(32), EXTRACT(33), CASE(34), FUNCTION(35), MASSIN(36), EVAL_FUNC(37), UNSUPPORTED(38);
 
@@ -378,4 +421,6 @@ public abstract class TupleFilter {
         }
     }
 
+    public abstract String toSparkSqlFilter();
+
 }
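
As a rough picture of the string the new toSparkSqlFilter() contract is meant to produce: compare nodes render as "column op literal", columns render as "<tableAlias>_<columnName>", and logical nodes wrap their children in parentheses. A self-contained sketch with made-up column names (not the Kylin filter classes):

    // Illustration only: composes a Spark SQL predicate string in the same shape as toSparkSqlFilter().
    public class SparkSqlFilterSketch {

        static String eq(String col, String literal) { return col + " = " + literal; }

        static String in(String col, String... literals) { return col + " in (" + String.join(",", literals) + ")"; }

        static String and(String... children) { return "(" + String.join(" and ", children) + ")"; }

        public static void main(String[] args) {
            String filter = and(
                    eq("KYLIN_SALES_LSTG_FORMAT_NAME", "'FP-GTC'"),
                    in("KYLIN_SALES_LEAF_CATEG_ID", "95672", "48027"));
            System.out.println(filter);
            // (KYLIN_SALES_LSTG_FORMAT_NAME = 'FP-GTC' and KYLIN_SALES_LEAF_CATEG_ID in (95672,48027))
        }
    }
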
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
index d0ff92e..b153003 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UDF/MassInTupleFilter.java
@@ -153,6 +153,11 @@ public class MassInTupleFilter extends FunctionTupleFilter {
         reverse = Boolean.parseBoolean(BytesUtil.readUTFString(buffer));
     }
 
+    @Override
+    public String toSparkSqlFilter() {
+        return "1=1";
+    }
+
     public static boolean containsMassInTupleFilter(TupleFilter filter) {
         if (filter == null)
             return false;
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
index 85605d4..143ee6d 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/filter/UnsupportedTupleFilter.java
@@ -56,4 +56,9 @@ public class UnsupportedTupleFilter extends TupleFilter {
     @Override
     public void deserialize(IFilterCodeSystem<?> cs, ByteBuffer buffer) {
     }
+
+    @Override
+    public String toSparkSqlFilter() {
+        return "1=1";
+    }
 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/model/ParameterDesc.java b/core-metadata/src/main/java/org/apache/kylin/metadata/model/ParameterDesc.java
index f757503..c3bb5a3 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/model/ParameterDesc.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/model/ParameterDesc.java
@@ -195,7 +195,7 @@ public class ParameterDesc implements Serializable {
      * 1. easy to compare without considering order
      * 2. easy to compare one by one
      */
-    private static class PlainParameter {
+    private static class PlainParameter implements Serializable {
         private String type;
         private String value;
         private TblColRef colRef = null;
diff --git a/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java b/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
index 229ef01..60fe33f 100644
--- a/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
+++ b/core-storage/src/main/java/org/apache/kylin/storage/gtrecord/CubeScanRangePlanner.java
@@ -72,6 +72,8 @@ public class CubeScanRangePlanner extends ScanRangePlannerBase {
     protected CubeSegment cubeSegment;
     protected CubeDesc cubeDesc;
     protected Cuboid cuboid;
+    protected String filterPushDownSQL;
+    protected CuboidToGridTableMapping mapping;
 
     public CubeScanRangePlanner(CubeSegment cubeSegment, Cuboid cuboid, TupleFilter filter, Set<TblColRef> dimensions, //
             Set<TblColRef> groupByDims, List<TblColRef> dynGroupsDims, List<TupleExpression> dynGroupExprs, //
@@ -87,7 +89,7 @@ public class CubeScanRangePlanner extends ScanRangePlannerBase {
         this.cubeDesc = cubeSegment.getCubeDesc();
         this.cuboid = cuboid;
 
-        final CuboidToGridTableMapping mapping = context.getMapping();
+        mapping = context.getMapping();
 
         this.gtInfo = CubeGridTable.newGTInfo(cuboid, new CubeDimEncMap(cubeSegment), mapping);
 
@@ -98,13 +100,21 @@ public class CubeScanRangePlanner extends ScanRangePlannerBase {
         this.rangeEndComparator = RecordComparators.getRangeEndComparator(comp);
         //start key GTRecord compare to stop key GTRecord
         this.rangeStartEndComparator = RecordComparators.getRangeStartEndComparator(comp);
-
         //replace the constant values in filter to dictionary codes
         Set<TblColRef> groupByPushDown = Sets.newHashSet(groupByDims);
         groupByPushDown.addAll(dynGroupsDims);
+
         this.gtFilter = GTUtil.convertFilterColumnsAndConstants(filter, gtInfo, mapping.getDim2gt(), groupByPushDown);
+
+        TupleFilter convertedFilter = GTUtil.convertFilterColumnsAndConstants(filter, gtInfo, mapping.getDim2gt(), groupByPushDown, true);
+
         this.havingFilter = havingFilter;
 
+        if (convertedFilter != null) {
+            this.filterPushDownSQL = convertedFilter.toSparkSqlFilter();
+            logger.info("Filter push down SQL: {}", this.filterPushDownSQL);
+        }
+
         this.gtDimensions = mapping.makeGridTableColumns(dimensions);
         this.gtAggrGroups = mapping.makeGridTableColumns(replaceDerivedColumns(groupByPushDown, cubeSegment.getCubeDesc()));
         this.gtAggrMetrics = mapping.makeGridTableColumns(metrics);
@@ -171,6 +181,7 @@ public class CubeScanRangePlanner extends ScanRangePlannerBase {
             scanRequest = new GTScanRequestBuilder().setInfo(gtInfo).setRanges(scanRanges).setDimensions(gtDimensions)
                     .setAggrGroupBy(gtAggrGroups).setAggrMetrics(gtAggrMetrics).setAggrMetricsFuncs(gtAggrFuncs)
                     .setFilterPushDown(gtFilter)//
+                    .setFilterPushDownSQL(filterPushDownSQL)
                     .setRtAggrMetrics(gtRtAggrMetrics).setDynamicColumns(gtDynColumns)
                     .setExprsPushDown(tupleExpressionMap)//
                     .setAllowStorageAggregation(context.isNeedStorageAggregation())
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
index d21638a..fd11c99 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/JobBuilderSupport.java
@@ -287,6 +287,10 @@ public class JobBuilderSupport {
         return getCuboidRootPath(seg.getLastBuildJobID());
     }
 
+    public String getCuboidRootPath() {
+        return getCuboidRootPath(seg.getLastBuildJobID());
+    }
+
     public void appendMapReduceParameters(StringBuilder buf) {
         appendMapReduceParameters(buf, JobEngineConfig.DEFAULT_JOB_CONF_SUFFIX);
     }
diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/CuboidReducer.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/CuboidReducer.java
index a7fa2cd..e0652ca 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/CuboidReducer.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/steps/CuboidReducer.java
@@ -6,9 +6,9 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * 
+ *
  *     http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -39,7 +39,7 @@ import org.slf4j.LoggerFactory;
 
 /**
  * @author George Song (ysong1)
- * 
+ *
  */
 public class CuboidReducer extends KylinReducer<Text, Text, Text, Text> {
 
diff --git a/engine-spark/pom.xml b/engine-spark/pom.xml
index 26a3ad7..7bb4a52 100644
--- a/engine-spark/pom.xml
+++ b/engine-spark/pom.xml
@@ -83,6 +83,11 @@
         </dependency>
 
         <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-tomcat-ext</artifactId>
+        </dependency>
+
+        <dependency>
             <groupId>junit</groupId>
             <artifactId>junit</artifactId>
             <scope>test</scope>
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
index 62ccf03..94333a2 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkBatchCubingJobBuilder2.java
@@ -18,9 +18,6 @@
 
 package org.apache.kylin.engine.spark;
 
-import java.util.HashMap;
-import java.util.Map;
-
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.StorageURL;
 import org.apache.kylin.common.util.StringUtil;
@@ -35,7 +32,8 @@ import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import static org.apache.kylin.metadata.model.IStorageAware.ID_PARQUET;
+import java.util.HashMap;
+import java.util.Map;
 
 /**
  */
@@ -144,10 +142,6 @@ public class SparkBatchCubingJobBuilder2 extends JobBuilderSupport {
         StringUtil.appendWithSeparator(jars, seg.getConfig().getSparkAdditionalJars());
         sparkExecutable.setJars(jars.toString());
         sparkExecutable.setName(ExecutableConstants.STEP_NAME_BUILD_SPARK_CUBE);
-
-        if (seg.getStorageType() == ID_PARQUET) {
-            sparkExecutable.setCounterSaveAs(",," + CubingJob.CUBE_SIZE_BYTES, getCounterOutputPath(jobId));
-        }
     }
 
     public String getSegmentMetadataUrl(KylinConfig kylinConfig, String jobId) {
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
index 288615c..46973a7 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayer.java
@@ -17,14 +17,6 @@
 */
 package org.apache.kylin.engine.spark;
 
-import java.io.Serializable;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
 import com.google.common.collect.Maps;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.OptionBuilder;
@@ -69,13 +61,19 @@ import org.apache.spark.api.java.function.Function;
 import org.apache.spark.api.java.function.Function2;
 import org.apache.spark.api.java.function.PairFlatMapFunction;
 import org.apache.spark.api.java.function.PairFunction;
-import org.apache.spark.sql.SQLContext;
 import org.apache.spark.storage.StorageLevel;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import scala.Tuple2;
 
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+
 /**
  * Spark application to build cube with the "by-layer" algorithm. Only support source data from Hive; Metadata in HBase.
  */
diff --git a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
index d8fccf9..154c058 100644
--- a/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
+++ b/engine-spark/src/main/java/org/apache/kylin/engine/spark/SparkCubingByLayerParquet.java
@@ -56,6 +56,8 @@ import org.apache.kylin.measure.basic.BasicMeasureType;
 import org.apache.kylin.measure.basic.BigDecimalIngester;
 import org.apache.kylin.measure.basic.DoubleIngester;
 import org.apache.kylin.measure.basic.LongIngester;
+import org.apache.kylin.measure.percentile.PercentileSerializer;
+import org.apache.kylin.metadata.datatype.DataType;
 import org.apache.kylin.metadata.model.MeasureDesc;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.apache.parquet.example.data.Group;
@@ -75,6 +77,7 @@ import scala.Tuple2;
 
 import java.io.IOException;
 import java.io.Serializable;
+import java.math.BigDecimal;
 import java.nio.ByteBuffer;
 import java.util.List;
 import java.util.Map;
@@ -264,6 +267,8 @@ public class SparkCubingByLayerParquet extends SparkCubingByLayer {
         private Map<MeasureDesc, String> meaTypeMap;
         private GroupFactory factory;
         private BufferedMeasureCodec measureCodec;
+        private PercentileSerializer serializer;
+        private ByteBuffer byteBuffer;
 
         public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
             this.cubeName = cubeName;
@@ -284,6 +289,8 @@ public class SparkCubingByLayerParquet extends SparkCubingByLayer {
             decoder = new RowKeyDecoder(cubeSegment);
             factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
             measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
+            serializer = new PercentileSerializer(DataType.getType("percentile(100)"));
+
         }
 
         @Override
@@ -319,7 +326,7 @@ public class SparkCubingByLayerParquet extends SparkCubingByLayer {
             int valueOffset = 0;
             for (int i = 0; i < valueLengths.length; ++i) {
                 MeasureDesc measureDesc = measureDescs.get(i);
-                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i]);
+                parseMeaValue(group, measureDesc, encodedBytes, valueOffset, valueLengths[i], tuple._2[i]);
                 valueOffset += valueLengths[i];
             }
 
@@ -340,7 +347,7 @@ public class SparkCubingByLayerParquet extends SparkCubingByLayer {
             }
         }
 
-        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length) {
+        private void parseMeaValue(final Group group, final MeasureDesc measureDesc, final byte[] value, final int offset, final int length, final Object d) {
             switch (meaTypeMap.get(measureDesc)) {
                 case "long":
                     group.append(measureDesc.getName(), BytesUtil.readLong(value, offset, length));
@@ -348,6 +355,11 @@ public class SparkCubingByLayerParquet extends SparkCubingByLayer {
                 case "double":
                     group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
                     break;
+                case "decimal":
+                    // store the decimal as the unscaled bytes of the value rescaled to 4 (setScale without a rounding mode throws if rounding is needed)
+                    BigDecimal decimal = (BigDecimal) d;
+                    decimal = decimal.setScale(4);
+                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(decimal.unscaledValue().toByteArray()));
+                    break;
                 default:
                     group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
                     break;
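
The new "decimal" branch stores the measure as the unscaled bytes of the value rescaled to 4. A small plain-Java sketch of that round trip (no Parquet dependency; the scale constant is an assumption matching the hard-coded 4 above):

    import java.math.BigDecimal;
    import java.math.BigInteger;

    // Illustration only: a decimal survives the "unscaled bytes at a fixed scale" round trip.
    public class DecimalBytesSketch {

        static final int SCALE = 4; // assumed to match the scale the Parquet column is declared with

        static byte[] encode(BigDecimal value) {
            // setScale without a rounding mode throws if the value does not already fit the scale
            return value.setScale(SCALE).unscaledValue().toByteArray();
        }

        static BigDecimal decode(byte[] bytes) {
            return new BigDecimal(new BigInteger(bytes), SCALE);
        }

        public static void main(String[] args) {
            System.out.println(decode(encode(new BigDecimal("12.3400")))); // 12.3400
        }
    }
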
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala b/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala
new file mode 100644
index 0000000..20f86e5
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/KylinSession.scala
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.spark.sql
+
+import java.io.File
+import java.nio.file.Paths
+import java.util.Properties
+
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.Path
+import org.apache.kylin.common.KylinConfig
+import org.apache.spark.internal.Logging
+import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}
+import org.apache.spark.sql.SparkSession.Builder
+import org.apache.spark.sql.internal.{SessionState, SharedState}
+import org.apache.spark.sql.manager.UdfManager
+import org.apache.spark.util.{KylinReflectUtils, XmlUtils}
+import org.apache.spark.{SparkConf, SparkContext}
+
+import scala.collection.JavaConverters._
+
+class KylinSession(
+                    @transient val sc: SparkContext,
+                    @transient private val existingSharedState: Option[SharedState])
+  extends SparkSession(sc) {
+
+  def this(sc: SparkContext) {
+    this(sc, None)
+  }
+
+//  @transient
+//  override lazy val sessionState: SessionState =
+//    KylinReflectUtils.getSessionState(sc, this).asInstanceOf[SessionState]
+
+  override def newSession(): SparkSession = {
+    new KylinSession(sparkContext, Some(sharedState))
+  }
+}
+
+object KylinSession extends Logging {
+
+  implicit class KylinBuilder(builder: Builder) {
+    def getOrCreateKylinSession(): SparkSession = synchronized {
+      val options =
+        getValue("options", builder)
+          .asInstanceOf[scala.collection.mutable.HashMap[String, String]]
+      val userSuppliedContext: Option[SparkContext] =
+        getValue("userSuppliedContext", builder)
+          .asInstanceOf[Option[SparkContext]]
+      var session: SparkSession = SparkSession.getActiveSession match {
+        case Some(sparkSession: KylinSession) =>
+          if ((sparkSession ne null) && !sparkSession.sparkContext.isStopped) {
+            options.foreach {
+              case (k, v) => sparkSession.sessionState.conf.setConfString(k, v)
+            }
+            sparkSession
+          } else {
+            null
+          }
+        case _ => null
+      }
+      if (session ne null) {
+        return session
+      }
+      // Global synchronization so we will only set the default session once.
+      SparkSession.synchronized {
+        // If the current thread does not have an active session, get it from the global session.
+        session = SparkSession.getDefaultSession match {
+          case Some(sparkSession: KylinSession) =>
+            if ((sparkSession ne null) && !sparkSession.sparkContext.isStopped) {
+              sparkSession
+            } else {
+              null
+            }
+          case _ => null
+        }
+        if (session ne null) {
+          return session
+        }
+        val sparkContext = userSuppliedContext.getOrElse {
+          // set app name if not given
+          val sparkConf = initSparkConf()
+          options.foreach { case (k, v) => sparkConf.set(k, v) }
+          val sc = SparkContext.getOrCreate(sparkConf)
+          // this may be an existing SparkContext; update its SparkConf, which may be used
+          // by the SparkSession
+          sc
+        }
+        session = new KylinSession(sparkContext)
+        SparkSession.setDefaultSession(session)
+        sparkContext.addSparkListener(new SparkListener {
+          override def onApplicationEnd(applicationEnd: SparkListenerApplicationEnd): Unit = {
+            SparkSession.setDefaultSession(null)
+            SparkSession.sqlListener.set(null)
+          }
+        })
+        UdfManager.create(session)
+        session
+      }
+    }
+
+    def getValue(name: String, builder: Builder): Any = {
+      val currentMirror = scala.reflect.runtime.currentMirror
+      val instanceMirror = currentMirror.reflect(builder)
+      val m = currentMirror
+        .classSymbol(builder.getClass)
+        .toType
+        .members
+        .find { p =>
+          p.name.toString.equals(name)
+        }
+        .get
+        .asTerm
+      instanceMirror.reflectField(m).get
+    }
+
+    private lazy val kylinConfig: KylinConfig = KylinConfig.getInstanceFromEnv
+
+    def initSparkConf(): SparkConf = {
+      val sparkConf = new SparkConf()
+
+      kylinConfig.getSparkConf.asScala.foreach {
+        case (k, v) =>
+          sparkConf.set(k, v)
+      }
+      if (KylinConfig.getInstanceFromEnv.isParquetSeparateFsEnabled) {
+        logInfo("ParquetSeparateFs is enabled : begin override read cluster conf to sparkConf")
+        addReadConfToSparkConf(sparkConf)
+      }
+      val instances = sparkConf.get("spark.executor.instances").toInt
+      val cores = sparkConf.get("spark.executor.cores").toInt
+      val sparkCores = instances * cores
+      if (sparkConf.get("spark.sql.shuffle.partitions", "").isEmpty) {
+        sparkConf.set("spark.sql.shuffle.partitions", sparkCores.toString)
+      }
+      sparkConf.set("spark.sql.session.timeZone", "UTC")
+      sparkConf.set("spark.debug.maxToStringFields", "1000")
+      sparkConf.set("spark.scheduler.mode", "FAIR")
+      if (new File(KylinConfig.getKylinConfDir.getCanonicalPath + "/fairscheduler.xml")
+        .exists()) {
+        val fairScheduler = KylinConfig.getKylinConfDir.getCanonicalPath + "/fairscheduler.xml"
+        sparkConf.set("spark.scheduler.allocation.file", fairScheduler)
+      }
+
+      if (!"true".equalsIgnoreCase(System.getProperty("spark.local"))) {
+        if("yarn-client".equalsIgnoreCase(sparkConf.get("spark.master"))){
+          sparkConf.set("spark.yarn.dist.jars", kylinConfig.sparderJars)
+        } else {
+          sparkConf.set("spark.jars", kylinConfig.sparderJars)
+        }
+
+        val filePath = KylinConfig.getInstanceFromEnv.sparderJars
+          .split(",")
+          .filter(p => p.contains("storage-parquet"))
+          .apply(0)
+        val fileName = filePath.substring(filePath.lastIndexOf('/') + 1)
+        sparkConf.set("spark.executor.extraClassPath", fileName)
+      }
+
+      sparkConf
+    }
+
+
+    /**
+      * For R/W Splitting
+      *
+      * @param sparkConf
+      * @return
+      */
+    def addReadConfToSparkConf(sparkConf: SparkConf): SparkConf = {
+      val readHadoopConfDir = kylinConfig
+        .getColumnarSparkEnv("HADOOP_CONF_DIR")
+      if (!new File(readHadoopConfDir).exists()) {
+        throw new IllegalArgumentException(
+          "kylin.storage.columnar.spark-env.HADOOP_CONF_DIR not found: " + readHadoopConfDir)
+      }
+      overrideHadoopConfigToSparkConf(readHadoopConfDir, sparkConf)
+      changeStagingDir(sparkConf, readHadoopConfDir)
+      sparkConf
+    }
+
+    private def changeStagingDir(sparkConf: SparkConf, readHadoopConf: String) = {
+      val coreProperties: Properties =
+        XmlUtils.loadProp(readHadoopConf + "/core-site.xml")
+      val path = new Path(coreProperties.getProperty("fs.defaultFS"))
+      val homePath =
+        path.getFileSystem(new Configuration()).getHomeDirectory.toString
+      sparkConf.set("spark.yarn.stagingDir", homePath)
+    }
+
+    private def overrideHadoopConfigToSparkConf(
+                                                 readHadoopConf: String,
+                                                 sparkConf: SparkConf): Unit = {
+      val overrideFiles =
+        kylinConfig.getParquetSeparateOverrideFiles.split(",")
+      overrideFiles.foreach { configFile =>
+        logInfo("find config file : " + configFile)
+        val cleanedPath = Paths.get(readHadoopConf, configFile)
+        val properties: Properties =
+          XmlUtils.loadProp(cleanedPath.toString)
+        properties
+          .entrySet()
+          .asScala
+          .filter(_.getValue.asInstanceOf[String].nonEmpty)
+          .foreach { entry =>
+            logInfo("override : " + entry)
+            sparkConf.set(
+              "spark.hadoop." + entry.getKey.asInstanceOf[String],
+              entry.getValue.asInstanceOf[String])
+          }
+      }
+    }
+  }
+
+}
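
The read/write-splitting helpers above copy every non-empty entry of the read cluster's Hadoop config files into the SparkConf under a "spark.hadoop." prefix. A minimal sketch of that prefixing with plain java.util.Properties (hypothetical values, not the KylinSession API):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    // Illustration only: copy Hadoop settings into Spark conf keys under the spark.hadoop. prefix.
    public class HadoopConfOverrideSketch {

        static Map<String, String> toSparkHadoopConf(Properties hadoopProps) {
            Map<String, String> sparkConf = new HashMap<>();
            for (String name : hadoopProps.stringPropertyNames()) {
                String value = hadoopProps.getProperty(name);
                if (value != null && !value.isEmpty()) {
                    sparkConf.put("spark.hadoop." + name, value);
                }
            }
            return sparkConf;
        }

        public static void main(String[] args) {
            Properties coreSite = new Properties();
            coreSite.setProperty("fs.defaultFS", "hdfs://read-cluster");
            System.out.println(toSparkHadoopConf(coreSite)); // {spark.hadoop.fs.defaultFS=hdfs://read-cluster}
        }
    }
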
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala b/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala
new file mode 100644
index 0000000..ecaa5c8
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/SparderEnv.scala
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.spark.sql
+
+import org.apache.kylin.ext.ClassLoaderUtils
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.KylinSession._
+
+object SparderEnv extends Logging {
+  @volatile
+  private var spark: SparkSession = _
+
+  def getSparkSession: SparkSession = withClassLoad {
+    if (spark == null || spark.sparkContext.isStopped) {
+      logInfo("Init spark.")
+      initSpark()
+    }
+    spark
+  }
+
+  def isSparkAvailable: Boolean = {
+    spark != null && !spark.sparkContext.isStopped
+  }
+
+  def restartSpark(): Unit = withClassLoad {
+    this.synchronized {
+      if (spark != null && !spark.sparkContext.isStopped) {
+        spark.stop()
+      }
+
+      logInfo("Restart Spark")
+      init()
+    }
+  }
+
+  def init(): Unit = withClassLoad {
+    getSparkSession
+  }
+
+  def getSparkConf(key: String): String = {
+    getSparkSession.sparkContext.conf.get(key)
+  }
+
+  def getActiveJobs(): Int = {
+    SparderEnv.getSparkSession.sparkContext.jobProgressListener.activeJobs.size
+  }
+
+  def getFailedJobs(): Int = {
+    SparderEnv.getSparkSession.sparkContext.jobProgressListener.failedJobs.size
+  }
+
+  def getAsyncResultCore: Int = {
+    val sparkConf = getSparkSession.sparkContext.getConf
+    val instances = sparkConf.get("spark.executor.instances").toInt
+    val cores = sparkConf.get("spark.executor.cores").toInt
+    Math.round(instances * cores / 3)
+  }
+
+  def initSpark(): Unit = withClassLoad {
+    this.synchronized {
+      if (spark == null || spark.sparkContext.isStopped) {
+        val sparkSession = System.getProperty("spark.local") match {
+          case "true" =>
+            SparkSession.builder
+              .master("local")
+              .appName("test-local-sql-context")
+//                        .enableHiveSupport()
+              .getOrCreateKylinSession()
+          case _ =>
+            SparkSession.builder
+              .appName("test-sql-context")
+              .enableHiveSupport()
+              .getOrCreateKylinSession()
+        }
+        spark = sparkSession
+
+        logInfo("Spark context started successfully with stack trace:")
+        logInfo(Thread.currentThread().getStackTrace.mkString("\n"))
+        logInfo("Class loader: " + Thread.currentThread().getContextClassLoader.toString)
+      }
+    }
+  }
+
+  /**
+    * Run the body with the dedicated Spark classloader as the thread context classloader,
+    * so that Spark is not affected by the caller's classpath or environment.
+    *
+    * @param body code block that uses Spark
+    * @tparam T   result type of the body
+    * @return the value returned by the body
+    */
+  def withClassLoad[T](body: => T): T = {
+    val originClassLoad = Thread.currentThread().getContextClassLoader
+    Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader)
+    val t = body
+    Thread.currentThread().setContextClassLoader(originClassLoad)
+    t
+  }
+}
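
The withClassLoad helper above temporarily swaps the thread context classloader so that
Spark classes resolve against Kylin's dedicated Spark classloader. A generic sketch of the
same save/swap/restore pattern (the try/finally is an extra safety choice of this sketch,
not something SparderEnv does, and the loader passed in is just a stand-in for
ClassLoaderUtils.getSparkClassLoader):

    def withContextClassLoader[T](loader: ClassLoader)(body: => T): T = {
      val previous = Thread.currentThread().getContextClassLoader
      Thread.currentThread().setContextClassLoader(loader)
      try body
      finally Thread.currentThread().setContextClassLoader(previous)
    }

    // run a block with this class's own loader as the context classloader
    val loaderName = withContextClassLoader(getClass.getClassLoader) {
      Thread.currentThread().getContextClassLoader.toString
    }
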
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/hive/KylinHiveSessionStateBuilder.scala b/engine-spark/src/main/java/org/apache/spark/sql/hive/KylinHiveSessionStateBuilder.scala
new file mode 100644
index 0000000..c399381
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/hive/KylinHiveSessionStateBuilder.scala
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2016 Kyligence Inc. All rights reserved.
+ *
+ * http://kyligence.io
+ *
+ * This software is the confidential and proprietary information of
+ * Kyligence Inc. ("Confidential Information"). You shall not disclose
+ * such Confidential Information and shall use it only in accordance
+ * with the terms of the license agreement you entered into with
+ * Kyligence Inc.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+package org.apache.spark.sql.hive
+
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.internal.{BaseSessionStateBuilder, SessionState}
+
+/**
+  * Hive-backed session state builder; a Hive session carries extra resolution rules,
+  * e.g. the rule that resolves data source tables.
+  *
+  * @param sparkSession the owning Spark session
+  * @param parentState  optional parent session state to inherit from
+  */
+class KylinHiveSessionStateBuilder(sparkSession: SparkSession, parentState: Option[SessionState] = None)
+  extends HiveSessionStateBuilder(sparkSession, parentState) {
+  experimentalMethods.extraOptimizations = {
+      Seq()
+  }
+
+  private def externalCatalog: HiveExternalCatalog =
+    session.sharedState.externalCatalog.asInstanceOf[HiveExternalCatalog]
+
+  override protected def newBuilder: NewBuilder =
+    new KylinHiveSessionStateBuilder(_, _)
+
+}
+
+/**
+  * Session state builder used when Hive support is disabled (no-Hive mode).
+  *
+  * @param sparkSession the owning Spark session
+  * @param parentState  optional parent session state to inherit from
+  */
+class KylinSessionStateBuilder(sparkSession: SparkSession, parentState: Option[SessionState] = None)
+  extends BaseSessionStateBuilder(sparkSession, parentState) {
+  experimentalMethods.extraOptimizations = {
+      Seq()
+  }
+
+  override protected def newBuilder: NewBuilder =
+    new KylinSessionStateBuilder(_, _)
+
+}
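
Both builders above reset experimentalMethods.extraOptimizations to an empty Seq; that is
the hook where session-specific optimizer rules would be plugged in later. A sketch of what
such a rule looks like when registered through the public Spark 2.x API (NoOpRule is a
hypothetical placeholder, not a rule shipped by this patch):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule

    // a do-nothing rule standing in for a future cube-aware optimization
    object NoOpRule extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan
    }

    val spark = SparkSession.builder().master("local[1]").appName("rule-demo").getOrCreate()
    // equivalent in effect to assigning experimentalMethods.extraOptimizations in a builder
    spark.experimental.extraOptimizations = Seq(NoOpRule)
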
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/manager/UdfManager.scala b/engine-spark/src/main/java/org/apache/spark/sql/manager/UdfManager.scala
new file mode 100644
index 0000000..499cbcd
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/manager/UdfManager.scala
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.spark.sql.manager
+
+import java.util.concurrent.TimeUnit
+import java.util.concurrent.atomic.AtomicReference
+
+import com.google.common.cache.{Cache, CacheBuilder, RemovalListener, RemovalNotification}
+import org.apache.kylin.metadata.datatype.DataType
+import org.apache.spark.api.java.JavaSparkContext
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.udf.SparderAggFun
+
+class UdfManager(sparkSession: SparkSession) extends Logging {
+  private var udfCache: Cache[String, String] = _
+
+  udfCache = CacheBuilder.newBuilder
+    .maximumSize(100)
+    .expireAfterWrite(1, TimeUnit.HOURS)
+    .removalListener(new RemovalListener[String, String]() {
+      override def onRemoval(notification: RemovalNotification[String, String]): Unit = {
+        val func = notification.getKey
+        logInfo(s"remove function $func")
+      }
+    })
+    .build
+    .asInstanceOf[Cache[String, String]]
+
+  def destory(): Unit = {
+    udfCache.cleanUp()
+  }
+
+  def doRegister(dataType: DataType, funcName: String): String = {
+    val name = genKey(dataType, funcName)
+    val cacheFunc = udfCache.getIfPresent(name)
+    if (cacheFunc == null) {
+      udfCache.put(name, "")
+      sparkSession.udf.register(name, new SparderAggFun(funcName, dataType))
+    }
+    name
+  }
+
+  def genKey(dataType: DataType, funcName: String): String = {
+    dataType.toString
+      .replace("(", "_")
+      .replace(")", "_")
+      .replace(",", "_") + funcName
+  }
+
+}
+
+object UdfManager {
+
+  private val defaultManager = new AtomicReference[UdfManager]
+  private val defaultSparkSession: AtomicReference[SparkSession] =
+    new AtomicReference[SparkSession]
+
+  def refresh(sc: JavaSparkContext): Unit = {
+    val sparkSession = SparkSession.builder.config(sc.getConf).getOrCreate
+
+    defaultManager.get().destory()
+    create(sparkSession)
+  }
+
+  def create(sparkSession: SparkSession): Unit = {
+    val manager = new UdfManager(sparkSession)
+    defaultManager.set(manager)
+    defaultSparkSession.set(sparkSession)
+  }
+
+  def create(sc: JavaSparkContext): Unit = {
+    val sparkSession = SparkSession.builder.config(sc.getConf).getOrCreate
+    create(sparkSession)
+
+  }
+
+  def register(dataType: DataType, func: String): String = {
+    defaultManager.get().doRegister(dataType, func)
+  }
+
+}
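
UdfManager registers a SparderAggFun with the Spark session once per (data type, function)
pair and caches the generated UDF name in a Guava cache for an hour. A usage sketch,
assuming DataType.getType parses a Kylin type string as it does elsewhere in the code base:

    import org.apache.kylin.metadata.datatype.DataType
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.manager.UdfManager

    val spark = SparkSession.builder().master("local[1]").appName("udf-demo").getOrCreate()
    UdfManager.create(spark)

    // registers the aggregate UDF on first use and returns a cached, type-derived name,
    // e.g. something like "decimal_19_4_SUM" built by genKey
    val udfName = UdfManager.register(DataType.getType("decimal(19,4)"), "SUM")
    println(udfName)
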
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/udf/SparderAggFun.scala b/engine-spark/src/main/java/org/apache/spark/sql/udf/SparderAggFun.scala
new file mode 100644
index 0000000..d84b31d
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/udf/SparderAggFun.scala
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.spark.sql.udf
+
+import java.nio.ByteBuffer
+
+import org.apache.kylin.gridtable.GTInfo
+import org.apache.kylin.measure.MeasureAggregator
+import org.apache.kylin.metadata.datatype.DataTypeSerializer
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
+import org.apache.spark.sql.types._
+import org.apache.spark.sql.util.SparderTypeUtil
+
+class SparderAggFun(funcName: String, dataTp: org.apache.kylin.metadata.datatype.DataType)
+  extends UserDefinedAggregateFunction
+    with Logging {
+
+  protected val _inputDataType = {
+    val schema = StructType(Seq(StructField("inputBinary", BinaryType)))
+    schema
+  }
+
+  protected val _bufferSchema: StructType = {
+    val schema = StructType(Seq(StructField("bufferBinary", BinaryType)))
+    schema
+  }
+
+  protected val _returnDataType: DataType =
+    SparderTypeUtil.kylinTypeToSparkResultType(dataTp)
+
+  protected var byteBuffer: ByteBuffer = null
+  protected var init = false
+  protected var gtInfo: GTInfo = _
+  protected var measureAggregator: MeasureAggregator[Any] = _
+  protected var colId: Int = _
+  protected var serializer: DataTypeSerializer[Any] = _
+  protected var aggregator: MeasureAggregator[Any] = _
+
+  var time = System.currentTimeMillis()
+
+  override def bufferSchema: StructType = _bufferSchema
+
+  override def inputSchema: StructType = _inputDataType
+
+  override def deterministic: Boolean = true
+
+
+  override def initialize(buffer: MutableAggregationBuffer): Unit = {
+    val isSum0 = (funcName == "$SUM0")
+    if (byteBuffer == null) {
+      serializer = DataTypeSerializer.create(dataTp).asInstanceOf[DataTypeSerializer[Any]]
+      byteBuffer = ByteBuffer.allocate(serializer.maxLength)
+    }
+
+    aggregator = MeasureAggregator
+      .create(if (isSum0) "COUNT" else funcName, dataTp)
+      .asInstanceOf[MeasureAggregator[Any]]
+
+    aggregator.reset()
+
+    val initVal = if (isSum0) {
+      // $SUM0 is the rewritten form of COUNT, which should return 0 instead of null in case of no input
+      byteBuffer.clear()
+      serializer.serialize(aggregator.getState, byteBuffer)
+      byteBuffer.array().slice(0, byteBuffer.position())
+    } else {
+      null
+    }
+    buffer.update(0, initVal)
+  }
+
+  val MAX_BUFFER_CAP: Int = 50 * 1024 * 1024
+
+  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
+    merge(buffer, input)
+  }
+
+  override def merge(buffer: MutableAggregationBuffer, input: Row): Unit = {
+    if (!input.isNullAt(0)) {
+      try {
+        val byteArray = input.apply(0).asInstanceOf[Array[Byte]]
+        if (byteArray.length == 0) {
+          return
+        }
+        val oldValue = if(buffer.isNullAt(0)) null else serializer.deserialize(ByteBuffer.wrap(buffer.apply(0).asInstanceOf[Array[Byte]]))
+        val newValue = serializer.deserialize(ByteBuffer.wrap(byteArray))
+
+        val aggedValue = aggregator.aggregate(oldValue, newValue)
+
+        if (aggedValue != null) {
+          byteBuffer.clear()
+          serializer.serialize(aggedValue, byteBuffer)
+          buffer.update(0, byteBuffer.array().slice(0, byteBuffer.position()))
+        }
+      } catch {
+        case e: Exception =>
+          throw new Exception(
+            "error data is: " + input
+              .apply(0)
+              .asInstanceOf[Array[Byte]]
+              .mkString(","),
+            e)
+      }
+    }
+  }
+
+  override def evaluate(buffer: Row): Any = {
+    if (buffer.isNullAt(0)) {
+      null
+    } else {
+      val ret = dataTp.getName match {
+        case dt if dt.startsWith("percentile") => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case "hllc" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case "bitmap" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case "dim_dc" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case "extendedcolumn" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case "raw" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case t if t startsWith "top" => buffer.apply(0).asInstanceOf[Array[Byte]]
+        case _ => null
+      }
+
+      if (ret != null)
+        ret
+      else
+        throw new IllegalArgumentException("unsupported function")
+    }
+  }
+
+  override def toString: String = {
+    s"SparderAggFun@$funcName${dataType.toString}"
+  }
+
+  override def dataType: DataType = _returnDataType
+}
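
SparderAggFun is a UserDefinedAggregateFunction whose single BinaryType column carries
measure values serialized by Kylin's DataTypeSerializer: merge deserializes both sides,
folds them through a MeasureAggregator, and writes the result back as bytes. A sketch of
wiring it up by hand (the "kylin_sum_decimal" name and the column names are made up, and
the input column is assumed to hold already-serialized measure bytes from the cube build):

    import org.apache.kylin.metadata.datatype.DataType
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{callUDF, col}
    import org.apache.spark.sql.udf.SparderAggFun

    val spark = SparkSession.builder().master("local[1]").appName("agg-demo").getOrCreate()

    val fn = new SparderAggFun("SUM", DataType.getType("decimal(19,4)"))
    spark.udf.register("kylin_sum_decimal", fn)

    // df is assumed to have a BinaryType column "measure" and a dimension column "dim"
    // df.groupBy(col("dim")).agg(callUDF("kylin_sum_decimal", col("measure")))
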
diff --git a/engine-spark/src/main/java/org/apache/spark/sql/util/SparderTypeUtil.scala b/engine-spark/src/main/java/org/apache/spark/sql/util/SparderTypeUtil.scala
new file mode 100644
index 0000000..f2adbcc
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/sql/util/SparderTypeUtil.scala
@@ -0,0 +1,473 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.spark.sql.util
+
+import java.math.BigDecimal
+import java.sql.Timestamp
+import java.util.Locale
+
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.sql.`type`.SqlTypeName
+import org.apache.kylin.common.util.DateFormat
+import org.apache.kylin.metadata.datatype.DataType
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.types._
+
+object SparderTypeUtil extends Logging {
+  val DATETIME_FAMILY = List("time", "date", "timestamp", "datetime")
+
+  def isDateTimeFamilyType(dataType: String): Boolean = {
+    DATETIME_FAMILY.contains(dataType.toLowerCase(Locale.ROOT))
+  }
+
+  def isDateType(dataType: String): Boolean = {
+    "date".equalsIgnoreCase(dataType)
+  }
+
+  def isDateTime(sqlTypeName: SqlTypeName): Boolean = {
+    SqlTypeName.DATETIME_TYPES.contains(sqlTypeName)
+  }
+
+  // scalastyle:off
+  def kylinTypeToSparkResultType(dataTp: DataType): org.apache.spark.sql.types.DataType = {
+    dataTp.getName match {
+      case tp if tp.startsWith("hllc") => BinaryType
+      case tp if tp.startsWith("top") => BinaryType
+      case tp if tp.startsWith("percentile") => BinaryType
+      case tp if tp.startsWith("bitmap") => BinaryType
+      case "decimal" => DecimalType(dataTp.getPrecision, dataTp.getScale)
+      case "date" => IntegerType
+      case "time" => LongType
+      case "timestamp" => LongType
+      case "datetime" => LongType
+      case "tinyint" => ByteType
+      case "smallint" => ShortType
+      case "integer" => IntegerType
+      case "int4" => IntegerType
+      case "bigint" => LongType
+      case "long8" => LongType
+      case "float" => FloatType
+      case "double" => DoubleType
+      case "real" => DoubleType
+      case tp if tp.startsWith("varchar") => StringType
+      case tp if tp.startsWith("char") => StringType
+      case "bitmap" => LongType
+      case "dim_dc" => BinaryType
+      case "boolean" => BooleanType
+      case "extendedcolumn" => BinaryType
+      case "raw" => BinaryType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // scalastyle:off
+  def kylinSQLTypeToSparkType(dataTp: DataType): org.apache.spark.sql.types.DataType = {
+    dataTp.getName match {
+      case "decimal" => DecimalType(dataTp.getPrecision, dataTp.getScale)
+      case "date" => DateType
+      case "time" => DateType
+      case "timestamp" => TimestampType
+      case "datetime" => DateType
+      case "tinyint" => ByteType
+      case "smallint" => ShortType
+      case "integer" => IntegerType
+      case "int4" => IntegerType
+      case "bigint" => LongType
+      case "long8" => LongType
+      case "float" => FloatType
+      case "double" => DoubleType
+      case "real" => DoubleType
+      case tp if tp.startsWith("varchar") => StringType
+      case tp if tp.startsWith("char") => StringType
+      case "bitmap" => LongType
+      case "dim_dc" => LongType
+      case "boolean" => BooleanType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // scalastyle:off
+  def convertSqlTypeNameToSparkType(sqlTypeName: SqlTypeName): String = {
+    sqlTypeName match {
+      case SqlTypeName.DECIMAL => "decimal"
+      case SqlTypeName.CHAR => "string"
+      case SqlTypeName.VARCHAR => "string"
+      case SqlTypeName.INTEGER => "int"
+      case SqlTypeName.TINYINT => "byte"
+      case SqlTypeName.SMALLINT => "short"
+      case SqlTypeName.BIGINT => "long"
+      case SqlTypeName.FLOAT => "float"
+      case SqlTypeName.DOUBLE => "double"
+      case SqlTypeName.DATE => "date"
+      case SqlTypeName.TIMESTAMP => "timestamp"
+      case SqlTypeName.BOOLEAN => "boolean"
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // scalastyle:off
+  def kylinCubeDataTypeToSparkType(dataTp: DataType): org.apache.spark.sql.types.DataType = {
+    dataTp.getName match {
+      case "decimal" => DecimalType(dataTp.getPrecision, dataTp.getScale)
+      case "date" => DateType
+      case "time" => DateType
+      case "timestamp" => TimestampType
+      case "datetime" => DateType
+      case "tinyint" => ByteType
+      case "smallint" => ShortType
+      case "integer" => IntegerType
+      case "int4" => IntegerType
+      case "bigint" => LongType
+      case "long8" => LongType
+      case "float" => FloatType
+      case "double" => DoubleType
+      case "real" => DoubleType
+      case tp if tp.startsWith("varchar") => StringType
+      case tp if tp.startsWith("char") => StringType
+      case "bitmap" => LongType
+      case "dim_dc" => LongType
+      case "boolean" => BooleanType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // for reader
+  // scalastyle:off
+  def kylinDimensionDataTypeToSparkType(dataTp: String): org.apache.spark.sql.types.DataType = {
+    dataTp match {
+      case "string" => StringType
+      case "date" => LongType
+      case "time" => LongType
+      case "timestamp" => LongType
+      case "datetime" => LongType
+      case "tinyint" => ByteType
+      case "smallint" => ShortType
+      case "integer" => IntegerType
+      case "int4" => IntegerType
+      case "bigint" => LongType
+      case "long8" => LongType
+      case "float" => FloatType
+      case "double" => DoubleType
+      case "real" => DoubleType
+      case tp if tp.startsWith("varchar") => StringType
+      case tp if tp.startsWith("char") => StringType
+      case "bitmap" => LongType
+      case "dim_dc" => LongType
+      case "decimal" => DecimalType(19, 4)
+      case tp if tp.startsWith("decimal") && tp.contains("(") => {
+        try {
+          val precisionAndScale = tp.replace("decimal", "").replace("(", "").replace(")", "").split(",")
+          DataTypes.createDecimalType(precisionAndScale(0).toInt, precisionAndScale(1).toInt)
+        } catch {
+          case e: Exception =>
+            throw new IllegalArgumentException(s"Unsupported data type : $tp", e)
+        }
+      }
+      case "boolean" => BooleanType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // scalastyle:off
+  def kylinRawTableSQLTypeToSparkType(dataTp: DataType): org.apache.spark.sql.types.DataType = {
+    dataTp.getName match {
+      case "decimal" => DecimalType(dataTp.getPrecision, dataTp.getScale)
+      case "date" => DateType
+      case "time" => DateType
+      case "timestamp" => TimestampType
+      case "datetime" => DateType
+      case "tinyint" => ByteType
+      case "smallint" => ShortType
+      case "integer" => IntegerType
+      case "int4" => IntegerType
+      case "bigint" => LongType
+      case "long8" => LongType
+      case "float" => FloatType
+      case "double" => DoubleType
+      case "real" => DoubleType
+      case tp if tp.startsWith("char") => StringType
+      case tp if tp.startsWith("varchar") => StringType
+      case "bitmap" => LongType
+      case "dim_dc" => LongType
+      case "boolean" => BooleanType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+
+  def convertSqlTypeToSparkType(dt: RelDataType): org.apache.spark.sql.types.DataType = {
+    dt.getSqlTypeName match {
+      case SqlTypeName.DECIMAL => DecimalType(dt.getPrecision, dt.getScale)
+      case SqlTypeName.CHAR => StringType
+      case SqlTypeName.VARCHAR => StringType
+      case SqlTypeName.INTEGER => IntegerType
+      case SqlTypeName.TINYINT => ByteType
+      case SqlTypeName.SMALLINT => ShortType
+      case SqlTypeName.BIGINT => LongType
+      case SqlTypeName.FLOAT => FloatType
+      case SqlTypeName.DOUBLE => DoubleType
+      case SqlTypeName.DATE => DateType
+      case SqlTypeName.TIMESTAMP => TimestampType
+      case SqlTypeName.BOOLEAN => BooleanType
+      case noSupport => throw new IllegalArgumentException(s"Unsupported data type: $noSupport")
+    }
+  }
+
+  // scalastyle:off
+  def convertStringToValue(s: Any, rowType: RelDataType, toCalcite: Boolean): Any = {
+    val sqlTypeName = rowType.getSqlTypeName
+    if (s == null) {
+      null
+    } else if (s.toString.isEmpty) {
+      sqlTypeName match {
+        case SqlTypeName.DECIMAL => new java.math.BigDecimal(0)
+        case SqlTypeName.CHAR => s.toString
+        case SqlTypeName.VARCHAR => s.toString
+        case SqlTypeName.INTEGER => 0
+        case SqlTypeName.TINYINT => 0.toByte
+        case SqlTypeName.SMALLINT => 0.toShort
+        case SqlTypeName.BIGINT => 0L
+        case SqlTypeName.FLOAT => 0f
+        case SqlTypeName.DOUBLE => 0d
+        case SqlTypeName.DATE => 0
+        case SqlTypeName.TIMESTAMP => 0L
+        case SqlTypeName.TIME => 0L
+        case SqlTypeName.BOOLEAN => null;
+        case null => null
+        case _ => null
+      }
+    } else {
+      try {
+        val a: Any = sqlTypeName match {
+          case SqlTypeName.DECIMAL =>
+            if (s.isInstanceOf[java.lang.Double] || s
+              .isInstanceOf[java.lang.Float] || s.toString.contains(".")) {
+              new java.math.BigDecimal(s.toString)
+                .setScale(rowType.getScale, BigDecimal.ROUND_HALF_EVEN)
+            } else {
+              new java.math.BigDecimal(s.toString)
+            }
+          case SqlTypeName.CHAR => s.toString
+          case SqlTypeName.VARCHAR => s.toString
+          case SqlTypeName.INTEGER => s.toString.toInt
+          case SqlTypeName.TINYINT => s.toString.toByte
+          case SqlTypeName.SMALLINT => s.toString.toShort
+          case SqlTypeName.BIGINT => s.toString.toLong
+          case SqlTypeName.FLOAT => java.lang.Float.parseFloat(s.toString)
+          case SqlTypeName.DOUBLE => java.lang.Double.parseDouble(s.toString)
+          case SqlTypeName.DATE => {
+            // the time here is timezone-aware
+            val string = s.toString
+            if (string.contains("-")) {
+              val time = DateFormat.stringToDate(string).getTime
+              if (toCalcite) {
+                (time / (3600 * 24 * 1000)).toInt
+              } else {
+                // ms to s
+                time / 1000
+              }
+            } else {
+              // should not normally reach here
+              if (toCalcite) {
+                (toCalciteTimestamp(DateFormat.stringToMillis(string)) / (3600 * 24 * 1000)).toInt
+              } else {
+                DateFormat.stringToMillis(string)
+              }
+            }
+          }
+          case SqlTypeName.TIMESTAMP | SqlTypeName.TIME => {
+            val ts = s.asInstanceOf[Timestamp].getTime
+            if (toCalcite) {
+              ts
+            } else {
+              // ms to s
+              ts / 1000
+            }
+          }
+          case SqlTypeName.BOOLEAN => s;
+          case _ => s.toString
+        }
+        a
+      } catch {
+        case th: Throwable =>
+          logError(s"Error for convert value : $s , class: ${s.getClass}", th)
+          safetyConvertStringToValue(s, rowType, toCalcite)
+      }
+    }
+  }
+
+  def safetyConvertStringToValue(s: Any, rowType: RelDataType, toCalcite: Boolean): Any = {
+    try {
+      rowType.getSqlTypeName match {
+        case SqlTypeName.DECIMAL =>
+          if (s.isInstanceOf[java.lang.Double] || s
+            .isInstanceOf[java.lang.Float] || s.toString.contains(".")) {
+            new java.math.BigDecimal(s.toString)
+              .setScale(rowType.getScale, BigDecimal.ROUND_HALF_EVEN)
+          } else {
+            new java.math.BigDecimal(s.toString)
+          }
+        case SqlTypeName.CHAR => s.toString
+        case SqlTypeName.VARCHAR => s.toString
+        case SqlTypeName.INTEGER => s.toString.toDouble.toInt
+        case SqlTypeName.TINYINT => s.toString.toDouble.toByte
+        case SqlTypeName.SMALLINT => s.toString.toDouble.toShort
+        case SqlTypeName.BIGINT => s.toString.toDouble.toLong
+        case SqlTypeName.FLOAT => java.lang.Float.parseFloat(s.toString)
+        case SqlTypeName.DOUBLE => java.lang.Double.parseDouble(s.toString)
+        case SqlTypeName.DATE => {
+          // the time here is timezone-aware
+          val string = s.toString
+          if (string.contains("-")) {
+            val time = DateFormat.stringToDate(string).getTime
+            if (toCalcite) {
+              (time / (3600 * 24 * 1000)).toInt
+            } else {
+              // ms to s
+              time / 1000
+            }
+          } else {
+            // should not normally reach here
+            if (toCalcite) {
+              (toCalciteTimestamp(DateFormat.stringToMillis(string)) / (3600 * 24 * 1000)).toInt
+            } else {
+              DateFormat.stringToMillis(string)
+            }
+          }
+        }
+        case SqlTypeName.TIMESTAMP | SqlTypeName.TIME => {
+          val ts = s.asInstanceOf[Timestamp].getTime
+          if (toCalcite) {
+            ts
+          } else {
+            // ms to s
+            ts / 1000
+          }
+        }
+        case SqlTypeName.BOOLEAN => s;
+        case _ => s.toString
+      }
+    } catch {
+      case th: Throwable =>
+        throw new RuntimeException(s"Error for convert value : $s , class: ${s.getClass}", th)
+    }
+  }
+
+  // scalastyle:off
+  def convertStringToResultValue(s: Any, rowType: String, toCalcite: Boolean): Any = {
+    if (s == null) {
+      val a: Any = rowType match {
+        case "DECIMAL" => new java.math.BigDecimal(0)
+        case "CHAR" => null
+        case "VARCHAR" => null
+        case "INTEGER" => 0
+        case "TINYINT" => 0.toByte
+        case "SMALLINT" => 0.toShort
+        case "BIGINT" => 0L
+        case "FLOAT" => 0f
+        case "DOUBLE" => 0d
+        case "DATE" => 0
+        case "TIMESTAMP" => 0L
+        case "TIME" => 0L
+        case "BOOLEAN" => null
+        case null => null
+        case _ => null
+      }
+      a
+    } else {
+      try {
+        val a: Any = rowType match {
+          case "DECIMAL" => new java.math.BigDecimal(s.toString)
+          case "CHAR" => s.toString
+          case "VARCHAR" => s.toString
+          case "INTEGER" => s.toString.toInt
+          case "TINYINT" => s.toString.toByte
+          case "SMALLINT" => s.toString.toShort
+          case "BIGINT" => s.toString.toLong
+          case "FLOAT" => java.lang.Float.parseFloat(s.toString)
+          case "DOUBLE" => java.lang.Double.parseDouble(s.toString)
+          case "DATE" => {
+            if (toCalcite)
+              DateFormat.formatToDateStr(DateFormat.stringToMillis(s.toString))
+            else
+              DateFormat.stringToMillis(s.toString)
+          }
+          case "TIMESTAMP" | "TIME" =>
+            DateFormat.formatToTimeStr(s.asInstanceOf[Timestamp].getTime)
+          case "BOOLEAN" => s
+          case _ => s.toString
+        }
+        a
+      } catch {
+        case th: Throwable =>
+          logError(s"Error for convert value : $s , class: ${s.getClass}", th)
+          throw th
+      }
+    }
+  }
+
+  def convertRowToRow(
+                       rows: Iterator[Row],
+                       typeMap: Map[Int, String],
+                       separator: String): Iterator[String] = {
+    rows.map {
+      row =>
+        var rowIndex = 0
+        row.toSeq
+          .map {
+            cell => {
+              val rType = typeMap.apply(rowIndex)
+              val value =
+                SparderTypeUtil
+                  .convertStringToResultValue(cell, rType, toCalcite = true)
+
+              rowIndex = rowIndex + 1
+              if (value == null) {
+                ""
+              } else {
+                value
+              }
+            }
+          }
+          .mkString(separator)
+    }
+
+  }
+
+  // ms to second
+  def toSparkTimestamp(calciteTimestamp: Long): java.lang.Long = {
+    calciteTimestamp / 1000
+  }
+
+  // ms to microseconds; Spark expects microseconds
+  def toSparkMicrosecond(calciteTimestamp: Long): java.lang.Long = {
+    calciteTimestamp * 1000
+  }
+
+  // ms to day
+  def toSparkDate(calciteTimestamp: Long): java.lang.Integer = {
+    (calciteTimestamp / 1000 / 3600 / 24).toInt
+  }
+
+  def toCalciteTimestamp(sparkTimestamp: Long): Long = {
+    sparkTimestamp * 1000
+  }
+
+}
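
The small helpers at the end of SparderTypeUtil translate between Calcite's epoch-based
units (milliseconds for timestamps, days for DATE values) and the units Spark expects.
A worked example, assuming the conversions behave exactly as written above:

    import org.apache.spark.sql.util.SparderTypeUtil

    // 1970-01-04 00:00:05 UTC expressed in epoch milliseconds
    val epochMs = 3L * 24 * 3600 * 1000 + 5000

    SparderTypeUtil.toSparkDate(epochMs)         // 3            (whole days since epoch)
    SparderTypeUtil.toSparkMicrosecond(epochMs)  // 259205000000 (microseconds)
    SparderTypeUtil.toSparkTimestamp(epochMs)    // 259205       (seconds)
    SparderTypeUtil.toCalciteTimestamp(259205L)  // 259205000    (back to milliseconds)
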
diff --git a/engine-spark/src/main/java/org/apache/spark/util/KylinReflectUtils.scala b/engine-spark/src/main/java/org/apache/spark/util/KylinReflectUtils.scala
new file mode 100644
index 0000000..a139273
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/util/KylinReflectUtils.scala
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.util
+
+import java.lang.reflect.Field
+
+import org.apache.spark.sql.internal.StaticSQLConf.CATALOG_IMPLEMENTATION
+import org.apache.spark.{SPARK_VERSION, SparkContext}
+
+import scala.reflect.runtime._
+import scala.reflect.runtime.universe._
+
+object KylinReflectUtils {
+  private val rm = universe.runtimeMirror(getClass.getClassLoader)
+
+
+  def getSessionState(sparkContext: SparkContext, kylinSession: Object): Any = {
+    if (SPARK_VERSION.startsWith("2.2")) {
+      var className: String =
+        "org.apache.spark.sql.hive.KylinHiveSessionStateBuilder"
+      if (!"hive".equals(sparkContext.getConf
+        .get(CATALOG_IMPLEMENTATION.key, "in-memory"))) {
+        className = "org.apache.spark.sql.hive.KylinSessionStateBuilder"
+      }
+      val tuple = createObject(className, kylinSession, None)
+      val method = tuple._2.getMethod("build")
+      method.invoke(tuple._1)
+    } else {
+      throw new UnsupportedOperationException("Spark version not supported")
+    }
+  }
+
+  /**
+    * Returns a field's value from an object through reflection.
+    *
+    * @param name name of the field being retrieved
+    * @param obj  object from which the field is retrieved
+    * @tparam T   type of the object
+    * @return the field value, or null if no field with that name exists
+    */
+  def getField[T: TypeTag : reflect.ClassTag](name: String, obj: T): Any = {
+    val im = rm.reflect(obj)
+
+    im.symbol.typeSignature.members.find(_.name.toString.equals(name))
+      .map(l => im.reflectField(l.asTerm).get).getOrElse(null)
+  }
+
+
+  def createObject(className: String, conArgs: Object*): (Any, Class[_]) = {
+    val clazz = Utils.classForName(className)
+    val ctor = clazz.getConstructors.head
+    ctor.setAccessible(true)
+    (ctor.newInstance(conArgs: _*), clazz)
+  }
+
+  def getObjectField(clazz: Class[_], fieldName: String): Field = {
+    val field = clazz.getDeclaredField(fieldName)
+    field.setAccessible(true)
+    field
+
+  }
+}
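
KylinReflectUtils is a thin reflection helper: createObject instantiates a class through
its first declared constructor, and getObjectField opens up a declared field regardless of
its visibility. A small sketch of the field helper against a throwaway class (Holder is a
made-up class for illustration):

    import org.apache.spark.util.KylinReflectUtils

    class Holder { private val secret: String = "42" }

    // getDeclaredField plus setAccessible(true), wrapped by getObjectField
    val field = KylinReflectUtils.getObjectField(classOf[Holder], "secret")
    println(field.get(new Holder))   // 42
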
diff --git a/engine-spark/src/main/java/org/apache/spark/util/XmlUtils.scala b/engine-spark/src/main/java/org/apache/spark/util/XmlUtils.scala
new file mode 100644
index 0000000..6d3f0cd
--- /dev/null
+++ b/engine-spark/src/main/java/org/apache/spark/util/XmlUtils.scala
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.util
+
+import java.io.{BufferedInputStream, File, FileInputStream, InputStream}
+import java.util.Properties
+
+import javax.xml.parsers.{DocumentBuilder, DocumentBuilderFactory}
+import org.apache.hadoop.util.StringInterner
+import org.slf4j.LoggerFactory
+import org.w3c.dom.{Document, Element, Text}
+
+import scala.collection.immutable.Range
+
+object XmlUtils {
+
+  private val logger = LoggerFactory.getLogger(XmlUtils.getClass)
+
+  def loadProp(path: String): Properties = {
+    val docBuilderFactory = DocumentBuilderFactory.newInstance
+    docBuilderFactory.setIgnoringComments(true)
+
+    //  allow includes in the xml file
+    docBuilderFactory.setNamespaceAware(true)
+    try docBuilderFactory.setXIncludeAware(true)
+    catch {
+      case e: UnsupportedOperationException =>
+    }
+    val builder = docBuilderFactory.newDocumentBuilder
+    var doc: Document = null
+    var root: Element = null
+    val properties = new Properties()
+    val file = new File(path).getAbsoluteFile
+    if (file.exists) {
+      doc = parse(builder, new BufferedInputStream(new FileInputStream(file)), path)
+    } else {
+      throw new RuntimeException("File not found in path: " + path)
+    }
+    root = doc.getDocumentElement
+    if (!("configuration" == root.getTagName)) {
+      logger.error("bad conf file: top-level element not <configuration>")
+    }
+
+    val props = root.getChildNodes
+    for (i <- Range(0, props.getLength)) {
+      val propNode = props.item(i)
+
+      propNode match {
+        case prop: Element =>
+          if ("configuration".equals(prop.getTagName())) {}
+          if (!"property".equals(prop.getTagName())) {
+            logger.warn("bad conf file: element not <property>")
+          }
+
+          val fields = prop.getChildNodes
+          var attr: String = null
+          var value: String = null
+          var finalParameter: Boolean = false
+          val source: java.util.LinkedList[String] =
+            new java.util.LinkedList[String]
+          for (j <- Range(0, fields.getLength)) {
+            val fieldNode = fields.item(j)
+            fieldNode match {
+              case field: Element =>
+                if ("name" == field.getTagName && field.hasChildNodes) {
+                  attr =
+                    StringInterner.weakIntern(field.getFirstChild.asInstanceOf[Text].getData.trim)
+                }
+                if ("value" == field.getTagName && field.hasChildNodes) {
+                  value = StringInterner.weakIntern(field.getFirstChild.asInstanceOf[Text].getData)
+                }
+              case _ =>
+            }
+            if (attr != null && value != null) {
+              properties.setProperty(attr, value)
+            }
+          }
+        case _ =>
+      }
+
+    }
+    properties
+  }
+
+  def parse(builder: DocumentBuilder, is: InputStream, systemId: String): Document = {
+    if (is == null) {
+      return null
+    }
+    try {
+      builder.parse(is, systemId)
+    } finally {
+      is.close()
+    }
+  }
+}
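
XmlUtils.loadProp reads a Hadoop-style *-site.xml into a flat java.util.Properties, keeping
the <name>/<value> pairs it finds under <configuration>. A quick round-trip sketch against
a temporary file (the property name is illustrative):

    import java.nio.file.Files
    import org.apache.spark.util.XmlUtils

    val xml =
      """<?xml version="1.0"?>
        |<configuration>
        |  <property>
        |    <name>dfs.replication</name>
        |    <value>2</value>
        |  </property>
        |</configuration>
        |""".stripMargin

    val path = Files.createTempFile("demo-site", ".xml")
    Files.write(path, xml.getBytes("UTF-8"))

    println(XmlUtils.loadProp(path.toString).getProperty("dfs.replication"))   // 2
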
diff --git a/examples/test_case_data/webapps/META-INF/context.xml b/examples/test_case_data/webapps/META-INF/context.xml
new file mode 100644
index 0000000..0ad90dc
--- /dev/null
+++ b/examples/test_case_data/webapps/META-INF/context.xml
@@ -0,0 +1,38 @@
+<?xml version='1.0' encoding='utf-8'?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~  
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~  
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<!-- The contents of this file will be loaded for each web application -->
+<Context allowLinking="true">
+
+    <!-- Default set of monitored resources -->
+    <WatchedResource>WEB-INF/web.xml</WatchedResource>
+
+    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
+    <!--
+    <Manager pathname="" />
+    -->
+
+    <!-- Uncomment this to enable Comet connection tacking (provides events
+         on session expiration as well as webapp lifecycle) -->
+    <!--
+    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
+    -->
+
+    <Loader loaderClass="org.apache.kylin.ext.DebugTomcatClassLoader"/>
+
+</Context>
diff --git a/kylin-it/jacoco-it.exec b/kylin-it/jacoco-it.exec
new file mode 100644
index 0000000..5af2bbf
Binary files /dev/null and b/kylin-it/jacoco-it.exec differ
diff --git a/kylin-it/pom.xml b/kylin-it/pom.xml
index 18bb1d9..36cbc33 100644
--- a/kylin-it/pom.xml
+++ b/kylin-it/pom.xml
@@ -96,6 +96,10 @@
             <groupId>org.apache.kylin</groupId>
             <artifactId>kylin-query</artifactId>
         </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-test</artifactId>
+        </dependency>
 
         <!-- Env & Test -->
 
@@ -338,6 +342,27 @@
         </dependency>
     </dependencies>
 
+    <profiles>
+        <profile>
+            <id>hbaseStorage</id>
+            <properties>
+                <storageType>2</storageType>
+                <excludeTest>**/ITCombination2Test.java,**/ITFailfastQuery2Test.java,**/ITJDBCDriver2Test.java</excludeTest>
+            </properties>
+            <activation>
+                <activeByDefault>true</activeByDefault>
+            </activation>
+        </profile>
+
+        <profile>
+            <id>parquetStorage</id>
+            <properties>
+                <storageType>4</storageType>
+                <excludeTest>**/ITCombinationTest.java,**/ITFailfastQueryTest.java,**/ITJDBCDriverTest.java,**/ITAclTableMigrationToolTest.java</excludeTest>
+            </properties>
+        </profile>
+    </profiles>
+
     <build>
         <plugins>
             <plugin>
@@ -361,123 +386,114 @@
                     </execution>
                 </executions>
             </plugin>
-        </plugins>
-    </build>
 
-    <profiles>
-        <profile>
-            <id>sandbox</id>
-            <activation>
-                <activeByDefault>true</activeByDefault>
-            </activation>
-            <build>
-                <plugins>
-                    <plugin>
-                        <groupId>org.apache.maven.plugins</groupId>
-                        <artifactId>maven-failsafe-plugin</artifactId>
-                        <executions>
-                            <execution>
-                                <id>integration-tests</id>
-                                <goals>
-                                    <goal>integration-test</goal>
-                                </goals>
-                            </execution>
-                            <execution>
-                                <id>verify</id>
-                                <goals>
-                                    <goal>verify</goal>
-                                </goals>
-                            </execution>
-                        </executions>
+            <!--CI plugins-->
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-failsafe-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>integration-tests</id>
+                        <goals>
+                            <goal>integration-test</goal>
+                        </goals>
+                    </execution>
+                    <execution>
+                        <id>verify</id>
+                        <goals>
+                            <goal>verify</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <excludes>
+                        <exclude>**/*$*</exclude>
+                        <exclude>${excludeTest}</exclude>
+                    </excludes>
+                    <systemProperties>
+                        <property>
+                            <name>log4j.configuration</name>
+                            <value>
+                                file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
+                            </value>
+                        </property>
+                    </systemProperties>
+                    <argLine>-Xms1G -Xmx2G -XX:PermSize=128M -XX:MaxPermSize=512M
+                        -Dkylin.server.cluster-servers=localhost:7070
+                        -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it.exec
+                    </argLine>
+                </configuration>
+            </plugin>
+
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>exec-maven-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>build_cube_with_engine</id>
+                        <goals>
+                            <goal>exec</goal>
+                        </goals>
+                        <phase>pre-integration-test</phase>
                         <configuration>
-                            <excludes>
-                                <exclude>**/*$*</exclude>
-                            </excludes>
-                            <systemProperties>
-                                <property>
-                                    <name>log4j.configuration</name>
-                                    <value>
-                                        file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
-                                    </value>
-                                </property>
-                            </systemProperties>
-                            <argLine>-Xms1G -Xmx2G -XX:PermSize=128M -XX:MaxPermSize=512M
-                                -Dkylin.server.cluster-servers=localhost:7070
-                                -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it.exec
-                            </argLine>
+                            <skip>${skipTests}</skip>
+                            <classpathScope>test</classpathScope>
+                            <executable>java</executable>
+                            <arguments>
+                                <argument>-Dhdp.version=${hdp.version}</argument>
+                                <argument>-DfastBuildMode=${fastBuildMode}</argument>
+                                <argument>
+                                    -DbuildCubeUsingProvidedData=${buildCubeUsingProvidedData}
+                                </argument>
+                                <argument>-DengineType=${engineType}</argument>
+                                <argument>-DstorageType=${storageType}</argument>
+                                <argument>
+                                    -Dlog4j.configuration=file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
+                                </argument>
+                                <argument>
+                                    -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it-engine.exec
+                                </argument>
+                                <argument>-classpath</argument>
+                                <classpath />
+                                <argument>org.apache.kylin.provision.BuildCubeWithEngine
+                                </argument>
+                            </arguments>
+                            <workingDirectory>${project.basedir}</workingDirectory>
                         </configuration>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.codehaus.mojo</groupId>
-                        <artifactId>exec-maven-plugin</artifactId>
-                        <executions>
-                            <execution>
-                                <id>build_cube_with_engine</id>
-                                <goals>
-                                    <goal>exec</goal>
-                                </goals>
-                                <phase>pre-integration-test</phase>
-                                <configuration>
-                                    <skip>${skipTests}</skip>
-                                    <classpathScope>test</classpathScope>
-                                    <executable>java</executable>
-                                    <arguments>
-                                        <argument>-Dhdp.version=${hdp.version}</argument>
-                                        <argument>-DfastBuildMode=${fastBuildMode}</argument>
-                                        <argument>
-                                            -DbuildCubeUsingProvidedData=${buildCubeUsingProvidedData}
-                                        </argument>
-                                        <argument>-DengineType=${engineType}</argument>
-                                        <argument>
-                                            -Dlog4j.configuration=file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
-                                        </argument>
-                                        <argument>
-                                            -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it-engine.exec
-                                        </argument>
-                                        <argument>-classpath</argument>
-                                        <classpath />
-                                        <argument>org.apache.kylin.provision.BuildCubeWithEngine
-                                        </argument>
-                                    </arguments>
-                                    <workingDirectory>${project.basedir}</workingDirectory>
-                                </configuration>
-                            </execution>
-                            <execution>
-                                <id>build_cube_with_stream</id>
-                                <goals>
-                                    <goal>exec</goal>
-                                </goals>
-                                <phase>pre-integration-test</phase>
-                                <configuration>
-                                    <skip>${skipTests}</skip>
-                                    <classpathScope>test</classpathScope>
-                                    <executable>java</executable>
-                                    <arguments>
-                                        <argument>-Dhdp.version=${hdp.version}</argument>
-                                        <argument>-DfastBuildMode=${fastBuildMode}</argument>
-                                        <argument>
-                                            -DbuildCubeUsingProvidedData=${buildCubeUsingProvidedData}
-                                        </argument>
-                                        <argument>
-                                            -Dlog4j.configuration=file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
-                                        </argument>
-                                        <argument>
-                                            -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it-stream.exec
-                                        </argument>
-                                        <argument>-classpath</argument>
-                                        <classpath />
-                                        <argument>org.apache.kylin.provision.BuildCubeWithStream
-                                        </argument>
-                                    </arguments>
-                                    <workingDirectory>${project.basedir}</workingDirectory>
-                                </configuration>
-                            </execution>
-                        </executions>
-                    </plugin>
-
-                </plugins>
-            </build>
-        </profile>
-    </profiles>
+                    </execution>
+                    <execution>
+                        <id>build_cube_with_stream</id>
+                        <goals>
+                            <goal>exec</goal>
+                        </goals>
+                        <phase>pre-integration-test</phase>
+                        <configuration>
+                            <skip>${skipTests}</skip>
+                            <classpathScope>test</classpathScope>
+                            <executable>java</executable>
+                            <arguments>
+                                <argument>-Dhdp.version=${hdp.version}</argument>
+                                <argument>-DfastBuildMode=${fastBuildMode}</argument>
+                                <argument>
+                                    -DbuildCubeUsingProvidedData=${buildCubeUsingProvidedData}
+                                </argument>
+                                <argument>
+                                    -Dlog4j.configuration=file:${project.basedir}/..//build/conf/kylin-tools-log4j.properties
+                                </argument>
+                                <argument>
+                                    -javaagent:${project.basedir}/..//dev-support/jacocoagent.jar=includes=org.apache.kylin.*,output=file,destfile=jacoco-it-stream.exec
+                                </argument>
+                                <argument>-classpath</argument>
+                                <classpath />
+                                <argument>org.apache.kylin.provision.BuildCubeWithStream
+                                </argument>
+                            </arguments>
+                            <workingDirectory>${project.basedir}</workingDirectory>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
 
+        </plugins>
+    </build>
 </project>
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/kylin-it/src/test/java/org/apache/kylin/jdbc/ITJDBCDriver2Test.java
similarity index 51%
copy from storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
copy to kylin-it/src/test/java/org/apache/kylin/jdbc/ITJDBCDriver2Test.java
index 6a3ad59..558e046 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/kylin-it/src/test/java/org/apache/kylin/jdbc/ITJDBCDriver2Test.java
@@ -14,30 +14,16 @@
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
- */
-
-package org.apache.kylin.storage.parquet.cube;
+*/
 
-import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.metadata.realization.SQLDigest;
-import org.apache.kylin.metadata.tuple.ITupleIterator;
-import org.apache.kylin.metadata.tuple.TupleInfo;
-import org.apache.kylin.storage.StorageContext;
-import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
+package org.apache.kylin.jdbc;
 
-public class CubeStorageQuery extends GTCubeStorageQueryBase {
+import org.apache.kylin.junit.SparkTestRunner;
+import org.junit.runner.RunWith;
 
-    public CubeStorageQuery(CubeInstance cube) {
-        super(cube);
-    }
-
-    @Override
-    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
-        return super.search(context, sqlDigest, returnTupleInfo);
-    }
+/**
+ */
+@RunWith(SparkTestRunner.class)
+public class ITJDBCDriver2Test extends ITJDBCDriverTest {
 
-    @Override
-    protected String getGTStorage() {
-        return null;
-    }
 }
diff --git a/kylin-it/src/test/java/org/apache/kylin/provision/BuildCubeWithEngine.java b/kylin-it/src/test/java/org/apache/kylin/provision/BuildCubeWithEngine.java
index ec5bc35..9bd741c 100644
--- a/kylin-it/src/test/java/org/apache/kylin/provision/BuildCubeWithEngine.java
+++ b/kylin-it/src/test/java/org/apache/kylin/provision/BuildCubeWithEngine.java
@@ -18,25 +18,9 @@
 
 package org.apache.kylin.provision;
 
-import java.io.File;
-import java.io.IOException;
-import java.lang.reflect.Method;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.Collections;
-import java.util.Comparator;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Random;
-import java.util.Set;
-import java.util.TimeZone;
-import java.util.concurrent.Callable;
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
-
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -74,9 +58,24 @@ import org.apache.kylin.storage.hbase.util.ZookeeperJobLock;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.base.Preconditions;
-import com.google.common.collect.Lists;
-import com.google.common.collect.Sets;
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.text.ParseException;
+import java.text.SimpleDateFormat;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.TimeZone;
+import java.util.concurrent.Callable;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
 
 public class BuildCubeWithEngine {
 
@@ -86,6 +85,7 @@ public class BuildCubeWithEngine {
     protected ExecutableManager jobService;
     private static boolean fastBuildMode = false;
     private static int engineType;
+    private static int storageType;
 
     private static final Logger logger = LoggerFactory.getLogger(BuildCubeWithEngine.class);
 
@@ -131,10 +131,14 @@ public class BuildCubeWithEngine {
         String specifiedEngineType = System.getProperty("engineType");
         if (StringUtils.isNotEmpty(specifiedEngineType)) {
             engineType = Integer.parseInt(specifiedEngineType);
-        } else {
-            engineType = 2;
         }
 
+        String specifiedStorageType = System.getProperty("storageType");
+        if (StringUtils.isNotEmpty(specifiedStorageType)) {
+            storageType = Integer.parseInt(specifiedStorageType);
+        }
+        logger.info("==storageType: " + specifiedStorageType);
+
         System.setProperty(KylinConfig.KYLIN_CONF, confDir);
         System.setProperty("SPARK_HOME", "/usr/local/spark"); // need manually create and put spark to this folder on Jenkins
         System.setProperty("kylin.hadoop.conf.dir", confDir);
@@ -195,6 +199,9 @@ public class BuildCubeWithEngine {
         }
 
         cubeDescManager = CubeDescManager.getInstance(kylinConfig);
+
+        // update engineType and storageType
+        updateCubeDesc("ci_inner_join_cube", "ci_left_join_cube");
     }
 
     public void after() {
@@ -353,12 +360,30 @@ public class BuildCubeWithEngine {
         return doBuildAndMergeOnCube(cubeName);
     }
 
-    @SuppressWarnings("unused")
     private void updateCubeEngineType(String cubeName) throws IOException {
-        CubeDesc cubeDesc = cubeDescManager.getCubeDesc(cubeName);
-        if (cubeDesc.getEngineType() != engineType) {
-            cubeDesc.setEngineType(engineType);
-            cubeDescManager.updateCubeDesc(cubeDesc);
+        if (engineType != 0) {
+            CubeDesc cubeDesc = cubeDescManager.getCubeDesc(cubeName);
+            if (cubeDesc.getEngineType() != engineType) {
+                cubeDesc.setEngineType(engineType);
+                cubeDescManager.updateCubeDesc(cubeDesc);
+            }
+        }
+    }
+
+    private void updateCubeStorageType(String cubeName) throws IOException {
+        if (storageType != 0) {
+            CubeDesc cubeDesc = cubeDescManager.getCubeDesc(cubeName);
+            if (cubeDesc.getStorageType() != storageType) {
+                cubeDesc.setStorageType(storageType);
+                cubeDescManager.updateCubeDesc(cubeDesc);
+            }
+        }
+    }
+
+    private void updateCubeDesc(String... cubeNames) throws IOException {
+        for (String cubeName : cubeNames) {
+            updateCubeEngineType(cubeName);
+            updateCubeStorageType(cubeName);
         }
     }
 
diff --git a/kylin-it/src/test/java/org/apache/kylin/query/ITCombination2Test.java b/kylin-it/src/test/java/org/apache/kylin/query/ITCombination2Test.java
new file mode 100644
index 0000000..f266c34
--- /dev/null
+++ b/kylin-it/src/test/java/org/apache/kylin/query/ITCombination2Test.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.kylin.query;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.kylin.junit.SparkTestRunnerWithParametersFactory;
+import org.apache.kylin.metadata.realization.RealizationType;
+import org.apache.kylin.query.routing.Candidate;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Maps;
+
+/**
+ */
+@RunWith(Parameterized.class)
+@Parameterized.UseParametersRunnerFactory(SparkTestRunnerWithParametersFactory.class)
+public class ITCombination2Test extends ITKylinQuery2Test {
+
+    private static final Logger logger = LoggerFactory.getLogger(ITCombination2Test.class);
+
+    @BeforeClass
+    public static void setUp() {
+        logger.info("setUp in ITCombination2Test");
+    }
+
+    @AfterClass
+    public static void tearDown() {
+        logger.info("tearDown in ITCombination2Test");
+        clean();
+        Candidate.restorePriorities();
+    }
+
+    /**
+     * Returns all config combinations: the first setting specifies the join type
+     * (inner or left), the second whether to force using coprocessors (on, off or
+     * unset), and the third the query engine version.
+     */
+    @Parameterized.Parameters
+    public static Collection<Object[]> configs() {
+        return Arrays.asList(new Object[][] { //
+                { "inner", "on", "v2" }, //
+                { "left", "on", "v2" }, //
+        });
+    }
+
+    public ITCombination2Test(String joinType, String coprocessorToggle, String queryEngine) throws Exception {
+        logger.info("Into combination join type: " + joinType + ", coprocessor toggle: " + coprocessorToggle + ", query engine: " + queryEngine);
+        Map<RealizationType, Integer> priorities = Maps.newHashMap();
+        priorities.put(RealizationType.HYBRID, 0);
+        priorities.put(RealizationType.CUBE, 0);
+        priorities.put(RealizationType.INVERTED_INDEX, 0);
+        Candidate.setPriorities(priorities);
+        ITKylinQuery2Test.joinType = joinType;
+        setupAll();
+    }
+}
diff --git a/kylin-it/src/test/java/org/apache/kylin/query/ITFailfastQuery2Test.java b/kylin-it/src/test/java/org/apache/kylin/query/ITFailfastQuery2Test.java
new file mode 100644
index 0000000..228a09b
--- /dev/null
+++ b/kylin-it/src/test/java/org/apache/kylin/query/ITFailfastQuery2Test.java
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+package org.apache.kylin.query;
+
+import java.util.Map;
+
+import org.apache.kylin.common.QueryContextFacade;
+import org.apache.kylin.ext.ClassLoaderUtils;
+import org.apache.kylin.junit.SparkTestRunner;
+import org.apache.kylin.metadata.realization.RealizationType;
+import org.apache.kylin.query.routing.Candidate;
+import org.apache.spark.sql.SparderEnv;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Maps;
+
+@RunWith(SparkTestRunner.class)
+public class ITFailfastQuery2Test extends ITFailfastQueryTest {
+
+    private static final Logger logger = LoggerFactory.getLogger(ITFailfastQuery2Test.class);
+
+    @BeforeClass
+    public static void setUp() throws Exception {
+        logger.info("setUp in ITFailfastQueryTest");
+        Map<RealizationType, Integer> priorities = Maps.newHashMap();
+        priorities.put(RealizationType.HYBRID, 0);
+        priorities.put(RealizationType.CUBE, 0);
+        priorities.put(RealizationType.INVERTED_INDEX, 0);
+        Candidate.setPriorities(priorities);
+        joinType = "left";
+        setupAll();
+
+        // init spark
+        ClassLoader originClassLoader = Thread.currentThread().getContextClassLoader();
+        Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader());
+        SparderEnv.init();
+        Thread.currentThread().setContextClassLoader(originClassLoader);
+    }
+
+    @After
+    public void cleanUp() {
+        QueryContextFacade.resetCurrent();
+    }
+
+    @AfterClass
+    public static void tearDown() throws Exception {
+        logger.info("tearDown in ITFailfastQuery2Test");
+        Candidate.restorePriorities();
+        clean();
+    }
+
+    @Override
+    @Test
+    public void testQueryExceedMaxScanBytes() throws Exception {
+        logger.info("testQueryExceedMaxScanBytes ignored");
+    }
+
+    @Override
+    @Test
+    public void testQueryNotExceedMaxScanBytes() throws Exception {
+        logger.info("testQueryNotExceedMaxScanBytes ignored");
+    }
+}
diff --git a/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQuery2Test.java b/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQuery2Test.java
new file mode 100644
index 0000000..942e256
--- /dev/null
+++ b/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQuery2Test.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.query;
+
+import java.sql.DriverManager;
+import java.util.Map;
+
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.util.HBaseMetadataTestCase;
+import org.apache.kylin.ext.ClassLoaderUtils;
+import org.apache.kylin.junit.SparkTestRunner;
+import org.apache.kylin.metadata.project.ProjectInstance;
+import org.apache.kylin.metadata.realization.RealizationType;
+import org.apache.kylin.query.routing.Candidate;
+import org.apache.spark.sql.SparderEnv;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+import org.junit.runner.RunWith;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Maps;
+
+@Ignore("@RunWith(SparkTestRunner.class) is contained by ITCombination2Test")
+@RunWith(SparkTestRunner.class)
+public class ITKylinQuery2Test extends ITKylinQueryTest {
+
+    private static final Logger logger = LoggerFactory.getLogger(ITKylinQuery2Test.class);
+
+    @Rule
+    public ExpectedException thrown = ExpectedException.none();
+
+    @BeforeClass
+    public static void setUp() throws Exception {
+        logger.info("setUp in ITKylinQuery2Test");
+        Map<RealizationType, Integer> priorities = Maps.newHashMap();
+        priorities.put(RealizationType.HYBRID, 0);
+        priorities.put(RealizationType.CUBE, 0);
+        priorities.put(RealizationType.INVERTED_INDEX, 0);
+        Candidate.setPriorities(priorities);
+
+        joinType = "left";
+
+        setupAll();
+    }
+
+    protected static void setupAll() throws Exception {
+        //setup env
+        HBaseMetadataTestCase.staticCreateTestMetadata();
+        config = KylinConfig.getInstanceFromEnv();
+
+        //setup cube conn
+        String project = ProjectInstance.DEFAULT_PROJECT_NAME;
+        cubeConnection = QueryConnection.getConnection(project);
+
+        //setup h2
+        h2Connection = DriverManager.getConnection("jdbc:h2:mem:db" + (h2InstanceCount++) + ";CACHE_SIZE=32072", "sa",
+                "");
+        // Load H2 Tables (inner join)
+        H2Database h2DB = new H2Database(h2Connection, config, project);
+        h2DB.loadAllTables();
+
+        // init spark
+        ClassLoader originClassLoader = Thread.currentThread().getContextClassLoader();
+        Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader());
+        SparderEnv.init();
+        Thread.currentThread().setContextClassLoader(originClassLoader);
+    }
+
+    @AfterClass
+    public static void tearDown() throws Exception {
+        logger.info("tearDown in ITKylin2QueryTest");
+        Candidate.restorePriorities();
+        clean();
+    }
+
+    @Override
+    @Test
+    public void testTimeoutQuery() throws Exception {
+        logger.info("TimeoutQuery ignored.");
+    }
+
+    @Override
+    @Test
+    public void testExpressionQuery() throws Exception {
+        logger.info("ExpressionQuery ignored.");
+    }
+
+    @Override
+    @Test
+    public void testStreamingTableQuery() throws Exception {
+        logger.info("StreamingTableQuery ignored.");
+    }
+}
+                                                  
\ No newline at end of file
diff --git a/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQueryTest.java b/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQueryTest.java
index 4fdc68e..6f8eff1 100644
--- a/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQueryTest.java
+++ b/kylin-it/src/test/java/org/apache/kylin/query/ITKylinQueryTest.java
@@ -6,15 +6,15 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * 
+ *
  *     http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
-*/
+ */
 
 package org.apache.kylin.query;
 
@@ -463,8 +463,6 @@ public class ITKylinQueryTest extends KylinTestBase {
         execAndCompQuery(getQueryFolderPrefix() + "src/test/resources/query/sql_values", null, true);
     }
 
-
-
     @Test
     public void testPlan() throws Exception {
         String originProp = System.getProperty("calcite.debug");
diff --git a/kylin-test/pom.xml b/kylin-test/pom.xml
new file mode 100644
index 0000000..ad1437f
--- /dev/null
+++ b/kylin-test/pom.xml
@@ -0,0 +1,32 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>kylin</artifactId>
+        <groupId>org.apache.kylin</groupId>
+        <version>2.6.0-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+    <name>Apache Kylin - Test</name>
+    <artifactId>kylin-test</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-tomcat-ext</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.springframework</groupId>
+            <artifactId>spring-test</artifactId>
+        </dependency>
+    </dependencies>
+
+</project>
\ No newline at end of file
diff --git a/kylin-test/src/main/java/org/apache/kylin/junit/EnvUtils.java b/kylin-test/src/main/java/org/apache/kylin/junit/EnvUtils.java
new file mode 100644
index 0000000..19a5b50
--- /dev/null
+++ b/kylin-test/src/main/java/org/apache/kylin/junit/EnvUtils.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.junit;
+
+import com.google.common.collect.Maps;
+
+import java.lang.reflect.Field;
+import java.util.Collections;
+import java.util.Map;
+
+public final class EnvUtils {
+
+    public static boolean checkEnv(String env) {
+        return System.getenv(env) != null;
+    }
+
+    public static void setNormalEnv() throws Exception {
+
+        Map<String, String> newenv = Maps.newHashMap();
+        setDefaultEnv("SPARK_HOME", "../../build/spark", newenv);
+        //setDefaultEnv("hdp.version", "2.4.0.0-169", newenv);
+        setDefaultEnv("ZIPKIN_HOSTNAME", "sandbox", newenv);
+        setDefaultEnv("ZIPKIN_SCRIBE_PORT", "9410", newenv);
+        setDefaultEnv("KAP_HDFS_WORKING_DIR", "/kylin", newenv);
+        changeEnv(newenv);
+
+    }
+
+    protected static void changeEnv(Map<String, String> newenv) throws Exception {
+        Class[] classes = Collections.class.getDeclaredClasses();
+        Map<String, String> env = System.getenv();
+        for (Class cl : classes) {
+            if ("java.util.Collections$UnmodifiableMap".equals(cl.getName())) {
+                Field field = cl.getDeclaredField("m");
+                field.setAccessible(true);
+                Object obj = field.get(env);
+                Map<String, String> map = (Map<String, String>) obj;
+                map.putAll(newenv);
+            }
+        }
+    }
+
+    private static void setDefaultEnv(String env, String defaultValue, Map<String, String> newenv) {
+        if (System.getenv(env) == null) {
+            newenv.put(env, defaultValue);
+        }
+    }
+}
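EnvUtils patches the JVM's cached environment map through reflection so Spark-backed tests can run on machines without a real Spark installation. A minimal bootstrap sketch, assuming it is called before any Spark class is loaded (the class name TestBootstrap is illustrative):

    import org.apache.kylin.junit.EnvUtils;

    public class TestBootstrap {
        public static void ensureSparkEnv() throws Exception {
            // Only patch the environment when SPARK_HOME is genuinely missing,
            // e.g. on a developer machine without a local Spark distribution.
            if (!EnvUtils.checkEnv("SPARK_HOME")) {
                EnvUtils.setNormalEnv(); // injects SPARK_HOME=../../build/spark and other defaults
            }
        }
    }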
diff --git a/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunner.java b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunner.java
new file mode 100644
index 0000000..bcc8c17
--- /dev/null
+++ b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunner.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.junit;
+
+import org.apache.kylin.ext.ItClassLoader;
+import org.junit.runner.notification.RunNotifier;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.model.InitializationError;
+
+import java.io.IOException;
+
+public class SparkTestRunner extends BlockJUnit4ClassRunner {
+
+    static public ItClassLoader customClassLoader;
+
+    public SparkTestRunner(Class<?> clazz) throws Exception {
+        super(loadFromCustomClassloader(clazz));
+    }
+
+    // Loads a class in the custom classloader
+    private static Class<?> loadFromCustomClassloader(Class<?> clazz) throws Exception {
+        if(!EnvUtils.checkEnv("SPARK_HOME")){
+            EnvUtils.setNormalEnv();
+        }
+        try {
+            // Only load once to support parallel tests
+            if (customClassLoader == null) {
+                customClassLoader = new ItClassLoader(Thread.currentThread().getContextClassLoader());
+            }
+            return Class.forName(clazz.getName(), false, customClassLoader);
+        } catch (ClassNotFoundException | IOException e) {
+            throw new InitializationError(e);
+        }
+    }
+
+    public static ItClassLoader get() throws IOException {
+        if (customClassLoader == null) {
+            customClassLoader = new ItClassLoader(Thread.currentThread().getContextClassLoader());
+        }
+        return customClassLoader;
+    }
+
+    // Runs junit tests in a separate thread using the custom class loader
+    @Override
+    public void run(final RunNotifier notifier) {
+        Runnable runnable = new Runnable() {
+            @Override
+            public void run() {
+                SparkTestRunner.super.run(notifier);
+            }
+        };
+        Thread thread = new Thread(runnable);
+        thread.setContextClassLoader(customClassLoader);
+        thread.start();
+        try {
+            thread.join();
+        } catch (InterruptedException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+}
\ No newline at end of file
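
A usage sketch for the runner, modeled on the IT classes added earlier in this patch (ITJDBCDriver2Test, ITFailfastQuery2Test); the class name MyParquetQueryIT is illustrative:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.apache.kylin.junit.SparkTestRunner;

    // SparkTestRunner loads the test class through ItClassLoader and runs it on a
    // thread whose context classloader can see the Spark/Sparder classes.
    @RunWith(SparkTestRunner.class)
    public class MyParquetQueryIT {

        @Test
        public void smokeTest() {
            // the test body executes inside the Spark-aware classloader set up by the runner
        }
    }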
diff --git a/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerRunnerWithParameters.java b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerRunnerWithParameters.java
new file mode 100644
index 0000000..3df9831
--- /dev/null
+++ b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerRunnerWithParameters.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.junit;
+
+import java.lang.annotation.Annotation;
+import java.lang.reflect.Field;
+import java.util.List;
+
+import org.junit.runner.notification.RunNotifier;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Parameterized.Parameter;
+import org.junit.runners.model.FrameworkField;
+import org.junit.runners.model.FrameworkMethod;
+import org.junit.runners.model.Statement;
+import org.junit.runners.parameterized.TestWithParameters;
+
+/**
+ * A {@link BlockJUnit4ClassRunner} with parameters support. Parameters can be
+ * injected via constructor or into annotated fields.
+ */
+public class SparkTestRunnerRunnerWithParameters extends SparkTestRunner {
+    private final Object[] parameters;
+
+    private final String name;
+
+    public SparkTestRunnerRunnerWithParameters(TestWithParameters test) throws Exception {
+        super(test.getTestClass().getJavaClass());
+        parameters = test.getParameters().toArray(new Object[test.getParameters().size()]);
+        name = test.getName();
+    }
+
+    @Override
+    public Object createTest() throws Exception {
+        if (fieldsAreAnnotated()) {
+            return createTestUsingFieldInjection();
+        } else {
+            return createTestUsingConstructorInjection();
+        }
+    }
+
+    private Object createTestUsingConstructorInjection() throws Exception {
+        return getTestClass().getOnlyConstructor().newInstance(parameters);
+    }
+
+    private Object createTestUsingFieldInjection() throws Exception {
+        List<FrameworkField> annotatedFieldsByParameter = getAnnotatedFieldsByParameter();
+        if (annotatedFieldsByParameter.size() != parameters.length) {
+            throw new Exception("Wrong number of parameters and @Parameter fields." + " @Parameter fields counted: "
+                    + annotatedFieldsByParameter.size() + ", available parameters: " + parameters.length + ".");
+        }
+        Object testClassInstance = getTestClass().getJavaClass().newInstance();
+        for (FrameworkField each : annotatedFieldsByParameter) {
+            Field field = each.getField();
+            Parameter annotation = field.getAnnotation(Parameter.class);
+            int index = annotation.value();
+            try {
+                field.set(testClassInstance, parameters[index]);
+            } catch (IllegalArgumentException iare) {
+                throw new Exception(getTestClass().getName() + ": Trying to set " + field.getName() + " with the value "
+                        + parameters[index] + " that is not the right type ("
+                        + parameters[index].getClass().getSimpleName() + " instead of "
+                        + field.getType().getSimpleName() + ").", iare);
+            }
+        }
+        return testClassInstance;
+    }
+
+    @Override
+    protected String getName() {
+        return name;
+    }
+
+    @Override
+    protected String testName(FrameworkMethod method) {
+        return method.getName() + getName();
+    }
+
+    @Override
+    protected void validateConstructor(List<Throwable> errors) {
+        validateOnlyOneConstructor(errors);
+        if (fieldsAreAnnotated()) {
+            validateZeroArgConstructor(errors);
+        }
+    }
+
+    @Override
+    protected void validateFields(List<Throwable> errors) {
+        super.validateFields(errors);
+        if (fieldsAreAnnotated()) {
+            List<FrameworkField> annotatedFieldsByParameter = getAnnotatedFieldsByParameter();
+            int[] usedIndices = new int[annotatedFieldsByParameter.size()];
+            for (FrameworkField each : annotatedFieldsByParameter) {
+                int index = each.getField().getAnnotation(Parameter.class).value();
+                if (index < 0 || index > annotatedFieldsByParameter.size() - 1) {
+                    errors.add(new Exception("Invalid @Parameter value: " + index + ". @Parameter fields counted: "
+                            + annotatedFieldsByParameter.size() + ". Please use an index between 0 and "
+                            + (annotatedFieldsByParameter.size() - 1) + "."));
+                } else {
+                    usedIndices[index]++;
+                }
+            }
+            for (int index = 0; index < usedIndices.length; index++) {
+                int numberOfUse = usedIndices[index];
+                if (numberOfUse == 0) {
+                    errors.add(new Exception("@Parameter(" + index + ") is never used."));
+                } else if (numberOfUse > 1) {
+                    errors.add(
+                            new Exception("@Parameter(" + index + ") is used more than once (" + numberOfUse + ")."));
+                }
+            }
+        }
+    }
+
+    @Override
+    protected Statement classBlock(RunNotifier notifier) {
+        return childrenInvoker(notifier);
+    }
+
+    @Override
+    protected Annotation[] getRunnerAnnotations() {
+        return new Annotation[0];
+    }
+
+    private List<FrameworkField> getAnnotatedFieldsByParameter() {
+        return getTestClass().getAnnotatedFields(Parameter.class);
+    }
+
+    private boolean fieldsAreAnnotated() {
+        return !getAnnotatedFieldsByParameter().isEmpty();
+    }
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerWithParametersFactory.java
similarity index 51%
copy from storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
copy to kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerWithParametersFactory.java
index 6a3ad59..68ec888 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/kylin-test/src/main/java/org/apache/kylin/junit/SparkTestRunnerWithParametersFactory.java
@@ -16,28 +16,19 @@
  * limitations under the License.
  */
 
-package org.apache.kylin.storage.parquet.cube;
+package org.apache.kylin.junit;
 
-import org.apache.kylin.cube.CubeInstance;
-import org.apache.kylin.metadata.realization.SQLDigest;
-import org.apache.kylin.metadata.tuple.ITupleIterator;
-import org.apache.kylin.metadata.tuple.TupleInfo;
-import org.apache.kylin.storage.StorageContext;
-import org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase;
+import org.junit.runner.Runner;
+import org.junit.runners.model.InitializationError;
+import org.junit.runners.parameterized.ParametersRunnerFactory;
+import org.junit.runners.parameterized.TestWithParameters;
 
-public class CubeStorageQuery extends GTCubeStorageQueryBase {
-
-    public CubeStorageQuery(CubeInstance cube) {
-        super(cube);
-    }
-
-    @Override
-    public ITupleIterator search(StorageContext context, SQLDigest sqlDigest, TupleInfo returnTupleInfo) {
-        return super.search(context, sqlDigest, returnTupleInfo);
-    }
-
-    @Override
-    protected String getGTStorage() {
-        return null;
+public class SparkTestRunnerWithParametersFactory implements ParametersRunnerFactory {
+    public Runner createRunnerForTestWithParameters(TestWithParameters test) throws InitializationError {
+        try {
+            return new SparkTestRunnerRunnerWithParameters(test);
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
     }
-}
+}
\ No newline at end of file
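
Wiring sketch for parameterized tests, following ITCombination2Test from this patch (class name and parameter values are illustrative):

    import java.util.Arrays;
    import java.util.Collection;

    import org.apache.kylin.junit.SparkTestRunnerWithParametersFactory;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;

    @RunWith(Parameterized.class)
    @Parameterized.UseParametersRunnerFactory(SparkTestRunnerWithParametersFactory.class)
    public class MyCombinationIT {

        @Parameterized.Parameters
        public static Collection<Object[]> configs() {
            return Arrays.asList(new Object[][] { { "inner" }, { "left" } });
        }

        private final String joinType;

        public MyCombinationIT(String joinType) {
            this.joinType = joinType;
        }

        @Test
        public void runsOncePerJoinType() {
            // each parameter set is executed through SparkTestRunnerRunnerWithParameters
        }
    }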
diff --git a/pom.xml b/pom.xml
index a2bf4ec..4ddccd4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -61,6 +61,7 @@
 
     <!-- Spark versions -->
     <spark.version>2.3.2</spark.version>
+
     <kryo.version>4.0.0</kryo.version>
 
     <!-- mysql versions -->
@@ -297,6 +298,11 @@
       </dependency>
       <dependency>
         <groupId>org.apache.kylin</groupId>
+        <artifactId>kylin-storage-parquet</artifactId>
+        <version>${project.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.kylin</groupId>
         <artifactId>kylin-query</artifactId>
         <version>${project.version}</version>
       </dependency>
@@ -337,6 +343,16 @@
       </dependency>
       <dependency>
         <groupId>org.apache.kylin</groupId>
+        <artifactId>kylin-tomcat-ext</artifactId>
+        <version>${project.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.kylin</groupId>
+        <artifactId>kylin-test</artifactId>
+        <version>${project.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.kylin</groupId>
         <artifactId>kylin-core-common</artifactId>
         <version>${project.version}</version>
         <type>test-jar</type>
@@ -1283,6 +1299,7 @@
     <module>source-jdbc</module>
     <module>source-kafka</module>
     <module>storage-hbase</module>
+    <module>storage-parquet</module>
     <module>query</module>
     <module>server-base</module>
     <module>server</module>
@@ -1297,6 +1314,7 @@
     <module>metrics-reporter-kafka</module>
     <module>cache</module>
     <module>datasource-sdk</module>
+    <module>kylin-test</module>
   </modules>
 
   <reporting>
diff --git a/server-base/src/main/java/org/apache/kylin/rest/init/InitialTaskManager.java b/server-base/src/main/java/org/apache/kylin/rest/init/InitialTaskManager.java
index 876ae08..a9fe837 100644
--- a/server-base/src/main/java/org/apache/kylin/rest/init/InitialTaskManager.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/init/InitialTaskManager.java
@@ -21,8 +21,10 @@ package org.apache.kylin.rest.init;
 import org.apache.commons.lang.StringUtils;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.StringUtil;
+import org.apache.kylin.ext.ClassLoaderUtils;
 import org.apache.kylin.rest.metrics.QueryMetrics2Facade;
 import org.apache.kylin.rest.metrics.QueryMetricsFacade;
+import org.apache.spark.sql.SparderEnv;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.InitializingBean;
@@ -38,6 +40,8 @@ public class InitialTaskManager implements InitializingBean {
     public void afterPropertiesSet() throws Exception {
         logger.info("Kylin service is starting.....");
 
+        checkAndInitSpark();
+
         runInitialTasks();
     }
 
@@ -63,4 +67,26 @@ public class InitialTaskManager implements InitializingBean {
             logger.info("All initial tasks finished.");
         }
     }
+
+    private void checkAndInitSpark() {
+        boolean hasSparkJar = true;
+        // In unit tests the Spark jar is not on the classpath, so probe for it first.
+        try {
+            Class.forName("org.apache.spark.sql.SparkSession");
+        } catch (ClassNotFoundException e) {
+            logger.info("Can not find org.apache.spark.sql.SparkSession.Spark has not started.");
+            hasSparkJar = false;
+        }
+
+        if (hasSparkJar) {
+            ClassLoader originClassLoader = Thread.currentThread().getContextClassLoader();
+            Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader());
+            try {
+                SparderEnv.init();
+            } catch (Throwable ex) {
+                logger.error("Initial Spark Context at starting failed", ex);
+            }
+            Thread.currentThread().setContextClassLoader(originClassLoader);
+        }
+    }
 }
diff --git a/server/pom.xml b/server/pom.xml
index a898eff..d4427c2 100644
--- a/server/pom.xml
+++ b/server/pom.xml
@@ -305,6 +305,13 @@
             <artifactId>junit</artifactId>
             <scope>test</scope>
         </dependency>
+
+        <dependency>
+            <groupId>mysql</groupId>
+            <artifactId>mysql-connector-java</artifactId>
+            <version>${mysql-connector.version}</version>
+            <scope>compile</scope>
+        </dependency>
     </dependencies>
 
     <build>
diff --git a/server/src/main/java/org/apache/kylin/rest/DebugTomcat.java b/server/src/main/java/org/apache/kylin/rest/DebugTomcat.java
index db28595..c0519e6 100644
--- a/server/src/main/java/org/apache/kylin/rest/DebugTomcat.java
+++ b/server/src/main/java/org/apache/kylin/rest/DebugTomcat.java
@@ -19,6 +19,7 @@
 package org.apache.kylin.rest;
 
 import java.io.File;
+import java.io.IOException;
 import java.lang.reflect.Field;
 import java.lang.reflect.Modifier;
 
@@ -80,6 +81,12 @@ public class DebugTomcat {
     private static void overrideDevJobJarLocations() {
         KylinConfig conf = KylinConfig.getInstanceFromEnv();
         File devJobJar = findFile("../assembly/target", "kylin-assembly-.*-SNAPSHOT-job.jar");
+        File sparkJar = findFile("../storage-parquet/target", "kylin-storage-parquet-.*-SNAPSHOT-spark.jar");
+        try {
+            System.setProperty("kylin.query.parquet-additional-jars", sparkJar.getCanonicalPath());
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
         if (devJobJar != null) {
             conf.overrideMRJobJarPath(devJobJar.getAbsolutePath());
         }
@@ -110,8 +117,11 @@ public class DebugTomcat {
 
         File webBase = new File("../webapp/app");
         File webInfDir = new File(webBase, "WEB-INF");
+        File metaInfDir = new File(webBase, "META-INF");
         FileUtils.deleteDirectory(webInfDir);
+        FileUtils.deleteDirectory(metaInfDir);
         FileUtils.copyDirectoryToDirectory(new File("../server/src/main/webapp/WEB-INF"), webBase);
+        FileUtils.copyDirectoryToDirectory(new File("../examples/test_case_data/webapps/META-INF"), webBase);
 
         Tomcat tomcat = new Tomcat();
         tomcat.setPort(port);
diff --git a/server/src/main/webapp/META-INF/context.xml b/server/src/main/webapp/META-INF/context.xml
new file mode 100644
index 0000000..d988ae1
--- /dev/null
+++ b/server/src/main/webapp/META-INF/context.xml
@@ -0,0 +1,38 @@
+<?xml version='1.0' encoding='utf-8'?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<!-- The contents of this file will be loaded for each web application -->
+<Context allowLinking="true">
+
+    <!-- Default set of monitored resources -->
+    <WatchedResource>WEB-INF/web.xml</WatchedResource>
+
+    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
+    <!--
+    <Manager pathname="" />
+    -->
+
+    <!-- Uncomment this to enable Comet connection tacking (provides events
+         on session expiration as well as webapp lifecycle) -->
+    <!--
+    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
+    -->
+
+    <Loader loaderClass="org.apache.kylin.ext.TomcatClassLoader"/>
+
+</Context>
diff --git a/storage-parquet/pom.xml b/storage-parquet/pom.xml
new file mode 100644
index 0000000..3778031
--- /dev/null
+++ b/storage-parquet/pom.xml
@@ -0,0 +1,159 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+ 
+     http://www.apache.org/licenses/LICENSE-2.0
+ 
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>kylin-storage-parquet</artifactId>
+    <packaging>jar</packaging>
+    <name>Apache Kylin - Parquet Storage</name>
+    <description>Apache Kylin - Parquet Storage</description>
+
+    <parent>
+        <groupId>org.apache.kylin</groupId>
+        <artifactId>kylin</artifactId>
+        <version>2.6.0-SNAPSHOT</version>
+    </parent>
+
+    <properties>
+        <shadeBase>org.apache.kylin.coprocessor.shaded</shadeBase>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-core-metadata</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-engine-mr</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-engine-spark</artifactId>
+        </dependency>
+
+        <!-- Env & Test -->
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-core-common</artifactId>
+            <type>test-jar</type>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-core-storage</artifactId>
+            <type>test-jar</type>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.mrunit</groupId>
+            <artifactId>mrunit</artifactId>
+            <classifier>hadoop2</classifier>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.powermock</groupId>
+            <artifactId>powermock-module-junit4-rule-agent</artifactId>
+            <version>${powermock.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <!-- Spark dependency -->
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-core_2.11</artifactId>
+            <scope>provided</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-sql_2.11</artifactId>
+            <version>${spark.version}</version>
+            <scope>provided</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-hive_2.11</artifactId>
+            <version>${spark.version}</version>
+            <scope>provided</scope>
+        </dependency>
+
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-shade-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>shade</goal>
+                        </goals>
+                        <configuration>
+                            <shadedArtifactAttached>true</shadedArtifactAttached>
+                            <shadedClassifierName>spark</shadedClassifierName>
+                            <transformers>
+                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
+                            </transformers>
+                            <artifactSet>
+                                <includes>
+                                    <include>org.apache.kylin:kylin-core-common</include>
+                                    <include>org.apache.kylin:kylin-core-metadata</include>
+                                    <include>org.apache.kylin:kylin-core-dictionary</include>
+                                    <include>org.apache.kylin:kylin-core-cube</include>
+                                    <include>org.apache.kylin:kylin-engine-spark</include>
+                                    <include>com.tdunning:t-digest</include>
+                                </includes>
+                            </artifactSet>
+                            <relocations>
+                                <relocation>
+                                    <pattern>com.tdunning</pattern>
+                                    <shadedPattern>${shadeBase}.com.tdunning</shadedPattern>
+                                </relocation>
+                            </relocations>
+                            <filters>
+                                <filter>
+                                    <artifact>*:*</artifact>
+                                    <excludes>
+                                        <exclude>META-INF/*.SF</exclude>
+                                        <exclude>META-INF/*.DSA</exclude>
+                                        <exclude>META-INF/*.RSA</exclude>
+                                    </excludes>
+                                </filter>
+                            </filters>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
index 1322da8..5009a51 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeSparkRPC.java
@@ -73,6 +73,11 @@ public class CubeSparkRPC implements IGTStorage {
 
         JobBuilderSupport jobBuilderSupport = new JobBuilderSupport(cubeSegment, "");
 
+<<<<<<< HEAD
+=======
+        String cubooidRootPath = jobBuilderSupport.getCuboidRootPath();
+
+>>>>>>> 198041d63... KYLIN-3625 Init query
         List<List<Long>> layeredCuboids = cubeSegment.getCuboidScheduler().getCuboidsByLayer();
         int level = 0;
         for (List<Long> levelCuboids : layeredCuboids) {
@@ -82,9 +87,13 @@ public class CubeSparkRPC implements IGTStorage {
             level++;
         }
 
+<<<<<<< HEAD
         String dataFolderName;
         String parquetRootPath = jobBuilderSupport.getParquetOutputPath();
         dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(parquetRootPath, level) + "/" + cuboid.getId();
+=======
+        String dataFolderName = JobBuilderSupport.getCuboidOutputPathsByLevel(cubooidRootPath, level) + "/" + cuboid.getId();
+>>>>>>> 198041d63... KYLIN-3625 Init query
 
         builder.setGtScanRequest(scanRequest.toByteArray()).setGtScanRequestId(scanReqId)
                 .setKylinProperties(KylinConfig.getInstanceFromEnv().exportAllToString())
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
index 6a3ad59..a3ce76f 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/cube/CubeStorageQuery.java
@@ -18,6 +18,7 @@
 
 package org.apache.kylin.storage.parquet.cube;
 
+import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.cube.CubeInstance;
 import org.apache.kylin.metadata.realization.SQLDigest;
 import org.apache.kylin.metadata.tuple.ITupleIterator;
@@ -38,6 +39,6 @@ public class CubeStorageQuery extends GTCubeStorageQueryBase {
 
     @Override
     protected String getGTStorage() {
-        return null;
+        return KylinConfig.getInstanceFromEnv().getSparkCubeGTStorage();
     }
 }
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
new file mode 100644
index 0000000..9096679
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetPayload.java
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.spark;
+
+import java.util.List;
+
+public class ParquetPayload {
+    private byte[] gtScanRequest;
+    private String gtScanRequestId;
+    private String kylinProperties;
+    private String realizationId;
+    private String segmentId;
+    private String dataFolderName;
+    private int maxRecordLength;
+    private List<Integer> parquetColumns;
+    private boolean isUseII;
+    private String realizationType;
+    private String queryId;
+    private boolean spillEnabled;
+    private long maxScanBytes;
+    private long startTime;
+    private int storageType;
+
+    private ParquetPayload(byte[] gtScanRequest, String gtScanRequestId, String kylinProperties, String realizationId,
+                           String segmentId, String dataFolderName, int maxRecordLength, List<Integer> parquetColumns,
+                           boolean isUseII, String realizationType, String queryId, boolean spillEnabled, long maxScanBytes,
+                           long startTime, int storageType) {
+        this.gtScanRequest = gtScanRequest;
+        this.gtScanRequestId = gtScanRequestId;
+        this.kylinProperties = kylinProperties;
+        this.realizationId = realizationId;
+        this.segmentId = segmentId;
+        this.dataFolderName = dataFolderName;
+        this.maxRecordLength = maxRecordLength;
+        this.parquetColumns = parquetColumns;
+        this.isUseII = isUseII;
+        this.realizationType = realizationType;
+        this.queryId = queryId;
+        this.spillEnabled = spillEnabled;
+        this.maxScanBytes = maxScanBytes;
+        this.startTime = startTime;
+        this.storageType = storageType;
+    }
+
+    public byte[] getGtScanRequest() {
+        return gtScanRequest;
+    }
+
+    public String getGtScanRequestId() {
+        return gtScanRequestId;
+    }
+
+    public String getKylinProperties() {
+        return kylinProperties;
+    }
+
+    public String getRealizationId() {
+        return realizationId;
+    }
+
+    public String getSegmentId() {
+        return segmentId;
+    }
+
+    public String getDataFolderName() {
+        return dataFolderName;
+    }
+
+    public int getMaxRecordLength() {
+        return maxRecordLength;
+    }
+
+    public List<Integer> getParquetColumns() {
+        return parquetColumns;
+    }
+
+    public boolean isUseII() {
+        return isUseII;
+    }
+
+    public String getRealizationType() {
+        return realizationType;
+    }
+
+    public String getQueryId() {
+        return queryId;
+    }
+
+    public boolean isSpillEnabled() {
+        return spillEnabled;
+    }
+
+    public long getMaxScanBytes() {
+        return maxScanBytes;
+    }
+
+    public long getStartTime() {
+        return startTime;
+    }
+
+    public int getStorageType() {
+        return storageType;
+    }
+
+    static public class ParquetPayloadBuilder {
+        private byte[] gtScanRequest;
+        private String gtScanRequestId;
+        private String kylinProperties;
+        private String realizationId;
+        private String segmentId;
+        private String dataFolderName;
+        private int maxRecordLength;
+        private List<Integer> parquetColumns;
+        private boolean isUseII;
+        private String realizationType;
+        private String queryId;
+        private boolean spillEnabled;
+        private long maxScanBytes;
+        private long startTime;
+        private int storageType;
+
+        public ParquetPayloadBuilder() {
+        }
+
+        public ParquetPayloadBuilder setGtScanRequest(byte[] gtScanRequest) {
+            this.gtScanRequest = gtScanRequest;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setGtScanRequestId(String gtScanRequestId) {
+            this.gtScanRequestId = gtScanRequestId;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setKylinProperties(String kylinProperties) {
+            this.kylinProperties = kylinProperties;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setRealizationId(String realizationId) {
+            this.realizationId = realizationId;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setSegmentId(String segmentId) {
+            this.segmentId = segmentId;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setDataFolderName(String dataFolderName) {
+            this.dataFolderName = dataFolderName;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setMaxRecordLength(int maxRecordLength) {
+            this.maxRecordLength = maxRecordLength;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setParquetColumns(List<Integer> parquetColumns) {
+            this.parquetColumns = parquetColumns;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setIsUseII(boolean isUseII) {
+            this.isUseII = isUseII;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setRealizationType(String realizationType) {
+            this.realizationType = realizationType;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setQueryId(String queryId) {
+            this.queryId = queryId;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setSpillEnabled(boolean spillEnabled) {
+            this.spillEnabled = spillEnabled;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setMaxScanBytes(long maxScanBytes) {
+            this.maxScanBytes = maxScanBytes;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setStartTime(long startTime) {
+            this.startTime = startTime;
+            return this;
+        }
+
+        public ParquetPayloadBuilder setStorageType(int storageType) {
+            this.storageType = storageType;
+            return this;
+        }
+
+        public ParquetPayload createParquetPayload() {
+            return new ParquetPayload(gtScanRequest, gtScanRequestId, kylinProperties, realizationId, segmentId,
+                    dataFolderName, maxRecordLength, parquetColumns, isUseII, realizationType, queryId, spillEnabled,
+                    maxScanBytes, startTime, storageType);
+        }
+    }
+}
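ParquetPayload is only created through its builder; below is a sketch of the call pattern with placeholder values (the real caller, CubeSparkRPC in this patch, derives them from the GTScanRequest, cube segment and KylinConfig):

    import org.apache.kylin.storage.parquet.spark.ParquetPayload;

    public class PayloadExample {
        static ParquetPayload buildExamplePayload(byte[] scanRequestBytes) {
            return new ParquetPayload.ParquetPayloadBuilder()
                    .setGtScanRequest(scanRequestBytes)
                    .setGtScanRequestId("scan-req-0")        // placeholder id
                    .setKylinProperties("")                  // KylinConfig.exportAllToString() in the real path
                    .setRealizationId("cube-uuid")           // placeholder
                    .setSegmentId("segment-uuid")            // placeholder
                    .setDataFolderName("parquet/Level_0/0")  // placeholder cuboid folder
                    .setSpillEnabled(true)
                    .setMaxScanBytes(-1L)                    // placeholder limit
                    .setStartTime(System.currentTimeMillis())
                    .setStorageType(0)                       // placeholder; the real id comes from the cube desc
                    .createParquetPayload();
        }
    }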
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java
new file mode 100644
index 0000000..611ee44
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/ParquetTask.java
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.spark;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.QueryContext;
+import org.apache.kylin.common.util.HadoopUtil;
+import org.apache.kylin.common.util.ImmutableBitSet;
+import org.apache.kylin.cube.CubeInstance;
+import org.apache.kylin.cube.CubeManager;
+import org.apache.kylin.cube.CubeSegment;
+import org.apache.kylin.cube.cuboid.Cuboid;
+import org.apache.kylin.cube.gridtable.CuboidToGridTableMapping;
+import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.metadata.datatype.DataType;
+import org.apache.kylin.metadata.model.MeasureDesc;
+import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.api.java.function.Function;
+import org.apache.spark.sql.Column;
+import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.SQLContext;
+import org.apache.spark.sql.SparderEnv;
+import org.apache.spark.sql.SparderEnv$;
+import org.apache.spark.sql.manager.UdfManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Field;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.spark.sql.functions.asc;
+import static org.apache.spark.sql.functions.callUDF;
+import static org.apache.spark.sql.functions.col;
+import static org.apache.spark.sql.functions.max;
+import static org.apache.spark.sql.functions.min;
+import static org.apache.spark.sql.functions.sum;
+
+@SuppressWarnings("serial")
+public class ParquetTask implements Serializable {
+    public static final Logger logger = LoggerFactory.getLogger(ParquetTask.class);
+
+    private final transient JavaSparkContext sc;
+    private final KylinConfig kylinConfig;
+    private final transient Configuration conf;
+    private final transient String[] parquetPaths;
+    private final transient GTScanRequest scanRequest;
+    private final transient CuboidToGridTableMapping mapping;
+
+    ParquetTask(ParquetPayload request) {
+        try {
+            this.sc = JavaSparkContext.fromSparkContext(SparderEnv$.MODULE$.getSparkSession().sparkContext());
+            this.kylinConfig = KylinConfig.getInstanceFromEnv();
+            this.conf = HadoopUtil.getCurrentConfiguration();
+
+            scanRequest = GTScanRequest.serializer
+                    .deserialize(ByteBuffer.wrap(request.getGtScanRequest()));
+
+            long startTime = System.currentTimeMillis();
+            sc.setLocalProperty("spark.job.description", Thread.currentThread().getName());
+
+            if (QueryContext.current().isHighPriorityQuery()) {
+                sc.setLocalProperty("spark.scheduler.pool", "vip_tasks");
+            } else {
+                sc.setLocalProperty("spark.scheduler.pool", "lightweight_tasks");
+            }
+
+            String dataFolderName = request.getDataFolderName();
+
+            String baseFolder = dataFolderName.substring(0, dataFolderName.lastIndexOf('/'));
+            String cuboidId = dataFolderName.substring(dataFolderName.lastIndexOf("/") + 1);
+            String prefix = "cuboid_" + cuboidId + "_";
+
+            CubeInstance cubeInstance = CubeManager.getInstance(kylinConfig).getCube(request.getRealizationId());
+            CubeSegment cubeSegment = cubeInstance.getSegmentById(request.getSegmentId());
+            mapping = new CuboidToGridTableMapping(Cuboid.findById(cubeSegment.getCuboidScheduler(), Long.valueOf(cuboidId)));
+
+            Path[] filePaths = HadoopUtil.getFilteredPath(HadoopUtil.getWorkingFileSystem(conf), new Path(baseFolder), prefix);
+            parquetPaths = new String[filePaths.length];
+
+            for (int i = 0; i < filePaths.length; i++) {
+                parquetPaths[i] = filePaths[i].toString();
+            }
+
+            cleanHadoopConf(conf);
+
+            logger.info("SparkVisit Init takes {} ms", System.currentTimeMillis() - startTime);
+
+            StringBuilder pathBuilder = new StringBuilder();
+            for (Path p : filePaths) {
+                pathBuilder.append(p.toString()).append(";");
+            }
+
+            logger.info("Columnar path is " + pathBuilder.toString());
+            logger.info("Required Measures: " + StringUtils.join(request.getParquetColumns(), ","));
+            logger.info("Max GT length: " + request.getMaxRecordLength());
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * The updatingResource map inside Configuration incurs gzip compression in Configuration.write,
+     * so we clear it out here to improve QPS.
+     */
+    private void cleanHadoopConf(Configuration c) {
+        try {
+            //updatingResource will get compressed by gzip, which is costly
+            Field updatingResourceField = Configuration.class.getDeclaredField("updatingResource");
+            updatingResourceField.setAccessible(true);
+            Map<String, String[]> map = (Map<String, String[]>) updatingResourceField.get(c);
+            map.clear();
+
+        } catch (IllegalAccessException | NoSuchFieldException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    public Iterator<Object[]> executeTask() {
+        logger.info("Start to visit cube data with Spark SQL <<<<<<");
+
+        SQLContext sqlContext = new SQLContext(SparderEnv.getSparkSession().sparkContext());
+
+        Dataset<Row> dataset = sqlContext.read().parquet(parquetPaths);
+        ImmutableBitSet dimensions = scanRequest.getDimensions();
+        ImmutableBitSet metrics = scanRequest.getAggrMetrics();
+        ImmutableBitSet groupBy = scanRequest.getAggrGroupBy();
+
+        // select
+        Column[] selectColumn = getSelectColumn(dimensions, metrics, mapping);
+        dataset = dataset.select(selectColumn);
+
+        // where
+        String where = scanRequest.getFilterPushDownSQL();
+        if (where != null) {
+            dataset = dataset.filter(where);
+        }
+
+        //groupby agg
+        Column[] aggCols = getAggColumns(metrics, mapping);
+        Column[] tailCols;
+
+        if (aggCols.length >= 1) {
+            tailCols = new Column[aggCols.length - 1];
+            System.arraycopy(aggCols, 1, tailCols, 0, tailCols.length);
+            dataset = dataset.groupBy(getGroupByColumn(dimensions, mapping)).agg(aggCols[0], tailCols);
+        }
+
+        // sort
+        dataset = dataset.sort(getSortColumn(groupBy, mapping));
+
+        JavaRDD<Row> rowRDD = dataset.javaRDD();
+
+        JavaRDD<Object[]> objRDD = rowRDD.map(new Function<Row, Object[]>() {
+            @Override
+            public Object[] call(Row row) throws Exception {
+                Object[] objects = new Object[row.length()];
+                for (int i = 0; i < row.length(); i++) {
+                    objects[i] = row.get(i);
+                }
+                return objects;
+            }
+        });
+
+        logger.info("partitions: {}", objRDD.getNumPartitions());
+
+        List<Object[]> result = objRDD.collect();
+        return result.iterator();
+    }
+
+    private Column[] getAggColumns(ImmutableBitSet metrics, CuboidToGridTableMapping mapping) {
+        Column[] columns = new Column[metrics.trueBitCount()];
+        Map<MeasureDesc, Integer> met2gt = mapping.getMet2gt();
+
+        for (int i = 0; i < metrics.trueBitCount(); i++) {
+            int c = metrics.trueBitAt(i);
+            for (Map.Entry<MeasureDesc, Integer> entry : met2gt.entrySet()) {
+                if (entry.getValue() == c) {
+                    MeasureDesc measureDesc = entry.getKey();
+                    String func = measureDesc.getFunction().getExpression();
+                    columns[i] = getAggColumn(measureDesc.getName(), func, measureDesc.getFunction().getReturnDataType());
+                    break;
+                }
+            }
+        }
+
+        return columns;
+    }
+
+    private Column getAggColumn(String metName, String func, DataType dataType) {
+        Column column;
+        switch (func) {
+            case "SUM":
+                column = sum(metName);
+                break;
+            case "MIN":
+                column = min(metName);
+                break;
+            case "MAX":
+                column = max(metName);
+                break;
+            case "COUNT":
+                column = sum(metName);
+                break;
+            case "TOP_N":
+            case "COUNT_DISTINCT":
+            case "EXTENDED_COLUMN":
+            case "PERCENTILE_APPROX":
+            case "RAW":
+                String udf = UdfManager.register(dataType, func);
+                column = callUDF(udf, col(metName));
+                break;
+            default:
+                throw new IllegalArgumentException("Function " + func + " is not supported");
+
+        }
+        return column.alias(metName);
+    }
+
+    private void getDimColumn(ImmutableBitSet dimensions, Column[] columns, int from, CuboidToGridTableMapping mapping) {
+        Map<TblColRef, Integer> dim2gt = mapping.getDim2gt();
+
+        for (int i = 0; i < dimensions.trueBitCount(); i++) {
+            int c = dimensions.trueBitAt(i);
+            for (Map.Entry<TblColRef, Integer> entry : dim2gt.entrySet()) {
+                if (entry.getValue() == c) {
+                    columns[i + from] = col(entry.getKey().getTableAlias() + "_" + entry.getKey().getName());
+                    break;
+                }
+            }
+        }
+    }
+
+    private void getMetColumn(ImmutableBitSet metrics, Column[] columns, int from, CuboidToGridTableMapping mapping) {
+        Map<MeasureDesc, Integer> met2gt = mapping.getMet2gt();
+
+        for (int i = 0; i < metrics.trueBitCount(); i++) {
+            int m = metrics.trueBitAt(i);
+            for (Map.Entry<MeasureDesc, Integer> entry : met2gt.entrySet()) {
+                if (entry.getValue() == m) {
+                    columns[i + from] = col(entry.getKey().getName());
+                    break;
+                }
+            }
+        }
+    }
+
+    private Column[] getSelectColumn(ImmutableBitSet dimensions, ImmutableBitSet metrics, CuboidToGridTableMapping mapping) {
+        Column[] columns = new Column[dimensions.trueBitCount() + metrics.trueBitCount()];
+
+        getDimColumn(dimensions, columns, 0, mapping);
+
+        getMetColumn(metrics, columns, dimensions.trueBitCount(), mapping);
+
+        return columns;
+    }
+
+    private Column[] getGroupByColumn(ImmutableBitSet dimensions, CuboidToGridTableMapping mapping) {
+        Column[] columns = new Column[dimensions.trueBitCount()];
+
+        getDimColumn(dimensions, columns, 0, mapping);
+
+        return columns;
+    }
+
+    private Column[] getSortColumn(ImmutableBitSet dimensions, CuboidToGridTableMapping mapping) {
+        Column[] columns = new Column[dimensions.trueBitCount()];
+        Map<TblColRef, Integer> dim2gt = mapping.getDim2gt();
+
+        for (int i = 0; i < dimensions.trueBitCount(); i++) {
+            int c = dimensions.trueBitAt(i);
+            for (Map.Entry<TblColRef, Integer> entry : dim2gt.entrySet()) {
+                if (entry.getValue() == c) {
+                    columns[i] = asc(entry.getKey().getTableAlias() + "_" + entry.getKey().getName());
+                    break;
+                }
+            }
+        }
+
+        return columns;
+    }
+}
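
For reference, executeTask() above reduces to a select -> filter -> groupBy/agg -> sort pipeline over the cuboid Parquet files. A minimal standalone Spark SQL sketch of the same pattern (not part of the patch; the path and column names are placeholders rather than real Kylin identifiers):

    import static org.apache.spark.sql.functions.asc;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.sum;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class CuboidScanSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("cuboid-scan-sketch").master("local[*]").getOrCreate();

            // read cuboid parquet files, like sqlContext.read().parquet(parquetPaths) above
            Dataset<Row> ds = spark.read().parquet("/tmp/cuboid_262143_0.parquet");

            ds = ds.select(col("SELLER_ID"), col("GMV"))  // dimensions + measures (getSelectColumn)
                    .filter("SELLER_ID IS NOT NULL")      // filter push-down SQL
                    .groupBy(col("SELLER_ID"))            // group-by dimensions (getGroupByColumn)
                    .agg(sum("GMV").alias("GMV"))         // aggregated measures (getAggColumns)
                    .sort(asc("SELLER_ID"));              // sort on group-by columns (getSortColumn)

            ds.show();
            spark.stop();
        }
    }
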
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java
new file mode 100644
index 0000000..1c8c246
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/SparkSubmitter.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.spark;
+
+import org.apache.kylin.ext.ClassLoaderUtils;
+import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.gridtable.IGTScanner;
+import org.apache.kylin.storage.parquet.spark.gtscanner.ParquetRecordGTScanner;
+import org.apache.kylin.storage.parquet.spark.gtscanner.ParquetRecordGTScanner4Cube;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SparkSubmitter {
+    public static final Logger logger = LoggerFactory.getLogger(SparkSubmitter.class);
+
+    public static IGTScanner submitParquetTask(GTScanRequest scanRequest, ParquetPayload payload) {
+
+        Thread.currentThread().setContextClassLoader(ClassLoaderUtils.getSparkClassLoader());
+        ParquetTask parquetTask = new ParquetTask(payload);
+
+        ParquetRecordGTScanner scanner = new ParquetRecordGTScanner4Cube(scanRequest.getInfo(), parquetTask.executeTask(), scanRequest,
+                payload.getMaxScanBytes());
+
+        return scanner;
+    }
+}
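
For reference, a hypothetical caller on the query side would assemble the payload with the builder added earlier in this patch and hand it to SparkSubmitter. The sketch below assumes ParquetPayloadBuilder is the public static nested builder of ParquetPayload; every concrete value (ids, folder, limits, storage type) is a placeholder the query engine would supply:

    // hypothetical helper; all literal values below are placeholders
    public static IGTScanner scanCuboid(GTScanRequest gtScanRequest, byte[] gtScanRequestBytes,
            String cubeName, String segmentId, String dataFolder, String queryId) {
        ParquetPayload payload = new ParquetPayload.ParquetPayloadBuilder()
                .setGtScanRequest(gtScanRequestBytes)           // serialized GTScanRequest
                .setGtScanRequestId(queryId)
                .setKylinProperties("")                         // exported kylin.properties content
                .setRealizationId(cubeName)
                .setSegmentId(segmentId)
                .setDataFolderName(dataFolder)                  // segment folder ending with the cuboid id
                .setMaxRecordLength(1024)
                .setParquetColumns(new java.util.ArrayList<Integer>())
                .setIsUseII(false)
                .setRealizationType("CUBE")
                .setQueryId(queryId)
                .setSpillEnabled(true)
                .setMaxScanBytes(3L * 1024 * 1024 * 1024)
                .setStartTime(System.currentTimeMillis())
                .setStorageType(4)
                .createParquetPayload();

        return SparkSubmitter.submitParquetTask(gtScanRequest, payload);
    }
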
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java
new file mode 100644
index 0000000..322aec1
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner.java
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.spark.gtscanner;
+
+import com.google.common.collect.Iterators;
+import org.apache.kylin.common.exceptions.KylinTimeoutException;
+import org.apache.kylin.common.exceptions.ResourceLimitExceededException;
+import org.apache.kylin.common.util.ByteArray;
+import org.apache.kylin.common.util.ImmutableBitSet;
+import org.apache.kylin.gridtable.GTInfo;
+import org.apache.kylin.gridtable.GTRecord;
+import org.apache.kylin.gridtable.GTScanRequest;
+import org.apache.kylin.gridtable.IGTScanner;
+
+import javax.annotation.Nullable;
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ * This class tracks resource usage (scanned rows and bytes) of the scan and enforces the max-scanned-bytes limit.
+ */
+public abstract class ParquetRecordGTScanner implements IGTScanner {
+
+    private Iterator<Object[]> iterator;
+    private GTInfo info;
+    private GTRecord gtrecord;
+    private ImmutableBitSet columns;
+
+    private long maxScannedBytes;
+
+    private long scannedRows;
+    private long scannedBytes;
+
+    private ImmutableBitSet[] columnBlocks;
+
+    public ParquetRecordGTScanner(GTInfo info, Iterator<Object[]> iterator, GTScanRequest scanRequest,
+                                  long maxScannedBytes) {
+        this.iterator = iterator;
+        this.info = info;
+        this.gtrecord = new GTRecord(info);
+        this.columns = scanRequest.getColumns();
+        this.maxScannedBytes = maxScannedBytes;
+        this.columnBlocks = getParquetCoveredColumnBlocks(scanRequest);
+    }
+
+    @Override
+    public GTInfo getInfo() {
+        return info;
+    }
+
+    @Override
+    public void close() throws IOException {
+    }
+
+    public long getTotalScannedRowCount() {
+        return scannedRows;
+    }
+
+    public long getTotalScannedRowBytes() {
+        return scannedBytes;
+    }
+
+    @Override
+    public Iterator<GTRecord> iterator() {
+        return Iterators.transform(iterator, new com.google.common.base.Function<Object[], GTRecord>() {
+            @Nullable
+            @Override
+            public GTRecord apply(@Nullable Object[] input) {
+                gtrecord.setValuesParquet(ParquetRecordGTScanner.this.columns, new ByteArray(info.getMaxColumnLength(ParquetRecordGTScanner.this.columns)), input);
+
+                scannedBytes += info.getMaxColumnLength(ParquetRecordGTScanner.this.columns);
+                if ((++scannedRows % GTScanRequest.terminateCheckInterval == 1) && Thread.interrupted()) {
+                    throw new KylinTimeoutException("Query timeout");
+                }
+                if (scannedBytes > maxScannedBytes) {
+                    throw new ResourceLimitExceededException(
+                            "Partition scanned bytes " + scannedBytes + " exceeds threshold " + maxScannedBytes
+                                    + ", consider increasing kylin.storage.partition.max-scan-bytes");
+                }
+                return gtrecord;
+            }
+        });
+    }
+
+    abstract protected ImmutableBitSet getParquetCoveredColumns(GTScanRequest scanRequest);
+
+    abstract protected ImmutableBitSet[] getParquetCoveredColumnBlocks(GTScanRequest scanRequest);
+
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java
new file mode 100644
index 0000000..3bf670e
--- /dev/null
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/spark/gtscanner/ParquetRecordGTScanner4Cube.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.storage.parquet.spark.gtscanner;
+
+import org.apache.kylin.common.util.ImmutableBitSet;
+import org.apache.kylin.gridtable.GTInfo;
+import org.apache.kylin.gridtable.GTScanRequest;
+
+import java.util.BitSet;
+import java.util.Iterator;
+
+public class ParquetRecordGTScanner4Cube extends ParquetRecordGTScanner {
+    public ParquetRecordGTScanner4Cube(GTInfo info, Iterator<Object[]> iterator, GTScanRequest scanRequest,
+                                       long maxScannedBytes) {
+        super(info, iterator, scanRequest, maxScannedBytes);
+    }
+
+    protected ImmutableBitSet getParquetCoveredColumns(GTScanRequest scanRequest) {
+        BitSet bs = new BitSet();
+
+        ImmutableBitSet dimensions = scanRequest.getInfo().getPrimaryKey();
+        for (int i = 0; i < dimensions.trueBitCount(); ++i) {
+            bs.set(dimensions.trueBitAt(i));
+        }
+
+        ImmutableBitSet queriedColumns = scanRequest.getColumns();
+        for (int i = 0; i < queriedColumns.trueBitCount(); ++i) {
+            bs.set(queriedColumns.trueBitAt(i));
+        }
+        return new ImmutableBitSet(bs);
+    }
+
+    protected ImmutableBitSet[] getParquetCoveredColumnBlocks(GTScanRequest scanRequest) {
+        
+        ImmutableBitSet selectedColBlocksBitSet = scanRequest.getSelectedColBlocks();
+        
+        ImmutableBitSet[] selectedColBlocks = new ImmutableBitSet[selectedColBlocksBitSet.trueBitCount()];
+        
+        for(int i = 0; i < selectedColBlocksBitSet.trueBitCount(); i++) {
+            
+            selectedColBlocks[i] = scanRequest.getInfo().getColumnBlock(selectedColBlocksBitSet.trueBitAt(i));
+            
+        }
+
+        return selectedColBlocks;
+    }
+
+}
diff --git a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
index 2a7c1ee..def4d8d 100644
--- a/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
+++ b/storage-parquet/src/main/java/org/apache/kylin/storage/parquet/steps/SparkCubeParquet.java
@@ -46,6 +46,7 @@ import org.apache.kylin.cube.CubeSegment;
 import org.apache.kylin.cube.cuboid.Cuboid;
 import org.apache.kylin.cube.kv.RowConstants;
 import org.apache.kylin.cube.kv.RowKeyDecoder;
+import org.apache.kylin.cube.kv.RowKeyDecoderParquet;
 import org.apache.kylin.cube.model.CubeDesc;
 import org.apache.kylin.dimension.AbstractDateDimEnc;
 import org.apache.kylin.dimension.DimensionEncoding;
@@ -67,6 +68,8 @@ import org.apache.kylin.measure.basic.BasicMeasureType;
 import org.apache.kylin.measure.basic.BigDecimalIngester;
 import org.apache.kylin.measure.basic.DoubleIngester;
 import org.apache.kylin.measure.basic.LongIngester;
+import org.apache.kylin.metadata.datatype.BigDecimalSerializer;
+import org.apache.kylin.metadata.datatype.DataType;
 import org.apache.kylin.metadata.model.MeasureDesc;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.apache.parquet.example.data.Group;
@@ -90,10 +93,10 @@ import scala.Tuple2;
 
 import java.io.IOException;
 import java.io.Serializable;
+import java.math.BigDecimal;
 import java.nio.ByteBuffer;
 import java.util.List;
 import java.util.Map;
-import java.util.Random;
 
 
 public class SparkCubeParquet extends AbstractApplication implements Serializable{
@@ -151,6 +154,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         KylinSparkJobListener jobListener = new KylinSparkJobListener();
         try (JavaSparkContext sc = new JavaSparkContext(conf)){
             sc.sc().addSparkListener(jobListener);
+
+            HadoopUtil.deletePath(sc.hadoopConfiguration(), new Path(outputPath));
             final SerializableConfiguration sConf = new SerializableConfiguration(sc.hadoopConfiguration());
 
             final KylinConfig envConfig = AbstractHadoopJob.loadKylinConfigFromHdfs(sConf, metaUrl);
@@ -158,7 +163,6 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             final CubeInstance cubeInstance = CubeManager.getInstance(envConfig).getCube(cubeName);
             final CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
 
-
             final FileSystem fs = new Path(inputPath).getFileSystem(sc.hadoopConfiguration());
 
             final int totalLevels = cubeSegment.getCuboidScheduler().getBuildLevel();
@@ -175,6 +179,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
                 saveToParquet(allRDDs[level], metaUrl, cubeName, cubeSegment, outputPath, level, job, envConfig);
             }
 
+            logger.info("HDFS: Number of bytes written={}", jobListener.metrics.getBytesWritten());
+
             Map<String, String> counterMap = Maps.newHashMap();
             counterMap.put(ExecutableConstants.HDFS_BYTES_WRITTEN, String.valueOf(jobListener.metrics.getBytesWritten()));
 
@@ -217,11 +223,12 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
     }
 
     static class CuboidPartitioner extends Partitioner {
-
         private CuboidToPartitionMapping mapping;
+        private boolean enableSharding;
 
-        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping) {
+        public CuboidPartitioner(CuboidToPartitionMapping cuboidToPartitionMapping, boolean enableSharding) {
             this.mapping = cuboidToPartitionMapping;
+            this.enableSharding = enableSharding;
         }
 
         @Override
@@ -330,6 +337,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             int partition = (int) (cuboidSize / (rddCut * 10));
             partition = Math.max(kylinConfig.getSparkMinPartition(), partition);
             partition = Math.min(kylinConfig.getSparkMaxPartition(), partition);
+            logger.info("cuboid:{}, est_size:{}, partitions:{}", cuboidId, cuboidSize, partition);
             return partition;
         }
 
@@ -377,6 +385,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
         private Map<MeasureDesc, String> meaTypeMap;
         private GroupFactory factory;
         private BufferedMeasureCodec measureCodec;
+        private BigDecimalSerializer serializer;
+        private int count = 0;
 
         public GenerateGroupRDDFunction(String cubeName, String segmentId, String metaurl, SerializableConfiguration conf, Map<TblColRef, String> colTypeMap, Map<MeasureDesc, String> meaTypeMap) {
             this.cubeName = cubeName;
@@ -394,15 +404,15 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             CubeDesc cubeDesc = cubeInstance.getDescriptor();
             CubeSegment cubeSegment = cubeInstance.getSegmentById(segmentId);
             measureDescs = cubeDesc.getMeasures();
-            decoder = new RowKeyDecoder(cubeSegment);
+            decoder = new RowKeyDecoderParquet(cubeSegment);
             factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(conf.get()));
             measureCodec = new BufferedMeasureCodec(cubeDesc.getMeasures());
+            serializer = new BigDecimalSerializer(DataType.getType("decimal"));
+            initialized = true;
         }
 
         @Override
         public Tuple2<Void, Group> call(Tuple2<Text, Text> tuple) throws Exception {
-
-            logger.debug("call: transfer Text to byte[]");
             if (initialized == false) {
                 synchronized (SparkCubeParquet.class) {
                     if (initialized == false) {
@@ -412,7 +422,7 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
                 }
             }
 
-            long cuboid = decoder.decode(tuple._1.getBytes());
+            long cuboid = decoder.decode4Parquet(tuple._1.getBytes());
             List<String> values = decoder.getValues();
             List<TblColRef> columns = decoder.getColumns();
 
@@ -426,8 +436,9 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
                 parseColValue(group, column, values.get(i));
             }
 
+            count++;
 
-            byte[] encodedBytes = tuple._2().copyBytes();
+            byte[] encodedBytes = tuple._2().getBytes();
             int[] valueLengths = measureCodec.getCodec().getPeekLength(ByteBuffer.wrap(encodedBytes));
 
             int valueOffset = 0;
@@ -465,11 +476,16 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
             }
             switch (meaTypeMap.get(measureDesc)) {
                 case "long":
-                    group.append(measureDesc.getName(), BytesUtil.readLong(value, offset, length));
+                    group.append(measureDesc.getName(), BytesUtil.readVLong(ByteBuffer.wrap(value, offset, length)));
                     break;
                 case "double":
                     group.append(measureDesc.getName(), ByteBuffer.wrap(value, offset, length).getDouble());
                     break;
+                case "decimal":
+                    BigDecimal decimal = serializer.deserialize(ByteBuffer.wrap(value, offset, length));
+                    decimal = decimal.setScale(4);
+                    group.append(measureDesc.getName(), Binary.fromConstantByteArray(decimal.unscaledValue().toByteArray()));
+                    break;
                 default:
                     group.append(measureDesc.getName(), Binary.fromConstantByteArray(value, offset, length));
                     break;
@@ -492,14 +508,8 @@ public class SparkCubeParquet extends AbstractApplication implements Serializabl
                 colTypeMap.put(colRef, "long");
             } else if (dimEnc instanceof FixedLenDimEnc || dimEnc instanceof FixedLenHexDimEnc) {
                 org.apache.kylin.metadata.datatype.DataType colDataType = colRef.getType();
-                if (colDataType.isNumberFamily() || colDataType.isDateTimeFamily()){
-                    builder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(getColName(colRef));
-                    colTypeMap.put(colRef, "long");
-                } else {
-                    // stringFamily && default
-                    builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
-                    colTypeMap.put(colRef, "string");
-                }
+                builder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(getColName(colRef));
+                colTypeMap.put(colRef, "string");
             } else {
                 builder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(getColName(colRef));
                 colTypeMap.put(colRef, "int");
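
The new "decimal" branch above fixes the scale to 4 and writes only the unscaled value as a Parquet binary. A small standalone sketch of that round trip (the sample value is arbitrary, not taken from the patch):

    import java.math.BigDecimal;
    import java.math.BigInteger;

    public class DecimalEncodingSketch {
        public static void main(String[] args) {
            BigDecimal value = new BigDecimal("123.4");

            // normalize to scale 4, as the "decimal" case does, then keep only the unscaled value
            BigDecimal scaled = value.setScale(4);                      // 123.4000
            byte[] parquetBytes = scaled.unscaledValue().toByteArray();

            // reading back only needs the agreed scale to be re-applied
            BigDecimal restored = new BigDecimal(new BigInteger(parquetBytes), 4);
            System.out.println(restored);                               // prints 123.4000
        }
    }
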
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/ClassLoaderUtils.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/ClassLoaderUtils.java
new file mode 100644
index 0000000..3888883
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/ClassLoaderUtils.java
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.net.URLClassLoader;
+
+public final class ClassLoaderUtils {
+    static URLClassLoader sparkClassLoader = null;
+    static URLClassLoader originClassLoader = null;
+    private static Logger logger = LoggerFactory.getLogger(ClassLoaderUtils.class);
+    private static Boolean isPlus;
+    static {
+        isPlus = true;
+    }
+
+    public static File findFile(String dir, String ptn) {
+        File[] files = new File(dir).listFiles();
+        if (files != null) {
+            for (File f : files) {
+                if (f.getName().matches(ptn))
+                    return f;
+            }
+        }
+        return null;
+    }
+
+    public static ClassLoader getSparkClassLoader() {
+        if (!isPlus) {
+            return Thread.currentThread().getContextClassLoader();
+        }
+        if (sparkClassLoader == null) {
+            logger.error("sparkClassLoader not init");
+            return Thread.currentThread().getContextClassLoader();
+        } else {
+            return sparkClassLoader;
+        }
+    }
+
+    public static void setSparkClassLoader(URLClassLoader classLoader) {
+        if (sparkClassLoader != null) {
+            logger.error("sparkClassLoader already initialized");
+        }
+        logger.info("set sparkClassLoader :" + classLoader);
+        if (System.getenv("DEBUG_SPARK_CLASSLOADER") != null) {
+            return;
+        }
+        sparkClassLoader = classLoader;
+    }
+
+    public static ClassLoader getOriginClassLoader() {
+        if (!isPlus) {
+            return Thread.currentThread().getContextClassLoader();
+        }
+        if (originClassLoader == null) {
+            logger.error("originClassLoader not init");
+            return Thread.currentThread().getContextClassLoader();
+        } else {
+            return originClassLoader;
+        }
+    }
+
+    public static void setOriginClassLoader(URLClassLoader classLoader) {
+        if (originClassLoader != null) {
+            logger.error("originClassLoader already initialized");
+        }
+        logger.info("set originClassLoader :" + classLoader);
+        originClassLoader = classLoader;
+    }
+}
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java
new file mode 100644
index 0000000..a0c212c
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/DebugTomcatClassLoader.java
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import org.apache.catalina.loader.ParallelWebappClassLoader;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.HashSet;
+import java.util.Set;
+
+public class DebugTomcatClassLoader extends ParallelWebappClassLoader {
+    private static final String[] PARENT_CL_PRECEDENT_CLASSES = new String[] {
+            // Java standard library:
+            "com.sun.", "launcher.", "javax.", "org.ietf", "java", "org.omg", "org.w3c", "org.xml", "sunw.",
+            // logging
+            "org.slf4j", "org.apache.commons.logging", "org.apache.log4j", "org.apache.catalina", "org.apache.tomcat" };
+    private static final String[] THIS_CL_PRECEDENT_CLASSES = new String[] { "org.apache.kylin",
+            "org.apache.calcite" };
+    private static final String[] CODE_GEN_CLASS = new String[] { "org.apache.spark.sql.catalyst.expressions.Object",
+            "Baz" };
+
+    private static final Set<String> wontFindClasses = new HashSet<>();
+
+    static {
+        wontFindClasses.add("Class");
+        wontFindClasses.add("Object");
+        wontFindClasses.add("org");
+        wontFindClasses.add("java.lang.org");
+        wontFindClasses.add("java.lang$org");
+        wontFindClasses.add("java$lang$org");
+        wontFindClasses.add("org.apache");
+        wontFindClasses.add("org.apache.calcite");
+        wontFindClasses.add("org.apache.calcite.runtime");
+        wontFindClasses.add("org.apache.calcite.linq4j");
+        wontFindClasses.add("Long");
+        wontFindClasses.add("String");
+    }
+
+    private static Logger logger = LoggerFactory.getLogger(DebugTomcatClassLoader.class);
+    private SparkClassLoader sparkClassLoader;
+
+    /**
+     * Creates a DebugTomcatClassLoader that can load classes dynamically
+     * from jar files under a specific folder.
+     *
+     * @param parent the parent ClassLoader to set.
+     */
+    public DebugTomcatClassLoader(ClassLoader parent) throws IOException {
+        super(parent);
+        sparkClassLoader = new SparkClassLoader(this);
+        ClassLoaderUtils.setSparkClassLoader(sparkClassLoader);
+        ClassLoaderUtils.setOriginClassLoader(this);
+        init();
+    }
+
+    public void init() {
+
+        String classPath = System.getProperty("java.class.path");
+        if (classPath == null) {
+            throw new RuntimeException("");
+        }
+        String[] jars = classPath.split(File.pathSeparator);
+        for (String jar : jars) {
+            try {
+                URL url = new File(jar).toURI().toURL();
+                addURL(url);
+            } catch (MalformedURLException e) {
+                e.printStackTrace();
+            }
+        }
+    }
+
+    @Override
+    public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+        if (isWontFind(name)) {
+            throw new ClassNotFoundException();
+        }
+        if (isCodeGen(name)) {
+            throw new ClassNotFoundException();
+        }
+
+        if (name.startsWith("org.apache.kylin.ext")) {
+            return parent.loadClass(name);
+        }
+
+        if (name.startsWith("org.apache.spark.sql.execution.datasources.sparder.batch.SparderBatchFileFormat")) {
+            return super.loadClass(name, resolve);
+        }
+
+        if (sparkClassLoader.classNeedPreempt(name)) {
+            return sparkClassLoader.loadClass(name);
+        }
+        if (isParentCLPrecedent(name)) {
+            logger.debug("Skipping exempt class " + name + " - delegating directly to parent");
+            return parent.loadClass(name);
+        }
+        return super.loadClass(name, resolve);
+    }
+
+    @Override
+    public InputStream getResourceAsStream(String name) {
+        if (sparkClassLoader.fileNeedPreempt(name)) {
+            return sparkClassLoader.getResourceAsStream(name);
+        }
+        return super.getResourceAsStream(name);
+
+    }
+
+    private boolean isParentCLPrecedent(String name) {
+        for (String exemptPrefix : PARENT_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isWontFind(String name) {
+        return wontFindClasses.contains(name);
+    }
+
+    private boolean isCodeGen(String name) {
+        for (String exemptPrefix : CODE_GEN_CLASS) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+}
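
The class loaders in this module share one delegation rule: names matching a fixed list of parent-precedent prefixes go straight to the parent loader, while everything else is resolved from the loader's own URLs first. A compact standalone sketch of that rule (the prefix list here is illustrative, not the one used above):

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PrefixDelegatingClassLoader extends URLClassLoader {
        private static final String[] PARENT_FIRST = { "java.", "javax.", "org.slf4j" };

        public PrefixDelegatingClassLoader(URL[] urls, ClassLoader parent) {
            super(urls, parent);
        }

        @Override
        public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            for (String prefix : PARENT_FIRST) {
                if (name.startsWith(prefix)) {
                    return getParent().loadClass(name);       // exempt classes go to the parent
                }
            }
            synchronized (getClassLoadingLock(name)) {
                Class<?> clazz = findLoadedClass(name);
                if (clazz == null) {
                    try {
                        clazz = findClass(name);              // child-first for everything else
                    } catch (ClassNotFoundException e) {
                        clazz = getParent().loadClass(name);  // fall back to the parent
                    }
                }
                if (resolve) {
                    resolveClass(clazz);
                }
                return clazz;
            }
        }
    }
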
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java
new file mode 100644
index 0000000..0590999
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItClassLoader.java
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLClassLoader;
+
+import static org.apache.kylin.ext.ClassLoaderUtils.findFile;
+
+public class ItClassLoader extends URLClassLoader {
+    private static final String[] PARENT_CL_PRECEDENT_CLASS = new String[] {
+            // Java standard library:
+            "com.sun.", "launcher.", "javax.", "org.ietf", "java", "org.omg", "org.w3c", "org.xml", "sunw.",
+            // logging
+            "org.slf4j", "org.apache.commons.logging", "org.apache.log4j", "sun", "org.apache.catalina",
+            "org.apache.tomcat", };
+    private static final String[] THIS_CL_PRECEDENT_CLASS = new String[] { "org.apache.kylin",
+            "org.apache.calcite" };
+    private static final String[] CODE_GEN_CLASS = new String[] { "org.apache.spark.sql.catalyst.expressions.Object" };
+    public static ItClassLoader defaultClassLoad = null;
+    private static Logger logger = LoggerFactory.getLogger(ItClassLoader.class);
+    public ItSparkClassLoader sparkClassLoader;
+    ClassLoader parent;
+
+    /**
+     * Creates an ItClassLoader that can load classes dynamically
+     * from jar files under a specific folder.
+     *
+     * @param parent the parent ClassLoader to set.
+     */
+    public ItClassLoader(ClassLoader parent) throws IOException {
+        super(((URLClassLoader) getSystemClassLoader()).getURLs());
+        this.parent = parent;
+        sparkClassLoader = new ItSparkClassLoader(this);
+        ClassLoaderUtils.setSparkClassLoader(sparkClassLoader);
+        ClassLoaderUtils.setOriginClassLoader(this);
+        defaultClassLoad = this;
+        init();
+    }
+
+    public void init() {
+
+        String classPath = System.getProperty("java.class.path");
+        if (classPath == null) {
+            throw new RuntimeException("");
+        }
+        String[] jars = classPath.split(File.pathSeparator);
+        for (String jar : jars) {
+            if (jar.contains("spark-")) {
+                continue;
+            }
+            try {
+                URL url = new File(jar).toURI().toURL();
+                addURL(url);
+            } catch (MalformedURLException e) {
+                e.printStackTrace();
+            }
+        }
+        String spark_home = System.getenv("SPARK_HOME");
+        try {
+            File sparkJar = findFile(spark_home + "/jars", "spark-yarn_.*.jar");
+            addURL(sparkJar.toURI().toURL());
+            addURL(new File("../examples/test_case_data/sandbox").toURI().toURL());
+        } catch (MalformedURLException e) {
+            e.printStackTrace();
+        }
+
+    }
+
+    @Override
+    public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+        if (isCodeGen(name)) {
+            throw new ClassNotFoundException();
+        }
+        if (name.startsWith("org.apache.kylin.ext")) {
+            return parent.loadClass(name);
+        }
+        if (isThisCLPrecedent(name)) {
+            synchronized (getClassLoadingLock(name)) {
+                // Check whether the class has already been loaded:
+                Class<?> clasz = findLoadedClass(name);
+                if (clasz != null) {
+                    logger.debug("Class " + name + " already loaded");
+                } else {
+                    try {
+                        // Try to find this class using the URLs passed to this ClassLoader
+                        logger.debug("Finding class: " + name);
+                        clasz = super.findClass(name);
+                    } catch (ClassNotFoundException e) {
+                        // Class not found using this ClassLoader, so delegate to parent
+                        logger.debug("Class " + name + " not found - delegating to parent");
+                        try {
+                            clasz = parent.loadClass(name);
+                        } catch (ClassNotFoundException e2) {
+                            // Class not found in this ClassLoader or in the parent ClassLoader
+                            // Log some debug output before re-throwing ClassNotFoundException
+                            logger.debug("Class " + name + " not found in parent loader");
+                            throw e2;
+                        }
+                    }
+                }
+                return clasz;
+            }
+        }
+        // Swap the order so that code-generated classes are loaded by the parent classloader
+        if (isParentCLPrecedent(name)) {
+            logger.debug("Skipping exempt class " + name + " - delegating directly to parent");
+            return parent.loadClass(name);
+        }
+        if (sparkClassLoader.classNeedPreempt(name)) {
+            return sparkClassLoader.loadClass(name);
+        }
+        return super.loadClass(name, resolve);
+    }
+
+    @Override
+    public InputStream getResourceAsStream(String name) {
+        if (sparkClassLoader.fileNeedPreempt(name)) {
+            return sparkClassLoader.getResourceAsStream(name);
+        }
+        return super.getResourceAsStream(name);
+
+    }
+
+    private boolean isParentCLPrecedent(String name) {
+        for (String exemptPrefix : PARENT_CL_PRECEDENT_CLASS) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isThisCLPrecedent(String name) {
+        for (String exemptPrefix : THIS_CL_PRECEDENT_CLASS) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isCodeGen(String name) {
+        for (String exemptPrefix : CODE_GEN_CLASS) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+}
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java
new file mode 100644
index 0000000..c69ef9c
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/ItSparkClassLoader.java
@@ -0,0 +1,189 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import com.google.common.collect.Sets;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.Set;
+
+import static org.apache.kylin.ext.ClassLoaderUtils.findFile;
+
+public class ItSparkClassLoader extends URLClassLoader {
+    private static final String[] SPARK_CL_PREEMPT_CLASSES = new String[] { "org.apache.spark", "scala.",
+            "org.spark_project"
+            //            "javax.ws.rs.core.Application",
+            //            "javax.ws.rs.core.UriBuilder", "org.glassfish.jersey", "javax.ws.rs.ext"
+            //user javax.ws.rs.api 2.01  not jersey-core-1.9.jar
+    };
+    private static final String[] SPARK_CL_PREEMPT_FILES = new String[] { "spark-version-info.properties",
+            "HiveClientImpl", "org/apache/spark" };
+
+    private static final String[] THIS_CL_PRECEDENT_CLASSES = new String[] { "javax.ws.rs", "org.apache.hadoop.hive" };
+
+    private static final String[] PARENT_CL_PRECEDENT_CLASSES = new String[] {
+            //            // Java standard library:
+            "com.sun.", "launcher.", "java.", "javax.", "org.ietf", "org.omg", "org.w3c", "org.xml", "sunw.", "sun.",
+            // logging
+            "org.apache.commons.logging", "org.apache.log4j", "com.hadoop", "org.slf4j",
+            // Hadoop/HBase/ZK:
+            "org.apache.hadoop", "org.apache.zookeeper", "org.apache.kylin", "com.intellij",
+            "org.apache.calcite", "org.roaringbitmap", "org.apache.parquet" };
+    private static final Set<String> classNotFoundCache = Sets.newHashSet();
+    private static Logger logger = LoggerFactory.getLogger(ItSparkClassLoader.class);
+
+    /**
+     * Creates an ItSparkClassLoader that can load classes dynamically
+     * from jar files under a specific folder.
+     *
+     * @param parent the parent ClassLoader to set.
+     */
+    protected ItSparkClassLoader(ClassLoader parent) throws IOException {
+        super(new URL[] {}, parent);
+        init();
+    }
+
+    public void init() throws MalformedURLException {
+        String spark_home = System.getenv("SPARK_HOME");
+        if (spark_home == null) {
+            spark_home = System.getProperty("SPARK_HOME");
+            if (spark_home == null) {
+                throw new RuntimeException(
+                        "Spark home not found; set it explicitly or use the SPARK_HOME environment variable.");
+            }
+        }
+        File file = new File(spark_home + "/jars");
+        File[] jars = file.listFiles();
+        for (File jar : jars) {
+            addURL(jar.toURI().toURL());
+        }
+        File sparkJar = findFile("../storage-parquet/target", "kylin-storage-parquet-.*-SNAPSHOT-spark.jar");
+
+        try {
+            // The sparder and query modules contain org.apache.spark classes; if their
+            // class directories are not added here, those classes would be looked up
+            // through the parent classloader first, which cannot see them, so the
+            // Spark classes would not be found.
+            // Why is this unnecessary for SparkClassLoader? DebugTomcatClassLoader and
+            // TomcatClassLoader search their own URLs first, so the Spark classes are
+            // found without this step.
+            addURL(new File("../engine-spark/target/classes").toURI().toURL());
+            addURL(new File("../engine-spark/target/test-classes").toURI().toURL());
+            addURL(new File("../storage-parquet/target/classes").toURI().toURL());
+            addURL(new File("../query/target/classes").toURI().toURL());
+            addURL(new File("../query/target/test-classes").toURI().toURL());
+            addURL(new File("../udf/target/classes").toURI().toURL());
+            System.setProperty("kylin.query.parquet-additional-jars", sparkJar.getCanonicalPath());
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+    }
+
+    @Override
+    public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+        if (needToUseGlobal(name)) {
+            logger.debug("Skipping exempt class " + name + " - delegating directly to parent");
+            try {
+                return getParent().loadClass(name);
+            } catch (ClassNotFoundException e) {
+                return super.findClass(name);
+            }
+        }
+
+        synchronized (getClassLoadingLock(name)) {
+            // Check whether the class has already been loaded:
+            Class<?> clasz = findLoadedClass(name);
+            if (clasz != null) {
+                logger.debug("Class " + name + " already loaded");
+            } else {
+                try {
+                    // Try to find this class using the URLs passed to this ClassLoader
+                    logger.debug("Finding class: " + name);
+                    clasz = super.findClass(name);
+                    if (clasz == null) {
+                        logger.debug("cannot find class " + name);
+                    }
+                } catch (ClassNotFoundException e) {
+                    classNotFoundCache.add(name);
+                    // Class not found using this ClassLoader, so delegate to parent
+                    logger.debug("Class " + name + " not found - delegating to parent");
+                    try {
+                        clasz = getParent().loadClass(name);
+                    } catch (ClassNotFoundException e2) {
+                        // Class not found in this ClassLoader or in the parent ClassLoader
+                        // Log some debug output before re-throwing ClassNotFoundException
+                        logger.debug("Class " + name + " not found in parent loader");
+                        throw e2;
+                    }
+                }
+            }
+            return clasz;
+        }
+    }
+
+    private boolean isThisCLPrecedent(String name) {
+        for (String exemptPrefix : THIS_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isParentCLPrecedent(String name) {
+        for (String exemptPrefix : PARENT_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean needToUseGlobal(String name) {
+        return !isThisCLPrecedent(name) && !classNeedPreempt(name) && isParentCLPrecedent(name);
+    }
+
+    boolean classNeedPreempt(String name) {
+        if (classNotFoundCache.contains(name)) {
+            return false;
+        }
+        for (String exemptPrefix : SPARK_CL_PREEMPT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    boolean fileNeedPreempt(String name) {
+
+        for (String exemptPrefix : SPARK_CL_PREEMPT_FILES) {
+            if (name.contains(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+}
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
new file mode 100644
index 0000000..dba782b
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/SparkClassLoader.java
@@ -0,0 +1,236 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import org.apache.commons.lang.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.security.AccessController;
+import java.security.PrivilegedAction;
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.kylin.ext.ClassLoaderUtils.findFile;
+
+public class SparkClassLoader extends URLClassLoader {
+    // preempt these classes from the parent classloader
+    private static String[] SPARK_CL_PREEMPT_CLASSES = new String[] { "org.apache.spark", "scala.",
+            "org.spark_project" };
+
+    // preempt these resource files from the parent classloader
+    private static String[] SPARK_CL_PREEMPT_FILES = new String[] { "spark-version-info.properties", "HiveClientImpl",
+            "org/apache/spark" };
+
+    // when loading classes indirectly required by SPARK_CL_PREEMPT_CLASSES, some of them should NOT be looked up in the parent first
+    private static String[] THIS_CL_PRECEDENT_CLASSES = new String[] { "javax.ws.rs", "org.apache.hadoop.hive" };
+
+    // when loading classes indirectly required by SPARK_CL_PREEMPT_CLASSES, some of them should be looked up in the parent first
+    private static String[] PARENT_CL_PRECEDENT_CLASSES = new String[] {
+            // Java standard library:
+            "com.sun.", "launcher.", "java.", "javax.", "org.ietf", "org.omg", "org.w3c", "org.xml", "sunw.", "sun.",
+            // logging and Hadoop:
+            "org.apache.commons.logging", "org.apache.log4j", "org.slf4j", "org.apache.hadoop",
+            // Kylin, IntelliJ and Calcite:
+            "org.apache.kylin", "com.intellij", "org.apache.calcite" };
+
+    private static final Set<String> classNotFoundCache = new HashSet<>();
+    private static Logger logger = LoggerFactory.getLogger(SparkClassLoader.class);
+
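+    // the prefix lists above can be overridden at startup through the
+    // SPARKCLASSLOADER_* environment variables (comma-separated values)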
+    static {
+        String sparkclassloader_spark_cl_preempt_classes = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_CLASSES");
+        if (!StringUtils.isEmpty(sparkclassloader_spark_cl_preempt_classes)) {
+            SPARK_CL_PREEMPT_CLASSES = StringUtils.split(sparkclassloader_spark_cl_preempt_classes, ",");
+        }
+
+        String sparkclassloader_spark_cl_preempt_files = System.getenv("SPARKCLASSLOADER_SPARK_CL_PREEMPT_FILES");
+        if (!StringUtils.isEmpty(sparkclassloader_spark_cl_preempt_files)) {
+            SPARK_CL_PREEMPT_FILES = StringUtils.split(sparkclassloader_spark_cl_preempt_files, ",");
+        }
+
+        String sparkclassloader_this_cl_precedent_classes = System.getenv("SPARKCLASSLOADER_THIS_CL_PRECEDENT_CLASSES");
+        if (!StringUtils.isEmpty(sparkclassloader_this_cl_precedent_classes)) {
+            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloader_this_cl_precedent_classes, ",");
+        }
+
+        String sparkclassloader_parent_cl_precedent_classes = System
+                .getenv("SPARKCLASSLOADER_PARENT_CL_PRECEDENT_CLASSES");
+        if (!StringUtils.isEmpty(sparkclassloader_parent_cl_precedent_classes)) {
+            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(sparkclassloader_parent_cl_precedent_classes, ",");
+        }
+
+        try {
+            final Method registerParallel = ClassLoader.class.getDeclaredMethod("registerAsParallelCapable");
+            AccessController.doPrivileged(new PrivilegedAction<Object>() {
+                public Object run() {
+                    registerParallel.setAccessible(true);
+                    return null;
+                }
+            });
+            Boolean result = (Boolean) registerParallel.invoke(null);
+            if (!result) {
+                logger.warn("Failed to register SparkClassLoader as parallel capable");
+            }
+        } catch (Exception ignore) {
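+            // best effort: if registration as a parallel-capable loader fails, continue anyway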
+
+        }
+    }
+
+    /**
+     * Creates a SparkClassLoader that loads classes dynamically
+     * from the jar files under $SPARK_HOME/jars.
+     *
+     * @param parent the parent ClassLoader to set.
+     */
+    protected SparkClassLoader(ClassLoader parent) throws IOException {
+        super(new URL[] {}, parent);
+        init();
+    }
+
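+    // registers every jar under $SPARK_HOME/jars with this loader, plus the Kylin UDF jar
+    // from $KYLIN_HOME/lib (or udf/target/classes when KYLIN_HOME is not set)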
+    public void init() throws MalformedURLException {
+        String spark_home = System.getenv("SPARK_HOME");
+        if (spark_home == null) {
+            spark_home = System.getProperty("SPARK_HOME");
+            if (spark_home == null) {
+                throw new RuntimeException(
+                        "Spark home not found; set the SPARK_HOME environment variable or system property.");
+            }
+        }
+        File file = new File(spark_home + "/jars");
+        File[] jars = file.listFiles();
+        if (jars == null) {
+            throw new RuntimeException("No jar files found under " + file.getAbsolutePath());
+        }
+        for (File jar : jars) {
+            addURL(jar.toURI().toURL());
+        }
+        if (System.getenv("KYLIN_HOME") != null) {
+            // for prod
+            String kylin_home = System.getenv("KYLIN_HOME");
+            File sparkJar = findFile(kylin_home + "/lib", "kylin-udf-.*.jar");
+            if (sparkJar != null) {
+                logger.info("Adding kylin UDF jar to Spark classloader: " + sparkJar.getName());
+                addURL(sparkJar.toURI().toURL());
+            } else {
+                logger.warn("Cannot find kylin UDF jar; please set KYLIN_HOME and make sure kylin-udf-*.jar exists in $KYLIN_HOME/lib");
+            }
+        } else if (Files.exists(Paths.get("../udf/target/classes"))) {
+            // for debugging under Tomcat: load the UDF classes from the local build output
+            logger.info("Adding kylin UDF classes to Spark classloader");
+            addURL(new File("../udf/target/classes").toURI().toURL());
+        }
+
+    }
+
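+    // parent-precedent classes go through the default parent-first lookup; everything else
+    // is resolved by this loader first and falls back to the parent on failure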
+    @Override
+    public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+
+        if (needToUseGlobal(name)) {
+            logger.debug("delegate " + name + " directly to parent");
+            return super.loadClass(name, resolve);
+        }
+        return doLoadclass(name);
+    }
+
+    private Class<?> doLoadclass(String name) throws ClassNotFoundException {
+        synchronized (getClassLoadingLock(name)) {
+            // Check whether the class has already been loaded:
+            Class<?> clasz = findLoadedClass(name);
+            if (clasz != null) {
+                logger.debug("Class " + name + " already loaded");
+            } else {
+                try {
+                    // Try to find this class using the URLs passed to this ClassLoader
+                    logger.debug("Finding class: " + name);
+                    clasz = super.findClass(name);
+                    if (clasz == null) {
+                        logger.debug("Cannot find class " + name);
+                    }
+                } catch (ClassNotFoundException e) {
+                    classNotFoundCache.add(name);
+                    // Class not found using this ClassLoader, so delegate to parent
+                    logger.debug("Class " + name + " not found - delegating to parent");
+                    try {
+                        // The sparder and query modules have classes under org.apache.spark that
+                        // rely on libraries not shipped in spark/jars, so fall back to the parent.
+                        clasz = getParent().loadClass(name);
+                    } catch (ClassNotFoundException e2) {
+                        // Class not found in this ClassLoader or in the parent ClassLoader
+                        // Log some debug output before re-throwing ClassNotFoundException
+                        logger.debug("Class " + name + " not found in parent loader");
+                        throw e2;
+                    }
+                }
+            }
+            return clasz;
+        }
+    }
+
+    private boolean isThisCLPrecedent(String name) {
+        for (String exemptPrefix : THIS_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isParentCLPrecedent(String name) {
+        for (String exemptPrefix : PARENT_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
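+    // SparderBatchFileFormat must always use the standard parent-first lookup so that
+    // only one copy of the class exists across loaders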
+    private boolean needToUseGlobal(String name) {
+        if (name.startsWith("org.apache.spark.sql.execution.datasources.sparder.batch.SparderBatchFileFormat")) {
+            return true;
+        }
+
+        return !isThisCLPrecedent(name) && !classNeedPreempt(name) && isParentCLPrecedent(name);
+    }
+
+    boolean classNeedPreempt(String name) {
+        if (classNotFoundCache.contains(name)) {
+            return false;
+        }
+        for (String exemptPrefix : SPARK_CL_PREEMPT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    boolean fileNeedPreempt(String name) {
+        for (String exemptPrefix : SPARK_CL_PREEMPT_FILES) {
+            if (name.contains(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+}
diff --git a/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java b/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java
new file mode 100644
index 0000000..89717ec
--- /dev/null
+++ b/tomcat-ext/src/main/java/org/apache/kylin/ext/TomcatClassLoader.java
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kylin.ext;
+
+import org.apache.catalina.loader.ParallelWebappClassLoader;
+import org.apache.commons.lang.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.MalformedURLException;
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.kylin.ext.ClassLoaderUtils.findFile;
+
+public class TomcatClassLoader extends ParallelWebappClassLoader {
+    private static String[] PARENT_CL_PRECEDENT_CLASSES = new String[] {
+            // Java standard library:
+            "com.sun.", "launcher.", "javax.", "org.ietf", "java", "org.omg", "org.w3c", "org.xml", "sunw.",
+            // logging and Tomcat:
+            "org.slf4j", "org.apache.commons.logging", "org.apache.log4j", "org.apache.catalina", "org.apache.tomcat" };
+
+    private static String[] THIS_CL_PRECEDENT_CLASSES = new String[] { "org.apache.kylin",
+            "org.apache.calcite" };
+
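+    // prefixes of classes generated at runtime by Spark/Calcite codegen; lookups for them
+    // fail fast here so the generating classloader can define them itself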
+    private static String[] CODEGEN_CLASSES = new String[] { "org.apache.spark.sql.catalyst.expressions.Object",
+            "Baz" };
+
+    private static final Set<String> wontFindClasses = new HashSet<>();
+
+    static {
+        String tomcatclassloader_parent_cl_precedent_classes = System
+                .getenv("TOMCATCLASSLOADER_PARENT_CL_PRECEDENT_CLASSES");
+        if (!StringUtils.isEmpty(tomcatclassloader_parent_cl_precedent_classes)) {
+            PARENT_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloader_parent_cl_precedent_classes, ",");
+        }
+
+        String tomcatclassloader_this_cl_precedent_classes = System
+                .getenv("TOMCATCLASSLOADER_THIS_CL_PRECEDENT_CLASSES");
+        if (!StringUtils.isEmpty(tomcatclassloader_this_cl_precedent_classes)) {
+            THIS_CL_PRECEDENT_CLASSES = StringUtils.split(tomcatclassloader_this_cl_precedent_classes, ",");
+        }
+
+        String tomcatclassloader_codegen_classes = System.getenv("TOMCATCLASSLOADER_CODEGEN_CLASSES");
+        if (!StringUtils.isEmpty(tomcatclassloader_codegen_classes)) {
+            CODEGEN_CLASSES = StringUtils.split(tomcatclassloader_codegen_classes, ",");
+        }
+
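+        // bogus names that Calcite probes while compiling generated code; fail fast for these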
+        wontFindClasses.add("Class");
+        wontFindClasses.add("Object");
+        wontFindClasses.add("org");
+        wontFindClasses.add("java.lang.org");
+        wontFindClasses.add("java.lang$org");
+        wontFindClasses.add("java$lang$org");
+        wontFindClasses.add("org.apache");
+        wontFindClasses.add("org.apache.calcite");
+        wontFindClasses.add("org.apache.calcite.runtime");
+        wontFindClasses.add("org.apache.calcite.linq4j");
+        wontFindClasses.add("Long");
+        wontFindClasses.add("String");
+    }
+
+    public static TomcatClassLoader defaultClassLoad = null;
+    private static Logger logger = LoggerFactory.getLogger(TomcatClassLoader.class);
+    public SparkClassLoader sparkClassLoader;
+
+    /**
+     * Creates a TomcatClassLoader for the Kylin webapp; it also creates the
+     * SparkClassLoader used to isolate Spark classes.
+     *
+     * @param parent the parent ClassLoader to set.
+     */
+    public TomcatClassLoader(ClassLoader parent) throws IOException {
+        super(parent);
+        sparkClassLoader = new SparkClassLoader(this);
+        ClassLoaderUtils.setSparkClassLoader(sparkClassLoader);
+        ClassLoaderUtils.setOriginClassLoader(this);
+        defaultClassLoad = this;
+        init();
+    }
+
+    public void init() {
+        String spark_home = System.getenv("SPARK_HOME");
+        try {
+            // SparkContext uses SPI to find the deploy mode; without spark-yarn on the
+            // classpath its initialization fails because the yarn deploy mode cannot be found
+            File yarnJar = findFile(spark_home + "/jars", "spark-yarn.*.jar");
+            addURL(yarnJar.toURI().toURL());
+            // Jersey inside Spark looks up @Path class files in the current classloader;
+            // this cannot be delegated to the Spark loader, otherwise the Spark web UI
+            // executors tab cannot render
+            File coreJar = findFile(spark_home + "/jars", "spark-core.*.jar");
+            addURL(coreJar.toURI().toURL());
+        } catch (MalformedURLException e) {
+            logger.error("Failed to add Spark jars to the Tomcat classloader", e);
+        }
+
+    }
+
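+    // lookup order: reject bogus/codegen names, load org.apache.kylin.ext from the parent,
+    // keep SparderBatchFileFormat in the normal webapp lookup, let the Spark loader preempt
+    // Spark classes, then apply the parent/this precedence lists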
+    @Override
+    public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+        // when Calcite compiles classes, it probes some bogus class names that are not worth looking up
+        if (isWontFind(name)) {
+            throw new ClassNotFoundException();
+        }
+        // the Spark codegen classloader's parent is Thread.currentThread().getContextClassLoader(),
+        // and the Calcite "Baz" classloader is EnumerableInterpretable.class's classloader,
+        // so generated classes must not be resolved here
+        if (isCodeGen(name)) {
+            throw new ClassNotFoundException();
+        }
+        // the classloader classes themselves must come from the global (parent) classloader
+        if (name.startsWith("org.apache.kylin.ext")) {
+            return parent.loadClass(name);
+        }
+
+        if (name.startsWith("org.apache.spark.sql.execution.datasources.sparder.batch.SparderBatchFileFormat")) {
+            return super.loadClass(name, resolve);
+        }
+
+        // if the Spark classloader preempts this class, let it do the loading
+        if (sparkClassLoader.classNeedPreempt(name)) {
+            return sparkClassLoader.loadClass(name);
+        }
+        // the Tomcat classpath includes KAP_HOME/lib; ensure this classloader can load KAP classes
+        if (isParentCLPrecedent(name) && !isThisCLPrecedent(name)) {
+            logger.debug("delegate " + name + " directly to parent");
+            return parent.loadClass(name);
+        }
+        return super.loadClass(name, resolve);
+    }
+
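+    // resources preempted by the Spark loader (e.g. spark-version-info.properties) are
+    // read from the Spark jars instead of the webapp classpath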
+    @Override
+    public InputStream getResourceAsStream(String name) {
+        if (sparkClassLoader.fileNeedPreempt(name)) {
+            return sparkClassLoader.getResourceAsStream(name);
+        }
+        return super.getResourceAsStream(name);
+
+    }
+
+    private boolean isParentCLPrecedent(String name) {
+        for (String exemptPrefix : PARENT_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isThisCLPrecedent(String name) {
+        for (String exemptPrefix : THIS_CL_PRECEDENT_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private boolean isWontFind(String name) {
+        return wontFindClasses.contains(name);
+    }
+
+    private boolean isCodeGen(String name) {
+        for (String exemptPrefix : CODEGEN_CLASSES) {
+            if (name.startsWith(exemptPrefix)) {
+                return true;
+            }
+        }
+        return false;
+    }
+}
diff --git a/tool/pom.xml b/tool/pom.xml
index 958bd55..1b3e305 100644
--- a/tool/pom.xml
+++ b/tool/pom.xml
@@ -47,6 +47,10 @@
         </dependency>
         <dependency>
             <groupId>org.apache.kylin</groupId>
+            <artifactId>kylin-storage-parquet</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kylin</groupId>
             <artifactId>kylin-engine-mr</artifactId>
         </dependency>
 
diff --git a/webapp/app/META-INF/context.xml b/webapp/app/META-INF/context.xml
new file mode 100644
index 0000000..0ad90dc
--- /dev/null
+++ b/webapp/app/META-INF/context.xml
@@ -0,0 +1,38 @@
+<?xml version='1.0' encoding='utf-8'?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~  
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~  
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<!-- The contents of this file will be loaded for each web application -->
+<Context allowLinking="true">
+
+    <!-- Default set of monitored resources -->
+    <WatchedResource>WEB-INF/web.xml</WatchedResource>
+
+    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
+    <!--
+    <Manager pathname="" />
+    -->
+
+    <!-- Uncomment this to enable Comet connection tracking (provides events
+         on session expiration as well as webapp lifecycle) -->
+    <!--
+    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
+    -->
+
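+    <!-- Use Kylin's custom webapp class loader (org.apache.kylin.ext, tomcat-ext module)
+         so that Spark classes can be isolated from the webapp's own classpath -->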
+    <Loader loaderClass="org.apache.kylin.ext.DebugTomcatClassLoader"/>
+
+</Context>