Posted to commits@kylin.apache.org by xx...@apache.org on 2020/11/08 08:13:00 UTC

[kylin] branch kylin-on-parquet-v2 updated (1d43e0a -> 15866d1)

This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a change to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git.


    from 1d43e0a  KYLIN-4800 Add canary tool for sparder-context
     new 69ac9ce  KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin
     new c232388  KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin
     new 823059d  KYLIN-4775 Fix
     new 8a96a4b  Fix KYLIN_CONF
     new e6f579e  KYLIN-4801 Kylin continuous integration testing
     new 4dd747a  KYLIN-4778 package and release by docker image
     new 768a6d6  KYLIN-4775 Refactor & Fix HADOOP_CONF_DIR
     new 9679252  KYLIN-4778 Package and release in docker container
     new 5f16a06  KYLIN-4775 Fix 0.0.0.0:10020 ConnectionRefused
     new 49715c0  KYLIN-4801 Some format specification fix and clean up
     new dc073f4  KYLIN-4775 Update docker/README-cluster.md
     new 97cebbd  KYLIN-4801 Clean up and add test sql
     new 15866d1  KYLIN-4775 Minor fix

The 13 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 KylinTesting.zip                                   | Bin 0 -> 116536 bytes
 .../env/default/default.properties                 |  42 +
 .../env/default/python.properties                  |  13 +-
 .../features/specs/query/query.spec                |   6 +
 .../features/step_impl/before_suite.py             |  61 ++
 .../features/step_impl/generic_test_step.py        | 122 +++
 .../features/step_impl/query/query.py              |  43 ++
 .../kylin_instances/kylin_instance.yml             |  18 +-
 build/CI/kylin-system-testing/kylin_utils/basic.py | 107 +++
 .../CI/kylin-system-testing/kylin_utils/equals.py  | 229 ++++++
 build/CI/kylin-system-testing/kylin_utils/kylin.py | 843 +++++++++++++++++++++
 build/CI/kylin-system-testing/kylin_utils/shell.py | 142 ++++
 build/CI/kylin-system-testing/kylin_utils/util.py  |  81 ++
 build/CI/kylin-system-testing/manifest.json        |   6 +
 .../generic_desc_data/generic_desc_data_3x.json    | 682 +++++++++++++++++
 .../generic_desc_data/generic_desc_data_4x.json    | 625 +++++++++++++++
 .../query/sql/sql_test/sql1.sql                    |  26 +
 .../query/sql_result/sql_test/sql1.json            |   8 +
 build/CI/kylin-system-testing/requirements.txt     |   6 +
 build/CI/run-ci.sh                                 | 131 ++++
 .../kylin/common/util/SourceConfigurationUtil.java |  10 +-
 dev-support/build-release/Dockerfile               |  54 ++
 .../build-release/conf}/settings.xml               |   0
 .../build-release/packaging.sh                     |  33 +-
 dev-support/build-release/script/build_release.sh  | 171 +++++
 .../build-release/script/entrypoint.sh             |  17 +-
 docker/.gitignore                                  |   6 +
 docker/Dockerfile_hadoop                           |  96 ---
 docker/README-cluster.md                           | 178 +++++
 docker/{README.md => README-standalone.md}         |  36 +-
 docker/README.md                                   | 143 +---
 docker/build_cluster_images.sh                     |  58 ++
 .../{build_image.sh => build_standalone_image.sh}  |   0
 docker/docker-compose/others/client-write-read.env |  61 ++
 docker/docker-compose/others/client-write.env      |  61 ++
 .../others/docker-compose-kerberos.yml             |  13 +
 .../others/docker-compose-kylin-write-read.yml     |  51 ++
 .../others/docker-compose-kylin-write.yml          |  57 ++
 .../others/docker-compose-metastore.yml            |  22 +
 docker/docker-compose/others/kylin/README.md       |   2 +
 .../read/conf/hadoop-read}/core-site.xml           |  14 +-
 .../read/conf/hadoop-read/hdfs-site.xml            |  31 +
 .../read/conf/hadoop-read/mapred-site.xml}         |  13 +-
 .../read/conf/hadoop-read/yarn-site.xml            |  46 ++
 .../read}/conf/hadoop/core-site.xml                |  15 +-
 .../docker-compose/read/conf/hadoop/hdfs-site.xml  |  31 +
 .../read/conf/hadoop/mapred-site.xml}              |  13 +-
 .../docker-compose/read/conf/hadoop/yarn-site.xml  |  46 ++
 .../docker-compose/read/conf/hbase/hbase-site.xml  |  34 +
 docker/docker-compose/read/conf/hive/hive-site.xml |  25 +
 .../docker-compose/read/docker-compose-hadoop.yml  | 129 ++++
 .../docker-compose/read/docker-compose-hbase.yml   |  43 ++
 .../read/docker-compose-zookeeper.yml              |  18 +
 docker/docker-compose/read/read-hadoop.env         |  40 +
 .../read/read-hbase-distributed-local.env          |  12 +
 .../write/conf/hadoop-read}/core-site.xml          |  14 +-
 .../write/conf/hadoop-read/hdfs-site.xml           |  31 +
 .../write/conf/hadoop-read/mapred-site.xml}        |  13 +-
 .../write/conf/hadoop-read/yarn-site.xml           |  46 ++
 .../write/conf/hadoop-write}/core-site.xml         |  14 +-
 .../write/conf/hadoop-write/hdfs-site.xml          |  31 +
 .../write/conf/hadoop-write/mapred-site.xml}       |  13 +-
 .../write/conf/hadoop-write/yarn-site.xml          |  46 ++
 .../write}/conf/hadoop/core-site.xml               |  15 +-
 .../docker-compose/write/conf/hadoop/hdfs-site.xml |  31 +
 .../write/conf/hadoop/mapred-site.xml}             |  13 +-
 .../docker-compose/write/conf/hadoop/yarn-site.xml |  46 ++
 .../docker-compose/write/conf/hbase/hbase-site.xml |  34 +
 .../docker-compose/write/conf/hive/hive-site.xml   |  27 +
 .../docker-compose/write/docker-compose-hadoop.yml | 133 ++++
 .../docker-compose/write/docker-compose-hbase.yml  |  43 ++
 .../docker-compose/write/docker-compose-hive.yml   |  37 +
 .../docker-compose/write/docker-compose-kafka.yml  |  18 +
 .../write/docker-compose-zookeeper.yml             |  18 +
 docker/docker-compose/write/write-hadoop.env       |  51 ++
 .../write/write-hbase-distributed-local.env        |  12 +
 docker/dockerfile/cluster/base/Dockerfile          |  67 ++
 docker/dockerfile/cluster/base/entrypoint.sh       | 140 ++++
 docker/dockerfile/cluster/client/Dockerfile        | 161 ++++
 .../cluster/client/conf/hadoop-read}/core-site.xml |  14 +-
 .../cluster/client/conf/hadoop-read/hdfs-site.xml  |  31 +
 .../client/conf/hadoop-read/mapred-site.xml}       |  13 +-
 .../cluster/client/conf/hadoop-read/yarn-site.xml  |  46 ++
 .../client/conf/hadoop-write}/core-site.xml        |  14 +-
 .../cluster/client/conf/hadoop-write/hdfs-site.xml |  31 +
 .../client/conf/hadoop-write/mapred-site.xml}      |  13 +-
 .../cluster/client/conf/hadoop-write/yarn-site.xml |  47 ++
 .../cluster/client/conf/hbase/hbase-site.xml       |  34 +
 .../cluster/client/conf/hive/hive-site.xml         |  25 +
 docker/dockerfile/cluster/client/entrypoint.sh     |   3 +
 docker/dockerfile/cluster/client/run_cli.sh        |  21 +
 .../cluster/datanode/Dockerfile}                   |  20 +-
 .../cluster/datanode/run_dn.sh}                    |  18 +-
 docker/dockerfile/cluster/hbase/Dockerfile         |  60 ++
 docker/dockerfile/cluster/hbase/entrypoint.sh      |  83 ++
 .../cluster/historyserver/Dockerfile}              |  24 +-
 .../cluster/historyserver/run_history.sh}          |  17 +-
 docker/dockerfile/cluster/hive/Dockerfile          |  75 ++
 .../cluster/hive/conf/beeline-log4j2.properties    |  46 ++
 docker/dockerfile/cluster/hive/conf/hive-env.sh    |  55 ++
 .../cluster/hive/conf/hive-exec-log4j2.properties  |  67 ++
 .../cluster/hive/conf/hive-log4j2.properties       |  74 ++
 docker/dockerfile/cluster/hive/conf/hive-site.xml  |  19 +
 .../dockerfile/cluster/hive/conf/ivysettings.xml   |  44 ++
 .../hive/conf/llap-daemon-log4j2.properties        |  94 +++
 docker/dockerfile/cluster/hive/entrypoint.sh       | 136 ++++
 .../cluster/hive/run_hv.sh}                        |  22 +-
 docker/dockerfile/cluster/hmaster/Dockerfile       |  13 +
 .../cluster/hmaster/run_hm.sh}                     |  12 +-
 docker/dockerfile/cluster/hregionserver/Dockerfile |  12 +
 .../cluster/hregionserver/run_hr.sh}               |  12 +-
 .../cluster/kerberos/Dockerfile}                   |  24 +-
 docker/dockerfile/cluster/kerberos/conf/kadm5.acl  |   1 +
 .../cluster/kerberos/conf/kdc.conf}                |  20 +-
 .../cluster/kerberos/conf/krb5.conf}               |  32 +-
 .../cluster/kerberos/run_krb.sh}                   |  18 +-
 .../cluster/kylin/Dockerfile}                      |  17 +-
 docker/dockerfile/cluster/kylin/entrypoint.sh      |   3 +
 docker/dockerfile/cluster/metastore-db/Dockerfile  |  12 +
 docker/dockerfile/cluster/metastore-db/run_db.sh   |  15 +
 .../cluster/namenode/Dockerfile}                   |  25 +-
 .../cluster/namenode/run_nn.sh}                    |  23 +-
 .../cluster/nodemanager/Dockerfile}                |  21 +-
 .../cluster/nodemanager/run_nm.sh}                 |  12 +-
 docker/dockerfile/cluster/pom.xml                  |  81 ++
 .../cluster/resourcemanager/Dockerfile}            |  21 +-
 .../cluster/resourcemanager/run_rm.sh}             |  12 +-
 docker/{ => dockerfile/standalone}/Dockerfile      |   0
 .../standalone}/conf/hadoop/core-site.xml          |   0
 .../standalone}/conf/hadoop/hdfs-site.xml          |   0
 .../standalone}/conf/hadoop/mapred-site.xml        |   0
 .../standalone}/conf/hadoop/yarn-site.xml          |   0
 .../standalone}/conf/hive/hive-site.xml            |   0
 .../standalone}/conf/maven/settings.xml            |   0
 docker/{ => dockerfile/standalone}/entrypoint.sh   |   0
 docker/header.sh                                   | 140 ++++
 docker/setup_hadoop_cluster.sh                     |  82 ++
 docker/{build_image.sh => setup_service.sh}        |  25 +-
 docker/{run_container.sh => setup_standalone.sh}   |   0
 docker/stop_cluster.sh                             |  47 ++
 pom.xml                                            |   1 +
 .../apache/kylin/rest/response/SQLResponse.java    |  10 +
 .../apache/kylin/rest/service/QueryService.java    |  12 +
 .../kylin/rest/response/SQLResponseTest.java       |   2 +-
 144 files changed, 7094 insertions(+), 581 deletions(-)
 create mode 100644 KylinTesting.zip
 create mode 100644 build/CI/kylin-system-testing/env/default/default.properties
 copy docker/run_container.sh => build/CI/kylin-system-testing/env/default/python.properties (82%)
 mode change 100755 => 100644
 create mode 100644 build/CI/kylin-system-testing/features/specs/query/query.spec
 create mode 100644 build/CI/kylin-system-testing/features/step_impl/before_suite.py
 create mode 100644 build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
 create mode 100644 build/CI/kylin-system-testing/features/step_impl/query/query.py
 copy docker/run_container.sh => build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml (82%)
 mode change 100755 => 100644
 create mode 100644 build/CI/kylin-system-testing/kylin_utils/basic.py
 create mode 100644 build/CI/kylin-system-testing/kylin_utils/equals.py
 create mode 100644 build/CI/kylin-system-testing/kylin_utils/kylin.py
 create mode 100644 build/CI/kylin-system-testing/kylin_utils/shell.py
 create mode 100644 build/CI/kylin-system-testing/kylin_utils/util.py
 create mode 100644 build/CI/kylin-system-testing/manifest.json
 create mode 100644 build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_3x.json
 create mode 100644 build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_4x.json
 create mode 100644 build/CI/kylin-system-testing/query/sql/sql_test/sql1.sql
 create mode 100644 build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
 create mode 100644 build/CI/kylin-system-testing/requirements.txt
 create mode 100644 build/CI/run-ci.sh
 create mode 100644 dev-support/build-release/Dockerfile
 copy {docker/conf/maven => dev-support/build-release/conf}/settings.xml (100%)
 copy docker/build_image.sh => dev-support/build-release/packaging.sh (50%)
 mode change 100755 => 100644
 create mode 100644 dev-support/build-release/script/build_release.sh
 copy docker/run_container.sh => dev-support/build-release/script/entrypoint.sh (82%)
 mode change 100755 => 100644
 create mode 100644 docker/.gitignore
 delete mode 100644 docker/Dockerfile_hadoop
 create mode 100644 docker/README-cluster.md
 copy docker/{README.md => README-standalone.md} (80%)
 create mode 100644 docker/build_cluster_images.sh
 copy docker/{build_image.sh => build_standalone_image.sh} (100%)
 create mode 100644 docker/docker-compose/others/client-write-read.env
 create mode 100644 docker/docker-compose/others/client-write.env
 create mode 100644 docker/docker-compose/others/docker-compose-kerberos.yml
 create mode 100644 docker/docker-compose/others/docker-compose-kylin-write-read.yml
 create mode 100644 docker/docker-compose/others/docker-compose-kylin-write.yml
 create mode 100644 docker/docker-compose/others/docker-compose-metastore.yml
 create mode 100644 docker/docker-compose/others/kylin/README.md
 copy docker/{conf/hadoop => docker-compose/read/conf/hadoop-read}/core-site.xml (63%)
 create mode 100644 docker/docker-compose/read/conf/hadoop-read/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => docker-compose/read/conf/hadoop-read/mapred-site.xml} (69%)
 create mode 100644 docker/docker-compose/read/conf/hadoop-read/yarn-site.xml
 copy docker/{ => docker-compose/read}/conf/hadoop/core-site.xml (63%)
 create mode 100644 docker/docker-compose/read/conf/hadoop/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => docker-compose/read/conf/hadoop/mapred-site.xml} (69%)
 create mode 100644 docker/docker-compose/read/conf/hadoop/yarn-site.xml
 create mode 100644 docker/docker-compose/read/conf/hbase/hbase-site.xml
 create mode 100644 docker/docker-compose/read/conf/hive/hive-site.xml
 create mode 100644 docker/docker-compose/read/docker-compose-hadoop.yml
 create mode 100644 docker/docker-compose/read/docker-compose-hbase.yml
 create mode 100644 docker/docker-compose/read/docker-compose-zookeeper.yml
 create mode 100644 docker/docker-compose/read/read-hadoop.env
 create mode 100644 docker/docker-compose/read/read-hbase-distributed-local.env
 copy docker/{conf/hadoop => docker-compose/write/conf/hadoop-read}/core-site.xml (63%)
 create mode 100644 docker/docker-compose/write/conf/hadoop-read/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => docker-compose/write/conf/hadoop-read/mapred-site.xml} (69%)
 create mode 100644 docker/docker-compose/write/conf/hadoop-read/yarn-site.xml
 copy docker/{conf/hadoop => docker-compose/write/conf/hadoop-write}/core-site.xml (63%)
 create mode 100644 docker/docker-compose/write/conf/hadoop-write/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => docker-compose/write/conf/hadoop-write/mapred-site.xml} (69%)
 create mode 100644 docker/docker-compose/write/conf/hadoop-write/yarn-site.xml
 copy docker/{ => docker-compose/write}/conf/hadoop/core-site.xml (63%)
 create mode 100644 docker/docker-compose/write/conf/hadoop/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => docker-compose/write/conf/hadoop/mapred-site.xml} (69%)
 create mode 100644 docker/docker-compose/write/conf/hadoop/yarn-site.xml
 create mode 100644 docker/docker-compose/write/conf/hbase/hbase-site.xml
 create mode 100644 docker/docker-compose/write/conf/hive/hive-site.xml
 create mode 100644 docker/docker-compose/write/docker-compose-hadoop.yml
 create mode 100644 docker/docker-compose/write/docker-compose-hbase.yml
 create mode 100644 docker/docker-compose/write/docker-compose-hive.yml
 create mode 100644 docker/docker-compose/write/docker-compose-kafka.yml
 create mode 100644 docker/docker-compose/write/docker-compose-zookeeper.yml
 create mode 100644 docker/docker-compose/write/write-hadoop.env
 create mode 100644 docker/docker-compose/write/write-hbase-distributed-local.env
 create mode 100644 docker/dockerfile/cluster/base/Dockerfile
 create mode 100644 docker/dockerfile/cluster/base/entrypoint.sh
 create mode 100644 docker/dockerfile/cluster/client/Dockerfile
 copy docker/{conf/hadoop => dockerfile/cluster/client/conf/hadoop-read}/core-site.xml (63%)
 create mode 100644 docker/dockerfile/cluster/client/conf/hadoop-read/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => dockerfile/cluster/client/conf/hadoop-read/mapred-site.xml} (69%)
 create mode 100644 docker/dockerfile/cluster/client/conf/hadoop-read/yarn-site.xml
 copy docker/{conf/hadoop => dockerfile/cluster/client/conf/hadoop-write}/core-site.xml (63%)
 create mode 100644 docker/dockerfile/cluster/client/conf/hadoop-write/hdfs-site.xml
 copy docker/{conf/hadoop/core-site.xml => dockerfile/cluster/client/conf/hadoop-write/mapred-site.xml} (69%)
 create mode 100644 docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
 create mode 100644 docker/dockerfile/cluster/client/conf/hbase/hbase-site.xml
 create mode 100644 docker/dockerfile/cluster/client/conf/hive/hive-site.xml
 create mode 100644 docker/dockerfile/cluster/client/entrypoint.sh
 create mode 100644 docker/dockerfile/cluster/client/run_cli.sh
 copy docker/{build_image.sh => dockerfile/cluster/datanode/Dockerfile} (70%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/datanode/run_dn.sh} (76%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/hbase/Dockerfile
 create mode 100644 docker/dockerfile/cluster/hbase/entrypoint.sh
 copy docker/{build_image.sh => dockerfile/cluster/historyserver/Dockerfile} (59%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/historyserver/run_history.sh} (79%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/hive/Dockerfile
 create mode 100644 docker/dockerfile/cluster/hive/conf/beeline-log4j2.properties
 create mode 100644 docker/dockerfile/cluster/hive/conf/hive-env.sh
 create mode 100644 docker/dockerfile/cluster/hive/conf/hive-exec-log4j2.properties
 create mode 100644 docker/dockerfile/cluster/hive/conf/hive-log4j2.properties
 create mode 100644 docker/dockerfile/cluster/hive/conf/hive-site.xml
 create mode 100644 docker/dockerfile/cluster/hive/conf/ivysettings.xml
 create mode 100644 docker/dockerfile/cluster/hive/conf/llap-daemon-log4j2.properties
 create mode 100644 docker/dockerfile/cluster/hive/entrypoint.sh
 copy docker/{run_container.sh => dockerfile/cluster/hive/run_hv.sh} (71%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/hmaster/Dockerfile
 copy docker/{run_container.sh => dockerfile/cluster/hmaster/run_hm.sh} (82%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/hregionserver/Dockerfile
 copy docker/{run_container.sh => dockerfile/cluster/hregionserver/run_hr.sh} (82%)
 mode change 100755 => 100644
 copy docker/{build_image.sh => dockerfile/cluster/kerberos/Dockerfile} (63%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/kerberos/conf/kadm5.acl
 copy docker/{build_image.sh => dockerfile/cluster/kerberos/conf/kdc.conf} (64%)
 mode change 100755 => 100644
 copy docker/{build_image.sh => dockerfile/cluster/kerberos/conf/krb5.conf} (59%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/kerberos/run_krb.sh} (82%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/kylin/Dockerfile} (75%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/kylin/entrypoint.sh
 create mode 100644 docker/dockerfile/cluster/metastore-db/Dockerfile
 create mode 100644 docker/dockerfile/cluster/metastore-db/run_db.sh
 copy docker/{build_image.sh => dockerfile/cluster/namenode/Dockerfile} (61%)
 mode change 100755 => 100644
 copy docker/{build_image.sh => dockerfile/cluster/namenode/run_nn.sh} (61%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/nodemanager/Dockerfile} (76%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/nodemanager/run_nm.sh} (82%)
 mode change 100755 => 100644
 create mode 100644 docker/dockerfile/cluster/pom.xml
 copy docker/{run_container.sh => dockerfile/cluster/resourcemanager/Dockerfile} (76%)
 mode change 100755 => 100644
 copy docker/{run_container.sh => dockerfile/cluster/resourcemanager/run_rm.sh} (82%)
 mode change 100755 => 100644
 rename docker/{ => dockerfile/standalone}/Dockerfile (100%)
 rename docker/{ => dockerfile/standalone}/conf/hadoop/core-site.xml (100%)
 rename docker/{ => dockerfile/standalone}/conf/hadoop/hdfs-site.xml (100%)
 rename docker/{ => dockerfile/standalone}/conf/hadoop/mapred-site.xml (100%)
 rename docker/{ => dockerfile/standalone}/conf/hadoop/yarn-site.xml (100%)
 rename docker/{ => dockerfile/standalone}/conf/hive/hive-site.xml (100%)
 rename docker/{ => dockerfile/standalone}/conf/maven/settings.xml (100%)
 rename docker/{ => dockerfile/standalone}/entrypoint.sh (100%)
 create mode 100644 docker/header.sh
 create mode 100644 docker/setup_hadoop_cluster.sh
 rename docker/{build_image.sh => setup_service.sh} (50%)
 mode change 100755 => 100644
 rename docker/{run_container.sh => setup_standalone.sh} (100%)
 create mode 100644 docker/stop_cluster.sh


[kylin] 04/13: Fix KYLIN_CONF

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 8a96a4b6986dd89d15fe6c6b5813f511a0e47a31
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Mon Oct 19 20:10:05 2020 +0800

    Fix KYLIN_CONF
---
 .../org/apache/kylin/common/util/SourceConfigurationUtil.java  | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/core-common/src/main/java/org/apache/kylin/common/util/SourceConfigurationUtil.java b/core-common/src/main/java/org/apache/kylin/common/util/SourceConfigurationUtil.java
index 38171de..4645d2c 100644
--- a/core-common/src/main/java/org/apache/kylin/common/util/SourceConfigurationUtil.java
+++ b/core-common/src/main/java/org/apache/kylin/common/util/SourceConfigurationUtil.java
@@ -71,19 +71,25 @@ public class SourceConfigurationUtil {
 
     private static Map<String, String> loadXmlConfiguration(String filename, boolean checkExist) {
         Map<String, String> confProps = new HashMap<>();
-        File confFile;
+        File confFile = null;
         String xmlFileName = filename + ".xml";
         String path = System.getProperty(KylinConfig.KYLIN_CONF);
 
         if (StringUtils.isNotEmpty(path)) {
             confFile = new File(path, xmlFileName);
-        } else {
+            if (!confFile.exists() && path.contains("meta")) {
+                confFile = null;
+            }
+        }
+
+        if (confFile == null) {
             path = KylinConfig.getKylinHome();
             if (StringUtils.isEmpty(path)) {
                 logger.error("KYLIN_HOME is not set, can not locate conf: {}", xmlFileName);
                 return confProps;
             }
             confFile = new File(path + File.separator + "conf", xmlFileName);
+            System.setProperty(KylinConfig.KYLIN_CONF, path + File.separator + "conf");
         }
 
         if (!confFile.exists()) {


[kylin] 05/13: KYLIN-4801 Kylin continuous integration testing

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit e6f579ef05cf60eaad77662431b9d120a7d77133
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Tue Oct 20 20:32:52 2020 +0800

    KYLIN-4801 Kylin continuous integration testing
---
 KylinTesting.zip                                   | Bin 0 -> 116536 bytes
 build/CI/testing/README.md                         |  95 +++
 .../generic_desc_data/generic_desc_data_3x.json    | 682 +++++++++++++++++
 .../generic_desc_data/generic_desc_data_4x.json    | 625 ++++++++++++++++
 build/CI/testing/data/release_test_0001.json       | 626 ++++++++++++++++
 build/CI/testing/env/default/default.properties    |  25 +
 build/CI/testing/env/default/python.properties     |   4 +
 .../specs/authentication/authentication_0001.spec  |  18 +
 .../read_write_separation.spec                     |   5 +
 build/CI/testing/features/specs/generic_test.spec  |  63 ++
 build/CI/testing/features/specs/sample.spec        |   5 +
 .../step_impl/authentication/authentication.py     |  37 +
 .../CI/testing/features/step_impl/before_suite.py  |  44 ++
 .../features/step_impl/generic_test_step.py        |  98 +++
 .../read_write_separation/read_write_separation.py |   0
 build/CI/testing/features/step_impl/sample.py      |  14 +
 .../CI/testing/kylin_instances/kylin_instance.yml  |   7 +
 build/CI/testing/kylin_utils/basic.py              |  90 +++
 build/CI/testing/kylin_utils/equals.py             | 204 +++++
 build/CI/testing/kylin_utils/kylin.py              | 826 +++++++++++++++++++++
 build/CI/testing/kylin_utils/shell.py              | 125 ++++
 build/CI/testing/kylin_utils/util.py               |  64 ++
 build/CI/testing/manifest.json                     |   6 +
 build/CI/testing/requirements.txt                  |   6 +
 24 files changed, 3669 insertions(+)

diff --git a/KylinTesting.zip b/KylinTesting.zip
new file mode 100644
index 0000000..b3c46a4
Binary files /dev/null and b/KylinTesting.zip differ
diff --git a/build/CI/testing/README.md b/build/CI/testing/README.md
new file mode 100644
index 0000000..c2936dc
--- /dev/null
+++ b/build/CI/testing/README.md
@@ -0,0 +1,95 @@
+# kylin-test
+Automated test code repo based on [gauge](https://docs.gauge.org/?os=macos&language=python&ide=vscode) for [Apache Kylin](https://github.com/apache/kylin).
+
+### IDE
+Gauge supports IntelliJ IDEA and VSCode as development IDEs.
+However, IDEA cannot detect step implementations written in Python; it only supports Java.
+VSCode is therefore recommended as the development IDE.
+
+### Clone repo
+```
+git clone https://github.com/zhangayqian/kylin-test
+```
+
+### Prepare environment
+ * Install a Python 3 interpreter (version 3.6 is recommended)
+ * Install gauge
+ ```
+ brew install gauge
+ ```
+ If you encounter the following error:
+ ```
+ Download failed: https://homebrew.bintray.com/bottles/gauge-1.1.1.mojave.bottle.1.tar.gz
+ ```
+ You can download the archive manually, place it in the downloads directory of the Homebrew cache, and run the gauge installation command again.
+
+* Install required dependencies
+```
+pip install -r requirements.txt
+```
+
+## Directory structure
+* features/specs: Directory of specification files.
+  A specification is a business test case which describes a particular feature of the application that needs testing. Gauge specifications support a .spec or .md file format and these specifications are written in a syntax similar to Markdown.
+  
+* features/step_impl: Directory of step implementation methods.
+  Every step implementation has equivalent code in the language plugin chosen when installing Gauge. The code is run when the steps inside a spec are executed and must take the same number of parameters as mentioned in the step.
+  Steps can be implemented in different ways, such as a simple step, a step with a table, a step alias, or an enum data type used as a step parameter.
+
+* data: Directory of data files needed to execute test cases, such as cube_desc.json.
+
+* env/default: Gauge configuration file directory.
+
+* kylin_instance: Kylin instance configuration file directory.
+
+* kylin_utils: Directory of utility methods.
+
+## Run Gauge specifications
+* Run all specifications
+```
+gauge run
+```
+* Run specifications or scenarios filtered by tags, for example:
+```
+gauge run --tags 3.x
+```
+* Please refer to https://docs.gauge.org/execution.html?os=macos&language=python&ide=vscode to learn more.
+
+## Tips
+
+A specification consists of different sections, some mandatory and some optional. The components of a specification are listed as follows:
+
+- Specification heading
+- Scenario
+- Step
+- Parameters
+- Tags
+- Comments
+
+#### Note
+
+Tags - optional, executable component when the specification is run
+Comments - optional, non-executable component when the specification is run
+
+### About tags
+
+Here, we stipulate that all test scenarios should have tags. The mandatory tags 3.x and 4.x indicate which Kylin versions a test scenario supports. For example:
+```
+# Flink Engine
+Tags:3.x
+```
+```
+# Cube management
+Tags:3.x,4.x
+```
+You can put the tag in the specification heading, so that all scenarios in this specification will have this tag.
+You can also tag your own test spec to make it easier for you to run your own test cases.
+
+### About projects
+Two project names are already reserved: `generic_test_project` and `pushdown_test_project`.
+
+Every time you run the tests, the @before_suite method is executed first to create `generic_test_project`. The model and cube in this project are generic and the cube is fully built; they include as many dimensions and measures as possible. Use this project whenever a test needs an already built cube.
+
+`pushdown_test_project` is used to compare SQL query results. It is an empty project.
+
+Please refer to https://docs.gauge.org/writing-specifications.html?os=macos&language=python&ide=vscode to learn more.
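
To make the spec-to-step mapping described in the README above concrete, here is a minimal sketch (not part of this commit) of a Python step implementation in the style of features/step_impl/query/query.py. The step text, endpoint URL, and credentials are illustrative assumptions; the real suite goes through the kylin_utils helpers rather than calling the REST API directly. A scenario line such as `* Query "select count(*) from kylin_sales" in project "generic_test_project"` would bind to it:

```
# Hypothetical step implementation sketch; assumes the gauge-python plugin
# (getgauge) and requests are installed, e.g. via requirements.txt.
import requests
from getgauge.python import step

# Assumed Kylin endpoint and default credentials; in the real suite these
# would come from kylin_instances/kylin_instance.yml and kylin_utils/kylin.py.
KYLIN_API = "http://localhost:7070/kylin/api"


@step("Query <sql> in project <project>")
def query_in_project(sql, project):
    # Gauge passes the <sql> and <project> placeholders as arguments,
    # in the same number and order as they appear in the step text.
    resp = requests.post(
        KYLIN_API + "/query",
        json={"sql": sql, "project": project},
        auth=("ADMIN", "KYLIN"),
        timeout=120,
    )
    resp.raise_for_status()
    assert resp.json().get("isException") is False
```

A scenario carrying this step and tagged `3.x` or `4.x` would then be picked up by `gauge run --tags 3.x` as described above.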
diff --git a/build/CI/testing/data/generic_desc_data/generic_desc_data_3x.json b/build/CI/testing/data/generic_desc_data/generic_desc_data_3x.json
new file mode 100644
index 0000000..72ca5a4
--- /dev/null
+++ b/build/CI/testing/data/generic_desc_data/generic_desc_data_3x.json
@@ -0,0 +1,682 @@
+{
+    "load_table_list":
+    "DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT,DEFAULT.KYLIN_CATEGORY_GROUPINGS,DEFAULT.KYLIN_ACCOUNT,DEFAULT.KYLIN_COUNTRY",
+  
+    "model_desc_data":
+    {
+      "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
+      "last_modified": 0,
+      "version": "3.0.0.20500",
+      "name": "generic_test_model",
+      "owner": null,
+      "is_draft": false,
+      "description": "",
+      "fact_table": "DEFAULT.KYLIN_SALES",
+      "lookups": [
+        {
+          "table": "DEFAULT.KYLIN_CAL_DT",
+          "kind": "LOOKUP",
+          "alias": "KYLIN_CAL_DT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "KYLIN_CAL_DT.CAL_DT"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.PART_DT"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
+          "kind": "LOOKUP",
+          "alias": "KYLIN_CATEGORY_GROUPINGS",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
+              "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.LEAF_CATEG_ID",
+              "KYLIN_SALES.LSTG_SITE_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_ACCOUNT",
+          "kind": "LOOKUP",
+          "alias": "BUYER_ACCOUNT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "BUYER_ACCOUNT.ACCOUNT_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.BUYER_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_ACCOUNT",
+          "kind": "LOOKUP",
+          "alias": "SELLER_ACCOUNT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "SELLER_ACCOUNT.ACCOUNT_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.SELLER_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_COUNTRY",
+          "kind": "LOOKUP",
+          "alias": "BUYER_COUNTRY",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "BUYER_COUNTRY.COUNTRY"
+            ],
+            "foreign_key": [
+              "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_COUNTRY",
+          "kind": "LOOKUP",
+          "alias": "SELLER_COUNTRY",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "SELLER_COUNTRY.COUNTRY"
+            ],
+            "foreign_key": [
+              "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
+            ]
+          }
+        }
+      ],
+      "dimensions": [
+        {
+          "table": "KYLIN_SALES",
+          "columns": [
+            "TRANS_ID",
+            "SELLER_ID",
+            "BUYER_ID",
+            "PART_DT",
+            "LEAF_CATEG_ID",
+            "LSTG_FORMAT_NAME",
+            "LSTG_SITE_ID",
+            "OPS_USER_ID",
+            "OPS_REGION"
+          ]
+        },
+        {
+          "table": "KYLIN_CAL_DT",
+          "columns": [
+            "CAL_DT",
+            "WEEK_BEG_DT",
+            "MONTH_BEG_DT",
+            "YEAR_BEG_DT"
+          ]
+        },
+        {
+          "table": "KYLIN_CATEGORY_GROUPINGS",
+          "columns": [
+            "USER_DEFINED_FIELD1",
+            "USER_DEFINED_FIELD3",
+            "META_CATEG_NAME",
+            "CATEG_LVL2_NAME",
+            "CATEG_LVL3_NAME",
+            "LEAF_CATEG_ID",
+            "SITE_ID"
+          ]
+        },
+        {
+          "table": "BUYER_ACCOUNT",
+          "columns": [
+            "ACCOUNT_ID",
+            "ACCOUNT_BUYER_LEVEL",
+            "ACCOUNT_SELLER_LEVEL",
+            "ACCOUNT_COUNTRY",
+            "ACCOUNT_CONTACT"
+          ]
+        },
+        {
+          "table": "SELLER_ACCOUNT",
+          "columns": [
+            "ACCOUNT_ID",
+            "ACCOUNT_BUYER_LEVEL",
+            "ACCOUNT_SELLER_LEVEL",
+            "ACCOUNT_COUNTRY",
+            "ACCOUNT_CONTACT"
+          ]
+        },
+        {
+          "table": "BUYER_COUNTRY",
+          "columns": [
+            "COUNTRY",
+            "NAME"
+          ]
+        },
+        {
+          "table": "SELLER_COUNTRY",
+          "columns": [
+            "COUNTRY",
+            "NAME"
+          ]
+        }
+      ],
+      "metrics": [
+        "KYLIN_SALES.PRICE",
+        "KYLIN_SALES.ITEM_COUNT"
+      ],
+      "filter_condition": "",
+      "partition_desc": {
+        "partition_date_column": "KYLIN_SALES.PART_DT",
+        "partition_time_column": null,
+        "partition_date_start": 0,
+        "partition_date_format": "yyyy-MM-dd HH:mm:ss",
+        "partition_time_format": "HH:mm:ss",
+        "partition_type": "APPEND",
+        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
+      },
+      "capacity": "MEDIUM",
+      "projectName": "generic_test_project"
+    },
+    "cube_desc_data":
+    {
+        "uuid": "02669388-8b98-591a-9fb7-9addcdb2da57",
+        "last_modified": 0,
+        "version": "3.0.0.20500",
+        "name": "generic_test_cube",
+        "is_draft": false,
+        "model_name": "generic_test_model",
+        "description": "",
+        "null_string": null,
+        "dimensions": [
+          {
+            "name": "TRANS_ID",
+            "table": "KYLIN_SALES",
+            "column": "TRANS_ID",
+            "derived": null
+          },
+          {
+            "name": "YEAR_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "YEAR_BEG_DT"
+            ]
+          },
+          {
+            "name": "MONTH_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "MONTH_BEG_DT"
+            ]
+          },
+          {
+            "name": "WEEK_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "WEEK_BEG_DT"
+            ]
+          },
+          {
+            "name": "USER_DEFINED_FIELD1",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": null,
+            "derived": [
+              "USER_DEFINED_FIELD1"
+            ]
+          },
+          {
+            "name": "USER_DEFINED_FIELD3",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": null,
+            "derived": [
+              "USER_DEFINED_FIELD3"
+            ]
+          },
+          {
+            "name": "META_CATEG_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "META_CATEG_NAME",
+            "derived": null
+          },
+          {
+            "name": "CATEG_LVL2_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "CATEG_LVL2_NAME",
+            "derived": null
+          },
+          {
+            "name": "CATEG_LVL3_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "CATEG_LVL3_NAME",
+            "derived": null
+          },
+          {
+            "name": "LSTG_FORMAT_NAME",
+            "table": "KYLIN_SALES",
+            "column": "LSTG_FORMAT_NAME",
+            "derived": null
+          },
+          {
+            "name": "SELLER_ID",
+            "table": "KYLIN_SALES",
+            "column": "SELLER_ID",
+            "derived": null
+          },
+          {
+            "name": "BUYER_ID",
+            "table": "KYLIN_SALES",
+            "column": "BUYER_ID",
+            "derived": null
+          },
+          {
+            "name": "ACCOUNT_BUYER_LEVEL",
+            "table": "BUYER_ACCOUNT",
+            "column": "ACCOUNT_BUYER_LEVEL",
+            "derived": null
+          },
+          {
+            "name": "ACCOUNT_SELLER_LEVEL",
+            "table": "SELLER_ACCOUNT",
+            "column": "ACCOUNT_SELLER_LEVEL",
+            "derived": null
+          },
+          {
+            "name": "BUYER_COUNTRY",
+            "table": "BUYER_ACCOUNT",
+            "column": "ACCOUNT_COUNTRY",
+            "derived": null
+          },
+          {
+            "name": "SELLER_COUNTRY",
+            "table": "SELLER_ACCOUNT",
+            "column": "ACCOUNT_COUNTRY",
+            "derived": null
+          },
+          {
+            "name": "BUYER_COUNTRY_NAME",
+            "table": "BUYER_COUNTRY",
+            "column": "NAME",
+            "derived": null
+          },
+          {
+            "name": "SELLER_COUNTRY_NAME",
+            "table": "SELLER_COUNTRY",
+            "column": "NAME",
+            "derived": null
+          },
+          {
+            "name": "OPS_USER_ID",
+            "table": "KYLIN_SALES",
+            "column": "OPS_USER_ID",
+            "derived": null
+          },
+          {
+            "name": "OPS_REGION",
+            "table": "KYLIN_SALES",
+            "column": "OPS_REGION",
+            "derived": null
+          }
+        ],
+        "measures": [
+          {
+            "name": "GMV_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE"
+              },
+              "returntype": "decimal(19,4)"
+            }
+          },
+          {
+            "name": "BUYER_LEVEL_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "SELLER_LEVEL_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "TRANS_CNT",
+            "function": {
+              "expression": "COUNT",
+              "parameter": {
+                "type": "constant",
+                "value": "1"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "SELLER_CNT_HLL",
+            "function": {
+              "expression": "COUNT_DISTINCT",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.SELLER_ID"
+              },
+              "returntype": "hllc(10)"
+            }
+          },
+          {
+            "name": "TOP_SELLER",
+            "function": {
+              "expression": "TOP_N",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE",
+                "next_parameter": {
+                  "type": "column",
+                  "value": "KYLIN_SALES.SELLER_ID"
+                }
+              },
+              "returntype": "topn(100,4)",
+              "configuration": {
+                "topn.encoding.KYLIN_SALES.SELLER_ID": "dict",
+                "topn.encoding_version.KYLIN_SALES.SELLER_ID": "1"
+              }
+            }
+          },
+          {
+            "name": "BUYER_CNT_BITMAP",
+            "function": {
+              "expression": "COUNT_DISTINCT",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.BUYER_ID"
+              },
+              "returntype": "bitmap"
+            }
+          },
+          {
+            "name": "MIN_PRICE",
+            "function": {
+              "expression": "MIN",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE"
+              },
+              "returntype": "decimal(19,4)"
+            }
+          },
+          {
+            "name": "MAX_PRICE",
+            "function": {
+              "expression": "MAX",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE"
+              },
+              "returntype": "decimal(19,4)"
+            }
+          },
+          {
+            "name": "PERCENTILE_PRICE",
+            "function": {
+              "expression": "PERCENTILE_APPROX",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE"
+              },
+              "returntype": "percentile(100)"
+            }
+          }
+        ],
+        "dictionaries": [
+          {
+            "column": "KYLIN_SALES.BUYER_ID",
+            "builder": "org.apache.kylin.dict.GlobalDictionaryBuilder",
+            "cube": null,
+            "model": null
+          }
+        ],
+        "rowkey": {
+          "rowkey_columns": [
+            {
+              "column": "KYLIN_SALES.BUYER_ID",
+              "encoding": "integer:4",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.SELLER_ID",
+              "encoding": "integer:4",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.TRANS_ID",
+              "encoding": "integer:4",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.PART_DT",
+              "encoding": "date",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LEAF_CATEG_ID",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_COUNTRY.NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_COUNTRY.NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LSTG_FORMAT_NAME",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LSTG_SITE_ID",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.OPS_USER_ID",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.OPS_REGION",
+              "encoding": "dict",
+              "encoding_version": 1,
+              "isShardBy": false
+            }
+          ]
+        },
+        "hbase_mapping": {
+          "column_family": [
+            {
+              "name": "F1",
+              "columns": [
+                {
+                  "qualifier": "M",
+                  "measure_refs": [
+                    "GMV_SUM",
+                    "BUYER_LEVEL_SUM",
+                    "SELLER_LEVEL_SUM",
+                    "TRANS_CNT",
+                    "TOP_SELLER",
+                    "MIN_PRICE",
+                    "MAX_PRICE",
+                    "PERCENTILE_PRICE"
+                  ]
+                }
+              ]
+            },
+            {
+              "name": "F2",
+              "columns": [
+                {
+                  "qualifier": "M",
+                  "measure_refs": [
+                    "SELLER_CNT_HLL",
+                    "BUYER_CNT_BITMAP"
+                  ]
+                }
+              ]
+            }
+          ]
+        },
+        "aggregation_groups": [
+          {
+            "includes": [
+              "KYLIN_SALES.PART_DT",
+              "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+              "KYLIN_SALES.LEAF_CATEG_ID",
+              "KYLIN_SALES.LSTG_FORMAT_NAME",
+              "KYLIN_SALES.LSTG_SITE_ID",
+              "KYLIN_SALES.OPS_USER_ID",
+              "KYLIN_SALES.OPS_REGION",
+              "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+              "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+              "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+              "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+              "BUYER_COUNTRY.NAME",
+              "SELLER_COUNTRY.NAME"
+            ],
+            "select_rule": {
+              "hierarchy_dims": [
+                [
+                  "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+                  "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+                  "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+                  "KYLIN_SALES.LEAF_CATEG_ID"
+                ]
+              ],
+              "mandatory_dims": [
+                "KYLIN_SALES.PART_DT"
+              ],
+              "joint_dims": [
+                [
+                  "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+                  "BUYER_COUNTRY.NAME"
+                ],
+                [
+                  "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+                  "SELLER_COUNTRY.NAME"
+                ],
+                [
+                  "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+                  "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+                ],
+                [
+                  "KYLIN_SALES.LSTG_FORMAT_NAME",
+                  "KYLIN_SALES.LSTG_SITE_ID"
+                ],
+                [
+                  "KYLIN_SALES.OPS_USER_ID",
+                  "KYLIN_SALES.OPS_REGION"
+                ]
+              ]
+            }
+          }
+        ],
+        "signature": "HpTV7qyxn6IDFiRbFdsb5g==",
+        "notify_list": [],
+        "status_need_notify": [],
+        "partition_date_start": 0,
+        "partition_date_end": 3153600000000,
+        "auto_merge_time_ranges": [],
+        "volatile_range": 0,
+        "retention_range": 0,
+        "engine_type": 2,
+        "storage_type": 2,
+        "override_kylin_properties": {
+          "kylin.cube.aggrgroup.is-mandatory-only-valid": "true",
+          "kylin.engine.spark.rdd-partition-cut-mb": "500"
+        },
+        "cuboid_black_list": [],
+        "parent_forward": 3,
+        "mandatory_dimension_set_list": [],
+        "snapshot_table_desc_list": []
+      }
+}
\ No newline at end of file
diff --git a/build/CI/testing/data/generic_desc_data/generic_desc_data_4x.json b/build/CI/testing/data/generic_desc_data/generic_desc_data_4x.json
new file mode 100644
index 0000000..8d533b5
--- /dev/null
+++ b/build/CI/testing/data/generic_desc_data/generic_desc_data_4x.json
@@ -0,0 +1,625 @@
+{
+    "load_table_list":
+    "DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT,DEFAULT.KYLIN_CATEGORY_GROUPINGS,DEFAULT.KYLIN_ACCOUNT,DEFAULT.KYLIN_COUNTRY",
+  
+    "model_desc_data":
+    {
+      "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
+      "last_modified": 0,
+      "version": "3.0.0.20500",
+      "name": "generic_test_model",
+      "owner": null,
+      "is_draft": false,
+      "description": "",
+      "fact_table": "DEFAULT.KYLIN_SALES",
+      "lookups": [
+        {
+          "table": "DEFAULT.KYLIN_CAL_DT",
+          "kind": "LOOKUP",
+          "alias": "KYLIN_CAL_DT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "KYLIN_CAL_DT.CAL_DT"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.PART_DT"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
+          "kind": "LOOKUP",
+          "alias": "KYLIN_CATEGORY_GROUPINGS",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
+              "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.LEAF_CATEG_ID",
+              "KYLIN_SALES.LSTG_SITE_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_ACCOUNT",
+          "kind": "LOOKUP",
+          "alias": "BUYER_ACCOUNT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "BUYER_ACCOUNT.ACCOUNT_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.BUYER_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_ACCOUNT",
+          "kind": "LOOKUP",
+          "alias": "SELLER_ACCOUNT",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "SELLER_ACCOUNT.ACCOUNT_ID"
+            ],
+            "foreign_key": [
+              "KYLIN_SALES.SELLER_ID"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_COUNTRY",
+          "kind": "LOOKUP",
+          "alias": "BUYER_COUNTRY",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "BUYER_COUNTRY.COUNTRY"
+            ],
+            "foreign_key": [
+              "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
+            ]
+          }
+        },
+        {
+          "table": "DEFAULT.KYLIN_COUNTRY",
+          "kind": "LOOKUP",
+          "alias": "SELLER_COUNTRY",
+          "join": {
+            "type": "inner",
+            "primary_key": [
+              "SELLER_COUNTRY.COUNTRY"
+            ],
+            "foreign_key": [
+              "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
+            ]
+          }
+        }
+      ],
+      "dimensions": [
+        {
+          "table": "KYLIN_SALES",
+          "columns": [
+            "TRANS_ID",
+            "SELLER_ID",
+            "BUYER_ID",
+            "PART_DT",
+            "LEAF_CATEG_ID",
+            "LSTG_FORMAT_NAME",
+            "LSTG_SITE_ID",
+            "OPS_USER_ID",
+            "OPS_REGION"
+          ]
+        },
+        {
+          "table": "KYLIN_CAL_DT",
+          "columns": [
+            "CAL_DT",
+            "WEEK_BEG_DT",
+            "MONTH_BEG_DT",
+            "YEAR_BEG_DT"
+          ]
+        },
+        {
+          "table": "KYLIN_CATEGORY_GROUPINGS",
+          "columns": [
+            "USER_DEFINED_FIELD1",
+            "USER_DEFINED_FIELD3",
+            "META_CATEG_NAME",
+            "CATEG_LVL2_NAME",
+            "CATEG_LVL3_NAME",
+            "LEAF_CATEG_ID",
+            "SITE_ID"
+          ]
+        },
+        {
+          "table": "BUYER_ACCOUNT",
+          "columns": [
+            "ACCOUNT_ID",
+            "ACCOUNT_BUYER_LEVEL",
+            "ACCOUNT_SELLER_LEVEL",
+            "ACCOUNT_COUNTRY",
+            "ACCOUNT_CONTACT"
+          ]
+        },
+        {
+          "table": "SELLER_ACCOUNT",
+          "columns": [
+            "ACCOUNT_ID",
+            "ACCOUNT_BUYER_LEVEL",
+            "ACCOUNT_SELLER_LEVEL",
+            "ACCOUNT_COUNTRY",
+            "ACCOUNT_CONTACT"
+          ]
+        },
+        {
+          "table": "BUYER_COUNTRY",
+          "columns": [
+            "COUNTRY",
+            "NAME"
+          ]
+        },
+        {
+          "table": "SELLER_COUNTRY",
+          "columns": [
+            "COUNTRY",
+            "NAME"
+          ]
+        }
+      ],
+      "metrics": [
+        "KYLIN_SALES.PRICE",
+        "KYLIN_SALES.ITEM_COUNT"
+      ],
+      "filter_condition": "",
+      "partition_desc": {
+        "partition_date_column": "KYLIN_SALES.PART_DT",
+        "partition_time_column": null,
+        "partition_date_start": 0,
+        "partition_date_format": "yyyy-MM-dd HH:mm:ss",
+        "partition_time_format": "HH:mm:ss",
+        "partition_type": "APPEND",
+        "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
+      },
+      "capacity": "MEDIUM",
+      "projectName": "generic_test_project"
+    },
+    "cube_desc_data":
+    {
+        "uuid": "b1c89f5b-5346-05db-0b82-8851ccb72737",
+        "last_modified": 0,
+        "version": "3.0.0.20500",
+        "name": "generic_test_cube",
+        "is_draft": false,
+        "model_name": "generic_test_model",
+        "description": "",
+        "null_string": null,
+        "dimensions": [
+          {
+            "name": "TRANS_ID",
+            "table": "KYLIN_SALES",
+            "column": "TRANS_ID",
+            "derived": null
+          },
+          {
+            "name": "YEAR_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "YEAR_BEG_DT"
+            ]
+          },
+          {
+            "name": "MONTH_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "MONTH_BEG_DT"
+            ]
+          },
+          {
+            "name": "WEEK_BEG_DT",
+            "table": "KYLIN_CAL_DT",
+            "column": null,
+            "derived": [
+              "WEEK_BEG_DT"
+            ]
+          },
+          {
+            "name": "USER_DEFINED_FIELD1",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": null,
+            "derived": [
+              "USER_DEFINED_FIELD1"
+            ]
+          },
+          {
+            "name": "USER_DEFINED_FIELD3",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": null,
+            "derived": [
+              "USER_DEFINED_FIELD3"
+            ]
+          },
+          {
+            "name": "META_CATEG_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "META_CATEG_NAME",
+            "derived": null
+          },
+          {
+            "name": "CATEG_LVL2_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "CATEG_LVL2_NAME",
+            "derived": null
+          },
+          {
+            "name": "CATEG_LVL3_NAME",
+            "table": "KYLIN_CATEGORY_GROUPINGS",
+            "column": "CATEG_LVL3_NAME",
+            "derived": null
+          },
+          {
+            "name": "LSTG_FORMAT_NAME",
+            "table": "KYLIN_SALES",
+            "column": "LSTG_FORMAT_NAME",
+            "derived": null
+          },
+          {
+            "name": "SELLER_ID",
+            "table": "KYLIN_SALES",
+            "column": "SELLER_ID",
+            "derived": null
+          },
+          {
+            "name": "BUYER_ID",
+            "table": "KYLIN_SALES",
+            "column": "BUYER_ID",
+            "derived": null
+          },
+          {
+            "name": "ACCOUNT_BUYER_LEVEL",
+            "table": "BUYER_ACCOUNT",
+            "column": "ACCOUNT_BUYER_LEVEL",
+            "derived": null
+          },
+          {
+            "name": "ACCOUNT_SELLER_LEVEL",
+            "table": "SELLER_ACCOUNT",
+            "column": "ACCOUNT_SELLER_LEVEL",
+            "derived": null
+          },
+          {
+            "name": "BUYER_COUNTRY",
+            "table": "BUYER_ACCOUNT",
+            "column": "ACCOUNT_COUNTRY",
+            "derived": null
+          },
+          {
+            "name": "SELLER_COUNTRY",
+            "table": "SELLER_ACCOUNT",
+            "column": "ACCOUNT_COUNTRY",
+            "derived": null
+          },
+          {
+            "name": "BUYER_COUNTRY_NAME",
+            "table": "BUYER_COUNTRY",
+            "column": "NAME",
+            "derived": null
+          },
+          {
+            "name": "SELLER_COUNTRY_NAME",
+            "table": "SELLER_COUNTRY",
+            "column": "NAME",
+            "derived": null
+          },
+          {
+            "name": "OPS_USER_ID",
+            "table": "KYLIN_SALES",
+            "column": "OPS_USER_ID",
+            "derived": null
+          },
+          {
+            "name": "OPS_REGION",
+            "table": "KYLIN_SALES",
+            "column": "OPS_REGION",
+            "derived": null
+          }
+        ],
+        "measures": [
+          {
+            "name": "GMV_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE"
+              },
+              "returntype": "decimal(19,4)"
+            }
+          },
+          {
+            "name": "BUYER_LEVEL_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "SELLER_LEVEL_SUM",
+            "function": {
+              "expression": "SUM",
+              "parameter": {
+                "type": "column",
+                "value": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "TRANS_CNT",
+            "function": {
+              "expression": "COUNT",
+              "parameter": {
+                "type": "constant",
+                "value": "1"
+              },
+              "returntype": "bigint"
+            }
+          },
+          {
+            "name": "TOP_SELLER",
+            "function": {
+              "expression": "TOP_N",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.PRICE",
+                "next_parameter": {
+                  "type": "column",
+                  "value": "KYLIN_SALES.SELLER_ID"
+                }
+              },
+              "returntype": "topn(100,4)",
+              "configuration": {
+                "topn.encoding.KYLIN_SALES.SELLER_ID": "dict",
+                "topn.encoding_version.KYLIN_SALES.SELLER_ID": "1"
+              }
+            }
+          },
+          {
+            "name": "SELLER_CNT_HLL",
+            "function": {
+              "expression": "COUNT_DISTINCT",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.SELLER_ID"
+              },
+              "returntype": "hllc(10)"
+            }
+          },
+          {
+            "name": "BUYER_CNT_BITMAP",
+            "function": {
+              "expression": "COUNT_DISTINCT",
+              "parameter": {
+                "type": "column",
+                "value": "KYLIN_SALES.BUYER_ID"
+              },
+              "returntype": "bitmap"
+            }
+          }
+        ],
+        "dictionaries": [
+          {
+            "column": "KYLIN_SALES.BUYER_ID",
+            "builder": "org.apache.kylin.dict.GlobalDictionaryBuilder",
+            "cube": null,
+            "model": null
+          }
+        ],
+        "rowkey": {
+          "rowkey_columns": [
+            {
+              "column": "KYLIN_SALES.BUYER_ID",
+              "encoding": "integer:4",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.SELLER_ID",
+              "encoding": "integer:4",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.TRANS_ID",
+              "encoding": "integer:4",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.PART_DT",
+              "encoding": "date",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LEAF_CATEG_ID",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "BUYER_COUNTRY.NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "SELLER_COUNTRY.NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LSTG_FORMAT_NAME",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.LSTG_SITE_ID",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.OPS_USER_ID",
+              "encoding": "dict",
+              "isShardBy": false
+            },
+            {
+              "column": "KYLIN_SALES.OPS_REGION",
+              "encoding": "dict",
+              "isShardBy": false
+            }
+          ]
+        },
+        "hbase_mapping": {
+          "column_family": [
+            {
+              "name": "F1",
+              "columns": [
+                {
+                  "qualifier": "M",
+                  "measure_refs": [
+                    "GMV_SUM",
+                    "BUYER_LEVEL_SUM",
+                    "SELLER_LEVEL_SUM",
+                    "TRANS_CNT",
+                    "TOP_SELLER"
+                  ]
+                }
+              ]
+            },
+            {
+              "name": "F2",
+              "columns": [
+                {
+                  "qualifier": "M",
+                  "measure_refs": [
+                    "SELLER_CNT_HLL",
+                    "BUYER_CNT_BITMAP"
+                  ]
+                }
+              ]
+            }
+          ]
+        },
+        "aggregation_groups": [
+          {
+            "includes": [
+              "KYLIN_SALES.PART_DT",
+              "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+              "KYLIN_SALES.LEAF_CATEG_ID",
+              "KYLIN_SALES.LSTG_FORMAT_NAME",
+              "KYLIN_SALES.LSTG_SITE_ID",
+              "KYLIN_SALES.OPS_USER_ID",
+              "KYLIN_SALES.OPS_REGION",
+              "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+              "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+              "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+              "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+              "BUYER_COUNTRY.NAME",
+              "SELLER_COUNTRY.NAME"
+            ],
+            "select_rule": {
+              "hierarchy_dims": [
+                [
+                  "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+                  "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+                  "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+                  "KYLIN_SALES.LEAF_CATEG_ID"
+                ]
+              ],
+              "mandatory_dims": [
+                "KYLIN_SALES.PART_DT"
+              ],
+              "joint_dims": [
+                [
+                  "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+                  "BUYER_COUNTRY.NAME"
+                ],
+                [
+                  "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+                  "SELLER_COUNTRY.NAME"
+                ],
+                [
+                  "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+                  "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+                ],
+                [
+                  "KYLIN_SALES.LSTG_FORMAT_NAME",
+                  "KYLIN_SALES.LSTG_SITE_ID"
+                ],
+                [
+                  "KYLIN_SALES.OPS_USER_ID",
+                  "KYLIN_SALES.OPS_REGION"
+                ]
+              ]
+            }
+          }
+        ],
+        "signature": "vbkxiXn2AOQm8zdkfY1kSw==",
+        "notify_list": [],
+        "status_need_notify": [],
+        "partition_date_start": 0,
+        "partition_date_end": 3153600000000,
+        "auto_merge_time_ranges": [],
+        "volatile_range": 0,
+        "retention_range": 0,
+        "engine_type": 6,
+        "storage_type": 4,
+        "override_kylin_properties": {},
+        "cuboid_black_list": [],
+        "parent_forward": 3,
+        "mandatory_dimension_set_list": [],
+        "snapshot_table_desc_list": []
+      }
+  }
\ No newline at end of file
diff --git a/build/CI/testing/data/release_test_0001.json b/build/CI/testing/data/release_test_0001.json
new file mode 100644
index 0000000..0b5558a
--- /dev/null
+++ b/build/CI/testing/data/release_test_0001.json
@@ -0,0 +1,626 @@
+{
+  "load_table_list":
+  "DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT,DEFAULT.KYLIN_CATEGORY_GROUPINGS,DEFAULT.KYLIN_ACCOUNT,DEFAULT.KYLIN_COUNTRY",
+
+  "model_desc_data":
+  {
+    "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
+    "last_modified": 0,
+    "version": "3.0.0.20500",
+    "name": "release_test_0001_model",
+    "owner": null,
+    "is_draft": false,
+    "description": "",
+    "fact_table": "DEFAULT.KYLIN_SALES",
+    "lookups": [
+      {
+        "table": "DEFAULT.KYLIN_CAL_DT",
+        "kind": "LOOKUP",
+        "alias": "KYLIN_CAL_DT",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "KYLIN_CAL_DT.CAL_DT"
+          ],
+          "foreign_key": [
+            "KYLIN_SALES.PART_DT"
+          ]
+        }
+      },
+      {
+        "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
+        "kind": "LOOKUP",
+        "alias": "KYLIN_CATEGORY_GROUPINGS",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
+            "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
+          ],
+          "foreign_key": [
+            "KYLIN_SALES.LEAF_CATEG_ID",
+            "KYLIN_SALES.LSTG_SITE_ID"
+          ]
+        }
+      },
+      {
+        "table": "DEFAULT.KYLIN_ACCOUNT",
+        "kind": "LOOKUP",
+        "alias": "BUYER_ACCOUNT",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "BUYER_ACCOUNT.ACCOUNT_ID"
+          ],
+          "foreign_key": [
+            "KYLIN_SALES.BUYER_ID"
+          ]
+        }
+      },
+      {
+        "table": "DEFAULT.KYLIN_ACCOUNT",
+        "kind": "LOOKUP",
+        "alias": "SELLER_ACCOUNT",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "SELLER_ACCOUNT.ACCOUNT_ID"
+          ],
+          "foreign_key": [
+            "KYLIN_SALES.SELLER_ID"
+          ]
+        }
+      },
+      {
+        "table": "DEFAULT.KYLIN_COUNTRY",
+        "kind": "LOOKUP",
+        "alias": "BUYER_COUNTRY",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "BUYER_COUNTRY.COUNTRY"
+          ],
+          "foreign_key": [
+            "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
+          ]
+        }
+      },
+      {
+        "table": "DEFAULT.KYLIN_COUNTRY",
+        "kind": "LOOKUP",
+        "alias": "SELLER_COUNTRY",
+        "join": {
+          "type": "inner",
+          "primary_key": [
+            "SELLER_COUNTRY.COUNTRY"
+          ],
+          "foreign_key": [
+            "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
+          ]
+        }
+      }
+    ],
+    "dimensions": [
+      {
+        "table": "KYLIN_SALES",
+        "columns": [
+          "TRANS_ID",
+          "SELLER_ID",
+          "BUYER_ID",
+          "PART_DT",
+          "LEAF_CATEG_ID",
+          "LSTG_FORMAT_NAME",
+          "LSTG_SITE_ID",
+          "OPS_USER_ID",
+          "OPS_REGION"
+        ]
+      },
+      {
+        "table": "KYLIN_CAL_DT",
+        "columns": [
+          "CAL_DT",
+          "WEEK_BEG_DT",
+          "MONTH_BEG_DT",
+          "YEAR_BEG_DT"
+        ]
+      },
+      {
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "columns": [
+          "USER_DEFINED_FIELD1",
+          "USER_DEFINED_FIELD3",
+          "META_CATEG_NAME",
+          "CATEG_LVL2_NAME",
+          "CATEG_LVL3_NAME",
+          "LEAF_CATEG_ID",
+          "SITE_ID"
+        ]
+      },
+      {
+        "table": "BUYER_ACCOUNT",
+        "columns": [
+          "ACCOUNT_ID",
+          "ACCOUNT_BUYER_LEVEL",
+          "ACCOUNT_SELLER_LEVEL",
+          "ACCOUNT_COUNTRY",
+          "ACCOUNT_CONTACT"
+        ]
+      },
+      {
+        "table": "SELLER_ACCOUNT",
+        "columns": [
+          "ACCOUNT_ID",
+          "ACCOUNT_BUYER_LEVEL",
+          "ACCOUNT_SELLER_LEVEL",
+          "ACCOUNT_COUNTRY",
+          "ACCOUNT_CONTACT"
+        ]
+      },
+      {
+        "table": "BUYER_COUNTRY",
+        "columns": [
+          "COUNTRY",
+          "NAME"
+        ]
+      },
+      {
+        "table": "SELLER_COUNTRY",
+        "columns": [
+          "COUNTRY",
+          "NAME"
+        ]
+      }
+    ],
+    "metrics": [
+      "KYLIN_SALES.PRICE",
+      "KYLIN_SALES.ITEM_COUNT"
+    ],
+    "filter_condition": "",
+    "partition_desc": {
+      "partition_date_column": "KYLIN_SALES.PART_DT",
+      "partition_time_column": null,
+      "partition_date_start": 0,
+      "partition_date_format": "yyyy-MM-dd HH:mm:ss",
+      "partition_time_format": "HH:mm:ss",
+      "partition_type": "APPEND",
+      "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
+    },
+    "capacity": "MEDIUM",
+    "projectName": null
+  },
+  "cube_desc_data":
+  {
+    "uuid": "0ef9b7a8-3929-4dff-b59d-2100aadc8dbf",
+    "last_modified": 0,
+    "version": "3.0.0.20500",
+    "name": "release_test_0001_cube",
+    "is_draft": false,
+    "model_name": "release_test_0001_model",
+    "description": "",
+    "null_string": null,
+    "dimensions": [
+      {
+        "name": "TRANS_ID",
+        "table": "KYLIN_SALES",
+        "column": "TRANS_ID",
+        "derived": null
+      },
+      {
+        "name": "YEAR_BEG_DT",
+        "table": "KYLIN_CAL_DT",
+        "column": null,
+        "derived": [
+          "YEAR_BEG_DT"
+        ]
+      },
+      {
+        "name": "MONTH_BEG_DT",
+        "table": "KYLIN_CAL_DT",
+        "column": null,
+        "derived": [
+          "MONTH_BEG_DT"
+        ]
+      },
+      {
+        "name": "WEEK_BEG_DT",
+        "table": "KYLIN_CAL_DT",
+        "column": null,
+        "derived": [
+          "WEEK_BEG_DT"
+        ]
+      },
+      {
+        "name": "USER_DEFINED_FIELD1",
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "column": null,
+        "derived": [
+          "USER_DEFINED_FIELD1"
+        ]
+      },
+      {
+        "name": "USER_DEFINED_FIELD3",
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "column": null,
+        "derived": [
+          "USER_DEFINED_FIELD3"
+        ]
+      },
+      {
+        "name": "META_CATEG_NAME",
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "column": "META_CATEG_NAME",
+        "derived": null
+      },
+      {
+        "name": "CATEG_LVL2_NAME",
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "column": "CATEG_LVL2_NAME",
+        "derived": null
+      },
+      {
+        "name": "CATEG_LVL3_NAME",
+        "table": "KYLIN_CATEGORY_GROUPINGS",
+        "column": "CATEG_LVL3_NAME",
+        "derived": null
+      },
+      {
+        "name": "LSTG_FORMAT_NAME",
+        "table": "KYLIN_SALES",
+        "column": "LSTG_FORMAT_NAME",
+        "derived": null
+      },
+      {
+        "name": "SELLER_ID",
+        "table": "KYLIN_SALES",
+        "column": "SELLER_ID",
+        "derived": null
+      },
+      {
+        "name": "BUYER_ID",
+        "table": "KYLIN_SALES",
+        "column": "BUYER_ID",
+        "derived": null
+      },
+      {
+        "name": "ACCOUNT_BUYER_LEVEL",
+        "table": "BUYER_ACCOUNT",
+        "column": "ACCOUNT_BUYER_LEVEL",
+        "derived": null
+      },
+      {
+        "name": "ACCOUNT_SELLER_LEVEL",
+        "table": "SELLER_ACCOUNT",
+        "column": "ACCOUNT_SELLER_LEVEL",
+        "derived": null
+      },
+      {
+        "name": "BUYER_COUNTRY",
+        "table": "BUYER_ACCOUNT",
+        "column": "ACCOUNT_COUNTRY",
+        "derived": null
+      },
+      {
+        "name": "SELLER_COUNTRY",
+        "table": "SELLER_ACCOUNT",
+        "column": "ACCOUNT_COUNTRY",
+        "derived": null
+      },
+      {
+        "name": "BUYER_COUNTRY_NAME",
+        "table": "BUYER_COUNTRY",
+        "column": "NAME",
+        "derived": null
+      },
+      {
+        "name": "SELLER_COUNTRY_NAME",
+        "table": "SELLER_COUNTRY",
+        "column": "NAME",
+        "derived": null
+      },
+      {
+        "name": "OPS_USER_ID",
+        "table": "KYLIN_SALES",
+        "column": "OPS_USER_ID",
+        "derived": null
+      },
+      {
+        "name": "OPS_REGION",
+        "table": "KYLIN_SALES",
+        "column": "OPS_REGION",
+        "derived": null
+      }
+    ],
+    "measures": [
+      {
+        "name": "GMV_SUM",
+        "function": {
+          "expression": "SUM",
+          "parameter": {
+            "type": "column",
+            "value": "KYLIN_SALES.PRICE"
+          },
+          "returntype": "decimal(19,4)"
+        }
+      },
+      {
+        "name": "BUYER_LEVEL_SUM",
+        "function": {
+          "expression": "SUM",
+          "parameter": {
+            "type": "column",
+            "value": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL"
+          },
+          "returntype": "bigint"
+        }
+      },
+      {
+        "name": "SELLER_LEVEL_SUM",
+        "function": {
+          "expression": "SUM",
+          "parameter": {
+            "type": "column",
+            "value": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+          },
+          "returntype": "bigint"
+        }
+      },
+      {
+        "name": "TRANS_CNT",
+        "function": {
+          "expression": "COUNT",
+          "parameter": {
+            "type": "constant",
+            "value": "1"
+          },
+          "returntype": "bigint"
+        }
+      },
+      {
+        "name": "SELLER_CNT_HLL",
+        "function": {
+          "expression": "COUNT_DISTINCT",
+          "parameter": {
+            "type": "column",
+            "value": "KYLIN_SALES.SELLER_ID"
+          },
+          "returntype": "hllc(10)"
+        }
+      },
+      {
+        "name": "TOP_SELLER",
+        "function": {
+          "expression": "TOP_N",
+          "parameter": {
+            "type": "column",
+            "value": "KYLIN_SALES.PRICE",
+            "next_parameter": {
+              "type": "column",
+              "value": "KYLIN_SALES.SELLER_ID"
+            }
+          },
+          "returntype": "topn(100)",
+          "configuration": {
+            "topn.encoding.KYLIN_SALES.SELLER_ID": "dict",
+            "topn.encoding_version.KYLIN_SALES.SELLER_ID": "1"
+          }
+        }
+      }
+    ],
+    "rowkey": {
+      "rowkey_columns": [
+        {
+          "column": "KYLIN_SALES.BUYER_ID",
+          "encoding": "integer:4",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.SELLER_ID",
+          "encoding": "integer:4",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.TRANS_ID",
+          "encoding": "integer:4",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.PART_DT",
+          "encoding": "date",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.LEAF_CATEG_ID",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "BUYER_COUNTRY.NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "SELLER_COUNTRY.NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.LSTG_FORMAT_NAME",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.LSTG_SITE_ID",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.OPS_USER_ID",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        },
+        {
+          "column": "KYLIN_SALES.OPS_REGION",
+          "encoding": "dict",
+          "encoding_version": 1,
+          "isShardBy": false
+        }
+      ]
+    },
+    "hbase_mapping": {
+      "column_family": [
+        {
+          "name": "F1",
+          "columns": [
+            {
+              "qualifier": "M",
+              "measure_refs": [
+                "GMV_SUM",
+                "BUYER_LEVEL_SUM",
+                "SELLER_LEVEL_SUM",
+                "TRANS_CNT"
+              ]
+            }
+          ]
+        },
+        {
+          "name": "F2",
+          "columns": [
+            {
+              "qualifier": "M",
+              "measure_refs": [
+                "SELLER_CNT_HLL",
+                "TOP_SELLER"
+              ]
+            }
+          ]
+        }
+      ]
+    },
+    "aggregation_groups": [
+      {
+        "includes": [
+          "KYLIN_SALES.PART_DT",
+          "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+          "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+          "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+          "KYLIN_SALES.LEAF_CATEG_ID",
+          "KYLIN_SALES.LSTG_FORMAT_NAME",
+          "KYLIN_SALES.LSTG_SITE_ID",
+          "KYLIN_SALES.OPS_USER_ID",
+          "KYLIN_SALES.OPS_REGION",
+          "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+          "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
+          "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+          "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+          "BUYER_COUNTRY.NAME",
+          "SELLER_COUNTRY.NAME"
+        ],
+        "select_rule": {
+          "hierarchy_dims": [
+            [
+              "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
+              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
+              "KYLIN_SALES.LEAF_CATEG_ID"
+            ]
+          ],
+          "mandatory_dims": [
+            "KYLIN_SALES.PART_DT"
+          ],
+          "joint_dims": [
+            [
+              "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
+              "BUYER_COUNTRY.NAME"
+            ],
+            [
+              "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
+              "SELLER_COUNTRY.NAME"
+            ],
+            [
+              "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
+              "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
+            ],
+            [
+              "KYLIN_SALES.LSTG_FORMAT_NAME",
+              "KYLIN_SALES.LSTG_SITE_ID"
+            ],
+            [
+              "KYLIN_SALES.OPS_USER_ID",
+              "KYLIN_SALES.OPS_REGION"
+            ]
+          ]
+        }
+      }
+    ],
+    "signature": null,
+    "notify_list": [],
+    "status_need_notify": [],
+    "partition_date_start": 0,
+    "partition_date_end": 3153600000000,
+    "auto_merge_time_ranges": [],
+    "volatile_range": 0,
+    "retention_range": 0,
+    "engine_type": 2,
+    "storage_type": 2,
+    "override_kylin_properties": {
+      "kylin.cube.aggrgroup.is-mandatory-only-valid": "true",
+      "kylin.engine.spark.rdd-partition-cut-mb": "500"
+    },
+    "cuboid_black_list": [],
+    "parent_forward": 3,
+    "mandatory_dimension_set_list": [],
+    "snapshot_table_desc_list": []
+  }
+}
\ No newline at end of file
diff --git a/build/CI/testing/env/default/default.properties b/build/CI/testing/env/default/default.properties
new file mode 100644
index 0000000..461ec37
--- /dev/null
+++ b/build/CI/testing/env/default/default.properties
@@ -0,0 +1,25 @@
+# default.properties
+# properties set here will be available to the test execution as environment variables
+
+# sample_key = sample_value
+
+#The path to the gauge reports directory. Should be either relative to the project directory or an absolute path
+gauge_reports_dir = reports
+
+#Set as false if gauge reports should not be overwritten on each execution. A new time-stamped directory will be created on each execution.
+overwrite_reports = true
+
+# Set to false to disable screenshots on failure in reports.
+screenshot_on_failure = false
+
+# The path to the gauge logs directory. Should be either relative to the project directory or an absolute path
+logs_directory = logs
+
+# The path to the gauge specifications directory. Takes a comma separated list of specification files/directories.
+gauge_specs_dir = features/specs
+
+# The default delimiter used to read csv files.
+csv_delimiter = ,
+
+# Allows steps to be written in multiline
+allow_multiline_step = false
\ No newline at end of file
diff --git a/build/CI/testing/env/default/python.properties b/build/CI/testing/env/default/python.properties
new file mode 100644
index 0000000..077d659
--- /dev/null
+++ b/build/CI/testing/env/default/python.properties
@@ -0,0 +1,4 @@
+GAUGE_PYTHON_COMMAND = python3
+
+# Comma separated list of dirs. Path should be relative to project root.
+STEP_IMPL_DIR = features/step_impl
diff --git a/build/CI/testing/features/specs/authentication/authentication_0001.spec b/build/CI/testing/features/specs/authentication/authentication_0001.spec
new file mode 100644
index 0000000..b915e26
--- /dev/null
+++ b/build/CI/testing/features/specs/authentication/authentication_0001.spec
@@ -0,0 +1,18 @@
+# Authentication Test
+Tags:front-end
+## Prepare browser
+
+* Initialize "chrome" browser and connect to "kylin_instance.yml"
+
+## Use the user name and password for user authentication
+
+* Authentication with user "test" and password "password".
+
+* Authentication with built-in user
+     |User   |Password      |
+     |-------|--------------|
+     |ADMIN  |KYLIN         |
+
+
+
+
diff --git a/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec b/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec
new file mode 100644
index 0000000..5ce3f08
--- /dev/null
+++ b/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec
@@ -0,0 +1,5 @@
+# Read and write separation deployment
+Tags: 4.x
+
+## Prepare env
+* Get kylin instance
diff --git a/build/CI/testing/features/specs/generic_test.spec b/build/CI/testing/features/specs/generic_test.spec
new file mode 100644
index 0000000..95e68fa
--- /dev/null
+++ b/build/CI/testing/features/specs/generic_test.spec
@@ -0,0 +1,63 @@
+# Kylin Release Test
+Tags:3.x
+## Prepare env
+* Get kylin instance
+
+* prepare data file from "release_test_0001.json"
+
+* Create project "release_test_0001_project" and load table "load_table_list"
+
+
+## MR engine
+
+* Create model with "model_desc_data" in "release_test_0001_project"
+
+* Create cube with "cube_desc_data" in "release_test_0001_project", cube name is "release_test_0001_cube"
+
+* Build segment from "1325347200000" to "1356969600000" in "release_test_0001_cube"
+
+* Build segment from "1356969600000" to "1391011200000" in "release_test_0001_cube"
+
+* Merge cube "release_test_0001_cube" segment from "1325347200000" to "1391011200000"
+
+
+## SPARK engine
+
+* Clone cube "release_test_0001_cube" and name it "kylin_spark_cube" in "release_test_0001_project", modify build engine to "SPARK"
+
+* Build segment from "1325347200000" to "1356969600000" in "kylin_spark_cube"
+
+* Build segment from "1356969600000" to "1391011200000" in "kylin_spark_cube"
+
+* Merge cube "kylin_spark_cube" segment from "1325347200000" to "1391011200000"
+
+
+## FLINK engine
+
+* Clone cube "release_test_0001_cube" and name it "kylin_flink_cube" in "release_test_0001_project", modify build engine to "FLINK"
+
+* Build segment from "1325347200000" to "1356969600000" in "kylin_flink_cube"
+
+* Build segment from "1356969600000" to "1391011200000" in "kylin_flink_cube"
+
+* Merge cube "kylin_flink_cube" segment from "1325347200000" to "1391011200000"
+
+
+## Query cube and pushdown
+
+* Query SQL "select count(*) from kylin_sales" and specify "release_test_0001_cube" cube to query in "release_test_0001_project", compare result with "10000"
+
+* Query SQL "select count(*) from kylin_sales" and specify "kylin_spark_cube" cube to query in "release_test_0001_project", compare result with "10000"
+
+* Query SQL "select count(*) from kylin_sales" and specify "kylin_flink_cube" cube to query in "release_test_0001_project", compare result with "10000"
+
+* Disable cube "release_test_0001_cube"
+
+* Disable cube "kylin_spark_cube"
+
+* Disable cube "kylin_flink_cube"
+
+* Query SQL "select count(*) from kylin_sales" in "release_test_0001_project" and pushdown, compare result with "10000"
+
+
+
diff --git a/build/CI/testing/features/specs/sample.spec b/build/CI/testing/features/specs/sample.spec
new file mode 100644
index 0000000..bb9c9f5
--- /dev/null
+++ b/build/CI/testing/features/specs/sample.spec
@@ -0,0 +1,5 @@
+# test
+Tags:test,3.x,4.x
+## test
+* Get kylin instance
+* Query sql "select count(*) from kylin_sales" in "generic_test_project" and compare result with pushdown result
diff --git a/build/CI/testing/features/step_impl/authentication/authentication.py b/build/CI/testing/features/step_impl/authentication/authentication.py
new file mode 100644
index 0000000..044d1e2
--- /dev/null
+++ b/build/CI/testing/features/step_impl/authentication/authentication.py
@@ -0,0 +1,37 @@
+from time import sleep
+
+from getgauge.python import step
+from kylin_utils import util
+
+
+class LoginTest:
+
+    @step("Initialize <browser_type> browser and connect to <file_name>")
+    def setup_browser(self, browser_type, file_name):
+        global browser
+        browser = util.setup_browser(browser_type=browser_type)
+
+        browser.get(util.kylin_url(file_name))
+        sleep(3)
+
+        browser.refresh()
+        browser.set_window_size(1400, 800)
+
+    @step("Authentication with user <user> and password <password>.")
+    def assert_authentication_failed(self, user, password):
+        browser.find_element_by_id("username").clear()
+        browser.find_element_by_id("username").send_keys(user)
+        browser.find_element_by_id("password").clear()
+        browser.find_element_by_id("password").send_keys(password)
+
+        browser.find_element_by_class_name("bigger-110").click()
+
+    @step("Authentication with built-in user <table>")
+    def assert_authentication_success(self, table):
+        for i in range(1, 2):
+            user = table.get_row(i)
+            browser.find_element_by_id("username").clear()
+            browser.find_element_by_id("username").send_keys(user[0])
+            browser.find_element_by_id("password").clear()
+            browser.find_element_by_id("password").send_keys(user[1])
+            browser.find_element_by_class_name("bigger-110").click()
diff --git a/build/CI/testing/features/step_impl/before_suite.py b/build/CI/testing/features/step_impl/before_suite.py
new file mode 100644
index 0000000..4cce795
--- /dev/null
+++ b/build/CI/testing/features/step_impl/before_suite.py
@@ -0,0 +1,44 @@
+from getgauge.python import before_suite
+import os
+import json
+
+from kylin_utils import util
+
+
+@before_suite()
+def create_generic_model_and_cube():
+    client = util.setup_instance('kylin_instance.yml')
+    if client.version == '3.x':
+        with open(os.path.join('data/generic_desc_data', 'generic_desc_data_3x.json'), 'r') as f:
+            data = json.load(f)
+    elif client.version == '4.x':
+        with open(os.path.join('data/generic_desc_data', 'generic_desc_data_4x.json'), 'r') as f:
+            data = json.load(f)
+
+    project_name = client.generic_project
+    if not util.if_project_exists(kylin_client=client, project=project_name):
+        client.create_project(project_name)
+
+    tables = data.get('load_table_list')
+    resp = client.load_table(project_name=project_name, tables=tables)
+    assert ",".join(resp["result.loaded"]) == tables
+
+    model_desc_data = data.get('model_desc_data')
+    model_name = model_desc_data.get('name')
+
+    if not util.if_model_exists(kylin_client=client, model_name=model_name, project=project_name):
+        resp = client.create_model(project_name=project_name, 
+                                   model_name=model_name, 
+                                   model_desc_data=model_desc_data)
+        assert json.loads(resp['modelDescData'])['name'] == model_name
+
+    cube_desc_data = data.get('cube_desc_data')
+    cube_name = cube_desc_data.get('name')
+    if not util.if_cube_exists(kylin_client=client, cube_name=cube_name, project=project_name):
+        resp = client.create_cube(project_name=project_name,
+                                  cube_name=cube_name,
+                                  cube_desc_data=cube_desc_data)
+        assert json.loads(resp['cubeDescData'])['name'] == cube_name
+    if client.get_cube_instance(cube_name=cube_name).get('status') != 'READY':
+        resp = client.full_build_cube(cube_name=cube_name)
+        assert client.await_job_finished(job_id=resp['uuid'], waiting_time=20)
diff --git a/build/CI/testing/features/step_impl/generic_test_step.py b/build/CI/testing/features/step_impl/generic_test_step.py
new file mode 100644
index 0000000..cf04d55
--- /dev/null
+++ b/build/CI/testing/features/step_impl/generic_test_step.py
@@ -0,0 +1,98 @@
+from getgauge.python import step
+import os
+import json
+
+from kylin_utils import util
+
+
+@step("Get kylin instance")
+def get_kylin_instance_with_config_file():
+    global client
+    client = util.setup_instance('kylin_instance.yml')
+
+
+@step("prepare data file from <release_test_0001.json>")
+def prepare_data_file_from(file_name):
+    global data
+    with open(os.path.join('data', file_name), 'r') as f:
+        data = json.load(f)
+
+
+@step("Create project <project_name> and load table <tables>")
+def prepare_project_step(project_name, tables):
+    client.create_project(project_name=project_name)
+    tables = data.get(tables)
+    resp = client.load_table(project_name=project_name, tables=tables)
+    assert ",".join(resp["result.loaded"]) == tables
+
+
+@step("Create model with <model_desc> in <project>")
+def create_model_step(model_desc, project):
+    model_name = data.get(model_desc).get('name')
+    model_desc_data = data.get(model_desc)
+
+    resp = client.create_model(project_name=project,
+                               model_name=model_name,
+                               model_desc_data=model_desc_data)
+    assert json.loads(resp['modelDescData'])['name'] == model_name
+
+
+@step("Create cube with <cube_desc> in <prpject>, cube name is <cube_name>")
+def create_cube_step(cube_desc, project, cube_name):
+    resp = client.create_cube(project_name=project,
+                              cube_name=cube_name,
+                              cube_desc_data=data.get(cube_desc))
+    assert json.loads(resp['cubeDescData'])['name'] == cube_name
+
+
+@step("Build segment from <start_time> to <end_time> in <cube_name>")
+def build_first_segment_step(start_time, end_time, cube_name):
+    resp = client.build_segment(start_time=start_time,
+                                end_time=end_time,
+                                cube_name=cube_name)
+    assert client.await_job_finished(job_id=resp['uuid'], waiting_time=20)
+
+
+@step("Merge cube <cube_name> segment from <start_name> to <end_time>")
+def merge_segment_step(cube_name, start_time, end_time):
+    resp = client.merge_segment(cube_name=cube_name,
+                                start_time=start_time,
+                                end_time=end_time)
+    assert client.await_job_finished(job_id=resp['uuid'], waiting_time=20)
+
+
+@step("Clone cube <old_cube_name> and name it <new_cube_name> in <project>, modify build engine to <engine>")
+def clone_cube_step(old_cube_name, new_cube_name, project, build_engine):
+    resp = client.clone_cube(cube_name=old_cube_name,
+                             new_cube_name=new_cube_name,
+                             project_name=project)
+    assert resp.get('name') == new_cube_name
+    client.update_cube_engine(cube_name=new_cube_name,
+                              engine_type=build_engine)
+
+
+@step("Query SQL <SQL> and specify <cube_name> cube to query in <project>, compare result with <result>")
+def query_cube_step(sql, cube_name, project, result):
+    resp = client.execute_query(cube_name=cube_name,
+                                project_name=project,
+                                sql=sql)
+    assert resp.get('isException') is False
+    assert resp.get('results')[0][0] == result
+    assert resp.get('cube') == 'CUBE[name=' + cube_name + ']'
+    assert resp.get('pushDown') is False
+
+
+@step("Disable cube <cube_name>")
+def disable_cube_step(cube_name):
+    resp = client.disable_cube(cube_name=cube_name)
+    assert resp.get('status') == 'DISABLED'
+
+
+@step("Query SQL <SQL> in <project> and pushdown, compare result with <result>")
+def query_pushdown_step(sql, project, result):
+    resp = client.execute_query(project_name=project, sql=sql)
+    assert resp.get('isException') is False
+    assert resp.get('results')[0][0] == result
+    assert resp.get('cube') == ''
+    assert resp.get('pushDown') is True
+
diff --git a/build/CI/testing/features/step_impl/read_write_separation/read_write_separation.py b/build/CI/testing/features/step_impl/read_write_separation/read_write_separation.py
new file mode 100644
index 0000000..e69de29
diff --git a/build/CI/testing/features/step_impl/sample.py b/build/CI/testing/features/step_impl/sample.py
new file mode 100644
index 0000000..d7ac9bb
--- /dev/null
+++ b/build/CI/testing/features/step_impl/sample.py
@@ -0,0 +1,14 @@
+from getgauge.python import step, before_spec
+from kylin_utils import util
+from kylin_utils import equals
+
+
+@before_spec()
+def before_spec_hook():
+    global client
+    client = util.setup_instance("kylin_instance.yml")
+
+
+@step("Query sql <select count(*) from kylin_sales> in <project> and compare result with pushdown result")
+def query_sql_and_compare_result_with_pushdown_result(sql, project):
+    equals.compare_sql_result(sql=sql, project=project, kylin_client=client)
diff --git a/build/CI/testing/kylin_instances/kylin_instance.yml b/build/CI/testing/kylin_instances/kylin_instance.yml
new file mode 100644
index 0000000..ca76d00
--- /dev/null
+++ b/build/CI/testing/kylin_instances/kylin_instance.yml
@@ -0,0 +1,7 @@
+---
+# All mode
+- host: kylin-all
+  port: 7070
+  version: 3.x
+  hadoop_platform: HDP2.4
+  deploy_mode: ALL
\ No newline at end of file
diff --git a/build/CI/testing/kylin_utils/basic.py b/build/CI/testing/kylin_utils/basic.py
new file mode 100644
index 0000000..cd8e416
--- /dev/null
+++ b/build/CI/testing/kylin_utils/basic.py
@@ -0,0 +1,90 @@
+import logging
+import requests
+
+
+class BasicHttpClient:
+    _host = None
+    _port = None
+
+    _headers = {}
+
+    _auth = ('ADMIN', 'KYLIN')
+
+    _inner_session = requests.Session()
+
+    def __init__(self, host, port):
+        if not host or not port:
+            raise ValueError('init http client failed')
+
+        self._host = host
+        self._port = port
+
+    def token(self, token):
+        self._headers['Authorization'] = 'Basic {token}'.format(token=token)
+
+    def auth(self, username, password):
+        self._auth = (username, password)
+
+    def header(self, name, value):
+        self._headers[name] = value
+
+    def headers(self, headers):
+        self._headers = headers
+
+    def _request(self, method, url, params=None, data=None, json=None,  # pylint: disable=too-many-arguments
+                 files=None, headers=None, stream=False, to_json=True, inner_session=False, timeout=60):
+        if inner_session:
+            return self._request_with_session(self._inner_session, method, url,
+                                              params=params,
+                                              data=data,
+                                              json=json,
+                                              files=files,
+                                              headers=headers,
+                                              stream=stream,
+                                              to_json=to_json,
+                                              timeout=timeout)
+        with requests.Session() as session:
+            session.auth = self._auth
+            return self._request_with_session(session, method, url,
+                                              params=params,
+                                              data=data,
+                                              json=json,
+                                              files=files,
+                                              headers=headers,
+                                              stream=stream,
+                                              to_json=to_json,
+                                              timeout=timeout)
+
+    def _request_with_session(self, session, method, url, params=None, data=None,  # pylint: disable=too-many-arguments
+                              json=None, files=None, headers=None, stream=False, to_json=True, timeout=60):
+        if headers is None:
+            headers = self._headers
+        resp = session.request(method, url,
+                               params=params,
+                               data=data,
+                               json=json,
+                               files=files,
+                               headers=headers,
+                               stream=stream,
+                               timeout=timeout
+                               )
+
+        try:
+            if stream:
+                return resp.raw
+            if not resp.content:
+                return None
+
+            if to_json:
+                data = resp.json()
+                resp.raise_for_status()
+                return data
+            return resp.text
+        except requests.HTTPError as http_error:
+            err_msg = f"{str(http_error)} [{data.get('msg', '')}]\n" \
+                      f"{data.get('stacktrace', '')}"
+            logging.error(err_msg)
+            raise requests.HTTPError(err_msg, request=http_error.request, response=http_error.response, )
+        except Exception as error:
+            logging.error(str(error))
+            raise error
diff --git a/build/CI/testing/kylin_utils/equals.py b/build/CI/testing/kylin_utils/equals.py
new file mode 100644
index 0000000..f610f4e
--- /dev/null
+++ b/build/CI/testing/kylin_utils/equals.py
@@ -0,0 +1,204 @@
+import logging
+from kylin_utils import util
+
+_array_types = (list, tuple, set)
+_object_types = (dict, )
+
+
+def api_response_equals(actual, expected, ignore=None):
+    if ignore is None:
+        ignore = []
+
+    def _get_value(ignore):
+        def get_value(key, container):
+            if isinstance(container, _object_types):
+                return container.get(key)
+            if isinstance(container, _array_types):
+                errmsg = ''
+                for item in container:
+                    try:
+                        api_response_equals(item, key, ignore=ignore)
+                        return item
+                    except AssertionError as e:
+                        errmsg += str(e) + '\n'
+                raise AssertionError(errmsg)
+
+            return None
+
+        return get_value
+
+    getvalue = _get_value(ignore)
+    assert_failed = AssertionError(
+        f'assert json failed, expected: [{expected}], actual: [{actual}]')
+
+    if isinstance(expected, _array_types):
+        if not isinstance(actual, _array_types):
+            raise assert_failed
+        for item in expected:
+            api_response_equals(getvalue(item, actual), item, ignore=ignore)
+
+    elif isinstance(expected, _object_types):
+        if not isinstance(actual, _object_types):
+            raise assert_failed
+        for key, value in expected.items():
+            if key not in ignore:
+                api_response_equals(getvalue(key, actual),
+                                    value,
+                                    ignore=ignore)
+            else:
+                if key not in actual:
+                    raise assert_failed
+    else:
+        if actual != expected:
+            raise assert_failed
+
+
+INTEGER_FAMILY = ['TINYINT', 'SMALLINT', 'INTEGER', 'BIGINT']
+
+FRACTION_FAMILY = ['DECIMAL', 'DOUBLE', 'FLOAT']
+
+STRING_FAMILY = ['CHAR', 'VARCHAR', 'STRING']
+
+
+def _is_family(datatype1, datatype2):
+    if datatype1 in STRING_FAMILY and datatype2 in STRING_FAMILY:
+        return True
+    if datatype1 in FRACTION_FAMILY and datatype2 in FRACTION_FAMILY:
+        return True
+    if datatype1 in INTEGER_FAMILY and datatype2 in INTEGER_FAMILY:
+        return True
+    return datatype1 == datatype2
+
+
+class _Row(tuple):
+    def __init__(self, values, types):  # pylint: disable=unused-argument
+        tuple.__init__(self)
+        if len(values) != len(types):
+            raise ValueError('values and types must have the same length')
+
+        self._types = types
+
+        self._has_fraction = False
+        for datatype in self._types:
+            if datatype in FRACTION_FAMILY:
+                self._has_fraction = True
+
+    def __new__(cls, values, types):  # pylint: disable=unused-argument
+        return tuple.__new__(cls, values)
+
+    def __eq__(self, other):
+        if not self._has_fraction or not other._has_fraction:
+            return tuple.__eq__(self, other)
+
+        if len(self._types) != len(other._types):
+            return False
+
+        for i in range(len(self._types)):
+            stype = self._types[i]
+            otype = other._types[i]
+
+            if not _is_family(stype, otype):
+                return False
+
+            svalue = self[i]
+            ovalue = other[i]
+
+            if stype in FRACTION_FAMILY:
+                fsvalue = float(svalue)
+                fovalue = float(ovalue)
+
+                diff = abs(fsvalue - fovalue)
+
+                rate = diff / min(fsvalue, fovalue
+                                  ) if fsvalue != 0 and fovalue != 0 else diff
+                if abs(rate) > 0.01:
+                    return False
+
+            else:
+                if svalue != ovalue:
+                    return False
+
+        return True
+
+    def __hash__(self):
+        # Always use __eq__ to compare
+        return 0
+
+
+def query_result_equals(expect_resp, actual_resp):
+    expect_column_types = [
+        x['columnTypeName'] for x in expect_resp['columnMetas']
+    ]
+    expect_result = [[y.strip() if y else y for y in x]
+                     for x in expect_resp['results']]
+
+    actual_column_types = [
+        x['columnTypeName'] for x in actual_resp['columnMetas']
+    ]
+    actual_result = [[y.strip() if y else y for y in x]
+                     for x in actual_resp['results']]
+
+    if len(expect_column_types) != len(actual_column_types):
+        logging.error('column count assert failed [%s,%s]',
+                      len(expect_column_types), len(actual_column_types))
+        return False
+
+    return dataset_equals(expect_result, actual_result, expect_column_types,
+                          actual_column_types)
+
+
+def dataset_equals(expect,
+                   actual,
+                   expect_col_types=None,
+                   actual_col_types=None):
+    if len(expect) != len(actual):
+        logging.error('row count assert failed [%s,%s]', len(expect),
+                      len(actual))
+        return False
+
+    if expect_col_types is None:
+        expect_col_types = ['VARCHAR'] * len(expect[0])
+    expect_set = set()
+    for values in expect:
+        expect_set.add(_Row(values, expect_col_types))
+
+    if actual_col_types is None:
+        actual_col_types = expect_col_types if expect_col_types else [
+            'VARCHAR'
+        ] * len(actual[0])
+    actual_set = set()
+    for values in actual:
+        actual_set.add(_Row(values, actual_col_types))
+
+    assert_result = expect_set ^ actual_set
+    if assert_result:
+        logging.error('diff[%s]', len(assert_result))
+        if len(assert_result) < 20:
+            print(assert_result)
+        return False
+
+    return True
+
+
+def compare_sql_result(sql, project, kylin_client, cube=None):
+    pushdown_project = kylin_client.pushdown_project
+    if not util.if_project_exists(kylin_client=kylin_client, project=pushdown_project):
+        kylin_client.create_project(project_name=pushdown_project)
+
+    hive_tables = kylin_client.list_hive_tables(project_name=project)
+    if hive_tables is not None:
+        for table_info in kylin_client.list_hive_tables(project_name=project):
+            if table_info.get('source_type') == 0:
+                kylin_client.load_table(project_name=pushdown_project,
+                                        tables='{database}.{table}'.format(
+                                            database=table_info.get('database'),
+                                            table=table_info.get('name')))
+    kylin_resp = kylin_client.execute_query(cube_name=cube,
+                                            project_name=project,
+                                            sql=sql)
+    assert kylin_resp.get('isException') is False
+
+    pushdown_resp = kylin_client.execute_query(project_name=pushdown_project, sql=sql)
+    assert pushdown_resp.get('isException') is False
+
+    assert query_result_equals(kylin_resp, pushdown_resp)
\ No newline at end of file
diff --git a/build/CI/testing/kylin_utils/kylin.py b/build/CI/testing/kylin_utils/kylin.py
new file mode 100644
index 0000000..252ce21
--- /dev/null
+++ b/build/CI/testing/kylin_utils/kylin.py
@@ -0,0 +1,826 @@
+import json
+import logging
+import time
+import random
+
+import requests
+
+from .basic import BasicHttpClient
+
+
+class KylinHttpClient(BasicHttpClient):  # pylint: disable=too-many-public-methods
+    _base_url = 'http://{host}:{port}/kylin/api'
+
+    def __init__(self, host, port, version):
+        super().__init__(host, port)
+
+        self._headers = {
+            'Content-Type': 'application/json;charset=utf-8'
+        }
+
+        self._base_url = self._base_url.format(host=self._host, port=self._port)
+        self.generic_project = "generic_test_project"
+        self.pushdown_project = "pushdown_test_project"
+        self.version = version
+
+    def login(self, username, password):
+        self._inner_session.request('POST', self._base_url + '/user/authentication', auth=(username, password))
+        return self._request('GET', '/user/authentication', inner_session=True)
+
+    def check_login_state(self):
+        return self._request('GET', '/user/authentication', inner_session=True)
+
+    def get_session(self):
+        return self._inner_session
+
+    def logout(self):
+        self._inner_session = requests.Session()
+
+    def list_projects(self, limit=100, offset=0):
+        params = {'limit': limit, 'offset': offset}
+        resp = self._request('GET', '/projects', params=params)
+        return resp
+
+    def create_project(self, project_name, description=None, override_kylin_properties=None):
+        data = {'name': project_name,
+                'description': description,
+                'override_kylin_properties': override_kylin_properties,
+                }
+        payload = {
+            'projectDescData': json.dumps(data),
+        }
+        resp = self._request('POST', '/projects', json=payload)
+        return resp
+
+    def update_project(self, project_name, description=None, override_kylin_properties=None):
+        """
+        :param project_name: project name
+        :param description: description of project
+        :param override_kylin_properties: the kylin properties that need to be overridden
+        :return:
+        """
+        data = {'name': project_name,
+                'description': description,
+                'override_kylin_properties': override_kylin_properties,
+                }
+        payload = {
+            'formerProjectName': project_name,
+            'projectDescData': json.dumps(data),
+        }
+        resp = self._request('PUT', '/projects', json=payload)
+        return resp
+
+    def delete_project(self, project_name, force=False):
+        """
+        Delete project API. Before deleting a project, make sure it contains no models or cubes,
+        or pass force=True to remove them first.
+        :param project_name: project name
+        :param force: if True, delete the project's cubes and models before deleting the project
+        :return:
+        """
+        if force:
+            cubes = self.list_cubes(project_name)
+            logging.debug("Cubes to be deleted: %s", cubes)
+            while cubes:
+                for cube in cubes:
+                    self.delete_cube(cube['name'])
+                cubes = self.list_cubes(project_name)
+            models = self.list_model_desc(project_name)
+            logging.debug("Models to be deleted: %s", models)
+            while models:
+                for model in models:
+                    self.delete_model(model['name'])
+                models = self.list_model_desc(project_name)
+        url = '/projects/{project}'.format(project=project_name)
+        resp = self._request('DELETE', url)
+        return resp
+
+    def load_table(self, project_name, tables, calculate=True):
+        """
+        load or reload table api
+        :param calculate: Default is True
+        :param project_name: project name
+        :param tables: comma-separated table names, for instance, 'DEFAULT.KYLIN_FACT,DEFAULT.KYLIN_SALES'
+        :return:
+        """
+        # workaround of #15337
+        # time.sleep(random.randint(5, 10))
+        url = '/tables/{tables}/{project}/'.format(tables=tables, project=project_name)
+        payload = {'calculate': calculate
+                   }
+        resp = self._request('POST', url, json=payload)
+        return resp
+
+    def unload_table(self, project_name, tables):
+        url = '/tables/{tables}/{project}'.format(tables=tables, project=project_name)
+        resp = self._request('DELETE', url)
+        return resp
+
+    def list_hive_tables(self, project_name, extension=False, user_session=False):
+        """
+        :param project_name: project name
+        :param extension: specify whether the table's extension information is returned
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/tables'
+        params = {'project': project_name, 'ext': extension}
+        resp = self._request('GET', url, params=params, inner_session=user_session)
+        return resp
+
+    def get_table_info(self, project_name, table_name):
+        """
+        :param project_name: project name
+        :param table_name: table name
+        :return: hive table information
+        """
+        url = '/tables/{project}/{table}'.format(project=project_name, table=table_name)
+        resp = self._request('GET', url)
+        return resp
+
+    def get_tables_info(self, project_name, ext='true'):
+        url = '/tables'
+        params = {'project': project_name, 'ext': ext}
+        resp = self._request('GET', url, params=params)
+        return resp
+
+    def get_table_streaming_config(self, project_name, table_name, limit=100, offset=0):
+        params = {'table': table_name, 'project': project_name, 'limit': limit, 'offset': offset}
+        resp = self._request('GET', '/streaming/getConfig', params=params)
+        return resp
+
+    def load_kafka_table(self, project_name, kafka_config, streaming_config, table_data, message=None):
+        url = '/streaming'
+        payload = {'project': project_name,
+                   'kafkaConfig': json.dumps(kafka_config),
+                   'streamingConfig': json.dumps(streaming_config),
+                   'tableData': json.dumps(table_data),
+                   'message': message}
+        resp = self._request('POST', url, json=payload)
+        return resp
+
+    def update_kafka_table(self, project_name, kafka_config, streaming_config, table_data, cluster_index=0):
+        url = '/streaming'
+        payload = {'project': project_name,
+                   'kafkaConfig': kafka_config,
+                   'streamingConfig': streaming_config,
+                   'tableData': table_data,
+                   'clusterIndex': cluster_index}
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def list_model_desc(self, project_name=None, model_name=None, limit=100, offset=0):
+        """
+        :param offset: offset of returned result
+        :param limit: quantity of returned result per page
+        :param project_name: project name
+        :param model_name: model name
+        :return: model desc list
+        """
+        params = {'limit': limit,
+                  'offset': offset,
+                  'modelName': model_name,
+                  'projectName': project_name
+                  }
+        resp = self._request('GET', '/models', params=params)
+        return resp
+
+    def create_model(self, project_name, model_name, model_desc_data, user_session=False):
+        url = '/models'
+        payload = {
+            'project': project_name,
+            'model': model_name,
+            'modelDescData': json.dumps(model_desc_data)
+        }
+        logging.debug("Current payload for creating model is %s", payload)
+        resp = self._request('POST', url, json=payload, inner_session=user_session)
+        return resp
+
+    def update_model(self, project_name, model_name, model_desc_data, user_session=False):
+        url = '/models'
+        payload = {
+            'project': project_name,
+            'model': model_name,
+            'modelDescData': json.dumps(model_desc_data)
+        }
+        resp = self._request('PUT', url, json=payload, inner_session=user_session)
+        return resp
+
+    def clone_model(self, project_name, model_name, new_model_name):
+        url = '/models/{model}/clone'.format(model=model_name)
+        payload = {'modelName': new_model_name, 'project': project_name}
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def delete_model(self, model_name):
+        url = '/models/{model}'.format(model=model_name)
+        # return value is None here
+        return self._request('DELETE', url)
+
+    def get_cube_desc(self, cube_name):
+        url = '/cube_desc/{cube}'.format(cube=cube_name)
+        resp = self._request('GET', url)
+        return resp
+
+    def list_cubes(self, project=None, offset=0, limit=10000, cube_name=None, model_name=None, user_session=False):
+        params = {'projectName': project, 'offset': offset, 'limit': limit,
+                  'cubeName': cube_name, 'modelName': model_name}
+        resp = self._request('GET', '/cubes/', params=params, inner_session=user_session)
+        return resp
+
+    def get_cube_instance(self, cube_name):
+        url = '/cubes/{cube}'.format(cube=cube_name)
+        resp = self._request('GET', url)
+        return resp
+
+    def create_cube(self, project_name, cube_name, cube_desc_data, user_session=False):
+        # workaround of #15337
+        time.sleep(random.randint(5, 10))
+        url = '/cubes'
+        payload = {
+            'project': project_name,
+            'cubeName': cube_name,
+            'cubeDescData': json.dumps(cube_desc_data)
+        }
+        resp = self._request('POST', url, json=payload, inner_session=user_session)
+        return resp
+
+    def update_cube(self, project_name, cube_name, cube_desc_data, user_session=False):
+        # workaround of #15337
+        time.sleep(random.randint(5, 10))
+        url = '/cubes'
+        payload = {
+            'project': project_name,
+            'cubeName': cube_name,
+            'cubeDescData': json.dumps(cube_desc_data)
+        }
+        resp = self._request('PUT', url, json=payload, inner_session=user_session)
+        return resp
+
+    def update_cube_engine(self, cube_name, engine_type):
+        url = '/cubes/{cube}/{engine}'.format(cube=cube_name, engine=engine_type)
+        resp = self._request('PUT', url)
+        return resp
+
+    def build_segment(self, cube_name, start_time, end_time, force=False):
+        """
+        :param cube_name: the name of the cube to be built
+        :param force: force submit mode
+        :param start_time: long, start time, corresponding to the timestamp in GMT format,
+        for instance, 1388534400000 corresponding to 2014-01-01 00:00:00
+        :param end_time: long, end time, corresponding to the timestamp in GMT format
+        :return:
+        """
+        url = '/cubes/{cube}/build'.format(cube=cube_name)
+        payload = {
+            'buildType': 'BUILD',
+            'startTime': start_time,
+            'endTime': end_time,
+            'force': force
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def full_build_cube(self, cube_name, force=False):
+        """
+        :param cube_name: the name of the cube to be built
+        :param force: force submit mode
+        :return:
+        """
+        return self.build_segment(cube_name, force=force, start_time=0, end_time=31556995200000)
+
+    def merge_segment(self, cube_name, start_time=0, end_time=31556995200000, force=True):
+        """
+        :param cube_name: the name of the cube to be built
+        :param force: force submit mode
+        :param start_time: long, start time, corresponding to the timestamp in GMT format,
+        for instance, 1388534400000 corresponding to 2014-01-01 00:00:00
+        :param end_time: long, end time, corresponding to the timestamp in GMT format
+        :return:
+        """
+        url = '/cubes/{cube}/build'.format(cube=cube_name)
+        payload = {
+            'buildType': 'MERGE',
+            'startTime': start_time,
+            'endTime': end_time,
+            'force': force
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def refresh_segment(self, cube_name, start_time, end_time, force=True):
+        """
+        :param cube_name: the name of the cube to be built
+        :param force: force submit mode
+        :param start_time: long, start time, corresponding to the timestamp in GMT format,
+        for instance, 1388534400000 corresponding to 2014-01-01 00:00:00
+        :param end_time: long, end time, corresponding to the timestamp in GMT format
+        :return:
+        """
+        url = '/cubes/{cube}/build'.format(cube=cube_name)
+        payload = {
+            'buildType': 'REFRESH',
+            'startTime': start_time,
+            'endTime': end_time,
+            'force': force
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def delete_segments(self, cube_name, segment_name):
+        url = '/cubes/{cube}/segs/{segment}'.format(cube=cube_name, segment=segment_name)
+        resp = self._request('DELETE', url)
+        return resp
+
+    def build_streaming_cube(self, project_name, cube_name, source_offset_start=0,
+                             source_offset_end='9223372036854775807'):
+        """
+        :param project_name: project name
+        :param cube_name: cube name
+        :param source_offset_start: long, the offset where the build begins; 0 means from the last build position
+        :param source_offset_end: long, the offset where the build ends; 9223372036854775807 (Long.MAX_VALUE)
+                                        means to the end position of the Kafka topic
+        :return:
+        """
+        url = '/cubes/{cube}/segments/build_streaming'.format(cube=cube_name)
+        payload = {
+            'buildType': 'BUILD',
+            'project': project_name,
+            'sourceOffsetStart': source_offset_start,
+            'sourceOffsetEnd': source_offset_end,
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def build_cube_customized(self, cube_name, source_offset_start, source_offset_end=None, mp_values=None,
+                              force=False):
+        """
+        :param cube_name: cube name
+        :param source_offset_start: long, the start offset where build begins
+        :param source_offset_end: long, the end offset where build ends
+        :param mp_values: string, multiple partition values of corresponding model
+        :param force: boolean, force submit mode
+        :return:
+        """
+        url = '/cubes/{cube}/segments/build_customized'.format(cube=cube_name)
+        payload = {
+            'buildType': 'BUILD',
+            'sourceOffsetStart': source_offset_start,
+            'sourceOffsetEnd': source_offset_end,
+            'mpValues': mp_values,
+            'force': force
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def clone_cube(self, project_name, cube_name, new_cube_name):
+        """
+        :param project_name: project name
+        :param cube_name: cube name of being cloned
+        :param new_cube_name: cube name to be cloned to
+        :return:
+        """
+        url = '/cubes/{cube}/clone'.format(cube=cube_name)
+        payload = {
+            'cubeName': new_cube_name,
+            'project': project_name
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def enable_cube(self, cube_name):
+        url = '/cubes/{cube}/enable'.format(cube=cube_name)
+        resp = self._request('PUT', url)
+        return resp
+
+    def disable_cube(self, cube_name):
+        url = '/cubes/{cube}/disable'.format(cube=cube_name)
+        resp = self._request('PUT', url)
+        return resp
+
+    def purge_cube(self, cube_name):
+        url = '/cubes/{cube}/purge'.format(cube=cube_name)
+        resp = self._request('PUT', url)
+        return resp
+
+    def delete_cube(self, cube_name):
+        url = '/cubes/{cube}'.format(cube=cube_name)
+        return self._request('DELETE', url)
+
+    def list_holes(self, cube_name):
+        """
+        A healthy cube in production should not have holes, i.e. gaps between consecutive segments.
+        :param cube_name: cube name
+        :return:
+        """
+        url = '/cubes/{cube}/holes'.format(cube=cube_name)
+        resp = self._request('GET', url)
+        return resp
+
+    def fill_holes(self, cube_name):
+        """
+        For a cube built on non-streaming data, Kylin submits normal cube build job(s) covering the
+        corresponding time partition value range(s). For a cube built on streaming data, make sure the
+        corresponding source data has not expired or been deleted before filling holes, otherwise the build job will fail.
+        :param cube_name: string, cube name
+        :return:
+        """
+        url = '/cubes/{cube}/holes'.format(cube=cube_name)
+        resp = self._request('PUT', url)
+        return resp
+
+    def export_cuboids(self, cube_name):
+        url = '/cubes/{cube}/cuboids/export'.format(cube=cube_name)
+        resp = self._request('PUT', url)
+        return resp
+
+    def refresh_lookup(self, cube_name, lookup_table):
+        """
+        Only lookup tables of SCD Type 1 are supported to refresh.
+        :param cube_name: cube name
+        :param lookup_table: the name of lookup table to be refreshed with the format DATABASE.TABLE
+        :return:
+        """
+        url = '/cubes/{cube}/refresh_lookup'.format(cube=cube_name)
+        payload = {
+            'cubeName': cube_name,
+            'lookupTableName': lookup_table
+        }
+        resp = self._request('PUT', url, json=payload)
+        return resp
+
+    def get_job_info(self, job_id):
+        url = '/jobs/{job_id}'.format(job_id=job_id)
+        resp = self._request('GET', url)
+        return resp
+
+    def get_job_status(self, job_id):
+        return self.get_job_info(job_id)['job_status']
+
+    def get_step_output(self, job_id, step_id):
+        url = '/jobs/{jobId}/steps/{stepId}/output'.format(jobId=job_id, stepId=step_id)
+        resp = self._request('GET', url)
+        return resp
+
+    def pause_job(self, job_id):
+        url = '/jobs/{jobId}/pause'.format(jobId=job_id)
+        resp = self._request('PUT', url)
+        return resp
+
+    def resume_job(self, job_id):
+        url = '/jobs/{jobId}/resume'.format(jobId=job_id)
+        resp = self._request('PUT', url)
+        return resp
+
+    def discard_job(self, job_id):
+        url = '/jobs/{jobId}/cancel'.format(jobId=job_id)
+        resp = self._request('PUT', url)
+        return resp
+
+    def delete_job(self, job_id):
+        url = '/jobs/{jobId}/drop'.format(jobId=job_id)
+        resp = self._request('DELETE', url)
+        return resp
+
+    def resubmit_job(self, job_id):
+        url = '/jobs/{jobId}/resubmit'.format(jobId=job_id)
+        resp = self._request('PUT', url)
+        return resp
+
+    def list_jobs(self, project_name, status=None, offset=0, limit=10000, time_filter=1, job_search_mode='ALL'):
+        """
+        list jobs in specific project
+        :param job_search_mode: CUBING_ONLY, CHECKPOINT_ONLY, ALL
+        :param project_name: project name
+        :param status: int, 0 -> NEW, 1 -> PENDING, 2 -> RUNNING,
+                            4 -> FINISHED, 8 -> ERROR, 16 -> DISCARDED, 32 -> STOPPED
+        :param offset: offset of returned result
+        :param limit: quantity of returned result per page
+        :param time_filter: int, 0 -> last one day, 1 -> last one week,
+                                 2 -> last one month, 3 -> last one year, 4 -> all
+        :return:
+        """
+        url = '/jobs'
+        params = {
+            'projectName': project_name,
+            'status': status,
+            'offset': offset,
+            'limit': limit,
+            'timeFilter': time_filter,
+            'jobSearchMode': job_search_mode
+        }
+        resp = self._request('GET', url, params=params)
+        return resp
+
+    def await_all_jobs(self, project_name, waiting_time=30):
+        """
+        await all jobs to be finished, default timeout is 30 minutes
+        :param project_name: project name
+        :param waiting_time: timeout, in minutes
+        :return: boolean, false if the timeout is reached before all jobs finish
+        """
+        running_flag = ['PENDING', 'RUNNING']
+        try_time = 0
+        max_try_time = waiting_time * 2
+        # finish_flags = ['ERROR', 'FINISHED', 'DISCARDED']
+        while try_time < max_try_time:
+            jobs = self.list_jobs(project_name)
+            all_finished = True
+            for job in jobs:
+                if job['job_status'] in running_flag:
+                    all_finished = False
+                    break
+            if all_finished:
+                return True
+            time.sleep(30)
+            try_time += 1
+        return False
+
+    def await_job(self, job_id, waiting_time=20, interval=1, excepted_status=None):
+        """
+        Await a specific job to reach a given status, default timeout is 20 minutes.
+        :param job_id: job id
+        :param waiting_time: timeout, in minutes.
+        :param interval: check interval, default value is 1 second
+        :param excepted_status: expected job status list, defaults to 'ERROR', 'FINISHED' and 'DISCARDED'
+        :return: boolean, true if the job reaches one of the expected statuses before the timeout
+        """
+        finish_flags = ['ERROR', 'FINISHED', 'DISCARDED']
+        if excepted_status is None:
+            excepted_status = finish_flags
+        timeout = waiting_time * 60
+        start = time.time()
+        while time.time() - start < timeout:
+            job_status = self.get_job_status(job_id)
+            if job_status in excepted_status:
+                return True
+            if job_status in finish_flags:
+                return False
+            time.sleep(interval)
+        return False
+
+    def await_job_finished(self, job_id, waiting_time=20, interval=1):
+        """
+        Await a specific job to finish, default timeout is 20 minutes.
+        :param job_id: job id
+        :param waiting_time: timeout, in minutes.
+        :param interval: check interval, default value is 1 second
+        :return: boolean, true if the job reaches FINISHED status before the timeout
+        """
+        return self.await_job(job_id, waiting_time, interval, excepted_status=['FINISHED'])
+
+    def await_job_error(self, job_id, waiting_time=20, interval=1):
+        """
+        Await a specific job to reach ERROR status, default timeout is 20 minutes.
+        :param job_id: job id
+        :param waiting_time: timeout, in minutes.
+        :param interval: check interval, default value is 1 second
+        :return: boolean, true if the job reaches ERROR status before the timeout
+        """
+        return self.await_job(job_id, waiting_time, interval, excepted_status=['ERROR'])
+
+    def await_job_discarded(self, job_id, waiting_time=20, interval=1):
+        """
+        Await a specific job to be discarded, default timeout is 20 minutes.
+        :param job_id: job id
+        :param waiting_time: timeout, in minutes.
+        :param interval: check interval, default value is 1 second
+        :return: boolean, true if the job reaches DISCARDED status before the timeout
+        """
+        return self.await_job(job_id, waiting_time, interval, excepted_status=['DISCARDED'])
+
+    def await_job_step(self, job_id, step, excepted_status=None, waiting_time=20, interval=1):
+        """
+        Await a specific job step to reach a given status, default timeout is 20 minutes.
+        :param job_id: job id
+        :param step: job step
+        :param waiting_time: timeout, in minutes.
+        :param interval: check interval, default value is 1 second
+        :param excepted_status: expected step status list, defaults to 'ERROR', 'FINISHED' and 'DISCARDED'
+        :return: boolean, true if the step reaches one of the expected statuses before the timeout
+        """
+        finish_flags = ['ERROR', 'FINISHED', 'DISCARDED']
+        if excepted_status is None:
+            excepted_status = finish_flags
+        timeout = waiting_time * 60
+        start = time.time()
+        while time.time() - start < timeout:
+            job_info = self.get_job_info(job_id)
+            job_status = job_info['steps'][step]['step_status']
+            if job_status in excepted_status:
+                return True
+            if job_status in finish_flags:
+                return False
+            time.sleep(interval)
+        return False
+
+    def execute_query(self, project_name, sql, cube_name=None, offset=None, limit=None, backdoortoggles=None,
+                      user_session=False,
+                      timeout=60):
+        url = '/query'
+        payload = {
+            'project': project_name,
+            'sql': sql,
+            'offset': offset,
+            'limit': limit
+        }
+        if cube_name:
+            backdoortoggles = {"backdoorToggles": {"DEBUG_TOGGLE_HIT_CUBE": cube_name}}
+        if backdoortoggles:
+            payload.update(backdoortoggles)
+        resp = self._request('POST', url, json=payload, inner_session=user_session, timeout=timeout)
+        return resp
+
+    def save_query(self, sql_name, project_name, sql, description=None):
+        url = '/saved_queries'
+        payload = {
+            'name': sql_name,
+            'project': project_name,
+            'sql': sql,
+            'description': description
+        }
+        self._request('POST', url, json=payload)
+
+    def get_queries(self, project_name, user_session=False):
+        url = '/saved_queries'
+        params = {
+            'project': project_name
+        }
+        response = self._request('GET', url, params=params, inner_session=user_session)
+        return response
+
+    def remove_query(self, sql_id):
+        url = '/saved_queries/{id}'.format(id=sql_id)
+        self._request('DELETE', url)
+
+    def list_queryable_tables(self, project_name):
+        url = '/tables_and_columns'
+        params = {'project': project_name}
+        resp = self._request('GET', url, params=params)
+        return resp
+
+    def get_all_system_prop(self, server=None):
+        url = '/admin/config'
+        if server is not None:
+            url = '/admin/config?server={serverName}'.format(serverName=server)
+        prop_resp = self._request('GET', url).get('config')
+        property_values = {}
+        if prop_resp is None:
+            return property_values
+        prop_lines = prop_resp.splitlines(False)
+        for prop_line in prop_lines:
+            # split only on the first '=' so values containing '=' (e.g. JDBC URLs) stay intact
+            splits = prop_line.split('=', 1)
+            property_values[splits[0]] = splits[1]
+        return property_values
+
+    def create_user(self, user_name, password, authorities, disabled=False, user_session=False):
+        """
+        create a user
+        :param user_name: string, target user name
+        :param password: string, target password
+        :param authorities: array, user's authorities
+        :param disabled: boolean, true for disabled user false for enable user
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user/{username}'.format(username=user_name)
+        payload = {
+            'username': user_name,
+            'password': password,
+            'authorities': authorities,
+            'disabled': disabled,
+        }
+        resp = self._request('POST', url, json=payload, inner_session=user_session)
+        return resp
+
+    def delete_user(self, user_name, user_session=False):
+        """
+        delete user
+        :param user_name: string
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user/{username}'.format(username=user_name)
+        resp = self._request('DELETE', url, inner_session=user_session)
+        return resp
+
+    def update_user(self, user_name, authorities, password=None, disabled=False,
+                    user_session=False, payload_user_name=None):
+        """
+        update user's info
+        :param user_name: string, target user name
+        :param password: string, target password
+        :param authorities: array, user's authorities
+        :param disabled: boolean, true for disabled user false for enable user
+        :param user_session: boolean, true for using login session to execute
+        :param payload_user_name: string, user name to place in the request payload; defaults to user_name
+        :return:
+        """
+        url = '/user/{username}'.format(username=user_name)
+        username_in_payload = user_name if payload_user_name is None else payload_user_name
+        payload = {
+            'username': username_in_payload,
+            'password': password,
+            'authorities': authorities,
+            'disabled': disabled,
+        }
+        resp = self._request('PUT', url, json=payload, inner_session=user_session)
+        return resp
+
+    def update_user_password(self, user_name, new_password, password=None, user_session=False):
+        """
+        update user's password
+        :param user_name: string, target for username
+        :param new_password: string, user's new password
+        :param password: string, user's old password
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user/password'
+        payload = {
+            'username': user_name,
+            'password': password,
+            'newPassword': new_password
+        }
+        resp = self._request('PUT', url, json=payload, inner_session=user_session)
+        return resp
+
+    def list_users(self, project_name=None, group_name=None, is_fuzz_match=False, name=None, offset=0,
+                   limit=10000, user_session=False):
+        """
+        list users
+        :param group_name:string, group name
+        :param project_name: string, project's name
+        :param offset: offset of returned result
+        :param limit: quantity of returned result per page
+        :param is_fuzz_match: bool, true for param name fuzzy match
+        :param name: string, user's name
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user/users'
+        params = {
+            'offset': offset,
+            'limit': limit,
+            'groupName': group_name,
+            'project': project_name,
+            'isFuzzMatch': is_fuzz_match,
+            'name': name
+        }
+        resp = self._request('GET', url, params=params, inner_session=user_session)
+        return resp
+
+    def list_user_authorities(self, project_name, user_session=False):
+        """
+        list groups in a project
+        :param project_name: string, target project name
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user_group/groups'
+        params = {
+            'project': project_name
+        }
+        resp = self._request('GET', url, params=params, inner_session=user_session)
+        return resp
+
+    def create_group(self, group_name, user_session=False):
+        """
+        create a group with group_name
+        :param group_name:  string, target group name
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user_group/{group_name}'.format(group_name=group_name)
+        resp = self._request('POST', url, inner_session=user_session)
+        return resp
+
+    def delete_group(self, group_name, user_session=False):
+        """
+        delete group by group_name
+        :param group_name: string, target group name
+        :param user_session: boolean, true for using login session to execute
+        :return:
+        """
+        url = '/user_group/{group_name}'.format(group_name=group_name)
+        resp = self._request('DELETE', url, inner_session=user_session)
+        return resp
+
+    def add_or_del_users(self, group_name, users):
+        url = '/user_group/users/{group}'.format(group=group_name)
+        payload = {'users': users}
+        resp = self._request('POST', url, json=payload)
+        return resp
+
+    def _request(self, method, url, **kwargs):  # pylint: disable=arguments-differ
+        return super()._request(method, self._base_url + url, **kwargs)
+
+
+def connect(**conf):
+    _host = conf.get('host')
+    _port = conf.get('port')
+    _version = conf.get('version')
+
+    return KylinHttpClient(_host, _port, _version)
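
For readers of this change, a minimal sketch of how the client above might be driven directly; host, port and credentials are placeholders (ADMIN/KYLIN are only the usual sandbox defaults), and the SQL is illustrative:

    from kylin_utils import kylin

    client = kylin.connect(host='localhost', port=7070, version='4.x')
    client.login(username='ADMIN', password='KYLIN')

    # Run a query against an existing project and check the response flag,
    # mirroring what the equals helpers do with execute_query() results.
    resp = client.execute_query(project_name='generic_test_project',
                                sql='select count(*) from KYLIN_SALES')
    assert resp.get('isException') is False
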
diff --git a/build/CI/testing/kylin_utils/shell.py b/build/CI/testing/kylin_utils/shell.py
new file mode 100644
index 0000000..3263636
--- /dev/null
+++ b/build/CI/testing/kylin_utils/shell.py
@@ -0,0 +1,125 @@
+import logging
+from shlex import quote as shlex_quote
+
+import delegator
+import paramiko
+
+
+class SSHShellProcess:
+    def __init__(self, return_code, stdout, stderr):
+        self.return_code = return_code
+        self.output = stdout
+        self.err = stderr
+
+    @property
+    def ok(self):
+        return self.return_code == 0
+
+    def __repr__(self) -> str:
+        return f'SSHShellProcess {{return code: {self.return_code}, output: {self.output}, err: {self.err}}}'
+
+
+class SSHShell:
+    def __init__(self, host, username=None, password=None):
+        self.client = paramiko.SSHClient()
+        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy)
+        self.client.connect(host, username=username, password=password)
+
+    def command(self, script, timeout=120, get_pty=False):
+        logging.debug(f'ssh exec: {script}')
+        self.client.get_transport().set_keepalive(5)
+        chan = self.client.get_transport().open_session()
+
+        if get_pty:
+            chan.get_pty()
+
+        chan.settimeout(timeout)
+
+        chan.exec_command(f'bash --login -c {shlex_quote(script)}')
+
+        bufsize = 4096
+
+        stdout = ''.join(chan.makefile('r', bufsize))
+        stderr = ''.join(chan.makefile_stderr('r', bufsize))
+
+        return SSHShellProcess(chan.recv_exit_status(), stdout, stderr)
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, type, value, traceback):  # pylint: disable=redefined-builtin
+        self.client.close()
+
+
+class BashProcess:
+    """bash process object"""
+
+    def __init__(self, script, blocking: bool = True, timeout: int = 60) -> None:
+        """constructor"""
+        # Environ inherits from parent.
+
+        # Remember passed-in arguments.
+        self.script = script
+
+        # Run the subprocess.
+        self.sub = delegator.run(
+            script, block=blocking, timeout=timeout
+        )
+
+    @property
+    def output(self) -> str:
+        """stdout of the running process"""
+        return str(self.sub.out)
+
+    @property
+    def err(self) -> str:
+        """stderr of the running process"""
+        return str(self.sub.err)
+
+    @property
+    def ok(self) -> bool:
+        """if the process exited with a 0 exit code"""
+        return self.sub.ok
+
+    @property
+    def return_code(self) -> int:
+        """the exit code of the process"""
+        return self.sub.return_code
+
+    def __repr__(self) -> str:
+        return f'BashProcess {{return code: {self.return_code}, output: {self.output}, err: {self.err}}}'
+
+
+class Bash:
+    """an instance of bash"""
+
+    def _exec(self, script, timeout=60) -> BashProcess:  # pylint: disable=unused-argument
+        """execute the bash process as a child of this process"""
+        return BashProcess(script, timeout=timeout)
+
+    def command(self, script: str, timeout=60) -> BashProcess:
+        """form up the command with shlex and execute"""
+        logging.debug(f'bash exec: {script}')
+        return self._exec(f"bash -c {shlex_quote(script)}", timeout=timeout)
+
+
+def sshexec(script, host, username=None, password=None):
+    with sshshell(host, username=username, password=password) as ssh:
+        return ssh.command(script)
+
+
+def sshshell(host, username=None, password=None):
+    return SSHShell(host, username=username, password=password)
+
+
+def exec(script):  # pylint: disable=redefined-builtin
+    return Bash().command(script)
+
+
+def shell():
+    return Bash()
+
+
+if __name__ == '__main__':
+    sh = sshshell('10.1.3.94', username='root', password='hadoop')
+    print(sh.command('pwd'))
\ No newline at end of file
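
A small usage sketch for the two shell helpers above; the host name and credentials below are placeholders:

    from kylin_utils import shell

    # Local command, executed via `bash -c`.
    local = shell.exec('echo hello')
    print(local.ok, local.return_code, local.output)

    # Remote command over SSH; SSHShell supports the context-manager protocol,
    # so the connection is closed on exit.
    with shell.sshshell('some-host', username='root', password='secret') as ssh:
        remote = ssh.command('hostname && pwd')
        print(remote.return_code, remote.output)
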
diff --git a/build/CI/testing/kylin_utils/util.py b/build/CI/testing/kylin_utils/util.py
new file mode 100644
index 0000000..47ca11e
--- /dev/null
+++ b/build/CI/testing/kylin_utils/util.py
@@ -0,0 +1,64 @@
+import os
+from selenium import webdriver
+from yaml import load, loader
+
+from kylin_utils import kylin
+
+
+def setup_instance(file_name):
+    instances_file = os.path.join('kylin_instances/', file_name)
+    with open(instances_file, 'r') as stream:
+        for item in load(stream, Loader=loader.SafeLoader):
+            host = item['host']
+            port = item['port']
+            version = item['version']
+    return kylin.connect(host=host, port=port, version=version)
+
+
+def kylin_url(file_name):
+    instances_file = os.path.join('kylin_instances/', file_name)
+    with open(instances_file, 'r') as stream:
+        for item in load(stream, Loader=loader.SafeLoader):
+            host = item['host']
+            port = item['port']
+    return "http://{host}:{port}/kylin".format(host=host, port=port)
+
+
+def setup_browser(browser_type):
+    if browser_type == "chrome":
+        browser = webdriver.Chrome(executable_path="browser_driver/chromedriver")
+    elif browser_type == "firefox":
+        browser = webdriver.Firefox(executable_path="browser_driver/geckodriver")
+    elif browser_type == "safari":
+        browser = webdriver.Safari(executable_path="browser_driver/safaridriver")
+    else:
+        raise ValueError("unsupported browser type: {}".format(browser_type))
+
+    return browser
+
+
+def if_project_exists(kylin_client, project):
+    exists = 0
+    resp = kylin_client.list_projects()
+    for project_info in resp:
+        if project_info.get('name') == project:
+            exists = 1
+    return exists
+
+
+def if_cube_exists(kylin_client, cube_name, project=None):
+    exists = 0
+    resp = kylin_client.list_cubes(project=project)
+    if resp is not None:
+        for cube_info in resp:
+            if cube_info.get('name') == cube_name:
+                exists = 1
+    return exists
+
+
+def if_model_exists(kylin_client, model_name, project):
+    exists = 0
+    resp = kylin_client.list_model_desc(project_name=project, model_name=model_name)
+    if len(resp) == 1:
+        exists = 1
+    return exists
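
The exact contents of kylin_instances/kylin_instance.yml are not shown in this change, but setup_instance() and kylin_url() above read it as a YAML list whose entries carry host, port and version keys. A sketch, with placeholder values:

    from kylin_utils import util

    # Assuming kylin_instances/kylin_instance.yml contains one entry such as
    #   - host: localhost
    #     port: 7070
    #     version: 4.x
    # the helpers resolve a client and the web UI base URL:
    client = util.setup_instance('kylin_instance.yml')
    print(util.kylin_url('kylin_instance.yml'))   # -> http://localhost:7070/kylin
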
diff --git a/build/CI/testing/manifest.json b/build/CI/testing/manifest.json
new file mode 100644
index 0000000..bc5c9c8
--- /dev/null
+++ b/build/CI/testing/manifest.json
@@ -0,0 +1,6 @@
+{
+  "Language": "python",
+  "Plugins": [
+    "html-report"
+  ]
+}
\ No newline at end of file
diff --git a/build/CI/testing/requirements.txt b/build/CI/testing/requirements.txt
new file mode 100644
index 0000000..8c47779
--- /dev/null
+++ b/build/CI/testing/requirements.txt
@@ -0,0 +1,6 @@
+flake8==3.8.3
+getgauge==0.3.11
+pylint==2.3.1
+requests==2.21.0
+selenium==3.141.0
+yapf==0.30.0


[kylin] 13/13: KYLIN-4775 Minor fix

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 15866d18133ff29a70db01027f7ab4fd0ced0b70
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Thu Nov 5 16:48:52 2020 +0800

    KYLIN-4775 Minor fix
---
 .../kylin-system-testing/features/specs/query/query.spec | 16 ----------------
 .../features/step_impl/query/query.py                    |  7 +++++--
 build/CI/kylin-system-testing/kylin_utils/equals.py      |  3 ++-
 .../query/sql_result/sql_test/sql1.json                  |  8 +++++---
 build/CI/run-ci.sh                                       |  5 +++--
 docker/docker-compose/write/write-hadoop.env             |  1 +
 docker/dockerfile/cluster/client/Dockerfile              |  2 ++
 pom.xml                                                  |  1 +
 .../java/org/apache/kylin/rest/response/SQLResponse.java | 10 ++++++++++
 .../java/org/apache/kylin/rest/service/QueryService.java | 12 ++++++++++++
 .../org/apache/kylin/rest/response/SQLResponseTest.java  |  2 +-
 11 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/build/CI/kylin-system-testing/features/specs/query/query.spec b/build/CI/kylin-system-testing/features/specs/query/query.spec
index 8cd3a6f..0bb72bb 100644
--- a/build/CI/kylin-system-testing/features/specs/query/query.spec
+++ b/build/CI/kylin-system-testing/features/specs/query/query.spec
@@ -1,19 +1,3 @@
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
 # Kylin SQL test
 Tags:4.x
 
diff --git a/build/CI/kylin-system-testing/features/step_impl/query/query.py b/build/CI/kylin-system-testing/features/step_impl/query/query.py
index 55f5597..32728b2 100644
--- a/build/CI/kylin-system-testing/features/step_impl/query/query.py
+++ b/build/CI/kylin-system-testing/features/step_impl/query/query.py
@@ -32,8 +32,11 @@ def query_sql_file_and_compare(sql_directory, project_name, sql_result_directory
             sql = sql_file.read()
 
         client = util.setup_instance('kylin_instance.yml')
-        with open(sql_result_directory + sql_file_name.split(".")[0] + '.json', 'r', encoding='utf8') as expected_result_file:
-            expected_result = json.loads(expected_result_file.read())
+        expected_result_file_name = sql_result_directory + sql_file_name.split(".")[0]
+        expected_result = None
+        if os.path.exists(expected_result_file_name):
+            with open(sql_result_directory + sql_file_name.split(".")[0] + '.json', 'r', encoding='utf8') as expected_result_file:
+                expected_result = json.loads(expected_result_file.read())
         equals.compare_sql_result(sql=sql, project=project_name, kylin_client=client, expected_result=expected_result)
 
 
diff --git a/build/CI/kylin-system-testing/kylin_utils/equals.py b/build/CI/kylin-system-testing/kylin_utils/equals.py
index 6c990d4..fbb1388 100644
--- a/build/CI/kylin-system-testing/kylin_utils/equals.py
+++ b/build/CI/kylin-system-testing/kylin_utils/equals.py
@@ -221,7 +221,8 @@ def compare_sql_result(sql, project, kylin_client, cube=None, expected_result=No
     assert query_result_equals(kylin_resp, pushdown_resp)
 
     if expected_result is not None:
-        print(kylin_resp.get("totalScanCount"))
+        assert expected_result.get("cube") == kylin_resp.get("cube")
+        assert expected_result.get("cuboidIds") == kylin_resp.get("cuboidIds")
         assert expected_result.get("totalScanCount") == kylin_resp.get("totalScanCount")
         assert expected_result.get("totalScanBytes") == kylin_resp.get("totalScanBytes")
         assert expected_result.get("totalScanFiles") == kylin_resp.get("totalScanFiles")
diff --git a/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json b/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
index 3c2ec22..42166d2 100644
--- a/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
+++ b/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
@@ -1,6 +1,8 @@
 {
-  "totalScanCount":7349,
-  "totalScanBytes":229078,
-  "totalScanFiles":2,
+  "cube": "CUBE[name=generic_test_cube]",
+  "cuboidIds": "20480",
+  "totalScanCount": 7349,
+  "totalScanBytes": 229078,
+  "totalScanFiles": 2,
   "pushDown": false
 }
\ No newline at end of file
diff --git a/build/CI/run-ci.sh b/build/CI/run-ci.sh
index acbb2c7..d4c0122 100644
--- a/build/CI/run-ci.sh
+++ b/build/CI/run-ci.sh
@@ -58,6 +58,7 @@ cat > kylin-all/conf/kylin.properties <<EOL
 kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://metastore-db:3306/metastore,username=kylin,password=kylin,maxActive=10,maxIdle=10
 kylin.env.zookeeper-connect-string=write-zookeeper:2181
 kylin.job.scheduler.default=100
+kylin.engine.spark-conf.spark.shuffle.service.enabled=false
 kylin.query.pushdown.runner-class-name=org.apache.kylin.query.pushdown.PushDownRunnerSparkImpl
 EOL
 
@@ -87,7 +88,7 @@ then
     bash stop_cluster.sh
 
     bash setup_hadoop_cluster.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
-      --enable_hbase yes --hbase_version 1.1.2  --enable_ldap nosh setup_cluster.sh \
+      --enable_hbase no --hbase_version 1.1.2  --enable_ldap nosh setup_cluster.sh \
       --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 --enable_hbase yes \
       --hbase_version 1.1.2  --enable_ldap no
     cd ..
@@ -104,7 +105,7 @@ echo "Restart Kylin cluster."
 
 cd docker
 bash setup_service.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
-      --enable_hbase yes --hbase_version 1.1.2  --enable_ldap nosh setup_cluster.sh \
+      --enable_hbase no --hbase_version 1.1.2  --enable_ldap nosh setup_cluster.sh \
       --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 --enable_hbase yes \
       --hbase_version 1.1.2  --enable_ldap no
 docker ps
diff --git a/docker/docker-compose/write/write-hadoop.env b/docker/docker-compose/write/write-hadoop.env
index a99c096..60588aa 100644
--- a/docker/docker-compose/write/write-hadoop.env
+++ b/docker/docker-compose/write/write-hadoop.env
@@ -32,6 +32,7 @@ YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___pe
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
 YARN_CONF_mapreduce_jobhistory_address=write-historyserver:10020
+YARN_CONF_yarn_nodemanager_vmem___pmem___ratio=2.5
 
 MAPRED_CONF_mapreduce_framework_name=yarn
 MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
diff --git a/docker/dockerfile/cluster/client/Dockerfile b/docker/dockerfile/cluster/client/Dockerfile
index 43c935e..632516e 100644
--- a/docker/dockerfile/cluster/client/Dockerfile
+++ b/docker/dockerfile/cluster/client/Dockerfile
@@ -128,6 +128,8 @@ ENV SPARK_CONF_DIR=/opt/spark-$SPARK_VERSION-bin-hadoop${SPARK_HADOOP_VERSION}/c
 RUN curl -fSL "${SPARK_URL}" -o /tmp/spark.tar.gz \
     && tar -zxvf /tmp/spark.tar.gz -C /opt/ \
     && rm -f /tmp/spark.tar.gz \
+    && rm -f $SPARK_HOME/conf/hive-site.xml \
+    && ln -s $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/hive-site.xml \
     && cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf \
     && cp $SPARK_HOME/yarn/*.jar $HADOOP_HOME/share/hadoop/yarn/lib
 
diff --git a/pom.xml b/pom.xml
index 4843dbf..9a3285d 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1903,6 +1903,7 @@
                 <exclude>**/*.json</exclude>
                 <exclude>**/*.json.bad</exclude>
                 <exclude>**/*.md</exclude>
+                <exclude>**/*.spec</exclude>
 
                 <!-- binary files -->
                 <exclude>**/*.dict</exclude>
diff --git a/server-base/src/main/java/org/apache/kylin/rest/response/SQLResponse.java b/server-base/src/main/java/org/apache/kylin/rest/response/SQLResponse.java
index 35aef1c..d5d57ed 100644
--- a/server-base/src/main/java/org/apache/kylin/rest/response/SQLResponse.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/response/SQLResponse.java
@@ -46,6 +46,8 @@ public class SQLResponse implements Serializable {
      */
     protected String cube;
 
+    protected String cuboidIds;
+
     // if not select query, only return affected row count
     protected int affectedRowCount;
 
@@ -135,6 +137,14 @@ public class SQLResponse implements Serializable {
         this.cube = cube;
     }
 
+    public String getCuboidIds() {
+        return cuboidIds;
+    }
+
+    public void setCuboidIds(String cuboidIds) {
+        this.cuboidIds = cuboidIds;
+    }
+
     public int getAffectedRowCount() {
         return affectedRowCount;
     }
diff --git a/server-base/src/main/java/org/apache/kylin/rest/service/QueryService.java b/server-base/src/main/java/org/apache/kylin/rest/service/QueryService.java
index 0ddf4db..d9fc732 100644
--- a/server-base/src/main/java/org/apache/kylin/rest/service/QueryService.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/service/QueryService.java
@@ -1160,6 +1160,7 @@ public class QueryService extends BasicService {
 
         List<String> realizations = Lists.newLinkedList();
         StringBuilder cubeSb = new StringBuilder();
+        StringBuilder cuboidIdsSb = new StringBuilder();
         StringBuilder logSb = new StringBuilder("Processed rows for each storageContext: ");
         QueryContext queryContext = QueryContextFacade.current();
         if (OLAPContext.getThreadLocalContexts() != null) { // contexts can be null in case of 'explain plan for'
@@ -1171,6 +1172,14 @@ public class QueryService extends BasicService {
                     if (cubeSb.length() > 0) {
                         cubeSb.append(",");
                     }
+                    Cuboid cuboid = ctx.storageContext.getCuboid();
+                    if (cuboid != null) {
+                        //Some queries do not involve cuboid, e.g. lookup table query
+                        if (cuboidIdsSb.length() > 0) {
+                            cuboidIdsSb.append(",");
+                        }
+                        cuboidIdsSb.append(cuboid.getId());
+                    }
                     cubeSb.append(ctx.realization.getCanonicalName());
                     logSb.append(ctx.storageContext.getProcessedRowCount()).append(" ");
 
@@ -1181,11 +1190,14 @@ public class QueryService extends BasicService {
                 }
                 queryContext.setContextRealization(ctx.id, realizationName, realizationType);
             }
+
+
         }
         logger.info(logSb.toString());
 
         SQLResponse response = new SQLResponse(columnMetas, results, cubeSb.toString(), 0, isException,
                 exceptionMessage, isPartialResult, isPushDown);
+        response.setCuboidIds(cuboidIdsSb.toString());
         response.setTotalScanCount(queryContext.getScannedRows());
         response.setTotalScanFiles((queryContext.getScanFiles() < 0) ? -1 :
                 queryContext.getScanFiles());
diff --git a/server-base/src/test/java/org/apache/kylin/rest/response/SQLResponseTest.java b/server-base/src/test/java/org/apache/kylin/rest/response/SQLResponseTest.java
index 48f4791..f1c704e 100644
--- a/server-base/src/test/java/org/apache/kylin/rest/response/SQLResponseTest.java
+++ b/server-base/src/test/java/org/apache/kylin/rest/response/SQLResponseTest.java
@@ -32,7 +32,7 @@ public class SQLResponseTest {
 
     @Test
     public void testInterfaceConsistency() throws IOException {
-        String[] attrArray = new String[] { "columnMetas", "results", "cube", "affectedRowCount", "isException",
+        String[] attrArray = new String[] { "columnMetas", "results", "cube", "cuboidIds", "affectedRowCount", "isException",
                 "exceptionMessage", "duration", "partial", "totalScanCount", "hitExceptionCache", "storageCacheUsed",
                 "sparkPool", "pushDown", "traceUrl", "totalScanBytes", "totalScanFiles",
                 "metadataTime", "totalSparkScanTime" };


[kylin] 02/13: KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit c23238811aa66cf972555db8cd6ee16006d5f4af
Author: yongheng.liu <li...@gmail.com>
AuthorDate: Mon Oct 19 20:05:59 2020 +0800

    KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin
---
 docker/.gitignore                                  |   6 +
 docker/build_cluster_images.sh                     | 111 +++++------
 .../client.env => others/client-write-read.env}    |   2 +-
 .../{write/client.env => others/client-write.env}  |  11 +-
 .../others/docker-compose-kylin-write-read.yml     |  69 +++++++
 .../others/docker-compose-kylin-write.yml          |  69 +++++++
 .../docker-compose-metastore.yml}                  |  14 +-
 .../docker-compose/read/docker-compose-hadoop.yml  | 129 +++++++++++++
 .../docker-compose/read/docker-compose-hbase.yml   |  43 +++++
 .../docker-compose/write/docker-compose-hadoop.yml | 128 ++++++++++++
 .../docker-compose/write/docker-compose-hbase.yml  |  43 +++++
 .../docker-compose/write/docker-compose-hive.yml   |  37 ++++
 .../docker-compose/write/docker-compose-write.yml  | 215 ---------------------
 docker/docker-compose/write/write-hadoop.env       |   6 +-
 docker/dockerfile/cluster/base/Dockerfile          |   4 +-
 docker/dockerfile/cluster/client/Dockerfile        |   2 +-
 docker/dockerfile/cluster/hive/Dockerfile          |   2 +
 docker/dockerfile/cluster/hive/run_hv.sh           |   4 +
 docker/header.sh                                   | 141 ++++++++++++++
 docker/setup_cluster.sh                            |  26 +--
 docker/stop_cluster.sh                             |  56 ++++--
 21 files changed, 791 insertions(+), 327 deletions(-)

diff --git a/docker/.gitignore b/docker/.gitignore
new file mode 100644
index 0000000..db4d255
--- /dev/null
+++ b/docker/.gitignore
@@ -0,0 +1,6 @@
+docker-compose/others/data/
+docker-compose/read/data/
+docker-compose/write/data/
+docker-compose/others/kylin/kylin-all/
+docker-compose/others/kylin/kylin-job/
+docker-compose/others/kylin/kylin-query/
diff --git a/docker/build_cluster_images.sh b/docker/build_cluster_images.sh
index ac60533..b2aae80 100644
--- a/docker/build_cluster_images.sh
+++ b/docker/build_cluster_images.sh
@@ -1,57 +1,32 @@
 #!/bin/bash
-
-ARGS=`getopt -o h:i:b --long hadoop_version:,hive_version:,hbase_version: -n 'parameter.bash' -- "$@"`
-
-if [ $? != 0 ]; then
-    echo "Terminating..."
-    exit 1
-fi
-
-eval set -- "${ARGS}"
-
-HADOOP_VERSION="2.8.5"
-HIVE_VERSION="1.2.2"
-HBASE_VERSION="1.1.2"
-
-while true;
-do
-    case "$1" in
-        --hadoop_version)
-            HADOOP_VERSION=$2;
-            shift 2;
-            ;;
-        --hive_version)
-            HIVE_VERSION=$2;
-            shift 2;
-            ;;
-        --hbase_version)
-            HBASE_VERSION=$2;
-            shift 2;
-            ;;
-        --)
-            break
-            ;;
-        *)
-            echo "Internal error!"
-            break
-            ;;
-    esac
-done
-
-for arg in $@
-do
-    echo "processing $arg"
-done
-
-echo "........hadoop version: "$HADOOP_VERSION
-echo "........hive version: "$HIVE_VERSION
-echo "........hbase version: "$HBASE_VERSION
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+SCRIPT_PATH=$(cd `dirname $0`; pwd)
+WS_ROOT=`dirname $SCRIPT_PATH`
+
+source ${SCRIPT_PATH}/header.sh
 
 #docker build -t apachekylin/kylin-metastore:mysql_5.6.49 ./kylin/metastore-db
+#
 
 docker build -t apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/base
-docker build -t apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/namenode
-docker build -t apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/datanode
+docker build -t apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_WEBHDFS_PORT=${HADOOP_WEBHDFS_PORT} ./dockerfile/cluster/namenode
+docker build -t apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_DN_PORT=${HADOOP_DN_PORT} ./dockerfile/cluster/datanode
 docker build -t apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/resourcemanager
 docker build -t apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/nodemanager
 docker build -t apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/historyserver
@@ -61,29 +36,29 @@ docker build -t apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERS
 --build-arg HADOOP_VERSION=${HADOOP_VERSION} \
 ./dockerfile/cluster/hive
 
-docker build -t apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hbase
-docker build -t apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hmaster
-docker build -t apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hregionserver
+if [ $ENABLE_HBASE == "yes" ]; then
+  docker build -t apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hbase
+  docker build -t apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hmaster
+  docker build -t apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hregionserver
+fi
 
-docker build -t apachekylin/kylin-kerberos:latest ./dockerfile/cluster/kerberos
+if [ $ENABLE_KERBEROS == "yes" ]; then
+  docker build -t apachekylin/kylin-kerberos:latest ./dockerfile/cluster/kerberos
+fi
+
+if [ $ENABLE_LDAP == "yes" ]; then
+  docker pull osixia/openldap:1.3.0
+fi
+
+#if [ $ENABLE_KAFKA == "yes" ]; then
+#  docker pull bitnami/kafka:2.0.0
+#fi
+docker pull bitnami/kafka:2.0.0
+
+docker pull mysql:5.6.49
 
 docker build -t apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION} \
 --build-arg HIVE_VERSION=${HIVE_VERSION} \
 --build-arg HADOOP_VERSION=${HADOOP_VERSION} \
 --build-arg HBASE_VERSION=${HBASE_VERSION} \
 ./dockerfile/cluster/client
-
-
-export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
-export HADOOP_DATANODE_IMAGETAG=apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION}
-export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION}
-export HADOOP_RESOURCEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION}
-export HADOOP_NODEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION}
-export HADOOP_HISTORYSERVER_IMAGETAG=apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION}
-export HIVE_IMAGETAG=apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION}
-export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
-export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION}
-export HBASE_REGIONSERVER_IMAGETAG=apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION}
-export CLIENT_IMAGETAG=apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}
-export KERBEROS_IMAGE=apachekylin/kylin-kerberos:latest
-
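Note: the version and feature flags used above are the ones parsed by the getopt block that this series moves into docker/header.sh (shown further down in this commit). A rough invocation of the cluster image build under that assumption, run from the docker/ directory, might look like:

    ./build_cluster_images.sh --hadoop_version 3.1.1 --hive_version 2.3.7 \
        --hbase_version 1.1.2 --enable_hbase no --enable_kerberos no
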
diff --git a/docker/docker-compose/write-read/client.env b/docker/docker-compose/others/client-write-read.env
similarity index 97%
rename from docker/docker-compose/write-read/client.env
rename to docker/docker-compose/others/client-write-read.env
index fc0743c..c61e986 100644
--- a/docker/docker-compose/write-read/client.env
+++ b/docker/docker-compose/others/client-write-read.env
@@ -39,7 +39,7 @@ MAPRED_CONF_mapreduce_reduce_memory_mb=8192
 MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
 MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
 
-HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore?useSSL=false\&amp;allowPublicKeyRetrieval=true
 HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
 HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
 HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
diff --git a/docker/docker-compose/write/client.env b/docker/docker-compose/others/client-write.env
similarity index 91%
rename from docker/docker-compose/write/client.env
rename to docker/docker-compose/others/client-write.env
index fc0743c..edad60b 100644
--- a/docker/docker-compose/write/client.env
+++ b/docker/docker-compose/others/client-write.env
@@ -39,19 +39,18 @@ MAPRED_CONF_mapreduce_reduce_memory_mb=8192
 MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
 MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
 
-HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore?useSSL=false\&amp;allowPublicKeyRetrieval=true
 HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
 HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
 HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
 HIVE_SITE_CONF_datanucleus_autoCreateSchema=true
 HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083
 
-HBASE_CONF_hbase_rootdir=hdfs://read-namenode:8020/hbase
+HBASE_CONF_hbase_rootdir=hdfs://write-namenode:8020/hbase
 HBASE_CONF_hbase_cluster_distributed=true
-HBASE_CONF_hbase_zookeeper_quorum=read-zookeeper
-
-HBASE_CONF_hbase_master=read-hbase-master:16000
-HBASE_CONF_hbase_master_hostname=read-hbase-master
+HBASE_CONF_hbase_zookeeper_quorum=write-zookeeper
+HBASE_CONF_hbase_master=write-hbase-master:16000
+HBASE_CONF_hbase_master_hostname=write-hbase-master
 HBASE_CONF_hbase_master_port=16000
 HBASE_CONF_hbase_master_info_port=16010
 HBASE_CONF_hbase_regionserver_port=16020
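A note on the naming convention above: the *_SITE_CONF_* / *_CONF_* variables are normally expanded into the corresponding XML configuration files by the container entrypoint, with underscores in the suffix mapped to dots in the property name. A minimal illustration of that convention only (the real helper in the images may handle extra escaping and hyphen encoding):

    for var in $(compgen -e | grep '^HIVE_SITE_CONF_'); do
        prop=$(echo "${var#HIVE_SITE_CONF_}" | tr '_' '.')   # e.g. javax.jdo.option.ConnectionURL
        echo "hive-site.xml property: ${prop} = ${!var}"
    done
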
diff --git a/docker/docker-compose/others/docker-compose-kylin-write-read.yml b/docker/docker-compose/others/docker-compose-kylin-write-read.yml
new file mode 100644
index 0000000..cb67b06
--- /dev/null
+++ b/docker/docker-compose/others/docker-compose-kylin-write-read.yml
@@ -0,0 +1,69 @@
+version: "3.3"
+
+services:
+  kylin-all:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-all
+    hostname: kylin-all
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-all:/opt/kylin/kylin-all
+    env_file:
+      - client-write-read.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-all
+    networks:
+      - write_kylin
+    ports:
+      - 7070:7070
+
+  kylin-job:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-job
+    hostname: kylin-job
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-job:/opt/kylin/kylin-job
+    env_file:
+      - client-write-read.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-job
+    networks:
+      - write_kylin
+    ports:
+      - 7071:7070
+
+  kylin-query:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-query
+    hostname: kylin-query
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-query:/opt/kylin/kylin-query
+    env_file:
+      - client-write-read.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-query
+    networks:
+      - write_kylin
+    ports:
+      - 7072:7070
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
diff --git a/docker/docker-compose/others/docker-compose-kylin-write.yml b/docker/docker-compose/others/docker-compose-kylin-write.yml
new file mode 100644
index 0000000..a78b88a
--- /dev/null
+++ b/docker/docker-compose/others/docker-compose-kylin-write.yml
@@ -0,0 +1,69 @@
+version: "3.3"
+
+services:
+  kylin-all:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-all
+    hostname: kylin-all
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-all:/opt/kylin/kylin-all
+    env_file:
+      - client-write.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-all
+    networks:
+      - write_kylin
+    ports:
+      - 7070:7070
+
+  kylin-job:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-job
+    hostname: kylin-job
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-job:/opt/kylin/kylin-job
+    env_file:
+      - client-write.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-job
+    networks:
+      - write_kylin
+    ports:
+      - 7071:7070
+
+  kylin-query:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-query
+    hostname: kylin-query
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin/kylin-query:/opt/kylin/kylin-query
+    env_file:
+      - client-write.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin-query
+    networks:
+      - write_kylin
+    ports:
+      - 7072:7070
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
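All of the Kylin compose files above join an external write_kylin network, so that network must exist before docker-compose is run. Assuming CLIENT_IMAGETAG has been exported (header.sh does this) and the commands are issued from the docker/ directory, a manual bring-up could look like the sketch below; setup_cluster.sh remains the normal entry point.

    docker network create write_kylin    # once, if it does not already exist
    docker-compose -f docker-compose/others/docker-compose-kylin-write.yml up -d
    docker logs -f kylin-all             # Kylin web UI is published on host port 7070
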
diff --git a/docker/docker-compose/write-read/test-docker-compose-mysql.yml b/docker/docker-compose/others/docker-compose-metastore.yml
similarity index 54%
rename from docker/docker-compose/write-read/test-docker-compose-mysql.yml
rename to docker/docker-compose/others/docker-compose-metastore.yml
index 5906c1e..a36df07 100644
--- a/docker/docker-compose/write-read/test-docker-compose-mysql.yml
+++ b/docker/docker-compose/others/docker-compose-metastore.yml
@@ -1,16 +1,24 @@
-
 version: "3.3"
 
 services:
   metastore-db:
-    image: mysql:5.6.49
+#    image: mysql:5.6.49
+#    image: mysql:8.0.11
+    image: mysql:5.7.24
     container_name: metastore-db
     hostname: metastore-db
     volumes:
       - ./data/mysql:/var/lib/mysql
     environment:
       - MYSQL_ROOT_PASSWORD=kylin
-      - MYSQL_DATABASE=kylin
+      - MYSQL_DATABASE=metastore
       - MYSQL_USER=kylin
       - MYSQL_PASSWORD=kylin
+    networks:
+      - write_kylin
+    ports:
+      - 3306:3306
 
+networks:
+  write_kylin:
+    external: true
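Once metastore-db is up, the credentials and the metastore database configured above can be verified quickly with the MySQL client bundled in the image (a manual check, not part of the scripts):

    docker exec -it metastore-db mysql -ukylin -pkylin -e 'SHOW DATABASES;'
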
diff --git a/docker/docker-compose/read/docker-compose-hadoop.yml b/docker/docker-compose/read/docker-compose-hadoop.yml
new file mode 100644
index 0000000..a0e2a66
--- /dev/null
+++ b/docker/docker-compose/read/docker-compose-hadoop.yml
@@ -0,0 +1,129 @@
+version: "3.3"
+
+services:
+  read-namenode:
+    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-hadoop-namenode:hadoop_2.8.5}
+    container_name: read-namenode
+    hostname: read-namenode
+    volumes:
+      - ./data/write_hadoop_namenode:/hadoop/dfs/name
+    environment:
+      - CLUSTER_NAME=test-kylin
+      - HADOOP_WEBHDFS_PORT=${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    expose:
+      - 8020
+    ports:
+      - 50071:50070
+      - 9871:9870
+
+  read-datanode1:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: read-datanode1
+    hostname: read-datanode1
+    volumes:
+      - ./data/write_hadoop_datanode1:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    links:
+      - read-namenode
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  read-datanode2:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: read-datanode2
+    hostname: read-datanode2
+    volumes:
+      - ./data/write_hadoop_datanode2:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  read-datanode3:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: read-datanode3
+    hostname: read-datanode3
+    volumes:
+      - ./data/write_hadoop_datanode3:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  read-resourcemanager:
+    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-resourcemanager:hadoop_2.8.5}
+    container_name: read-resourcemanager
+    hostname: read-resourcemanager
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    ports:
+      - 8089:8088
+
+  read-nodemanager1:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    container_name: read-nodemanager1
+    hostname: read-nodemanager1
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+
+  read-nodemanager2:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    container_name: read-nodemanager2
+    hostname: read-nodemanager2
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+
+  read-historyserver:
+    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-hadoop-historyserver:hadoop_2.8.5}
+    container_name: read-historyserver
+    hostname: read-historyserver
+    volumes:
+      - ./data/write_hadoop_historyserver:/hadoop/yarn/timeline
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - read-hadoop.env
+    networks:
+      - write_kylin
+    ports:
+      - 8189:8188
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
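The read namenode maps container port 50070 to host port 50071 (and 9870 to 9871 for Hadoop 3.x images), so a quick liveness check against WebHDFS, assuming WebHDFS is enabled in read-hadoop.env, might be:

    curl -s "http://localhost:50071/webhdfs/v1/?op=LISTSTATUS" | head
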
diff --git a/docker/docker-compose/read/docker-compose-hbase.yml b/docker/docker-compose/read/docker-compose-hbase.yml
new file mode 100644
index 0000000..ac4048b
--- /dev/null
+++ b/docker/docker-compose/read/docker-compose-hbase.yml
@@ -0,0 +1,43 @@
+version: "3.3"
+
+services:
+  read-hbase-master:
+    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-hbase-master:hbase1.1.2}
+    container_name: read-hbase-master
+    hostname: read-hbase-master
+    env_file:
+      - read-hbase-distributed-local.env
+    environment:
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-zookeeper:2181"
+    networks:
+      - write_kylin
+    ports:
+      - 16010:16010
+
+  read-hbase-regionserver1:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    container_name: read-hbase-regionserver1
+    hostname: read-hbase-regionserver1
+    env_file:
+      - read-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: read-hbase-regionserver1
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-zookeeper:2181 read-hbase-master:16010"
+    networks:
+      - write_kylin
+
+  read-hbase-regionserver2:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    container_name: read-hbase-regionserver2
+    hostname: read-hbase-regionserver2
+    env_file:
+      - read-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: read-hbase-regionserver2
+      SERVICE_PRECONDITION: "read-namenode:${HADOOP_WEBHDFS_PORT:-50070} read-datanode1:${HADOOP_DN_PORT:-50075} read-datanode2:${HADOOP_DN_PORT:-50075} read-datanode3:${HADOOP_DN_PORT:-50075} read-zookeeper:2181 read-hbase-master:16010"
+    networks:
+      - write_kylin
+
+networks:
+  write_kylin:
+    external: true
diff --git a/docker/docker-compose/write/docker-compose-hadoop.yml b/docker/docker-compose/write/docker-compose-hadoop.yml
new file mode 100644
index 0000000..4286cfc
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-hadoop.yml
@@ -0,0 +1,128 @@
+version: "3.3"
+
+services:
+  write-namenode:
+    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-hadoop-namenode:hadoop_2.8.5}
+    container_name: write-namenode
+    hostname: write-namenode
+    volumes:
+      - ./data/write_hadoop_namenode:/hadoop/dfs/name
+    environment:
+      - CLUSTER_NAME=test-kylin
+      - HADOOP_WEBHDFS_PORT=${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    expose:
+      - 8020
+    ports:
+      - 50070:50070
+      - 9870:9870
+
+  write-datanode1:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: write-datanode1
+    hostname: write-datanode1
+    volumes:
+      - ./data/write_hadoop_datanode1:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    links:
+      - write-namenode
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  write-datanode2:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: write-datanode2
+    hostname: write-datanode2
+    volumes:
+      - ./data/write_hadoop_datanode2:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  write-datanode3:
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    container_name: write-datanode3
+    hostname: write-datanode3
+    volumes:
+      - ./data/write_hadoop_datanode3:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    expose:
+      - ${HADOOP_DN_PORT:-50075}
+
+  write-resourcemanager:
+    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-resourcemanager:hadoop_2.8.5}
+    container_name: write-resourcemanager
+    hostname: write-resourcemanager
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075}"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    ports:
+      - 8088:8088
+
+  write-nodemanager1:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    container_name: write-nodemanager1
+    hostname: write-nodemanager1
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+
+  write-nodemanager2:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    container_name: write-nodemanager2
+    hostname: write-nodemanager2
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+
+  write-historyserver:
+    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-hadoop-historyserver:hadoop_2.8.5}
+    container_name: write-historyserver
+    hostname: write-historyserver
+    volumes:
+      - ./data/write_hadoop_historyserver:/hadoop/yarn/timeline
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-resourcemanager:8088"
+      HADOOP_WEBHDFS_PORT: ${HADOOP_WEBHDFS_PORT:-50070}
+    env_file:
+      - write-hadoop.env
+    networks:
+      - kylin
+    ports:
+      - 8188:8188
+
+networks:
+  kylin:
\ No newline at end of file
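SERVICE_PRECONDITION, used throughout these compose files, is by convention a space-separated list of host:port pairs that the base image's entrypoint waits on before starting its own daemon. The loop below is only an illustration of that contract, not the exact script shipped in the images:

    for pair in $SERVICE_PRECONDITION; do
        host=${pair%%:*}; port=${pair##*:}
        until nc -z "$host" "$port"; do
            echo "waiting for $host:$port ..."; sleep 5
        done
    done
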
diff --git a/docker/docker-compose/write/docker-compose-hbase.yml b/docker/docker-compose/write/docker-compose-hbase.yml
new file mode 100644
index 0000000..d95f32b
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-hbase.yml
@@ -0,0 +1,43 @@
+version: "3.3"
+
+services:
+  write-hbase-master:
+    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-hbase-master:hbase1.1.2}
+    container_name: write-hbase-master
+    hostname: write-hbase-master
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-zookeeper:2181"
+    networks:
+      - write_kylin
+    ports:
+      - 16010:16010
+
+  write-hbase-regionserver1:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    container_name: write-hbase-regionserver1
+    hostname: write-hbase-regionserver1
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver1
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-zookeeper:2181 write-hbase-master:16010"
+    networks:
+      - write_kylin
+
+  write-hbase-regionserver2:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    container_name: write-hbase-regionserver2
+    hostname: write-hbase-regionserver2
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver2
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} write-zookeeper:2181 write-hbase-master:16010"
+    networks:
+      - write_kylin
+
+networks:
+  write_kylin:
+    external: true
diff --git a/docker/docker-compose/write/docker-compose-hive.yml b/docker/docker-compose/write/docker-compose-hive.yml
new file mode 100644
index 0000000..9b94a34
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-hive.yml
@@ -0,0 +1,37 @@
+version: "3.3"
+
+services:
+  write-hive-server:
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    container_name: write-hive-server
+    hostname: write-hive-server
+    env_file:
+      - write-hadoop.env
+    environment:
+      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:mysql://metastore-db/metastore"
+      SERVICE_PRECONDITION: "write-hive-metastore:9083"
+      HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName: com.mysql.jdbc.Driver
+    networks:
+      - write_kylin
+    ports:
+      - 10000:10000
+
+  write-hive-metastore:
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    container_name: write-hive-metastore
+    hostname: write-hive-metastore
+    env_file:
+      - write-hadoop.env
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:${HADOOP_WEBHDFS_PORT:-50070} write-datanode1:${HADOOP_DN_PORT:-50075} write-datanode2:${HADOOP_DN_PORT:-50075} write-datanode3:${HADOOP_DN_PORT:-50075} metastore-db:3306"
+      HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName: com.mysql.jdbc.Driver
+    command: /opt/hive/bin/hive --service metastore
+    networks:
+      - write_kylin
+    expose:
+      - 9083
+
+networks:
+  write_kylin:
+    external: true
+
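With write-hive-server publishing port 10000, a simple smoke test of HiveServer2, assuming beeline is on the PATH inside the image, could be:

    docker exec -it write-hive-server beeline -u jdbc:hive2://localhost:10000 -e 'SHOW DATABASES;'
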
diff --git a/docker/docker-compose/write/docker-compose-write.yml b/docker/docker-compose/write/docker-compose-write.yml
deleted file mode 100644
index aefe726..0000000
--- a/docker/docker-compose/write/docker-compose-write.yml
+++ /dev/null
@@ -1,215 +0,0 @@
-version: "3.3"
-
-services:
-  write-namenode:
-    image: ${HADOOP_NAMENODE_IMAGETAG:-bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8}
-    container_name: write-namenode
-    hostname: write-namenode
-    volumes:
-      - ./data/write_hadoop_namenode:/hadoop/dfs/name
-    environment:
-      - CLUSTER_NAME=test-write
-    env_file:
-      - write-hadoop.env
-    expose:
-      - 8020
-    ports:
-      - 50070:50070
-
-  write-datanode1:
-    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
-    container_name: write-datanode1
-    hostname: write-datanode1
-    volumes:
-      - ./data/write_hadoop_datanode1:/hadoop/dfs/data
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070"
-    env_file:
-      - write-hadoop.env
-    links:
-      - write-namenode
-
-  write-datanode2:
-    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
-    container_name: write-datanode2
-    hostname: write-datanode2
-    volumes:
-      - ./data/write_hadoop_datanode2:/hadoop/dfs/data
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070"
-    env_file:
-      - write-hadoop.env
-
-  write-datanode3:
-    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
-    container_name: write-datanode3
-    hostname: write-datanode3
-    volumes:
-      - ./data/write_hadoop_datanode3:/hadoop/dfs/data
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070"
-    env_file:
-      - write-hadoop.env
-
-  write-resourcemanager:
-    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-bde2020/hadoop-resourcemanager:2.0.0-hadoop2.7.4-java8}
-    container_name: write-resourcemanager
-    hostname: write-resourcemanager
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075"
-    env_file:
-      - write-hadoop.env
-    ports:
-      - 8088:8088
-
-  write-nodemanager1:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-bde2020/hadoop-nodemanager:2.0.0-hadoop2.7.4-java8}
-    container_name: write-nodemanager1
-    hostname: write-nodemanager1
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
-    env_file:
-      - write-hadoop.env
-
-  write-nodemanager2:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-bde2020/hadoop-nodemanager:2.0.0-hadoop2.7.4-java8}
-    container_name: write-nodemanager2
-    hostname: write-nodemanager2
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
-    env_file:
-      - write-hadoop.env
-
-  write-historyserver:
-    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-bde2020/hadoop-historyserver:2.0.0-hadoop2.7.4-java8}
-    container_name: write-historyserver
-    hostname: write-historyserver
-    volumes:
-      - ./data/write_hadoop_historyserver:/hadoop/yarn/timeline
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
-    env_file:
-      - write-hadoop.env
-    ports:
-      - 8188:8188
-
-  write-hive-server:
-    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
-    container_name: write-hive-server
-    hostname: write-hive-server
-    env_file:
-      - write-hadoop.env
-    environment:
-#      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://write-hive-metastore/metastore"
-      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:mysql://metastore-db/metastore"
-      SERVICE_PRECONDITION: "write-hive-metastore:9083"
-    ports:
-      - 10000:10000
-
-  write-hive-metastore:
-#    image: ${HIVE_IMAGETAG:-bde2020/hive:2.3.2-postgresql-metastore}
-    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
-    container_name: write-hive-metastore
-    hostname: write-hive-metastore
-    env_file:
-      - write-hadoop.env
-    command: /opt/hive/bin/hive --service metastore
-    expose:
-      - 9083
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 metastore-db:3306"
-#       SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-hive-metastore-postgresql:5432"
-
-#  write-hive-metastore-postgresql:
-#    image: bde2020/hive-metastore-postgresql:2.3.0
-#    container_name: write-hive-metastore-postgresql
-#    hostname: write-hive-metastore-postgresql
-
-  metastore-db:
-    image: mysql:5.6.49
-    container_name: metastore-db
-    hostname: metastore-db
-    volumes:
-      - ./data/mysql:/var/lib/mysql
-    environment:
-      - MYSQL_ROOT_PASSWORD=kylin
-      - MYSQL_DATABASE=metastore
-      - MYSQL_USER=kylin
-      - MYSQL_PASSWORD=kylin
-    ports:
-      - 3306:3306
-
-  write-zookeeper:
-    image: ${ZOOKEEPER_IMAGETAG:-zookeeper:3.4.10}
-    container_name: write-zookeeper
-    hostname: write-zookeeper
-    environment:
-      ZOO_MY_ID: 1
-      ZOO_SERVERS: server.1=0.0.0.0:2888:3888
-    ports:
-      - 2181:2181
-
-  write-kafka:
-    image: ${KAFKA_IMAGETAG:-bitnami/kafka:2.0.0}
-    container_name: write-kafkabroker
-    hostname: write-kafkabroker
-    environment:
-      - KAFKA_ZOOKEEPER_CONNECT=write-zookeeper:2181
-      - ALLOW_PLAINTEXT_LISTENER=yes
-    ports:
-      - 9092:9092
-
-  kerberos-kdc:
-    image: ${KERBEROS_IMAGE}
-    container_name: kerberos-kdc
-    hostname: kerberos-kdc
-
-  write-hbase-master:
-    image: ${HBASE_MASTER_IMAGETAG:-bde2020/hbase-master:1.0.0-hbase1.2.6}
-    container_name: write-hbase-master
-    hostname: write-hbase-master
-    env_file:
-      - write-hbase-distributed-local.env
-    environment:
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181"
-    ports:
-      - 16010:16010
-
-  write-hbase-regionserver1:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-bde2020/hbase-regionserver:1.0.0-hbase1.2.6}
-    container_name: write-hbase-regionserver1
-    hostname: write-hbase-regionserver1
-    env_file:
-      - write-hbase-distributed-local.env
-    environment:
-      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver1
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181 write-hbase-master:16010"
-
-  write-hbase-regionserver2:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-bde2020/hbase-regionserver:1.0.0-hbase1.2.6}
-    container_name: write-hbase-regionserver2
-    hostname: write-hbase-regionserver2
-    env_file:
-      - write-hbase-distributed-local.env
-    environment:
-      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver2
-      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181 write-hbase-master:16010"
-
-  kylin-all:
-    image: ${CLIENT_IMAGETAG}
-    container_name: kylin-all
-    hostname: kylin-all
-    volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin:/opt/kylin/
-    env_file:
-      - client.env
-    environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin
-    ports:
-      - 7070:7070
diff --git a/docker/docker-compose/write/write-hadoop.env b/docker/docker-compose/write/write-hadoop.env
index 8ec98c9..ef4429a 100644
--- a/docker/docker-compose/write/write-hadoop.env
+++ b/docker/docker-compose/write/write-hadoop.env
@@ -39,9 +39,11 @@ MAPRED_CONF_mapreduce_reduce_memory_mb=8192
 MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
 MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
 
-HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
-HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore?useSSL=false\&amp;allowPublicKeyRetrieval=true
+HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.cj.jdbc.Driver
 HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
 HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
 HIVE_SITE_CONF_datanucleus_autoCreateSchema=true
+HIVE_SITE_CONF_datanucleus_schema_autoCreateAll=true
+HIVE_SITE_CONF_hive_metastore_schema_verification=false
 HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/base/Dockerfile b/docker/dockerfile/cluster/base/Dockerfile
index ccc05b3..8cf5ff0 100644
--- a/docker/dockerfile/cluster/base/Dockerfile
+++ b/docker/dockerfile/cluster/base/Dockerfile
@@ -52,13 +52,13 @@ RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2
 RUN set -x \
     && echo "Fetch URL2 is : ${HADOOP_URL}" \
     && curl -fSL "${HADOOP_URL}" -o /tmp/hadoop.tar.gz \
-    && curl -fSL "${HADOOP_URL}.asc" -o /tmp/hadoop.tar.gz.asc \
+    && curl -fSL "${HADOOP_URL}.asc" -o /tmp/hadoop.tar.gz.asc
 
 RUN set -x \
     && tar -xvf /tmp/hadoop.tar.gz -C /opt/ \
     && rm /tmp/hadoop.tar.gz* \
     && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
-    && cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml \
+    && if [ -e "/etc/hadoop/mapred-site.xml.template" ]; then cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml ;fi \
     && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs \
     && mkdir /hadoop-data
 
diff --git a/docker/dockerfile/cluster/client/Dockerfile b/docker/dockerfile/cluster/client/Dockerfile
index 38cbbac..46c1822 100644
--- a/docker/dockerfile/cluster/client/Dockerfile
+++ b/docker/dockerfile/cluster/client/Dockerfile
@@ -96,7 +96,7 @@ RUN chmod a+x /opt/entrypoint/kafka/entrypoint.sh
 
 RUN set -x \
     && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
-    && cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml \
+    && if [ -e "/etc/hadoop/mapred-site.xml.template" ]; then cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml ;fi \
     && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs
 
 RUN ln -s /opt/hbase-$HBASE_VERSION/conf /etc/hbase
diff --git a/docker/dockerfile/cluster/hive/Dockerfile b/docker/dockerfile/cluster/hive/Dockerfile
index 46f81f4..c3f11e5 100644
--- a/docker/dockerfile/cluster/hive/Dockerfile
+++ b/docker/dockerfile/cluster/hive/Dockerfile
@@ -49,6 +49,8 @@ RUN echo "Hive URL is :${HIVE_URL}" \
     && wget https://jdbc.postgresql.org/download/postgresql-9.4.1212.jar -O $HIVE_HOME/lib/postgresql-jdbc.jar \
     && rm hive.tar.gz
 
+RUN if [[ $HADOOP_VERSION > "3" ]]; then rm -rf $HIVE_HOME/lib/guava-* ; cp $HADOOP_HOME/share/hadoop/common/lib/guava-* $HIVE_HOME/lib; fi
+
 #Custom configuration goes here
 ADD conf/hive-site.xml $HIVE_HOME/conf
 ADD conf/beeline-log4j2.properties $HIVE_HOME/conf
diff --git a/docker/dockerfile/cluster/hive/run_hv.sh b/docker/dockerfile/cluster/hive/run_hv.sh
index 675937f..fcc3547 100644
--- a/docker/dockerfile/cluster/hive/run_hv.sh
+++ b/docker/dockerfile/cluster/hive/run_hv.sh
@@ -22,5 +22,9 @@ hadoop fs -mkdir -p    /user/hive/warehouse
 hadoop fs -chmod g+w   /tmp
 hadoop fs -chmod g+w   /user/hive/warehouse
 
+if [[ $HIVE_VERSION > "2" ]]; then
+  schematool -dbType mysql -initSchema
+fi
+
 cd $HIVE_HOME/bin
 ./hiveserver2 --hiveconf hive.server2.enable.doAs=false
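After the conditional schema initialization added above, the metastore schema can be cross-checked with schematool's info mode (Hive 2.x and later, reusing the connection settings from hive-site.xml):

    schematool -dbType mysql -info
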
diff --git a/docker/header.sh b/docker/header.sh
new file mode 100644
index 0000000..a990d90
--- /dev/null
+++ b/docker/header.sh
@@ -0,0 +1,141 @@
+#!/bin/bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ARGS=`getopt -o h:i:b:c:a:l:k:f:p --long hadoop_version:,hive_version:,hbase_version:,cluster_mode:,enable_hbase:,enable_ldap:,enable_kerberos:,enable_kafka,help  -n 'parameter.bash' -- "$@"`
+
+if [ $? != 0 ]; then
+    echo "Terminating..."
+    exit 1
+fi
+
+eval set -- "${ARGS}"
+
+HADOOP_VERSION="2.8.5"
+HIVE_VERSION="1.2.2"
+HBASE_VERSION="1.1.2"
+
+# write write-read
+CLUSTER_MODE="write"
+# yes,no
+ENABLE_HBASE="yes"
+# yes,no
+ENABLE_LDAP="no"
+# yes,no
+ENABLE_KERBEROS="no"
+#
+ENABLE_KAFKA="no"
+
+while true;
+do
+    case "$1" in
+        --hadoop_version)
+            HADOOP_VERSION=$2;
+            shift 2;
+            ;;
+        --hive_version)
+            HIVE_VERSION=$2;
+            shift 2;
+            ;;
+        --hbase_version)
+            HBASE_VERSION=$2;
+            shift 2;
+            ;;
+        --cluster_mode)
+            CLUSTER_MODE=$2;
+            shift 2;
+            ;;
+         --enable_hbase)
+            ENABLE_HBASE=$2;
+            shift 2;
+            ;;
+        --enable_ldap)
+            ENABLE_LDAP=$2;
+            shift 2;
+            ;;
+        --enable_kerberos)
+            ENABLE_KERBEROS=$2;
+            shift 2;
+            ;;
+        --enable_kafka)
+            ENABLE_KAFKA=$2;
+            shift 2;
+            ;;
+        --help)
+            cat << EOF
+----------------------menu------------------------
+--hadoop_version    hadoop version, default is 2.8.5
+--hive_version      hive version, default is 1.2.2
+--hbase_version     hbase version, default is 1.1.2
+--cluster_mode      cluster mode, optional values: [write, write-read], default is write
+--enable_hbase      whether to start the hbase service, optional values: [yes, no], default is yes
+--enable_ldap       whether to start the ldap service, optional values: [yes, no], default is no
+--enable_kerberos   whether to start the kerberos service, optional values: [yes, no], default is no
+--enable_kafka      whether to start the kafka service, optional values: [yes, no], default is no
+EOF
+            exit 0
+            ;;
+        --)
+            break
+            ;;
+        *)
+            echo "Internal error!"
+            break
+            ;;
+    esac
+done
+
+for arg in $@
+do
+    echo "processing $arg"
+done
+
+echo "........hadoop version: "$HADOOP_VERSION
+echo "........hive version: "$HIVE_VERSION
+echo "........hbase version: "$HBASE_VERSION
+echo "........cluster_mode: "${CLUSTER_MODE}
+echo "........enable hbase: "${ENABLE_HBASE}
+echo "........enable ldap: "${ENABLE_LDAP}
+echo "........enable kerberos: "${ENABLE_KERBEROS}
+
+export HBASE_VERSION=$HBASE_VERSION
+export HADOOP_VERSION=$HADOOP_VERSION
+export HIVE_VERSION=$HIVE_VERSION
+
+export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+export HADOOP_DATANODE_IMAGETAG=apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION}
+export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION}
+export HADOOP_RESOURCEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_NODEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_HISTORYSERVER_IMAGETAG=apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION}
+export HIVE_IMAGETAG=apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION}
+
+export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
+export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION}
+export HBASE_REGIONSERVER_IMAGETAG=apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION}
+
+export KAFKA_IMAGE=bitnami/kafka:2.0.0
+export LDAP_IMAGE=osixia/openldap:1.3.0
+export CLIENT_IMAGETAG=apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}
+
+if [[ $HADOOP_VERSION < "3" ]]; then
+  export HADOOP_WEBHDFS_PORT=50070
+  export HADOOP_DN_PORT=50075
+elif [[ $HADOOP_VERSION > "3" ]]; then
+  export HADOOP_WEBHDFS_PORT=9870
+  export HADOOP_DN_PORT=9864
+fi
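The version-to-port mapping at the end of header.sh can be sanity-checked by sourcing the script directly; this is only a sketch, run from the docker/ directory, and it also exports the image-tag variables as a side effect:

    source ./header.sh --hadoop_version 3.1.1
    echo "webhdfs=$HADOOP_WEBHDFS_PORT datanode=$HADOOP_DN_PORT"   # expect 9870 and 9864
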
diff --git a/docker/setup_cluster.sh b/docker/setup_cluster.sh
index 0e3a260..e7ae80f 100644
--- a/docker/setup_cluster.sh
+++ b/docker/setup_cluster.sh
@@ -1,20 +1,20 @@
 #!/bin/bash
-
-#  Licensed to the Apache Software Foundation (ASF) under one
-#  or more contributor license agreements.  See the NOTICE file
-#  distributed with this work for additional information
-#  regarding copyright ownership.  The ASF licenses this file
-#  to you under the Apache License, Version 2.0 (the
-#  "License"); you may not use this file except in compliance
-#  with the License.  You may obtain a copy of the License at
 #
-#      http://www.apache.org/licenses/LICENSE-2.0
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
 #
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
 # limitations under the License.
+#
 
 SCRIPT_PATH=$(cd `dirname $0`; pwd)
 WS_ROOT=`dirname $SCRIPT_PATH`
diff --git a/docker/stop_cluster.sh b/docker/stop_cluster.sh
index 87f0ac4..24ce4e8 100644
--- a/docker/stop_cluster.sh
+++ b/docker/stop_cluster.sh
@@ -1,23 +1,47 @@
 #!/bin/bash
-
-#  Licensed to the Apache Software Foundation (ASF) under one
-#  or more contributor license agreements.  See the NOTICE file
-#  distributed with this work for additional information
-#  regarding copyright ownership.  The ASF licenses this file
-#  to you under the Apache License, Version 2.0 (the
-#  "License"); you may not use this file except in compliance
-#  with the License.  You may obtain a copy of the License at
 #
-#      http://www.apache.org/licenses/LICENSE-2.0
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
 #
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
 # limitations under the License.
+#
 
 SCRIPT_PATH=$(cd `dirname $0`; pwd)
-# set up root directory
 WS_ROOT=`dirname $SCRIPT_PATH`
-# shut down cluster
-KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-write.yml down
+
+source ${SCRIPT_PATH}/header.sh
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml down
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hbase.yml down
+
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-kafka.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml down
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hive.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-zookeeper.yml down
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kerberos.yml down
+# KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-ldap.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-metastore.yml down
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml down
+
+# clean data
+#rm -rf ${SCRIPT_PATH}/docker-compose/write/data/*
+#rm -rf ${SCRIPT_PATH}/docker-compose/read/data/*
+#rm -rf ${SCRIPT_PATH}/docker-compose/others/data/*


[kylin] 10/13: KYLIN-4801 Some format specification fix and clean up

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 49715c038bde459f1b842fc3f9ddf97a90b0976d
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Wed Oct 28 13:56:35 2020 +0800

    KYLIN-4801 Some format specification fix and clean up
---
 .../generic_desc_data/generic_desc_data_3x.json    |  0
 .../generic_desc_data/generic_desc_data_4x.json    |  0
 .../data/release_test_0001.json                    |  0
 .../env/default/default.properties                 | 17 ++++
 .../env/default/python.properties                  | 21 +++++
 .../features/specs/generic_test.spec               | 64 +++++++++++++++
 .../features/step_impl/before_suite.py             | 17 ++++
 .../features/step_impl/generic_test_step.py        | 17 ++++
 .../kylin_instances/kylin_instance.yml             | 22 +++++
 .../kylin_utils/basic.py                           | 17 ++++
 .../kylin_utils/equals.py                          | 17 ++++
 .../kylin_utils/kylin.py                           | 17 ++++
 .../kylin_utils/shell.py                           | 17 ++++
 .../kylin_utils/util.py                            | 17 ++++
 .../manifest.json                                  |  0
 .../requirements.txt                               |  0
 build/CI/run-ci.sh                                 | 10 ++-
 build/CI/testing/README.md                         | 95 ----------------------
 build/CI/testing/env/default/python.properties     |  4 -
 .../specs/authentication/authentication_0001.spec  | 18 ----
 .../read_write_separation.spec                     |  5 --
 build/CI/testing/features/specs/generic_test.spec  | 48 -----------
 build/CI/testing/features/specs/sample.spec        |  5 --
 .../step_impl/authentication/authentication.py     | 37 ---------
 .../read_write_separation/read_write_separation.py |  0
 build/CI/testing/features/step_impl/sample.py      | 14 ----
 .../CI/testing/kylin_instances/kylin_instance.yml  |  7 --
 27 files changed, 251 insertions(+), 235 deletions(-)

diff --git a/build/CI/testing/data/generic_desc_data/generic_desc_data_3x.json b/build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_3x.json
similarity index 100%
rename from build/CI/testing/data/generic_desc_data/generic_desc_data_3x.json
rename to build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_3x.json
diff --git a/build/CI/testing/data/generic_desc_data/generic_desc_data_4x.json b/build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_4x.json
similarity index 100%
rename from build/CI/testing/data/generic_desc_data/generic_desc_data_4x.json
rename to build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_4x.json
diff --git a/build/CI/testing/data/release_test_0001.json b/build/CI/kylin-system-testing/data/release_test_0001.json
similarity index 100%
rename from build/CI/testing/data/release_test_0001.json
rename to build/CI/kylin-system-testing/data/release_test_0001.json
diff --git a/build/CI/testing/env/default/default.properties b/build/CI/kylin-system-testing/env/default/default.properties
similarity index 53%
rename from build/CI/testing/env/default/default.properties
rename to build/CI/kylin-system-testing/env/default/default.properties
index 461ec37..9694101 100644
--- a/build/CI/testing/env/default/default.properties
+++ b/build/CI/kylin-system-testing/env/default/default.properties
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 # default.properties
 # properties set here will be available to the test execution as environment variables
 
diff --git a/build/CI/kylin-system-testing/env/default/python.properties b/build/CI/kylin-system-testing/env/default/python.properties
new file mode 100644
index 0000000..4dc60a9
--- /dev/null
+++ b/build/CI/kylin-system-testing/env/default/python.properties
@@ -0,0 +1,21 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+GAUGE_PYTHON_COMMAND = python3
+
+# Comma seperated list of dirs. path should be relative to project root.
+STEP_IMPL_DIR = features/step_impl
diff --git a/build/CI/kylin-system-testing/features/specs/generic_test.spec b/build/CI/kylin-system-testing/features/specs/generic_test.spec
new file mode 100644
index 0000000..d37e236
--- /dev/null
+++ b/build/CI/kylin-system-testing/features/specs/generic_test.spec
@@ -0,0 +1,64 @@
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+# Kylin Release Test
+Tags:3.x
+## Prepare env
+* Get kylin instance
+
+* prepare data file from "release_test_0001.json"
+
+* Create project "release_test_0001_project" and load table "load_table_list"
+
+
+## MR engine
+
+* Create model with "model_desc_data" in "release_test_0001_project"
+
+* Create cube with "cube_desc_data" in "release_test_0001_project", cube name is "release_test_0001_cube"
+
+* Build segment from "1325347200000" to "1356969600000" in "release_test_0001_cube"
+
+* Build segment from "1356969600000" to "1391011200000" in "release_test_0001_cube"
+
+* Merge cube "release_test_0001_cube" segment from "1325347200000" to "1391011200000"
+
+
+SPARK engine
+
+Clone cube "release_test_0001_cube" and name it "kylin_spark_cube" in "release_test_0001_project", modify build engine to "SPARK"
+
+Build segment from "1325347200000" to "1356969600000" in "kylin_spark_cube"
+
+Build segment from "1356969600000" to "1391011200000" in "kylin_spark_cube"
+
+Merge cube "kylin_spark_cube" segment from "1325347200000" to "1391011200000"
+
+
+## Query cube and pushdown
+
+* Query SQL "select count(*) from kylin_sales" and specify "release_test_0001_cube" cube to query in "release_test_0001_project", compare result with "10000"
+
+Query SQL "select count(*) from kylin_sales" and specify "kylin_spark_cube" cube to query in "release_test_0001_project", compare result with "10000"
+
+* Disable cube "release_test_0001_cube"
+
+Disable cube "kylin_spark_cube"
+
+* Query SQL "select count(*) from kylin_sales" in "release_test_0001_project" and pushdown, compare result with "10000"
+
+
+
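These Gauge specs are typically run from build/CI/kylin-system-testing with the Python runner configured in python.properties; a likely invocation (exact tags and paths may vary) is:

    pip3 install -r requirements.txt
    gauge run --tags 3.x features/specs/generic_test.spec
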
diff --git a/build/CI/testing/features/step_impl/before_suite.py b/build/CI/kylin-system-testing/features/step_impl/before_suite.py
similarity index 71%
rename from build/CI/testing/features/step_impl/before_suite.py
rename to build/CI/kylin-system-testing/features/step_impl/before_suite.py
index 4cce795..3cd86ca 100644
--- a/build/CI/testing/features/step_impl/before_suite.py
+++ b/build/CI/kylin-system-testing/features/step_impl/before_suite.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from getgauge.python import before_suite
 import os
 import json
diff --git a/build/CI/testing/features/step_impl/generic_test_step.py b/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
similarity index 82%
rename from build/CI/testing/features/step_impl/generic_test_step.py
rename to build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
index cf04d55..0aabb98 100644
--- a/build/CI/testing/features/step_impl/generic_test_step.py
+++ b/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from getgauge.python import step
 import os
 import json
diff --git a/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml b/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml
new file mode 100644
index 0000000..501428f
--- /dev/null
+++ b/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+# All mode
+- host: localhost
+  port: 7070
+  version: 3.x
+  hadoop_platform: HDP2.4
+  deploy_mode: ALL
\ No newline at end of file
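This YAML is what kylin_utils/util.py (renamed below) parses to locate the Kylin instance under test. A minimal, self-contained sketch of reading it, assuming PyYAML and the file layout above; the URL format is an assumption for illustration:

```python
import yaml


def load_kylin_instances(path="kylin_instances/kylin_instance.yml"):
    # The file is a YAML list; each entry describes one Kylin instance.
    with open(path) as f:
        return yaml.safe_load(f)


if __name__ == "__main__":
    instance = load_kylin_instances()[0]
    # Assumed URL layout: http://<host>:<port>/kylin
    print("http://{host}:{port}/kylin".format(**instance))
```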
diff --git a/build/CI/testing/kylin_utils/basic.py b/build/CI/kylin-system-testing/kylin_utils/basic.py
similarity index 81%
rename from build/CI/testing/kylin_utils/basic.py
rename to build/CI/kylin-system-testing/kylin_utils/basic.py
index cd8e416..ee3a1fb 100644
--- a/build/CI/testing/kylin_utils/basic.py
+++ b/build/CI/kylin-system-testing/kylin_utils/basic.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import logging
 import requests
 
diff --git a/build/CI/testing/kylin_utils/equals.py b/build/CI/kylin-system-testing/kylin_utils/equals.py
similarity index 88%
rename from build/CI/testing/kylin_utils/equals.py
rename to build/CI/kylin-system-testing/kylin_utils/equals.py
index f610f4e..9d44aaf 100644
--- a/build/CI/testing/kylin_utils/equals.py
+++ b/build/CI/kylin-system-testing/kylin_utils/equals.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import logging
 from kylin_utils import util
 
diff --git a/build/CI/testing/kylin_utils/kylin.py b/build/CI/kylin-system-testing/kylin_utils/kylin.py
similarity index 97%
rename from build/CI/testing/kylin_utils/kylin.py
rename to build/CI/kylin-system-testing/kylin_utils/kylin.py
index 1cb9a46..10bd36a 100644
--- a/build/CI/testing/kylin_utils/kylin.py
+++ b/build/CI/kylin-system-testing/kylin_utils/kylin.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import json
 import logging
 import time
diff --git a/build/CI/testing/kylin_utils/shell.py b/build/CI/kylin-system-testing/kylin_utils/shell.py
similarity index 80%
rename from build/CI/testing/kylin_utils/shell.py
rename to build/CI/kylin-system-testing/kylin_utils/shell.py
index 3263636..72b734a 100644
--- a/build/CI/testing/kylin_utils/shell.py
+++ b/build/CI/kylin-system-testing/kylin_utils/shell.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import logging
 from shlex import quote as shlex_quote
 
diff --git a/build/CI/testing/kylin_utils/util.py b/build/CI/kylin-system-testing/kylin_utils/util.py
similarity index 70%
rename from build/CI/testing/kylin_utils/util.py
rename to build/CI/kylin-system-testing/kylin_utils/util.py
index 47ca11e..f29e034 100644
--- a/build/CI/testing/kylin_utils/util.py
+++ b/build/CI/kylin-system-testing/kylin_utils/util.py
@@ -1,3 +1,20 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from selenium import webdriver
 from yaml import load, loader
 import os
diff --git a/build/CI/testing/manifest.json b/build/CI/kylin-system-testing/manifest.json
similarity index 100%
rename from build/CI/testing/manifest.json
rename to build/CI/kylin-system-testing/manifest.json
diff --git a/build/CI/testing/requirements.txt b/build/CI/kylin-system-testing/requirements.txt
similarity index 100%
rename from build/CI/testing/requirements.txt
rename to build/CI/kylin-system-testing/requirements.txt
diff --git a/build/CI/run-ci.sh b/build/CI/run-ci.sh
index 28110b4..41a4bb6 100644
--- a/build/CI/run-ci.sh
+++ b/build/CI/run-ci.sh
@@ -57,6 +57,12 @@ cp -r apache-kylin-bin/* kylin-all
 cat > kylin-all/conf/kylin.properties <<EOL
 kylin.job.scheduler.default=100
 kylin.server.self-discovery-enabled=true
+kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
+kylin.query.pushdown.update-enabled=false
+kylin.query.pushdown.jdbc.url=jdbc:hive2://write-hive-server:10000/default
+kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
+kylin.query.pushdown.jdbc.username=hive
+kylin.query.pushdown.jdbc.password=
 EOL
 
 #cp -r apache-kylin-bin/* kylin-query
@@ -115,11 +121,11 @@ cd ..
 echo "Wait about 4 minutes ..."
 sleep ${AWAIT_SECOND}
 
-cd build/CI/testing
+cd build/CI/kylin-system-testing
 pip install -r requirements.txt
 gauge run --tags 3.x
 cd -
-echo "Please check build/CI/testing/reports/html-report/index.html for reports."
+echo "Please check build/CI/kylin-system-testing/reports/html-report/index.html for reports."
 
 ###########################################
 ###########################################
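The six pushdown properties added above route queries that no cube can answer to HiveServer2 over JDBC. One way to sanity-check that wiring, sketched here with assumed defaults (ADMIN/KYLIN credentials, the kylin-all instance exposed on localhost:7070) rather than the project's actual test code:

```python
import requests

# Assumed endpoint and credentials; adjust to the deployed instance.
KYLIN_URL = "http://localhost:7070/kylin/api/query"
AUTH = ("ADMIN", "KYLIN")

payload = {
    "sql": "select count(*) from kylin_sales",
    "project": "release_test_0001_project",
}

resp = requests.post(KYLIN_URL, json=payload, auth=AUTH, timeout=120)
resp.raise_for_status()
body = resp.json()

# When no cube serves the query, the response is expected to be marked as a pushdown hit.
print(body.get("pushDown"), body.get("results"))
```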
diff --git a/build/CI/testing/README.md b/build/CI/testing/README.md
deleted file mode 100644
index c2936dc..0000000
--- a/build/CI/testing/README.md
+++ /dev/null
@@ -1,95 +0,0 @@
-# kylin-test
-Automated test code repo based on [gauge](https://docs.gauge.org/?os=macos&language=python&ide=vscode) for [Apache Kylin](https://github.com/apache/kylin).
-
-### IDE
-Gauge support IntelliJ IDEA and VSCode as development IDE.
-However, IDEA cannot detect the step implementation method of Python language, just support java.
-VSCode is recommended as the development IDE.
-
-### Clone repo
-```
-git clone https://github.com/zhangayqian/kylin-test
-```
-
-### Prepare environment
- * Install python3 compiler and version 3.6 recommended
- * Install gauge
- ```
- brew install gauge
- ```
- If you encounter the below error:
- ```
- Download failed: https://homebrew.bintray.com/bottles/gauge- 1.1.1.mojave.bottle.1.tar.gz
- ```
- You can try to download the compressed package manually, put it in the downloads directory of homebrew cache directory, and execute the installation command of gauge again.
-
-* Install required dependencies
-```
-pip install -r requirements.txt
-```
-
-## Directory structure
-* features/specs: Directory of specification file.
-  A specification is a business test case which describes a particular feature of the application that needs testing. Gauge specifications support a .spec or .md file format and these specifications are written in a syntax similar to Markdown.
-  
-* features/step_impl: Directory of Step implementations methods.
-  Every step implementation has an equivalent code as per the language plugin used while installing Gauge. The code is run when the steps inside a spec are executed. The code must have the same number of parameters as mentioned in the step.
-  Steps can be implemented in different ways such as simple step, step with table, step alias, and enum data type used as step parameters.
-
-* data: Directory of data files needed to execute test cases. Such as cube_desc.json.
-
-* env/default: Gauge configuration file directory.
-
-* kylin_instance: Kylin instance configuration file directory.
-
-* kylin_utils: Tools method directory.
-
-## Run Gauge specifications
-* Run all specification
-```
-gauge run
-```
-* Run specification or step or spec according tags, such as:
-```
-gauge run --tags 3.x
-```
-* Please refer to https://docs.gauge.org/execution.html?os=macos&language=python&ide=vscode learn more.
-
-## Tips
-
-A specification consists of different sections; some of which are mandatory and few are optional. The components of a specification are listed as follows:
-
-- Specification heading
-- Scenario
-- Step
-- Parameters
-- Tags
-- Comments
-
-#### Note
-
-Tags - optional, executable component when the specification is run
-Comments - optional, non-executable component when the specification is run
-
-### About tags
-
-Here, we stipulate that all test scenarios should have tags. Mandatory tags include 3.x and 4.x to indicate which versions are supported by the test scenario. Such as:
-```
-# Flink Engine
-Tags:3.x
-```
-```
-# Cube management
-Tags:3.x,4.x
-```
-You can put the tag in the specification heading, so that all scenarios in this specification will have this tag.
-You can also tag your own test spec to make it easier for you to run your own test cases.
-
-### About Project
-There are two project names already occupied, they are `generic_test_project` and `pushdown_test_project`. 
-  
-Every time you run this test, @befroe_suit method will be execute in advance to create `generic_test_project`.  And the model and cube in this project are universal, and the cube has been fully built. They include dimensions and measures as much as possible. When you need to use a built cube to perform tests, you may use it.
-
-`pushdown_test_project` used to compare sql query result. This is a empty project.
-
-Please refer to https://docs.gauge.org/writing-specifications.html?os=macos&language=python&ide=vscode learn more.
diff --git a/build/CI/testing/env/default/python.properties b/build/CI/testing/env/default/python.properties
deleted file mode 100644
index 077d659..0000000
--- a/build/CI/testing/env/default/python.properties
+++ /dev/null
@@ -1,4 +0,0 @@
-GAUGE_PYTHON_COMMAND = python3
-
-# Comma seperated list of dirs. path should be relative to project root.
-STEP_IMPL_DIR = features/step_impl
diff --git a/build/CI/testing/features/specs/authentication/authentication_0001.spec b/build/CI/testing/features/specs/authentication/authentication_0001.spec
deleted file mode 100644
index b915e26..0000000
--- a/build/CI/testing/features/specs/authentication/authentication_0001.spec
+++ /dev/null
@@ -1,18 +0,0 @@
-# Authentication Test
-Tags:front-end
-## Prepare browser
-
-* Initialize "chrome" browser and connect to "kylin_instance.yml"
-
-## Use the user name and password for user authentication
-
-* Authentication with user "test" and password "password".
-
-* Authentication with built-in user
-     |User   |Password      |
-     |-------|--------------|
-     |ADMIN  |KYLIN         |
-
-
-
-
diff --git a/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec b/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec
deleted file mode 100644
index 5ce3f08..0000000
--- a/build/CI/testing/features/specs/deploy_in_cluster_mode/read_write_separation.spec
+++ /dev/null
@@ -1,5 +0,0 @@
-# Read and write separation deployment
-Tags: 4.x
-
-## Prepare env
-* Get kylin instance
diff --git a/build/CI/testing/features/specs/generic_test.spec b/build/CI/testing/features/specs/generic_test.spec
deleted file mode 100644
index c2f6b5e..0000000
--- a/build/CI/testing/features/specs/generic_test.spec
+++ /dev/null
@@ -1,48 +0,0 @@
-# Kylin Release Test
-Tags:3.x
-## Prepare env
-* Get kylin instance
-
-* prepare data file from "release_test_0001.json"
-
-* Create project "release_test_0001_project" and load table "load_table_list"
-
-
-## MR engine
-
-* Create model with "model_desc_data" in "release_test_0001_project"
-
-* Create cube with "cube_desc_data" in "release_test_0001_project", cube name is "release_test_0001_cube"
-
-* Build segment from "1325347200000" to "1356969600000" in "release_test_0001_cube"
-
-* Build segment from "1356969600000" to "1391011200000" in "release_test_0001_cube"
-
-* Merge cube "release_test_0001_cube" segment from "1325347200000" to "1391011200000"
-
-
-## SPARK engine
-
-* Clone cube "release_test_0001_cube" and name it "kylin_spark_cube" in "release_test_0001_project", modify build engine to "SPARK"
-
-* Build segment from "1325347200000" to "1356969600000" in "kylin_spark_cube"
-
-* Build segment from "1356969600000" to "1391011200000" in "kylin_spark_cube"
-
-* Merge cube "kylin_spark_cube" segment from "1325347200000" to "1391011200000"
-
-
-## Query cube and pushdown
-
-* Query SQL "select count(*) from kylin_sales" and specify "release_test_0001_cube" cube to query in "release_test_0001_project", compare result with "10000"
-
-* Query SQL "select count(*) from kylin_sales" and specify "kylin_spark_cube" cube to query in "release_test_0001_project", compare result with "10000"
-
-* Disable cube "release_test_0001_cube"
-
-* Disable cube "kylin_spark_cube"
-
-* Query SQL "select count(*) from kylin_sales" in "release_test_0001_project" and pushdown, compare result with "10000"
-
-
-
diff --git a/build/CI/testing/features/specs/sample.spec b/build/CI/testing/features/specs/sample.spec
deleted file mode 100644
index bb9c9f5..0000000
--- a/build/CI/testing/features/specs/sample.spec
+++ /dev/null
@@ -1,5 +0,0 @@
-# test
-Tags:test,3.x,4.x
-## test
-* Get kylin instance
-* Query sql "select count(*) from kylin_sales" in "generic_test_project" and compare result with pushdown result
diff --git a/build/CI/testing/features/step_impl/authentication/authentication.py b/build/CI/testing/features/step_impl/authentication/authentication.py
deleted file mode 100644
index 044d1e2..0000000
--- a/build/CI/testing/features/step_impl/authentication/authentication.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from time import sleep
-
-from getgauge.python import step
-from kylin_utils import util
-
-
-class LoginTest:
-
-    @step("Initialize <browser_type> browser and connect to <file_name>")
-    def setup_browser(self, browser_type, file_name):
-        global browser
-        browser = util.setup_browser(browser_type=browser_type)
-
-        browser.get(util.kylin_url(file_name))
-        sleep(3)
-
-        browser.refresh()
-        browser.set_window_size(1400, 800)
-
-    @step("Authentication with user <user> and password <password>.")
-    def assert_authentication_failed(self, user, password):
-        browser.find_element_by_id("username").clear()
-        browser.find_element_by_id("username").send_keys(user)
-        browser.find_element_by_id("password").clear()
-        browser.find_element_by_id("password").send_keys(password)
-
-        browser.find_element_by_class_name("bigger-110").click()
-
-    @step("Authentication with built-in user <table>")
-    def assert_authentication_success(self, table):
-        for i in range(1, 2):
-            user = table.get_row(i)
-            browser.find_element_by_id("username").clear()
-            browser.find_element_by_id("username").send_keys(user[0])
-            browser.find_element_by_id("password").clear()
-            browser.find_element_by_id("password").send_keys(user[1])
-            browser.find_element_by_class_name("bigger-110").click()
diff --git a/build/CI/testing/features/step_impl/read_write_separation/read_write_separation.py b/build/CI/testing/features/step_impl/read_write_separation/read_write_separation.py
deleted file mode 100644
index e69de29..0000000
diff --git a/build/CI/testing/features/step_impl/sample.py b/build/CI/testing/features/step_impl/sample.py
deleted file mode 100644
index d7ac9bb..0000000
--- a/build/CI/testing/features/step_impl/sample.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from getgauge.python import step, before_spec
-from kylin_utils import util
-from kylin_utils import equals
-
-
-@before_spec()
-def before_spec_hook():
-    global client
-    client = util.setup_instance("kylin_instance.yml")
-
-
-@step("Query sql <select count(*) from kylin_sales> in <project> and compare result with pushdown result")
-def query_sql_and_compare_result_with_pushdown_result(sql, project):
-    equals.compare_sql_result(sql=sql, project=project, kylin_client=client)
diff --git a/build/CI/testing/kylin_instances/kylin_instance.yml b/build/CI/testing/kylin_instances/kylin_instance.yml
deleted file mode 100644
index fe6fdd1..0000000
--- a/build/CI/testing/kylin_instances/kylin_instance.yml
+++ /dev/null
@@ -1,7 +0,0 @@
----
-# All mode
-- host: localhost
-  port: 7070
-  version: 3.x
-  hadoop_platform: HDP2.4
-  deploy_mode: ALL
\ No newline at end of file


[kylin] 07/13: KYLIN-4775 Refactor & Fix HADOOP_CONF_DIR

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 768a6d662350d4d18a9e2ced4490df439104f537
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Fri Oct 23 13:48:35 2020 +0800

    KYLIN-4775 Refactor & Fix HADOOP_CONF_DIR
---
 build/CI/run-ci.sh                                 | 115 +++++++++++++++++
 .../CI/testing/kylin_instances/kylin_instance.yml  |   2 +-
 build/CI/testing/kylin_utils/kylin.py              |   2 +-
 docker/README-cluster.md                           | 143 +++++++++++++++++++++
 docker/{README.md => README-standalone.md}         |  36 ++++--
 docker/README.md                                   | 143 +--------------------
 docker/build_cluster_images.sh                     |  52 ++++----
 docker/docker-compose/others/client-write-read.env |   4 +-
 docker/docker-compose/others/client-write.env      |   4 +-
 .../others/docker-compose-kylin-write-read.yml     |  30 +----
 .../others/docker-compose-kylin-write.yml          |  36 ++----
 .../others/docker-compose-metastore.yml            |   2 -
 docker/docker-compose/others/kylin/README.md       |   2 +
 .../docker-compose/read/docker-compose-hadoop.yml  |  16 +--
 .../docker-compose/read/docker-compose-hbase.yml   |   6 +-
 docker/docker-compose/read/read-hadoop.env         |   4 +-
 .../docker-compose/write/conf/hive/hive-site.xml   |  10 +-
 .../docker-compose/write/docker-compose-hadoop.yml |  22 ++--
 .../docker-compose/write/docker-compose-hbase.yml  |   6 +-
 .../docker-compose/write/docker-compose-hive.yml   |   4 +-
 docker/docker-compose/write/write-hadoop.env       |   4 +-
 docker/dockerfile/cluster/base/Dockerfile          |  21 +--
 docker/dockerfile/cluster/base/entrypoint.sh       |  42 +++---
 docker/dockerfile/cluster/client/Dockerfile        |  22 ++--
 docker/dockerfile/cluster/client/entrypoint.sh     |   6 +-
 docker/dockerfile/cluster/client/run_cli.sh        |   8 +-
 docker/dockerfile/cluster/datanode/Dockerfile      |   2 +-
 docker/dockerfile/cluster/hbase/Dockerfile         |   7 +-
 docker/dockerfile/cluster/hbase/entrypoint.sh      |   2 +-
 docker/dockerfile/cluster/historyserver/Dockerfile |   2 +-
 docker/dockerfile/cluster/hive/Dockerfile          |   2 +-
 docker/dockerfile/cluster/hive/conf/hive-site.xml  |   3 +-
 docker/dockerfile/cluster/hive/entrypoint.sh       |  40 +++---
 docker/dockerfile/cluster/hmaster/Dockerfile       |   2 +-
 docker/dockerfile/cluster/hregionserver/Dockerfile |   2 +-
 docker/dockerfile/cluster/kylin/Dockerfile         |   2 +-
 docker/dockerfile/cluster/namenode/Dockerfile      |   2 +-
 docker/dockerfile/cluster/nodemanager/Dockerfile   |   2 +-
 .../dockerfile/cluster/resourcemanager/Dockerfile  |   2 +-
 docker/header.sh                                   |  27 ++--
 docker/setup_cluster.sh                            |  16 ++-
 41 files changed, 479 insertions(+), 376 deletions(-)

diff --git a/build/CI/run-ci.sh b/build/CI/run-ci.sh
new file mode 100644
index 0000000..574f892
--- /dev/null
+++ b/build/CI/run-ci.sh
@@ -0,0 +1,115 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# 1. Packaging for Kylin binary
+# 2. Deploy hadoop cluster
+# 3. Deploy kylin cluster
+# 4. Run system testing
+# 5. Clean up
+
+INIT_HADOOP=1
+
+###########################################
+###########################################
+# 0. Prepare
+export JAVA_HOME=/usr/local/java
+export PATH=$JAVA_HOME/bin:$PATH
+export PATH=/root/xiaoxiang.yu/INSTALL/anaconda/bin:$PATH
+binary_file=/root/xiaoxiang.yu/BINARY/apache-kylin-3.1.2-SNAPSHOT-bin.tar.gz
+source ~/.bashrc
+pwd
+
+###########################################
+###########################################
+# 1. Package kylin
+
+#TODO
+cd docker/docker-compose/others/kylin
+cp $binary_file .
+tar zxf apache-kylin-3.1.2-SNAPSHOT-bin.tar.gz
+
+mkdir kylin-all
+mkdir kylin-query
+mkdir kylin-job
+
+cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-all
+cat > kylin-all/conf/kylin.properties <<EOL
+kylin.job.scheduler.default=100
+kylin.server.self-discovery-enabled=true
+EOL
+
+cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-query
+cat > kylin-query/conf/kylin.properties <<EOL
+kylin.job.scheduler.default=100
+kylin.server.self-discovery-enabled=true
+EOL
+
+cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-job
+cat > kylin-job/conf/kylin.properties <<EOL
+kylin.job.scheduler.default=100
+kylin.server.self-discovery-enabled=true
+EOL
+
+cd -
+
+###########################################
+###########################################
+# 2. Deploy Hadoop
+
+if [ "$INIT_HADOOP" = "1" ];
+then
+    echo "Restart Hadoop cluster."
+    cd docker
+
+    bash stop_cluster.sh
+
+    bash setup_cluster.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
+      --enable_hbase yes --hbase_version 1.1.2  --enable_ldap nosh setup_cluster.sh \
+      --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 --enable_hbase yes \
+      --hbase_version 1.1.2  --enable_ldap no
+    cd ..
+else
+    echo "Do NOT restart Hadoop cluster."
+fi;
+
+docker ps
+
+###########################################
+###########################################
+# 3. Deploy Kylin
+
+# TODO
+
+###########################################
+###########################################
+# 4. Run test
+
+echo "Wait about 6 minutes ..."
+sleep 360
+
+cd build/CI/testing
+pip install -r requirements.txt
+gauge run --tags 3.x
+cd ..
+
+###########################################
+###########################################
+# 5. Clean up
+
+# TODO
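The script above waits a fixed six minutes before running Gauge. As an alternative sketch (not part of the commit), the Kylin web UI could be polled until it answers; the URL and timeout values below are assumptions based on the port mapping described elsewhere in this change:

```python
import time

import requests

KYLIN_LOGIN_URL = "http://localhost:7070/kylin/login"  # assumed port mapping


def wait_for_kylin(url=KYLIN_LOGIN_URL, timeout_s=600, interval_s=15):
    """Poll the Kylin web UI until it responds or the timeout expires."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if requests.get(url, timeout=10).status_code == 200:
                return True
        except requests.RequestException:
            pass  # Kylin is not up yet; keep waiting.
        time.sleep(interval_s)
    return False


if __name__ == "__main__":
    print("Kylin ready:", wait_for_kylin())
```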
diff --git a/build/CI/testing/kylin_instances/kylin_instance.yml b/build/CI/testing/kylin_instances/kylin_instance.yml
index ca76d00..fe6fdd1 100644
--- a/build/CI/testing/kylin_instances/kylin_instance.yml
+++ b/build/CI/testing/kylin_instances/kylin_instance.yml
@@ -1,6 +1,6 @@
 ---
 # All mode
-- host: kylin-all
+- host: localhost
   port: 7070
   version: 3.x
   hadoop_platform: HDP2.4
diff --git a/build/CI/testing/kylin_utils/kylin.py b/build/CI/testing/kylin_utils/kylin.py
index 252ce21..164f2ca 100644
--- a/build/CI/testing/kylin_utils/kylin.py
+++ b/build/CI/testing/kylin_utils/kylin.py
@@ -95,7 +95,7 @@ class KylinHttpClient(BasicHttpClient):  # pylint: disable=too-many-public-metho
         resp = self._request('DELETE', url)
         return resp
 
-    def load_table(self, project_name, tables, calculate=True):
+    def load_table(self, project_name, tables, calculate=False):
         """
         load or reload table api
         :param calculate: Default is True
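Note that this hunk flips the default of `calculate` to False while the docstring context line still says the default is True, so callers that need Hive column-cardinality calculation must now request it explicitly. A hypothetical call site (helper names and table-identifier format assumed):

```python
from kylin_utils import util

client = util.setup_instance("kylin_instance.yml")

# After this change, cardinality calculation must be requested explicitly.
client.load_table("release_test_0001_project", "DEFAULT.KYLIN_SALES", calculate=True)
```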
diff --git a/docker/README-cluster.md b/docker/README-cluster.md
new file mode 100644
index 0000000..f90ce3b
--- /dev/null
+++ b/docker/README-cluster.md
@@ -0,0 +1,143 @@
+# Kylin deployment by docker-compose for CI/CD
+
+## Background
+
+In order to provide Hadoop cluster(s) (without manual deployment) for system-level testing that covers complex features such as read/write separation deployment, we provide a docker-compose based way to make CI/CD easy to achieve.
+
+## Prepare
+
+- Install the latest docker & docker-compose.
+
+- Check that the following ports on the host are available:
+
+    Port       |     Component     |     Comment
+---------------| ----------------- | -----------------
+    7070       |       Kylin       |      All     
+    7071       |       Kylin       |      Job     
+    7072       |       Kylin       |      Query             
+    8088       |       Yarn        |       -    
+    16010      |       HBase       |       -    
+    50070      |       HDFS        |       -            
+
+- Clone source code
+
+```shell 
+git clone
+cd kylin/docker
+```
+
+### How to start Hadoop cluster
+
+- Build and start a single Hadoop 2.8 cluster
+
+```shell
+bash setup_cluster.sh --cluster_mode write \
+    --hadoop_version 2.8.5 --hive_version 1.2.2 \
+    --enable_hbase yes --hbase_version 1.1.2 --enable_ldap no
+```
+
+## Docker Container
+
+#### Docker Containers
+
+- docker images
+
+```shell 
+root@open-source:/home/ubuntu/xiaoxiang.yu/docker# docker images | grep apachekylin
+apachekylin/kylin-client                   hadoop_2.8.5_hive_1.2.2_hbase_1.1.2   728d1cd89f46        12 hours ago        2.47GB
+apachekylin/kylin-hbase-regionserver       hbase_1.1.2                           41d3a6cd15ec        12 hours ago        1.13GB
+apachekylin/kylin-hbase-master             hbase_1.1.2                           848831eda695        12 hours ago        1.13GB
+apachekylin/kylin-hbase-base               hbase_1.1.2                           f6b9e2beb88e        12 hours ago        1.13GB
+apachekylin/kylin-hive                     hive_1.2.2_hadoop_2.8.5               eb8220ea58f0        12 hours ago        1.83GB
+apachekylin/kylin-hadoop-historyserver     hadoop_2.8.5                          f93b54c430f5        12 hours ago        1.63GB
+apachekylin/kylin-hadoop-nodemanager       hadoop_2.8.5                          88a0f4651047        12 hours ago        1.63GB
+apachekylin/kylin-hadoop-resourcemanager   hadoop_2.8.5                          32a58e854b6e        12 hours ago        1.63GB
+apachekylin/kylin-hadoop-datanode          hadoop_2.8.5                          5855d6a0a8d3        12 hours ago        1.63GB
+apachekylin/kylin-hadoop-namenode          hadoop_2.8.5                          4485f9d2beff        12 hours ago        1.63GB
+apachekylin/kylin-hadoop-base              hadoop_2.8.5                          1b1605941562        12 hours ago        1.63GB
+apachekylin/apache-kylin-standalone        3.1.0                                 2ce49ae43b7e        3 months ago        2.56GB
+```
+
+- docker containers
+
+```shell
+root@open-source:/home/ubuntu/xiaoxiang.yu/docker# docker ps
+CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS                             PORTS                                                        NAMES
+4881c9b06eff        apachekylin/kylin-client:hadoop_2.8.5_hive_1.2.2_hbase_1.1.2   "/run_cli.sh"            8 seconds ago       Up 4 seconds                       0.0.0.0:7071->7070/tcp                                       kylin-job
+4faed91e3b52        apachekylin/kylin-client:hadoop_2.8.5_hive_1.2.2_hbase_1.1.2   "/run_cli.sh"            8 seconds ago       Up 5 seconds                       0.0.0.0:7072->7070/tcp                                       kylin-query
+b215230ab964        apachekylin/kylin-client:hadoop_2.8.5_hive_1.2.2_hbase_1.1.2   "/run_cli.sh"            8 seconds ago       Up 5 seconds                       0.0.0.0:7070->7070/tcp                                       kylin-all
+64f77396e9fb        apachekylin/kylin-hbase-regionserver:hbase_1.1.2               "/opt/entrypoint/hba…"   12 seconds ago      Up 8 seconds                       16020/tcp, 16030/tcp                                         write-hbase-regionserver1
+c263387ae9dd        apachekylin/kylin-hbase-regionserver:hbase_1.1.2               "/opt/entrypoint/hba…"   12 seconds ago      Up 10 seconds                      16020/tcp, 16030/tcp                                         write-hbase-regionserver2
+9721df1d412f        apachekylin/kylin-hbase-master:hbase_1.1.2                     "/opt/entrypoint/hba…"   12 seconds ago      Up 9 seconds                       16000/tcp, 0.0.0.0:16010->16010/tcp                          write-hbase-master
+ee859d1706ba        apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5                 "/opt/entrypoint/hiv…"   20 seconds ago      Up 17 seconds                      0.0.0.0:10000->10000/tcp, 10002/tcp                          write-hive-server
+b9ef97438912        apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5                 "/opt/entrypoint/hiv…"   20 seconds ago      Up 16 seconds                      9083/tcp, 10000/tcp, 10002/tcp                               write-hive-metastore
+edf687ecb3f0        mysql:5.7.24                                                   "docker-entrypoint.s…"   23 seconds ago      Up 20 seconds                      0.0.0.0:3306->3306/tcp, 33060/tcp                            metastore-db
+7f63c83dcb63        zookeeper:3.4.10                                               "/docker-entrypoint.…"   25 seconds ago      Up 23 seconds                      2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp                   write-zookeeper
+aaf514d200e0        apachekylin/kylin-hadoop-datanode:hadoop_2.8.5                 "/opt/entrypoint/had…"   28 seconds ago      Up 26 seconds                      50075/tcp                                                    write-datanode1
+6a73601eba35        apachekylin/kylin-hadoop-datanode:hadoop_2.8.5                 "/opt/entrypoint/had…"   33 seconds ago      Up 28 seconds                      50075/tcp                                                    write-datanode3
+934b5e7c8c08        apachekylin/kylin-hadoop-resourcemanager:hadoop_2.8.5          "/opt/entrypoint/had…"   33 seconds ago      Up 26 seconds (health: starting)   0.0.0.0:8088->8088/tcp                                       write-resourcemanager
+6405614c2b06        apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5              "/opt/entrypoint/had…"   33 seconds ago      Up 30 seconds (health: starting)   8042/tcp                                                     write-nodemanager2
+e004fc605295        apachekylin/kylin-hadoop-namenode:hadoop_2.8.5                 "/opt/entrypoint/had…"   33 seconds ago      Up 28 seconds (health: starting)   0.0.0.0:9870->9870/tcp, 8020/tcp, 0.0.0.0:50070->50070/tcp   write-namenode
+743105698b0f        apachekylin/kylin-hadoop-historyserver:hadoop_2.8.5            "/opt/entrypoint/had…"   33 seconds ago      Up 29 seconds (health: starting)   0.0.0.0:8188->8188/tcp                                       write-historyserver
+1b38135aeb2a        apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5              "/opt/entrypoint/had…"   33 seconds ago      Up 31 seconds (health: starting)   8042/tcp                                                     write-nodemanager1
+7f53f5a84533        apachekylin/kylin-hadoop-datanode:hadoop_2.8.5                 "/opt/entrypoint/had…"   33 seconds ago      Up 29 seconds                      50075/tcp                                                    write-datanode2
+``` 
+
+- Edit /etc/hosts so the Web UIs can be reached by hostname
+
+```shell 
+10.1.2.41 write-namenode
+10.1.2.41 write-resourcemanager
+10.1.2.41 write-hbase-master 
+10.1.2.41 write-hive-server
+10.1.2.41 write-hive-metastore
+10.1.2.41 write-zookeeper
+10.1.2.41 metastore-db
+10.1.2.41 kylin-job
+10.1.2.41 kylin-query
+10.1.2.41 kylin-all
+```
+
+#### Hadoop cluster information
+
+-  Support Matrix
+
+Hadoop Version   |  Hive Version |  HBase Version |  Verified
+---------------- | ------------- | -------------- | ----------
+     2.8.5       |     1.2.2     |     1.1.2      |    Yes
+     3.1.4       |     2.3.7     |     2.2.6      | In progress
+
+- Component
+
+   hostname          |                        URL                       |       Tag       |        Comment
+------------------   | ------------------------------------------------ | --------------- | ------------------------
+write-namenode       | http://write-namenode:50070                      |       HDFS      |
+write-datanode1      |
+write-datanode2      |
+write-datanode3      |
+write-resourcemanager| http://write-resourcemanager:8088/cluster        |       YARN      |
+write-nodemanager1   | 
+write-nodemanager2   |
+write-historyserver  |
+write-hbase-master   | http://write-hbase-master:16010                  |       HBase     |
+write-hbase-regionserver1 |
+write-hbase-regionserver2 |
+write-hive-server    |                                                  |       Hive      |
+write-hive-metastore |                                                  |       Hive      |
+write-zookeeper      |                                                  |     Zookeeper   |
+metastore-db         |                                                  |       RDBMS     |
+kylin-job            | http://kylin-all:7071/kylin                      |       Kylin     |
+kylin-query          | http://kylin-all:7072/kylin                      |       Kylin     |
+kylin-all            | http://kylin-all:7070/kylin                      |       Kylin     |
+
+
+## System Testing
+### How to start Kylin
+
+```shell 
+# Copy kylin.tar.gz into /root/xiaoxiang.yu/kylin/docker/docker-compose/others/kylin, then:
+
+cp kylin.tar.gz /root/xiaoxiang.yu/kylin/docker/docker-compose/others/kylin
+tar zxf kylin.tar.gz
+
+```
\ No newline at end of file
diff --git a/docker/README.md b/docker/README-standalone.md
similarity index 80%
copy from docker/README.md
copy to docker/README-standalone.md
index d137c8b..348a74e 100644
--- a/docker/README.md
+++ b/docker/README-standalone.md
@@ -1,21 +1,21 @@
+## Standalone/Self-contained Kylin deployment for learning
 
 In order to allow users to easily try Kylin, and to facilitate developers to verify and debug after modifying the source code. We provide the all-in-one Kylin docker image. In this image, each service that Kylin relies on is properly installed and deployed, including:
 
 - Jdk 1.8
 - Hadoop 2.7.0
 - Hive 1.2.1
-- Spark 2.4.6
-- Zookeeper 3.4.6
+- HBase 1.1.2 (with Zookeeper)
+- Spark 2.3.1
 - Kafka 1.1.1
 - MySQL 5.1.73
-- Maven 3.6.1
 
 ## Quickly try Kylin with pre-built images
 
 We have pushed the Kylin images to the [docker hub](https://hub.docker.com/r/apachekylin/apache-kylin-standalone). You do not need to build the image locally, just pull the image from remote (you can browse docker hub to check the available versions):
 
 ```
-docker pull apachekylin/apache-kylin-standalone:4.0.0-alpha
+docker pull apachekylin/apache-kylin-standalone:3.1.0
 ```
 
 After the pull is successful, execute "sh run_container.sh" or the following command to start the container:
@@ -28,14 +28,17 @@ docker run -d \
 -p 50070:50070 \
 -p 8032:8032 \
 -p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+-p 16010:16010 \
+--name apache-kylin-standalone \
+apachekylin/apache-kylin-standalone:3.1.0
 ```
 
 The following services are automatically started when the container starts: 
 
 - NameNode, DataNode
 - ResourceManager, NodeManager
+- HBase
+- Kafka
 - Kylin
 
 and run automatically `$KYLIN_HOME/bin/sample.sh `, create a kylin_streaming_topic topic in Kafka and continue to send data to this topic. This is to let the users start the container and then experience the batch and streaming way to build the cube and query.
@@ -45,6 +48,7 @@ After the container is started, we can enter the container through the `docker e
 - Kylin Web UI: [http://127.0.0.1:7070/kylin/login](http://127.0.0.1:7070/kylin/login)
 - HDFS NameNode Web UI: [http://127.0.0.1:50070](http://127.0.0.1:50070/)
 - YARN ResourceManager Web UI: [http://127.0.0.1:8088](http://127.0.0.1:8088/)
+- HBase Web UI: [http://127.0.0.1:16010](http://127.0.0.1:16010/)
 
 In the container, the relevant environment variables are as follows: 
 
@@ -52,7 +56,8 @@ In the container, the relevant environment variables are as follows:
 JAVA_HOME=/home/admin/jdk1.8.0_141
 HADOOP_HOME=/home/admin/hadoop-2.7.0
 KAFKA_HOME=/home/admin/kafka_2.11-1.1.1
-SPARK_HOME=/home/admin/spark-2.4.6-bin-hadoop2.7
+SPARK_HOME=/home/admin/spark-2.3.1-bin-hadoop2.6
+HBASE_HOME=/home/admin/hbase-1.1.2
 HIVE_HOME=/home/admin/apache-hive-1.2.1-bin
 ```
 
@@ -60,15 +65,24 @@ After about 1 to 2 minutes, all the services should be started. At the Kylin log
 
 In the "Model" tab, you can click "Build" to build the two sample cubes. After the cubes be built, try some queries in the "Insight" tab.
 
-If you want to login into the Docker container, run "docker ps" to get the container id:
+If you want to log in to the Docker container, run "docker exec -it apache-kylin-standalone bash" to enter it with a bash shell:
+
+```
+> docker exec -it apache-kylin-standalone bash
+[root@c15d10ff6bf1 admin]# ls
+apache-hive-1.2.1-bin                  apache-maven-3.6.1  first_run     hbase-1.1.2   kafka_2.11-1.1.1
+apache-kylin-3.0.0-alpha2-bin-hbase1x  entrypoint.sh       hadoop-2.7.0  jdk1.8.0_141  spark-2.3.1-bin-hadoop2.6
+```
+
+Or you can run "docker ps" to get the container id:
 
 ```
 > docker ps
 CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                NAMES
-c15d10ff6bf1        apachekylin/apache-kylin-standalone:3.0.1   "/home/admin/entrypo…"   55 minutes ago      Up 55 minutes       0.0.0.0:7070->7070/tcp, 0.0.0.0:8032->8032/tcp, 0.0.0.0:8042->8042/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:16010->16010/tcp   romantic_moser
+c15d10ff6bf1        apachekylin/apache-kylin-standalone:3.1.0 "/home/admin/entrypo…"   55 minutes ago      Up 55 minutes       0.0.0.0:7070->7070/tcp, 0.0.0.0:8032->8032/tcp, 0.0.0.0:8042->8042/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:16010->16010/tcp   romantic_moser
 ```
 
-Then run "docker -it <container id> bash" to login it with bash:
+Then run "docker exec -it <container id> bash" to login it with bash:
 
 ```
 > docker exec -it c15d10ff6bf1 bash
@@ -109,7 +123,7 @@ For example, if you made some code change in Kylin, you can make a new binary pa
 The new package is generated in "dist/" folder; Copy it to the "docker" folder:
 
 ```
-cp ./dist/apache-kylin-4.0.0-SNAPSHOT-bin.tar.gz ./docker
+cp ./dist/apache-kylin-3.1.0-SNAPSHOT-bin.tar.gz ./docker
 ```
 
 Use the "Dockerfile_dev" file to build:
diff --git a/docker/README.md b/docker/README.md
index d137c8b..124cceb 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -1,140 +1,7 @@
+## Kylin with docker
 
-In order to allow users to easily try Kylin, and to facilitate developers to verify and debug after modifying the source code. We provide the all-in-one Kylin docker image. In this image, each service that Kylin relies on is properly installed and deployed, including:
+Visit our official docker repositories at https://hub.docker.com/r/apachekylin .
 
-- Jdk 1.8
-- Hadoop 2.7.0
-- Hive 1.2.1
-- Spark 2.4.6
-- Zookeeper 3.4.6
-- Kafka 1.1.1
-- MySQL 5.1.73
-- Maven 3.6.1
-
-## Quickly try Kylin with pre-built images
-
-We have pushed the Kylin images to the [docker hub](https://hub.docker.com/r/apachekylin/apache-kylin-standalone). You do not need to build the image locally, just pull the image from remote (you can browse docker hub to check the available versions):
-
-```
-docker pull apachekylin/apache-kylin-standalone:4.0.0-alpha
-```
-
-After the pull is successful, execute "sh run_container.sh" or the following command to start the container:
-
-```
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
-```
-
-The following services are automatically started when the container starts: 
-
-- NameNode, DataNode
-- ResourceManager, NodeManager
-- Kylin
-
-and run automatically `$KYLIN_HOME/bin/sample.sh `, create a kylin_streaming_topic topic in Kafka and continue to send data to this topic. This is to let the users start the container and then experience the batch and streaming way to build the cube and query.
-
-After the container is started, we can enter the container through the `docker exec` command. Of course, since we have mapped the specified port in the container to the local port, we can open the pages of each service directly in the native browser, such as: 
-
-- Kylin Web UI: [http://127.0.0.1:7070/kylin/login](http://127.0.0.1:7070/kylin/login)
-- HDFS NameNode Web UI: [http://127.0.0.1:50070](http://127.0.0.1:50070/)
-- YARN ResourceManager Web UI: [http://127.0.0.1:8088](http://127.0.0.1:8088/)
-
-In the container, the relevant environment variables are as follows: 
-
-```
-JAVA_HOME=/home/admin/jdk1.8.0_141
-HADOOP_HOME=/home/admin/hadoop-2.7.0
-KAFKA_HOME=/home/admin/kafka_2.11-1.1.1
-SPARK_HOME=/home/admin/spark-2.4.6-bin-hadoop2.7
-HIVE_HOME=/home/admin/apache-hive-1.2.1-bin
-```
-
-After about 1 to 2 minutes, all the services should be started. At the Kylin login page (http://127.0.0.1:7070/kylin), enter ADMIN:KYLIN to login, select the "learn_kylin" project. In the "Model" tab, you should be able to see two sample cubes: "kylin_sales_cube" and "kylin_streaming_cube". If they don't appear, go to the "System" tab, and then click "Reload metadata", they should be loaded.
-
-In the "Model" tab, you can click "Build" to build the two sample cubes. After the cubes be built, try some queries in the "Insight" tab.
-
-If you want to login into the Docker container, run "docker ps" to get the container id:
-
-```
-> docker ps
-CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                NAMES
-c15d10ff6bf1        apachekylin/apache-kylin-standalone:3.0.1   "/home/admin/entrypo…"   55 minutes ago      Up 55 minutes       0.0.0.0:7070->7070/tcp, 0.0.0.0:8032->8032/tcp, 0.0.0.0:8042->8042/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:16010->16010/tcp   romantic_moser
-```
-
-Then run "docker -it <container id> bash" to login it with bash:
-
-```
-> docker exec -it c15d10ff6bf1 bash
-[root@c15d10ff6bf1 admin]# ls
-apache-hive-1.2.1-bin                  apache-maven-3.6.1  first_run     hbase-1.1.2   kafka_2.11-1.1.1
-apache-kylin-3.0.0-alpha2-bin-hbase1x  entrypoint.sh       hadoop-2.7.0  jdk1.8.0_141  spark-2.3.1-bin-hadoop2.6
-```
-
-## Build Docker image in local
-
-You can build the docker image by yourself with the provided Dockerfile. Here we separate the scripts into several files:
-
-- Dockerfile_hadoop: build a Hadoop image with Hadoop/Hive/HBase/Spark/Kafka and other components installed;
-- Dockerfile: based on the Hadoop image, download Kylin from apache website and then start all services.
-- Dockerfile_dev: similar with "Dockerfile", instead of downloading the released version, it copies local built Kylin package to the image.
-
-Others:
-- conf/: the Hadoop/HBase/Hive/Maven configuration files for this docker; Will copy them into the image on 'docker build';
-- entrypoint.sh: the entrypoint script, which will start all the services;
-
-The build is very simple:
-
-```
-./build_image.sh
-```
-The script will build the Hadoop image first, and then build Kylin image based on it. Depends on the network bandwidth, the first time may take a while.
-
-## Customize the Docker image
-
-You can customize these scripts and Dockerfile to make your image.
-
-For example, if you made some code change in Kylin, you can make a new binary package in local with:
-
-```
-./build/scripts/package.sh
-```
-
-The new package is generated in "dist/" folder; Copy it to the "docker" folder:
-
-```
-cp ./dist/apache-kylin-4.0.0-SNAPSHOT-bin.tar.gz ./docker
-```
-
-Use the "Dockerfile_dev" file to build:
-
-```
-docker build -f Dockerfile_dev -t apache-kylin-standalone:test .
-
-```
-
-## Build Docker image for your Hadoop environment
-
-You can run Kylin in Docker with your Hadoop cluster. In this case, you need to build a customized image:
-
-- Use the same version Hadoop components as your cluster;
-- Use your cluster's configuration files (copy to conf/);
-- Modify the "entrypoint.sh", only start Kylin, no need to start other Hadoop services;
-
-
-## Container resource recommendation
-
-In order to allow Kylin to build the cube smoothly, the memory resource we configured for Yarn NodeManager is 6G, plus the memory occupied by each service, please ensure that the memory of the container is not less than 8G, so as to avoid errors due to insufficient memory.
-
-For the resource setting method for the container, please refer to:
-
-- Mac user: <https://docs.docker.com/docker-for-mac/#advanced>
-- Linux user: <https://docs.docker.com/config/containers/resource_constraints/#memory>
-
----
+- For a quick start or a learning environment, use the [Standalone container deployment](./README-standalone.md).
+- For CI/CD and system testing, use the [Docker-compose deployment](./README-cluster.md).
+- For production deployment, use [Kylin on Kubernetes](../kubernetes).
diff --git a/docker/build_cluster_images.sh b/docker/build_cluster_images.sh
index b2aae80..d774434 100644
--- a/docker/build_cluster_images.sh
+++ b/docker/build_cluster_images.sh
@@ -21,44 +21,38 @@ WS_ROOT=`dirname $SCRIPT_PATH`
 
 source ${SCRIPT_PATH}/header.sh
 
-#docker build -t apachekylin/kylin-metastore:mysql_5.6.49 ./kylin/metastore-db
-#
-
-docker build -t apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/base
-docker build -t apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_WEBHDFS_PORT=${HADOOP_WEBHDFS_PORT} ./dockerfile/cluster/namenode
-docker build -t apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_DN_PORT=${HADOOP_DN_PORT} ./dockerfile/cluster/datanode
-docker build -t apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/resourcemanager
-docker build -t apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/nodemanager
-docker build -t apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/historyserver
-
-docker build -t apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} \
---build-arg HIVE_VERSION=${HIVE_VERSION} \
---build-arg HADOOP_VERSION=${HADOOP_VERSION} \
-./dockerfile/cluster/hive
+docker build -t apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/base
+docker build -t apachekylin/kylin-ci-hadoop-namenode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_WEBHDFS_PORT=${HADOOP_WEBHDFS_PORT} ./dockerfile/cluster/namenode
+docker build -t apachekylin/kylin-ci-hadoop-datanode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} --build-arg HADOOP_DN_PORT=${HADOOP_DN_PORT} ./dockerfile/cluster/datanode
+docker build -t apachekylin/kylin-ci-hadoop-resourcemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/resourcemanager
+docker build -t apachekylin/kylin-ci-hadoop-nodemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/nodemanager
+docker build -t apachekylin/kylin-ci-hadoop-historyserver:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/historyserver
+
+docker build -t apachekylin/kylin-ci-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} \
+  --build-arg HIVE_VERSION=${HIVE_VERSION} \
+  --build-arg HADOOP_VERSION=${HADOOP_VERSION} \
+  ./dockerfile/cluster/hive
 
 if [ $ENABLE_HBASE == "yes" ]; then
-  docker build -t apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hbase
-  docker build -t apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hmaster
-  docker build -t apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hregionserver
+  docker build -t apachekylin/kylin-ci-hbase-base:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hbase
+  docker build -t apachekylin/kylin-ci-hbase-master:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hmaster
+  docker build -t apachekylin/kylin-ci-hbase-regionserver:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hregionserver
 fi
 
 if [ $ENABLE_KERBEROS == "yes" ]; then
-  docker build -t apachekylin/kylin-kerberos:latest ./dockerfile/cluster/kerberos
+  docker build -t apachekylin/kylin-ci-kerberos:latest ./dockerfile/cluster/kerberos
 fi
 
 if [ $ENABLE_LDAP == "yes" ]; then
   docker pull osixia/openldap:1.3.0
 fi
 
-#if [ $ENABLE_KAFKA == "yes" ]; then
-#  docker pull bitnami/kafka:2.0.0
-#fi
-docker pull bitnami/kafka:2.0.0
-
-docker pull mysql:5.6.49
+if [ $ENABLE_KAFKA == "yes" ]; then
+  docker pull bitnami/kafka:2.0.0
+fi
 
-docker build -t apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION} \
---build-arg HIVE_VERSION=${HIVE_VERSION} \
---build-arg HADOOP_VERSION=${HADOOP_VERSION} \
---build-arg HBASE_VERSION=${HBASE_VERSION} \
-./dockerfile/cluster/client
+docker build -t apachekylin/kylin-ci-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION} \
+  --build-arg HIVE_VERSION=${HIVE_VERSION} \
+  --build-arg HADOOP_VERSION=${HADOOP_VERSION} \
+  --build-arg HBASE_VERSION=${HBASE_VERSION} \
+  ./dockerfile/cluster/client
diff --git a/docker/docker-compose/others/client-write-read.env b/docker/docker-compose/others/client-write-read.env
index c61e986..1a9ecad 100644
--- a/docker/docker-compose/others/client-write-read.env
+++ b/docker/docker-compose/others/client-write-read.env
@@ -26,8 +26,8 @@ YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
 YARN_CONF_yarn_timeline___service_hostname=write-historyserver
 YARN_CONF_mapreduce_map_output_compress=true
 YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
-YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
-YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_resource_memory___mb=10240
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
diff --git a/docker/docker-compose/others/client-write.env b/docker/docker-compose/others/client-write.env
index edad60b..d47815c 100644
--- a/docker/docker-compose/others/client-write.env
+++ b/docker/docker-compose/others/client-write.env
@@ -26,8 +26,8 @@ YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
 YARN_CONF_yarn_timeline___service_hostname=write-historyserver
 YARN_CONF_mapreduce_map_output_compress=true
 YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
-YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
-YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_resource_memory___mb=10240
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
diff --git a/docker/docker-compose/others/docker-compose-kylin-write-read.yml b/docker/docker-compose/others/docker-compose-kylin-write-read.yml
index cb67b06..0804db0 100644
--- a/docker/docker-compose/others/docker-compose-kylin-write-read.yml
+++ b/docker/docker-compose/others/docker-compose-kylin-write-read.yml
@@ -6,17 +6,11 @@ services:
     container_name: kylin-all
     hostname: kylin-all
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-all:/opt/kylin/kylin-all
+      - ./kylin/kylin-all:/opt/kylin/
     env_file:
       - client-write-read.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-all
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
@@ -27,17 +21,11 @@ services:
     container_name: kylin-job
     hostname: kylin-job
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-job:/opt/kylin/kylin-job
+      - ./kylin/kylin-job:/opt/kylin/
     env_file:
       - client-write-read.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-job
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
@@ -48,17 +36,11 @@ services:
     container_name: kylin-query
     hostname: kylin-query
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-query:/opt/kylin/kylin-query
+      - ./kylin/kylin-query:/opt/kylin/
     env_file:
       - client-write-read.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-query
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
diff --git a/docker/docker-compose/others/docker-compose-kylin-write.yml b/docker/docker-compose/others/docker-compose-kylin-write.yml
index a78b88a..e0c1c81 100644
--- a/docker/docker-compose/others/docker-compose-kylin-write.yml
+++ b/docker/docker-compose/others/docker-compose-kylin-write.yml
@@ -3,20 +3,16 @@ version: "3.3"
 services:
   kylin-all:
     image: ${CLIENT_IMAGETAG}
+    labels:
+      org.apache.kylin.description: "This is the All role in Kylin."
     container_name: kylin-all
     hostname: kylin-all
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-all:/opt/kylin/kylin-all
+      - ./kylin/kylin-all:/opt/kylin
     env_file:
       - client-write.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-all
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
@@ -24,20 +20,16 @@ services:
 
   kylin-job:
     image: ${CLIENT_IMAGETAG}
+    labels:
+      org.apache.kylin.description: "This is the Job role in Kylin."
     container_name: kylin-job
     hostname: kylin-job
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-job:/opt/kylin/kylin-job
+      - ./kylin/kylin-job:/opt/kylin/
     env_file:
       - client-write.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-job
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
@@ -45,20 +37,16 @@ services:
 
   kylin-query:
     image: ${CLIENT_IMAGETAG}
+    labels:
+      org.apache.kylin.description: "This is the Query role in Kylin."
     container_name: kylin-query
     hostname: kylin-query
     volumes:
-      - ./conf/hadoop:/etc/hadoop/conf
-      - ./conf/hbase:/etc/hbase/conf
-      - ./conf/hive:/etc/hive/conf
-      - ./kylin/kylin-query:/opt/kylin/kylin-query
+      - ./kylin/kylin-query:/opt/kylin/
     env_file:
       - client-write.env
     environment:
-      HADOOP_CONF_DIR: /etc/hadoop/conf
-      HIVE_CONF_DIR: /etc/hive/conf
-      HBASE_CONF_DIR: /etc/hbase/conf
-      KYLIN_HOME: /opt/kylin/kylin-query
+      KYLIN_HOME: /opt/kylin/
     networks:
       - write_kylin
     ports:
diff --git a/docker/docker-compose/others/docker-compose-metastore.yml b/docker/docker-compose/others/docker-compose-metastore.yml
index a36df07..e237f33 100644
--- a/docker/docker-compose/others/docker-compose-metastore.yml
+++ b/docker/docker-compose/others/docker-compose-metastore.yml
@@ -2,8 +2,6 @@ version: "3.3"
 
 services:
   metastore-db:
-#    image: mysql:5.6.49
-#    image: mysql:8.0.11
     image: mysql:5.7.24
     container_name: metastore-db
     hostname: metastore-db
diff --git a/docker/docker-compose/others/kylin/README.md b/docker/docker-compose/others/kylin/README.md
new file mode 100644
index 0000000..fc03d17
--- /dev/null
+++ b/docker/docker-compose/others/kylin/README.md
@@ -0,0 +1,2 @@
+
+Please put Kylin here.
\ No newline at end of file
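
[Editor's note] The compose files above now mount ./kylin/kylin-all (and kylin-job, kylin-query) directly as /opt/kylin with KYLIN_HOME=/opt/kylin/, so an unpacked Kylin binary has to be placed in those directories before the containers start. A sketch of the expected layout, with the version left as an illustrative placeholder:

    # Run under docker/docker-compose/others/ (paths assumed from the volume mounts above)
    tar -zxf apache-kylin-<VERSION>-bin.tar.gz
    cp -r apache-kylin-<VERSION>-bin/* kylin/kylin-all/
    cp -r apache-kylin-<VERSION>-bin/* kylin/kylin-job/
    cp -r apache-kylin-<VERSION>-bin/* kylin/kylin-query/
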
diff --git a/docker/docker-compose/read/docker-compose-hadoop.yml b/docker/docker-compose/read/docker-compose-hadoop.yml
index a0e2a66..69888c2 100644
--- a/docker/docker-compose/read/docker-compose-hadoop.yml
+++ b/docker/docker-compose/read/docker-compose-hadoop.yml
@@ -2,7 +2,7 @@ version: "3.3"
 
 services:
   read-namenode:
-    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-hadoop-namenode:hadoop_2.8.5}
+    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-namenode:hadoop_2.8.5}
     container_name: read-namenode
     hostname: read-namenode
     volumes:
@@ -21,7 +21,7 @@ services:
       - 9871:9870
 
   read-datanode1:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: read-datanode1
     hostname: read-datanode1
     volumes:
@@ -39,7 +39,7 @@ services:
       - ${HADOOP_DN_PORT:-50075}
 
   read-datanode2:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: read-datanode2
     hostname: read-datanode2
     volumes:
@@ -55,7 +55,7 @@ services:
       - ${HADOOP_DN_PORT:-50075}
 
   read-datanode3:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: read-datanode3
     hostname: read-datanode3
     volumes:
@@ -71,7 +71,7 @@ services:
       - ${HADOOP_DN_PORT:-50075}
 
   read-resourcemanager:
-    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-resourcemanager:hadoop_2.8.5}
+    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-resourcemanager:hadoop_2.8.5}
     container_name: read-resourcemanager
     hostname: read-resourcemanager
     environment:
@@ -85,7 +85,7 @@ services:
       - 8089:8088
 
   read-nodemanager1:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-nodemanager:hadoop_2.8.5}
     container_name: read-nodemanager1
     hostname: read-nodemanager1
     environment:
@@ -97,7 +97,7 @@ services:
       - write_kylin
 
   read-nodemanager2:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-nodemanager:hadoop_2.8.5}
     container_name: read-nodemanager2
     hostname: read-nodemanager2
     environment:
@@ -109,7 +109,7 @@ services:
       - write_kylin
 
   read-historyserver:
-    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-hadoop-historyserver:hadoop_2.8.5}
+    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-ci-hadoop-historyserver:hadoop_2.8.5}
     container_name: read-historyserver
     hostname: read-historyserver
     volumes:
diff --git a/docker/docker-compose/read/docker-compose-hbase.yml b/docker/docker-compose/read/docker-compose-hbase.yml
index ac4048b..5f158cb 100644
--- a/docker/docker-compose/read/docker-compose-hbase.yml
+++ b/docker/docker-compose/read/docker-compose-hbase.yml
@@ -2,7 +2,7 @@ version: "3.3"
 
 services:
   read-hbase-master:
-    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-hbase-master:hbase1.1.2}
+    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-ci-hbase-master:hbase1.1.2}
     container_name: read-hbase-master
     hostname: read-hbase-master
     env_file:
@@ -15,7 +15,7 @@ services:
       - 16010:16010
 
   read-hbase-regionserver1:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-ci-hbase-regionserver:hbase_1.1.2}
     container_name: read-hbase-regionserver1
     hostname: read-hbase-regionserver1
     env_file:
@@ -27,7 +27,7 @@ services:
       - write_kylin
 
   read-hbase-regionserver2:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-ci-hbase-regionserver:hbase_1.1.2}
     container_name: read-hbase-regionserver2
     hostname: read-hbase-regionserver2
     env_file:
diff --git a/docker/docker-compose/read/read-hadoop.env b/docker/docker-compose/read/read-hadoop.env
index 9c0086d..5290caa 100644
--- a/docker/docker-compose/read/read-hadoop.env
+++ b/docker/docker-compose/read/read-hadoop.env
@@ -26,8 +26,8 @@ YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
 YARN_CONF_yarn_timeline___service_hostname=read-historyserver
 YARN_CONF_mapreduce_map_output_compress=true
 YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
-YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
-YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_resource_memory___mb=10240
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=5
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
diff --git a/docker/docker-compose/write/conf/hive/hive-site.xml b/docker/docker-compose/write/conf/hive/hive-site.xml
index c60fe36..ab7779b 100644
--- a/docker/docker-compose/write/conf/hive/hive-site.xml
+++ b/docker/docker-compose/write/conf/hive/hive-site.xml
@@ -1,5 +1,6 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
@@ -14,11 +15,12 @@
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
---><configuration>
+-->
+<configuration>
     <property><name>hive.metastore.uris</name><value>thrift://write-hive-metastore:9083</value></property>
     <property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
-    <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:postgresql://write-hive-metastore-postgresql/metastore</value></property>
-    <property><name>javax.jdo.option.ConnectionDriverName</name><value>org.postgresql.Driver</value></property>
+    <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://metastore-db/metastore?useSSL=false\&amp;allowPublicKeyRetrieval=true</value></property>
+    <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.cj.jdbc.Driver</value></property>
     <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
     <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
 </configuration>
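
[Editor's note] With the Hive metastore switched from PostgreSQL to the MySQL service defined in docker-compose-metastore.yml, a quick way to confirm the new JDBC settings is to query the metastore schema and HiveServer2 directly. The container names, credentials, and port below are the ones appearing in this patch; treat this as an illustrative check, not part of the change.

    # Check that the metastore schema exists in the MySQL container
    docker exec -it metastore-db mysql -uhive -phive -e "USE metastore; SHOW TABLES;"
    # Check that HiveServer2 can reach the metastore
    docker exec -it write-hive-server beeline -u jdbc:hive2://localhost:10000 -n hive -e "SHOW DATABASES;"
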
diff --git a/docker/docker-compose/write/docker-compose-hadoop.yml b/docker/docker-compose/write/docker-compose-hadoop.yml
index 4286cfc..8c75f37 100644
--- a/docker/docker-compose/write/docker-compose-hadoop.yml
+++ b/docker/docker-compose/write/docker-compose-hadoop.yml
@@ -2,7 +2,7 @@ version: "3.3"
 
 services:
   write-namenode:
-    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-hadoop-namenode:hadoop_2.8.5}
+    image: ${HADOOP_NAMENODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-namenode:hadoop_2.8.5}
     container_name: write-namenode
     hostname: write-namenode
     volumes:
@@ -21,7 +21,7 @@ services:
       - 9870:9870
 
   write-datanode1:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: write-datanode1
     hostname: write-datanode1
     volumes:
@@ -33,13 +33,11 @@ services:
       - write-hadoop.env
     networks:
       - kylin
-    links:
-      - write-namenode
     expose:
       - ${HADOOP_DN_PORT:-50075}
 
   write-datanode2:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: write-datanode2
     hostname: write-datanode2
     volumes:
@@ -55,7 +53,7 @@ services:
       - ${HADOOP_DN_PORT:-50075}
 
   write-datanode3:
-    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-hadoop-datanode:hadoop_2.8.5}
+    image: ${HADOOP_DATANODE_IMAGETAG:-apachekylin/kylin-ci-hadoop-datanode:hadoop_2.8.5}
     container_name: write-datanode3
     hostname: write-datanode3
     volumes:
@@ -71,7 +69,7 @@ services:
       - ${HADOOP_DN_PORT:-50075}
 
   write-resourcemanager:
-    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-resourcemanager:hadoop_2.8.5}
+    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-resourcemanager:hadoop_2.8.5}
     container_name: write-resourcemanager
     hostname: write-resourcemanager
     environment:
@@ -85,7 +83,7 @@ services:
       - 8088:8088
 
   write-nodemanager1:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-nodemanager:hadoop_2.8.5}
     container_name: write-nodemanager1
     hostname: write-nodemanager1
     environment:
@@ -95,9 +93,11 @@ services:
       - write-hadoop.env
     networks:
       - kylin
+    ports:
+      - 8044:8042
 
   write-nodemanager2:
-    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-hadoop-nodemanager:hadoop_2.8.5}
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-apachekylin/kylin-ci-hadoop-nodemanager:hadoop_2.8.5}
     container_name: write-nodemanager2
     hostname: write-nodemanager2
     environment:
@@ -107,9 +107,11 @@ services:
       - write-hadoop.env
     networks:
       - kylin
+    ports:
+      - 8043:8042
 
   write-historyserver:
-    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-hadoop-historyserver:hadoop_2.8.5}
+    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-apachekylin/kylin-ci-hadoop-historyserver:hadoop_2.8.5}
     container_name: write-historyserver
     hostname: write-historyserver
     volumes:
diff --git a/docker/docker-compose/write/docker-compose-hbase.yml b/docker/docker-compose/write/docker-compose-hbase.yml
index d95f32b..5539f9d 100644
--- a/docker/docker-compose/write/docker-compose-hbase.yml
+++ b/docker/docker-compose/write/docker-compose-hbase.yml
@@ -2,7 +2,7 @@ version: "3.3"
 
 services:
   write-hbase-master:
-    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-hbase-master:hbase1.1.2}
+    image: ${HBASE_MASTER_IMAGETAG:-apachekylin/kylin-ci-hbase-master:hbase1.1.2}
     container_name: write-hbase-master
     hostname: write-hbase-master
     env_file:
@@ -15,7 +15,7 @@ services:
       - 16010:16010
 
   write-hbase-regionserver1:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-ci-hbase-regionserver:hbase_1.1.2}
     container_name: write-hbase-regionserver1
     hostname: write-hbase-regionserver1
     env_file:
@@ -27,7 +27,7 @@ services:
       - write_kylin
 
   write-hbase-regionserver2:
-    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-hbase-regionserver:hbase_1.1.2}
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-apachekylin/kylin-ci-hbase-regionserver:hbase_1.1.2}
     container_name: write-hbase-regionserver2
     hostname: write-hbase-regionserver2
     env_file:
diff --git a/docker/docker-compose/write/docker-compose-hive.yml b/docker/docker-compose/write/docker-compose-hive.yml
index 9b94a34..54459ff 100644
--- a/docker/docker-compose/write/docker-compose-hive.yml
+++ b/docker/docker-compose/write/docker-compose-hive.yml
@@ -2,7 +2,7 @@ version: "3.3"
 
 services:
   write-hive-server:
-    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-ci-hive:hive_1.2.2_hadoop_2.8.5}
     container_name: write-hive-server
     hostname: write-hive-server
     env_file:
@@ -17,7 +17,7 @@ services:
       - 10000:10000
 
   write-hive-metastore:
-    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-ci-hive:hive_1.2.2_hadoop_2.8.5}
     container_name: write-hive-metastore
     hostname: write-hive-metastore
     env_file:
diff --git a/docker/docker-compose/write/write-hadoop.env b/docker/docker-compose/write/write-hadoop.env
index ef4429a..670756f 100644
--- a/docker/docker-compose/write/write-hadoop.env
+++ b/docker/docker-compose/write/write-hadoop.env
@@ -26,8 +26,8 @@ YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
 YARN_CONF_yarn_timeline___service_hostname=write-historyserver
 YARN_CONF_mapreduce_map_output_compress=true
 YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
-YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
-YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_resource_memory___mb=10240
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
diff --git a/docker/dockerfile/cluster/base/Dockerfile b/docker/dockerfile/cluster/base/Dockerfile
index 8cf5ff0..ebfe227 100644
--- a/docker/dockerfile/cluster/base/Dockerfile
+++ b/docker/dockerfile/cluster/base/Dockerfile
@@ -38,17 +38,6 @@ RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2
     && tar -zxvf /opt/jdk-8u141-linux-x64.tar.gz -C /opt/ \
     && rm -f /opt/jdk-8u141-linux-x64.tar.gz
 
-# use buildkit
-#IF $INSTALL_FROM=="net"
-#RUN set -x \
-#    && echo "Fetch URL2 is : ${HADOOP_URL}" \
-#    && curl -fSL "${HADOOP_URL}" -o /tmp/hadoop.tar.gz \
-#    && curl -fSL "${HADOOP_URL}.asc" -o /tmp/hadoop.tar.gz.asc \
-#ELSE IF $INSTALL_FROM=="local"
-#COPY ${PACKAGE_PATH}hadoop-${HADOOP_VERSION}.tar.gz /tmp/hadoop.tar.gz
-#COPY ${PACKAGE_PATH}hadoop-${HADOOP_VERSION}.tar.gz.asc /tmp/hadoop.tar.gz.asc
-#DONE
-
 RUN set -x \
     && echo "Fetch URL2 is : ${HADOOP_URL}" \
     && curl -fSL "${HADOOP_URL}" -o /tmp/hadoop.tar.gz \
@@ -57,13 +46,14 @@ RUN set -x \
 RUN set -x \
     && tar -xvf /tmp/hadoop.tar.gz -C /opt/ \
     && rm /tmp/hadoop.tar.gz* \
-    && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
-    && if [ -e "/etc/hadoop/mapred-site.xml.template" ]; then cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml ;fi \
+    && mkdir -p /etc/hadoop/conf \
+    && cp -r /opt/hadoop-$HADOOP_VERSION/etc/hadoop/* /etc/hadoop/conf \
+    && if [ -e "/etc/hadoop/conf/mapred-site.xml.template" ]; then cp /etc/hadoop/conf/mapred-site.xml.template /etc/hadoop/conf/mapred-site.xml ;fi \
     && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs \
     && mkdir /hadoop-data
 
 ENV HADOOP_PREFIX=/opt/hadoop-$HADOOP_VERSION
-ENV HADOOP_CONF_DIR=/etc/hadoop
+ENV HADOOP_CONF_DIR=/etc/hadoop/conf
 ENV MULTIHOMED_NETWORK=1
 ENV HADOOP_HOME=${HADOOP_PREFIX}
 ENV HADOOP_INSTALL=${HADOOP_HOME}
@@ -74,5 +64,4 @@ ENV PATH $JAVA_HOME/bin:/usr/bin:/bin:$HADOOP_PREFIX/bin/:$PATH
 ADD entrypoint.sh /opt/entrypoint/hadoop/entrypoint.sh
 RUN chmod a+x /opt/entrypoint/hadoop/entrypoint.sh
 
-ENTRYPOINT ["/opt/entrypoint/hadoop/entrypoint.sh"]
-
+ENTRYPOINT ["/opt/entrypoint/hadoop/entrypoint.sh"]
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/base/entrypoint.sh b/docker/dockerfile/cluster/base/entrypoint.sh
index 3479844..2ecb8a2 100644
--- a/docker/dockerfile/cluster/base/entrypoint.sh
+++ b/docker/dockerfile/cluster/base/entrypoint.sh
@@ -51,46 +51,46 @@ function configure() {
         var="${envPrefix}_${c}"
         value=${!var}
         echo " - Setting $name=$value"
-        addProperty /etc/hadoop/$module-site.xml $name "$value"
+        addProperty /etc/hadoop/conf/$module-site.xml $name "$value"
     done
 }
 
-configure /etc/hadoop/core-site.xml core CORE_CONF
-configure /etc/hadoop/hdfs-site.xml hdfs HDFS_CONF
-configure /etc/hadoop/yarn-site.xml yarn YARN_CONF
-configure /etc/hadoop/httpfs-site.xml httpfs HTTPFS_CONF
-configure /etc/hadoop/kms-site.xml kms KMS_CONF
+configure /etc/hadoop/conf/core-site.xml core CORE_CONF
+configure /etc/hadoop/conf/hdfs-site.xml hdfs HDFS_CONF
+configure /etc/hadoop/conf/yarn-site.xml yarn YARN_CONF
+configure /etc/hadoop/conf/httpfs-site.xml httpfs HTTPFS_CONF
+configure /etc/hadoop/conf/kms-site.xml kms KMS_CONF
 
 if [ "$MULTIHOMED_NETWORK" = "1" ]; then
     echo "Configuring for multihomed network"
 
     # HDFS
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.client.use.datanode.hostname true
-    addProperty /etc/hadoop/hdfs-site.xml dfs.datanode.use.datanode.hostname true
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.client.use.datanode.hostname true
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.datanode.use.datanode.hostname true
 
     # YARN
-    addProperty /etc/hadoop/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
 
     # MAPRED
-    addProperty /etc/hadoop/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
 fi
 
 if [ -n "$GANGLIA_HOST" ]; then
-    mv /etc/hadoop/hadoop-metrics.properties /etc/hadoop/hadoop-metrics.properties.orig
-    mv /etc/hadoop/hadoop-metrics2.properties /etc/hadoop/hadoop-metrics2.properties.orig
+    mv /etc/hadoop/conf/hadoop-metrics.properties /etc/hadoop/conf/hadoop-metrics.properties.orig
+    mv /etc/hadoop/conf/hadoop-metrics2.properties /etc/hadoop/conf/hadoop-metrics2.properties.orig
 
     for module in mapred jvm rpc ugi; do
         echo "$module.class=org.apache.hadoop.metrics.ganglia.GangliaContext31"
         echo "$module.period=10"
         echo "$module.servers=$GANGLIA_HOST:8649"
-    done > /etc/hadoop/hadoop-metrics.properties
+    done > /etc/hadoop/conf/hadoop-metrics.properties
 
     for module in namenode datanode resourcemanager nodemanager mrappmaster jobhistoryserver; do
         echo "$module.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31"
@@ -99,7 +99,7 @@ if [ -n "$GANGLIA_HOST" ]; then
         echo "$module.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both"
         echo "$module.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40"
         echo "$module.sink.ganglia.servers=$GANGLIA_HOST:8649"
-    done > /etc/hadoop/hadoop-metrics2.properties
+    done > /etc/hadoop/conf/hadoop-metrics2.properties
 fi
 
 function wait_for_it()
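
[Editor's note] The configure/addProperty helpers above are what turn the *_CONF_* variables from the .env files in this patch into Hadoop site properties under /etc/hadoop/conf. The underscore encoding is not spelled out in this hunk, so the snippet below is only a sketch of the convention the env files follow: triple underscore becomes a dash, double underscore an underscore, single underscore a dot.

    # Illustrative translation of one env-file key into a property name
    name="yarn_nodemanager_resource_memory___mb"
    echo "$name" | sed -e 's/___/-/g' -e 's/__/@/g' -e 's/_/./g' -e 's/@/_/g'
    # -> yarn.nodemanager.resource.memory-mb, written into /etc/hadoop/conf/yarn-site.xml
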
diff --git a/docker/dockerfile/cluster/client/Dockerfile b/docker/dockerfile/cluster/client/Dockerfile
index 48008c1..43c935e 100644
--- a/docker/dockerfile/cluster/client/Dockerfile
+++ b/docker/dockerfile/cluster/client/Dockerfile
@@ -24,13 +24,13 @@ ARG KAFKA_VERSION=2.0.0
 ARG SPARK_VERSION=2.3.1
 ARG SPARK_HADOOP_VERSION=2.6
 
-FROM apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} AS hive
+FROM apachekylin/kylin-ci-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} AS hive
 ENV JAVA_VERSION ${JAVA_VERSION}
 ENV HADOOP_VERSION ${HADOOP_VERSION}
 ENV HIVE_VERSION ${HIVE_VERSION}
 
 ARG HBASE_VERSION=1.1.2
-FROM apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} AS hbase
+FROM apachekylin/kylin-ci-hbase-master:hbase_${HBASE_VERSION} AS hbase
 ENV HBASE_VERSION ${HBASE_VERSION}
 
 
@@ -52,8 +52,8 @@ ARG HIVE_VERSION=1.2.1
 ARG HBASE_VERSION=1.1.2
 ARG ZOOKEEPER_VERSION=3.4.10
 ARG KAFKA_VERSION=2.0.0
-ARG SPARK_VERSION=2.3.1
-ARG SPARK_HADOOP_VERSION=2.6
+ARG SPARK_VERSION=2.4.6
+ARG SPARK_HADOOP_VERSION=2.7
 
 ENV JAVA_VERSION ${JAVA_VERSION}
 ENV HADOOP_VERSION ${HADOOP_VERSION}
@@ -95,24 +95,24 @@ RUN chmod a+x /opt/entrypoint/kafka/entrypoint.sh
 
 
 RUN set -x \
-    && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
-    && if [ -e "/etc/hadoop/mapred-site.xml.template" ]; then cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml ;fi \
+    && mkdir -p /etc/hadoop/conf \
+    && mkdir -p /etc/hbase/conf \
+    && cp -r /opt/hadoop-$HADOOP_VERSION/etc/hadoop/* /etc/hadoop/conf \
+    && cp -r /opt/hbase-$HBASE_VERSION/conf/* /etc/hbase/conf \
+    && if [ -e "/etc/hadoop/conf/mapred-site.xml.template" ]; then cp /etc/hadoop/conf/mapred-site.xml.template /etc/hadoop/conf/mapred-site.xml ;fi \
     && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs
 
-RUN ln -s /opt/hbase-$HBASE_VERSION/conf /etc/hbase
-
-
 ENV JAVA_HOME=/opt/${JAVA_VERSION}
 
 ENV HADOOP_PREFIX=/opt/hadoop-$HADOOP_VERSION
-ENV HADOOP_CONF_DIR=/etc/hadoop
+ENV HADOOP_CONF_DIR=/etc/hadoop/conf
 ENV HADOOP_HOME=${HADOOP_PREFIX}
 ENV HADOOP_INSTALL=${HADOOP_HOME}
 
 ENV HIVE_HOME=/opt/hive
 
 ENV HBASE_PREFIX=/opt/hbase-$HBASE_VERSION
-ENV HBASE_CONF_DIR=/etc/hbase
+ENV HBASE_CONF_DIR=/etc/hbase/conf
 ENV HBASE_HOME=${HBASE_PREFIX}
 
 
diff --git a/docker/dockerfile/cluster/client/entrypoint.sh b/docker/dockerfile/cluster/client/entrypoint.sh
index dddc072..7a693aa 100644
--- a/docker/dockerfile/cluster/client/entrypoint.sh
+++ b/docker/dockerfile/cluster/client/entrypoint.sh
@@ -1,7 +1,3 @@
 #!/bin/bash
 
-/opt/entrypoint/hadoop/entrypoint.sh
-/opt/entrypoint/hive/entrypoint.sh
-/opt/entrypoint/hbase/entrypoint.sh
-#/opt/entrypoint/zookeeper/entrypoint.sh
-#/opt/entrypoint/kafka/entrypoint.sh
+
diff --git a/docker/dockerfile/cluster/client/run_cli.sh b/docker/dockerfile/cluster/client/run_cli.sh
index 371c3e1..fcdd71c 100644
--- a/docker/dockerfile/cluster/client/run_cli.sh
+++ b/docker/dockerfile/cluster/client/run_cli.sh
@@ -4,7 +4,13 @@
 /opt/entrypoint/hive/entrypoint.sh
 /opt/entrypoint/hbase/entrypoint.sh
 
+sleep 180
+
+cd $KYLIN_HOME
+sh bin/sample.sh
+sh bin/kylin.sh start
+
 while :
 do
-    sleep 1000
+    sleep 100
 done
\ No newline at end of file
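
[Editor's note] The new run_cli.sh waits a fixed 180 seconds before loading the sample cube and starting Kylin. A readiness poll could replace the fixed sleep; the sketch below is purely illustrative and not part of the patch.

    # Wait until HDFS has left safe mode and YARN answers, then proceed
    until hdfs dfsadmin -safemode get 2>/dev/null | grep -q 'OFF'; do
        echo "Waiting for HDFS ..."
        sleep 10
    done
    until yarn node -list >/dev/null 2>&1; do
        echo "Waiting for YARN ..."
        sleep 10
    done
    cd $KYLIN_HOME && sh bin/sample.sh && sh bin/kylin.sh start
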
diff --git a/docker/dockerfile/cluster/datanode/Dockerfile b/docker/dockerfile/cluster/datanode/Dockerfile
index 54bbc10..6dcb771 100644
--- a/docker/dockerfile/cluster/datanode/Dockerfile
+++ b/docker/dockerfile/cluster/datanode/Dockerfile
@@ -17,7 +17,7 @@
 
 ARG HADOOP_VERSION=2.8.5
 ARG HADOOP_DN_PORT=50075
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 ENV HADOOP_DN_PORT ${HADOOP_DN_PORT}
 
diff --git a/docker/dockerfile/cluster/hbase/Dockerfile b/docker/dockerfile/cluster/hbase/Dockerfile
index 9b92d56..22daf45 100644
--- a/docker/dockerfile/cluster/hbase/Dockerfile
+++ b/docker/dockerfile/cluster/hbase/Dockerfile
@@ -41,14 +41,15 @@ RUN set -x \
     && tar -xvf /tmp/hbase.tar.gz -C /opt/ \
     && rm /tmp/hbase.tar.gz*
 
-RUN ln -s /opt/hbase-$HBASE_VERSION/conf /etc/hbase
-RUN mkdir /opt/hbase-$HBASE_VERSION/logs
+RUN mkdir -p /etc/hbase/conf \
+    && cp -r /opt/hbase-$HBASE_VERSION/conf/* /etc/hbase/conf \
+    && mkdir /opt/hbase-$HBASE_VERSION/logs
 
 RUN mkdir /hadoop-data
 
 ENV HBASE_PREFIX=/opt/hbase-$HBASE_VERSION
 ENV HBASE_HOME=${HBASE_PREFIX}
-ENV HBASE_CONF_DIR=/etc/hbase
+ENV HBASE_CONF_DIR=/etc/hbase/conf
 
 ENV USER=root
 ENV PATH $JAVA_HOME/bin:$HBASE_PREFIX/bin/:$PATH
diff --git a/docker/dockerfile/cluster/hbase/entrypoint.sh b/docker/dockerfile/cluster/hbase/entrypoint.sh
index 5aea8d9..661bd61 100644
--- a/docker/dockerfile/cluster/hbase/entrypoint.sh
+++ b/docker/dockerfile/cluster/hbase/entrypoint.sh
@@ -39,7 +39,7 @@ function configure() {
         var="${envPrefix}_${c}"
         value=${!var}
         echo " - Setting $name=$value"
-        addProperty /etc/hbase/$module-site.xml $name "$value"
+        addProperty /etc/hbase/conf/$module-site.xml $name "$value"
     done
 }
 
diff --git a/docker/dockerfile/cluster/historyserver/Dockerfile b/docker/dockerfile/cluster/historyserver/Dockerfile
index 2adda43..7c89d00 100644
--- a/docker/dockerfile/cluster/historyserver/Dockerfile
+++ b/docker/dockerfile/cluster/historyserver/Dockerfile
@@ -16,7 +16,7 @@
 #
 
 ARG HADOOP_VERSION=2.8.5
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 ARG HADOOP_HISTORY_PORT=8188
 ENV HADOOP_HISTORY_PORT ${HADOOP_HISTORY_PORT}
diff --git a/docker/dockerfile/cluster/hive/Dockerfile b/docker/dockerfile/cluster/hive/Dockerfile
index c3f11e5..de544d8 100644
--- a/docker/dockerfile/cluster/hive/Dockerfile
+++ b/docker/dockerfile/cluster/hive/Dockerfile
@@ -16,7 +16,7 @@
 #
 
 ARG HADOOP_VERSION=2.8.5
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 ENV HIVE_HOME /opt/hive
 ENV HADOOP_HOME /opt/hadoop-$HADOOP_VERSION
diff --git a/docker/dockerfile/cluster/hive/conf/hive-site.xml b/docker/dockerfile/cluster/hive/conf/hive-site.xml
index 60f3935..c6e1d92 100644
--- a/docker/dockerfile/cluster/hive/conf/hive-site.xml
+++ b/docker/dockerfile/cluster/hive/conf/hive-site.xml
@@ -14,5 +14,6 @@
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
---><configuration>
+-->
+<configuration>
 </configuration>
diff --git a/docker/dockerfile/cluster/hive/entrypoint.sh b/docker/dockerfile/cluster/hive/entrypoint.sh
index d6a888c..7a129cd 100644
--- a/docker/dockerfile/cluster/hive/entrypoint.sh
+++ b/docker/dockerfile/cluster/hive/entrypoint.sh
@@ -48,39 +48,39 @@ function configure() {
     done
 }
 
-configure /etc/hadoop/core-site.xml core CORE_CONF
-configure /etc/hadoop/hdfs-site.xml hdfs HDFS_CONF
-configure /etc/hadoop/yarn-site.xml yarn YARN_CONF
-configure /etc/hadoop/httpfs-site.xml httpfs HTTPFS_CONF
-configure /etc/hadoop/kms-site.xml kms KMS_CONF
-configure /etc/hadoop/mapred-site.xml mapred MAPRED_CONF
-configure /etc/hadoop/hive-site.xml hive HIVE_SITE_CONF
+configure /etc/hadoop/conf/core-site.xml core CORE_CONF
+configure /etc/hadoop/conf/hdfs-site.xml hdfs HDFS_CONF
+configure /etc/hadoop/conf/yarn-site.xml yarn YARN_CONF
+configure /etc/hadoop/conf/httpfs-site.xml httpfs HTTPFS_CONF
+configure /etc/hadoop/conf/kms-site.xml kms KMS_CONF
+configure /etc/hadoop/conf/mapred-site.xml mapred MAPRED_CONF
+configure /etc/hadoop/conf/hive-site.xml hive HIVE_SITE_CONF
 configure /opt/hive/conf/hive-site.xml hive HIVE_SITE_CONF
 
 if [ "$MULTIHOMED_NETWORK" = "1" ]; then
     echo "Configuring for multihomed network"
 
     # HDFS
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
-    addProperty /etc/hadoop/hdfs-site.xml dfs.client.use.datanode.hostname true
-    addProperty /etc/hadoop/hdfs-site.xml dfs.datanode.use.datanode.hostname true
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.client.use.datanode.hostname true
+    addProperty /etc/hadoop/conf/hdfs-site.xml dfs.datanode.use.datanode.hostname true
 
     # YARN
-    addProperty /etc/hadoop/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
-    addProperty /etc/hadoop/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
 
     # MAPRED
-    addProperty /etc/hadoop/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/conf/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
 fi
 
 if [ -n "$GANGLIA_HOST" ]; then
-    mv /etc/hadoop/hadoop-metrics.properties /etc/hadoop/hadoop-metrics.properties.orig
-    mv /etc/hadoop/hadoop-metrics2.properties /etc/hadoop/hadoop-metrics2.properties.orig
+    mv /etc/hadoop/conf/hadoop-metrics.properties /etc/hadoop/conf/hadoop-metrics.properties.orig
+    mv /etc/hadoop/conf/hadoop-metrics2.properties /etc/hadoop/conf/hadoop-metrics2.properties.orig
 
     for module in mapred jvm rpc ugi; do
         echo "$module.class=org.apache.hadoop.metrics.ganglia.GangliaContext31"
diff --git a/docker/dockerfile/cluster/hmaster/Dockerfile b/docker/dockerfile/cluster/hmaster/Dockerfile
index 09aa0e3..bcdc1de 100644
--- a/docker/dockerfile/cluster/hmaster/Dockerfile
+++ b/docker/dockerfile/cluster/hmaster/Dockerfile
@@ -2,7 +2,7 @@
 
 ARG HBASE_VERSION=1.1.2
 
-FROM apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
+FROM apachekylin/kylin-ci-hbase-base:hbase_${HBASE_VERSION}
 
 ENV HBASE_VERSION ${HBASE_VERSION}
 COPY run_hm.sh /run_hm.sh
diff --git a/docker/dockerfile/cluster/hregionserver/Dockerfile b/docker/dockerfile/cluster/hregionserver/Dockerfile
index aaced16..f4e63e9 100644
--- a/docker/dockerfile/cluster/hregionserver/Dockerfile
+++ b/docker/dockerfile/cluster/hregionserver/Dockerfile
@@ -1,6 +1,6 @@
 ARG HBASE_VERSION=1.1.2
 
-FROM apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
+FROM apachekylin/kylin-ci-hbase-base:hbase_${HBASE_VERSION}
 
 ENV HBASE_VERSION ${HBASE_VERSION}
 
diff --git a/docker/dockerfile/cluster/kylin/Dockerfile b/docker/dockerfile/cluster/kylin/Dockerfile
index 2bd4a1b..9c2a4cf 100644
--- a/docker/dockerfile/cluster/kylin/Dockerfile
+++ b/docker/dockerfile/cluster/kylin/Dockerfile
@@ -20,6 +20,6 @@ ARG HIVE_VERSION=1.2.1
 ARG HBASE_VERSION=1.1.2
 ARG SPARK_VERSION=2.3.3
 
-FROM apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_spark_${HBASE_VERSION}_spark_${SPARK_VERSION} AS client
+FROM apachekylin/kylin-ci-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_spark_${HBASE_VERSION}_spark_${SPARK_VERSION} AS client
 
 #CMD ["/bin/bash"]
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/namenode/Dockerfile b/docker/dockerfile/cluster/namenode/Dockerfile
index 3418680..0a44841 100644
--- a/docker/dockerfile/cluster/namenode/Dockerfile
+++ b/docker/dockerfile/cluster/namenode/Dockerfile
@@ -16,7 +16,7 @@
 #
 
 ARG HADOOP_VERSION=2.8.5
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 ENV HADOOP_VERSION ${HADOOP_VERSION}
 
diff --git a/docker/dockerfile/cluster/nodemanager/Dockerfile b/docker/dockerfile/cluster/nodemanager/Dockerfile
index 8ec68df..631dcae 100644
--- a/docker/dockerfile/cluster/nodemanager/Dockerfile
+++ b/docker/dockerfile/cluster/nodemanager/Dockerfile
@@ -16,7 +16,7 @@
 #
 
 ARG HADOOP_VERSION=2.8.5
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 MAINTAINER kylin
 
diff --git a/docker/dockerfile/cluster/resourcemanager/Dockerfile b/docker/dockerfile/cluster/resourcemanager/Dockerfile
index b99027f..5fee110 100644
--- a/docker/dockerfile/cluster/resourcemanager/Dockerfile
+++ b/docker/dockerfile/cluster/resourcemanager/Dockerfile
@@ -16,7 +16,7 @@
 #
 
 ARG HADOOP_VERSION=2.8.5
-FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 
 MAINTAINER kylin
 
diff --git a/docker/header.sh b/docker/header.sh
index a990d90..a5a6cf7 100644
--- a/docker/header.sh
+++ b/docker/header.sh
@@ -28,8 +28,7 @@ eval set -- "${ARGS}"
 HADOOP_VERSION="2.8.5"
 HIVE_VERSION="1.2.2"
 HBASE_VERSION="1.1.2"
-
-# write write-read
+# write,write-read
 CLUSTER_MODE="write"
 # yes,no
 ENABLE_HBASE="yes"
@@ -37,7 +36,7 @@ ENABLE_HBASE="yes"
 ENABLE_LDAP="no"
 # yes,no
 ENABLE_KERBEROS="no"
-#
+# yes,no
 ENABLE_KAFKA="no"
 
 while true;
@@ -116,21 +115,21 @@ export HBASE_VERSION=$HBASE_VERSION
 export HADOOP_VERSION=$HADOOP_VERSION
 export HIVE_VERSION=$HIVE_VERSION
 
-export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
-export HADOOP_DATANODE_IMAGETAG=apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION}
-export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION}
-export HADOOP_RESOURCEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION}
-export HADOOP_NODEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION}
-export HADOOP_HISTORYSERVER_IMAGETAG=apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION}
-export HIVE_IMAGETAG=apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION}
+export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
+export HADOOP_DATANODE_IMAGETAG=apachekylin/kylin-ci-hadoop-datanode:hadoop_${HADOOP_VERSION}
+export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-ci-hadoop-namenode:hadoop_${HADOOP_VERSION}
+export HADOOP_RESOURCEMANAGER_IMAGETAG=apachekylin/kylin-ci-hadoop-resourcemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_NODEMANAGER_IMAGETAG=apachekylin/kylin-ci-hadoop-nodemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_HISTORYSERVER_IMAGETAG=apachekylin/kylin-ci-hadoop-historyserver:hadoop_${HADOOP_VERSION}
+export HIVE_IMAGETAG=apachekylin/kylin-ci-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION}
 
-export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
-export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION}
-export HBASE_REGIONSERVER_IMAGETAG=apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION}
+export HBASE_MASTER_IMAGETAG=apachekylin/kylin-ci-hbase-base:hbase_${HBASE_VERSION}
+export HBASE_MASTER_IMAGETAG=apachekylin/kylin-ci-hbase-master:hbase_${HBASE_VERSION}
+export HBASE_REGIONSERVER_IMAGETAG=apachekylin/kylin-ci-hbase-regionserver:hbase_${HBASE_VERSION}
 
 export KAFKA_IMAGE=bitnami/kafka:2.0.0
 export LDAP_IMAGE=osixia/openldap:1.3.0
-export CLIENT_IMAGETAG=apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}
+export CLIENT_IMAGETAG=apachekylin/kylin-ci-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}
 
 if [[ $HADOOP_VERSION < "3" ]]; then
   export HADOOP_WEBHDFS_PORT=50070
diff --git a/docker/setup_cluster.sh b/docker/setup_cluster.sh
index 34cc01e..b323cd7 100644
--- a/docker/setup_cluster.sh
+++ b/docker/setup_cluster.sh
@@ -19,34 +19,37 @@
 SCRIPT_PATH=$(cd `dirname $0`; pwd)
 WS_ROOT=`dirname $SCRIPT_PATH`
 
-source ${SCRIPT_PATH}/build_cluster_images.sh
-
-# restart cluster
+#source ${SCRIPT_PATH}/build_cluster_images.sh
+source ${SCRIPT_PATH}/header.sh
 
+echo "Restart main Hadoop cluster ......"
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml down
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-zookeeper.yml down
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-metastore.yml down
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hive.yml down
+
 sleep 5
-# hadoop
+
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml up -d
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-zookeeper.yml up -d
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-metastore.yml up -d
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hive.yml up -d
 
-
+echo "Restart Kerberos service ......"
 if [ $ENABLE_KERBEROS == "yes" ]; then
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kerberos.yml down
   sleep 2
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kerberos.yml up -d
 fi
 
+echo "Restart LADP service ......"
 if [ $ENABLE_LDAP == "yes" ]; then
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-ldap.yml down
   sleep 2
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-ldap.yml up -d
 fi
 
+echo "Restart Kafka service ......"
 if [ $ENABLE_KAFKA == "yes" ]; then
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-kafka.yml down
   sleep 2
@@ -55,6 +58,7 @@ fi
 
 
 if [ $CLUSTER_MODE == "write" ]; then
+  echo "Restart Kylin cluster & HBase cluster ......"
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml down
   if [ $ENABLE_HBASE == "yes" ]; then
     KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml down
@@ -64,8 +68,8 @@ if [ $CLUSTER_MODE == "write" ]; then
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml up -d
 fi
 
-# restart cluster
 if [ $CLUSTER_MODE == "write-read" ]; then
+  echo "Restart Kylin cluster[write-read mode] & Read HBase cluster ......"
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml down
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml down
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml down


[kylin] 01/13: KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 69ac9cea322d5430ac0756caab730ceb19449e7e
Author: yongheng.liu <li...@gmail.com>
AuthorDate: Fri Sep 25 15:24:51 2020 +0800

    KYLIN-4775 Use docker-compose to deploy Hadoop and Kylin
---
 docker/Dockerfile_hadoop                           |  96 ---------
 docker/build_cluster_images.sh                     |  89 +++++++++
 .../{build_image.sh => build_standalone_image.sh}  |   0
 .../others/docker-compose-kerberos.yml             |  13 ++
 .../read/conf/hadoop-read}/core-site.xml           |  14 +-
 .../read/conf/hadoop-read/hdfs-site.xml            |  31 +++
 .../read/conf/hadoop-read/mapred-site.xml}         |  13 +-
 .../read/conf/hadoop-read/yarn-site.xml            |  46 +++++
 .../read}/conf/hadoop/core-site.xml                |  15 +-
 .../docker-compose/read/conf/hadoop/hdfs-site.xml  |  31 +++
 .../read/conf/hadoop/mapred-site.xml}              |  13 +-
 .../docker-compose/read/conf/hadoop/yarn-site.xml  |  46 +++++
 .../docker-compose/read/conf/hbase/hbase-site.xml  |  34 ++++
 docker/docker-compose/read/conf/hive/hive-site.xml |  25 +++
 .../read/docker-compose-zookeeper.yml              |  18 ++
 docker/docker-compose/read/read-hadoop.env         |  40 ++++
 .../read/read-hbase-distributed-local.env          |  12 ++
 docker/docker-compose/write-read/client.env        |  61 ++++++
 .../write-read/test-docker-compose-mysql.yml       |  16 ++
 docker/docker-compose/write/client.env             |  61 ++++++
 .../write/conf/hadoop-read}/core-site.xml          |  14 +-
 .../write/conf/hadoop-read/hdfs-site.xml           |  31 +++
 .../write/conf/hadoop-read/mapred-site.xml}        |  13 +-
 .../write/conf/hadoop-read/yarn-site.xml           |  46 +++++
 .../write/conf/hadoop-write}/core-site.xml         |  14 +-
 .../write/conf/hadoop-write/hdfs-site.xml          |  31 +++
 .../write/conf/hadoop-write/mapred-site.xml}       |  13 +-
 .../write/conf/hadoop-write/yarn-site.xml          |  46 +++++
 .../write}/conf/hadoop/core-site.xml               |  15 +-
 .../docker-compose/write/conf/hadoop/hdfs-site.xml |  31 +++
 .../write/conf/hadoop/mapred-site.xml}             |  13 +-
 .../docker-compose/write/conf/hadoop/yarn-site.xml |  46 +++++
 .../docker-compose/write/conf/hbase/hbase-site.xml |  34 ++++
 .../docker-compose/write/conf/hive/hive-site.xml   |  25 +++
 .../docker-compose/write/docker-compose-kafka.yml  |  18 ++
 .../docker-compose/write/docker-compose-write.yml  | 215 +++++++++++++++++++++
 .../write/docker-compose-zookeeper.yml             |  18 ++
 docker/docker-compose/write/write-hadoop.env       |  47 +++++
 .../write/write-hbase-distributed-local.env        |  12 ++
 docker/dockerfile/cluster/base/Dockerfile          |  78 ++++++++
 docker/dockerfile/cluster/base/entrypoint.sh       | 140 ++++++++++++++
 docker/dockerfile/cluster/client/Dockerfile        | 157 +++++++++++++++
 .../cluster/client/conf/hadoop-read}/core-site.xml |  14 +-
 .../cluster/client/conf/hadoop-read/hdfs-site.xml  |  31 +++
 .../client/conf/hadoop-read/mapred-site.xml}       |  13 +-
 .../cluster/client/conf/hadoop-read/yarn-site.xml  |  46 +++++
 .../client/conf/hadoop-write}/core-site.xml        |  14 +-
 .../cluster/client/conf/hadoop-write/hdfs-site.xml |  31 +++
 .../client/conf/hadoop-write/mapred-site.xml}      |  13 +-
 .../cluster/client/conf/hadoop-write/yarn-site.xml |  46 +++++
 .../cluster/client/conf/hbase/hbase-site.xml       |  34 ++++
 .../cluster/client/conf/hive/hive-site.xml         |  25 +++
 docker/dockerfile/cluster/client/entrypoint.sh     |   7 +
 docker/dockerfile/cluster/client/run_cli.sh        |  10 +
 .../cluster/datanode/Dockerfile}                   |  20 +-
 .../cluster/datanode/run_dn.sh}                    |  18 +-
 docker/dockerfile/cluster/hbase/Dockerfile         |  59 ++++++
 docker/dockerfile/cluster/hbase/entrypoint.sh      |  83 ++++++++
 .../cluster/historyserver/Dockerfile}              |  23 ++-
 .../cluster/historyserver/run_history.sh}          |  12 +-
 docker/dockerfile/cluster/hive/Dockerfile          |  73 +++++++
 .../cluster/hive/conf/beeline-log4j2.properties    |  46 +++++
 docker/dockerfile/cluster/hive/conf/hive-env.sh    |  55 ++++++
 .../cluster/hive/conf/hive-exec-log4j2.properties  |  67 +++++++
 .../cluster/hive/conf/hive-log4j2.properties       |  74 +++++++
 docker/dockerfile/cluster/hive/conf/hive-site.xml  |  18 ++
 .../dockerfile/cluster/hive/conf/ivysettings.xml   |  44 +++++
 .../hive/conf/llap-daemon-log4j2.properties        |  94 +++++++++
 docker/dockerfile/cluster/hive/entrypoint.sh       | 136 +++++++++++++
 .../cluster/hive/run_hv.sh}                        |  18 +-
 docker/dockerfile/cluster/hmaster/Dockerfile       |  13 ++
 .../cluster/hmaster/run_hm.sh}                     |  12 +-
 docker/dockerfile/cluster/hregionserver/Dockerfile |  12 ++
 .../cluster/hregionserver/run_hr.sh}               |  12 +-
 .../cluster/kerberos/Dockerfile}                   |  24 ++-
 docker/dockerfile/cluster/kerberos/conf/kadm5.acl  |   1 +
 .../cluster/kerberos/conf/kdc.conf}                |  20 +-
 .../cluster/kerberos/conf/krb5.conf}               |  32 ++-
 .../cluster/kerberos/run_krb.sh}                   |  18 +-
 .../cluster/kylin/Dockerfile}                      |  17 +-
 docker/dockerfile/cluster/kylin/entrypoint.sh      |   3 +
 docker/dockerfile/cluster/metastore-db/Dockerfile  |  12 ++
 docker/dockerfile/cluster/metastore-db/run_db.sh   |  15 ++
 .../cluster/namenode/Dockerfile}                   |  25 ++-
 .../cluster/namenode/run_nn.sh}                    |  23 ++-
 .../cluster/nodemanager/Dockerfile}                |  21 +-
 .../cluster/nodemanager/run_nm.sh}                 |  12 +-
 docker/dockerfile/cluster/pom.xml                  |  81 ++++++++
 .../cluster/resourcemanager/Dockerfile}            |  21 +-
 .../cluster/resourcemanager/run_rm.sh}             |  12 +-
 docker/{ => dockerfile/standalone}/Dockerfile      |   0
 .../standalone}/conf/hadoop/core-site.xml          |   0
 .../standalone}/conf/hadoop/hdfs-site.xml          |   0
 .../standalone}/conf/hadoop/mapred-site.xml        |   0
 .../standalone}/conf/hadoop/yarn-site.xml          |   0
 .../standalone}/conf/hive/hive-site.xml            |   0
 .../standalone}/conf/maven/settings.xml            |   0
 docker/{ => dockerfile/standalone}/entrypoint.sh   |   0
 docker/setup_cluster.sh                            |  28 +++
 docker/{run_container.sh => setup_standalone.sh}   |   0
 docker/stop_cluster.sh                             |  23 +++
 101 files changed, 2906 insertions(+), 386 deletions(-)

diff --git a/docker/Dockerfile_hadoop b/docker/Dockerfile_hadoop
deleted file mode 100644
index 8e76855..0000000
--- a/docker/Dockerfile_hadoop
+++ /dev/null
@@ -1,96 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# Docker image with Hadoop/Spark/Hive/ZK/Kafka installed
-FROM centos:6.9
-
-ENV HIVE_VERSION 1.2.1
-ENV HADOOP_VERSION 2.7.0
-ENV SPARK_VERSION 2.4.6
-ENV ZK_VERSION 3.4.6
-ENV KAFKA_VERSION 1.1.1
-
-ENV JAVA_HOME /home/admin/jdk1.8.0_141
-ENV MVN_HOME /home/admin/apache-maven-3.6.1
-ENV HADOOP_HOME /home/admin/hadoop-$HADOOP_VERSION
-ENV HIVE_HOME /home/admin/apache-hive-$HIVE_VERSION-bin
-ENV HADOOP_CONF $HADOOP_HOME/etc/hadoop
-ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
-ENV SPARK_HOME /home/admin/spark-$SPARK_VERSION-bin-hadoop2.7
-ENV SPARK_CONF_DIR $SPARK_HOME/conf
-ENV ZK_HOME /home/admin/zookeeper-$ZK_VERSION
-ENV KAFKA_HOME /home/admin/kafka_2.11-$KAFKA_VERSION
-ENV PATH $PATH:$JAVA_HOME/bin:$ZK_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$MVN_HOME/bin:$KAFKA_HOME/bin
-
-USER root
-
-WORKDIR /home/admin
-
-# install tools
-RUN yum -y install lsof.x86_64 wget.x86_64 tar.x86_64 git.x86_64 mysql-server.x86_64 mysql.x86_64 unzip.x86_64
-
-# install mvn
-RUN wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz \
-    && tar -zxvf apache-maven-3.6.1-bin.tar.gz \
-    && rm -f apache-maven-3.6.1-bin.tar.gz
-COPY conf/maven/settings.xml $MVN_HOME/conf/settings.xml
-
-# install npm
-RUN curl -sL https://rpm.nodesource.com/setup_8.x | bash - \
-    && yum install -y nodejs
-
-# setup jdk
-RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz" \
-    && tar -zxvf /home/admin/jdk-8u141-linux-x64.tar.gz \
-    && rm -f /home/admin/jdk-8u141-linux-x64.tar.gz
-
-# setup hadoop
-RUN wget https://archive.apache.org/dist/hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz \
-    && tar -zxvf /home/admin/hadoop-$HADOOP_VERSION.tar.gz \
-    && rm -f /home/admin/hadoop-$HADOOP_VERSION.tar.gz \
-    && mkdir -p /data/hadoop
-COPY conf/hadoop/* $HADOOP_CONF/
-
-# setup hive
-RUN wget https://archive.apache.org/dist/hive/hive-$HIVE_VERSION/apache-hive-$HIVE_VERSION-bin.tar.gz \
-    && tar -zxvf /home/admin/apache-hive-$HIVE_VERSION-bin.tar.gz \
-    && rm -f /home/admin/apache-hive-$HIVE_VERSION-bin.tar.gz \
-    && wget -P $HIVE_HOME/lib https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar
-COPY conf/hive/hive-site.xml $HIVE_HOME/conf
-COPY conf/hive/hive-site.xml $HADOOP_CONF/
-
-# setup spark
-RUN wget https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop2.7.tgz \
-    && tar -zxvf /home/admin/spark-$SPARK_VERSION-bin-hadoop2.7.tgz \
-    && rm -f /home/admin/spark-$SPARK_VERSION-bin-hadoop2.7.tgz \
-    && cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf \
-    && cp $SPARK_HOME/yarn/*.jar $HADOOP_HOME/share/hadoop/yarn/lib
-RUN cp $HIVE_HOME/lib/mysql-connector-java-5.1.24.jar $SPARK_HOME/jars
-RUN cp $HIVE_HOME/hcatalog/share/hcatalog/hive-hcatalog-core-1.2.1.jar $SPARK_HOME/jars/
-COPY conf/spark/* $SPARK_CONF_DIR/
-
-# setup kafka
-RUN wget https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka_2.11-$KAFKA_VERSION.tgz \
-    && tar -zxvf /home/admin/kafka_2.11-$KAFKA_VERSION.tgz \
-    && rm -f /home/admin/kafka_2.11-$KAFKA_VERSION.tgz
-
-# setup zk
-RUN wget https://archive.apache.org/dist/zookeeper/zookeeper-$ZK_VERSION/zookeeper-$ZK_VERSION.tar.gz \
-    && tar -zxvf /home/admin/zookeeper-$ZK_VERSION.tar.gz \
-    && rm -f /home/admin/zookeeper-$ZK_VERSION.tar.gz \
-    && mkdir -p /data/zookeeper
-COPY conf/zk/zoo.cfg $ZK_HOME/conf
diff --git a/docker/build_cluster_images.sh b/docker/build_cluster_images.sh
new file mode 100644
index 0000000..ac60533
--- /dev/null
+++ b/docker/build_cluster_images.sh
@@ -0,0 +1,89 @@
+#!/bin/bash
+
+ARGS=`getopt -o h:i:b --long hadoop_version:,hive_version:,hbase_version: -n 'parameter.bash' -- "$@"`
+
+if [ $? != 0 ]; then
+    echo "Terminating..."
+    exit 1
+fi
+
+eval set -- "${ARGS}"
+
+HADOOP_VERSION="2.8.5"
+HIVE_VERSION="1.2.2"
+HBASE_VERSION="1.1.2"
+
+while true;
+do
+    case "$1" in
+        --hadoop_version)
+            HADOOP_VERSION=$2;
+            shift 2;
+            ;;
+        --hive_version)
+            HIVE_VERSION=$2;
+            shift 2;
+            ;;
+        --hbase_version)
+            HBASE_VERSION=$2;
+            shift 2;
+            ;;
+        --)
+            break
+            ;;
+        *)
+            echo "Internal error!"
+            break
+            ;;
+    esac
+done
+
+for arg in $@
+do
+    echo "processing $arg"
+done
+
+echo "........hadoop version: "$HADOOP_VERSION
+echo "........hive version: "$HIVE_VERSION
+echo "........hbase version: "$HBASE_VERSION
+
+#docker build -t apachekylin/kylin-metastore:mysql_5.6.49 ./kylin/metastore-db
+
+docker build -t apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/base
+docker build -t apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/namenode
+docker build -t apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/datanode
+docker build -t apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/resourcemanager
+docker build -t apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/nodemanager
+docker build -t apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION} --build-arg HADOOP_VERSION=${HADOOP_VERSION} ./dockerfile/cluster/historyserver
+
+docker build -t apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} \
+--build-arg HIVE_VERSION=${HIVE_VERSION} \
+--build-arg HADOOP_VERSION=${HADOOP_VERSION} \
+./dockerfile/cluster/hive
+
+docker build -t apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hbase
+docker build -t apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hmaster
+docker build -t apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION} --build-arg HBASE_VERSION=${HBASE_VERSION} ./dockerfile/cluster/hregionserver
+
+docker build -t apachekylin/kylin-kerberos:latest ./dockerfile/cluster/kerberos
+
+docker build -t apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION} \
+--build-arg HIVE_VERSION=${HIVE_VERSION} \
+--build-arg HADOOP_VERSION=${HADOOP_VERSION} \
+--build-arg HBASE_VERSION=${HBASE_VERSION} \
+./dockerfile/cluster/client
+
+
+export HADOOP_BASE_IMAGETAG=apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}  # base image tag; assumed variable name, kept distinct so the namenode tag below is not overwritten
+export HADOOP_DATANODE_IMAGETAG=apachekylin/kylin-hadoop-datanode:hadoop_${HADOOP_VERSION}
+export HADOOP_NAMENODE_IMAGETAG=apachekylin/kylin-hadoop-namenode:hadoop_${HADOOP_VERSION}
+export HADOOP_RESOURCEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-resourcemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_NODEMANAGER_IMAGETAG=apachekylin/kylin-hadoop-nodemanager:hadoop_${HADOOP_VERSION}
+export HADOOP_HISTORYSERVER_IMAGETAG=apachekylin/kylin-hadoop-historyserver:hadoop_${HADOOP_VERSION}
+export HIVE_IMAGETAG=apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION}
+export HBASE_BASE_IMAGETAG=apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}  # hbase base image tag; assumed variable name, kept distinct so the master tag below is not overwritten
+export HBASE_MASTER_IMAGETAG=apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION}
+export HBASE_REGIONSERVER_IMAGETAG=apachekylin/kylin-hbase-regionserver:hbase_${HBASE_VERSION}
+export CLIENT_IMAGETAG=apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}
+export KERBEROS_IMAGE=apachekylin/kylin-kerberos:latest
+
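Note: build_cluster_images.sh reads the component versions from the long options parsed above, falling back to the defaults 2.8.5 / 1.2.2 / 1.1.2. A typical invocation might look like the following (a sketch; run it from the docker/ directory so the relative ./dockerfile/cluster/* paths resolve):

    cd docker
    bash build_cluster_images.sh --hadoop_version 2.8.5 --hive_version 1.2.2 --hbase_version 1.1.2

The exported *_IMAGETAG variables are only visible to the calling shell if the script is sourced (". build_cluster_images.sh"); invoking it with "bash" still builds the images but drops the exports when the subshell exits.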
diff --git a/docker/build_image.sh b/docker/build_standalone_image.sh
similarity index 100%
copy from docker/build_image.sh
copy to docker/build_standalone_image.sh
diff --git a/docker/docker-compose/others/docker-compose-kerberos.yml b/docker/docker-compose/others/docker-compose-kerberos.yml
new file mode 100644
index 0000000..3d90062
--- /dev/null
+++ b/docker/docker-compose/others/docker-compose-kerberos.yml
@@ -0,0 +1,13 @@
+version: "3.3"
+
+services:
+  kerberos-kdc:
+    image: ${KERBEROS_IMAGE}
+    container_name: kerberos-kdc
+    hostname: kerberos-kdc
+    networks:
+      - write_kylin
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
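The KDC service above joins an external Docker network named write_kylin, so that network has to exist before this file is used, and KERBEROS_IMAGE must be set (build_cluster_images.sh exports it). A minimal sketch, run from the docker/ directory:

    docker network create write_kylin    # only needed if nothing has created it yet
    export KERBEROS_IMAGE=apachekylin/kylin-kerberos:latest
    docker-compose -f docker-compose/others/docker-compose-kerberos.yml up -d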
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/read/conf/hadoop-read/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/read/conf/hadoop-read/core-site.xml
index 6fe6404..69fc462 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/read/conf/hadoop-read/core-site.xml
@@ -17,13 +17,9 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
 </configuration>
diff --git a/docker/docker-compose/read/conf/hadoop-read/hdfs-site.xml b/docker/docker-compose/read/conf/hadoop-read/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/docker-compose/read/conf/hadoop-read/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/read/conf/hadoop-read/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/read/conf/hadoop-read/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/read/conf/hadoop-read/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/docker-compose/read/conf/hadoop-read/yarn-site.xml b/docker/docker-compose/read/conf/hadoop-read/yarn-site.xml
new file mode 100644
index 0000000..392cf4c
--- /dev/null
+++ b/docker/docker-compose/read/conf/hadoop-read/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource_tracker.address</name><value>read-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>read-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>read-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://read-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>read-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>read-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/read/conf/hadoop/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/read/conf/hadoop/core-site.xml
index 6fe6404..dd5a81b 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/read/conf/hadoop/core-site.xml
@@ -17,13 +17,10 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
+
 </configuration>
diff --git a/docker/docker-compose/read/conf/hadoop/hdfs-site.xml b/docker/docker-compose/read/conf/hadoop/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/docker-compose/read/conf/hadoop/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/read/conf/hadoop/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/read/conf/hadoop/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/read/conf/hadoop/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/docker-compose/read/conf/hadoop/yarn-site.xml b/docker/docker-compose/read/conf/hadoop/yarn-site.xml
new file mode 100644
index 0000000..b55dd34
--- /dev/null
+++ b/docker/docker-compose/read/conf/hadoop/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource_tracker.address</name><value>write-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>write-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>write-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://write-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>write-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>write-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/docker-compose/read/conf/hbase/hbase-site.xml b/docker/docker-compose/read/conf/hbase/hbase-site.xml
new file mode 100644
index 0000000..988d91c
--- /dev/null
+++ b/docker/docker-compose/read/conf/hbase/hbase-site.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+<property><name>hbase.zookeeper.quorum</name><value>read-zookeeper</value></property>
+<property><name>hbase.master</name><value>read-hbase-master:16000</value></property>
+<property><name>hbase.regionserver.port</name><value>16020</value></property>
+<property><name>hbase.regionserver.info.port</name><value>16030</value></property>
+<property><name>DIR</name><value>/etc/hbase</value></property>
+<property><name>hbase.cluster.distributed</name><value>true</value></property>
+<property><name>hbase.rootdir</name><value>hdfs://read-namenode:8020/hbase</value></property>
+<property><name>hbase.master.info.port</name><value>16010</value></property>
+<property><name>hbase.master.hostname</name><value>read-hbase-master</value></property>
+<property><name>hbase.master.port</name><value>16000</value></property>
+</configuration>
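A quick way to confirm the read HBase cluster described by this file is reachable, assuming a client container that has the hbase CLI on its PATH and this hbase-site.xml in its configuration directory:

    hbase shell
    status 'simple'    # lists live/dead region servers of the read cluster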
diff --git a/docker/docker-compose/read/conf/hive/hive-site.xml b/docker/docker-compose/read/conf/hive/hive-site.xml
new file mode 100644
index 0000000..c60fe36
--- /dev/null
+++ b/docker/docker-compose/read/conf/hive/hive-site.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--><configuration>
+    <property><name>hive.metastore.uris</name><value>thrift://write-hive-metastore:9083</value></property>
+    <property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
+    <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:postgresql://write-hive-metastore-postgresql/metastore</value></property>
+    <property><name>javax.jdo.option.ConnectionDriverName</name><value>org.postgresql.Driver</value></property>
+    <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
+    <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
+</configuration>
+
diff --git a/docker/docker-compose/read/docker-compose-zookeeper.yml b/docker/docker-compose/read/docker-compose-zookeeper.yml
new file mode 100644
index 0000000..71ea252
--- /dev/null
+++ b/docker/docker-compose/read/docker-compose-zookeeper.yml
@@ -0,0 +1,18 @@
+version: "3.3"
+
+services:
+  read-zookeeper:
+    image: ${ZOOKEEPER_IMAGETAG:-zookeeper:3.4.10}
+    container_name: read-zookeeper
+    hostname: read-zookeeper
+    environment:
+      ZOO_MY_ID: 1
+      ZOO_SERVERS: server.1=0.0.0.0:2888:3888
+    networks:
+      - write_kylin
+    ports:
+      - 2182:2181
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
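If ZOOKEEPER_IMAGETAG is not set, the service falls back to the stock zookeeper:3.4.10 image, and container port 2181 is published on host port 2182, presumably to avoid colliding with the write cluster's ZooKeeper on 2181. One way to sanity-check it after starting (assuming nc is available on the host):

    docker-compose -f docker-compose/read/docker-compose-zookeeper.yml up -d
    echo ruok | nc localhost 2182    # a healthy server answers "imok"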
diff --git a/docker/docker-compose/read/read-hadoop.env b/docker/docker-compose/read/read-hadoop.env
new file mode 100644
index 0000000..9c0086d
--- /dev/null
+++ b/docker/docker-compose/read/read-hadoop.env
@@ -0,0 +1,40 @@
+CORE_CONF_fs_defaultFS=hdfs://read-namenode:8020
+CORE_CONF_hadoop_http_staticuser_user=root
+CORE_CONF_hadoop_proxyuser_hue_hosts=*
+CORE_CONF_hadoop_proxyuser_hue_groups=*
+CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
+
+HDFS_CONF_dfs_webhdfs_enabled=true
+HDFS_CONF_dfs_permissions_enabled=false
+HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
+
+YARN_CONF_yarn_log___aggregation___enable=true
+YARN_CONF_yarn_log_server_url=http://read-historyserver:8188/applicationhistory/logs/
+YARN_CONF_yarn_resourcemanager_recovery_enabled=true
+YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
+YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
+YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
+YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
+YARN_CONF_yarn_resourcemanager_hostname=read-resourcemanager
+YARN_CONF_yarn_resourcemanager_address=read-resourcemanager:8032
+YARN_CONF_yarn_resourcemanager_scheduler_address=read-resourcemanager:8030
+YARN_CONF_yarn_resourcemanager_resource__tracker_address=read-resourcemanager:8031
+YARN_CONF_yarn_timeline___service_enabled=true
+YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
+YARN_CONF_yarn_timeline___service_hostname=read-historyserver
+YARN_CONF_mapreduce_map_output_compress=true
+YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
+YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
+YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
+YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+
+MAPRED_CONF_mapreduce_framework_name=yarn
+MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
+MAPRED_CONF_mapreduce_map_memory_mb=4096
+MAPRED_CONF_mapreduce_reduce_memory_mb=8192
+MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
+MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
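These read-hadoop.env entries appear to follow the naming convention of the underlying Hadoop images: the prefix (CORE_CONF_, HDFS_CONF_, YARN_CONF_, MAPRED_CONF_) selects the target *-site.xml, and in the remainder of the key a single underscore becomes ".", a double underscore becomes "_", and a triple underscore becomes "-". The generated read yarn-site.xml above matches this mapping, for example:

    YARN_CONF_yarn_log___aggregation___enable=true
      -> <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
    YARN_CONF_yarn_resourcemanager_resource__tracker_address=read-resourcemanager:8031
      -> <property><name>yarn.resourcemanager.resource_tracker.address</name><value>read-resourcemanager:8031</value></property>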
diff --git a/docker/docker-compose/read/read-hbase-distributed-local.env b/docker/docker-compose/read/read-hbase-distributed-local.env
new file mode 100644
index 0000000..4ba8e19
--- /dev/null
+++ b/docker/docker-compose/read/read-hbase-distributed-local.env
@@ -0,0 +1,12 @@
+HBASE_CONF_hbase_rootdir=hdfs://read-namenode:8020/hbase
+HBASE_CONF_hbase_cluster_distributed=true
+HBASE_CONF_hbase_zookeeper_quorum=read-zookeeper
+
+HBASE_CONF_hbase_master=read-hbase-master:16000
+HBASE_CONF_hbase_master_hostname=read-hbase-master
+HBASE_CONF_hbase_master_port=16000
+HBASE_CONF_hbase_master_info_port=16010
+HBASE_CONF_hbase_regionserver_port=16020
+HBASE_CONF_hbase_regionserver_info_port=16030
+
+HBASE_MANAGES_ZK=false
\ No newline at end of file
diff --git a/docker/docker-compose/write-read/client.env b/docker/docker-compose/write-read/client.env
new file mode 100644
index 0000000..fc0743c
--- /dev/null
+++ b/docker/docker-compose/write-read/client.env
@@ -0,0 +1,61 @@
+CORE_CONF_fs_defaultFS=hdfs://write-namenode:8020
+CORE_CONF_hadoop_http_staticuser_user=root
+CORE_CONF_hadoop_proxyuser_hue_hosts=*
+CORE_CONF_hadoop_proxyuser_hue_groups=*
+CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
+
+HDFS_CONF_dfs_webhdfs_enabled=true
+HDFS_CONF_dfs_permissions_enabled=false
+HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
+
+YARN_CONF_yarn_log___aggregation___enable=true
+YARN_CONF_yarn_log_server_url=http://write-historyserver:8188/applicationhistory/logs/
+YARN_CONF_yarn_resourcemanager_recovery_enabled=true
+YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
+YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
+YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
+YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
+YARN_CONF_yarn_resourcemanager_hostname=write-resourcemanager
+YARN_CONF_yarn_resourcemanager_address=write-resourcemanager:8032
+YARN_CONF_yarn_resourcemanager_scheduler_address=write-resourcemanager:8030
+YARN_CONF_yarn_resourcemanager_resource__tracker_address=write-resourcemanager:8031
+YARN_CONF_yarn_timeline___service_enabled=true
+YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
+YARN_CONF_yarn_timeline___service_hostname=write-historyserver
+YARN_CONF_mapreduce_map_output_compress=true
+YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
+YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
+YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
+YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+
+MAPRED_CONF_mapreduce_framework_name=yarn
+MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
+MAPRED_CONF_mapreduce_map_memory_mb=4096
+MAPRED_CONF_mapreduce_reduce_memory_mb=8192
+MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
+MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
+
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
+HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
+HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
+HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
+HIVE_SITE_CONF_datanucleus_autoCreateSchema=true
+HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083
+
+HBASE_CONF_hbase_rootdir=hdfs://read-namenode:8020/hbase
+HBASE_CONF_hbase_cluster_distributed=true
+HBASE_CONF_hbase_zookeeper_quorum=read-zookeeper
+
+HBASE_CONF_hbase_master=read-hbase-master:16000
+HBASE_CONF_hbase_master_hostname=read-hbase-master
+HBASE_CONF_hbase_master_port=16000
+HBASE_CONF_hbase_master_info_port=16010
+HBASE_CONF_hbase_regionserver_port=16020
+HBASE_CONF_hbase_regionserver_info_port=16030
+
+HBASE_MANAGES_ZK=false
+
diff --git a/docker/docker-compose/write-read/test-docker-compose-mysql.yml b/docker/docker-compose/write-read/test-docker-compose-mysql.yml
new file mode 100644
index 0000000..5906c1e
--- /dev/null
+++ b/docker/docker-compose/write-read/test-docker-compose-mysql.yml
@@ -0,0 +1,16 @@
+
+version: "3.3"
+
+services:
+  metastore-db:
+    image: mysql:5.6.49
+    container_name: metastore-db
+    hostname: metastore-db
+    volumes:
+      - ./data/mysql:/var/lib/mysql
+    environment:
+      - MYSQL_ROOT_PASSWORD=kylin
+      - MYSQL_DATABASE=kylin
+      - MYSQL_USER=kylin
+      - MYSQL_PASSWORD=kylin
+
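This compose fragment keeps the MySQL data under ./data/mysql on the host and pre-creates the "kylin" database and user from the environment values above. The Hive settings in client.env point javax.jdo.option.ConnectionURL at jdbc:mysql://metastore-db/metastore with datanucleus autoCreateSchema enabled, so the metastore tables are created on first use (note that URL names the "metastore" database, while the compose file only pre-creates "kylin"). A minimal way to bring the metastore database up on its own (a sketch, from the docker/ directory):

    docker-compose -f docker-compose/write-read/test-docker-compose-mysql.yml up -d metastore-db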
diff --git a/docker/docker-compose/write/client.env b/docker/docker-compose/write/client.env
new file mode 100644
index 0000000..fc0743c
--- /dev/null
+++ b/docker/docker-compose/write/client.env
@@ -0,0 +1,61 @@
+CORE_CONF_fs_defaultFS=hdfs://write-namenode:8020
+CORE_CONF_hadoop_http_staticuser_user=root
+CORE_CONF_hadoop_proxyuser_hue_hosts=*
+CORE_CONF_hadoop_proxyuser_hue_groups=*
+CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
+
+HDFS_CONF_dfs_webhdfs_enabled=true
+HDFS_CONF_dfs_permissions_enabled=false
+HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
+
+YARN_CONF_yarn_log___aggregation___enable=true
+YARN_CONF_yarn_log_server_url=http://write-historyserver:8188/applicationhistory/logs/
+YARN_CONF_yarn_resourcemanager_recovery_enabled=true
+YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
+YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
+YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
+YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
+YARN_CONF_yarn_resourcemanager_hostname=write-resourcemanager
+YARN_CONF_yarn_resourcemanager_address=write-resourcemanager:8032
+YARN_CONF_yarn_resourcemanager_scheduler_address=write-resourcemanager:8030
+YARN_CONF_yarn_resourcemanager_resource__tracker_address=write-resourcemanager:8031
+YARN_CONF_yarn_timeline___service_enabled=true
+YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
+YARN_CONF_yarn_timeline___service_hostname=write-historyserver
+YARN_CONF_mapreduce_map_output_compress=true
+YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
+YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
+YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
+YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+
+MAPRED_CONF_mapreduce_framework_name=yarn
+MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
+MAPRED_CONF_mapreduce_map_memory_mb=4096
+MAPRED_CONF_mapreduce_reduce_memory_mb=8192
+MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
+MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
+
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
+HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
+HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
+HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
+HIVE_SITE_CONF_datanucleus_autoCreateSchema=true
+HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083
+
+HBASE_CONF_hbase_rootdir=hdfs://read-namenode:8020/hbase
+HBASE_CONF_hbase_cluster_distributed=true
+HBASE_CONF_hbase_zookeeper_quorum=read-zookeeper
+
+HBASE_CONF_hbase_master=read-hbase-master:16000
+HBASE_CONF_hbase_master_hostname=read-hbase-master
+HBASE_CONF_hbase_master_port=16000
+HBASE_CONF_hbase_master_info_port=16010
+HBASE_CONF_hbase_regionserver_port=16020
+HBASE_CONF_hbase_regionserver_info_port=16030
+
+HBASE_MANAGES_ZK=false
+
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop-read/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop-read/core-site.xml
index 6fe6404..69fc462 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop-read/core-site.xml
@@ -17,13 +17,9 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop-read/hdfs-site.xml b/docker/docker-compose/write/conf/hadoop-read/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop-read/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop-read/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop-read/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop-read/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop-read/yarn-site.xml b/docker/docker-compose/write/conf/hadoop-read/yarn-site.xml
new file mode 100644
index 0000000..392cf4c
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop-read/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource_tracker.address</name><value>read-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>read-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>read-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://read-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>read-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>read-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop-write/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop-write/core-site.xml
index 6fe6404..69fc462 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop-write/core-site.xml
@@ -17,13 +17,9 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop-write/hdfs-site.xml b/docker/docker-compose/write/conf/hadoop-write/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop-write/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop-write/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop-write/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop-write/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop-write/yarn-site.xml b/docker/docker-compose/write/conf/hadoop-write/yarn-site.xml
new file mode 100644
index 0000000..b55dd34
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop-write/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource_tracker.address</name><value>write-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>write-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>write-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://write-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>write-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>write-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop/core-site.xml
index 6fe6404..dd5a81b 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop/core-site.xml
@@ -17,13 +17,10 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
+
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop/hdfs-site.xml b/docker/docker-compose/write/conf/hadoop/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
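
Note: dfs.client.use.datanode.hostname and dfs.datanode.use.datanode.hostname make clients and DataNodes address each other by hostname instead of container-internal IPs, which is what keeps HDFS reachable across the compose network. A quick way to confirm the DataNodes registered by hostname, assuming a client container that uses this hdfs-site.xml:

    $ hdfs dfsadmin -report | grep -E '^(Name|Hostname):'
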
diff --git a/docker/conf/hadoop/core-site.xml b/docker/docker-compose/write/conf/hadoop/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/docker-compose/write/conf/hadoop/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/docker-compose/write/conf/hadoop/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/docker-compose/write/conf/hadoop/yarn-site.xml b/docker/docker-compose/write/conf/hadoop/yarn-site.xml
new file mode 100644
index 0000000..b55dd34
--- /dev/null
+++ b/docker/docker-compose/write/conf/hadoop/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource-tracker.address</name><value>write-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>write-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>write-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://write-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>write-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>write-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
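
Note: with yarn.resourcemanager.* pointing at write-resourcemanager, a client with this yarn-site.xml can verify that the NodeManagers registered and that applications are visible, for example:

    $ yarn node -list -all
    $ yarn application -list -appStates ALL
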
diff --git a/docker/docker-compose/write/conf/hbase/hbase-site.xml b/docker/docker-compose/write/conf/hbase/hbase-site.xml
new file mode 100644
index 0000000..988d91c
--- /dev/null
+++ b/docker/docker-compose/write/conf/hbase/hbase-site.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+<property><name>hbase.zookeeper.quorum</name><value>read-zookeeper</value></property>
+<property><name>hbase.master</name><value>read-hbase-master:16000</value></property>
+<property><name>hbase.regionserver.port</name><value>16020</value></property>
+<property><name>hbase.regionserver.info.port</name><value>16030</value></property>
+<property><name>DIR</name><value>/etc/hbase</value></property>
+<property><name>hbase.cluster.distributed</name><value>true</value></property>
+<property><name>hbase.rootdir</name><value>hdfs://read-namenode:8020/hbase</value></property>
+<property><name>hbase.master.info.port</name><value>16010</value></property>
+<property><name>hbase.master.hostname</name><value>read-hbase-master</value></property>
+<property><name>hbase.master.port</name><value>16000</value></property>
+</configuration>
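
Note: this write-side hbase-site.xml points at the read cluster (read-zookeeper, read-hbase-master, read-namenode), which is consistent with the read/write-separated deployment described in docker/README-cluster.md. A quick availability check from any container that has the HBase client on PATH and this file in HBASE_CONF_DIR:

    $ echo "status 'summary'" | hbase shell
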
diff --git a/docker/docker-compose/write/conf/hive/hive-site.xml b/docker/docker-compose/write/conf/hive/hive-site.xml
new file mode 100644
index 0000000..c60fe36
--- /dev/null
+++ b/docker/docker-compose/write/conf/hive/hive-site.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--><configuration>
+    <property><name>hive.metastore.uris</name><value>thrift://write-hive-metastore:9083</value></property>
+    <property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
+    <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:postgresql://write-hive-metastore-postgresql/metastore</value></property>
+    <property><name>javax.jdo.option.ConnectionDriverName</name><value>org.postgresql.Driver</value></property>
+    <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
+    <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
+</configuration>
+
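Note: the JDBC settings in this hive-site.xml still reference the PostgreSQL metastore variant, while docker-compose-write.yml and write-hadoop.env override them to use the MySQL metastore-db container. A quick check that HiveServer2 and the metastore are reachable with this client configuration, assuming beeline from the bundled Hive is on PATH:

    $ beeline -u jdbc:hive2://write-hive-server:10000 -n root -e 'show databases;'
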
diff --git a/docker/docker-compose/write/docker-compose-kafka.yml b/docker/docker-compose/write/docker-compose-kafka.yml
new file mode 100644
index 0000000..9590c62
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-kafka.yml
@@ -0,0 +1,18 @@
+version: "3.3"
+
+services:
+  write-kafka:
+    image: ${KAFKA_IMAGETAG:-bitnami/kafka:2.0.0}
+    container_name: write-kafkabroker
+    hostname: write-kafkabroker
+    environment:
+      - KAFKA_ZOOKEEPER_CONNECT=write-zookeeper:2181
+      - ALLOW_PLAINTEXT_LISTENER=yes
+    networks:
+      - write_kylin
+    ports:
+      - 9092:9092
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
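
Note: the Kafka overlay assumes the write_kylin network and write-zookeeper already exist. A sketch of starting the broker and creating a test topic with the CLI shipped in the bitnami image (test_topic is just a placeholder name):

    $ docker-compose -f docker-compose-kafka.yml up -d
    $ docker exec -it write-kafkabroker /opt/bitnami/kafka/bin/kafka-topics.sh \
          --zookeeper write-zookeeper:2181 --create \
          --topic test_topic --partitions 1 --replication-factor 1
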
diff --git a/docker/docker-compose/write/docker-compose-write.yml b/docker/docker-compose/write/docker-compose-write.yml
new file mode 100644
index 0000000..aefe726
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-write.yml
@@ -0,0 +1,215 @@
+version: "3.3"
+
+services:
+  write-namenode:
+    image: ${HADOOP_NAMENODE_IMAGETAG:-bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8}
+    container_name: write-namenode
+    hostname: write-namenode
+    volumes:
+      - ./data/write_hadoop_namenode:/hadoop/dfs/name
+    environment:
+      - CLUSTER_NAME=test-write
+    env_file:
+      - write-hadoop.env
+    expose:
+      - 8020
+    ports:
+      - 50070:50070
+
+  write-datanode1:
+    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
+    container_name: write-datanode1
+    hostname: write-datanode1
+    volumes:
+      - ./data/write_hadoop_datanode1:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070"
+    env_file:
+      - write-hadoop.env
+    links:
+      - write-namenode
+
+  write-datanode2:
+    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
+    container_name: write-datanode2
+    hostname: write-datanode2
+    volumes:
+      - ./data/write_hadoop_datanode2:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070"
+    env_file:
+      - write-hadoop.env
+
+  write-datanode3:
+    image: ${HADOOP_DATANODE_IMAGETAG:-bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8}
+    container_name: write-datanode3
+    hostname: write-datanode3
+    volumes:
+      - ./data/write_hadoop_datanode3:/hadoop/dfs/data
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070"
+    env_file:
+      - write-hadoop.env
+
+  write-resourcemanager:
+    image: ${HADOOP_RESOURCEMANAGER_IMAGETAG:-bde2020/hadoop-resourcemanager:2.0.0-hadoop2.7.4-java8}
+    container_name: write-resourcemanager
+    hostname: write-resourcemanager
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075"
+    env_file:
+      - write-hadoop.env
+    ports:
+      - 8088:8088
+
+  write-nodemanager1:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-bde2020/hadoop-nodemanager:2.0.0-hadoop2.7.4-java8}
+    container_name: write-nodemanager1
+    hostname: write-nodemanager1
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
+    env_file:
+      - write-hadoop.env
+
+  write-nodemanager2:
+    image: ${HADOOP_NODEMANAGER_IMAGETAG:-bde2020/hadoop-nodemanager:2.0.0-hadoop2.7.4-java8}
+    container_name: write-nodemanager2
+    hostname: write-nodemanager2
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
+    env_file:
+      - write-hadoop.env
+
+  write-historyserver:
+    image: ${HADOOP_HISTORYSERVER_IMAGETAG:-bde2020/hadoop-historyserver:2.0.0-hadoop2.7.4-java8}
+    container_name: write-historyserver
+    hostname: write-historyserver
+    volumes:
+      - ./data/write_hadoop_historyserver:/hadoop/yarn/timeline
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-resourcemanager:8088"
+    env_file:
+      - write-hadoop.env
+    ports:
+      - 8188:8188
+
+  write-hive-server:
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    container_name: write-hive-server
+    hostname: write-hive-server
+    env_file:
+      - write-hadoop.env
+    environment:
+#      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://write-hive-metastore/metastore"
+      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:mysql://metastore-db/metastore"
+      SERVICE_PRECONDITION: "write-hive-metastore:9083"
+    ports:
+      - 10000:10000
+
+  write-hive-metastore:
+#    image: ${HIVE_IMAGETAG:-bde2020/hive:2.3.2-postgresql-metastore}
+    image: ${HIVE_IMAGETAG:-apachekylin/kylin-hive:hive_1.2.2_hadoop_2.8.5}
+    container_name: write-hive-metastore
+    hostname: write-hive-metastore
+    env_file:
+      - write-hadoop.env
+    command: /opt/hive/bin/hive --service metastore
+    expose:
+      - 9083
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 metastore-db:3306"
+#       SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-hive-metastore-postgresql:5432"
+
+#  write-hive-metastore-postgresql:
+#    image: bde2020/hive-metastore-postgresql:2.3.0
+#    container_name: write-hive-metastore-postgresql
+#    hostname: write-hive-metastore-postgresql
+
+  metastore-db:
+    image: mysql:5.6.49
+    container_name: metastore-db
+    hostname: metastore-db
+    volumes:
+      - ./data/mysql:/var/lib/mysql
+    environment:
+      - MYSQL_ROOT_PASSWORD=kylin
+      - MYSQL_DATABASE=metastore
+      - MYSQL_USER=kylin
+      - MYSQL_PASSWORD=kylin
+    ports:
+      - 3306:3306
+
+  write-zookeeper:
+    image: ${ZOOKEEPER_IMAGETAG:-zookeeper:3.4.10}
+    container_name: write-zookeeper
+    hostname: write-zookeeper
+    environment:
+      ZOO_MY_ID: 1
+      ZOO_SERVERS: server.1=0.0.0.0:2888:3888
+    ports:
+      - 2181:2181
+
+  write-kafka:
+    image: ${KAFKA_IMAGETAG:-bitnami/kafka:2.0.0}
+    container_name: write-kafkabroker
+    hostname: write-kafkabroker
+    environment:
+      - KAFKA_ZOOKEEPER_CONNECT=write-zookeeper:2181
+      - ALLOW_PLAINTEXT_LISTENER=yes
+    ports:
+      - 9092:9092
+
+  kerberos-kdc:
+    image: ${KERBEROS_IMAGE}
+    container_name: kerberos-kdc
+    hostname: kerberos-kdc
+
+  write-hbase-master:
+    image: ${HBASE_MASTER_IMAGETAG:-bde2020/hbase-master:1.0.0-hbase1.2.6}
+    container_name: write-hbase-master
+    hostname: write-hbase-master
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181"
+    ports:
+      - 16010:16010
+
+  write-hbase-regionserver1:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-bde2020/hbase-regionserver:1.0.0-hbase1.2.6}
+    container_name: write-hbase-regionserver1
+    hostname: write-hbase-regionserver1
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver1
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181 write-hbase-master:16010"
+
+  write-hbase-regionserver2:
+    image: ${HBASE_REGIONSERVER_IMAGETAG:-bde2020/hbase-regionserver:1.0.0-hbase1.2.6}
+    container_name: write-hbase-regionserver2
+    hostname: write-hbase-regionserver2
+    env_file:
+      - write-hbase-distributed-local.env
+    environment:
+      HBASE_CONF_hbase_regionserver_hostname: write-hbase-regionserver2
+      SERVICE_PRECONDITION: "write-namenode:50070 write-datanode1:50075 write-datanode2:50075 write-datanode3:50075 write-zookeeper:2181 write-hbase-master:16010"
+
+  kylin-all:
+    image: ${CLIENT_IMAGETAG}
+    container_name: kylin-all
+    hostname: kylin-all
+    volumes:
+      - ./conf/hadoop:/etc/hadoop/conf
+      - ./conf/hbase:/etc/hbase/conf
+      - ./conf/hive:/etc/hive/conf
+      - ./kylin:/opt/kylin/
+    env_file:
+      - client.env
+    environment:
+      HADOOP_CONF_DIR: /etc/hadoop/conf
+      HIVE_CONF_DIR: /etc/hive/conf
+      HBASE_CONF_DIR: /etc/hbase/conf
+      KYLIN_HOME: /opt/kylin/kylin
+    ports:
+      - 7070:7070
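
Note: a sketch of bringing the write cluster up, assuming CLIENT_IMAGETAG and KERBEROS_IMAGE are exported (or placed in an .env file) and the external write_kylin network used by the zookeeper/kafka overlays has been created first:

    $ docker network create write_kylin
    $ export CLIENT_IMAGETAG=<client image built from dockerfile/cluster/client>
    $ export KERBEROS_IMAGE=<kerberos kdc image>
    $ docker-compose -f docker-compose-write.yml up -d
    $ docker-compose -f docker-compose-write.yml ps
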
diff --git a/docker/docker-compose/write/docker-compose-zookeeper.yml b/docker/docker-compose/write/docker-compose-zookeeper.yml
new file mode 100644
index 0000000..cece11b
--- /dev/null
+++ b/docker/docker-compose/write/docker-compose-zookeeper.yml
@@ -0,0 +1,18 @@
+version: "3.3"
+
+services:
+  write-zookeeper:
+    image: ${ZOOKEEPER_IMAGETAG:-zookeeper:3.4.10}
+    container_name: write-zookeeper
+    hostname: write-zookeeper
+    environment:
+      ZOO_MY_ID: 1
+      ZOO_SERVERS: server.1=0.0.0.0:2888:3888
+    networks:
+      - write_kylin
+    ports:
+      - 2181:2181
+
+networks:
+  write_kylin:
+    external: true
\ No newline at end of file
diff --git a/docker/docker-compose/write/write-hadoop.env b/docker/docker-compose/write/write-hadoop.env
new file mode 100644
index 0000000..8ec98c9
--- /dev/null
+++ b/docker/docker-compose/write/write-hadoop.env
@@ -0,0 +1,47 @@
+CORE_CONF_fs_defaultFS=hdfs://write-namenode:8020
+CORE_CONF_hadoop_http_staticuser_user=root
+CORE_CONF_hadoop_proxyuser_hue_hosts=*
+CORE_CONF_hadoop_proxyuser_hue_groups=*
+CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
+
+HDFS_CONF_dfs_webhdfs_enabled=true
+HDFS_CONF_dfs_permissions_enabled=false
+HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
+
+YARN_CONF_yarn_log___aggregation___enable=true
+YARN_CONF_yarn_log_server_url=http://write-historyserver:8188/applicationhistory/logs/
+YARN_CONF_yarn_resourcemanager_recovery_enabled=true
+YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
+YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
+YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
+YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
+YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
+YARN_CONF_yarn_resourcemanager_hostname=write-resourcemanager
+YARN_CONF_yarn_resourcemanager_address=write-resourcemanager:8032
+YARN_CONF_yarn_resourcemanager_scheduler_address=write-resourcemanager:8030
+YARN_CONF_yarn_resourcemanager_resource___tracker_address=write-resourcemanager:8031
+YARN_CONF_yarn_timeline___service_enabled=true
+YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
+YARN_CONF_yarn_timeline___service_hostname=write-historyserver
+YARN_CONF_mapreduce_map_output_compress=true
+YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
+YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
+YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
+YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
+YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
+YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+
+MAPRED_CONF_mapreduce_framework_name=yarn
+MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
+MAPRED_CONF_mapreduce_map_memory_mb=4096
+MAPRED_CONF_mapreduce_reduce_memory_mb=8192
+MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
+MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
+
+HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:mysql://metastore-db/metastore
+HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=com.mysql.jdbc.Driver
+HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=kylin
+HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=kylin
+HIVE_SITE_CONF_datanucleus_autoCreateSchema=true
+HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083
\ No newline at end of file
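
Note: these variables are translated into *-site.xml entries by the entrypoint script added under docker/dockerfile/cluster/base below: '___' maps to '-', '__' maps to '_', and '_' maps to '.' in the property name. For example:

    YARN_CONF_yarn_log___aggregation___enable=true
        -> yarn.log-aggregation-enable=true
    YARN_CONF_yarn_nodemanager_resource_memory___mb=16384
        -> yarn.nodemanager.resource.memory-mb=16384
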
diff --git a/docker/docker-compose/write/write-hbase-distributed-local.env b/docker/docker-compose/write/write-hbase-distributed-local.env
new file mode 100644
index 0000000..c866cef
--- /dev/null
+++ b/docker/docker-compose/write/write-hbase-distributed-local.env
@@ -0,0 +1,12 @@
+HBASE_CONF_hbase_rootdir=hdfs://write-namenode:8020/hbase
+HBASE_CONF_hbase_cluster_distributed=true
+HBASE_CONF_hbase_zookeeper_quorum=write-zookeeper
+
+HBASE_CONF_hbase_master=write-hbase-master:16000
+HBASE_CONF_hbase_master_hostname=write-hbase-master
+HBASE_CONF_hbase_master_port=16000
+HBASE_CONF_hbase_master_info_port=16010
+HBASE_CONF_hbase_regionserver_port=16020
+HBASE_CONF_hbase_regionserver_info_port=16030
+
+HBASE_MANAGES_ZK=false
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/base/Dockerfile b/docker/dockerfile/cluster/base/Dockerfile
new file mode 100644
index 0000000..ccc05b3
--- /dev/null
+++ b/docker/dockerfile/cluster/base/Dockerfile
@@ -0,0 +1,78 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+FROM centos:7.3.1611
+MAINTAINER kylin
+
+USER root
+
+ARG JAVA_VERSION=jdk1.8.0_141
+ARG HADOOP_VERSION=2.8.5
+ARG INSTALL_FROM=local
+ARG HADOOP_URL=https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
+
+ENV JAVA_HOME /opt/${JAVA_VERSION}
+ENV HADOOP_VERSION ${HADOOP_VERSION}
+ENV INSTALL_FROM ${INSTALL_FROM}
+ENV HADOOP_URL ${HADOOP_URL}
+
+# install tools
+RUN yum -y install lsof wget tar git unzip wget curl net-tools procps perl sed nc which
+
+# setup jdk
+RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz" -P /opt \
+    && tar -zxvf /opt/jdk-8u141-linux-x64.tar.gz -C /opt/ \
+    && rm -f /opt/jdk-8u141-linux-x64.tar.gz
+
+# use buildkit
+#IF $INSTALL_FROM=="net"
+#RUN set -x \
+#    && echo "Fetch URL2 is : ${HADOOP_URL}" \
+#    && curl -fSL "${HADOOP_URL}" -o /tmp/hadoop.tar.gz \
+#    && curl -fSL "${HADOOP_URL}.asc" -o /tmp/hadoop.tar.gz.asc \
+#ELSE IF $INSTALL_FROM=="local"
+#COPY ${PACKAGE_PATH}hadoop-${HADOOP_VERSION}.tar.gz /tmp/hadoop.tar.gz
+#COPY ${PACKAGE_PATH}hadoop-${HADOOP_VERSION}.tar.gz.asc /tmp/hadoop.tar.gz.asc
+#DONE
+
+RUN set -x \
+    && echo "Fetch URL2 is : ${HADOOP_URL}" \
+    && curl -fSL "${HADOOP_URL}" -o /tmp/hadoop.tar.gz \
+    && curl -fSL "${HADOOP_URL}.asc" -o /tmp/hadoop.tar.gz.asc
+
+RUN set -x \
+    && tar -xvf /tmp/hadoop.tar.gz -C /opt/ \
+    && rm /tmp/hadoop.tar.gz* \
+    && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
+    && cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml \
+    && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs \
+    && mkdir /hadoop-data
+
+ENV HADOOP_PREFIX=/opt/hadoop-$HADOOP_VERSION
+ENV HADOOP_CONF_DIR=/etc/hadoop
+ENV MULTIHOMED_NETWORK=1
+ENV HADOOP_HOME=${HADOOP_PREFIX}
+ENV HADOOP_INSTALL=${HADOOP_HOME}
+
+ENV USER=root
+ENV PATH $JAVA_HOME/bin:/usr/bin:/bin:$HADOOP_PREFIX/bin/:$PATH
+
+ADD entrypoint.sh /opt/entrypoint/hadoop/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/hadoop/entrypoint.sh
+
+ENTRYPOINT ["/opt/entrypoint/hadoop/entrypoint.sh"]
+
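Note: a build sketch for this base image; the tag follows the FROM line used by the datanode image below (apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}):

    $ cd docker/dockerfile/cluster/base
    $ docker build -t apachekylin/kylin-hadoop-base:hadoop_2.8.5 \
          --build-arg HADOOP_VERSION=2.8.5 .
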
diff --git a/docker/dockerfile/cluster/base/entrypoint.sh b/docker/dockerfile/cluster/base/entrypoint.sh
new file mode 100644
index 0000000..3479844
--- /dev/null
+++ b/docker/dockerfile/cluster/base/entrypoint.sh
@@ -0,0 +1,140 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+
+#######################################################################################
+##            COPIED FROM                                                            ##
+##  https://github.com/big-data-europe/docker-hadoop/blob/master/base/entrypoint.sh  ##
+#                                                                                    ##
+#######################################################################################
+
+# Set some sensible defaults
+export CORE_CONF_fs_defaultFS=${CORE_CONF_fs_defaultFS:-hdfs://`hostname -f`:8020}
+
+function addProperty() {
+  local path=$1
+  local name=$2
+  local value=$3
+
+  local entry="<property><name>$name</name><value>${value}</value></property>"
+  local escapedEntry=$(echo $entry | sed 's/\//\\\//g')
+  sed -i "/<\/configuration>/ s/.*/${escapedEntry}\n&/" $path
+}
+
+function configure() {
+    local path=$1
+    local module=$2
+    local envPrefix=$3
+
+    local var
+    local value
+
+    echo "Configuring $module"
+    for c in `printenv | perl -sne 'print "$1 " if m/^${envPrefix}_(.+?)=.*/' -- -envPrefix=$envPrefix`; do
+        name=`echo ${c} | perl -pe 's/___/-/g; s/__/@/g; s/_/./g; s/@/_/g;'`
+        var="${envPrefix}_${c}"
+        value=${!var}
+        echo " - Setting $name=$value"
+        addProperty /etc/hadoop/$module-site.xml $name "$value"
+    done
+}
+
+configure /etc/hadoop/core-site.xml core CORE_CONF
+configure /etc/hadoop/hdfs-site.xml hdfs HDFS_CONF
+configure /etc/hadoop/yarn-site.xml yarn YARN_CONF
+configure /etc/hadoop/httpfs-site.xml httpfs HTTPFS_CONF
+configure /etc/hadoop/kms-site.xml kms KMS_CONF
+
+if [ "$MULTIHOMED_NETWORK" = "1" ]; then
+    echo "Configuring for multihomed network"
+
+    # HDFS
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.client.use.datanode.hostname true
+    addProperty /etc/hadoop/hdfs-site.xml dfs.datanode.use.datanode.hostname true
+
+    # YARN
+    addProperty /etc/hadoop/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
+
+    # MAPRED
+    addProperty /etc/hadoop/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
+fi
+
+if [ -n "$GANGLIA_HOST" ]; then
+    mv /etc/hadoop/hadoop-metrics.properties /etc/hadoop/hadoop-metrics.properties.orig
+    mv /etc/hadoop/hadoop-metrics2.properties /etc/hadoop/hadoop-metrics2.properties.orig
+
+    for module in mapred jvm rpc ugi; do
+        echo "$module.class=org.apache.hadoop.metrics.ganglia.GangliaContext31"
+        echo "$module.period=10"
+        echo "$module.servers=$GANGLIA_HOST:8649"
+    done > /etc/hadoop/hadoop-metrics.properties
+
+    for module in namenode datanode resourcemanager nodemanager mrappmaster jobhistoryserver; do
+        echo "$module.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31"
+        echo "$module.sink.ganglia.period=10"
+        echo "$module.sink.ganglia.supportsparse=true"
+        echo "$module.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both"
+        echo "$module.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40"
+        echo "$module.sink.ganglia.servers=$GANGLIA_HOST:8649"
+    done > /etc/hadoop/hadoop-metrics2.properties
+fi
+
+function wait_for_it()
+{
+    local serviceport=$1
+    local service=${serviceport%%:*}
+    local port=${serviceport#*:}
+    local retry_seconds=5
+    local max_try=100
+    let i=1
+
+    nc -z $service $port
+    result=$?
+
+    until [ $result -eq 0 ]; do
+      echo "[$i/$max_try] check for ${service}:${port}..."
+      echo "[$i/$max_try] ${service}:${port} is not available yet"
+      if (( $i == $max_try )); then
+        echo "[$i/$max_try] ${service}:${port} is still not available; giving up after ${max_try} tries. :/"
+        exit 1
+      fi
+
+      echo "[$i/$max_try] try in ${retry_seconds}s once again ..."
+      let "i++"
+      sleep $retry_seconds
+
+      nc -z $service $port
+      result=$?
+    done
+    echo "[$i/$max_try] $service:${port} is available."
+}
+
+for i in ${SERVICE_PRECONDITION[@]}
+do
+    wait_for_it ${i}
+done
+
+exec "$@"
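
Note: SERVICE_PRECONDITION is consumed by wait_for_it as a space-separated host:port list, and any *_CONF_* variables are folded into the matching *-site.xml before the wrapped command runs. A hypothetical one-off run against an existing cluster network:

    $ docker run --rm --network write_kylin \
          -e CORE_CONF_fs_defaultFS=hdfs://write-namenode:8020 \
          -e SERVICE_PRECONDITION="write-namenode:50070" \
          apachekylin/kylin-hadoop-base:hadoop_2.8.5 hdfs dfsadmin -report
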
diff --git a/docker/dockerfile/cluster/client/Dockerfile b/docker/dockerfile/cluster/client/Dockerfile
new file mode 100644
index 0000000..38cbbac
--- /dev/null
+++ b/docker/dockerfile/cluster/client/Dockerfile
@@ -0,0 +1,157 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ARG JAVA_VERSION=jdk1.8.0_141
+ARG HADOOP_VERSION=2.8.5
+ARG HIVE_VERSION=1.2.1
+ARG HBASE_VERSION=1.1.2
+ARG ZOOKEEPER_VERSION=3.4.10
+ARG KAFKA_VERSION=2.0.0
+ARG SPARK_VERSION=2.3.1
+ARG SPARK_HADOOP_VERSION=2.6
+
+FROM apachekylin/kylin-hive:hive_${HIVE_VERSION}_hadoop_${HADOOP_VERSION} AS hive
+ENV JAVA_VERSION ${JAVA_VERSION}
+ENV HADOOP_VERSION ${HADOOP_VERSION}
+ENV HIVE_VERSION ${HIVE_VERSION}
+
+ARG HBASE_VERSION=1.1.2
+FROM apachekylin/kylin-hbase-master:hbase_${HBASE_VERSION} AS hbase
+ENV HBASE_VERSION ${HBASE_VERSION}
+
+
+ARG ZOOKEEPER_VERSION=3.4.10
+FROM zookeeper:${ZOOKEEPER_VERSION} AS zk
+ENV ZOOKEEPER_VERSION ${ZOOKEEPER_VERSION}
+
+ARG KAFKA_VERSION=2.0.0
+FROM bitnami/kafka:${KAFKA_VERSION} AS kafka
+ENV KAFKA_VERSION ${KAFKA_VERSION}
+
+FROM centos:7.3.1611
+MAINTAINER kylin
+USER root
+
+ARG JAVA_VERSION=jdk1.8.0_141
+ARG HADOOP_VERSION=2.8.5
+ARG HIVE_VERSION=1.2.1
+ARG HBASE_VERSION=1.1.2
+ARG ZOOKEEPER_VERSION=3.4.10
+ARG KAFKA_VERSION=2.0.0
+ARG SPARK_VERSION=2.3.1
+ARG SPARK_HADOOP_VERSION=2.6
+
+ENV JAVA_VERSION ${JAVA_VERSION}
+ENV HADOOP_VERSION ${HADOOP_VERSION}
+ENV HIVE_VERSION ${HIVE_VERSION}
+ENV HBASE_VERSION ${HBASE_VERSION}
+ENV ZOOKEEPER_VERSION ${ZOOKEEPER_VERSION}
+ENV KAFKA_VERSION ${KAFKA_VERSION}
+ENV SPARK_VERSION ${SPARK_VERSION}
+ENV SPARK_HADOOP_VERSION ${SPARK_HADOOP_VERSION}
+
+## install tools
+RUN yum -y install lsof wget tar git unzip wget curl net-tools procps perl sed nc which
+# install kerberos
+RUN yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
+
+RUN mkdir /opt/hadoop-$HADOOP_VERSION/
+
+COPY --from=hive /opt/jdk1.8.0_141/ /opt/jdk1.8.0_141/
+COPY --from=hive /opt/hadoop-$HADOOP_VERSION/ /opt/hadoop-$HADOOP_VERSION/
+COPY --from=hive /opt/hive/ /opt/hive/
+COPY --from=hive /opt/entrypoint/hadoop/entrypoint.sh /opt/entrypoint/hadoop/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/hadoop/entrypoint.sh
+COPY --from=hive /opt/entrypoint/hive/entrypoint.sh /opt/entrypoint/hive/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/hive/entrypoint.sh
+
+
+COPY --from=hbase /opt/hbase-$HBASE_VERSION/ /opt/hbase-$HBASE_VERSION/
+COPY --from=hbase /opt/entrypoint/hbase/entrypoint.sh /opt/entrypoint/hbase/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/hbase/entrypoint.sh
+
+
+COPY --from=zk /zookeeper-${ZOOKEEPER_VERSION}/ /opt/zookeeper-${ZOOKEEPER_VERSION}/
+COPY --from=zk /docker-entrypoint.sh /opt/entrypoint/zookeeper/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/zookeeper/entrypoint.sh
+
+COPY --from=kafka /opt/bitnami/kafka /opt/kafka
+COPY --from=kafka /app-entrypoint.sh /opt/entrypoint/kafka/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/kafka/entrypoint.sh
+
+
+RUN set -x \
+    && ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop \
+    && cp /etc/hadoop/mapred-site.xml.template /etc/hadoop/mapred-site.xml \
+    && mkdir -p /opt/hadoop-$HADOOP_VERSION/logs
+
+RUN ln -s /opt/hbase-$HBASE_VERSION/conf /etc/hbase
+
+
+ENV JAVA_HOME=/opt/${JAVA_VERSION}
+
+ENV HADOOP_PREFIX=/opt/hadoop-$HADOOP_VERSION
+ENV HADOOP_CONF_DIR=/etc/hadoop
+ENV HADOOP_HOME=${HADOOP_PREFIX}
+ENV HADOOP_INSTALL=${HADOOP_HOME}
+
+ENV HIVE_HOME=/opt/hive
+
+ENV HBASE_PREFIX=/opt/hbase-$HBASE_VERSION
+ENV HBASE_CONF_DIR=/etc/hbase
+ENV HBASE_HOME=${HBASE_PREFIX}
+
+
+ENV ZK_HOME=/opt/zookeeper-${ZOOKEEPER_VERSION}
+ENV ZOOCFGDIR=$ZK_HOME/conf
+ENV ZOO_USER=zookeeper
+ENV ZOO_CONF_DIR=$ZK_HOME/conf ZOO_PORT=2181 ZOO_TICK_TIME=2000 ZOO_INIT_LIMIT=5 ZOO_SYNC_LIMIT=2 ZOO_MAX_CLIENT_CNXNS=60
+
+ENV SPARK_URL=https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop${SPARK_HADOOP_VERSION}.tgz
+ENV SPARK_HOME=/opt/spark-$SPARK_VERSION-bin-hadoop${SPARK_HADOOP_VERSION}
+ENV SPARK_CONF_DIR=/opt/spark-$SPARK_VERSION-bin-hadoop${SPARK_HADOOP_VERSION}/conf
+
+RUN curl -fSL "${SPARK_URL}" -o /tmp/spark.tar.gz \
+    && tar -zxvf /tmp/spark.tar.gz -C /opt/ \
+    && rm -f /tmp/spark.tar.gz \
+    && cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf \
+    && cp $SPARK_HOME/yarn/*.jar $HADOOP_HOME/share/hadoop/yarn/lib
+
+#COPY spark-$SPARK_VERSION-bin-hadoop${SPARK_HADOOP_VERSION}.tgz /tmp/spark.tar.gz
+#RUN tar -zxvf /tmp/spark.tar.gz -C /opt/ \
+#    && rm -f /tmp/spark.tar.gz \
+#    && cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf \
+#    && cp $SPARK_HOME/yarn/*.jar $HADOOP_HOME/share/hadoop/yarn/lib
+
+#RUN cp $HIVE_HOME/lib/mysql-connector-java.jar $SPARK_HOME/jars
+RUN cp $HIVE_HOME/lib/postgresql-jdbc.jar  $SPARK_HOME/jars
+RUN cp $HBASE_HOME/lib/hbase-protocol-${HBASE_VERSION}.jar $SPARK_HOME/jars
+RUN echo spark.sql.catalogImplementation=hive > $SPARK_HOME/conf/spark-defaults.conf
+
+
+ENV PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$ZK_HOME/bin
+
+# Set up client-side configuration for all components
+COPY entrypoint.sh /opt/entrypoint/client/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/client/entrypoint.sh
+
+COPY run_cli.sh /run_cli.sh
+RUN chmod a+x  /run_cli.sh
+
+#ENTRYPOINT ["/opt/entrypoint/client/entrypoint.sh"]
+
+CMD ["/run_cli.sh"]
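
Note: a build sketch for the all-in-one client image; the tag is arbitrary here (it is passed to the compose files via CLIENT_IMAGETAG), and the referenced apachekylin/kylin-hive and apachekylin/kylin-hbase-master images must already be available locally:

    $ cd docker/dockerfile/cluster/client
    $ docker build -t apachekylin/kylin-client:test \
          --build-arg HADOOP_VERSION=2.8.5 \
          --build-arg HIVE_VERSION=1.2.1 \
          --build-arg HBASE_VERSION=1.1.2 .
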
diff --git a/docker/conf/hadoop/core-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-read/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/dockerfile/cluster/client/conf/hadoop-read/core-site.xml
index 6fe6404..69fc462 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/dockerfile/cluster/client/conf/hadoop-read/core-site.xml
@@ -17,13 +17,9 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
 </configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hadoop-read/hdfs-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-read/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hadoop-read/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-read/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/dockerfile/cluster/client/conf/hadoop-read/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/dockerfile/cluster/client/conf/hadoop-read/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hadoop-read/yarn-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-read/yarn-site.xml
new file mode 100644
index 0000000..392cf4c
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hadoop-read/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource-tracker.address</name><value>read-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>read-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>read-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://read-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>read-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>read-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-write/core-site.xml
similarity index 63%
copy from docker/conf/hadoop/core-site.xml
copy to docker/dockerfile/cluster/client/conf/hadoop-write/core-site.xml
index 6fe6404..69fc462 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/dockerfile/cluster/client/conf/hadoop-write/core-site.xml
@@ -17,13 +17,9 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
+<property><name>fs.defaultFS</name><value>hdfs://write-namenode:8020</value></property>
+<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
+<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
 </configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hadoop-write/hdfs-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-write/hdfs-site.xml
new file mode 100644
index 0000000..cdf7778
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hadoop-write/hdfs-site.xml
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
+<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>
+<property><name>dfs.permissions.enabled</name><value>false</value></property>
+<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
+<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
+<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
+<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
+</configuration>
diff --git a/docker/conf/hadoop/core-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-write/mapred-site.xml
similarity index 69%
copy from docker/conf/hadoop/core-site.xml
copy to docker/dockerfile/cluster/client/conf/hadoop-write/mapred-site.xml
index 6fe6404..d5cc450 100644
--- a/docker/conf/hadoop/core-site.xml
+++ b/docker/dockerfile/cluster/client/conf/hadoop-write/mapred-site.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
   Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,13 +17,6 @@
 <!-- Put site-specific property overrides in this file. -->
 
 <configuration>
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/data/hadoop</value>
-        <description>Abase for other temporary directories.</description>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://localhost:9000</value>
-    </property>
+
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
 </configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
new file mode 100644
index 0000000..b55dd34
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+<property><name>yarn.resourcemanager.fs.state-store.uri</name><value>/rmstate</value></property>
+<property><name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value></property>
+<property><name>mapreduce.map.output.compress</name><value>true</value></property>
+<property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
+<property><name>mapred.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
+<property><name>yarn.timeline-service.enabled</name><value>true</value></property>
+<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
+<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
+<property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
+<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
+<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
+<property><name>yarn.resourcemanager.resource-tracker.address</name><value>write-resourcemanager:8031</value></property>
+<property><name>yarn.resourcemanager.hostname</name><value>write-resourcemanager</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name><value>4</value></property>
+<property><name>yarn.timeline-service.hostname</name><value>write-historyserver</value></property>
+<property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
+<property><name>yarn.log.server.url</name><value>http://write-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
+<property><name>yarn.resourcemanager.scheduler.address</name><value>write-resourcemanager:8030</value></property>
+<property><name>yarn.resourcemanager.address</name><value>write-resourcemanager:8032</value></property>
+<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>98.5</value></property>
+<property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
+<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
+<property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
+<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
+</configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hbase/hbase-site.xml b/docker/dockerfile/cluster/client/conf/hbase/hbase-site.xml
new file mode 100644
index 0000000..988d91c
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hbase/hbase-site.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+<property><name>hbase.zookeeper.quorum</name><value>read-zookeeper</value></property>
+<property><name>hbase.master</name><value>read-hbase-master:16000</value></property>
+<property><name>hbase.regionserver.port</name><value>16020</value></property>
+<property><name>hbase.regionserver.info.port</name><value>16030</value></property>
+<property><name>DIR</name><value>/etc/hbase</value></property>
+<property><name>hbase.cluster.distributed</name><value>true</value></property>
+<property><name>hbase.rootdir</name><value>hdfs://read-namenode:8020/hbase</value></property>
+<property><name>hbase.master.info.port</name><value>16010</value></property>
+<property><name>hbase.master.hostname</name><value>read-hbase-master</value></property>
+<property><name>hbase.master.port</name><value>16000</value></property>
+</configuration>
diff --git a/docker/dockerfile/cluster/client/conf/hive/hive-site.xml b/docker/dockerfile/cluster/client/conf/hive/hive-site.xml
new file mode 100644
index 0000000..c60fe36
--- /dev/null
+++ b/docker/dockerfile/cluster/client/conf/hive/hive-site.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--><configuration>
+    <property><name>hive.metastore.uris</name><value>thrift://write-hive-metastore:9083</value></property>
+    <property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
+    <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:postgresql://write-hive-metastore-postgresql/metastore</value></property>
+    <property><name>javax.jdo.option.ConnectionDriverName</name><value>org.postgresql.Driver</value></property>
+    <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
+    <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
+</configuration>
+
diff --git a/docker/dockerfile/cluster/client/entrypoint.sh b/docker/dockerfile/cluster/client/entrypoint.sh
new file mode 100644
index 0000000..dddc072
--- /dev/null
+++ b/docker/dockerfile/cluster/client/entrypoint.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+/opt/entrypoint/hadoop/entrypoint.sh
+/opt/entrypoint/hive/entrypoint.sh
+/opt/entrypoint/hbase/entrypoint.sh
+#/opt/entrypoint/zookeeper/entrypoint.sh
+#/opt/entrypoint/kafka/entrypoint.sh
diff --git a/docker/dockerfile/cluster/client/run_cli.sh b/docker/dockerfile/cluster/client/run_cli.sh
new file mode 100644
index 0000000..371c3e1
--- /dev/null
+++ b/docker/dockerfile/cluster/client/run_cli.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+/opt/entrypoint/hadoop/entrypoint.sh
+/opt/entrypoint/hive/entrypoint.sh
+/opt/entrypoint/hbase/entrypoint.sh
+
+while :
+do
+    sleep 1000
+done
\ No newline at end of file
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/datanode/Dockerfile
old mode 100755
new mode 100644
similarity index 70%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/datanode/Dockerfile
index 9c0b925..54bbc10
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/datanode/Dockerfile
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,17 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+ARG HADOOP_DN_PORT=50075
+ENV HADOOP_DN_PORT ${HADOOP_DN_PORT}
+
+ENV HDFS_CONF_dfs_datanode_data_dir=file:///hadoop/dfs/data
+RUN mkdir -p /hadoop/dfs/data
+VOLUME /hadoop/dfs/data
 
+ADD run_dn.sh /run_dn.sh
+RUN chmod a+x /run_dn.sh
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+CMD ["/run_dn.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/datanode/run_dn.sh
old mode 100755
new mode 100644
similarity index 76%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/datanode/run_dn.sh
index 3ed32ce..f3208ef
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/datanode/run_dn.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +17,10 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+datadir=`echo $HDFS_CONF_dfs_datanode_data_dir | perl -pe 's#file://##'`
+if [ ! -d $datadir ]; then
+  echo "Datanode data directory not found: $datadir"
+  exit 2
+fi
+
+$HADOOP_PREFIX/bin/hdfs --config $HADOOP_CONF_DIR datanode
diff --git a/docker/dockerfile/cluster/hbase/Dockerfile b/docker/dockerfile/cluster/hbase/Dockerfile
new file mode 100644
index 0000000..9b92d56
--- /dev/null
+++ b/docker/dockerfile/cluster/hbase/Dockerfile
@@ -0,0 +1,59 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+FROM centos:7.3.1611
+MAINTAINER kylin
+USER root
+
+ARG JAVA_VERSION=jdk1.8.0_141
+ARG HBASE_VERSION=1.1.2
+ARG HBASE_URL=https://archive.apache.org/dist/hbase/$HBASE_VERSION/hbase-$HBASE_VERSION-bin.tar.gz
+
+ENV JAVA_HOME /opt/${JAVA_VERSION}
+ENV HBASE_VERSION ${HBASE_VERSION}
+ENV HBASE_URL ${HBASE_URL}
+
+# install tools
+RUN yum -y install lsof wget tar git unzip curl net-tools procps perl sed nc which
+
+# setup jdk
+RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz" -P /opt \
+    && tar -zxvf /opt/jdk-8u141-linux-x64.tar.gz -C /opt/ \
+    && rm -f /opt/jdk-8u141-linux-x64.tar.gz
+
+RUN set -x \
+    && curl -fSL "$HBASE_URL" -o /tmp/hbase.tar.gz \
+    && curl -fSL "$HBASE_URL.asc" -o /tmp/hbase.tar.gz.asc \
+    && tar -xvf /tmp/hbase.tar.gz -C /opt/ \
+    && rm /tmp/hbase.tar.gz*
+
+RUN ln -s /opt/hbase-$HBASE_VERSION/conf /etc/hbase
+RUN mkdir /opt/hbase-$HBASE_VERSION/logs
+
+RUN mkdir /hadoop-data
+
+ENV HBASE_PREFIX=/opt/hbase-$HBASE_VERSION
+ENV HBASE_HOME=${HBASE_PREFIX}
+ENV HBASE_CONF_DIR=/etc/hbase
+
+ENV USER=root
+ENV PATH $JAVA_HOME/bin:$HBASE_PREFIX/bin/:$PATH
+
+ADD entrypoint.sh /opt/entrypoint/hbase/entrypoint.sh
+RUN chmod a+x /opt/entrypoint/hbase/entrypoint.sh
+
+ENTRYPOINT ["/opt/entrypoint/hbase/entrypoint.sh"]
diff --git a/docker/dockerfile/cluster/hbase/entrypoint.sh b/docker/dockerfile/cluster/hbase/entrypoint.sh
new file mode 100644
index 0000000..5aea8d9
--- /dev/null
+++ b/docker/dockerfile/cluster/hbase/entrypoint.sh
@@ -0,0 +1,83 @@
+#!/bin/bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+function addProperty() {
+  local path=$1
+  local name=$2
+  local value=$3
+
+  local entry="<property><name>$name</name><value>${value}</value></property>"
+  local escapedEntry=$(echo $entry | sed 's/\//\\\//g')
+  sed -i "/<\/configuration>/ s/.*/${escapedEntry}\n&/" $path
+}
+
+function configure() {
+    local path=$1
+    local module=$2
+    local envPrefix=$3
+
+    local var
+    local value
+
+    echo "Configuring $module"
+    for c in `printenv | perl -sne 'print "$1 " if m/^${envPrefix}_(.+?)=.*/' -- -envPrefix=$envPrefix`; do
+        name=`echo ${c} | perl -pe 's/___/-/g; s/__/_/g; s/_/./g'`
+        var="${envPrefix}_${c}"
+        value=${!var}
+        echo " - Setting $name=$value"
+        addProperty $path $name "$value"
+    done
+}
+
+configure /etc/hbase/hbase-site.xml hbase HBASE_CONF
+
+function wait_for_it()
+{
+    local serviceport=$1
+    local service=${serviceport%%:*}
+    local port=${serviceport#*:}
+    local retry_seconds=5
+    local max_try=100
+    let i=1
+
+    nc -z $service $port
+    result=$?
+
+    until [ $result -eq 0 ]; do
+      echo "[$i/$max_try] check for ${service}:${port}..."
+      echo "[$i/$max_try] ${service}:${port} is not available yet"
+      if (( $i == $max_try )); then
+        echo "[$i/$max_try] ${service}:${port} is still not available; giving up after ${max_try} tries. :/"
+        exit 1
+      fi
+
+      echo "[$i/$max_try] try in ${retry_seconds}s once again ..."
+      let "i++"
+      sleep $retry_seconds
+
+      nc -z $service $port
+      result=$?
+    done
+    echo "[$i/$max_try] $service:${port} is available."
+}
+
+for i in "${SERVICE_PRECONDITION[@]}"
+do
+    wait_for_it ${i}
+done
+
+exec $@
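
For reference, the configure/addProperty pair above maps environment variables (with the HBASE_CONF_ / YARN_CONF_ style prefix stripped) onto XML property names purely by name translation: triple underscore becomes "-", double underscore becomes "_", single underscore becomes ".". A minimal, standalone sketch of the same mapping (example variable names only):

    # Same perl rules as configure() above; prints the derived property names.
    name_of() { echo "$1" | perl -pe 's/___/-/g; s/__/_/g; s/_/./g'; }
    name_of hbase_zookeeper_quorum                                    # hbase.zookeeper.quorum
    name_of yarn_timeline___service_leveldb___timeline___store_path  # yarn.timeline-service.leveldb-timeline-store.path
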
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/historyserver/Dockerfile
old mode 100755
new mode 100644
similarity index 60%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/historyserver/Dockerfile
index 9c0b925..2adda43
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/historyserver/Dockerfile
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,20 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+ARG HADOOP_HISTORY_PORT=8188
+ENV HADOOP_HISTORY_PORT ${HADOOP_HISTORY_PORT}
+EXPOSE ${HADOOP_HISTORY_PORT}
+
+HEALTHCHECK CMD curl -f http://localhost:${HADOOP_HISTORY_PORT}/ || exit 1
+
+ENV YARN_CONF_yarn_timeline___service_leveldb___timeline___store_path=/hadoop/yarn/timeline
+RUN mkdir -p /hadoop/yarn/timeline
+VOLUME /hadoop/yarn/timeline
 
+ADD run_history.sh /run_history.sh
+RUN chmod a+x /run_history.sh
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+CMD ["/run_history.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/historyserver/run_history.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/historyserver/run_history.sh
index 3ed32ce..6d7ae4e
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/historyserver/run_history.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +17,4 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+$HADOOP_PREFIX/bin/yarn --config $HADOOP_CONF_DIR historyserver
diff --git a/docker/dockerfile/cluster/hive/Dockerfile b/docker/dockerfile/cluster/hive/Dockerfile
new file mode 100644
index 0000000..46f81f4
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/Dockerfile
@@ -0,0 +1,73 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+ENV HIVE_HOME /opt/hive
+ENV HADOOP_HOME /opt/hadoop-$HADOOP_VERSION
+
+WORKDIR /opt
+
+ARG HIVE_VERSION=1.2.1
+ARG HIVE_URL=https://archive.apache.org/dist/hive/hive-$HIVE_VERSION/apache-hive-$HIVE_VERSION-bin.tar.gz
+ENV HIVE_VERSION ${HIVE_VERSION}
+ENV HIVE_URL ${HIVE_URL}
+
+ARG MYSQL_CONN_VERSION=8.0.20
+ENV MYSQL_CONN_VERSION=${MYSQL_CONN_VERSION}
+ARG MYSQL_CONN_URL=https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-${MYSQL_CONN_VERSION}.tar.gz
+ENV MYSQL_CONN_URL=${MYSQL_CONN_URL}
+
+# install tools
+RUN yum -y install lsof wget tar git unzip curl net-tools procps perl sed nc which
+
+# Install Hive plus the MySQL and PostgreSQL JDBC drivers
+RUN echo "Hive URL is :${HIVE_URL}" \
+    && wget ${HIVE_URL} -O hive.tar.gz \
+    && tar -xzvf hive.tar.gz \
+    && mv *hive*-bin hive \
+    && wget $MYSQL_CONN_URL -O /tmp/mysql-connector-java.tar.gz \
+    && tar -xzvf /tmp/mysql-connector-java.tar.gz -C /tmp/ \
+    && cp /tmp/mysql-connector-java-${MYSQL_CONN_VERSION}/mysql-connector-java-${MYSQL_CONN_VERSION}.jar $HIVE_HOME/lib/mysql-connector-java.jar \
+    && rm /tmp/mysql-connector-java.tar.gz \
+    && rm -rf /tmp/mysql-connector-java-${MYSQL_CONN_VERSION} \
+    && wget https://jdbc.postgresql.org/download/postgresql-9.4.1212.jar -O $HIVE_HOME/lib/postgresql-jdbc.jar \
+    && rm hive.tar.gz
+
+#Custom configuration goes here
+ADD conf/hive-site.xml $HIVE_HOME/conf
+ADD conf/beeline-log4j2.properties $HIVE_HOME/conf
+ADD conf/hive-env.sh $HIVE_HOME/conf
+ADD conf/hive-exec-log4j2.properties $HIVE_HOME/conf
+ADD conf/hive-log4j2.properties $HIVE_HOME/conf
+ADD conf/ivysettings.xml $HIVE_HOME/conf
+ADD conf/llap-daemon-log4j2.properties $HIVE_HOME/conf
+
+COPY run_hv.sh /run_hv.sh
+RUN chmod +x /run_hv.sh
+
+COPY entrypoint.sh /opt/entrypoint/hive/entrypoint.sh
+RUN chmod +x /opt/entrypoint/hive/entrypoint.sh
+
+ENV PATH $HIVE_HOME/bin/:$PATH
+
+EXPOSE 10000
+EXPOSE 10002
+
+ENTRYPOINT ["/opt/entrypoint/hive/entrypoint.sh"]
+CMD ["/run_hv.sh"]
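
Since all site-file rewriting happens in the entrypoint, the CMD can be overridden without losing configuration. An illustrative run (the image tag is an assumption; the metastore host matches the client hive-site.xml above):

    # Start an interactive shell instead of HiveServer2; the entrypoint still
    # injects HIVE_SITE_CONF_* values into hive-site.xml before exec'ing bash.
    docker run --rm -it \
      -e HIVE_SITE_CONF_hive_metastore_uris=thrift://write-hive-metastore:9083 \
      apachekylin/kylin-hive:hive_1.2.1_hadoop_2.8.5 /bin/bash
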
diff --git a/docker/dockerfile/cluster/hive/conf/beeline-log4j2.properties b/docker/dockerfile/cluster/hive/conf/beeline-log4j2.properties
new file mode 100644
index 0000000..d1305f8
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/beeline-log4j2.properties
@@ -0,0 +1,46 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+status = INFO
+name = BeelineLog4j2
+packages = org.apache.hadoop.hive.ql.log
+
+# list of properties
+property.hive.log.level = WARN
+property.hive.root.logger = console
+
+# list of all appenders
+appenders = console
+
+# console appender
+appender.console.type = Console
+appender.console.name = console
+appender.console.target = SYSTEM_ERR
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
+
+# list of all loggers
+loggers = HiveConnection
+
+# HiveConnection logs useful info for dynamic service discovery
+logger.HiveConnection.name = org.apache.hive.jdbc.HiveConnection
+logger.HiveConnection.level = INFO
+
+# root logger
+rootLogger.level = ${sys:hive.log.level}
+rootLogger.appenderRefs = root
+rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
diff --git a/docker/dockerfile/cluster/hive/conf/hive-env.sh b/docker/dockerfile/cluster/hive/conf/hive-env.sh
new file mode 100644
index 0000000..f22407c
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/hive-env.sh
@@ -0,0 +1,55 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Hive and Hadoop environment variables here. These variables can be used
+# to control the execution of Hive. It should be used by admins to configure
+# the Hive installation (so that users do not have to set environment variables
+# or set command line parameters to get correct behavior).
+#
+# The hive service being invoked (CLI/HWI etc.) is available via the environment
+# variable SERVICE
+
+
+# Hive Client memory usage can be an issue if a large number of clients
+# are running at the same time. The flags below have been useful in 
+# reducing memory usage:
+#
+# if [ "$SERVICE" = "cli" ]; then
+#   if [ -z "$DEBUG" ]; then
+#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit"
+#   else
+#     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
+#   fi
+# fi
+
+# The heap size of the jvm stared by hive shell script can be controlled via:
+#
+# export HADOOP_HEAPSIZE=1024
+#
+# Larger heap size may be required when running queries over large number of files or partitions. 
+# By default hive shell scripts use a heap size of 256 (MB).  Larger heap size would also be 
+# appropriate for hive server (hwi etc).
+
+
+# Set HADOOP_HOME to point to a specific hadoop install directory
+# HADOOP_HOME=${bin}/../../hadoop
+
+# Hive Configuration Directory can be controlled by:
+# export HIVE_CONF_DIR=
+
+# Folder containing extra libraries required for hive compilation/execution can be controlled by:
+# export HIVE_AUX_JARS_PATH=
diff --git a/docker/dockerfile/cluster/hive/conf/hive-exec-log4j2.properties b/docker/dockerfile/cluster/hive/conf/hive-exec-log4j2.properties
new file mode 100644
index 0000000..a1e50eb
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/hive-exec-log4j2.properties
@@ -0,0 +1,67 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+status = INFO
+name = HiveExecLog4j2
+packages = org.apache.hadoop.hive.ql.log
+
+# list of properties
+property.hive.log.level = INFO
+property.hive.root.logger = FA
+property.hive.query.id = hadoop
+property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
+property.hive.log.file = ${sys:hive.query.id}.log
+
+# list of all appenders
+appenders = console, FA
+
+# console appender
+appender.console.type = Console
+appender.console.name = console
+appender.console.target = SYSTEM_ERR
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
+
+# simple file appender
+appender.FA.type = File
+appender.FA.name = FA
+appender.FA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}
+appender.FA.layout.type = PatternLayout
+appender.FA.layout.pattern = %d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
+
+# list of all loggers
+loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX
+
+logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
+logger.NIOServerCnxn.level = WARN
+
+logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO
+logger.ClientCnxnSocketNIO.level = WARN
+
+logger.DataNucleus.name = DataNucleus
+logger.DataNucleus.level = ERROR
+
+logger.Datastore.name = Datastore
+logger.Datastore.level = ERROR
+
+logger.JPOX.name = JPOX
+logger.JPOX.level = ERROR
+
+# root logger
+rootLogger.level = ${sys:hive.log.level}
+rootLogger.appenderRefs = root
+rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
diff --git a/docker/dockerfile/cluster/hive/conf/hive-log4j2.properties b/docker/dockerfile/cluster/hive/conf/hive-log4j2.properties
new file mode 100644
index 0000000..5e5ce02
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/hive-log4j2.properties
@@ -0,0 +1,74 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+status = INFO
+name = HiveLog4j2
+packages = org.apache.hadoop.hive.ql.log
+
+# list of properties
+property.hive.log.level = INFO
+property.hive.root.logger = DRFA
+property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
+property.hive.log.file = hive.log
+
+# list of all appenders
+appenders = console, DRFA
+
+# console appender
+appender.console.type = Console
+appender.console.name = console
+appender.console.target = SYSTEM_ERR
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
+
+# daily rolling file appender
+appender.DRFA.type = RollingFile
+appender.DRFA.name = DRFA
+appender.DRFA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}
+# Use %pid in the filePattern to append <process-id>@<host-name> to the filename if you want separate log files for different CLI session
+appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}
+appender.DRFA.layout.type = PatternLayout
+appender.DRFA.layout.pattern = %d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
+appender.DRFA.policies.type = Policies
+appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
+appender.DRFA.policies.time.interval = 1
+appender.DRFA.policies.time.modulate = true
+appender.DRFA.strategy.type = DefaultRolloverStrategy
+appender.DRFA.strategy.max = 30
+
+# list of all loggers
+loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX
+
+logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
+logger.NIOServerCnxn.level = WARN
+
+logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO
+logger.ClientCnxnSocketNIO.level = WARN
+
+logger.DataNucleus.name = DataNucleus
+logger.DataNucleus.level = ERROR
+
+logger.Datastore.name = Datastore
+logger.Datastore.level = ERROR
+
+logger.JPOX.name = JPOX
+logger.JPOX.level = ERROR
+
+# root logger
+rootLogger.level = ${sys:hive.log.level}
+rootLogger.appenderRefs = root
+rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
diff --git a/docker/dockerfile/cluster/hive/conf/hive-site.xml b/docker/dockerfile/cluster/hive/conf/hive-site.xml
new file mode 100644
index 0000000..60f3935
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/hive-site.xml
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--><configuration>
+</configuration>
diff --git a/docker/dockerfile/cluster/hive/conf/ivysettings.xml b/docker/dockerfile/cluster/hive/conf/ivysettings.xml
new file mode 100644
index 0000000..d1b7819
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/ivysettings.xml
@@ -0,0 +1,44 @@
+<!--This file is used by grapes to download dependencies from a maven repository.
+    This is just a template and can be edited to add more repositories.
+-->
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<ivysettings>
+  <!--name of the defaultResolver should always be 'downloadGrapes'. -->
+  <settings defaultResolver="downloadGrapes"/>
+  <!-- Only set maven.local.repository if not already set -->
+  <property name="maven.local.repository" value="${user.home}/.m2/repository" override="false" />
+  <property name="m2-pattern"
+            value="file:${maven.local.repository}/[organisation]/[module]/[revision]/[module]-[revision](-[classifier]).[ext]"
+            override="false"/>
+  <resolvers>
+    <!-- more resolvers can be added here -->
+    <chain name="downloadGrapes">
+      <!-- This resolver uses ibiblio to find artifacts, compatible with maven2 repository -->
+      <ibiblio name="central" m2compatible="true"/>
+      <url name="local-maven2" m2compatible="true">
+        <artifact pattern="${m2-pattern}"/>
+      </url>
+      <!-- File resolver to add jars from the local system. -->
+      <filesystem name="test" checkmodified="true">
+        <artifact pattern="/tmp/[module]-[revision](-[classifier]).jar"/>
+      </filesystem>
+
+    </chain>
+  </resolvers>
+</ivysettings>
diff --git a/docker/dockerfile/cluster/hive/conf/llap-daemon-log4j2.properties b/docker/dockerfile/cluster/hive/conf/llap-daemon-log4j2.properties
new file mode 100644
index 0000000..f1b72eb
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/conf/llap-daemon-log4j2.properties
@@ -0,0 +1,94 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+status = INFO
+name = LlapDaemonLog4j2
+packages = org.apache.hadoop.hive.ql.log
+
+# list of properties
+property.llap.daemon.log.level = INFO
+property.llap.daemon.root.logger = console
+property.llap.daemon.log.dir = .
+property.llap.daemon.log.file = llapdaemon.log
+property.llap.daemon.historylog.file = llapdaemon_history.log
+property.llap.daemon.log.maxfilesize = 256MB
+property.llap.daemon.log.maxbackupindex = 20
+
+# list of all appenders
+appenders = console, RFA, HISTORYAPPENDER
+
+# console appender
+appender.console.type = Console
+appender.console.name = console
+appender.console.target = SYSTEM_ERR
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t%x] %p %c{2} : %m%n
+
+# rolling file appender
+appender.RFA.type = RollingFile
+appender.RFA.name = RFA
+appender.RFA.fileName = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.log.file}
+appender.RFA.filePattern = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.log.file}_%i
+appender.RFA.layout.type = PatternLayout
+appender.RFA.layout.pattern = %d{ISO8601} %-5p [%t%x]: %c{2} (%F:%M(%L)) - %m%n
+appender.RFA.policies.type = Policies
+appender.RFA.policies.size.type = SizeBasedTriggeringPolicy
+appender.RFA.policies.size.size = ${sys:llap.daemon.log.maxfilesize}
+appender.RFA.strategy.type = DefaultRolloverStrategy
+appender.RFA.strategy.max = ${sys:llap.daemon.log.maxbackupindex}
+
+# history file appender
+appender.HISTORYAPPENDER.type = RollingFile
+appender.HISTORYAPPENDER.name = HISTORYAPPENDER
+appender.HISTORYAPPENDER.fileName = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.historylog.file}
+appender.HISTORYAPPENDER.filePattern = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.historylog.file}_%i
+appender.HISTORYAPPENDER.layout.type = PatternLayout
+appender.HISTORYAPPENDER.layout.pattern = %m%n
+appender.HISTORYAPPENDER.policies.type = Policies
+appender.HISTORYAPPENDER.policies.size.type = SizeBasedTriggeringPolicy
+appender.HISTORYAPPENDER.policies.size.size = ${sys:llap.daemon.log.maxfilesize}
+appender.HISTORYAPPENDER.strategy.type = DefaultRolloverStrategy
+appender.HISTORYAPPENDER.strategy.max = ${sys:llap.daemon.log.maxbackupindex}
+
+# list of all loggers
+loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX, HistoryLogger
+
+logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
+logger.NIOServerCnxn.level = WARN
+
+logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO
+logger.ClientCnxnSocketNIO.level = WARN
+
+logger.DataNucleus.name = DataNucleus
+logger.DataNucleus.level = ERROR
+
+logger.Datastore.name = Datastore
+logger.Datastore.level = ERROR
+
+logger.JPOX.name = JPOX
+logger.JPOX.level = ERROR
+
+logger.HistoryLogger.name = org.apache.hadoop.hive.llap.daemon.HistoryLogger
+logger.HistoryLogger.level = INFO
+logger.HistoryLogger.additivity = false
+logger.HistoryLogger.appenderRefs = HistoryAppender
+logger.HistoryLogger.appenderRef.HistoryAppender.ref = HISTORYAPPENDER
+
+# root logger
+rootLogger.level = ${sys:llap.daemon.log.level}
+rootLogger.appenderRefs = root
+rootLogger.appenderRef.root.ref = ${sys:llap.daemon.root.logger}
diff --git a/docker/dockerfile/cluster/hive/entrypoint.sh b/docker/dockerfile/cluster/hive/entrypoint.sh
new file mode 100644
index 0000000..d6a888c
--- /dev/null
+++ b/docker/dockerfile/cluster/hive/entrypoint.sh
@@ -0,0 +1,136 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Set some sensible defaults
+export CORE_CONF_fs_defaultFS=${CORE_CONF_fs_defaultFS:-hdfs://`hostname -f`:8020}
+
+function addProperty() {
+  local path=$1
+  local name=$2
+  local value=$3
+
+  local entry="<property><name>$name</name><value>${value}</value></property>"
+  local escapedEntry=$(echo $entry | sed 's/\//\\\//g')
+  sed -i "/<\/configuration>/ s/.*/${escapedEntry}\n&/" $path
+}
+
+function configure() {
+    local path=$1
+    local module=$2
+    local envPrefix=$3
+
+    local var
+    local value
+    
+    echo "Configuring $module"
+    for c in `printenv | perl -sne 'print "$1 " if m/^${envPrefix}_(.+?)=.*/' -- -envPrefix=$envPrefix`; do 
+        name=`echo ${c} | perl -pe 's/___/-/g; s/__/_/g; s/_/./g'`
+        var="${envPrefix}_${c}"
+        value=${!var}
+        echo " - Setting $name=$value"
+        addProperty $path $name "$value"
+    done
+}
+
+configure /etc/hadoop/core-site.xml core CORE_CONF
+configure /etc/hadoop/hdfs-site.xml hdfs HDFS_CONF
+configure /etc/hadoop/yarn-site.xml yarn YARN_CONF
+configure /etc/hadoop/httpfs-site.xml httpfs HTTPFS_CONF
+configure /etc/hadoop/kms-site.xml kms KMS_CONF
+configure /etc/hadoop/mapred-site.xml mapred MAPRED_CONF
+configure /etc/hadoop/hive-site.xml hive HIVE_SITE_CONF
+configure /opt/hive/conf/hive-site.xml hive HIVE_SITE_CONF
+
+if [ "$MULTIHOMED_NETWORK" = "1" ]; then
+    echo "Configuring for multihomed network"
+
+    # HDFS
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.rpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.servicerpc-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.http-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.namenode.https-bind-host 0.0.0.0
+    addProperty /etc/hadoop/hdfs-site.xml dfs.client.use.datanode.hostname true
+    addProperty /etc/hadoop/hdfs-site.xml dfs.datanode.use.datanode.hostname true
+
+    # YARN
+    addProperty /etc/hadoop/yarn-site.xml yarn.resourcemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.nodemanager.bind-host 0.0.0.0
+    addProperty /etc/hadoop/yarn-site.xml yarn.timeline-service.bind-host 0.0.0.0
+
+    # MAPRED
+    addProperty /etc/hadoop/mapred-site.xml yarn.nodemanager.bind-host 0.0.0.0
+fi
+
+if [ -n "$GANGLIA_HOST" ]; then
+    mv /etc/hadoop/hadoop-metrics.properties /etc/hadoop/hadoop-metrics.properties.orig
+    mv /etc/hadoop/hadoop-metrics2.properties /etc/hadoop/hadoop-metrics2.properties.orig
+
+    for module in mapred jvm rpc ugi; do
+        echo "$module.class=org.apache.hadoop.metrics.ganglia.GangliaContext31"
+        echo "$module.period=10"
+        echo "$module.servers=$GANGLIA_HOST:8649"
+    done > /etc/hadoop/hadoop-metrics.properties
+    
+    for module in namenode datanode resourcemanager nodemanager mrappmaster jobhistoryserver; do
+        echo "$module.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31"
+        echo "$module.sink.ganglia.period=10"
+        echo "$module.sink.ganglia.supportsparse=true"
+        echo "$module.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both"
+        echo "$module.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40"
+        echo "$module.sink.ganglia.servers=$GANGLIA_HOST:8649"
+    done > /etc/hadoop/hadoop-metrics2.properties
+fi
+
+function wait_for_it()
+{
+    local serviceport=$1
+    local service=${serviceport%%:*}
+    local port=${serviceport#*:}
+    local retry_seconds=5
+    local max_try=100
+    let i=1
+
+    nc -z $service $port
+    result=$?
+
+    until [ $result -eq 0 ]; do
+      echo "[$i/$max_try] check for ${service}:${port}..."
+      echo "[$i/$max_try] ${service}:${port} is not available yet"
+      if (( $i == $max_try )); then
+        echo "[$i/$max_try] ${service}:${port} is still not available; giving up after ${max_try} tries. :/"
+        exit 1
+      fi
+      
+      echo "[$i/$max_try] try in ${retry_seconds}s once again ..."
+      let "i++"
+      sleep $retry_seconds
+
+      nc -z $service $port
+      result=$?
+    done
+    echo "[$i/$max_try] $service:${port} is available."
+}
+
+for i in ${SERVICE_PRECONDITION[@]}
+do
+    wait_for_it ${i}
+done
+
+exec $@
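
The wait_for_it loop lets container start-up order be expressed as a single environment variable of space-separated host:port pairs. An illustrative setting (the namenode host name is an assumption; the other names appear in this patch):

    # The CMD does not start until both endpoints accept TCP connections.
    docker run -d \
      -e SERVICE_PRECONDITION="write-namenode:50070 write-hive-metastore-postgresql:5432" \
      apachekylin/kylin-hive:hive_1.2.1_hadoop_2.8.5
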
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/hive/run_hv.sh
old mode 100755
new mode 100644
similarity index 77%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/hive/run_hv.sh
index 3ed32ce..675937f
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/hive/run_hv.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +17,10 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+hadoop fs -mkdir -p    /tmp
+hadoop fs -mkdir -p    /user/hive/warehouse
+hadoop fs -chmod g+w   /tmp
+hadoop fs -chmod g+w   /user/hive/warehouse
+
+cd $HIVE_HOME/bin
+./hiveserver2 --hiveconf hive.server2.enable.doAs=false
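
Once HiveServer2 is listening on the exposed port 10000, a quick smoke test from inside the container could look like this (illustrative only):

    # doAs is disabled above, so queries run as the server user.
    beeline -u jdbc:hive2://localhost:10000 -e "show databases;"
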
diff --git a/docker/dockerfile/cluster/hmaster/Dockerfile b/docker/dockerfile/cluster/hmaster/Dockerfile
new file mode 100644
index 0000000..09aa0e3
--- /dev/null
+++ b/docker/dockerfile/cluster/hmaster/Dockerfile
@@ -0,0 +1,13 @@
+
+
+ARG HBASE_VERSION=1.1.2
+
+FROM apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
+
+ENV HBASE_VERSION ${HBASE_VERSION}
+COPY run_hm.sh /run_hm.sh
+RUN chmod +x /run_hm.sh
+
+EXPOSE 16000 16010
+
+CMD ["/run_hm.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/hmaster/run_hm.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/hmaster/run_hm.sh
index 3ed32ce..1b1cda5
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/hmaster/run_hm.sh
@@ -1,3 +1,4 @@
+#!/bin/bash
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -14,13 +15,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+/opt/hbase-$HBASE_VERSION/bin/hbase master start
diff --git a/docker/dockerfile/cluster/hregionserver/Dockerfile b/docker/dockerfile/cluster/hregionserver/Dockerfile
new file mode 100644
index 0000000..aaced16
--- /dev/null
+++ b/docker/dockerfile/cluster/hregionserver/Dockerfile
@@ -0,0 +1,12 @@
+ARG HBASE_VERSION=1.1.2
+
+FROM apachekylin/kylin-hbase-base:hbase_${HBASE_VERSION}
+
+ENV HBASE_VERSION ${HBASE_VERSION}
+
+COPY run_hr.sh /run_hr.sh
+RUN chmod +x /run_hr.sh
+
+EXPOSE 16020 16030
+
+CMD ["/run_hr.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/hregionserver/run_hr.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/hregionserver/run_hr.sh
index 3ed32ce..953ad43
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/hregionserver/run_hr.sh
@@ -1,3 +1,4 @@
+#!/bin/bash
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -14,13 +15,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+/opt/hbase-$HBASE_VERSION/bin/hbase regionserver start
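
With a master and at least one regionserver up, basic cluster health can be verified from any container that carries the HBase client (illustrative):

    # Expect the active master and the registered regionserver(s) in the output.
    echo "status 'simple'" | hbase shell
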
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/kerberos/Dockerfile
old mode 100755
new mode 100644
similarity index 63%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/kerberos/Dockerfile
index 9c0b925..bc46f23
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/kerberos/Dockerfile
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,21 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+FROM centos:7.3.1611
+MAINTAINER kylin
+
+USER root
+
+# install tools
+RUN yum -y install lsof wget tar git unzip curl net-tools procps perl sed nc which
+# install kerberos
+RUN yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
+
+COPY conf/kadm5.acl  /var/kerberos/krb5kdc/kadm5.acl
+COPY conf/kdc.conf /var/kerberos/krb5kdc/kdc.conf
+COPY conf/krb5.conf /etc/krb5.conf
 
+ADD run_krb.sh /run_krb.sh
+RUN chmod a+x /run_krb.sh
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+CMD ["/run_krb.sh"]
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/kerberos/conf/kadm5.acl b/docker/dockerfile/cluster/kerberos/conf/kadm5.acl
new file mode 100644
index 0000000..47c8885
--- /dev/null
+++ b/docker/dockerfile/cluster/kerberos/conf/kadm5.acl
@@ -0,0 +1 @@
+*/kylin@KYLIN.COM	*
\ No newline at end of file
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/kerberos/conf/kdc.conf
old mode 100755
new mode 100644
similarity index 64%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/kerberos/conf/kdc.conf
index 9c0b925..aa3e6b6
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/kerberos/conf/kdc.conf
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,15 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
-
+[kdcdefaults]
+kdc_ports = 88
+kdc_tcp_ports = 88
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+[realms]
+KYLIN.COM = {
+ #master_key_type = aes256-cts
+ acl_file = /var/kerberos/krb5kdc/kadm5.acl
+ dict_file = /usr/share/dict/words
+ admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
+ supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
+}
\ No newline at end of file
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/kerberos/conf/krb5.conf
old mode 100755
new mode 100644
similarity index 59%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/kerberos/conf/krb5.conf
index 9c0b925..2f50c9c
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/kerberos/conf/krb5.conf
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,29 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+includedir /etc/krb5.conf.d/
+
+[logging]
+ default = FILE:/var/log/krb5libs.log
+ kdc = FILE:/var/log/krb5kdc.log
+ admin_server = FILE:/var/log/kadmind.log
+
+[libdefaults]
+ dns_lookup_realm = false
+ ticket_lifetime = 24h
+ renew_lifetime = 7d
+ forwardable = true
+ rdns = false
+ pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
+ default_realm = KYLIN.COM
+ default_ccache_name = KEYRING:persistent:%{uid}
 
+[realms]
+ KYLIN.COM = {
+  kdc = host-203
+  admin_server = host-203
+ }
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+[domain_realm]
+ .kylin.com = KYLIN.COM
+ kylin.com = KYLIN.COM
\ No newline at end of file
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/kerberos/run_krb.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/kerberos/run_krb.sh
index 3ed32ce..a2a0ab8
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/kerberos/run_krb.sh
@@ -1,3 +1,4 @@
+#!/bin/bash
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +16,11 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+kdb5_util create -s -r KYLIN.COM
+/usr/sbin/krb5kdc
+/usr/sbin/kadmind
+
+while :
+do
+    sleep 1000
+done
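
After the KDC is running, principals and keytabs are created with kadmin.local; the principal name and keytab path below are placeholders, only the realm matches the kdb5_util call above:

    # Create a service principal and export its keytab (illustrative).
    kadmin.local -q "addprinc -randkey hdfs/write-namenode@KYLIN.COM"
    kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.keytab hdfs/write-namenode@KYLIN.COM"
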
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/kylin/Dockerfile
old mode 100755
new mode 100644
similarity index 75%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/kylin/Dockerfile
index 3ed32ce..2bd4a1b
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/kylin/Dockerfile
@@ -15,12 +15,11 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+ARG HADOOP_VERSION=2.8.5
+ARG HIVE_VERSION=1.2.1
+ARG HBASE_VERSION=1.1.2
+ARG SPARK_VERSION=2.3.3
+
+FROM apachekylin/kylin-client:hadoop_${HADOOP_VERSION}_hive_${HIVE_VERSION}_hbase_${HBASE_VERSION}_spark_${SPARK_VERSION} AS client
+
+#CMD ["/bin/bash"]
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/kylin/entrypoint.sh b/docker/dockerfile/cluster/kylin/entrypoint.sh
new file mode 100644
index 0000000..7a693aa
--- /dev/null
+++ b/docker/dockerfile/cluster/kylin/entrypoint.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+
diff --git a/docker/dockerfile/cluster/metastore-db/Dockerfile b/docker/dockerfile/cluster/metastore-db/Dockerfile
new file mode 100644
index 0000000..8a78964
--- /dev/null
+++ b/docker/dockerfile/cluster/metastore-db/Dockerfile
@@ -0,0 +1,12 @@
+ARG MYSQL_VERSION=5.6.49
+FROM mysql:${MYSQL_VERSION}
+
+ARG CREATE_DBS="kylin hive"
+ENV CREATE_DBS=$CREATE_DBS
+
+COPY run_db.sh /run_db.sh
+RUN chmod +x /run_db.sh
+
+ENTRYPOINT ["docker-entrypoint.sh"]
+
+CMD ["/run_db.sh"]
diff --git a/docker/dockerfile/cluster/metastore-db/run_db.sh b/docker/dockerfile/cluster/metastore-db/run_db.sh
new file mode 100644
index 0000000..dfaaef1
--- /dev/null
+++ b/docker/dockerfile/cluster/metastore-db/run_db.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+mysqld --user=root &
+until mysqladmin ping --silent; do sleep 2; done
+mysqladmin -uroot password kylin
+mysql -uroot -pkylin -e "grant all privileges on *.* to root@'%' identified by 'kylin' WITH GRANT OPTION; FLUSH PRIVILEGES;"
+
+for db in $CREATE_DBS; do
+  mysql -uroot -pkylin -e "create database $db;"
+  done
+
+while :
+do
+    sleep 10
+done
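
The databases created at start-up come from the CREATE_DBS build argument, so the list can be changed when the image is built (the image tag below is hypothetical):

    # Bake an extra database into the metastore image.
    docker build --build-arg CREATE_DBS="kylin hive sampledb" \
      -t apachekylin/kylin-metastore:mysql_5.6.49 .
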
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/namenode/Dockerfile
old mode 100755
new mode 100644
similarity index 61%
copy from docker/build_image.sh
copy to docker/dockerfile/cluster/namenode/Dockerfile
index 9c0b925..3418680
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/namenode/Dockerfile
@@ -1,5 +1,3 @@
-#!/usr/bin/env bash
-
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,11 +15,22 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+ENV HADOOP_VERSION ${HADOOP_VERSION}
+
+ARG HADOOP_WEBHDFS_PORT=50070
+ENV HADOOP_WEBHDFS_PORT ${HADOOP_WEBHDFS_PORT}
+EXPOSE ${HADOOP_WEBHDFS_PORT} 8020
+
+HEALTHCHECK CMD curl -f http://localhost:${HADOOP_WEBHDFS_PORT}/ || exit 1
+
+ENV HDFS_CONF_dfs_namenode_name_dir=file:///hadoop/dfs/name
+RUN mkdir -p /hadoop/dfs/name
+VOLUME /hadoop/dfs/name
 
+ADD run_nn.sh /run_nn.sh
+RUN chmod a+x /run_nn.sh
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+CMD ["/run_nn.sh"]
diff --git a/docker/build_image.sh b/docker/dockerfile/cluster/namenode/run_nn.sh
old mode 100755
new mode 100644
similarity index 61%
rename from docker/build_image.sh
rename to docker/dockerfile/cluster/namenode/run_nn.sh
index 9c0b925..e4dc90f
--- a/docker/build_image.sh
+++ b/docker/dockerfile/cluster/namenode/run_nn.sh
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash
 
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
@@ -17,11 +17,20 @@
 # limitations under the License.
 #
 
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-cd ${DIR}
-echo "build image in dir "${DIR}
+namedir=`echo $HDFS_CONF_dfs_namenode_name_dir | perl -pe 's#file://##'`
+if [ ! -d $namedir ]; then
+  echo "Namenode name directory not found: $namedir"
+  exit 2
+fi
 
+if [ -z "$CLUSTER_NAME" ]; then
+  echo "Cluster name not specified"
+  exit 2
+fi
 
-echo "start build Hadoop docker image"
-docker build -f Dockerfile_hadoop -t hadoop2.7-all-in-one-for-kylin4 .
-docker build -f Dockerfile -t apachekylin/apache-kylin-standalone:4.0.0-alpha .
+if [ "`ls -A $namedir`" == "" ]; then
+  echo "Formatting namenode name directory: $namedir"
+  $HADOOP_PREFIX/bin/hdfs --config $HADOOP_CONF_DIR namenode -format $CLUSTER_NAME 
+fi
+
+$HADOOP_PREFIX/bin/hdfs --config $HADOOP_CONF_DIR namenode
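
The namenode image requires CLUSTER_NAME and formats /hadoop/dfs/name only when that directory is empty, so metadata survives container re-creation when it is backed by a volume. An illustrative invocation (image tag, container name, and volume name are assumptions):

    # First start formats the name directory; later starts reuse it.
    docker run -d --name write-namenode \
      -e CLUSTER_NAME=kylin \
      -v namenode-data:/hadoop/dfs/name \
      -p 50070:50070 \
      apachekylin/kylin-hadoop-namenode:hadoop_2.8.5
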
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/nodemanager/Dockerfile
old mode 100755
new mode 100644
similarity index 76%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/nodemanager/Dockerfile
index 3ed32ce..8ec68df
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/nodemanager/Dockerfile
@@ -15,12 +15,15 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+MAINTAINER kylin
+
+EXPOSE 8042
+HEALTHCHECK CMD curl -f http://localhost:8042/ || exit 1
+
+ADD run_nm.sh /run_nm.sh
+RUN chmod a+x /run_nm.sh
+
+CMD ["/run_nm.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/nodemanager/run_nm.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/nodemanager/run_nm.sh
index 3ed32ce..9a36690
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/nodemanager/run_nm.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +17,4 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+$HADOOP_PREFIX/bin/yarn --config $HADOOP_CONF_DIR nodemanager
diff --git a/docker/dockerfile/cluster/pom.xml b/docker/dockerfile/cluster/pom.xml
new file mode 100644
index 0000000..f6640a2
--- /dev/null
+++ b/docker/dockerfile/cluster/pom.xml
@@ -0,0 +1,81 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hudi</artifactId>
+    <groupId>org.apache.hudi</groupId>
+    <version>0.6.1-SNAPSHOT</version>
+    <relativePath>../../../../pom.xml</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+
+  <artifactId>hudi-hadoop-docker</artifactId>
+  <packaging>pom</packaging>
+  <modules>
+    <module>base</module>
+    <module>namenode</module>
+    <module>datanode</module>
+    <module>historyserver</module>
+    <module>hive_base</module>
+    <module>spark_base</module>
+    <module>sparkmaster</module>
+    <module>sparkworker</module>
+    <module>sparkadhoc</module>
+    <module>prestobase</module>
+  </modules>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hudi</groupId>
+      <artifactId>hudi-spark-bundle_${scala.binary.version}</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+  </dependencies>
+
+  <properties>
+    <skipITs>false</skipITs>
+    <docker.build.skip>true</docker.build.skip>
+    <docker.spark.version>2.4.4</docker.spark.version>
+    <docker.hive.version>2.3.3</docker.hive.version>
+    <docker.hadoop.version>2.8.4</docker.hadoop.version>
+    <docker.presto.version>0.217</docker.presto.version>
+    <dockerfile.maven.version>1.4.3</dockerfile.maven.version>
+    <checkstyle.skip>true</checkstyle.skip>
+    <main.basedir>${project.parent.basedir}</main.basedir>
+  </properties>
+
+  <build>
+    <extensions>
+      <extension>
+        <groupId>com.spotify</groupId>
+        <artifactId>dockerfile-maven-extension</artifactId>
+        <version>${dockerfile.maven.version}</version>
+      </extension>
+    </extensions>
+    <plugins>
+     <plugin>
+        <groupId>com.spotify</groupId>
+        <artifactId>dockerfile-maven-plugin</artifactId>
+        <version>${dockerfile.maven.version}</version>
+        <configuration>
+          <skip>true</skip>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+</project>
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/resourcemanager/Dockerfile
old mode 100755
new mode 100644
similarity index 76%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/resourcemanager/Dockerfile
index 3ed32ce..b99027f
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/resourcemanager/Dockerfile
@@ -15,12 +15,15 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+ARG HADOOP_VERSION=2.8.5
+FROM apachekylin/kylin-hadoop-base:hadoop_${HADOOP_VERSION}
+
+MAINTAINER kylin
+
+EXPOSE 8088
+HEALTHCHECK CMD curl -f http://localhost:8088/ || exit 1
+
+ADD run_rm.sh /run_rm.sh
+RUN chmod a+x /run_rm.sh
+
+CMD ["/run_rm.sh"]
diff --git a/docker/run_container.sh b/docker/dockerfile/cluster/resourcemanager/run_rm.sh
old mode 100755
new mode 100644
similarity index 82%
copy from docker/run_container.sh
copy to docker/dockerfile/cluster/resourcemanager/run_rm.sh
index 3ed32ce..ed15e46
--- a/docker/run_container.sh
+++ b/docker/dockerfile/cluster/resourcemanager/run_rm.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,12 +17,4 @@
 # limitations under the License.
 #
 
-docker run -d \
--m 8G \
--p 7070:7070 \
--p 8088:8088 \
--p 50070:50070 \
--p 8032:8032 \
--p 8042:8042 \
--p 2181:2181 \
-apachekylin/apache-kylin-standalone:4.0.0-alpha
+$HADOOP_PREFIX/bin/yarn --config $HADOOP_CONF_DIR resourcemanager
diff --git a/docker/Dockerfile b/docker/dockerfile/standalone/Dockerfile
similarity index 100%
rename from docker/Dockerfile
rename to docker/dockerfile/standalone/Dockerfile
diff --git a/docker/conf/hadoop/core-site.xml b/docker/dockerfile/standalone/conf/hadoop/core-site.xml
similarity index 100%
rename from docker/conf/hadoop/core-site.xml
rename to docker/dockerfile/standalone/conf/hadoop/core-site.xml
diff --git a/docker/conf/hadoop/hdfs-site.xml b/docker/dockerfile/standalone/conf/hadoop/hdfs-site.xml
similarity index 100%
rename from docker/conf/hadoop/hdfs-site.xml
rename to docker/dockerfile/standalone/conf/hadoop/hdfs-site.xml
diff --git a/docker/conf/hadoop/mapred-site.xml b/docker/dockerfile/standalone/conf/hadoop/mapred-site.xml
similarity index 100%
rename from docker/conf/hadoop/mapred-site.xml
rename to docker/dockerfile/standalone/conf/hadoop/mapred-site.xml
diff --git a/docker/conf/hadoop/yarn-site.xml b/docker/dockerfile/standalone/conf/hadoop/yarn-site.xml
similarity index 100%
rename from docker/conf/hadoop/yarn-site.xml
rename to docker/dockerfile/standalone/conf/hadoop/yarn-site.xml
diff --git a/docker/conf/hive/hive-site.xml b/docker/dockerfile/standalone/conf/hive/hive-site.xml
similarity index 100%
rename from docker/conf/hive/hive-site.xml
rename to docker/dockerfile/standalone/conf/hive/hive-site.xml
diff --git a/docker/conf/maven/settings.xml b/docker/dockerfile/standalone/conf/maven/settings.xml
similarity index 100%
rename from docker/conf/maven/settings.xml
rename to docker/dockerfile/standalone/conf/maven/settings.xml
diff --git a/docker/entrypoint.sh b/docker/dockerfile/standalone/entrypoint.sh
similarity index 100%
rename from docker/entrypoint.sh
rename to docker/dockerfile/standalone/entrypoint.sh
diff --git a/docker/setup_cluster.sh b/docker/setup_cluster.sh
new file mode 100644
index 0000000..0e3a260
--- /dev/null
+++ b/docker/setup_cluster.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+SCRIPT_PATH=$(cd `dirname $0`; pwd)
+WS_ROOT=`dirname $SCRIPT_PATH`
+
+source ${SCRIPT_PATH}/build_cluster_images.sh
+
+# restart cluster
+source ${SCRIPT_PATH}/build_cluster_images.sh
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-write.yml down
+sleep 10
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-write.yml up -d
diff --git a/docker/run_container.sh b/docker/setup_standalone.sh
similarity index 100%
rename from docker/run_container.sh
rename to docker/setup_standalone.sh
diff --git a/docker/stop_cluster.sh b/docker/stop_cluster.sh
new file mode 100644
index 0000000..87f0ac4
--- /dev/null
+++ b/docker/stop_cluster.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+SCRIPT_PATH=$(cd `dirname $0`; pwd)
+# set up root directory
+WS_ROOT=`dirname $SCRIPT_PATH`
+# shut down cluster
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-write.yml down
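
For orientation, the two helper scripts added above are meant to be run from the docker/ directory. A minimal usage sketch (the scripts as added here take no arguments; later revisions in this change set add flags such as --cluster_mode):

```shell
cd docker
bash setup_cluster.sh      # build the cluster images, then (re)start the write cluster via docker-compose
# ... run system tests, query Kylin, etc. ...
bash stop_cluster.sh       # shut the write cluster down again
```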


[kylin] 09/13: KYLIN-4775 Fix 0.0.0.0:10020 ConnectionRefused

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 5f16a065ace55d884532cf7d5ad867d94804a39c
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Tue Oct 27 19:40:23 2020 +0800

    KYLIN-4775 Fix 0.0.0.0:10020 ConnectionRefused
---
 docker/docker-compose/others/client-write-read.env               | 4 ++--
 docker/docker-compose/others/client-write.env                    | 1 +
 docker/docker-compose/write/docker-compose-hadoop.yml            | 3 +++
 docker/docker-compose/write/write-hadoop.env                     | 1 +
 docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml | 1 +
 docker/dockerfile/cluster/historyserver/Dockerfile               | 1 +
 docker/dockerfile/cluster/historyserver/run_history.sh           | 7 ++++++-
 7 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/docker/docker-compose/others/client-write-read.env b/docker/docker-compose/others/client-write-read.env
index 1a9ecad..3e5bc54 100644
--- a/docker/docker-compose/others/client-write-read.env
+++ b/docker/docker-compose/others/client-write-read.env
@@ -31,6 +31,7 @@ YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+YARN_CONF_mapreduce_jobhistory_address=write-historyserver:10020
 
 MAPRED_CONF_mapreduce_framework_name=yarn
 MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
@@ -57,5 +58,4 @@ HBASE_CONF_hbase_master_info_port=16010
 HBASE_CONF_hbase_regionserver_port=16020
 HBASE_CONF_hbase_regionserver_info_port=16030
 
-HBASE_MANAGES_ZK=false
-
+HBASE_MANAGES_ZK=false
\ No newline at end of file
diff --git a/docker/docker-compose/others/client-write.env b/docker/docker-compose/others/client-write.env
index d47815c..aec9d02 100644
--- a/docker/docker-compose/others/client-write.env
+++ b/docker/docker-compose/others/client-write.env
@@ -31,6 +31,7 @@ YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+YARN_CONF_mapreduce_jobhistory_address=write-historyserver:10020
 
 MAPRED_CONF_mapreduce_framework_name=yarn
 MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
diff --git a/docker/docker-compose/write/docker-compose-hadoop.yml b/docker/docker-compose/write/docker-compose-hadoop.yml
index 8c75f37..47f54ec 100644
--- a/docker/docker-compose/write/docker-compose-hadoop.yml
+++ b/docker/docker-compose/write/docker-compose-hadoop.yml
@@ -125,6 +125,9 @@ services:
       - kylin
     ports:
       - 8188:8188
+      - 10020:10020
+    expose:
+      - 10020
 
 networks:
   kylin:
\ No newline at end of file
diff --git a/docker/docker-compose/write/write-hadoop.env b/docker/docker-compose/write/write-hadoop.env
index 670756f..a99c096 100644
--- a/docker/docker-compose/write/write-hadoop.env
+++ b/docker/docker-compose/write/write-hadoop.env
@@ -31,6 +31,7 @@ YARN_CONF_yarn_nodemanager_resource_cpu___vcores=6
 YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
 YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
 YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle
+YARN_CONF_mapreduce_jobhistory_address=write-historyserver:10020
 
 MAPRED_CONF_mapreduce_framework_name=yarn
 MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
diff --git a/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml b/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
index b55dd34..a60385a 100644
--- a/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
+++ b/docker/dockerfile/cluster/client/conf/hadoop-write/yarn-site.xml
@@ -33,6 +33,7 @@
 <property><name>yarn.timeline-service.hostname</name><value>write-historyserver</value></property>
 <property><name>yarn.scheduler.capacity.root.default.maximum-allocation-mb</name><value>8192</value></property>
 <property><name>yarn.log.server.url</name><value>http://write-historyserver:8188/applicationhistory/logs/</value></property>
+<property><name>mapreduce.jobhistory.address</name><value>write-historyserver:10020</value></property>
 <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
 <property><name>yarn.resourcemanager.scheduler.address</name><value>write-resourcemanager:8030</value></property>
 <property><name>yarn.resourcemanager.address</name><value>write-resourcemanager:8032</value></property>
diff --git a/docker/dockerfile/cluster/historyserver/Dockerfile b/docker/dockerfile/cluster/historyserver/Dockerfile
index 7c89d00..c6f3496 100644
--- a/docker/dockerfile/cluster/historyserver/Dockerfile
+++ b/docker/dockerfile/cluster/historyserver/Dockerfile
@@ -21,6 +21,7 @@ FROM apachekylin/kylin-ci-hadoop-base:hadoop_${HADOOP_VERSION}
 ARG HADOOP_HISTORY_PORT=8188
 ENV HADOOP_HISTORY_PORT ${HADOOP_HISTORY_PORT}
 EXPOSE ${HADOOP_HISTORY_PORT}
+EXPOSE 10020
 
 HEALTHCHECK CMD curl -f http://localhost:${HADOOP_HISTORY_PORT}/ || exit 1
 
diff --git a/docker/dockerfile/cluster/historyserver/run_history.sh b/docker/dockerfile/cluster/historyserver/run_history.sh
index 6d7ae4e..47745b1 100644
--- a/docker/dockerfile/cluster/historyserver/run_history.sh
+++ b/docker/dockerfile/cluster/historyserver/run_history.sh
@@ -17,4 +17,9 @@
 # limitations under the License.
 #
 
-$HADOOP_PREFIX/bin/yarn --config $HADOOP_CONF_DIR historyserver
+#$HADOOP_PREFIX/bin/yarn --config $HADOOP_CONF_DIR historyserver
+$HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
+while :
+do
+    sleep 100
+done
\ No newline at end of file
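
In short, this fix points mapreduce.jobhistory.address at the write-historyserver host, exposes port 10020, and starts the MapReduce JobHistory daemon instead of only the YARN history server. A quick way to confirm the daemon is listening (the container name is an assumption based on the hostname used in the compose files above; the /dev/tcp probe is just one convenient check):

```shell
docker exec write-historyserver bash -c 'exec 3<>/dev/tcp/localhost/10020 && echo "jobhistory rpc is up"'
```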


[kylin] 11/13: KYLIN-4775 Update docker/README-cluster.md

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit dc073f44bac77d9649977d62f5af0f4d33b4a4cf
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Thu Oct 29 14:01:50 2020 +0800

    KYLIN-4775 Update docker/README-cluster.md
---
 docker/README-cluster.md | 47 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/docker/README-cluster.md b/docker/README-cluster.md
index f90ce3b..f5c4364 100644
--- a/docker/README-cluster.md
+++ b/docker/README-cluster.md
@@ -6,8 +6,14 @@ In order to provide hadoop cluster(s) (without manual deployment) for system lev
 
 ## Prepare
 
-- Install latest docker & docker-compose, following is what I use.
-
+- Install the latest `docker` & `docker-compose`; the following versions are recommended.
+  - docker: 19.03.12+
+  - docker-compose: 1.26.2+
+  
+- Install `python3` and the test automation tool `gauge`; the following versions are recommended.
+  - python: 3.6+
+  - gauge: 1.1.4+
+ 
 - Check port 
 
     Port       |     Component     |     Comment
@@ -19,7 +25,7 @@ In order to provide hadoop cluster(s) (without manual deployment) for system lev
     16010      |       HBase       |       -    
     50070      |       HDFS        |       -            
 
-- Clone cource code
+- Clone source code
 
 ```shell 
 git clone
@@ -132,12 +138,41 @@ kylin-all            | http://kylin-all:7070/kylin                      |
 
 
 ## System Testing
+### How to package kylin binary
+
+```shell
+cd dev-support/build-release
+bash -x packaging.sh
+``` 
+
 ### How to start Kylin
 
 ```shell 
-copy kylin into /root/xiaoxiang.yu/kylin/docker/docker-compose/others/kylin
+## copy kylin into kylin/docker/docker-compose/others/kylin
 
-cp kylin.tar.gz /root/xiaoxiang.yu/kylin/docker/docker-compose/others/kylin
+cp kylin.tar.gz kylin/docker/docker-compose/others/kylin
 tar zxf kylin.tar.gz
 
-```
\ No newline at end of file
+cp -r apache-kylin-bin/*  kylin/docker/docker-compose/others/kylin/kylin-all
+cp -r apache-kylin-bin/* kylin/docker/docker-compose/others/kylin/kylin-job
+cp -r apache-kylin-bin/* kylin/docker/docker-compose/others/kylin/kylin-query
+
+## you can modify kylin/docker/docker-compose/others/kylin/kylin-*/kylin.properties before starting Kylin.
+
+## start kylin
+
+bash setup_service.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
+      --enable_hbase yes --hbase_version 1.1.2  --enable_ldap no
+
+## alternatively, via setup_cluster.sh:
+sh setup_cluster.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
+      --enable_hbase yes --hbase_version 1.1.2  --enable_ldap no
+```
+
+### How to run automated tests
+
+```shell
+cd build/CI/kylin-system-testing
+pip install -r requirements.txt
+gauge run
+```
+
+Wait for the tests to complete, then check build/CI/kylin-system-testing/reports/html-report/index.html for the test report.
\ No newline at end of file


[kylin] 06/13: KYLIN-4778 package and release by docker image

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 4dd747adc05c4868c01d0544e3424ca1d486c2e0
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Fri Oct 23 13:46:30 2020 +0800

    KYLIN-4778 package and release by docker image
---
 dev-support/build-release/Dockerfile              |  54 +++++++
 dev-support/build-release/conf/settings.xml       |  62 ++++++++
 dev-support/build-release/packaging.sh            |  49 ++++++
 dev-support/build-release/script/build_release.sh | 183 ++++++++++++++++++++++
 dev-support/build-release/script/entrypoint.sh    |  25 +++
 5 files changed, 373 insertions(+)

diff --git a/dev-support/build-release/Dockerfile b/dev-support/build-release/Dockerfile
new file mode 100644
index 0000000..def66d0
--- /dev/null
+++ b/dev-support/build-release/Dockerfile
@@ -0,0 +1,54 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Docker image for Kylin release
+FROM openjdk:8-slim
+
+ENV M2_HOME /root/apache-maven-3.6.1
+ENV PATH $PATH:$M2_HOME/bin
+USER root
+
+WORKDIR /root
+
+# install tools
+RUN set -eux; \
+	apt-get update; \
+	apt-get install -y --no-install-recommends lsof wget tar git unzip subversion
+
+# install maven
+RUN wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz \
+    && tar -zxvf apache-maven-3.6.1-bin.tar.gz \
+    && rm -f apache-maven-3.6.1-bin.tar.gz
+COPY conf/settings.xml $M2_HOME/conf/settings.xml
+
+# install tomcat
+RUN wget https://archive.apache.org/dist/tomcat/tomcat-7/v7.0.100/bin/apache-tomcat-7.0.100.tar.gz
+
+# install npm
+RUN apt-get install -y curl
+RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - \
+    && apt-get update \
+    && apt-get install -y nodejs npm
+
+
+COPY ./script/entrypoint.sh /root/entrypoint.sh
+RUN chmod u+x /root/entrypoint.sh
+
+COPY ./script/build_release.sh /root/build_release.sh
+RUN chmod u+x /root/build_release.sh
+
+ENTRYPOINT ["/root/entrypoint.sh"]
\ No newline at end of file
diff --git a/dev-support/build-release/conf/settings.xml b/dev-support/build-release/conf/settings.xml
new file mode 100644
index 0000000..f187d47
--- /dev/null
+++ b/dev-support/build-release/conf/settings.xml
@@ -0,0 +1,62 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
+          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
+
+    <mirrors>
+        <mirror>
+            <id>nexus-aliyun</id>
+            <mirrorOf>central</mirrorOf>
+            <name>Nexus Aliyun</name>
+            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
+        </mirror>
+    </mirrors>
+
+    <profiles>
+        <profile>
+            <repositories>
+                <repository>
+                    <id>nexus</id>
+                    <name>local private nexus</name>
+                    <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
+                    <releases>
+                        <enabled>true</enabled>
+                    </releases>
+                    <snapshots>
+                        <enabled>false</enabled>
+                    </snapshots>
+                </repository>
+            </repositories>
+            <pluginRepositories>
+                <pluginRepository>
+                    <id>nexus</id>
+                    <name>local private nexus</name>
+                    <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
+                    <releases>
+                        <enabled>true</enabled>
+                    </releases>
+                    <snapshots>
+                        <enabled>true</enabled>
+                    </snapshots>
+                </pluginRepository>
+            </pluginRepositories>
+        </profile>
+    </profiles>
+</settings>
\ No newline at end of file
diff --git a/dev-support/build-release/packaging.sh b/dev-support/build-release/packaging.sh
new file mode 100644
index 0000000..7997095
--- /dev/null
+++ b/dev-support/build-release/packaging.sh
@@ -0,0 +1,49 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+### Thanks to https://github.com/apache/spark/tree/master/dev/create-release .
+
+# docker build -f Dockerfile -t apachekylin/release-machine:jdk8-slim .
+# docker run --name machine apachekylin/release-machine:jdk8-slim
+
+cat > $ENVFILE <<EOF
+DRY_RUN=$DRY_RUN
+SKIP_TAG=$SKIP_TAG
+RUNNING_IN_DOCKER=1
+GIT_BRANCH=$GIT_BRANCH
+NEXT_VERSION=$NEXT_VERSION
+RELEASE_VERSION=$RELEASE_VERSION
+RELEASE_TAG=$RELEASE_TAG
+GIT_REF=$GIT_REF
+ASF_USERNAME=$ASF_USERNAME
+GIT_NAME=$GIT_NAME
+GIT_EMAIL=$GIT_EMAIL
+GPG_KEY=$GPG_KEY
+ASF_PASSWORD=$ASF_PASSWORD
+GPG_PASSPHRASE=$GPG_PASSPHRASE
+RELEASE_STEP=$RELEASE_STEP
+USER=$USER
+EOF
+
+
+docker run -ti \
+  --name machine \
+  --env-file "$ENVFILE" \
+  apachekylin/release-machine:jdk8-slim
+
+docker cp machine:/root/kylin/dist/apache-kylin-*-SNAPSHOT-bin.tar.gz .
\ No newline at end of file
diff --git a/dev-support/build-release/script/build_release.sh b/dev-support/build-release/script/build_release.sh
new file mode 100644
index 0000000..4e02d28
--- /dev/null
+++ b/dev-support/build-release/script/build_release.sh
@@ -0,0 +1,183 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+PACKAGE_ENABLE=false
+RELEASE_ENABLE=false
+
+function exit_with_usage {
+  cat << EOF
+usage: build_release.sh <package|publish-rc>
+Creates build deliverables from a Kylin commit.
+
+Top level targets are
+  package: Create binary packages and commit them to dist.apache.org/repos/dist/dev/kylin/
+  publish-rc: Publish snapshot release to Apache snapshots
+
+All other inputs are environment variables
+
+GIT_REF - Release tag or commit to build from
+SPARK_PACKAGE_VERSION - Release identifier in top level package directory (e.g. 2.1.2-rc1)
+SPARK_VERSION - (optional) Version of Spark being built (e.g. 2.1.2)
+
+ASF_USERNAME - Username of ASF committer account
+ASF_PASSWORD - Password of ASF committer account
+
+GPG_KEY - GPG key used to sign release artifacts
+GPG_PASSPHRASE - Passphrase for GPG key
+EOF
+  exit 1
+}
+
+if [ $# -eq 0 ]; then
+  exit_with_usage
+fi
+
+if [[ $@ == *"help"* ]]; then
+  exit_with_usage
+fi
+
+if [[ "$1" == "package" ]]; then
+    PACKAGE_ENABLE=true
+fi
+
+if [[ "$1" == "publish-rc" ]]; then
+    PACKAGE_ENABLE=true
+    RELEASE_ENABLE=true
+fi
+
+set -e
+export LC_ALL=C.UTF-8
+export LANG=C.UTF-8
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Configuration
+
+KYLIN_PACKAGE_BRANCH=master
+KYLIN_PACKAGE_BRANCH_HADOOP3=master-hadoop3
+KYLIN_PACKAGE_VERSION=3.1.1
+KYLIN_PACKAGE_VERSION_RC=3.1.1-rc1
+NEXT_RELEASE_VERSION=3.1.2-SNAPSHOT
+ASF_USERNAME=xxyu
+ASF_PASSWORD=123
+GPG_KEY=123
+GPG_PASSPHRASE=123
+
+export source_release_folder=/root/kylin-release-folder/
+export binary_release_folder=/root/kylin-release-folder/kylin_bin
+export svn_release_folder=/root/dist/dev/kylin
+
+RELEASE_STAGING_LOCATION="https://dist.apache.org/repos/dist/dev/kylin"
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Prepare source code
+
+
+cd $source_release_folder
+git clone https://github.com/apache/kylin.git
+cd kylin
+git checkout ${KYLIN_PACKAGE_BRANCH}
+git pull --rebase
+git checkout ${KYLIN_PACKAGE_BRANCH}
+
+cp /root/apache-tomcat-7.0.100.tar.gz $source_release_folder/kylin/build
+
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Prepare tag & source tarball & upload maven artifact
+
+# Use release-plugin to check license & build source package & build and upload maven artifact
+#mvn -DskipTests -DreleaseVersion=${KYLIN_PACKAGE_VERSION} -DdevelopmentVersion=${NEXT_RELEASE_VERSION}-SNAPSHOT -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:prepare
+#mvn -DskipTests -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:perform
+
+
+####################################################
+####################################################
+####################################################
+####################################################
+####
+
+# Create a directory for this release candidate
+#mkdir ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+#rm -rf target/apache-kylin-*ource-release.zip.asc.sha256
+
+# Move source code and signature of source code to release candidate directory
+#cp target/apache-kylin-*source-release.zip* ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+
+# Go to package directory
+#cd $binary_release_folder
+#git checkout ${KYLIN_PACKAGE_BRANCH}
+#git pull --rebase
+
+# Checkout to tag, which is created by maven-release-plugin
+# Commit message looks like "[maven-release-plugin] prepare release kylin-4.0.0-alpha"
+#git checkout kylin-${RELEASE_VERSION}
+
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Build binary
+
+
+# Build the binary packages first
+build/script/package.sh
+tar -zxf dist/apache-kylin-${RELEASE_VERSION}-bin.tar.gz
+mv apache-kylin-${RELEASE_VERSION}-bin apache-kylin-${RELEASE_VERSION}-bin-hadoop2
+tar -cvzf ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}/apache-kylin-${RELEASE_VERSION}-bin-hadoop2.tar.gz apache-kylin-${RELEASE_VERSION}-bin-hadoop2
+rm -rf apache-kylin-${RELEASE_VERSION}-bin-hadoop2
+
+#build/script/package.sh -P cdh5.7
+#tar -zxf dist/apache-kylin-${RELEASE_VERSION}-bin.tar.gz
+#mv apache-kylin-${RELEASE_VERSION}-bin apache-kylin-${RELEASE_VERSION}-bin-cdh57
+#tar -cvzf ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}/apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz apache-kylin-${RELEASE_VERSION}-bin-cdh57
+#rm -rf apache-kylin-${RELEASE_VERSION}-bin-cdh57
+
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Sign binary
+
+#cd ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+#gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz
+#shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.sha256
+#
+#gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz
+#shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.sha256
+
+####################################################
+####################################################
+####################################################
+####################################################
+#### Upload to svn repository
+
+#cd ..
+#svn add apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+#svn commit -m 'Checkin release artifacts for '${KYLIN_PACKAGE_VERSION_RC}
\ No newline at end of file
diff --git a/dev-support/build-release/script/entrypoint.sh b/dev-support/build-release/script/entrypoint.sh
new file mode 100644
index 0000000..d004f25
--- /dev/null
+++ b/dev-support/build-release/script/entrypoint.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+bash -x /root/build_release.sh > /root/build.log
+
+while :
+do
+    sleep 10
+done
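
Putting the pieces of this commit together: the release machine is built from the Dockerfile above and driven by packaging.sh, which writes the release settings into an env file and runs the container. A minimal sketch of a local run (the image tag follows the comment in packaging.sh; the env-file name and the variable values here are assumptions, not fixed by the scripts):

```shell
cd dev-support/build-release
docker build -f Dockerfile -t apachekylin/release-machine:jdk8-slim .

# packaging.sh expects $ENVFILE to name the env file it writes and passes to docker run
export ENVFILE=release-env.list
export GIT_BRANCH=kylin-on-parquet-v2 RELEASE_VERSION=4.0.0-alpha DRY_RUN=1
bash packaging.sh
```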


[kylin] 12/13: KYLIN-4801 Clean up and add test sql

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 97cebbdca78911d505b63ad16ea5df825c951612
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Tue Nov 3 18:03:10 2020 +0800

    KYLIN-4801 Clean up and add test sql
---
 .../data/release_test_0001.json                    | 626 ---------------------
 .../features/specs/generic_test.spec               |  64 ---
 .../features/specs/query/query.spec                |  22 +
 .../features/step_impl/before_suite.py             |  10 +-
 .../features/step_impl/generic_test_step.py        |  15 +-
 .../features/step_impl/query/query.py              |  40 ++
 .../kylin_instances/kylin_instance.yml             |   2 +-
 .../CI/kylin-system-testing/kylin_utils/equals.py  |  11 +-
 .../generic_desc_data/generic_desc_data_3x.json    |   0
 .../generic_desc_data/generic_desc_data_4x.json    |   2 +-
 .../query/sql/sql_test/sql1.sql                    |  26 +
 .../query/sql_result/sql_test/sql1.json            |   6 +
 build/CI/run-ci.sh                                 |  14 +-
 13 files changed, 126 insertions(+), 712 deletions(-)

diff --git a/build/CI/kylin-system-testing/data/release_test_0001.json b/build/CI/kylin-system-testing/data/release_test_0001.json
deleted file mode 100644
index 0b5558a..0000000
--- a/build/CI/kylin-system-testing/data/release_test_0001.json
+++ /dev/null
@@ -1,626 +0,0 @@
-{
-  "load_table_list":
-  "DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT,DEFAULT.KYLIN_CATEGORY_GROUPINGS,DEFAULT.KYLIN_ACCOUNT,DEFAULT.KYLIN_COUNTRY",
-
-  "model_desc_data":
-  {
-    "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
-    "last_modified": 0,
-    "version": "3.0.0.20500",
-    "name": "release_test_0001_model",
-    "owner": null,
-    "is_draft": false,
-    "description": "",
-    "fact_table": "DEFAULT.KYLIN_SALES",
-    "lookups": [
-      {
-        "table": "DEFAULT.KYLIN_CAL_DT",
-        "kind": "LOOKUP",
-        "alias": "KYLIN_CAL_DT",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "KYLIN_CAL_DT.CAL_DT"
-          ],
-          "foreign_key": [
-            "KYLIN_SALES.PART_DT"
-          ]
-        }
-      },
-      {
-        "table": "DEFAULT.KYLIN_CATEGORY_GROUPINGS",
-        "kind": "LOOKUP",
-        "alias": "KYLIN_CATEGORY_GROUPINGS",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID",
-            "KYLIN_CATEGORY_GROUPINGS.SITE_ID"
-          ],
-          "foreign_key": [
-            "KYLIN_SALES.LEAF_CATEG_ID",
-            "KYLIN_SALES.LSTG_SITE_ID"
-          ]
-        }
-      },
-      {
-        "table": "DEFAULT.KYLIN_ACCOUNT",
-        "kind": "LOOKUP",
-        "alias": "BUYER_ACCOUNT",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "BUYER_ACCOUNT.ACCOUNT_ID"
-          ],
-          "foreign_key": [
-            "KYLIN_SALES.BUYER_ID"
-          ]
-        }
-      },
-      {
-        "table": "DEFAULT.KYLIN_ACCOUNT",
-        "kind": "LOOKUP",
-        "alias": "SELLER_ACCOUNT",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "SELLER_ACCOUNT.ACCOUNT_ID"
-          ],
-          "foreign_key": [
-            "KYLIN_SALES.SELLER_ID"
-          ]
-        }
-      },
-      {
-        "table": "DEFAULT.KYLIN_COUNTRY",
-        "kind": "LOOKUP",
-        "alias": "BUYER_COUNTRY",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "BUYER_COUNTRY.COUNTRY"
-          ],
-          "foreign_key": [
-            "BUYER_ACCOUNT.ACCOUNT_COUNTRY"
-          ]
-        }
-      },
-      {
-        "table": "DEFAULT.KYLIN_COUNTRY",
-        "kind": "LOOKUP",
-        "alias": "SELLER_COUNTRY",
-        "join": {
-          "type": "inner",
-          "primary_key": [
-            "SELLER_COUNTRY.COUNTRY"
-          ],
-          "foreign_key": [
-            "SELLER_ACCOUNT.ACCOUNT_COUNTRY"
-          ]
-        }
-      }
-    ],
-    "dimensions": [
-      {
-        "table": "KYLIN_SALES",
-        "columns": [
-          "TRANS_ID",
-          "SELLER_ID",
-          "BUYER_ID",
-          "PART_DT",
-          "LEAF_CATEG_ID",
-          "LSTG_FORMAT_NAME",
-          "LSTG_SITE_ID",
-          "OPS_USER_ID",
-          "OPS_REGION"
-        ]
-      },
-      {
-        "table": "KYLIN_CAL_DT",
-        "columns": [
-          "CAL_DT",
-          "WEEK_BEG_DT",
-          "MONTH_BEG_DT",
-          "YEAR_BEG_DT"
-        ]
-      },
-      {
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "columns": [
-          "USER_DEFINED_FIELD1",
-          "USER_DEFINED_FIELD3",
-          "META_CATEG_NAME",
-          "CATEG_LVL2_NAME",
-          "CATEG_LVL3_NAME",
-          "LEAF_CATEG_ID",
-          "SITE_ID"
-        ]
-      },
-      {
-        "table": "BUYER_ACCOUNT",
-        "columns": [
-          "ACCOUNT_ID",
-          "ACCOUNT_BUYER_LEVEL",
-          "ACCOUNT_SELLER_LEVEL",
-          "ACCOUNT_COUNTRY",
-          "ACCOUNT_CONTACT"
-        ]
-      },
-      {
-        "table": "SELLER_ACCOUNT",
-        "columns": [
-          "ACCOUNT_ID",
-          "ACCOUNT_BUYER_LEVEL",
-          "ACCOUNT_SELLER_LEVEL",
-          "ACCOUNT_COUNTRY",
-          "ACCOUNT_CONTACT"
-        ]
-      },
-      {
-        "table": "BUYER_COUNTRY",
-        "columns": [
-          "COUNTRY",
-          "NAME"
-        ]
-      },
-      {
-        "table": "SELLER_COUNTRY",
-        "columns": [
-          "COUNTRY",
-          "NAME"
-        ]
-      }
-    ],
-    "metrics": [
-      "KYLIN_SALES.PRICE",
-      "KYLIN_SALES.ITEM_COUNT"
-    ],
-    "filter_condition": "",
-    "partition_desc": {
-      "partition_date_column": "KYLIN_SALES.PART_DT",
-      "partition_time_column": null,
-      "partition_date_start": 0,
-      "partition_date_format": "yyyy-MM-dd HH:mm:ss",
-      "partition_time_format": "HH:mm:ss",
-      "partition_type": "APPEND",
-      "partition_condition_builder": "org.apache.kylin.metadata.model.PartitionDesc$DefaultPartitionConditionBuilder"
-    },
-    "capacity": "MEDIUM",
-    "projectName": null
-  },
-  "cube_desc_data":
-  {
-    "uuid": "0ef9b7a8-3929-4dff-b59d-2100aadc8dbf",
-    "last_modified": 0,
-    "version": "3.0.0.20500",
-    "name": "release_test_0001_cube",
-    "is_draft": false,
-    "model_name": "release_test_0001_model",
-    "description": "",
-    "null_string": null,
-    "dimensions": [
-      {
-        "name": "TRANS_ID",
-        "table": "KYLIN_SALES",
-        "column": "TRANS_ID",
-        "derived": null
-      },
-      {
-        "name": "YEAR_BEG_DT",
-        "table": "KYLIN_CAL_DT",
-        "column": null,
-        "derived": [
-          "YEAR_BEG_DT"
-        ]
-      },
-      {
-        "name": "MONTH_BEG_DT",
-        "table": "KYLIN_CAL_DT",
-        "column": null,
-        "derived": [
-          "MONTH_BEG_DT"
-        ]
-      },
-      {
-        "name": "WEEK_BEG_DT",
-        "table": "KYLIN_CAL_DT",
-        "column": null,
-        "derived": [
-          "WEEK_BEG_DT"
-        ]
-      },
-      {
-        "name": "USER_DEFINED_FIELD1",
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "column": null,
-        "derived": [
-          "USER_DEFINED_FIELD1"
-        ]
-      },
-      {
-        "name": "USER_DEFINED_FIELD3",
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "column": null,
-        "derived": [
-          "USER_DEFINED_FIELD3"
-        ]
-      },
-      {
-        "name": "META_CATEG_NAME",
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "column": "META_CATEG_NAME",
-        "derived": null
-      },
-      {
-        "name": "CATEG_LVL2_NAME",
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "column": "CATEG_LVL2_NAME",
-        "derived": null
-      },
-      {
-        "name": "CATEG_LVL3_NAME",
-        "table": "KYLIN_CATEGORY_GROUPINGS",
-        "column": "CATEG_LVL3_NAME",
-        "derived": null
-      },
-      {
-        "name": "LSTG_FORMAT_NAME",
-        "table": "KYLIN_SALES",
-        "column": "LSTG_FORMAT_NAME",
-        "derived": null
-      },
-      {
-        "name": "SELLER_ID",
-        "table": "KYLIN_SALES",
-        "column": "SELLER_ID",
-        "derived": null
-      },
-      {
-        "name": "BUYER_ID",
-        "table": "KYLIN_SALES",
-        "column": "BUYER_ID",
-        "derived": null
-      },
-      {
-        "name": "ACCOUNT_BUYER_LEVEL",
-        "table": "BUYER_ACCOUNT",
-        "column": "ACCOUNT_BUYER_LEVEL",
-        "derived": null
-      },
-      {
-        "name": "ACCOUNT_SELLER_LEVEL",
-        "table": "SELLER_ACCOUNT",
-        "column": "ACCOUNT_SELLER_LEVEL",
-        "derived": null
-      },
-      {
-        "name": "BUYER_COUNTRY",
-        "table": "BUYER_ACCOUNT",
-        "column": "ACCOUNT_COUNTRY",
-        "derived": null
-      },
-      {
-        "name": "SELLER_COUNTRY",
-        "table": "SELLER_ACCOUNT",
-        "column": "ACCOUNT_COUNTRY",
-        "derived": null
-      },
-      {
-        "name": "BUYER_COUNTRY_NAME",
-        "table": "BUYER_COUNTRY",
-        "column": "NAME",
-        "derived": null
-      },
-      {
-        "name": "SELLER_COUNTRY_NAME",
-        "table": "SELLER_COUNTRY",
-        "column": "NAME",
-        "derived": null
-      },
-      {
-        "name": "OPS_USER_ID",
-        "table": "KYLIN_SALES",
-        "column": "OPS_USER_ID",
-        "derived": null
-      },
-      {
-        "name": "OPS_REGION",
-        "table": "KYLIN_SALES",
-        "column": "OPS_REGION",
-        "derived": null
-      }
-    ],
-    "measures": [
-      {
-        "name": "GMV_SUM",
-        "function": {
-          "expression": "SUM",
-          "parameter": {
-            "type": "column",
-            "value": "KYLIN_SALES.PRICE"
-          },
-          "returntype": "decimal(19,4)"
-        }
-      },
-      {
-        "name": "BUYER_LEVEL_SUM",
-        "function": {
-          "expression": "SUM",
-          "parameter": {
-            "type": "column",
-            "value": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL"
-          },
-          "returntype": "bigint"
-        }
-      },
-      {
-        "name": "SELLER_LEVEL_SUM",
-        "function": {
-          "expression": "SUM",
-          "parameter": {
-            "type": "column",
-            "value": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
-          },
-          "returntype": "bigint"
-        }
-      },
-      {
-        "name": "TRANS_CNT",
-        "function": {
-          "expression": "COUNT",
-          "parameter": {
-            "type": "constant",
-            "value": "1"
-          },
-          "returntype": "bigint"
-        }
-      },
-      {
-        "name": "SELLER_CNT_HLL",
-        "function": {
-          "expression": "COUNT_DISTINCT",
-          "parameter": {
-            "type": "column",
-            "value": "KYLIN_SALES.SELLER_ID"
-          },
-          "returntype": "hllc(10)"
-        }
-      },
-      {
-        "name": "TOP_SELLER",
-        "function": {
-          "expression": "TOP_N",
-          "parameter": {
-            "type": "column",
-            "value": "KYLIN_SALES.PRICE",
-            "next_parameter": {
-              "type": "column",
-              "value": "KYLIN_SALES.SELLER_ID"
-            }
-          },
-          "returntype": "topn(100)",
-          "configuration": {
-            "topn.encoding.KYLIN_SALES.SELLER_ID": "dict",
-            "topn.encoding_version.KYLIN_SALES.SELLER_ID": "1"
-          }
-        }
-      }
-    ],
-    "rowkey": {
-      "rowkey_columns": [
-        {
-          "column": "KYLIN_SALES.BUYER_ID",
-          "encoding": "integer:4",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.SELLER_ID",
-          "encoding": "integer:4",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.TRANS_ID",
-          "encoding": "integer:4",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.PART_DT",
-          "encoding": "date",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.LEAF_CATEG_ID",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "BUYER_COUNTRY.NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "SELLER_COUNTRY.NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.LSTG_FORMAT_NAME",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.LSTG_SITE_ID",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.OPS_USER_ID",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        },
-        {
-          "column": "KYLIN_SALES.OPS_REGION",
-          "encoding": "dict",
-          "encoding_version": 1,
-          "isShardBy": false
-        }
-      ]
-    },
-    "hbase_mapping": {
-      "column_family": [
-        {
-          "name": "F1",
-          "columns": [
-            {
-              "qualifier": "M",
-              "measure_refs": [
-                "GMV_SUM",
-                "BUYER_LEVEL_SUM",
-                "SELLER_LEVEL_SUM",
-                "TRANS_CNT"
-              ]
-            }
-          ]
-        },
-        {
-          "name": "F2",
-          "columns": [
-            {
-              "qualifier": "M",
-              "measure_refs": [
-                "SELLER_CNT_HLL",
-                "TOP_SELLER"
-              ]
-            }
-          ]
-        }
-      ]
-    },
-    "aggregation_groups": [
-      {
-        "includes": [
-          "KYLIN_SALES.PART_DT",
-          "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
-          "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
-          "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
-          "KYLIN_SALES.LEAF_CATEG_ID",
-          "KYLIN_SALES.LSTG_FORMAT_NAME",
-          "KYLIN_SALES.LSTG_SITE_ID",
-          "KYLIN_SALES.OPS_USER_ID",
-          "KYLIN_SALES.OPS_REGION",
-          "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
-          "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL",
-          "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
-          "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
-          "BUYER_COUNTRY.NAME",
-          "SELLER_COUNTRY.NAME"
-        ],
-        "select_rule": {
-          "hierarchy_dims": [
-            [
-              "KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME",
-              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME",
-              "KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME",
-              "KYLIN_SALES.LEAF_CATEG_ID"
-            ]
-          ],
-          "mandatory_dims": [
-            "KYLIN_SALES.PART_DT"
-          ],
-          "joint_dims": [
-            [
-              "BUYER_ACCOUNT.ACCOUNT_COUNTRY",
-              "BUYER_COUNTRY.NAME"
-            ],
-            [
-              "SELLER_ACCOUNT.ACCOUNT_COUNTRY",
-              "SELLER_COUNTRY.NAME"
-            ],
-            [
-              "BUYER_ACCOUNT.ACCOUNT_BUYER_LEVEL",
-              "SELLER_ACCOUNT.ACCOUNT_SELLER_LEVEL"
-            ],
-            [
-              "KYLIN_SALES.LSTG_FORMAT_NAME",
-              "KYLIN_SALES.LSTG_SITE_ID"
-            ],
-            [
-              "KYLIN_SALES.OPS_USER_ID",
-              "KYLIN_SALES.OPS_REGION"
-            ]
-          ]
-        }
-      }
-    ],
-    "signature": null,
-    "notify_list": [],
-    "status_need_notify": [],
-    "partition_date_start": 0,
-    "partition_date_end": 3153600000000,
-    "auto_merge_time_ranges": [],
-    "volatile_range": 0,
-    "retention_range": 0,
-    "engine_type": 2,
-    "storage_type": 2,
-    "override_kylin_properties": {
-      "kylin.cube.aggrgroup.is-mandatory-only-valid": "true",
-      "kylin.engine.spark.rdd-partition-cut-mb": "500"
-    },
-    "cuboid_black_list": [],
-    "parent_forward": 3,
-    "mandatory_dimension_set_list": [],
-    "snapshot_table_desc_list": []
-  }
-}
\ No newline at end of file
diff --git a/build/CI/kylin-system-testing/features/specs/generic_test.spec b/build/CI/kylin-system-testing/features/specs/generic_test.spec
deleted file mode 100644
index d37e236..0000000
--- a/build/CI/kylin-system-testing/features/specs/generic_test.spec
+++ /dev/null
@@ -1,64 +0,0 @@
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-# Kylin Release Test
-Tags:3.x
-## Prepare env
-* Get kylin instance
-
-* prepare data file from "release_test_0001.json"
-
-* Create project "release_test_0001_project" and load table "load_table_list"
-
-
-## MR engine
-
-* Create model with "model_desc_data" in "release_test_0001_project"
-
-* Create cube with "cube_desc_data" in "release_test_0001_project", cube name is "release_test_0001_cube"
-
-* Build segment from "1325347200000" to "1356969600000" in "release_test_0001_cube"
-
-* Build segment from "1356969600000" to "1391011200000" in "release_test_0001_cube"
-
-* Merge cube "release_test_0001_cube" segment from "1325347200000" to "1391011200000"
-
-
-SPARK engine
-
-Clone cube "release_test_0001_cube" and name it "kylin_spark_cube" in "release_test_0001_project", modify build engine to "SPARK"
-
-Build segment from "1325347200000" to "1356969600000" in "kylin_spark_cube"
-
-Build segment from "1356969600000" to "1391011200000" in "kylin_spark_cube"
-
-Merge cube "kylin_spark_cube" segment from "1325347200000" to "1391011200000"
-
-
-## Query cube and pushdown
-
-* Query SQL "select count(*) from kylin_sales" and specify "release_test_0001_cube" cube to query in "release_test_0001_project", compare result with "10000"
-
-Query SQL "select count(*) from kylin_sales" and specify "kylin_spark_cube" cube to query in "release_test_0001_project", compare result with "10000"
-
-* Disable cube "release_test_0001_cube"
-
-Disable cube "kylin_spark_cube"
-
-* Query SQL "select count(*) from kylin_sales" in "release_test_0001_project" and pushdown, compare result with "10000"
-
-
-
diff --git a/build/CI/kylin-system-testing/features/specs/query/query.spec b/build/CI/kylin-system-testing/features/specs/query/query.spec
new file mode 100644
index 0000000..8cd3a6f
--- /dev/null
+++ b/build/CI/kylin-system-testing/features/specs/query/query.spec
@@ -0,0 +1,22 @@
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+# Kylin SQL test
+Tags:4.x
+
+## Query sql
+
+* Query all SQL file in directory "query/sql/sql_test/" in project "generic_test_project", compare result with hive pushdown result and compare metrics info with sql_result json file in "query/sql_result/sql_test/"
\ No newline at end of file
diff --git a/build/CI/kylin-system-testing/features/step_impl/before_suite.py b/build/CI/kylin-system-testing/features/step_impl/before_suite.py
index 3cd86ca..d1cd3fd 100644
--- a/build/CI/kylin-system-testing/features/step_impl/before_suite.py
+++ b/build/CI/kylin-system-testing/features/step_impl/before_suite.py
@@ -26,10 +26,10 @@ from kylin_utils import util
 def create_generic_model_and_cube():
     client = util.setup_instance('kylin_instance.yml')
     if client.version == '3.x':
-        with open(os.path.join('data/generic_desc_data', 'generic_desc_data_3x.json'), 'r') as f:
+        with open(os.path.join('meta_data/generic_desc_data', 'generic_desc_data_3x.json'), 'r') as f:
             data = json.load(f)
     elif client.version == '4.x':
-        with open(os.path.join('data/generic_desc_data', 'generic_desc_data_4x.json'), 'r') as f:
+        with open(os.path.join('meta_data/generic_desc_data', 'generic_desc_data_4x.json'), 'r') as f:
             data = json.load(f)
 
     project_name = client.generic_project
@@ -56,6 +56,6 @@ def create_generic_model_and_cube():
                                   cube_name=cube_name,
                                   cube_desc_data=cube_desc_data)
         assert json.loads(resp['cubeDescData'])['name'] == cube_name
-    if client.get_cube_instance(cube_name=cube_name).get('status') != 'READY':
-        resp = client.full_build_cube(cube_name=cube_name)
-        assert client.await_job_finished(job_id=resp['uuid'], waiting_time=20)
+    if client.get_cube_instance(cube_name=cube_name).get('status') != 'READY' and len(client.list_jobs(project_name=project_name, job_search_mode='CUBING_ONLY')) == 0:
+        client.full_build_cube(cube_name=cube_name)
+    assert client.await_all_jobs(project_name=project_name)
diff --git a/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py b/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
index 0aabb98..cea47c1 100644
--- a/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
+++ b/build/CI/kylin-system-testing/features/step_impl/generic_test_step.py
@@ -28,10 +28,10 @@ def get_kylin_instance_with_config_file():
     client = util.setup_instance('kylin_instance.yml')
 
 
-@step("prepare data file from <release_test_0001.json>")
-def prepare_data_file_from(file_name):
+@step("prepare data file from release_test_0001.json")
+def prepare_data_file_from_data(file_name):
     global data
-    with open(os.path.join('data', file_name), 'r') as f:
+    with open(os.path.join('meta_data', file_name), 'r') as f:
         data = json.load(f)
 
 
@@ -54,7 +54,7 @@ def create_model_step(model_desc, project):
     assert json.loads(resp['modelDescData'])['name'] == model_name
 
 
-@step("Create cube with <cube_desc> in <prpject>, cube name is <cube_name>")
+@step("Create cube with <cube_desc> in <project>, cube name is <cube_name>")
 def create_cube_step(cube_desc, project, cube_name):
     resp = client.create_cube(project_name=project,
                               cube_name=cube_name,
@@ -113,3 +113,10 @@ def query_pushdown_step(sql, project, result):
     assert resp.get('cube') == ''
     assert resp.get('pushDown') is True
 
+
+@step("Query all SQL file in directory <directory>, compare result with hive pushdown result")
+def query_sql_file_and_compare(directory):
+    sql_directory = os.listdir(directory)
+    for sql_file in sql_directory:
+        with open(os.path.join(directory, sql_file), 'r', encoding='utf8') as sql:
+            sqltxt = sql.readlines()
diff --git a/build/CI/kylin-system-testing/features/step_impl/query/query.py b/build/CI/kylin-system-testing/features/step_impl/query/query.py
new file mode 100644
index 0000000..55f5597
--- /dev/null
+++ b/build/CI/kylin-system-testing/features/step_impl/query/query.py
@@ -0,0 +1,40 @@
+#!/usr/bin/python
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import json
+
+from getgauge.python import step
+
+from kylin_utils import util
+from kylin_utils import equals
+
+
+@step("Query all SQL file in directory <sql_directory> in project <project_name>, compare result with hive pushdown result and compare metrics info with sql_result json file in <sql_result_directory>")
+def query_sql_file_and_compare(sql_directory, project_name, sql_result_directory):
+    sql_directory_list = os.listdir(sql_directory)
+    for sql_file_name in sql_directory_list:
+        with open(sql_directory + sql_file_name, 'r', encoding='utf8') as sql_file:
+            sql = sql_file.read()
+
+        client = util.setup_instance('kylin_instance.yml')
+        with open(sql_result_directory + sql_file_name.split(".")[0] + '.json', 'r', encoding='utf8') as expected_result_file:
+            expected_result = json.loads(expected_result_file.read())
+        equals.compare_sql_result(sql=sql, project=project_name, kylin_client=client, expected_result=expected_result)
+
+
+
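A hedged usage sketch of this step called directly; the directory arguments mirror the fixture layout added later in this commit (note the trailing slashes, since the paths are concatenated with the file names), and the project name is only a placeholder:

    query_sql_file_and_compare(
        sql_directory='query/sql/sql_test/',
        project_name='generic_test_project',            # hypothetical project name
        sql_result_directory='query/sql_result/sql_test/')

When driven by Gauge, the same three values come from the spec file rather than from a direct call.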
diff --git a/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml b/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml
index 501428f..5454a41 100644
--- a/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml
+++ b/build/CI/kylin-system-testing/kylin_instances/kylin_instance.yml
@@ -17,6 +17,6 @@
 # All mode
 - host: localhost
   port: 7070
-  version: 3.x
+  version: 4.x
   hadoop_platform: HDP2.4
   deploy_mode: ALL
\ No newline at end of file
diff --git a/build/CI/kylin-system-testing/kylin_utils/equals.py b/build/CI/kylin-system-testing/kylin_utils/equals.py
index 9d44aaf..6c990d4 100644
--- a/build/CI/kylin-system-testing/kylin_utils/equals.py
+++ b/build/CI/kylin-system-testing/kylin_utils/equals.py
@@ -197,7 +197,7 @@ def dataset_equals(expect,
     return True
 
 
-def compare_sql_result(sql, project, kylin_client, cube=None):
+def compare_sql_result(sql, project, kylin_client, cube=None, expected_result=None):
     pushdown_project = kylin_client.pushdown_project
     if not util.if_project_exists(kylin_client=kylin_client, project=pushdown_project):
         kylin_client.create_project(project_name=pushdown_project)
@@ -218,4 +218,11 @@ def compare_sql_result(sql, project, kylin_client, cube=None):
     pushdown_resp = kylin_client.execute_query(project_name=pushdown_project, sql=sql)
     assert pushdown_resp.get('isException') is False
 
-    assert query_result_equals(kylin_resp, pushdown_resp)
\ No newline at end of file
+    assert query_result_equals(kylin_resp, pushdown_resp)
+
+    if expected_result is not None:
+        print(kylin_resp.get("totalScanCount"))
+        assert expected_result.get("totalScanCount") == kylin_resp.get("totalScanCount")
+        assert expected_result.get("totalScanBytes") == kylin_resp.get("totalScanBytes")
+        assert expected_result.get("totalScanFiles") == kylin_resp.get("totalScanFiles")
+        assert expected_result.get("pushDown") == kylin_resp.get("pushDown")
\ No newline at end of file
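A minimal usage sketch of the extended signature, with the metric values taken from the sql1.json fixture added below; when expected_result is left as None the function behaves as before and only compares the cube result against the pushdown result:

    expected = {"totalScanCount": 7349, "totalScanBytes": 229078,
                "totalScanFiles": 2, "pushDown": False}
    equals.compare_sql_result(sql=sql, project=project_name,
                              kylin_client=client, expected_result=expected)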
diff --git a/build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_3x.json b/build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_3x.json
similarity index 100%
rename from build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_3x.json
rename to build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_3x.json
diff --git a/build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_4x.json b/build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_4x.json
similarity index 99%
rename from build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_4x.json
rename to build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_4x.json
index 8d533b5..4055cf4 100644
--- a/build/CI/kylin-system-testing/data/generic_desc_data/generic_desc_data_4x.json
+++ b/build/CI/kylin-system-testing/meta_data/generic_desc_data/generic_desc_data_4x.json
@@ -6,7 +6,7 @@
     {
       "uuid": "0928468a-9fab-4185-9a14-6f2e7c74823f",
       "last_modified": 0,
-      "version": "3.0.0.20500",
+      "version": "4.0",
       "name": "generic_test_model",
       "owner": null,
       "is_draft": false,
diff --git a/build/CI/kylin-system-testing/query/sql/sql_test/sql1.sql b/build/CI/kylin-system-testing/query/sql/sql_test/sql1.sql
new file mode 100644
index 0000000..9d5e678
--- /dev/null
+++ b/build/CI/kylin-system-testing/query/sql/sql_test/sql1.sql
@@ -0,0 +1,26 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+SELECT cal_dt ,sum(price) AS sum_price
+FROM
+(SELECT kylin_cal_dt.cal_dt, kylin_sales.price
+FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt
+ON kylin_sales.part_dt = kylin_cal_dt.cal_dt
+INNER JOIN kylin_category_groupings ON kylin_sales.leaf_categ_id = kylin_category_groupings.leaf_categ_id
+AND kylin_sales.lstg_site_id = kylin_category_groupings.site_id) t
+GROUP BY cal_dt;
\ No newline at end of file
diff --git a/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json b/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
new file mode 100644
index 0000000..3c2ec22
--- /dev/null
+++ b/build/CI/kylin-system-testing/query/sql_result/sql_test/sql1.json
@@ -0,0 +1,6 @@
+{
+  "totalScanCount":7349,
+  "totalScanBytes":229078,
+  "totalScanFiles":2,
+  "pushDown": false
+}
\ No newline at end of file
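These are the metrics compare_sql_result() asserts after running sql1.sql against the cube; pushDown being false means the query is expected to be answered by the cube rather than by Spark pushdown. The result file is paired with the SQL file by base name, roughly as sketched below (paths assume the fixture layout in this commit):

    # query/sql/sql_test/sql1.sql  <->  query/sql_result/sql_test/sql1.json
    expected_path = 'query/sql_result/sql_test/' + 'sql1.sql'.split('.')[0] + '.json'
    with open(expected_path, 'r', encoding='utf8') as f:
        expected_result = json.loads(f.read())    # {'totalScanCount': 7349, ...}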
diff --git a/build/CI/run-ci.sh b/build/CI/run-ci.sh
index 41a4bb6..acbb2c7 100644
--- a/build/CI/run-ci.sh
+++ b/build/CI/run-ci.sh
@@ -38,7 +38,7 @@ pwd
 # 1. Package kylin
 if [[ -z $binary_file ]]; then
   cd dev-support/build-release
-  bash -x package.sh
+  bash -x packaging.sh
   cd -
 fi
 
@@ -55,14 +55,10 @@ mkdir kylin-job
 
 cp -r apache-kylin-bin/* kylin-all
 cat > kylin-all/conf/kylin.properties <<EOL
+kylin.metadata.url=kylin_metadata@jdbc,url=jdbc:mysql://metastore-db:3306/metastore,username=kylin,password=kylin,maxActive=10,maxIdle=10
+kylin.env.zookeeper-connect-string=write-zookeeper:2181
 kylin.job.scheduler.default=100
-kylin.server.self-discovery-enabled=true
-kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
-kylin.query.pushdown.update-enabled=false
-kylin.query.pushdown.jdbc.url=jdbc:hive2://write-hive-server:10000/default
-kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
-kylin.query.pushdown.jdbc.username=hive
-kylin.query.pushdown.jdbc.password=
+kylin.query.pushdown.runner-class-name=org.apache.kylin.query.pushdown.PushDownRunnerSparkImpl
 EOL
 
 #cp -r apache-kylin-bin/* kylin-query
@@ -123,7 +119,7 @@ sleep ${AWAIT_SECOND}
 
 cd build/CI/kylin-system-testing
 pip install -r requirements.txt
-gauge run --tags 3.x
+gauge run --tags 4.x
 cd -
 echo "Please check build/CI/kylin-system-testing/reports/html-report/index.html for reports."
 


[kylin] 08/13: KYLIN-4778 Package and release in docker container

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 9679252db3dbe36746e80d449791e9c53c47b15d
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Tue Oct 27 03:37:35 2020 +0800

    KYLIN-4778 Package and release in docker container
---
 build/CI/run-ci.sh                                 |  71 ++++++----
 build/CI/testing/features/specs/generic_test.spec  |  15 ---
 build/CI/testing/kylin_utils/kylin.py              |   2 +-
 dev-support/build-release/packaging.sh             |  19 ++-
 dev-support/build-release/script/build_release.sh  | 148 ++++++++++-----------
 docker/dockerfile/cluster/client/run_cli.sh        |   9 +-
 .../{setup_cluster.sh => setup_hadoop_cluster.sh}  |  12 +-
 .../packaging.sh => docker/setup_service.sh        |  43 +++---
 docker/stop_cluster.sh                             |   6 +-
 9 files changed, 148 insertions(+), 177 deletions(-)

diff --git a/build/CI/run-ci.sh b/build/CI/run-ci.sh
index 574f892..28110b4 100644
--- a/build/CI/run-ci.sh
+++ b/build/CI/run-ci.sh
@@ -17,54 +17,59 @@
 # limitations under the License.
 #
 
+# This is a sample script for packaging & running system tests in docker containers.
+
 # 1. Packaging for Kylin binary
 # 2. Deploy hadoop cluster
 # 3. Deploy kylin cluster
 # 4. Run system testing
 # 5. Clean up
 
-INIT_HADOOP=1
+INIT_HADOOP=${INIT_HADOOP:-1}
 
 ###########################################
 ###########################################
 # 0. Prepare
-export JAVA_HOME=/usr/local/java
-export PATH=$JAVA_HOME/bin:$PATH
-export PATH=/root/xiaoxiang.yu/INSTALL/anaconda/bin:$PATH
-binary_file=/root/xiaoxiang.yu/BINARY/apache-kylin-3.1.2-SNAPSHOT-bin.tar.gz
-source ~/.bashrc
+AWAIT_SECOND=${AWAIT_SECOND:-240}
 pwd
 
 ###########################################
 ###########################################
 # 1. Package kylin
+if [[ -z $binary_file ]]; then
+  cd dev-support/build-release
+  bash -x package.sh
+  cd -
+fi
+
+binary_file=${binary_file:-apache-kylin-bin.tar.gz}
 
-#TODO
+# 1.1 Prepare Kylin conf
+cp $binary_file docker/docker-compose/others/kylin
 cd docker/docker-compose/others/kylin
-cp $binary_file .
-tar zxf apache-kylin-3.1.2-SNAPSHOT-bin.tar.gz
+tar zxf apache-kylin-bin.tar.gz
 
 mkdir kylin-all
 mkdir kylin-query
 mkdir kylin-job
 
-cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-all
+cp -r apache-kylin-bin/* kylin-all
 cat > kylin-all/conf/kylin.properties <<EOL
 kylin.job.scheduler.default=100
 kylin.server.self-discovery-enabled=true
 EOL
 
-cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-query
-cat > kylin-query/conf/kylin.properties <<EOL
-kylin.job.scheduler.default=100
-kylin.server.self-discovery-enabled=true
-EOL
-
-cp -r apache-kylin-3.1.2-SNAPSHOT-bin/* kylin-job
-cat > kylin-job/conf/kylin.properties <<EOL
-kylin.job.scheduler.default=100
-kylin.server.self-discovery-enabled=true
-EOL
+#cp -r apache-kylin-bin/* kylin-query
+#cat > kylin-query/conf/kylin.properties <<EOL
+#kylin.job.scheduler.default=100
+#kylin.server.self-discovery-enabled=true
+#EOL
+#
+#cp -r apache-kylin-bin/* kylin-job
+#cat > kylin-job/conf/kylin.properties <<EOL
+#kylin.job.scheduler.default=100
+#kylin.server.self-discovery-enabled=true
+#EOL
 
 cd -
 
@@ -72,41 +77,49 @@ cd -
 ###########################################
 # 2. Deploy Hadoop
 
-if [ "$INIT_HADOOP" = "1" ];
+if [ "$INIT_HADOOP" == "1" ];
 then
     echo "Restart Hadoop cluster."
     cd docker
 
     bash stop_cluster.sh
 
-    bash setup_cluster.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
+    bash setup_hadoop_cluster.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
       --enable_hbase yes --hbase_version 1.1.2 --enable_ldap no
     cd ..
+    sleep 100
 else
     echo "Do NOT restart Hadoop cluster."
 fi;
 
-docker ps
-
 ###########################################
 ###########################################
 # 3. Deploy Kylin
 
-# TODO
+echo "Restart Kylin cluster."
+
+cd docker
+bash setup_service.sh --cluster_mode write --hadoop_version 2.8.5 --hive_version 1.2.2 \
+      --enable_hbase yes --hbase_version 1.1.2 --enable_ldap no
+docker ps
+cd ..
 
 ###########################################
 ###########################################
 # 4. Run test
 
-echo "Wait about 6 minutes ..."
-sleep 360
+echo "Wait about 4 minutes ..."
+sleep ${AWAIT_SECOND}
 
 cd build/CI/testing
 pip install -r requirements.txt
 gauge run --tags 3.x
-cd ..
+cd -
+echo "Please check build/CI/testing/reports/html-report/index.html for reports."
 
 ###########################################
 ###########################################
diff --git a/build/CI/testing/features/specs/generic_test.spec b/build/CI/testing/features/specs/generic_test.spec
index 95e68fa..c2f6b5e 100644
--- a/build/CI/testing/features/specs/generic_test.spec
+++ b/build/CI/testing/features/specs/generic_test.spec
@@ -32,31 +32,16 @@ Tags:3.x
 * Merge cube "kylin_spark_cube" segment from "1325347200000" to "1391011200000"
 
 
-## FLINK engine
-
-* Clone cube "release_test_0001_cube" and name it "kylin_flink_cube" in "release_test_0001_project", modify build engine to "FLINK"
-
-* Build segment from "1325347200000" to "1356969600000" in "kylin_flink_cube"
-
-* Build segment from "1356969600000" to "1391011200000" in "kylin_flink_cube"
-
-* Merge cube "kylin_flink_cube" segment from "1325347200000" to "1391011200000"
-
-
 ## Query cube and pushdown
 
 * Query SQL "select count(*) from kylin_sales" and specify "release_test_0001_cube" cube to query in "release_test_0001_project", compare result with "10000"
 
 * Query SQL "select count(*) from kylin_sales" and specify "kylin_spark_cube" cube to query in "release_test_0001_project", compare result with "10000"
 
-* Query SQL "select count(*) from kylin_sales" and specify "kylin_flink_cube" cube to query in "release_test_0001_project", compare result with "10000"
-
 * Disable cube "release_test_0001_cube"
 
 * Disable cube "kylin_spark_cube"
 
-* Disable cube "kylin_flink_cube"
-
 * Query SQL "select count(*) from kylin_sales" in "release_test_0001_project" and pushdown, compare result with "10000"
 
 
diff --git a/build/CI/testing/kylin_utils/kylin.py b/build/CI/testing/kylin_utils/kylin.py
index 164f2ca..1cb9a46 100644
--- a/build/CI/testing/kylin_utils/kylin.py
+++ b/build/CI/testing/kylin_utils/kylin.py
@@ -259,7 +259,7 @@ class KylinHttpClient(BasicHttpClient):  # pylint: disable=too-many-public-metho
         return resp
 
     def update_cube_engine(self, cube_name, engine_type):
-        url = '/cubes/{cube}/{engine}'.format(cube=cube_name, engine=engine_type)
+        url = '/cubes/{cube}/engine/{engine}'.format(cube=cube_name, engine=engine_type)
         resp = self._request('PUT', url)
         return resp
 
diff --git a/dev-support/build-release/packaging.sh b/dev-support/build-release/packaging.sh
index 7997095..9084642 100644
--- a/dev-support/build-release/packaging.sh
+++ b/dev-support/build-release/packaging.sh
@@ -17,33 +17,32 @@
 # limitations under the License.
 #
 
-### Thank you for https://github.com/apache/spark/tree/master/dev/create-release .
-
+## Refer to https://github.com/apache/spark/tree/master/dev/create-release
 # docker build -f Dockerfile -t apachekylin/release-machine:jdk8-slim .
-# docker run --name machine apachekylin/release-machine:jdk8-slim
 
+ENVFILE="env.list"
 cat > $ENVFILE <<EOF
 DRY_RUN=$DRY_RUN
-SKIP_TAG=$SKIP_TAG
-RUNNING_IN_DOCKER=1
+RUNNING_CI=$RUNNING_CI
 GIT_BRANCH=$GIT_BRANCH
-NEXT_VERSION=$NEXT_VERSION
+GIT_BRANCH_HADOOP3=$GIT_BRANCH_HADOOP3
+NEXT_RELEASE_VERSION=$NEXT_RELEASE_VERSION
 RELEASE_VERSION=$RELEASE_VERSION
 RELEASE_TAG=$RELEASE_TAG
 GIT_REF=$GIT_REF
-ASF_USERNAME=$ASF_USERNAME
+GIT_REPO_URL=$GIT_REPO_URL
 GIT_NAME=$GIT_NAME
 GIT_EMAIL=$GIT_EMAIL
 GPG_KEY=$GPG_KEY
+ASF_USERNAME=$ASF_USERNAME
 ASF_PASSWORD=$ASF_PASSWORD
 GPG_PASSPHRASE=$GPG_PASSPHRASE
-RELEASE_STEP=$RELEASE_STEP
 USER=$USER
 EOF
 
-
 docker run -ti \
   --env-file "$ENVFILE" \
+  --name kylin-release-machine \
   apachekylin/release-machine:jdk8-slim
 
-docker cp machine:/root/kylin/dist/apache-kylin-*-SNAPSHOT-bin.tar.gz .
\ No newline at end of file
+docker cp kylin-release-machine:/root/ci/apache-kylin-bin.tar.gz ../../apache-kylin-bin.tar.gz
\ No newline at end of file
diff --git a/dev-support/build-release/script/build_release.sh b/dev-support/build-release/script/build_release.sh
index 4e02d28..0f2aa68 100644
--- a/dev-support/build-release/script/build_release.sh
+++ b/dev-support/build-release/script/build_release.sh
@@ -17,8 +17,11 @@
 # limitations under the License.
 #
 
-PACKAGE_ENABLE=false
-RELEASE_ENABLE=false
+export LC_ALL=C.UTF-8
+export LANG=C.UTF-8
+set -e
+PACKAGE_ENABLE=1
+RELEASE_ENABLE=0
 
 function exit_with_usage {
   cat << EOF
@@ -26,14 +29,13 @@ usage: release-build.sh <package|publish-rc>
 Creates build deliverables from a Kylin commit.
 
 Top level targets are
-  package: Create binary packages and commit them to dist.apache.org/repos/dist/dev/spark/
+  package: Create binary packages and commit them to dist.apache.org/repos/dist/dev/kylin/
   publish-rc: Publish snapshot release to Apache snapshots
 
 All other inputs are environment variables
 
 GIT_REF - Release tag or commit to build from
-SPARK_PACKAGE_VERSION - Release identifier in top level package directory (e.g. 2.1.2-rc1)
-SPARK_VERSION - (optional) Version of Spark being built (e.g. 2.1.2)
+RELEASE_VERSION - Release identifier in top level package directory (e.g. 3.1.2)
 
 ASF_USERNAME - Username of ASF committer account
 ASF_PASSWORD - Password of ASF committer account
@@ -53,104 +55,95 @@ if [[ $@ == *"help"* ]]; then
 fi
 
 if [[ "$1" == "package" ]]; then
-    PACKAGE_ENABLE=true
+    PACKAGE_ENABLE=1
 fi
 
 if [[ "$1" == "publish-rc" ]]; then
-    PACKAGE_ENABLE=true
-    RELEASE_ENABLE=true
+    PACKAGE_ENABLE=1
+    RELEASE_ENABLE=1
 fi
 
-set -e
-export LC_ALL=C.UTF-8
-export LANG=C.UTF-8
-
-####################################################
-####################################################
 ####################################################
 ####################################################
 #### Configuration
 
-KYLIN_PACKAGE_BRANCH=master
-KYLIN_PACKAGE_BRANCH_HADOOP3=master-hadoop3
-KYLIN_PACKAGE_VERSION=3.1.1
-KYLIN_PACKAGE_VERSION_RC=3.1.1-rc1
-NEXT_RELEASE_VERSION=3.1.2-SNAPSHOT
-ASF_USERNAME=xxyu
-ASF_PASSWORD=123
-GPG_KEY=123
-GPG_PASSPHRASE=123
+KYLIN_PACKAGE_BRANCH=${GIT_BRANCH:-master}
+KYLIN_PACKAGE_BRANCH_HADOOP3=${GIT_BRANCH_HADOOP3:-master-hadoop3}
+ASF_USERNAME=${ASF_USERNAME:-xxyu}
+RELEASE_VERSION=${RELEASE_VERSION:-3.1.2}
+NEXT_RELEASE_VERSION=${NEXT_RELEASE_VERSION:-3.1.3}
+RUNNING_CI=${RUNNING_CI:-1}
+GIT_REPO_URL=${GIT_REPO_URL:-https://github.com/apache/kylin.git}
 
 export source_release_folder=/root/kylin-release-folder/
 export binary_release_folder=/root/kylin-release-folder/kylin_bin
 export svn_release_folder=/root/dist/dev/kylin
+export ci_package_folder=/root/ci
 
 RELEASE_STAGING_LOCATION="https://dist.apache.org/repos/dist/dev/kylin"
 
 ####################################################
 ####################################################
-####################################################
-####################################################
 #### Prepare source code
 
+mkdir -p $source_release_folder
+mkdir -p $binary_release_folder
+mkdir -p $svn_release_folder
+mkdir -p $ci_package_folder
 
 cd $source_release_folder
-git clone https://github.com/apache/kylin.git
+git clone $GIT_REPO_URL
 cd kylin
 git checkout ${KYLIN_PACKAGE_BRANCH}
 git pull --rebase
-git checkout ${KYLIN_PACKAGE_BRANCH}
 
 cp /root/apache-tomcat-7.0.100.tar.gz $source_release_folder/kylin/build
 
 
-####################################################
-####################################################
-####################################################
-####################################################
-#### Prepare tag & source tarball & upload maven artifact
-
-# Use release-plugin to check license & build source package & build and upload maven artifact
-#mvn -DskipTests -DreleaseVersion=${KYLIN_PACKAGE_VERSION} -DdevelopmentVersion=${NEXT_RELEASE_VERSION}-SNAPSHOT -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:prepare
-#mvn -DskipTests -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:perform
-
+if [[ "$RELEASE_ENABLE" == "1" ]]; then
+  echo "Build with maven-release-plugin ..."
+  ## Prepare tag & source tarball & upload maven artifact
 
-####################################################
-####################################################
-####################################################
-####################################################
-####
-
-# Create a directory for this release candidate
-#mkdir ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
-#rm -rf target/apache-kylin-*ource-release.zip.asc.sha256
+  # Use release-plugin to check license & build source package & build and upload maven artifact
+  #mvn -DskipTests -DreleaseVersion=${KYLIN_PACKAGE_VERSION} -DdevelopmentVersion=${NEXT_RELEASE_VERSION}-SNAPSHOT -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:prepare
+  #mvn -DskipTests -Papache-release -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests" release:perform
 
-# Move source code and signature of source code to release candidate directory
-#cp target/apache-kylin-*source-release.zip* ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+  # Create a directory for this release candidate
+  #mkdir ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+  #rm -rf target/apache-kylin-*ource-release.zip.asc.sha256
 
-# Go to package directory
-#cd $binary_release_folder
-#git checkout ${KYLIN_PACKAGE_BRANCH}
-#git pull --rebase
+  # Move source code and signature of source code to release candidate directory
+  #cp target/apache-kylin-*source-release.zip* ${svn_release_folder}/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
 
-# Checkout to tag, which is created by maven-release-plugin
-# Commit message looks like "[maven-release-plugin] prepare release kylin-4.0.0-alpha"
-#git checkout kylin-${RELEASE_VERSION}
+  # Go to package directory
+  #cd $binary_release_folder
+  #git checkout ${KYLIN_PACKAGE_BRANCH}
+  #git pull --rebase
 
+  # Checkout to tag, which is created by maven-release-plugin
+  # Commit message looks like "[maven-release-plugin] prepare release kylin-4.0.0-alpha"
+  #git checkout kylin-${RELEASE_VERSION}
+fi
 
 ####################################################
 ####################################################
-####################################################
-####################################################
 #### Build binary
 
-
-# Build first packages
+# Build first package for Hadoop2
 build/script/package.sh
+if [[ "$RUNNING_CI" == "1" ]]; then
+    cp dist/apache-kylin-${RELEASE_VERSION}-bin.tar.gz ${ci_package_folder}
+    cd ${ci_package_folder}
+    tar -zxf apache-kylin-${RELEASE_VERSION}-bin.tar.gz
+    mv apache-kylin-${RELEASE_VERSION}-bin apache-kylin-bin
+    tar -cvzf apache-kylin-bin.tar.gz apache-kylin-bin
+    rm -rf apache-kylin-${RELEASE_VERSION}-bin
+    cd -
+fi
 tar -zxf dist/apache-kylin-${RELEASE_VERSION}-bin.tar.gz
-mv apache-kylin-${RELEASE_VERSION}-bin apache-kylin-${RELEASE_VERSION}-bin-hadoop2
-tar -cvzf ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}/apache-kylin-${RELEASE_VERSION}-bin-hadoop2.tar.gz apache-kylin-${RELEASE_VERSION}-bin-hadoop2
-rm -rf apache-kylin-${RELEASE_VERSION}-bin-hadoop2
+mv apache-kylin-${RELEASE_VERSION}-bin apache-kylin-${RELEASE_VERSION}-bin-hbase1x
+tar -cvzf ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}/apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz apache-kylin-${RELEASE_VERSION}-bin-hbase1x
+rm -rf apache-kylin-${RELEASE_VERSION}-bin-hbase1x
 
 #build/script/package.sh -P cdh5.7
 #tar -zxf dist/apache-kylin-${RELEASE_VERSION}-bin.tar.gz
@@ -158,26 +151,21 @@ rm -rf apache-kylin-${RELEASE_VERSION}-bin-hadoop2
 #tar -cvzf ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}/apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz apache-kylin-${RELEASE_VERSION}-bin-cdh57
 #rm -rf apache-kylin-${RELEASE_VERSION}-bin-cdh57
 
-
-####################################################
 ####################################################
 ####################################################
-####################################################
-#### Sign binary
+#### Release binary
 
-#cd ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
-#gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz
-#shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.sha256
-#
-#gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz
-#shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.sha256
+if [[ "$RELEASE_ENABLE" == "1" ]]; then
+  ## Sign binary
+  cd ~/dist/dev/kylin/apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+  gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz
+  shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-hbase1x.tar.gz.sha256
 
-####################################################
-####################################################
-####################################################
-####################################################
-#### Upload to svn repository
+  # gpg --armor --output apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.asc --detach-sig apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz
+  # shasum -a 256 apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz > apache-kylin-${RELEASE_VERSION}-bin-cdh57.tar.gz.sha256
 
-#cd ..
-#svn add apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
-#svn commit -m 'Checkin release artifacts for '${KYLIN_PACKAGE_VERSION_RC}
\ No newline at end of file
+  ## Upload to svn repository
+  cd ..
+  svn add apache-kylin-${KYLIN_PACKAGE_VERSION_RC}
+  svn commit -m 'Checkin release artifacts for '${KYLIN_PACKAGE_VERSION_RC}
+fi
\ No newline at end of file
diff --git a/docker/dockerfile/cluster/client/run_cli.sh b/docker/dockerfile/cluster/client/run_cli.sh
index fcdd71c..ffd1ecd 100644
--- a/docker/dockerfile/cluster/client/run_cli.sh
+++ b/docker/dockerfile/cluster/client/run_cli.sh
@@ -4,12 +4,17 @@
 /opt/entrypoint/hive/entrypoint.sh
 /opt/entrypoint/hbase/entrypoint.sh
 
-sleep 180
+sleep 100
 
 cd $KYLIN_HOME
-sh bin/sample.sh
 sh bin/kylin.sh start
 
+# Only one node executes sample.sh
+if [[ $HOSTNAME =~ "kylin-all" ]]
+then
+  sh bin/sample.sh start
+fi
+
 while :
 do
     sleep 100
diff --git a/docker/setup_cluster.sh b/docker/setup_hadoop_cluster.sh
similarity index 86%
rename from docker/setup_cluster.sh
rename to docker/setup_hadoop_cluster.sh
index b323cd7..dd1db96 100644
--- a/docker/setup_cluster.sh
+++ b/docker/setup_hadoop_cluster.sh
@@ -58,21 +58,18 @@ fi
 
 
 if [ $CLUSTER_MODE == "write" ]; then
-  echo "Restart Kylin cluster & HBase cluster ......"
-  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml down
+  echo "Restart HBase cluster ......"
   if [ $ENABLE_HBASE == "yes" ]; then
     KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml down
     sleep 2
     KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml up -d
   fi
-  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml up -d
 fi
 
 if [ $CLUSTER_MODE == "write-read" ]; then
-  echo "Restart Kylin cluster[write-read mode] & Read HBase cluster ......"
+  echo "Restart standalone HBase/Hadoop cluster ......"
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml down
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml down
-  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml down
   sleep 5
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml up -d
   KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml up -d
@@ -82,7 +79,4 @@ if [ $CLUSTER_MODE == "write-read" ]; then
     sleep 2
     KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hbase.yml up -d
   fi
-
-  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml up -d
-fi
-
+fi
\ No newline at end of file
diff --git a/dev-support/build-release/packaging.sh b/docker/setup_service.sh
similarity index 50%
copy from dev-support/build-release/packaging.sh
copy to docker/setup_service.sh
index 7997095..e476e17 100644
--- a/dev-support/build-release/packaging.sh
+++ b/docker/setup_service.sh
@@ -1,5 +1,4 @@
-#!/usr/bin/env bash
-
+#!/bin/bash
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -17,33 +16,21 @@
 # limitations under the License.
 #
 
-### Thank you for https://github.com/apache/spark/tree/master/dev/create-release .
-
-# docker build -f Dockerfile -t apachekylin/release-machine:jdk8-slim .
-# docker run --name machine apachekylin/release-machine:jdk8-slim
+SCRIPT_PATH=$(cd `dirname $0`; pwd)
+WS_ROOT=`dirname $SCRIPT_PATH`
 
-cat > $ENVFILE <<EOF
-DRY_RUN=$DRY_RUN
-SKIP_TAG=$SKIP_TAG
-RUNNING_IN_DOCKER=1
-GIT_BRANCH=$GIT_BRANCH
-NEXT_VERSION=$NEXT_VERSION
-RELEASE_VERSION=$RELEASE_VERSION
-RELEASE_TAG=$RELEASE_TAG
-GIT_REF=$GIT_REF
-ASF_USERNAME=$ASF_USERNAME
-GIT_NAME=$GIT_NAME
-GIT_EMAIL=$GIT_EMAIL
-GPG_KEY=$GPG_KEY
-ASF_PASSWORD=$ASF_PASSWORD
-GPG_PASSPHRASE=$GPG_PASSPHRASE
-RELEASE_STEP=$RELEASE_STEP
-USER=$USER
-EOF
+source ${SCRIPT_PATH}/header.sh
 
+if [ $CLUSTER_MODE == "write" ]; then
+  echo "Restart Kylin cluster  ......"
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml down
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml up -d
+fi
 
-docker run -ti \
-  --env-file "$ENVFILE" \
-  apachekylin/release-machine:jdk8-slim
+if [ $CLUSTER_MODE == "write-read" ]; then
+  echo "Restart Kylin cluster[write-read mode] ......"
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml down
+  sleep 5
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml up -d
+fi
 
-docker cp machine:/root/kylin/dist/apache-kylin-*-SNAPSHOT-bin.tar.gz .
\ No newline at end of file
diff --git a/docker/stop_cluster.sh b/docker/stop_cluster.sh
index 24ce4e8..2a13057 100644
--- a/docker/stop_cluster.sh
+++ b/docker/stop_cluster.sh
@@ -42,6 +42,6 @@ KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docke
 KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml down
 
 # clean data
-#rm -rf ${SCRIPT_PATH}/docker-compose/write/data/*
-#rm -rf ${SCRIPT_PATH}/docker-compose/read/data/*
-#rm -rf ${SCRIPT_PATH}/docker-compose/others/data/*
+rm -rf ${SCRIPT_PATH}/docker-compose/write/data/*
+rm -rf ${SCRIPT_PATH}/docker-compose/read/data/*
+rm -rf ${SCRIPT_PATH}/docker-compose/others/data/*


[kylin] 03/13: KYLIN-4775 Fix

Posted by xx...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin-on-parquet-v2
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 823059d077a6d439b932ef99173671d177b3aa87
Author: XiaoxiangYu <xx...@apache.org>
AuthorDate: Mon Oct 19 20:06:39 2020 +0800

    KYLIN-4775 Fix
---
 docker/dockerfile/cluster/client/Dockerfile |  4 +-
 docker/setup_cluster.sh                     | 64 +++++++++++++++++++++++++++--
 2 files changed, 63 insertions(+), 5 deletions(-)

diff --git a/docker/dockerfile/cluster/client/Dockerfile b/docker/dockerfile/cluster/client/Dockerfile
index 46c1822..48008c1 100644
--- a/docker/dockerfile/cluster/client/Dockerfile
+++ b/docker/dockerfile/cluster/client/Dockerfile
@@ -145,10 +145,12 @@ RUN echo spark.sql.catalogImplementation=hive > $SPARK_HOME/conf/spark-defaults.
 
 ENV PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$ZK_HOME/bin
 
-# 设置所有组件的客户端配置
+# Hadoop Client Configuration
 COPY entrypoint.sh /opt/entrypoint/client/entrypoint.sh
 RUN chmod a+x /opt/entrypoint/client/entrypoint.sh
 
+RUN rm -f /opt/hbase-${HBASE_VERSION}/lib/hadoop-*.jar
+
 COPY run_cli.sh /run_cli.sh
 RUN chmod a+x  /run_cli.sh
 
diff --git a/docker/setup_cluster.sh b/docker/setup_cluster.sh
index e7ae80f..34cc01e 100644
--- a/docker/setup_cluster.sh
+++ b/docker/setup_cluster.sh
@@ -22,7 +22,63 @@ WS_ROOT=`dirname $SCRIPT_PATH`
 source ${SCRIPT_PATH}/build_cluster_images.sh
 
 # restart cluster
-source ${SCRIPT_PATH}/build_cluster_images.sh
-KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-write.yml down
-sleep 10
-KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/dokcer-compose/write/docker-compose-write.yml up -d
+
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-zookeeper.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-metastore.yml down
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hive.yml down
+sleep 5
+# hadoop
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hadoop.yml up -d
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-zookeeper.yml up -d
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-metastore.yml up -d
+KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hive.yml up -d
+
+
+if [ $ENABLE_KERBEROS == "yes" ]; then
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kerberos.yml down
+  sleep 2
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kerberos.yml up -d
+fi
+
+if [ $ENABLE_LDAP == "yes" ]; then
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-ldap.yml down
+  sleep 2
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-ldap.yml up -d
+fi
+
+if [ $ENABLE_KAFKA == "yes" ]; then
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-kafka.yml down
+  sleep 2
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-kafka.yml up -d
+fi
+
+
+if [ $CLUSTER_MODE == "write" ]; then
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml down
+  if [ $ENABLE_HBASE == "yes" ]; then
+    KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml down
+    sleep 2
+    KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/write/docker-compose-hbase.yml up -d
+  fi
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write.yml up -d
+fi
+
+# restart cluster
+if [ $CLUSTER_MODE == "write-read" ]; then
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml down
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml down
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml down
+  sleep 5
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-zookeeper.yml up -d
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hadoop.yml up -d
+
+  if [ $ENABLE_HBASE == "yes" ]; then
+    KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hbase.yml down
+    sleep 2
+    KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/read/docker-compose-hbase.yml up -d
+  fi
+
+  KYLIN_WS=${WS_ROOT} docker-compose -f ${SCRIPT_PATH}/docker-compose/others/docker-compose-kylin-write-read.yml up -d
+fi
+