Posted to commits@kylin.apache.org by ni...@apache.org on 2019/08/15 13:34:24 UTC

[kylin] branch 2.6.x updated (91fa4ac -> 75179ab)

This is an automated email from the ASF dual-hosted git repository.

nic pushed a change to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git.


    from 91fa4ac  KYLIN-4106 fix Illegal partition for SelfDefineSortableKey
     new 37ee3a8  KYLIN-4046 Refine JDBC Source(source.default=8)
     new 9650c8d  KYLIN-4047 Use push-down query when division dynamic column cube query is not supported (#689)
     new d192bcd  KYLIN-4013 Only show the cubes under one model
     new 53bcf44  Kylin-4037 Fix can't Cleanup Data in HBase's HDFS Storage When Deploying Apache Kylin with Standalone HBase Cluster (#687)
     new 6703c7c  KYLIN-3628 Fix unexpected exception for select from Lookup Table
     new 4a30ba4  [KYLIN-4066] Fix No planner for not ROLE_ADMIN user
     new 063456b  KYLIN-4115 Always load kafkaConsumerProperties
     new 13e8e9b  KYLIN-1856 Clean up old error in step output immediately after resume job
     new b767f4c  KYLIN-4057 Don't merge the job that has been discarded manually before.
     new 93da88d  KYLIN-4093: Slow query pages should be open to all users of the project
     new 75179ab  KYLIN-4121 Cleanup hive view intermediate tables after the job is finished

The 11 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/kylin/common/SourceDialect.java     |  55 ++--
 .../java/org/apache/kylin/cube/CubeManager.java    |  38 ++-
 .../java/org/apache/kylin/job/JoinedFlatTable.java |   2 +-
 .../kylin/job/execution/ExecutableManager.java     |   2 +-
 .../kylin/exception/QueryOnCubeException.java      |  22 +-
 .../metadata/expression/BinaryTupleExpression.java |   5 +-
 .../apache/kylin/metadata/model/PartitionDesc.java |  27 +-
 .../org/apache/kylin/metadata/model/TblColRef.java |  18 ++
 .../metadata/expression/TupleExpressionTest.java   |   9 +-
 .../DefaultPartitionConditionBuilderTest.java      |   8 +-
 .../java/org/apache/kylin/engine/mr/CubingJob.java |   5 +
 pom.xml                                            |   3 +
 .../query/enumerator/LookupTableEnumerator.java    |   2 +-
 .../org/apache/kylin/query/util/PushDownUtil.java  |   4 +-
 .../apache/kylin/rest/job/StorageCleanupJob.java   | 108 ++++---
 .../org/apache/kylin/rest/service/CubeService.java |  18 +-
 .../kylin/rest/job/StorageCleanupJobTest.java      |   8 +-
 .../kylin/source/hive/GarbageCollectionStep.java   |  10 +-
 .../org/apache/kylin/source/jdbc/JdbcDialect.java  |  26 --
 .../org/apache/kylin/source/jdbc/JdbcExplorer.java |  17 +-
 .../kylin/source/jdbc/JdbcHiveInputBase.java       | 344 +++++++++++++++++++--
 .../apache/kylin/source/jdbc/JdbcTableReader.java  |  15 +-
 .../java/org/apache/kylin/source/jdbc/SqlUtil.java |   5 +-
 .../source/jdbc/extensible/JdbcHiveInputBase.java  |   2 +-
 .../source/jdbc/metadata/DefaultJdbcMetadata.java  |   6 +-
 .../kylin/source/jdbc/metadata/IJdbcMetadata.java  |   4 +
 .../source/jdbc/metadata/JdbcMetadataFactory.java  |  16 +-
 .../source/jdbc/metadata/MySQLJdbcMetadata.java    |   6 +
 .../jdbc/metadata/SQLServerJdbcMetadata.java       |   6 +
 .../apache/kylin/source/jdbc/JdbcExplorerTest.java |   4 +-
 .../kylin/source/jdbc/JdbcHiveInputBaseTest.java   |  68 ++++
 .../jdbc/metadata/JdbcMetadataFactoryTest.java     |  35 ---
 .../org/apache/kylin/source/kafka/KafkaSource.java |   5 +-
 .../kylin/source/kafka/util/KafkaClient.java       |  10 +-
 .../kylin/storage/hbase/HBaseConnection.java       |   9 +
 webapp/app/js/controllers/models.js                |  30 +-
 webapp/app/partials/cubes/cube_detail.html         |   2 +-
 webapp/app/partials/jobs/jobs.html                 |   3 +-
 webapp/app/partials/models/models_tree.html        |   1 +
 39 files changed, 724 insertions(+), 234 deletions(-)
 copy core-job/src/main/java/org/apache/kylin/job/exception/JobException.java => core-common/src/main/java/org/apache/kylin/common/SourceDialect.java (56%)
 copy core-job/src/main/java/org/apache/kylin/job/exception/ExecuteException.java => core-metadata/src/main/java/org/apache/kylin/exception/QueryOnCubeException.java (69%)
 delete mode 100644 source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
 create mode 100644 source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcHiveInputBaseTest.java
 delete mode 100644 source-jdbc/src/test/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactoryTest.java


[kylin] 09/11: KYLIN-4057 Don't merge the job that has been discarded manually before.

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit b767f4c08682155a3d6818aa6619a3380bb846aa
Author: rupengwang <wa...@live.cn>
AuthorDate: Wed Jul 17 17:32:21 2019 +0800

    KYLIN-4057 Don't merge the job that has been discarded manually before.
---
 .../java/org/apache/kylin/engine/mr/CubingJob.java     |  5 +++++
 .../org/apache/kylin/rest/service/CubeService.java     | 18 +++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/engine-mr/src/main/java/org/apache/kylin/engine/mr/CubingJob.java b/engine-mr/src/main/java/org/apache/kylin/engine/mr/CubingJob.java
index fb1a7f4..13f4183 100644
--- a/engine-mr/src/main/java/org/apache/kylin/engine/mr/CubingJob.java
+++ b/engine-mr/src/main/java/org/apache/kylin/engine/mr/CubingJob.java
@@ -106,6 +106,7 @@ public class CubingJob extends DefaultChainedExecutable {
     private static final String DEPLOY_ENV_NAME = "envName";
     private static final String PROJECT_INSTANCE_NAME = "projectName";
     private static final String JOB_TYPE = "jobType";
+    private static final String SEGMENT_NAME = "segmentName";
 
     public static CubingJob createBuildJob(CubeSegment seg, String submitter, JobEngineConfig config) {
         return initCubingJob(seg, CubingJobTypeEnum.BUILD.toString(), submitter, config);
@@ -185,6 +186,10 @@ public class CubingJob extends DefaultChainedExecutable {
         return getParam(JOB_TYPE);
     }
 
+    public String getSegmentName() {
+        return getParam(SEGMENT_NAME);
+    }
+
     void setJobType(String jobType) {
         setParam(JOB_TYPE, jobType);
     }
diff --git a/server-base/src/main/java/org/apache/kylin/rest/service/CubeService.java b/server-base/src/main/java/org/apache/kylin/rest/service/CubeService.java
index 2a5ce26..b975cdc 100644
--- a/server-base/src/main/java/org/apache/kylin/rest/service/CubeService.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/service/CubeService.java
@@ -684,7 +684,7 @@ public class CubeService extends BasicService implements InitializingBean {
             try {
                 cube = getCubeManager().getCube(cubeName);
                 SegmentRange offsets = cube.autoMergeCubeSegments();
-                if (offsets != null) {
+                if (offsets != null && !isMergingJobBeenDiscarded(cube, cubeName, cube.getProject(), offsets)) {
                     CubeSegment newSeg = getCubeManager().mergeSegments(cube, null, offsets, true);
                     logger.debug("Will submit merge job on " + newSeg);
                     DefaultChainedExecutable job = EngineFactory.createBatchMergeJob(newSeg, "SYSTEM");
@@ -698,6 +698,22 @@ public class CubeService extends BasicService implements InitializingBean {
         }
     }
 
+    //Don't merge the job that has been discarded manually before
+    private boolean isMergingJobBeenDiscarded(CubeInstance cubeInstance, String cubeName, String projectName, SegmentRange offsets) {
+        SegmentRange.TSRange tsRange = new SegmentRange.TSRange((Long) offsets.start.v, (Long) offsets.end.v);
+        String segmentName = CubeSegment.makeSegmentName(tsRange, null, cubeInstance.getModel());
+        final List<CubingJob> jobInstanceList = jobService.listJobsByRealizationName(cubeName, projectName, EnumSet.of(ExecutableState.DISCARDED));
+        for (CubingJob cubingJob : jobInstanceList) {
+            if (cubingJob.getSegmentName().equals(segmentName)) {
+                logger.debug("Merge job {} has been discarded before, will not merge.", segmentName);
+                return true;
+            }
+        }
+
+        return false;
+    }
+
+
     public void validateCubeDesc(CubeDesc desc, boolean isDraft) {
         Message msg = MsgPicker.getMsg();
 


[kylin] 05/11: KYLIN-3628 Fix unexpected exception for select from Lookup Table

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 6703c7c5ec2eabc5a6f5f264f44082be332ff881
Author: XiaoxiangYu <hi...@126.com>
AuthorDate: Wed Jul 3 10:18:25 2019 +0800

    KYLIN-3628 Fix unexpected exception for select from Lookup Table
    
    In some cases, the cube returned by findLatestSnapshot did not contain the snapshot table
    that was needed: when a dimension exists in the model but was removed in the cube design phase, a query such as "select * from LookupTable" threw an unexpected exception and confused the user.
---
 .../java/org/apache/kylin/cube/CubeManager.java    | 38 +++++++++++++++++++---
 .../query/enumerator/LookupTableEnumerator.java    |  2 +-
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/core-cube/src/main/java/org/apache/kylin/cube/CubeManager.java b/core-cube/src/main/java/org/apache/kylin/cube/CubeManager.java
index db9c095..6a96f18 100755
--- a/core-cube/src/main/java/org/apache/kylin/cube/CubeManager.java
+++ b/core-cube/src/main/java/org/apache/kylin/cube/CubeManager.java
@@ -45,6 +45,7 @@ import org.apache.kylin.common.util.Pair;
 import org.apache.kylin.common.util.RandomUtil;
 import org.apache.kylin.cube.cuboid.Cuboid;
 import org.apache.kylin.cube.model.CubeDesc;
+import org.apache.kylin.cube.model.DimensionDesc;
 import org.apache.kylin.cube.model.SnapshotTableDesc;
 import org.apache.kylin.dict.DictionaryInfo;
 import org.apache.kylin.dict.DictionaryManager;
@@ -1223,16 +1224,21 @@ public class CubeManager implements IRealizationProvider {
         }
     }
 
-    public CubeInstance findLatestSnapshot(List<RealizationEntry> realizationEntries, String lookupTableName) {
+    /**
+     * To keep "select * from LOOKUP_TABLE" has consistent and latest result, we manually choose
+     * CubeInstance here to answer such query.
+     */
+    public CubeInstance findLatestSnapshot(List<RealizationEntry> realizationEntries, String lookupTableName,
+            CubeInstance cubeInstance) {
         CubeInstance cube = null;
-        if (realizationEntries.size() > 0) {
+        if (!realizationEntries.isEmpty()) {
             long maxBuildTime = Long.MIN_VALUE;
             RealizationRegistry registry = RealizationRegistry.getInstance(config);
             for (RealizationEntry entry : realizationEntries) {
                 IRealization realization = registry.getRealization(entry.getType(), entry.getRealization());
                 if (realization != null && realization.isReady() && realization instanceof CubeInstance) {
-                    if (realization.getModel().isLookupTable(lookupTableName)) {
-                        CubeInstance current = (CubeInstance) realization;
+                    CubeInstance current = (CubeInstance) realization;
+                    if (checkMeetSnapshotTable(current, lookupTableName)) {
                         CubeSegment segment = current.getLatestReadySegment();
                         if (segment != null) {
                             long latestBuildTime = segment.getLastBuildTime();
@@ -1245,6 +1251,30 @@ public class CubeManager implements IRealizationProvider {
                 }
             }
         }
+        if (!cubeInstance.equals(cube)) {
+            logger.debug("Picked cube {} over {} as it provides a more recent snapshot of the lookup table {}", cube,
+                    cubeInstance, lookupTableName);
+        }
         return cube;
     }
+
+    /**
+     * check if {toCheck} has snapshot of {lookupTableName}
+     * @param lookupTableName look like {SCHEMA}.{TABLE}
+     */
+    private boolean checkMeetSnapshotTable(CubeInstance toCheck, String lookupTableName) {
+        boolean checkRes = false;
+        String lookupTbl = lookupTableName;
+        String[] strArr = lookupTableName.split("\\.");
+        if (strArr.length > 1) {
+            lookupTbl = strArr[strArr.length - 1];
+        }
+        for (DimensionDesc dimensionDesc : toCheck.getDescriptor().getDimensions()) {
+            if (dimensionDesc.getTable().equalsIgnoreCase(lookupTbl)) {
+                checkRes = true;
+                break;
+            }
+        }
+        return checkRes;
+    }
 }
diff --git a/query/src/main/java/org/apache/kylin/query/enumerator/LookupTableEnumerator.java b/query/src/main/java/org/apache/kylin/query/enumerator/LookupTableEnumerator.java
index f8c7ad4..a4f28a8 100644
--- a/query/src/main/java/org/apache/kylin/query/enumerator/LookupTableEnumerator.java
+++ b/query/src/main/java/org/apache/kylin/query/enumerator/LookupTableEnumerator.java
@@ -60,7 +60,7 @@ public class LookupTableEnumerator implements Enumerator<Object[]> {
             List<RealizationEntry> realizationEntries = project.getRealizationEntries();
             String lookupTableName = olapContext.firstTableScan.getTableName();
             CubeManager cubeMgr = CubeManager.getInstance(cube.getConfig());
-            cube = cubeMgr.findLatestSnapshot(realizationEntries, lookupTableName);
+            cube = cubeMgr.findLatestSnapshot(realizationEntries, lookupTableName, cube);
             olapContext.realization = cube;
         } else if (olapContext.realization instanceof HybridInstance) {
             final HybridInstance hybridInstance = (HybridInstance) olapContext.realization;
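
The fix has two parts visible above: findLatestSnapshot now verifies, via checkMeetSnapshotTable, that a candidate cube's design still contains the lookup table before it can win on build time, and the schema prefix is stripped so "SCHEMA.TABLE" matches the dimension's table name. A toy sketch of that selection logic, with a hypothetical Cube stand-in:

    import java.util.Arrays;
    import java.util.List;

    public class SnapshotPicker {

        // Hypothetical stand-in for a cube that may hold a snapshot of a lookup table.
        static final class Cube {
            final String name;
            final List<String> dimensionTables; // table names without schema
            final long lastBuildTime;
            Cube(String name, List<String> dimensionTables, long lastBuildTime) {
                this.name = name;
                this.dimensionTables = dimensionTables;
                this.lastBuildTime = lastBuildTime;
            }
        }

        // Strip an optional "SCHEMA." prefix, mirroring checkMeetSnapshotTable above.
        static String tableOnly(String lookupTableName) {
            String[] parts = lookupTableName.split("\\.");
            return parts[parts.length - 1];
        }

        // Pick the most recently built cube that actually contains the lookup table.
        static Cube findLatestSnapshot(List<Cube> candidates, String lookupTableName) {
            String table = tableOnly(lookupTableName);
            Cube best = null;
            for (Cube c : candidates) {
                boolean hasTable = c.dimensionTables.stream().anyMatch(t -> t.equalsIgnoreCase(table));
                if (hasTable && (best == null || c.lastBuildTime > best.lastBuildTime)) {
                    best = c;
                }
            }
            return best; // may be null if no ready cube kept the snapshot
        }

        public static void main(String[] args) {
            List<Cube> cubes = Arrays.asList(
                    new Cube("cube_without_dim", Arrays.asList("CAL_DT"), 300L),
                    new Cube("cube_with_dim", Arrays.asList("CAL_DT", "CATEGORY"), 200L));
            // "cube_without_dim" is newer but dropped CATEGORY at design time, so it must not win.
            System.out.println(findLatestSnapshot(cubes, "DEFAULT.CATEGORY").name); // cube_with_dim
        }
    }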


[kylin] 07/11: KYLIN-4115 Always load kafkaConsumerProperties

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 063456b4c8ede083806a4fc79df68cbc4623c5fb
Author: chenzhx <ch...@apache.org>
AuthorDate: Fri Jul 26 18:18:13 2019 +0800

    KYLIN-4115 Always load kafkaConsumerProperties
---
 .../main/java/org/apache/kylin/source/kafka/KafkaSource.java   |  5 ++++-
 .../java/org/apache/kylin/source/kafka/util/KafkaClient.java   | 10 +++++-----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSource.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSource.java
index 0680614..3a7182c 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSource.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/KafkaSource.java
@@ -21,6 +21,7 @@ package org.apache.kylin.source.kafka;
 import java.io.IOException;
 import java.util.List;
 import java.util.Map;
+import java.util.Properties;
 
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.common.PartitionInfo;
@@ -44,6 +45,7 @@ import org.apache.kylin.source.ISource;
 import org.apache.kylin.source.ISourceMetadataExplorer;
 import org.apache.kylin.source.SourcePartition;
 import org.apache.kylin.source.kafka.config.KafkaConfig;
+import org.apache.kylin.source.kafka.config.KafkaConsumerProperties;
 import org.apache.kylin.source.kafka.util.KafkaClient;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -105,7 +107,8 @@ public class KafkaSource implements ISource {
                 .getKafkaConfig(cube.getRootFactTable());
         final String brokers = KafkaClient.getKafkaBrokers(kafkaConfig);
         final String topic = kafkaConfig.getTopic();
-        try (final KafkaConsumer consumer = KafkaClient.getKafkaConsumer(brokers, cube.getName(), null)) {
+        Properties property = KafkaConsumerProperties.getInstanceFromEnv().extractKafkaConfigToProperties();
+        try (final KafkaConsumer consumer = KafkaClient.getKafkaConsumer(brokers, cube.getName(), property)) {
             final List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
             logger.info("Get {} partitions for topic {} ", partitionInfos.size(), topic);
             for (PartitionInfo partitionInfo : partitionInfos) {
diff --git a/source-kafka/src/main/java/org/apache/kylin/source/kafka/util/KafkaClient.java b/source-kafka/src/main/java/org/apache/kylin/source/kafka/util/KafkaClient.java
index a781f8a..e8ce87d 100644
--- a/source-kafka/src/main/java/org/apache/kylin/source/kafka/util/KafkaClient.java
+++ b/source-kafka/src/main/java/org/apache/kylin/source/kafka/util/KafkaClient.java
@@ -57,16 +57,16 @@ public class KafkaClient {
 
     private static Properties constructDefaultKafkaConsumerProperties(String brokers, String consumerGroup, Properties properties) {
         Properties props = new Properties();
-        if (properties != null) {
-            for (Map.Entry entry : properties.entrySet()) {
-                props.put(entry.getKey(), entry.getValue());
-            }
-        }
         props.put("bootstrap.servers", brokers);
         props.put("key.deserializer", StringDeserializer.class.getName());
         props.put("value.deserializer", StringDeserializer.class.getName());
         props.put("group.id", consumerGroup);
         props.put("enable.auto.commit", "false");
+        if (properties != null) {
+            for (Map.Entry entry : properties.entrySet()) {
+                props.put(entry.getKey(), entry.getValue());
+            }
+        }
         return props;
     }
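
The reordering matters because Properties.put is last-writer-wins: Kylin's defaults must go in before the user-supplied kafkaConsumerProperties are copied, otherwise the defaults silently override whatever the user configured. A minimal sketch:

    import java.util.Properties;

    public class ConsumerPropsOrder {

        // Defaults first, user-supplied overrides last, mirroring the reordered method above.
        static Properties build(Properties userProps) {
            Properties props = new Properties();
            props.put("enable.auto.commit", "false"); // default value
            if (userProps != null) {
                props.putAll(userProps); // later puts win, so user settings take effect
            }
            return props;
        }

        public static void main(String[] args) {
            Properties user = new Properties();
            user.put("enable.auto.commit", "true");
            // Before the fix the default was applied after user properties and silently won.
            System.out.println(build(user).getProperty("enable.auto.commit")); // true
        }
    }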
 


[kylin] 08/11: KYLIN-1856 Clean up old error in step output immediately after resume job

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 13e8e9be10b9699cdb6fa3b9e37c72d14e3332c0
Author: yaqian.zhang <59...@qq.com>
AuthorDate: Mon Jul 29 16:16:15 2019 +0800

    KYLIN-1856 Clean up old error in step output immediately after resume job
---
 .../src/main/java/org/apache/kylin/job/execution/ExecutableManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java b/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
index e8f2512..90c9873 100644
--- a/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
+++ b/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
@@ -348,7 +348,7 @@ public class ExecutableManager {
             List<AbstractExecutable> tasks = ((DefaultChainedExecutable) job).getTasks();
             for (AbstractExecutable task : tasks) {
                 if (task.getStatus() == ExecutableState.ERROR || task.getStatus() == ExecutableState.STOPPED) {
-                    updateJobOutput(task.getId(), ExecutableState.READY, null, null);
+                    updateJobOutput(task.getId(), ExecutableState.READY, null, "no output");
                     break;
                 }
             }
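
A plausible reading of this one-line change: updateJobOutput presumably treats a null output as "leave the stored output unchanged", so a resumed step kept displaying its old error text; writing the literal "no output" overwrites it immediately. A toy sketch of that assumed semantics (the map-based store is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    public class ResumeOutput {

        static final Map<String, String> outputs = new HashMap<>();

        // Assumption: null means "keep whatever is stored", the presumed reason
        // the old error text survived a resume before this fix.
        static void updateJobOutput(String id, String output) {
            if (output != null) {
                outputs.put(id, output);
            }
        }

        public static void main(String[] args) {
            outputs.put("step-1", "java.io.IOException: disk full"); // stale error
            updateJobOutput("step-1", null);        // old behavior on resume: error remains
            System.out.println(outputs.get("step-1"));
            updateJobOutput("step-1", "no output"); // fixed behavior: error cleared immediately
            System.out.println(outputs.get("step-1"));
        }
    }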


[kylin] 04/11: Kylin-4037 Fix can't Cleanup Data in HBase's HDFS Storage When Deploying Apache Kylin with Standalone HBase Cluster (#687)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 53bcf449bc06515b9dbcd21844b2f41e4e1e5dac
Author: wangxiaojing123 <49...@users.noreply.github.com>
AuthorDate: Tue Jul 16 14:52:31 2019 +0800

    Kylin-4037 Fix can't Cleanup Data in HBase's HDFS Storage When Deploying Apache Kylin with Standalone HBase Cluster (#687)
    
    * Kylin-4037 Can't Cleanup Data in HBase's HDFS Storage When Deploying Apache Kylin with Standalone HBase Cluster
    
    * Minor, code format
---
 .../apache/kylin/rest/job/StorageCleanupJob.java   | 108 ++++++++++++---------
 .../kylin/rest/job/StorageCleanupJobTest.java      |   8 +-
 .../kylin/storage/hbase/HBaseConnection.java       |   9 ++
 3 files changed, 74 insertions(+), 51 deletions(-)

diff --git a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
index 3b2e787..c8e73de 100755
--- a/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
+++ b/server-base/src/main/java/org/apache/kylin/rest/job/StorageCleanupJob.java
@@ -38,7 +38,6 @@ import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.AbstractApplication;
 import org.apache.kylin.common.util.CliCommandExecutor;
@@ -57,6 +56,7 @@ import org.apache.kylin.job.execution.ExecutableState;
 import org.apache.kylin.metadata.MetadataConstants;
 import org.apache.kylin.source.ISourceMetadataExplorer;
 import org.apache.kylin.source.SourceManager;
+import org.apache.kylin.storage.hbase.HBaseConnection;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -74,27 +74,27 @@ public class StorageCleanupJob extends AbstractApplication {
             .withDescription("Warning: will delete all kylin intermediate hive tables").create("force");
 
     protected static final Logger logger = LoggerFactory.getLogger(StorageCleanupJob.class);
-    
+
     // ============================================================================
-    
+
     final protected KylinConfig config;
     final protected FileSystem hbaseFs;
     final protected FileSystem defaultFs;
     final protected ExecutableManager executableManager;
-    
+
     protected boolean delete = false;
     protected boolean force = false;
-    
+
     private List<String> hiveGarbageTables = Collections.emptyList();
     private List<String> hbaseGarbageTables = Collections.emptyList();
     private List<String> hdfsGarbageFiles = Collections.emptyList();
     private long hdfsGarbageFileBytes = 0;
 
     public StorageCleanupJob() throws IOException {
-        this(KylinConfig.getInstanceFromEnv(), HadoopUtil.getWorkingFileSystem(),
-                HadoopUtil.getWorkingFileSystem(HBaseConfiguration.create()));
+        this(KylinConfig.getInstanceFromEnv(), HadoopUtil.getWorkingFileSystem(), HBaseConnection
+                .getFileSystemInHBaseCluster(KylinConfig.getInstanceFromEnv().getHdfsWorkingDirectory()));
     }
-    
+
     protected StorageCleanupJob(KylinConfig config, FileSystem defaultFs, FileSystem hbaseFs) {
         this.config = config;
         this.defaultFs = defaultFs;
@@ -105,11 +105,11 @@ public class StorageCleanupJob extends AbstractApplication {
     public void setDelete(boolean delete) {
         this.delete = delete;
     }
-    
+
     public void setForce(boolean force) {
         this.force = force;
     }
-    
+
     public List<String> getHiveGarbageTables() {
         return hiveGarbageTables;
     }
@@ -141,13 +141,13 @@ public class StorageCleanupJob extends AbstractApplication {
         logger.info("force option value: '" + optionsHelper.getOptionValue(OPTION_FORCE) + "'");
         delete = Boolean.parseBoolean(optionsHelper.getOptionValue(OPTION_DELETE));
         force = Boolean.parseBoolean(optionsHelper.getOptionValue(OPTION_FORCE));
-        
+
         cleanup();
     }
-    
+
     // function entrance
     public void cleanup() throws Exception {
-        
+
         cleanUnusedIntermediateHiveTable();
         cleanUnusedHBaseTables();
         cleanUnusedHdfsFiles();
@@ -169,27 +169,19 @@ public class StorageCleanupJob extends AbstractApplication {
         }
     }
 
-    protected class UnusedHdfsFileCollector {
-        LinkedHashSet<Pair<FileSystem, String>> list = new LinkedHashSet<>();
-        
-        public void add(FileSystem fs, String path) {
-            list.add(Pair.newPair(fs, path));
-        }
-    }
-    
     private void cleanUnusedHdfsFiles() throws IOException {
-        
+
         UnusedHdfsFileCollector collector = new UnusedHdfsFileCollector();
         collectUnusedHdfsFiles(collector);
-        
+
         if (collector.list.isEmpty()) {
             logger.info("No HDFS files to clean up");
             return;
         }
-        
+
         long garbageBytes = 0;
         List<String> garbageList = new ArrayList<>();
-        
+
         for (Pair<FileSystem, String> entry : collector.list) {
             FileSystem fs = entry.getKey();
             String path = entry.getValue();
@@ -199,7 +191,7 @@ public class StorageCleanupJob extends AbstractApplication {
                 ContentSummary sum = fs.getContentSummary(new Path(path));
                 if (sum != null)
                     garbageBytes += sum.getLength();
-                
+
                 if (delete) {
                     logger.info("Deleting HDFS path " + path);
                     fs.delete(new Path(path), true);
@@ -210,32 +202,33 @@ public class StorageCleanupJob extends AbstractApplication {
                 logger.error("Error dealing unused HDFS path " + path, e);
             }
         }
-        
+
         hdfsGarbageFileBytes = garbageBytes;
         hdfsGarbageFiles = garbageList;
     }
-    
+
     protected void collectUnusedHdfsFiles(UnusedHdfsFileCollector collector) throws IOException {
         if (StringUtils.isNotEmpty(config.getHBaseClusterFs())) {
-            cleanUnusedHdfsFiles(hbaseFs, collector);
+            cleanUnusedHdfsFiles(hbaseFs, collector, true);
         }
-        cleanUnusedHdfsFiles(defaultFs, collector);
+        cleanUnusedHdfsFiles(defaultFs, collector, false);
     }
-    
-    private void cleanUnusedHdfsFiles(FileSystem fs, UnusedHdfsFileCollector collector) throws IOException {
+
+    private void cleanUnusedHdfsFiles(FileSystem fs, UnusedHdfsFileCollector collector, boolean hbaseFs)
+            throws IOException {
         final JobEngineConfig engineConfig = new JobEngineConfig(config);
         final CubeManager cubeMgr = CubeManager.getInstance(config);
-        
+
         List<String> allHdfsPathsNeedToBeDeleted = new ArrayList<String>();
-        
+
         try {
-            FileStatus[] fStatus = fs.listStatus(new Path(config.getHdfsWorkingDirectory()));
+            FileStatus[] fStatus = fs
+                    .listStatus(Path.getPathWithoutSchemeAndAuthority(new Path(config.getHdfsWorkingDirectory())));
             if (fStatus != null) {
                 for (FileStatus status : fStatus) {
                     String path = status.getPath().getName();
                     if (path.startsWith("kylin-")) {
-                        String kylinJobPath = engineConfig.getHdfsWorkingDirectory() + path;
-                        allHdfsPathsNeedToBeDeleted.add(kylinJobPath);
+                        allHdfsPathsNeedToBeDeleted.add(status.getPath().toString());
                     }
                 }
             }
@@ -249,6 +242,13 @@ public class StorageCleanupJob extends AbstractApplication {
             final ExecutableState state = executableManager.getOutput(jobId).getState();
             if (!state.isFinalState()) {
                 String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobId);
+
+                if (hbaseFs) {
+                    path = HBaseConnection.makeQualifiedPathInHBaseCluster(path);
+                } else {//Compatible with local fs, unit tests, mockito
+                    Path p = Path.getPathWithoutSchemeAndAuthority(new Path(path));
+                    path = HadoopUtil.getFileSystem(path).makeQualified(p).toString();
+                }
                 allHdfsPathsNeedToBeDeleted.remove(path);
                 logger.info("Skip " + path + " from deletion list, as the path belongs to job " + jobId
                         + " with status " + state);
@@ -261,13 +261,19 @@ public class StorageCleanupJob extends AbstractApplication {
                 String jobUuid = seg.getLastBuildJobID();
                 if (jobUuid != null && jobUuid.equals("") == false) {
                     String path = JobBuilderSupport.getJobWorkingDir(engineConfig.getHdfsWorkingDirectory(), jobUuid);
+                    if (hbaseFs) {
+                        path = HBaseConnection.makeQualifiedPathInHBaseCluster(path);
+                    } else {//Compatible with local fs, unit tests, mockito
+                        Path p = Path.getPathWithoutSchemeAndAuthority(new Path(path));
+                        path = HadoopUtil.getFileSystem(path).makeQualified(p).toString();
+                    }
                     allHdfsPathsNeedToBeDeleted.remove(path);
                     logger.info("Skip " + path + " from deletion list, as the path belongs to segment " + seg
                             + " of cube " + cube.getName());
                 }
             }
         }
-        
+
         // collect the garbage
         for (String path : allHdfsPathsNeedToBeDeleted)
             collector.add(fs, path);
@@ -283,12 +289,12 @@ public class StorageCleanupJob extends AbstractApplication {
                 throw e;
         }
     }
-    
+
     private void cleanUnusedIntermediateHiveTableInternal() throws Exception {
         final int uuidLength = 36;
         final String prefix = MetadataConstants.KYLIN_INTERMEDIATE_PREFIX;
         final String uuidPattern = "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}";
-        
+
         List<String> hiveTableNames = getHiveTables();
         Iterable<String> kylinIntermediates = Iterables.filter(hiveTableNames, new Predicate<String>() {
             @Override
@@ -310,7 +316,7 @@ public class StorageCleanupJob extends AbstractApplication {
 
             try {
                 String segmentId = getSegmentIdFromJobId(jobId);
-                if (segmentId != null) {//some jobs are not cubing jobs 
+                if (segmentId != null) {//some jobs are not cubing jobs
                     segmentId2JobId.put(segmentId, jobId);
                 }
             } catch (Exception ex) {
@@ -369,7 +375,7 @@ public class StorageCleanupJob extends AbstractApplication {
             logger.info("No Hive tables to clean up");
             return;
         }
-        
+
         if (delete) {
             try {
                 deleteHiveTables(allHiveTablesNeedToBeDeleted, segmentId2JobId);
@@ -382,13 +388,13 @@ public class StorageCleanupJob extends AbstractApplication {
             }
         }
     }
-    
+
     // override by test
     protected List<String> getHiveTables() throws Exception {
         ISourceMetadataExplorer explr = SourceManager.getDefaultSource().getSourceMetadataExplorer();
         return explr.listTables(config.getHiveDatabaseForIntermediateTable());
     }
-    
+
     // override by test
     protected CliCommandExecutor getCliCommandExecutor() throws IOException {
         return config.getCliCommandExecutor();
@@ -398,7 +404,7 @@ public class StorageCleanupJob extends AbstractApplication {
             throws IOException {
         final JobEngineConfig engineConfig = new JobEngineConfig(config);
         final int uuidLength = 36;
-        
+
         final String useDatabaseHql = "USE " + config.getHiveDatabaseForIntermediateTable() + ";";
         final HiveCmdBuilder hiveCmdBuilder = new HiveCmdBuilder();
         hiveCmdBuilder.addStatement(useDatabaseHql);
@@ -407,8 +413,8 @@ public class StorageCleanupJob extends AbstractApplication {
             logger.info("Deleting Hive table " + delHive);
         }
         getCliCommandExecutor().execute(hiveCmdBuilder.build());
-        
-        // If kylin.source.hive.keep-flat-table, some intermediate table might be kept. 
+
+        // If kylin.source.hive.keep-flat-table, some intermediate table might be kept.
         // Do delete external path.
         for (String tableToDelete : allHiveTablesNeedToBeDeleted) {
             String uuid = tableToDelete.substring(tableToDelete.length() - uuidLength, tableToDelete.length());
@@ -449,4 +455,12 @@ public class StorageCleanupJob extends AbstractApplication {
         return false;
     }
 
+    protected class UnusedHdfsFileCollector {
+        LinkedHashSet<Pair<FileSystem, String>> list = new LinkedHashSet<>();
+
+        public void add(FileSystem fs, String path) {
+            list.add(Pair.newPair(fs, path));
+        }
+    }
+
 }
diff --git a/server-base/src/test/java/org/apache/kylin/rest/job/StorageCleanupJobTest.java b/server-base/src/test/java/org/apache/kylin/rest/job/StorageCleanupJobTest.java
index e494884..3e4d9cb 100644
--- a/server-base/src/test/java/org/apache/kylin/rest/job/StorageCleanupJobTest.java
+++ b/server-base/src/test/java/org/apache/kylin/rest/job/StorageCleanupJobTest.java
@@ -84,15 +84,15 @@ public class StorageCleanupJobTest {
         FileStatus f2 = mock(FileStatus.class);
         FileStatus f3 = mock(FileStatus.class);
         // only remove FINISHED and DISCARDED job intermediate files, so this exclude.
-        when(f1.getPath()).thenReturn(new Path("kylin-091a0322-249c-43e7-91df-205603ab6883"));
+        when(f1.getPath()).thenReturn(new Path("file:///tmp/examples/test_metadata/kylin-091a0322-249c-43e7-91df-205603ab6883"));
         // remove every segment working dir from deletion list, so this exclude.
-        when(f2.getPath()).thenReturn(new Path("kylin-bcf2f125-9b0b-40dd-9509-95ec59b31333"));
-        when(f3.getPath()).thenReturn(new Path("kylin-to-be-delete"));
+        when(f2.getPath()).thenReturn(new Path("file:///tmp/examples/test_metadata/kylin-bcf2f125-9b0b-40dd-9509-95ec59b31333"));
+        when(f3.getPath()).thenReturn(new Path("file:///tmp/examples/test_metadata/kylin-to-be-delete"));
         statuses[0] = f1;
         statuses[1] = f2;
         statuses[2] = f3;
 
-        when(mockFs.listStatus(p1)).thenReturn(statuses);
+        when(mockFs.listStatus(Path.getPathWithoutSchemeAndAuthority(p1))).thenReturn(statuses);
         Path p2 = new Path("file:///tmp/examples/test_metadata/kylin-to-be-delete");
         when(mockFs.exists(p2)).thenReturn(true);
     }
diff --git a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/HBaseConnection.java b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/HBaseConnection.java
index 53e8a68..1fd1495 100644
--- a/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/HBaseConnection.java
+++ b/storage-hbase/src/main/java/org/apache/kylin/storage/hbase/HBaseConnection.java
@@ -237,6 +237,15 @@ public class HBaseConnection {
         return fs.makeQualified(path).toString();
     }
 
+    public static FileSystem getFileSystemInHBaseCluster(String inPath) {
+        Path path = new Path(inPath);
+        path = Path.getPathWithoutSchemeAndAuthority(path);
+
+        FileSystem fs = HadoopUtil.getFileSystem(path, getCurrentHBaseConfiguration()); // Must be HBase's FS, not working FS
+        return fs;
+    }
+
+
     // ============================================================================
 
     // returned Connection can be shared by multiple threads and does not require close()
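
The underlying bug was comparing HDFS paths qualified against different clusters (or not qualified at all), so job and segment working directories on the standalone HBase cluster never matched the deletion list. The fix normalizes both sides: strip the scheme and authority, then makeQualified against the file system that will actually be scanned. A small sketch of those two steps, using the local file system as a stand-in for the HBase cluster's FS:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class QualifiedPaths {
        public static void main(String[] args) throws Exception {
            // A working-dir path as it might appear in config, qualified for cluster A.
            Path configured = new Path("hdfs://cluster-a/kylin/kylin_metadata/kylin-job-1");

            // Step 1 of the fix: strip scheme and authority so the path is cluster-neutral.
            Path bare = Path.getPathWithoutSchemeAndAuthority(configured);
            System.out.println(bare); // /kylin/kylin_metadata/kylin-job-1

            // Step 2: requalify against whichever FileSystem will actually be listed.
            // The local FS stands in here to keep the sketch runnable offline.
            FileSystem fs = FileSystem.getLocal(new Configuration());
            System.out.println(fs.makeQualified(bare)); // file:/kylin/kylin_metadata/kylin-job-1
        }
    }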


[kylin] 03/11: KYLIN-4013 Only show the cubes under one model

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit d192bcd6b2bc9b04bedb8db42470e92c5131b715
Author: yuzhang <sh...@163.com>
AuthorDate: Sun Jun 23 11:56:08 2019 +0800

    KYLIN-4013 Only show the cubes under one model
---
 webapp/app/js/controllers/models.js         | 30 ++++++++++++++++++++++++++++-
 webapp/app/partials/models/models_tree.html |  1 +
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/webapp/app/js/controllers/models.js b/webapp/app/js/controllers/models.js
index d79c464..84e99e6 100644
--- a/webapp/app/js/controllers/models.js
+++ b/webapp/app/js/controllers/models.js
@@ -18,7 +18,7 @@
 
 'use strict';
 
-KylinApp.controller('ModelsCtrl', function ($scope, $q, $routeParams, $location, $window, $modal, MessageService, CubeDescService, CubeService, JobService, UserService, ProjectService, SweetAlert, loadingRequest, $log, modelConfig, ProjectModel, ModelService, MetaModel, modelsManager, cubesManager, TableModel, AccessService, MessageBox) {
+KylinApp.controller('ModelsCtrl', function ($scope, $q, $routeParams, $location, $window, $modal, MessageService, CubeDescService, CubeService, JobService, UserService, ProjectService, SweetAlert, loadingRequest, $log, modelConfig, ProjectModel, ModelService, MetaModel, modelsManager, cubesManager, TableModel, AccessService, MessageBox, CubeList) {
 
   //tree data
 
@@ -171,6 +171,34 @@ KylinApp.controller('ModelsCtrl', function ($scope, $q, $routeParams, $location,
     });
   }
 
+  $scope.listCubes = function(model) {
+    var defer = $q.defer();
+    var queryParam = {modelName: model.name};
+    if (!$scope.projectModel.isSelectedProjectValid() || !$scope.projectModel.projects.length) {
+      SweetAlert.swal('Oops...', "Please select target project.", 'info');
+      defer.resolve([]);
+      return defer.promise;
+    }
+
+    queryParam.projectName = $scope.projectModel.selectedProject;
+
+    $scope.loading = true;
+    CubeList.removeAll();
+    return CubeList.list(queryParam).then(function (resp) {
+      angular.forEach(CubeList.cubes, function(cube, index) {
+      })
+
+      $scope.loading = false;
+      defer.resolve(resp);
+      return defer.promise;
+
+    }, function(resp) {
+      $scope.loading = false;
+      defer.resolve([]);
+      SweetAlert.swal('Oops...', resp, 'error');
+      return defer.promise;
+    });
+  }
 
 
   $scope.openModal = function (model) {
diff --git a/webapp/app/partials/models/models_tree.html b/webapp/app/partials/models/models_tree.html
index c064525..1009f86 100644
--- a/webapp/app/partials/models/models_tree.html
+++ b/webapp/app/partials/models/models_tree.html
@@ -56,6 +56,7 @@
                 Action <span class="ace-icon fa fa-caret-down icon-on-right"></span>
               </button>
               <ul class="dropdown-menu" role="menu" style="right:0;left:auto;" ng-if="(userService.hasRole('ROLE_ADMIN') || hasPermission('model',model, permissions.ADMINISTRATION.mask, permissions.MANAGEMENT.mask))">
+                <li><a ng-click="listCubes(model)"  title="Using Cubes" style="cursor:pointer;margin-right: 8px;" >Cubes</a></li>
                 <li><a ng-click="editModel(model, false)"  title="Edit Model" style="cursor:pointer;margin-right: 8px;" >Edit</a></li>
                 <li><a ng-click="cloneModel(model)" title="Clone Model"  style="cursor:pointer;margin-right: 8px;" >Clone </a></li>
                 <li><a ng-click="dropModel(model)" title="Drop Model"  style="cursor:pointer;margin-right: 8px;">Drop</a></li>


[kylin] 02/11: KYLIN-4047 Use push-down query when division dynamic column cube query is not supported (#689)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 9650c8d597a57472677a283d51956f1a69a1e735
Author: wangxiaojing123 <49...@users.noreply.github.com>
AuthorDate: Sun Jul 14 18:19:10 2019 +0800

    KYLIN-4047 Use push-down query when division dynamic column cube query is not supported (#689)
    
    * KYLIN-4047 Use push-down query when division dynamic column cube query is not supported
    
    * Minor, Code format
---
 .../kylin/exception/QueryOnCubeException.java      | 43 ++++++++++++++++++++++
 .../metadata/expression/BinaryTupleExpression.java |  5 ++-
 .../metadata/expression/TupleExpressionTest.java   |  9 +++--
 .../org/apache/kylin/query/util/PushDownUtil.java  |  4 +-
 4 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/core-metadata/src/main/java/org/apache/kylin/exception/QueryOnCubeException.java b/core-metadata/src/main/java/org/apache/kylin/exception/QueryOnCubeException.java
new file mode 100644
index 0000000..1b8708d
--- /dev/null
+++ b/core-metadata/src/main/java/org/apache/kylin/exception/QueryOnCubeException.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+package org.apache.kylin.exception;
+
+/**
+ *
+ */
+public class QueryOnCubeException extends RuntimeException {
+
+    private static final long serialVersionUID = 1L;
+
+    public QueryOnCubeException() {
+        super();
+    }
+
+    public QueryOnCubeException(String s) {
+        super(s);
+    }
+
+    public QueryOnCubeException(String message, Throwable cause) {
+        super(message, cause);
+    }
+
+    public QueryOnCubeException(Throwable cause) {
+        super(cause);
+    }
+}
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/expression/BinaryTupleExpression.java b/core-metadata/src/main/java/org/apache/kylin/metadata/expression/BinaryTupleExpression.java
index adafd9c..240d331 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/expression/BinaryTupleExpression.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/expression/BinaryTupleExpression.java
@@ -23,6 +23,7 @@ import java.nio.ByteBuffer;
 import java.util.List;
 
 import org.apache.kylin.common.util.DecimalUtil;
+import org.apache.kylin.exception.QueryOnCubeException;
 import org.apache.kylin.metadata.filter.IFilterCodeSystem;
 import org.apache.kylin.metadata.tuple.IEvaluatableTuple;
 
@@ -64,7 +65,7 @@ public class BinaryTupleExpression extends TupleExpression {
     private void verifyMultiply() {
         if (ExpressionColCollector.collectMeasureColumns(getLeft()).size() > 0 //
                 && ExpressionColCollector.collectMeasureColumns(getRight()).size() > 0) {
-            throw new IllegalArgumentException(
+            throw new QueryOnCubeException(
                     "That both of the two sides of the BinaryTupleExpression own columns is not supported for "
                             + operator.toString());
         }
@@ -72,7 +73,7 @@ public class BinaryTupleExpression extends TupleExpression {
 
     private void verifyDivide() {
         if (ExpressionColCollector.collectMeasureColumns(getRight()).size() > 0) {
-            throw new IllegalArgumentException(
+            throw new QueryOnCubeException(
                     "That the right side of the BinaryTupleExpression owns columns is not supported for "
                             + operator.toString());
         }
diff --git a/core-metadata/src/test/java/org/apache/kylin/metadata/expression/TupleExpressionTest.java b/core-metadata/src/test/java/org/apache/kylin/metadata/expression/TupleExpressionTest.java
index 7617c5a..5745920 100644
--- a/core-metadata/src/test/java/org/apache/kylin/metadata/expression/TupleExpressionTest.java
+++ b/core-metadata/src/test/java/org/apache/kylin/metadata/expression/TupleExpressionTest.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 import java.math.BigDecimal;
 
 import org.apache.kylin.common.util.LocalFileMetadataTestCase;
+import org.apache.kylin.exception.QueryOnCubeException;
 import org.apache.kylin.metadata.model.TableDesc;
 import org.apache.kylin.metadata.model.TblColRef;
 import org.junit.AfterClass;
@@ -65,8 +66,8 @@ public class TupleExpressionTest extends LocalFileMetadataTestCase {
                 Lists.newArrayList(constTuple2, colTuple2));
         try {
             biTuple2.verify();
-            fail("IllegalArgumentException should be thrown");
-        } catch (IllegalArgumentException e) {
+            fail("QueryOnCubeException should be thrown,That the right side of the BinaryTupleExpression owns columns is not supported for /");
+        } catch (QueryOnCubeException e) {
         }
 
         biTuple2 = new BinaryTupleExpression(TupleExpression.ExpressionOperatorEnum.DIVIDE,
@@ -77,8 +78,8 @@ public class TupleExpressionTest extends LocalFileMetadataTestCase {
                 Lists.<TupleExpression> newArrayList(biTuple1, biTuple2));
         try {
             biTuple.verify();
-            fail("IllegalArgumentException should be thrown");
-        } catch (IllegalArgumentException e) {
+            fail("QueryOnCubeException should be thrown,That both of the two sides of the BinaryTupleExpression own columns is not supported for *");
+        } catch (QueryOnCubeException e) {
         }
     }
 }
diff --git a/query/src/main/java/org/apache/kylin/query/util/PushDownUtil.java b/query/src/main/java/org/apache/kylin/query/util/PushDownUtil.java
index aa58cc4..c5e8d68 100644
--- a/query/src/main/java/org/apache/kylin/query/util/PushDownUtil.java
+++ b/query/src/main/java/org/apache/kylin/query/util/PushDownUtil.java
@@ -42,6 +42,7 @@ import org.apache.commons.lang3.exception.ExceptionUtils;
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.util.ClassUtil;
 import org.apache.kylin.common.util.Pair;
+import org.apache.kylin.exception.QueryOnCubeException;
 import org.apache.kylin.metadata.model.tool.CalciteParser;
 import org.apache.kylin.metadata.project.ProjectManager;
 import org.apache.kylin.metadata.querymeta.SelectedColumnMeta;
@@ -143,7 +144,8 @@ public class PushDownUtil {
             return (rootCause != null //
                     && (rootCause instanceof NoRealizationFoundException //
                             || rootCause instanceof SqlValidatorException //
-                            || rootCause instanceof RoutingIndicatorException)); //
+                            || rootCause instanceof RoutingIndicatorException //
+                            || rootCause instanceof QueryOnCubeException)); //
         }
     }
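
The new exception type matters because PushDownUtil decides whether to retry a failed query on the source engine by inspecting the root cause, as the last hunk shows. A minimal sketch of that decision; the nested QueryOnCubeException here is a local stand-in, and only one of the several expected root-cause types is checked:

    import org.apache.commons.lang3.exception.ExceptionUtils;

    public class PushDownDecision {

        // Hypothetical stand-in for Kylin's QueryOnCubeException.
        static class QueryOnCubeException extends RuntimeException {
            QueryOnCubeException(String s) { super(s); }
        }

        // Push down only for "expected" failures, mirroring the predicate in PushDownUtil.
        static boolean shouldPushDown(Throwable sqlException) {
            Throwable rootCause = ExceptionUtils.getRootCause(sqlException);
            return rootCause instanceof QueryOnCubeException; // plus the other expected types
        }

        public static void main(String[] args) {
            Exception wrapped = new RuntimeException("query failed",
                    new QueryOnCubeException("division on dynamic column not supported"));
            System.out.println(shouldPushDown(wrapped)); // true -> retry the SQL on the source
        }
    }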
 


[kylin] 10/11: KYLIN-4093: Slow query pages should be open to all users of the project

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 93da88dbb7f2393f9728927bce6ea68c2f2e8483
Author: Liu Shaohui <li...@xiaomi.com>
AuthorDate: Wed Jun 12 15:06:36 2019 +0800

    KYLIN-4093: Slow query pages should be open to all users of the project
---
 webapp/app/partials/jobs/jobs.html | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/webapp/app/partials/jobs/jobs.html b/webapp/app/partials/jobs/jobs.html
index 7c8de32..1e31058 100644
--- a/webapp/app/partials/jobs/jobs.html
+++ b/webapp/app/partials/jobs/jobs.html
@@ -33,7 +33,7 @@
 
 <div class="row models-main dataTables_wrapper"  style="padding-top:10px;padding-left: 5px;">
   <div ng-class="row">
-    <tabset ng-if="userService.hasRole('ROLE_ADMIN')">
+    <tabset>
       <tab heading="Jobs" select="jobTabSelected('jobs')" active="tabs[2].active">
         <div class="col-xs-12" ng-include src="'partials/jobs/jobList.html'"></div>
       </tab>
@@ -41,7 +41,6 @@
         <div class="col-xs-12" ng-include src="'partials/jobs/badQuery.html'"></div>
       </tab>
     </tabset>
-    <div ng-if="!userService.hasRole('ROLE_ADMIN')" class="col-xs-12" ng-include src="'partials/jobs/jobList.html'"></div>
   </div>
 </div>
 


[kylin] 01/11: KYLIN-4046 Refine JDBC Source(source.default=8)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 37ee3a81603baf7b1d8252736bf94f037140320c
Author: hit-lacus <hi...@126.com>
AuthorDate: Mon Jun 24 15:34:04 2019 +0800

    KYLIN-4046 Refine JDBC Source(source.default=8)
    
    Currently, the function of ingesting data from an RDBMS (kylin.source.default=8) into Kylin has some problems; in this patch, I want to:
    
    1. fix case sensitivity
    2. fix weak dialect compatibility
    3. fix misuse of the quote character
---
 .../org/apache/kylin/common/SourceDialect.java     |  56 +++-
 .../java/org/apache/kylin/job/JoinedFlatTable.java |   2 +-
 .../apache/kylin/metadata/model/PartitionDesc.java |  27 +-
 .../org/apache/kylin/metadata/model/TblColRef.java |  18 ++
 .../DefaultPartitionConditionBuilderTest.java      |   8 +-
 pom.xml                                            |   3 +
 .../org/apache/kylin/source/jdbc/JdbcDialect.java  |  26 --
 .../org/apache/kylin/source/jdbc/JdbcExplorer.java |  17 +-
 .../kylin/source/jdbc/JdbcHiveInputBase.java       | 344 +++++++++++++++++++--
 .../apache/kylin/source/jdbc/JdbcTableReader.java  |  15 +-
 .../java/org/apache/kylin/source/jdbc/SqlUtil.java |   5 +-
 .../source/jdbc/extensible/JdbcHiveInputBase.java  |   2 +-
 .../source/jdbc/metadata/DefaultJdbcMetadata.java  |   6 +-
 .../kylin/source/jdbc/metadata/IJdbcMetadata.java  |   4 +
 .../source/jdbc/metadata/JdbcMetadataFactory.java  |  16 +-
 .../source/jdbc/metadata/MySQLJdbcMetadata.java    |   6 +
 .../jdbc/metadata/SQLServerJdbcMetadata.java       |   6 +
 .../apache/kylin/source/jdbc/JdbcExplorerTest.java |   4 +-
 .../kylin/source/jdbc/JdbcHiveInputBaseTest.java   |  68 ++++
 19 files changed, 533 insertions(+), 100 deletions(-)

diff --git a/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactoryTest.java b/core-common/src/main/java/org/apache/kylin/common/SourceDialect.java
similarity index 52%
rename from source-jdbc/src/test/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactoryTest.java
rename to core-common/src/main/java/org/apache/kylin/common/SourceDialect.java
index d9c7425..a87054d 100644
--- a/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactoryTest.java
+++ b/core-common/src/main/java/org/apache/kylin/common/SourceDialect.java
@@ -15,21 +15,45 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.kylin.source.jdbc.metadata;
-
-import org.apache.kylin.source.jdbc.JdbcDialect;
-import org.junit.Assert;
-import org.junit.Test;
-
-public class JdbcMetadataFactoryTest {
-
-    @Test
-    public void testGetJdbcMetadata() {
-        Assert.assertTrue(
-                JdbcMetadataFactory.getJdbcMetadata(JdbcDialect.DIALECT_MSSQL, null) instanceof SQLServerJdbcMetadata);
-        Assert.assertTrue(
-                JdbcMetadataFactory.getJdbcMetadata(JdbcDialect.DIALECT_MYSQL, null) instanceof MySQLJdbcMetadata);
-        Assert.assertTrue(
-                JdbcMetadataFactory.getJdbcMetadata(JdbcDialect.DIALECT_VERTICA, null) instanceof DefaultJdbcMetadata);
+
+package org.apache.kylin.common;
+
+/**
+ * Decide SQL pattern according to the dialect of different data sources
+ */
+public enum SourceDialect {
+    HIVE("hive"),
+
+    /**
+     * Support MySQL 5.7
+     */
+    MYSQL("mysql"),
+
+    /**
+     * Support Microsoft Sql Server 2017
+     */
+    SQL_SERVER("mssql"),
+
+    VERTICA("vertica"),
+
+    /**
+     * Others
+     */
+    UNKNOWN("unknown");
+
+    String source;
+
+    SourceDialect(String source) {
+        this.source = source;
+    }
+
+    public static SourceDialect getDialect(String name) {
+
+        for (SourceDialect dialect : SourceDialect.values()) {
+            if (dialect.source.equalsIgnoreCase(name)) {
+                return dialect;
+            }
+        }
+        return UNKNOWN;
     }
 }
diff --git a/core-job/src/main/java/org/apache/kylin/job/JoinedFlatTable.java b/core-job/src/main/java/org/apache/kylin/job/JoinedFlatTable.java
index 0d1cafb..4281885 100644
--- a/core-job/src/main/java/org/apache/kylin/job/JoinedFlatTable.java
+++ b/core-job/src/main/java/org/apache/kylin/job/JoinedFlatTable.java
@@ -241,7 +241,7 @@ public class JoinedFlatTable {
                 if (segRange != null && !segRange.isInfinite()) {
                     whereBuilder.append(" AND (");
                     String quotedPartitionCond = quoteIdentifierInSqlExpr(flatDesc,
-                            partDesc.getPartitionConditionBuilder().buildDateRangeCondition(partDesc, flatDesc.getSegment(), segRange));
+                            partDesc.getPartitionConditionBuilder().buildDateRangeCondition(partDesc, flatDesc.getSegment(), segRange, null));
                     whereBuilder.append(quotedPartitionCond);
                     whereBuilder.append(")" + sep);
                 }
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/model/PartitionDesc.java b/core-metadata/src/main/java/org/apache/kylin/metadata/model/PartitionDesc.java
index 56ededb..f93996e 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/model/PartitionDesc.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/model/PartitionDesc.java
@@ -20,6 +20,7 @@ package org.apache.kylin.metadata.model;
 
 import java.io.Serializable;
 import java.util.Locale;
+import java.util.function.Function;
 
 import org.apache.commons.lang3.StringUtils;
 import org.apache.kylin.common.util.ClassUtil;
@@ -184,19 +185,26 @@ public class PartitionDesc implements Serializable {
     // ============================================================================
 
     public static interface IPartitionConditionBuilder {
-        String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange);
+        String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange, Function<TblColRef, String> quoteFunc);
     }
 
     public static class DefaultPartitionConditionBuilder implements IPartitionConditionBuilder, Serializable {
 
         @Override
-        public String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange) {
+        public String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange, Function<TblColRef, String> quoteFunc) {
             long startInclusive = (Long) segRange.start.v;
             long endExclusive = (Long) segRange.end.v;
 
             TblColRef partitionDateColumn = partDesc.getPartitionDateColumnRef();
             TblColRef partitionTimeColumn = partDesc.getPartitionTimeColumnRef();
 
+            if (partitionDateColumn != null) {
+                partitionDateColumn.setQuotedFunc(quoteFunc);
+            }
+            if (partitionTimeColumn != null) {
+                partitionTimeColumn.setQuotedFunc(quoteFunc);
+            }
+
             StringBuilder builder = new StringBuilder();
 
             if (partDesc.partitionColumnIsYmdInt()) {
@@ -224,7 +232,7 @@ public class PartitionDesc implements Serializable {
 
         private static void buildSingleColumnRangeCondAsTimeMillis(StringBuilder builder, TblColRef partitionColumn,
                                                                    long startInclusive, long endExclusive) {
-            String partitionColumnName = partitionColumn.getIdentity();
+            String partitionColumnName = partitionColumn.getQuotedIdentity();
             builder.append(partitionColumnName + " >= " + startInclusive);
             builder.append(" AND ");
             builder.append(partitionColumnName + " < " + endExclusive);
@@ -232,7 +240,7 @@ public class PartitionDesc implements Serializable {
 
         private static void buildSingleColumnRangeCondAsYmdInt(StringBuilder builder, TblColRef partitionColumn,
                                                                long startInclusive, long endExclusive, String partitionColumnDateFormat) {
-            String partitionColumnName = partitionColumn.getIdentity();
+            String partitionColumnName = partitionColumn.getQuotedIdentity();
             builder.append(partitionColumnName + " >= "
                     + DateFormat.formatToDateStr(startInclusive, partitionColumnDateFormat));
             builder.append(" AND ");
@@ -242,7 +250,7 @@ public class PartitionDesc implements Serializable {
 
         private static void buildSingleColumnRangeCondition(StringBuilder builder, TblColRef partitionColumn,
                                                             long startInclusive, long endExclusive, String partitionColumnDateFormat) {
-            String partitionColumnName = partitionColumn.getIdentity();
+            String partitionColumnName = partitionColumn.getQuotedIdentity();
 
             if (endExclusive <= startInclusive) {
                 builder.append("1=0");
@@ -267,8 +275,8 @@ public class PartitionDesc implements Serializable {
         private static void buildMultipleColumnRangeCondition(StringBuilder builder, TblColRef partitionDateColumn,
                                                               TblColRef partitionTimeColumn, long startInclusive, long endExclusive, String partitionColumnDateFormat,
                                                               String partitionColumnTimeFormat, boolean partitionDateColumnIsYmdInt) {
-            String partitionDateColumnName = partitionDateColumn.getIdentity();
-            String partitionTimeColumnName = partitionTimeColumn.getIdentity();
+            String partitionDateColumnName = partitionDateColumn.getQuotedIdentity();
+            String partitionTimeColumnName = partitionTimeColumn.getQuotedIdentity();
             String singleQuotation = partitionDateColumnIsYmdInt ? "" : "'";
             builder.append("(");
             builder.append("(");
@@ -308,11 +316,14 @@ public class PartitionDesc implements Serializable {
     public static class YearMonthDayPartitionConditionBuilder implements IPartitionConditionBuilder {
 
         @Override
-        public String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange) {
+        public String buildDateRangeCondition(PartitionDesc partDesc, ISegment seg, SegmentRange segRange, Function<TblColRef, String> func) {
             long startInclusive = (Long) segRange.start.v;
             long endExclusive = (Long) segRange.end.v;
 
             TblColRef partitionColumn = partDesc.getPartitionDateColumnRef();
+            if (partitionColumn != null) {
+                partitionColumn.setQuotedFunc(func);
+            }
             String tableAlias = partitionColumn.getTableAlias();
 
             String concatField = String.format(Locale.ROOT, "CONCAT(%s.YEAR,'-',%s.MONTH,'-',%s.DAY)", tableAlias,
diff --git a/core-metadata/src/main/java/org/apache/kylin/metadata/model/TblColRef.java b/core-metadata/src/main/java/org/apache/kylin/metadata/model/TblColRef.java
index 918eedf..0dc08a9 100644
--- a/core-metadata/src/main/java/org/apache/kylin/metadata/model/TblColRef.java
+++ b/core-metadata/src/main/java/org/apache/kylin/metadata/model/TblColRef.java
@@ -23,6 +23,8 @@ import static com.google.common.base.Preconditions.checkArgument;
 import java.io.Serializable;
 
 import java.util.Locale;
+import java.util.function.Function;
+
 import org.apache.commons.lang.StringUtils;
 import org.apache.kylin.metadata.datatype.DataType;
 
@@ -120,6 +122,15 @@ public class TblColRef implements Serializable {
     private String identity;
     private String parserDescription;
 
+    /**
+     * Function used to get the quoted identifier
+     */
+    private transient Function<TblColRef, String> quotedFunc;
+
+    public void setQuotedFunc(Function<TblColRef, String> quotedFunc) {
+        this.quotedFunc = quotedFunc;
+    }
+
     TblColRef(ColumnDesc column) {
         this.column = column;
     }
@@ -238,6 +249,13 @@ public class TblColRef implements Serializable {
         return identity;
     }
 
+    public String getQuotedIdentity() {
+        if (quotedFunc == null)
+            return getIdentity();
+        else
+            return quotedFunc.apply(this);
+    }
+
     @Override
     public String toString() {
         if (isInnerColumn() && parserDescription != null)
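
A minimal sketch of how the new quoting hook fits together (partitionDesc,
segment, segRange and the backtick function below are hypothetical stand-ins;
the fallback behaviour is the one shown in getQuotedIdentity() above):

    // Callers may pass a function that renders a dialect-quoted identifier.
    Function<TblColRef, String> quoteFunc =
            col -> "`" + col.getTableAlias() + "`.`" + col.getName() + "`";
    String cond = partitionDesc.getPartitionConditionBuilder()
            .buildDateRangeCondition(partitionDesc, segment, segRange, quoteFunc);

    // With no function set, getQuotedIdentity() degrades to getIdentity(),
    // so existing call sites that pass null stay behaviourally unchanged.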
diff --git a/core-metadata/src/test/java/org/apache/kylin/metadata/model/DefaultPartitionConditionBuilderTest.java b/core-metadata/src/test/java/org/apache/kylin/metadata/model/DefaultPartitionConditionBuilderTest.java
index b536e29..438fb4a 100644
--- a/core-metadata/src/test/java/org/apache/kylin/metadata/model/DefaultPartitionConditionBuilderTest.java
+++ b/core-metadata/src/test/java/org/apache/kylin/metadata/model/DefaultPartitionConditionBuilderTest.java
@@ -53,12 +53,12 @@ public class DefaultPartitionConditionBuilderTest extends LocalFileMetadataTestC
         partitionDesc.setPartitionDateColumn(col.getCanonicalName());
         partitionDesc.setPartitionDateFormat("yyyy-MM-dd");
         TSRange range = new TSRange(DateFormat.stringToMillis("2016-02-22"), DateFormat.stringToMillis("2016-02-23"));
-        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range);
+        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range, null);
         Assert.assertEquals("UNKNOWN_ALIAS.DATE_COLUMN >= '2016-02-22' AND UNKNOWN_ALIAS.DATE_COLUMN < '2016-02-23'",
                 condition);
 
         range = new TSRange(0L, 0L);
-        condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range);
+        condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range, null);
         Assert.assertEquals("1=0", condition);
     }
 
@@ -71,7 +71,7 @@ public class DefaultPartitionConditionBuilderTest extends LocalFileMetadataTestC
         partitionDesc.setPartitionTimeFormat("HH");
         TSRange range = new TSRange(DateFormat.stringToMillis("2016-02-22 00:00:00"),
                 DateFormat.stringToMillis("2016-02-23 01:00:00"));
-        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range);
+        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range, null);
         Assert.assertEquals("UNKNOWN_ALIAS.HOUR_COLUMN >= '00' AND UNKNOWN_ALIAS.HOUR_COLUMN < '01'", condition);
     }
 
@@ -88,7 +88,7 @@ public class DefaultPartitionConditionBuilderTest extends LocalFileMetadataTestC
         partitionDesc.setPartitionTimeFormat("H");
         TSRange range = new TSRange(DateFormat.stringToMillis("2016-02-22 00:00:00"),
                 DateFormat.stringToMillis("2016-02-23 01:00:00"));
-        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range);
+        String condition = partitionConditionBuilder.buildDateRangeCondition(partitionDesc, null, range, null);
         Assert.assertEquals(
                 "((UNKNOWN_ALIAS.DATE_COLUMN = '2016-02-22' AND UNKNOWN_ALIAS.HOUR_COLUMN >= '0') OR (UNKNOWN_ALIAS.DATE_COLUMN > '2016-02-22')) AND ((UNKNOWN_ALIAS.DATE_COLUMN = '2016-02-23' AND UNKNOWN_ALIAS.HOUR_COLUMN < '1') OR (UNKNOWN_ALIAS.DATE_COLUMN < '2016-02-23'))",
                 condition);
diff --git a/pom.xml b/pom.xml
index 84253be..3c691d8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1640,6 +1640,9 @@
             <groupId>org.apache.rat</groupId>
             <artifactId>apache-rat-plugin</artifactId>
             <configuration>
+              <!-- Print files with unapproved licenses to standard output -->
+              <consoleOutput>true</consoleOutput>
+
               <!-- Exclude files/folders for apache release -->
               <excludes>
                 <exclude>DEPENDENCIES</exclude>
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
deleted file mode 100644
index 7e5ecee..0000000
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcDialect.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-package org.apache.kylin.source.jdbc;
-
-public class JdbcDialect {
-    public static final String DIALECT_VERTICA = "vertica";
-    public static final String DIALECT_ORACLE = "oracle";
-    public static final String DIALECT_MYSQL = "mysql";
-    public static final String DIALECT_HIVE = "hive";
-    public static final String DIALECT_MSSQL = "mssql";
-}
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcExplorer.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcExplorer.java
index 7eb4fa9..d728dcf 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcExplorer.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcExplorer.java
@@ -30,6 +30,7 @@ import java.util.UUID;
 
 import org.apache.commons.lang3.StringUtils;
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.common.util.DBUtils;
 import org.apache.kylin.common.util.Pair;
 import org.apache.kylin.common.util.RandomUtil;
@@ -50,7 +51,7 @@ public class JdbcExplorer implements ISourceMetadataExplorer, ISampleDataDeploye
     private static final Logger logger = LoggerFactory.getLogger(JdbcExplorer.class);
 
     private final KylinConfig config;
-    private final String dialect;
+    private final SourceDialect dialect;
     private final DBConnConf dbconf;
     private final IJdbcMetadata jdbcMetadataDialect;
 
@@ -61,7 +62,7 @@ public class JdbcExplorer implements ISourceMetadataExplorer, ISampleDataDeploye
         String jdbcUser = config.getJdbcSourceUser();
         String jdbcPass = config.getJdbcSourcePass();
         this.dbconf = new DBConnConf(driverClass, connectionUrl, jdbcUser, jdbcPass);
-        this.dialect = config.getJdbcSourceDialect();
+        this.dialect = SourceDialect.getDialect(config.getJdbcSourceDialect());
         this.jdbcMetadataDialect = JdbcMetadataFactory.getJdbcMetadata(dialect, dbconf);
     }
 
@@ -117,7 +118,7 @@ public class JdbcExplorer implements ISourceMetadataExplorer, ISampleDataDeploye
     }
 
     private String getSqlDataType(String javaDataType) {
-        if (JdbcDialect.DIALECT_VERTICA.equals(dialect) || JdbcDialect.DIALECT_MSSQL.equals(dialect)) {
+        if (SourceDialect.VERTICA.equals(dialect) || SourceDialect.SQL_SERVER.equals(dialect)) {
             if (javaDataType.toLowerCase(Locale.ROOT).equals("double")) {
                 return "float";
             }
@@ -132,9 +133,9 @@ public class JdbcExplorer implements ISourceMetadataExplorer, ISampleDataDeploye
     }
 
     private String generateCreateSchemaSql(String schemaName) {
-        if (JdbcDialect.DIALECT_VERTICA.equals(dialect) || JdbcDialect.DIALECT_MYSQL.equals(dialect)) {
+        if (SourceDialect.VERTICA.equals(dialect) || SourceDialect.MYSQL.equals(dialect)) {
             return String.format(Locale.ROOT, "CREATE schema IF NOT EXISTS %s", schemaName);
-        } else if (JdbcDialect.DIALECT_MSSQL.equals(dialect)) {
+        } else if (SourceDialect.SQL_SERVER.equals(dialect)) {
             return String.format(Locale.ROOT,
                     "IF NOT EXISTS (SELECT name FROM sys.schemas WHERE name = N'%s') EXEC('CREATE SCHEMA"
                             + " [%s] AUTHORIZATION [dbo]')",
@@ -151,13 +152,13 @@ public class JdbcExplorer implements ISourceMetadataExplorer, ISampleDataDeploye
     }
 
     private String generateLoadDataSql(String tableName, String tableFileDir) {
-        if (JdbcDialect.DIALECT_VERTICA.equals(dialect)) {
+        if (SourceDialect.VERTICA.equals(dialect)) {
             return String.format(Locale.ROOT, "copy %s from local '%s/%s.csv' delimiter as ',';", tableName,
                     tableFileDir, tableName);
-        } else if (JdbcDialect.DIALECT_MYSQL.equals(dialect)) {
+        } else if (SourceDialect.MYSQL.equals(dialect)) {
             return String.format(Locale.ROOT, "LOAD DATA INFILE '%s/%s.csv' INTO %s FIELDS TERMINATED BY ',';",
                     tableFileDir, tableName, tableName);
-        } else if (JdbcDialect.DIALECT_MSSQL.equals(dialect)) {
+        } else if (SourceDialect.SQL_SERVER.equals(dialect)) {
             return String.format(Locale.ROOT, "BULK INSERT %s FROM '%s/%s.csv' WITH(FIELDTERMINATOR = ',')", tableName,
                     tableFileDir, tableName);
         } else {
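
A hedged sketch of the dialect resolution this class now relies on, assuming
SourceDialect.getDialect() accepts the same lower-case names the removed
JdbcDialect constants carried ("vertica", "mysql", "mssql", "hive"):

    // Unrecognized names are expected to fall back to SourceDialect.UNKNOWN.
    SourceDialect dialect = SourceDialect.getDialect("mssql");  // -> SourceDialect.SQL_SERVER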
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveInputBase.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveInputBase.java
index 20f2dcb..94594f3 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveInputBase.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcHiveInputBase.java
@@ -18,28 +18,43 @@
 package org.apache.kylin.source.jdbc;
 
 import com.google.common.collect.Maps;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.common.util.SourceConfigurationUtil;
 import org.apache.kylin.engine.mr.steps.CubingExecutableUtil;
 import org.apache.kylin.job.JoinedFlatTable;
 import org.apache.kylin.job.constant.ExecutableConstants;
 import org.apache.kylin.job.execution.AbstractExecutable;
 import org.apache.kylin.job.execution.DefaultChainedExecutable;
-import org.apache.kylin.job.util.FlatTableSqlQuoteUtils;
 import org.apache.kylin.metadata.TableMetadataManager;
+import org.apache.kylin.metadata.model.DataModelDesc;
 import org.apache.kylin.metadata.model.IJoinedFlatTableDesc;
+import org.apache.kylin.metadata.model.JoinDesc;
+import org.apache.kylin.metadata.model.JoinTableDesc;
 import org.apache.kylin.metadata.model.PartitionDesc;
 import org.apache.kylin.metadata.model.SegmentRange;
 import org.apache.kylin.metadata.model.TableExtDesc;
 import org.apache.kylin.metadata.model.TableRef;
 import org.apache.kylin.metadata.model.TblColRef;
+import org.apache.kylin.source.hive.DBConnConf;
 import org.apache.kylin.source.hive.HiveInputBase;
+import org.apache.kylin.source.jdbc.metadata.IJdbcMetadata;
+import org.apache.kylin.source.jdbc.metadata.JdbcMetadataFactory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.ResultSet;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
 
 public class JdbcHiveInputBase extends HiveInputBase {
     private static final Logger logger = LoggerFactory.getLogger(JdbcHiveInputBase.class);
@@ -47,9 +62,66 @@ public class JdbcHiveInputBase extends HiveInputBase {
     private static final String DEFAULT_QUEUE = "default";
 
     public static class JdbcBaseBatchCubingInputSide extends BaseBatchCubingInputSide {
+        private IJdbcMetadata jdbcMetadataDialect;
+        private DBConnConf dbconf;
+        private SourceDialect dialect;
+        private final Map<String, String> metaMap = new TreeMap<>();
 
         public JdbcBaseBatchCubingInputSide(IJoinedFlatTableDesc flatDesc) {
             super(flatDesc);
+            KylinConfig config = KylinConfig.getInstanceFromEnv();
+            String connectionUrl = config.getJdbcSourceConnectionUrl();
+            String driverClass = config.getJdbcSourceDriver();
+            String jdbcUser = config.getJdbcSourceUser();
+            String jdbcPass = config.getJdbcSourcePass();
+            dbconf = new DBConnConf(driverClass, connectionUrl, jdbcUser, jdbcPass);
+            dialect = SourceDialect.getDialect(config.getJdbcSourceDialect());
+            jdbcMetadataDialect = JdbcMetadataFactory.getJdbcMetadata(dialect, dbconf);
+            calCachedJdbcMeta(metaMap, dbconf, jdbcMetadataDialect);
+            if (logger.isTraceEnabled()) {
+                StringBuilder dumpInfo = new StringBuilder();
+                metaMap.forEach((k, v) -> dumpInfo.append("CachedMetadata: ").append(k).append(" => ").append(v)
+                        .append(System.lineSeparator()));
+                logger.trace(dumpInfo.toString());
+            }
+        }
+
+        /**
+         * Fetch and cache metadata via the JDBC API; the cached mapping helps
+         * resolve case-sensitivity problems with SQL identifiers.
+         *
+         * @param metadataMap a Map from upper-case identifier to the real/original identifier
+         */
+        public static void calCachedJdbcMeta(Map<String, String> metadataMap, DBConnConf dbconf,
+                IJdbcMetadata jdbcMetadataDialect) {
+            try (Connection connection = SqlUtil.getConnection(dbconf)) {
+                DatabaseMetaData databaseMetaData = connection.getMetaData();
+                for (String database : jdbcMetadataDialect.listDatabases()) {
+                    metadataMap.put(database.toUpperCase(Locale.ROOT), database);
+                    ResultSet tableRs = jdbcMetadataDialect.getTable(databaseMetaData, database, null);
+                    while (tableRs.next()) {
+                        String tableName = tableRs.getString("TABLE_NAME");
+                        ResultSet colRs = jdbcMetadataDialect.listColumns(databaseMetaData, database, tableName);
+                        while (colRs.next()) {
+                            String colName = colRs.getString("COLUMN_NAME");
+                            colName = database + "." + tableName + "." + colName;
+                            metadataMap.put(colName.toUpperCase(Locale.ROOT), colName);
+                        }
+                        colRs.close();
+                        tableName = database + "." + tableName;
+                        metadataMap.put(tableName.toUpperCase(Locale.ROOT), tableName);
+                    }
+                    tableRs.close();
+                }
+            } catch (IllegalStateException e) {
+                if (SqlUtil.DRIVER_MISS.equalsIgnoreCase(e.getMessage())) {
+                    logger.warn("Ignore JDBC Driver Missing in yarn node.", e);
+                } else {
+                    throw e;
+                }
+            } catch (Exception e) {
+                throw new IllegalStateException("Error when connect to JDBC source " + dbconf.getUrl(), e);
+            }
         }
 
         protected KylinConfig getConfig() {
@@ -148,22 +220,19 @@ public class JdbcHiveInputBase extends HiveInputBase {
                 partCol = partitionDesc.getPartitionDateColumn();//tablename.colname
             }
 
-            String splitTable;
             String splitTableAlias;
             String splitColumn;
             String splitDatabase;
             TblColRef splitColRef = determineSplitColumn();
-            splitTable = splitColRef.getTableRef().getTableName();
             splitTableAlias = splitColRef.getTableAlias();
-            splitColumn = JoinedFlatTable.getQuotedColExpressionInSourceDB(flatDesc, splitColRef);
+
+            splitColumn = getColumnIdentityQuoted(splitColRef, jdbcMetadataDialect, metaMap, true);
             splitDatabase = splitColRef.getColumnDesc().getTable().getDatabase();
 
-            //using sqoop to extract data from jdbc source and dump them to hive
-            String selectSql = JoinedFlatTable.generateSelectDataStatement(flatDesc, true, new String[] { partCol });
+            String selectSql = generateSelectDataStatementRDBMS(flatDesc, true, new String[] { partCol },
+                    jdbcMetadataDialect, metaMap);
             selectSql = escapeQuotationInSql(selectSql);
 
-
-
             String hiveTable = flatDesc.getTableName();
             String connectionUrl = config.getJdbcSourceConnectionUrl();
             String driverClass = config.getJdbcSourceDriver();
@@ -175,17 +244,19 @@ public class JdbcHiveInputBase extends HiveInputBase {
             String filedDelimiter = config.getJdbcSourceFieldDelimiter();
             int mapperNum = config.getSqoopMapperNum();
 
-            String bquery = String.format(Locale.ROOT, "SELECT min(%s), max(%s) FROM %s.%s as %s", splitColumn,
-                    splitColumn, splitDatabase, splitTable, splitTableAlias);
+            String bquery = String.format(Locale.ROOT, "SELECT min(%s), max(%s) FROM %s.%s ", splitColumn, splitColumn,
+                    getSchemaQuoted(metaMap, splitDatabase, jdbcMetadataDialect, true),
+                    getTableIdentityQuoted(splitColRef.getTableRef(), metaMap, jdbcMetadataDialect, true));
             if (partitionDesc.isPartitioned()) {
                 SegmentRange segRange = flatDesc.getSegRange();
                 if (segRange != null && !segRange.isInfinite()) {
                     if (partitionDesc.getPartitionDateColumnRef().getTableAlias().equals(splitTableAlias)
                             && (partitionDesc.getPartitionTimeColumnRef() == null || partitionDesc
-                            .getPartitionTimeColumnRef().getTableAlias().equals(splitTableAlias))) {
-                        String quotedPartCond = FlatTableSqlQuoteUtils.quoteIdentifierInSqlExpr(flatDesc,
-                                partitionDesc.getPartitionConditionBuilder().buildDateRangeCondition(partitionDesc,
-                                        flatDesc.getSegment(), segRange));
+                                    .getPartitionTimeColumnRef().getTableAlias().equals(splitTableAlias))) {
+
+                        String quotedPartCond = partitionDesc.getPartitionConditionBuilder().buildDateRangeCondition(
+                                partitionDesc, flatDesc.getSegment(), segRange,
+                                col -> getTableColumnIdentityQuoted(col, jdbcMetadataDialect, metaMap, true));
                         bquery += " WHERE " + quotedPartCond;
                     }
                 }
@@ -195,14 +266,13 @@ public class JdbcHiveInputBase extends HiveInputBase {
             // escape ` in cmd
             splitColumn = escapeQuotationInSql(splitColumn);
 
-            String cmd = String.format(Locale.ROOT,
-                    "%s/bin/sqoop import" + generateSqoopConfigArgString()
-                            + "--connect \"%s\" --driver %s --username %s --password \"%s\" --query \"%s AND \\$CONDITIONS\" "
-                            + "--target-dir %s/%s --split-by %s --boundary-query \"%s\" --null-string '%s' "
-                            + "--null-non-string '%s' --fields-terminated-by '%s' --num-mappers %d",
-                    sqoopHome, connectionUrl, driverClass, jdbcUser, jdbcPass, selectSql, jobWorkingDir, hiveTable,
-                    splitColumn, bquery, sqoopNullString, sqoopNullNonString, filedDelimiter, mapperNum);
-            logger.debug(String.format(Locale.ROOT, "sqoop cmd:%s", cmd));
+            String cmd = String.format(Locale.ROOT, "%s/bin/sqoop import" + generateSqoopConfigArgString()
+                    + "--connect \"%s\" --driver %s --username %s --password \"%s\" --query \"%s AND \\$CONDITIONS\" "
+                    + "--target-dir %s/%s --split-by %s --boundary-query \"%s\" --null-string '%s' "
+                    + "--null-non-string '%s' --fields-terminated-by '%s' --num-mappers %d", sqoopHome, connectionUrl,
+                    driverClass, jdbcUser, jdbcPass, selectSql, jobWorkingDir, hiveTable, splitColumn, bquery,
+                    sqoopNullString, sqoopNullNonString, filedDelimiter, mapperNum);
+            logger.debug("sqoop cmd : {}", cmd);
             CmdStep step = new CmdStep();
             step.setCmd(cmd);
             step.setName(ExecutableConstants.STEP_NAME_SQOOP_TO_FLAT_HIVE_TABLE);
@@ -212,7 +282,7 @@ public class JdbcHiveInputBase extends HiveInputBase {
         protected String generateSqoopConfigArgString() {
             KylinConfig kylinConfig = getConfig();
             Map<String, String> config = Maps.newHashMap();
-            config.put("mapreduce.job.queuename", getSqoopJobQueueName(kylinConfig)); // override job queue from mapreduce config
+            config.put(MR_OVERRIDE_QUEUE_KEY, getSqoopJobQueueName(kylinConfig)); // override job queue from mapreduce config
             config.putAll(SourceConfigurationUtil.loadSqoopConfiguration());
             config.putAll(kylinConfig.getSqoopConfigOverride());
 
@@ -229,4 +299,232 @@ public class JdbcHiveInputBase extends HiveInputBase {
         sqlExpr = sqlExpr.replaceAll("`", "\\\\`");
         return sqlExpr;
     }
+
+    private static String generateSelectDataStatementRDBMS(IJoinedFlatTableDesc flatDesc, boolean singleLine,
+            String[] skipAs, IJdbcMetadata metadata, Map<String, String> metaMap) {
+        SourceDialect dialect = metadata.getDialect();
+        final String sep = singleLine ? " " : "\n";
+
+        final List<String> skipAsList = (skipAs == null) ? new ArrayList<>() : Arrays.asList(skipAs);
+
+        StringBuilder sql = new StringBuilder();
+        sql.append("SELECT");
+        sql.append(sep);
+
+        for (int i = 0; i < flatDesc.getAllColumns().size(); i++) {
+            TblColRef col = flatDesc.getAllColumns().get(i);
+            if (i > 0) {
+                sql.append(",");
+            }
+            String colTotalName = String.format(Locale.ROOT, "%s.%s", col.getTableRef().getTableName(), col.getName());
+            if (skipAsList.contains(colTotalName)) {
+                sql.append(getTableColumnIdentityQuoted(col, metadata, metaMap, true)).append(sep);
+            } else {
+                sql.append(getTableColumnIdentityQuoted(col, metadata, metaMap, true)).append(" as ")
+                        .append(quoteIdentifier(JoinedFlatTable.colName(col), dialect)).append(sep);
+            }
+        }
+        appendJoinStatement(flatDesc, sql, singleLine, metadata, metaMap);
+        appendWhereStatement(flatDesc, sql, singleLine, metadata, metaMap);
+        return sql.toString();
+    }
+
+    private static void appendJoinStatement(IJoinedFlatTableDesc flatDesc, StringBuilder sql, boolean singleLine,
+            IJdbcMetadata metadata, Map<String, String> metaMap) {
+        final String sep = singleLine ? " " : "\n";
+        Set<TableRef> dimTableCache = new HashSet<>();
+
+        DataModelDesc model = flatDesc.getDataModel();
+        sql.append(" FROM ")
+                .append(getSchemaQuoted(metaMap,
+                        flatDesc.getDataModel().getRootFactTable().getTableDesc().getDatabase(), metadata, true))
+                .append(".")
+                .append(getTableIdentityQuoted(flatDesc.getDataModel().getRootFactTable(), metaMap, metadata, true));
+
+        sql.append(" ");
+        sql.append((getTableIdentityQuoted(flatDesc.getDataModel().getRootFactTable(), metaMap, metadata, true)))
+                .append(sep);
+
+        for (JoinTableDesc lookupDesc : model.getJoinTables()) {
+            JoinDesc join = lookupDesc.getJoin();
+            if (join != null && !join.getType().equals("")) {
+                TableRef dimTable = lookupDesc.getTableRef();
+                if (!dimTableCache.contains(dimTable)) {
+                    TblColRef[] pk = join.getPrimaryKeyColumns();
+                    TblColRef[] fk = join.getForeignKeyColumns();
+                    if (pk.length != fk.length) {
+                        throw new RuntimeException("Invalid join condition of lookup table: " + lookupDesc);
+                    }
+                    String joinType = join.getType().toUpperCase(Locale.ROOT);
+
+                    sql.append(joinType).append(" JOIN ")
+                            .append(getSchemaQuoted(metaMap, dimTable.getTableDesc().getDatabase(), metadata, true))
+                            .append(".").append(getTableIdentityQuoted(dimTable, metaMap, metadata, true));
+
+                    sql.append(" ");
+                    sql.append(getTableIdentityQuoted(dimTable, metaMap, metadata, true)).append(sep);
+                    sql.append("ON ");
+                    for (int i = 0; i < pk.length; i++) {
+                        if (i > 0) {
+                            sql.append(" AND ");
+                        }
+                        sql.append(getTableColumnIdentityQuoted(fk[i], metadata, metaMap, true)).append(" = ")
+                                .append(getTableColumnIdentityQuoted(pk[i], metadata, metaMap, true));
+                    }
+                    sql.append(sep);
+                    dimTableCache.add(dimTable);
+                }
+            }
+        }
+    }
+
+    private static void appendWhereStatement(IJoinedFlatTableDesc flatDesc, StringBuilder sql, boolean singleLine,
+            IJdbcMetadata metadata, Map<String, String> metaMap) {
+        final String sep = singleLine ? " " : "\n";
+
+        StringBuilder whereBuilder = new StringBuilder();
+        whereBuilder.append("WHERE 1=1");
+
+        DataModelDesc model = flatDesc.getDataModel();
+        if (StringUtils.isNotEmpty(model.getFilterCondition())) {
+            whereBuilder.append(" AND (").append(model.getFilterCondition()).append(") ");
+        }
+
+        if (flatDesc.getSegment() != null) {
+            PartitionDesc partDesc = model.getPartitionDesc();
+            if (partDesc != null && partDesc.getPartitionDateColumn() != null) {
+                SegmentRange segRange = flatDesc.getSegRange();
+
+                if (segRange != null && !segRange.isInfinite()) {
+                    whereBuilder.append(" AND (");
+                    whereBuilder.append(partDesc.getPartitionConditionBuilder().buildDateRangeCondition(partDesc,
+                            flatDesc.getSegment(), segRange,
+                            col -> getTableColumnIdentityQuoted(col, metadata, metaMap, true)));
+                    whereBuilder.append(")");
+                    whereBuilder.append(sep);
+                }
+            }
+        }
+        sql.append(whereBuilder.toString());
+    }
+
+    /**
+     * @return {TABLE_NAME}.{COLUMN_NAME}
+     */
+    private static String getTableColumnIdentityQuoted(TblColRef col, IJdbcMetadata metadata,
+            Map<String, String> metaMap, boolean needQuote) {
+        String tblName = getTableIdentityQuoted(col.getTableRef(), metaMap, metadata, needQuote);
+        String colName = getColumnIdentityQuoted(col, metadata, metaMap, needQuote);
+        return tblName + "." + colName;
+    }
+
+    /**
+     * @return {SCHEMA_NAME}
+     */
+    static String getSchemaQuoted(Map<String, String> metaMap, String database, IJdbcMetadata metadata,
+            boolean needQuote) {
+        String databaseName = fetchValue(database, null, null, metaMap);
+        if (needQuote) {
+            return quoteIdentifier(databaseName, metadata.getDialect());
+        } else {
+            return databaseName;
+        }
+    }
+
+    /**
+     * @return {TABLE_NAME}
+     */
+    static String getTableIdentityQuoted(TableRef tableRef, Map<String, String> metaMap, IJdbcMetadata metadata,
+            boolean needQuote) {
+        String value = fetchValue(tableRef.getTableDesc().getDatabase(), tableRef.getTableDesc().getName(), null,
+                metaMap);
+        String[] res = value.split("\\.");
+        value = res[res.length - 1];
+        if (needQuote) {
+            return quoteIdentifier(value, metadata.getDialect());
+        } else {
+            return value;
+        }
+    }
+
+    /**
+     * @return {TABLE_NAME}
+     */
+    static String getTableIdentityQuoted(String database, String table, Map<String, String> metaMap,
+            IJdbcMetadata metadata, boolean needQuote) {
+        String value = fetchValue(database, table, null, metaMap);
+        String[] res = value.split("\\.");
+        value = res[res.length - 1];
+        if (needQuote) {
+            return quoteIdentifier(value, metadata.getDialect());
+        } else {
+            return value;
+        }
+    }
+
+    /**
+     * @return {COLUMN_NAME}
+     */
+    private static String getColumnIdentityQuoted(TblColRef tblColRef, IJdbcMetadata metadata,
+            Map<String, String> metaMap, boolean needQuote) {
+        String value = fetchValue(tblColRef.getTableRef().getTableDesc().getDatabase(),
+                tblColRef.getTableRef().getTableDesc().getName(), tblColRef.getName(), metaMap);
+        String[] res = value.split("\\.");
+        value = res[res.length - 1];
+        if (needQuote) {
+            return quoteIdentifier(value, metadata.getDialect());
+        } else {
+            return value;
+        }
+    }
+
+    /**
+     * Quote the identifier according to the SQL dialect: MySQL uses
+     * backticks (`), Oracle 11g uses double quotation marks ("), and
+     * SQL Server 2017 uses square brackets ([ ]) as the quote character.
+     *
+     * @param identifier something like tableA.columnB
+     */
+    static String quoteIdentifier(String identifier, SourceDialect dialect) {
+        if (KylinConfig.getInstanceFromEnv().enableHiveDdlQuote()) {
+            String[] identifierArray = identifier.split("\\.");
+            String quoted = "";
+            for (int i = 0; i < identifierArray.length; i++) {
+                switch (dialect) {
+                case SQL_SERVER:
+                    identifierArray[i] = "[" + identifierArray[i] + "]";
+                    break;
+                case MYSQL:
+                case HIVE:
+                    identifierArray[i] = "`" + identifierArray[i] + "`";
+                    break;
+                default:
+                    String quote = KylinConfig.getInstanceFromEnv().getQuoteCharacter();
+                    identifierArray[i] = quote + identifierArray[i] + quote;
+                }
+            }
+            quoted = String.join(".", identifierArray);
+            return quoted;
+        } else {
+            return identifier;
+        }
+    }
+
+    static String fetchValue(String database, String table, String column, Map<String, String> metadataMap) {
+        String key;
+        if (table == null && column == null) {
+            key = database;
+        } else if (column == null) {
+            key = database + "." + table;
+        } else {
+            key = database + "." + table + "." + column;
+        }
+        String val = metadataMap.get(key.toUpperCase(Locale.ROOT));
+        if (val == null) {
+            logger.warn("Not find for {} from metadata cache.", key);
+            return key;
+        } else {
+            return val;
+        }
+    }
 }
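
To make the identifier pipeline concrete, a small sketch (the table and column
names are hypothetical; the cache layout matches calCachedJdbcMeta, which keys
every entry by its upper-case form, and quoting only applies when
kylin.source.hive.quote-enabled is on):

    Map<String, String> metaMap = new TreeMap<>();
    metaMap.put("SALESDB.ORDERS.ORDER_ID", "SalesDb.Orders.Order_Id"); // as cached from JDBC metadata

    // fetchValue() restores the source system's original casing...
    String original = JdbcHiveInputBase.fetchValue("SALESDB", "ORDERS", "ORDER_ID", metaMap);

    // ...and quoteIdentifier() wraps each dot-separated part in the
    // dialect's quote character:
    String quoted = JdbcHiveInputBase.quoteIdentifier(original, SourceDialect.MYSQL);
    // quoted == "`SalesDb`.`Orders`.`Order_Id`"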
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcTableReader.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcTableReader.java
index 3c2b4f9..1c689dd 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcTableReader.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/JdbcTableReader.java
@@ -23,10 +23,15 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Locale;
+import java.util.Map;
+import java.util.TreeMap;
 
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.source.IReadableTable.TableReader;
 import org.apache.kylin.source.hive.DBConnConf;
+import org.apache.kylin.source.jdbc.metadata.IJdbcMetadata;
+import org.apache.kylin.source.jdbc.metadata.JdbcMetadataFactory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -61,7 +66,15 @@ public class JdbcTableReader implements TableReader {
         String jdbcPass = config.getJdbcSourcePass();
         dbconf = new DBConnConf(driverClass, connectionUrl, jdbcUser, jdbcPass);
         jdbcCon = SqlUtil.getConnection(dbconf);
-        String sql = String.format(Locale.ROOT, "select * from %s.%s", dbName, tableName);
+        IJdbcMetadata meta = JdbcMetadataFactory
+                .getJdbcMetadata(SourceDialect.getDialect(config.getJdbcSourceDialect()), dbconf);
+
+        Map<String, String> metadataCache = new TreeMap<>();
+        JdbcHiveInputBase.JdbcBaseBatchCubingInputSide.calCachedJdbcMeta(metadataCache, dbconf, meta);
+        String database = JdbcHiveInputBase.getSchemaQuoted(metadataCache, dbName, meta, true);
+        String table = JdbcHiveInputBase.getTableIdentityQuoted(dbName, tableName, metadataCache, meta, true);
+
+        String sql = String.format(Locale.ROOT, "select * from %s.%s", database, table);
         try {
             statement = jdbcCon.createStatement();
             rs = statement.executeQuery(sql);
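
With quoting enabled against a MySQL source, the statement built above would
come out as, e.g. (hypothetical schema and table names):

    select * from `SalesDb`.`Orders`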
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/SqlUtil.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/SqlUtil.java
index 5242832..9299d78 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/SqlUtil.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/SqlUtil.java
@@ -23,7 +23,6 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.sql.Types;
 import java.util.Random;
-
 import org.apache.kylin.metadata.datatype.DataType;
 import org.apache.kylin.source.hive.DBConnConf;
 import org.slf4j.Logger;
@@ -62,6 +61,7 @@ public class SqlUtil {
     }
 
     public static final int tryTimes = 5;
+    public static final String DRIVER_MISS = "DRIVER_MISS";
 
     public static Connection getConnection(DBConnConf dbconf) {
         if (dbconf.getUrl() == null)
@@ -70,7 +70,8 @@ public class SqlUtil {
         try {
             Class.forName(dbconf.getDriver());
         } catch (Exception e) {
-            logger.error("", e);
+            logger.error("Miss Driver", e);
+            throw new IllegalStateException(DRIVER_MISS);
         }
         boolean got = false;
         int times = 0;
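
The new DRIVER_MISS contract, sketched from a caller's perspective (this
mirrors the handling in calCachedJdbcMeta above):

    try (Connection conn = SqlUtil.getConnection(dbconf)) {
        // ... use the connection ...
    } catch (IllegalStateException e) {
        if (SqlUtil.DRIVER_MISS.equalsIgnoreCase(e.getMessage())) {
            // tolerated: the JDBC driver jar is absent on this node
        } else {
            throw e;
        }
    } catch (Exception e) {
        throw new IllegalStateException("Error when connect to JDBC source", e);
    }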
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/extensible/JdbcHiveInputBase.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/extensible/JdbcHiveInputBase.java
index 9fd6d30..fcafae2 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/extensible/JdbcHiveInputBase.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/extensible/JdbcHiveInputBase.java
@@ -94,7 +94,7 @@ public class JdbcHiveInputBase extends org.apache.kylin.source.jdbc.JdbcHiveInpu
                                     .getPartitionTimeColumnRef().getTableAlias().equals(splitTableAlias))) {
                         String quotedPartCond = FlatTableSqlQuoteUtils.quoteIdentifierInSqlExpr(flatDesc,
                                 partitionDesc.getPartitionConditionBuilder().buildDateRangeCondition(partitionDesc,
-                                        flatDesc.getSegment(), segRange));
+                                        flatDesc.getSegment(), segRange, null));
                         bquery += " WHERE " + quotedPartCond;
                     }
                 }
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/DefaultJdbcMetadata.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/DefaultJdbcMetadata.java
index 0842199..b9c65fc 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/DefaultJdbcMetadata.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/DefaultJdbcMetadata.java
@@ -23,8 +23,8 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.List;
-
 import java.util.Locale;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.source.hive.DBConnConf;
 import org.apache.kylin.source.jdbc.SqlUtil;
 import org.slf4j.Logger;
@@ -74,4 +74,8 @@ public class DefaultJdbcMetadata implements IJdbcMetadata {
     public ResultSet listColumns(final DatabaseMetaData dbmd, String schema, String table) throws SQLException {
         return dbmd.getColumns(null, schema, table, null);
     }
+
+    public SourceDialect getDialect() {
+        return SourceDialect.UNKNOWN;
+    }
 }
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/IJdbcMetadata.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/IJdbcMetadata.java
index 169fe60..f41c3e8 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/IJdbcMetadata.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/IJdbcMetadata.java
@@ -17,12 +17,16 @@
 */
 package org.apache.kylin.source.jdbc.metadata;
 
+import org.apache.kylin.common.SourceDialect;
+
 import java.sql.DatabaseMetaData;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.List;
 
 public interface IJdbcMetadata {
+    SourceDialect getDialect();
+
     List<String> listDatabases() throws SQLException;
 
     List<String> listTables(String database) throws SQLException;
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactory.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactory.java
index ae4c0ff..498bc09 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactory.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/JdbcMetadataFactory.java
@@ -17,17 +17,19 @@
 */
 package org.apache.kylin.source.jdbc.metadata;
 
-import java.util.Locale;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.source.hive.DBConnConf;
-import org.apache.kylin.source.jdbc.JdbcDialect;
 
-public abstract class JdbcMetadataFactory {
-    public static IJdbcMetadata getJdbcMetadata(String dialect, final DBConnConf dbConnConf) {
-        String jdbcDialect = (null == dialect) ? "" : dialect.toLowerCase(Locale.ROOT);
+public class JdbcMetadataFactory {
+
+    private JdbcMetadataFactory() {
+    }
+
+    public static IJdbcMetadata getJdbcMetadata(SourceDialect jdbcDialect, final DBConnConf dbConnConf) {
         switch (jdbcDialect) {
-        case (JdbcDialect.DIALECT_MSSQL):
+        case SQL_SERVER:
             return new SQLServerJdbcMetadata(dbConnConf);
-        case (JdbcDialect.DIALECT_MYSQL):
+        case MYSQL:
             return new MySQLJdbcMetadata(dbConnConf);
         default:
             return new DefaultJdbcMetadata(dbConnConf);
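
The factory dispatch plus the new getDialect() accessor, sketched with a
hypothetical connection config:

    DBConnConf dbConnConf = new DBConnConf(driverClass, url, user, pass); // hypothetical values
    IJdbcMetadata meta = JdbcMetadataFactory.getJdbcMetadata(SourceDialect.MYSQL, dbConnConf);
    // meta is a MySQLJdbcMetadata, and meta.getDialect() == SourceDialect.MYSQL;
    // DefaultJdbcMetadata reports SourceDialect.UNKNOWN.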
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/MySQLJdbcMetadata.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/MySQLJdbcMetadata.java
index 54c2a03..e3c523c 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/MySQLJdbcMetadata.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/MySQLJdbcMetadata.java
@@ -24,6 +24,7 @@ import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.List;
 
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.source.hive.DBConnConf;
 import org.apache.kylin.source.jdbc.SqlUtil;
 
@@ -64,4 +65,9 @@ public class MySQLJdbcMetadata extends DefaultJdbcMetadata {
     public ResultSet getTable(final DatabaseMetaData dbmd, String catalog, String table) throws SQLException {
         return dbmd.getTables(catalog, null, table, null);
     }
+
+    @Override
+    public SourceDialect getDialect() {
+        return SourceDialect.MYSQL;
+    }
 }
diff --git a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/SQLServerJdbcMetadata.java b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/SQLServerJdbcMetadata.java
index 5373672..696a350 100644
--- a/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/SQLServerJdbcMetadata.java
+++ b/source-jdbc/src/main/java/org/apache/kylin/source/jdbc/metadata/SQLServerJdbcMetadata.java
@@ -26,6 +26,7 @@ import java.util.List;
 import java.util.Set;
 
 import org.apache.commons.lang3.StringUtils;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.source.hive.DBConnConf;
 import org.apache.kylin.source.jdbc.SqlUtil;
 
@@ -59,4 +60,9 @@ public class SQLServerJdbcMetadata extends DefaultJdbcMetadata {
         }
         return new ArrayList<>(ret);
     }
+
+    @Override
+    public SourceDialect getDialect() {
+        return SourceDialect.SQL_SERVER;
+    }
 }
diff --git a/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcExplorerTest.java b/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcExplorerTest.java
index a0df4f4..ed3d181 100644
--- a/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcExplorerTest.java
+++ b/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcExplorerTest.java
@@ -18,7 +18,6 @@
 package org.apache.kylin.source.jdbc;
 
 import static org.mockito.Matchers.any;
-import static org.mockito.Matchers.anyString;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.times;
@@ -35,6 +34,7 @@ import java.util.List;
 import java.util.Locale;
 
 import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.SourceDialect;
 import org.apache.kylin.common.util.LocalFileMetadataTestCase;
 import org.apache.kylin.common.util.Pair;
 import org.apache.kylin.metadata.model.ColumnDesc;
@@ -83,7 +83,7 @@ public class JdbcExplorerTest extends LocalFileMetadataTestCase {
         PowerMockito.stub(PowerMockito.method(SqlUtil.class, "getConnection")).toReturn(connection);
         PowerMockito.mockStatic(JdbcMetadataFactory.class);
 
-        when(JdbcMetadataFactory.getJdbcMetadata(anyString(), any(DBConnConf.class))).thenReturn(jdbcMetadata);
+        when(JdbcMetadataFactory.getJdbcMetadata(any(SourceDialect.class), any(DBConnConf.class))).thenReturn(jdbcMetadata);
         when(connection.getMetaData()).thenReturn(dbmd);
 
         jdbcExplorer = spy(JdbcExplorer.class);
diff --git a/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcHiveInputBaseTest.java b/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcHiveInputBaseTest.java
new file mode 100644
index 0000000..f6415e6
--- /dev/null
+++ b/source-jdbc/src/test/java/org/apache/kylin/source/jdbc/JdbcHiveInputBaseTest.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kylin.source.jdbc;
+
+import org.apache.kylin.common.KylinConfig;
+import org.apache.kylin.common.SourceDialect;
+import org.apache.kylin.common.util.LocalFileMetadataTestCase;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+
+public class JdbcHiveInputBaseTest extends LocalFileMetadataTestCase {
+
+    @BeforeClass
+    public static void setupClass() throws SQLException {
+        staticCreateTestMetadata();
+        KylinConfig kylinConfig = KylinConfig.getInstanceFromEnv();
+        kylinConfig.setProperty("kylin.source.hive.quote-enabled", "true");
+    }
+
+    @Test
+    public void testFetchValue() {
+        Map<String, String> map = new HashMap<>();
+        String guess = JdbcHiveInputBase.fetchValue("DB_1", "TB_2", "COL_3", map);
+
+        // not found, return input value
+        assertEquals("DB_1.TB_2.COL_3", guess);
+        map.put("DB_1.TB_2.COL_3", "Db_1.Tb_2.Col_3");
+
+        guess = JdbcHiveInputBase.fetchValue("DB_1", "TB_2", "COL_3", map);
+        // found, return cached value
+        assertEquals("Db_1.Tb_2.Col_3", guess);
+    }
+
+    @Test
+    public void testQuoteIdentifier() {
+        String guess = JdbcHiveInputBase.quoteIdentifier("Tbl1.Col1", SourceDialect.MYSQL);
+        assertEquals("`Tbl1`.`Col1`", guess);
+        guess = JdbcHiveInputBase.quoteIdentifier("Tbl1.Col1", SourceDialect.SQL_SERVER);
+        assertEquals("[Tbl1].[Col1]", guess);
+    }
+
+    @AfterClass
+    public static void cleanup() {
+        staticCleanupTestMetadata();
+    }
+}


[kylin] 11/11: KYLIN-4121 Cleanup hive view intermediate tables after job be finished

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 75179ab5d96a574e32807491edcf06d0a0292b07
Author: rupengwang <wa...@live.cn>
AuthorDate: Fri Aug 2 21:18:52 2019 +0800

    KYLIN-4121 Cleanup hive view intermediate tables after job be finished
---
 .../org/apache/kylin/source/hive/GarbageCollectionStep.java    | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/source-hive/src/main/java/org/apache/kylin/source/hive/GarbageCollectionStep.java b/source-hive/src/main/java/org/apache/kylin/source/hive/GarbageCollectionStep.java
index ed86513..c541751 100644
--- a/source-hive/src/main/java/org/apache/kylin/source/hive/GarbageCollectionStep.java
+++ b/source-hive/src/main/java/org/apache/kylin/source/hive/GarbageCollectionStep.java
@@ -47,8 +47,6 @@ public class GarbageCollectionStep extends AbstractExecutable {
         StringBuffer output = new StringBuffer();
         try {
             output.append(cleanUpIntermediateFlatTable(config));
-            // don't drop view to avoid concurrent issue
-            //output.append(cleanUpHiveViewIntermediateTable(config));
         } catch (IOException e) {
             logger.error("job:" + getId() + " execute finished with exception", e);
             return ExecuteResult.createError(e);
@@ -57,6 +55,7 @@ public class GarbageCollectionStep extends AbstractExecutable {
         return new ExecuteResult(ExecuteResult.State.SUCCEED, output.toString());
     }
 
+    // clean up both the Hive intermediate flat table and the view intermediate table
     private String cleanUpIntermediateFlatTable(KylinConfig config) throws IOException {
         String quoteCharacter = FlatTableSqlQuoteUtils.getQuote();
         StringBuffer output = new StringBuffer();
@@ -94,9 +93,14 @@ public class GarbageCollectionStep extends AbstractExecutable {
         setParam("oldHiveTables", StringUtil.join(tableIdentity, ","));
     }
 
+    // get the intermediate fact table and lookup tables (if any)
     private List<String> getIntermediateTables() {
         List<String> intermediateTables = Lists.newArrayList();
-        String[] tables = StringUtil.splitAndTrim(getParam("oldHiveTables"), ",");
+        String hiveTables = getParam("oldHiveTables");
+        if (this.getParams().containsKey("oldHiveViewIntermediateTables")) {
+            hiveTables += ("," + getParam("oldHiveViewIntermediateTables"));
+        }
+        String[] tables = StringUtil.splitAndTrim(hiveTables, ",");
         for (String t : tables) {
             intermediateTables.add(t);
         }
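
A hedged illustration of the widened cleanup scope (the param names are the
ones read above; the values are hypothetical):

    setParam("oldHiveTables", "kylin_intermediate_fact_0001");
    setParam("oldHiveViewIntermediateTables", "kylin_intermediate_lookup_view_0001");
    // getIntermediateTables() now yields both tables for the drop step:
    // [kylin_intermediate_fact_0001, kylin_intermediate_lookup_view_0001]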


[kylin] 06/11: [KYLIN-4066] Fix No planner for not ROLE_ADMIN user

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nic pushed a commit to branch 2.6.x
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 4a30ba4f373ee13286e11fb1ef5e91002866caf1
Author: langdamao <la...@163.com>
AuthorDate: Thu Jul 4 17:34:06 2019 +0800

    [KYLIN-4066] Fix No planner for not ROLE_ADMIN user
    
    Signed-off-by: langdamao <la...@163.com>
---
 webapp/app/partials/cubes/cube_detail.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/webapp/app/partials/cubes/cube_detail.html b/webapp/app/partials/cubes/cube_detail.html
index 9443950..e08b1b9 100755
--- a/webapp/app/partials/cubes/cube_detail.html
+++ b/webapp/app/partials/cubes/cube_detail.html
@@ -41,7 +41,7 @@
             ng-if="userService.hasRole('ROLE_ADMIN')">
             <a href="" ng-click="cube.visiblePage='hbase';getHbaseInfo(cube)">Storage</a>
         </li>
-        <li class="{{cube.visiblePage=='planner'? 'active':''}}" ng-if="(userService.hasRole('ROLE_ADMIN') || hasPermission(cube, permissions.ADMINISTRATION.mask)) && isShowCubeplanner">
+        <li class="{{cube.visiblePage=='planner'? 'active':''}}" ng-if="(userService.hasRole('ROLE_ADMIN') || hasPermission('cube', cube, permissions.ADMINISTRATION.mask)) && isShowCubeplanner">
             <a href="" ng-click="cube.visiblePage='planner';getCubePlanner(cube);">Planner</a>
         </li>
     </ul>