Posted to commits@carbondata.apache.org by ra...@apache.org on 2018/02/03 19:43:08 UTC

[01/50] [abbrv] carbondata git commit: [HOTFIX] Correct CI url and add standard partition usage [Forced Update!]

Repository: carbondata
Updated Branches:
  refs/heads/branch-1.3 ba7589805 -> e16e87818 (forced update)


[HOTFIX] Correct CI url and add standard partition usage

This closes #1889


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/24ba2fe2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/24ba2fe2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/24ba2fe2

Branch: refs/heads/branch-1.3
Commit: 24ba2fe2226f9168dcde6c216948f8656488293d
Parents: 8a86d3f
Author: chenliang613 <ch...@huawei.com>
Authored: Tue Jan 30 22:35:02 2018 +0800
Committer: Jacky Li <ja...@qq.com>
Committed: Wed Jan 31 19:18:26 2018 +0800

----------------------------------------------------------------------
 README.md                                       | 12 +++----
 docs/data-management-on-carbondata.md           | 38 ++++++++++++++++++--
 .../examples/StandardPartitionExample.scala     |  7 ++--
 .../preaggregate/TestPreAggCreateCommand.scala  | 17 +++++++++
 4 files changed, 61 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 15dba93..3b6792e 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 
 <img src="/docs/images/CarbonData_logo.png" width="200" height="40">
 
-Apache CarbonData is an indexed columnar data format for fast analytics on big data platform, e.g.Apache Hadoop, Apache Spark, etc.
+Apache CarbonData is an indexed columnar data store solution for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
 
 You can find the latest CarbonData document and learn more at:
 [http://carbondata.apache.org](http://carbondata.apache.org/)
@@ -25,14 +25,9 @@ You can find the latest CarbonData document and learn more at:
 [CarbonData cwiki](https://cwiki.apache.org/confluence/display/CARBONDATA/)
 
 ## Status
-Spark2.1:
-[![Build Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-2.1)](https://builds.apache.org/view/A-D/view/CarbonData/job/carbondata-master-spark-2.1/badge/icon)
+Spark2.2:
+[![Build Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-2.2)](https://builds.apache.org/view/A-D/view/CarbonData/job/carbondata-master-spark-2.2/lastBuild/testReport)
 [![Coverage Status](https://coveralls.io/repos/github/apache/carbondata/badge.svg?branch=master)](https://coveralls.io/github/apache/carbondata?branch=master)
-## Features
-CarbonData file format is a columnar store in HDFS, it has many features that a modern columnar format has, such as splittable, compression schema ,complex data type etc, and CarbonData has following unique features:
-* Stores data along with index: it can significantly accelerate query performance and reduces the I/O scans and CPU resources, where there are filters in the query.  CarbonData index consists of multiple level of indices, a processing framework can leverage this index to reduce the task it needs to schedule and process, and it can also do skip scan in more finer grain unit (called blocklet) in task side scanning instead of scanning the whole file. 
-* Operable encoded data :Through supporting efficient compression and global encoding schemes, can query on compressed/encoded data, the data can be converted just before returning the results to the users, which is "late materialized". 
-* Supports for various use cases with one single Data format : like interactive OLAP-style query, Sequential Access (big scan), Random Access (narrow scan). 
 
 ## Building CarbonData
 CarbonData is built using Apache Maven, to [build CarbonData](https://github.com/apache/carbondata/blob/master/build)
@@ -50,6 +45,7 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
 
 ## Other Technical Material
 [Apache CarbonData meetup material](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609)
+[Use Case Articles](https://cwiki.apache.org/confluence/display/CARBONDATA/CarbonData+Articles)
 
 ## Fork and Contribute
 This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it. 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 3af95ac..d7954e1 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -567,9 +567,43 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   ALTER TABLE table_name COMPACT 'MAJOR'
   ```
 
-## PARTITION
+  - **CLEAN SEGMENTS AFTER Compaction**
+  
+  Clean the segments which have been compacted:
+  ```
+  CLEAN FILES FOR TABLE carbon_table
+  ```
+
+## STANDARD PARTITION
+
+  The standard partition is the same as the Spark partition; the command to create a partitioned table is as below:
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
+  PARTITIONED BY (partition_col_name data_type)
+  STORED BY 'carbondata'
+  [TBLPROPERTIES (property_name=property_value, ...)]
+  ```
+
+  Example:
+  ```
+  CREATE TABLE partitiontable0
+                  (id Int,
+                  vin String,
+                  phonenumber Long,
+                  area String,
+                  salary Int)
+                  PARTITIONED BY (country String)
+                  STORED BY 'org.apache.carbondata.format'
+                  TBLPROPERTIES('SORT_COLUMNS'='id,vin')
+  ```
+
+
+## CARBONDATA PARTITION(HASH,RANGE,LIST)
 
-  Similar to other system's partition features, CarbonData's partition feature also can be used to improve query performance by filtering on the partition column.
+  The CarbonData partition supports three types: Hash, Range and List. Similar to other systems' partition features, CarbonData's partition feature can be used to improve query performance by filtering on the partition column.
 
 ### Create Hash Partition Table
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
----------------------------------------------------------------------
diff --git a/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala b/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
index 5a8e3f5..1126ecc 100644
--- a/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
+++ b/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
@@ -47,6 +47,7 @@ object StandardPartitionExample {
                 | salary Int)
                 | PARTITIONED BY (country String)
                 | STORED BY 'org.apache.carbondata.format'
+                | TBLPROPERTIES('SORT_COLUMNS'='id,vin')
               """.stripMargin)
 
     spark.sql(s"""
@@ -55,7 +56,7 @@ object StandardPartitionExample {
 
     spark.sql(
       s"""
-         | SELECT *
+         | SELECT country,id,vin,phonenumber,area,salary
          | FROM partitiontable0
       """.stripMargin).show()
 
@@ -65,8 +66,8 @@ object StandardPartitionExample {
     import scala.util.Random
     import spark.implicits._
     val r = new Random()
-    val df = spark.sparkContext.parallelize(1 to 10 * 1000 * 1000)
-      .map(x => ("No." + r.nextInt(100000), "country" + x % 8, "city" + x % 50, x % 300))
+    val df = spark.sparkContext.parallelize(1 to 10 * 1000 * 10)
+      .map(x => ("No." + r.nextInt(1000), "country" + x % 8, "city" + x % 50, x % 300))
       .toDF("ID", "country", "city", "population")
 
     // Create table without partition

http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 303abf4..23132de 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.carbondata.integration.spark.testsuite.preaggregate
 
 import scala.collection.JavaConverters._


[12/50] [abbrv] carbondata git commit: [CARBONDATA-2012] Add support to load pre-aggregate in one transaction

Posted by ra...@apache.org.
[CARBONDATA-2012] Add support to load pre-aggregate in one transaction

Currently, if a table (t1) has 2 pre-aggregate tables (p1, p2), then while loading, all the pre-aggregate tables are committed (table status written) first and then the parent table is committed.

After this PR the flow would be like this (a minimal sketch of the staging-and-rename step follows the list):

load t1
load p1
load p2
write table status for p2 with transactionID
write table status for p1 with transactionID
rename tablestatus_UUID to tablestatus for p2
rename tablestatus_UUID to tablestatus for p1
write table status for t1
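
A minimal, self-contained sketch of this staging-and-rename commit (using plain java.nio instead of CarbonData's SegmentStatusManager / CarbonTablePath helpers; the directory layout and JSON content here are illustrative assumptions only):

```scala
import java.nio.file.{Files, Paths, StandardCopyOption}
import java.util.UUID

object UuidCommitSketch {

  // Stage the new table status under a uuid-suffixed name; the live file is untouched.
  def writeStaged(metadataDir: String, uuid: String, content: String): Unit = {
    Files.write(Paths.get(metadataDir, s"tablestatus_$uuid"), content.getBytes("UTF-8"))
  }

  // Promote the staged file by renaming it over the live tablestatus (the "rename" step above).
  def promote(metadataDir: String, uuid: String): Unit = {
    Files.move(
      Paths.get(metadataDir, s"tablestatus_$uuid"),
      Paths.get(metadataDir, "tablestatus"),
      StandardCopyOption.REPLACE_EXISTING)
  }

  def main(args: Array[String]): Unit = {
    val uuid = UUID.randomUUID().toString
    // hypothetical metadata directories standing in for the child tables p2 and p1
    val children = Seq("p2", "p1").map(name => Files.createTempDirectory(name).toString)
    // write the table status for every child with the transaction id first ...
    children.foreach(dir => writeStaged(dir, uuid, """{"status":"Success"}"""))
    // ... and only when all writes succeed, rename each staged file into place.
    children.foreach(dir => promote(dir, uuid))
  }
}
```

Only after every child's staged status has been promoted does the parent table status get written, so a failure in any child leaves all live tablestatus files unchanged.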

This closes #1781


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d680e9cf
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d680e9cf
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d680e9cf

Branch: refs/heads/branch-1.3
Commit: d680e9cf5016475e6e9b320c27be6503e1c6e66c
Parents: c9a02fc
Author: kunal642 <ku...@gmail.com>
Authored: Mon Jan 15 14:35:56 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Thu Feb 1 14:42:05 2018 +0530

----------------------------------------------------------------------
 .../datastore/filesystem/LocalCarbonFile.java   |   2 +-
 .../statusmanager/SegmentStatusManager.java     |  29 ++-
 .../core/util/path/CarbonTablePath.java         |   8 +
 .../hadoop/api/CarbonOutputCommitter.java       |   4 +
 .../carbondata/events/AlterTableEvents.scala    |  10 +
 .../spark/rdd/AggregateDataMapCompactor.scala   |  31 ++-
 .../spark/rdd/CarbonDataRDDFactory.scala        |  37 +++-
 .../spark/rdd/CarbonTableCompactor.scala        |  33 ++-
 .../scala/org/apache/spark/sql/CarbonEnv.scala  |   4 +-
 .../management/CarbonLoadDataCommand.scala      |  25 ++-
 .../CreatePreAggregateTableCommand.scala        |   7 +-
 .../preaaggregate/PreAggregateListeners.scala   | 220 +++++++++++++++++--
 .../preaaggregate/PreAggregateUtil.scala        |  35 +--
 .../processing/loading/events/LoadEvents.java   |  13 ++
 .../processing/util/CarbonLoaderUtil.java       |  49 ++++-
 15 files changed, 431 insertions(+), 76 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java b/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
index 4ce78be..5df5a81 100644
--- a/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
+++ b/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
@@ -233,7 +233,7 @@ public class LocalCarbonFile implements CarbonFile {
 
   @Override public boolean renameForce(String changetoName) {
     File destFile = new File(changetoName);
-    if (destFile.exists()) {
+    if (destFile.exists() && !file.getAbsolutePath().equals(destFile.getAbsolutePath())) {
       if (destFile.delete()) {
         return file.renameTo(new File(changetoName));
       }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
index 6af0304..01f810e 100755
--- a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
@@ -178,23 +178,42 @@ public class SegmentStatusManager {
    * @return
    */
   public static LoadMetadataDetails[] readLoadMetadata(String metadataFolderPath) {
+    String metadataFileName = metadataFolderPath + CarbonCommonConstants.FILE_SEPARATOR
+        + CarbonCommonConstants.LOADMETADATA_FILENAME;
+    return readTableStatusFile(metadataFileName);
+  }
+
+  /**
+   * Reads the table status file with the specified UUID suffix if the UUID is non-empty.
+   */
+  public static LoadMetadataDetails[] readLoadMetadata(String metaDataFolderPath, String uuid) {
+    String tableStatusFileName;
+    if (uuid.isEmpty()) {
+      tableStatusFileName = metaDataFolderPath + CarbonCommonConstants.FILE_SEPARATOR
+          + CarbonCommonConstants.LOADMETADATA_FILENAME;
+    } else {
+      tableStatusFileName = metaDataFolderPath + CarbonCommonConstants.FILE_SEPARATOR
+          + CarbonCommonConstants.LOADMETADATA_FILENAME + CarbonCommonConstants.UNDERSCORE + uuid;
+    }
+    return readTableStatusFile(tableStatusFileName);
+  }
+
+  public static LoadMetadataDetails[] readTableStatusFile(String tableStatusPath) {
     Gson gsonObjectToRead = new Gson();
     DataInputStream dataInputStream = null;
     BufferedReader buffReader = null;
     InputStreamReader inStream = null;
-    String metadataFileName = metadataFolderPath + CarbonCommonConstants.FILE_SEPARATOR
-        + CarbonCommonConstants.LOADMETADATA_FILENAME;
     LoadMetadataDetails[] listOfLoadFolderDetailsArray;
     AtomicFileOperations fileOperation =
-        new AtomicFileOperationsImpl(metadataFileName, FileFactory.getFileType(metadataFileName));
+        new AtomicFileOperationsImpl(tableStatusPath, FileFactory.getFileType(tableStatusPath));
 
     try {
-      if (!FileFactory.isFileExist(metadataFileName, FileFactory.getFileType(metadataFileName))) {
+      if (!FileFactory.isFileExist(tableStatusPath, FileFactory.getFileType(tableStatusPath))) {
         return new LoadMetadataDetails[0];
       }
       dataInputStream = fileOperation.openForRead();
       inStream = new InputStreamReader(dataInputStream,
-              Charset.forName(CarbonCommonConstants.DEFAULT_CHARSET));
+          Charset.forName(CarbonCommonConstants.DEFAULT_CHARSET));
       buffReader = new BufferedReader(inStream);
       listOfLoadFolderDetailsArray =
           gsonObjectToRead.fromJson(buffReader, LoadMetadataDetails[].class);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
index 9e66657..fab6289 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
@@ -252,6 +252,14 @@ public class CarbonTablePath extends Path {
     return getMetaDataDir() + File.separator + TABLE_STATUS_FILE;
   }
 
+  public String getTableStatusFilePathWithUUID(String uuid) {
+    if (!uuid.isEmpty()) {
+      return getTableStatusFilePath() + CarbonCommonConstants.UNDERSCORE + uuid;
+    } else {
+      return getTableStatusFilePath();
+    }
+  }
+
   /**
    * Gets absolute path of data file
    *

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
----------------------------------------------------------------------
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
index f6e928d..9cca1bb 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
@@ -115,8 +115,12 @@ public class CarbonOutputCommitter extends FileOutputCommitter {
         LoadEvents.LoadTablePreStatusUpdateEvent event =
             new LoadEvents.LoadTablePreStatusUpdateEvent(carbonTable.getCarbonTableIdentifier(),
                 loadModel);
+        LoadEvents.LoadTablePostStatusUpdateEvent postStatusUpdateEvent =
+            new LoadEvents.LoadTablePostStatusUpdateEvent(loadModel);
         try {
           OperationListenerBus.getInstance().fireEvent(event, (OperationContext) operationContext);
+          OperationListenerBus.getInstance().fireEvent(postStatusUpdateEvent,
+              (OperationContext) operationContext);
         } catch (Exception e) {
           throw new IOException(e);
         }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
index 30e3f6f..ca1948a 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
@@ -182,6 +182,16 @@ case class AlterTableCompactionPreStatusUpdateEvent(sparkSession: SparkSession,
     mergedLoadName: String) extends Event with AlterTableCompactionStatusUpdateEventInfo
 
 /**
+ * Compaction Event for handling post update status file operations, like committing child
+ * datamaps in one transaction
+ */
+case class AlterTableCompactionPostStatusUpdateEvent(
+    carbonTable: CarbonTable,
+    carbonMergerMapping: CarbonMergerMapping,
+    carbonLoadModel: CarbonLoadModel,
+    mergedLoadName: String) extends Event with AlterTableCompactionStatusUpdateEventInfo
+
+/**
  * Compaction Event for handling clean up in case of any compaction failure and abort the
  * operation, lister has to implement this event to handle failure scenarios
  *

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
index 5f8f389..188e776 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
@@ -26,6 +26,7 @@ import org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand
 import org.apache.spark.sql.execution.command.preaaggregate.PreAggregateUtil
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
 import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.events.OperationContext
@@ -61,6 +62,7 @@ class AggregateDataMapCompactor(carbonLoadModel: CarbonLoadModel,
       CarbonSession.updateSessionInfoToCurrentThread(sqlContext.sparkSession)
       val loadCommand = operationContext.getProperty(carbonTable.getTableName + "_Compaction")
         .asInstanceOf[CarbonLoadDataCommand]
+      val uuid = Option(loadCommand.operationContext.getProperty("uuid")).getOrElse("").toString
       try {
         val newInternalOptions = loadCommand.internalOptions ++
                                  Map("mergedSegmentName" -> mergedLoadName)
@@ -70,7 +72,7 @@ class AggregateDataMapCompactor(carbonLoadModel: CarbonLoadModel,
                     sqlContext.sparkSession, loadCommand.logicalPlan.get))
         loadCommand.processData(sqlContext.sparkSession)
         val newLoadMetaDataDetails = SegmentStatusManager.readLoadMetadata(
-          carbonTable.getMetaDataFilepath)
+          carbonTable.getMetaDataFilepath, uuid)
         val updatedLoadMetaDataDetails = newLoadMetaDataDetails collect {
           case load if loadMetaDataDetails.contains(load) =>
             load.setMergedLoadName(mergedLoadName)
@@ -83,16 +85,37 @@ class AggregateDataMapCompactor(carbonLoadModel: CarbonLoadModel,
           .getCarbonTablePath(carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
             .getAbsoluteTableIdentifier)
         SegmentStatusManager
-          .writeLoadDetailsIntoFile(carbonTablePath.getTableStatusFilePath,
+          .writeLoadDetailsIntoFile(carbonTablePath.getTableStatusFilePathWithUUID(uuid),
             updatedLoadMetaDataDetails)
         carbonLoadModel.setLoadMetadataDetails(updatedLoadMetaDataDetails.toList.asJava)
       } finally {
         // check if any other segments needs compaction on in case of MINOR_COMPACTION.
         // For example: after 8.1 creation 0.1, 4.1, 8.1 have to be merged to 0.2 if threshhold
         // allows it.
+        // Also, as the load fired for 2nd level compaction will read the tablestatus
+        // file and not the tablestatus_UUID, we have to commit the intermediate
+        // tablestatus file for 2nd level compaction to be successful.
+        // This is required because:
+        //  1. after doing 12 loads and a compaction after every 4 loads the table status file will
+        //     have 0.1, 4.1, 8, 9, 10, 11 as Success segments. While tablestatus_UUID will have
+        //     0.1, 4.1, 8.1.
+        //  2. Now for 2nd level compaction 0.1, 8.1, 4.1 have to be merged to 0.2. therefore we
+        //     need to read the tablestatus_UUID. But load flow should always read tablestatus file
+        //     because it contains the actual In-Process status for the segments.
+        //  3. If we read the tablestatus then 8, 9, 10, 11 will keep getting compacted into 8.1.
+        //  4. Therefore tablestatus file will be committed in between multiple commits.
         if (!compactionModel.compactionType.equals(CompactionType.MAJOR)) {
-
-          executeCompaction()
+          if (!identifySegmentsToBeMerged().isEmpty) {
+            val carbonTablePath = CarbonStorePath
+              .getCarbonTablePath(carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
+                .getAbsoluteTableIdentifier)
+            val uuidTableStaus = carbonTablePath.getTableStatusFilePathWithUUID(uuid)
+            val tableStatus = carbonTablePath.getTableStatusFilePath
+            if (!uuidTableStaus.equalsIgnoreCase(tableStatus)) {
+              FileFactory.getCarbonFile(uuidTableStaus).renameForce(tableStatus)
+            }
+            executeCompaction()
+          }
         }
         CarbonSession
           .threadUnset(CarbonCommonConstants.CARBON_INPUT_SEGMENTS +

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
index 8212e85..3de0e70 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
@@ -39,6 +39,7 @@ import org.apache.spark.deploy.SparkHadoopUtil
 import org.apache.spark.rdd.{DataLoadCoalescedRDD, DataLoadPartitionCoalescer, NewHadoopRDD, RDD}
 import org.apache.spark.sql.{AnalysisException, CarbonEnv, DataFrame, Row, SQLContext}
 import org.apache.spark.sql.execution.command.{CompactionModel, ExecutionErrors, UpdateTableModel}
+import org.apache.spark.sql.execution.command.preaaggregate.PreAggregateUtil
 import org.apache.spark.sql.hive.DistributionUtil
 import org.apache.spark.sql.optimizer.CarbonFilters
 import org.apache.spark.sql.util.CarbonException
@@ -62,7 +63,7 @@ import org.apache.carbondata.events.{OperationContext, OperationListenerBus}
 import org.apache.carbondata.processing.exception.DataLoadingException
 import org.apache.carbondata.processing.loading.FailureCauses
 import org.apache.carbondata.processing.loading.csvinput.{BlockDetails, CSVInputFormat, StringArrayWritable}
-import org.apache.carbondata.processing.loading.events.LoadEvents.LoadTablePreStatusUpdateEvent
+import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadTablePostStatusUpdateEvent, LoadTablePreStatusUpdateEvent}
 import org.apache.carbondata.processing.loading.exception.{CarbonDataLoadingException, NoRetryException}
 import org.apache.carbondata.processing.loading.model.{CarbonDataLoadSchema, CarbonLoadModel}
 import org.apache.carbondata.processing.loading.sort.SortScopeOptions
@@ -491,9 +492,10 @@ object CarbonDataRDDFactory {
       }
       return
     }
+    val uniqueTableStatusId = operationContext.getProperty("uuid").asInstanceOf[String]
     if (loadStatus == SegmentStatus.LOAD_FAILURE) {
       // update the load entry in table status file for changing the status to marked for delete
-      CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel)
+      CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel, uniqueTableStatusId)
       LOGGER.info("********starting clean up**********")
       CarbonLoaderUtil.deleteSegment(carbonLoadModel, carbonLoadModel.getSegmentId.toInt)
       LOGGER.info("********clean up done**********")
@@ -508,7 +510,7 @@ object CarbonDataRDDFactory {
           status(0)._2._2.failureCauses == FailureCauses.BAD_RECORDS &&
           carbonLoadModel.getBadRecordsAction.split(",")(1) == LoggerAction.FAIL.name) {
         // update the load entry in table status file for changing the status to marked for delete
-        CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel)
+        CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel, uniqueTableStatusId)
         LOGGER.info("********starting clean up**********")
         CarbonLoaderUtil.deleteSegment(carbonLoadModel, carbonLoadModel.getSegmentId.toInt)
         LOGGER.info("********clean up done**********")
@@ -532,6 +534,8 @@ object CarbonDataRDDFactory {
       }
 
       writeDictionary(carbonLoadModel, result, writeAll = false)
+      operationContext.setProperty(carbonTable.getTableUniqueName + "_Segment",
+        carbonLoadModel.getSegmentId)
       val loadTablePreStatusUpdateEvent: LoadTablePreStatusUpdateEvent =
         new LoadTablePreStatusUpdateEvent(
         carbonTable.getCarbonTableIdentifier,
@@ -543,9 +547,21 @@ object CarbonDataRDDFactory {
           carbonLoadModel,
           loadStatus,
           newEntryLoadStatus,
-          overwriteTable)
-      if (!done) {
-        CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel)
+          overwriteTable,
+          uniqueTableStatusId)
+      val loadTablePostStatusUpdateEvent: LoadTablePostStatusUpdateEvent =
+        new LoadTablePostStatusUpdateEvent(carbonLoadModel)
+      val commitComplete = try {
+        OperationListenerBus.getInstance()
+          .fireEvent(loadTablePostStatusUpdateEvent, operationContext)
+        true
+      } catch {
+        case ex: Exception =>
+          LOGGER.error(ex, "Problem while committing data maps")
+          false
+      }
+      if (!done && !commitComplete) {
+        CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel, uniqueTableStatusId)
         LOGGER.info("********starting clean up**********")
         CarbonLoaderUtil.deleteSegment(carbonLoadModel, carbonLoadModel.getSegmentId.toInt)
         LOGGER.info("********clean up done**********")
@@ -731,7 +747,8 @@ object CarbonDataRDDFactory {
       operationContext: OperationContext): Unit = {
     LOGGER.info(s"compaction need status is" +
                 s" ${ CarbonDataMergerUtil.checkIfAutoLoadMergingRequired(carbonTable) }")
-    if (CarbonDataMergerUtil.checkIfAutoLoadMergingRequired(carbonTable)) {
+    if (!carbonTable.isChildDataMap &&
+        CarbonDataMergerUtil.checkIfAutoLoadMergingRequired(carbonTable)) {
       LOGGER.audit(s"Compaction request received for table " +
                    s"${ carbonLoadModel.getDatabaseName }.${ carbonLoadModel.getTableName }")
       val compactionSize = 0
@@ -805,7 +822,8 @@ object CarbonDataRDDFactory {
       carbonLoadModel: CarbonLoadModel,
       loadStatus: SegmentStatus,
       newEntryLoadStatus: SegmentStatus,
-      overwriteTable: Boolean): Boolean = {
+      overwriteTable: Boolean,
+      uuid: String = ""): Boolean = {
     val carbonTable = carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
     val metadataDetails = if (status != null && status.size > 0 && status(0) != null) {
       status(0)._2._1
@@ -820,7 +838,7 @@ object CarbonDataRDDFactory {
     CarbonLoaderUtil
       .addDataIndexSizeIntoMetaEntry(metadataDetails, carbonLoadModel.getSegmentId, carbonTable)
     val done = CarbonLoaderUtil.recordNewLoadMetadata(metadataDetails, carbonLoadModel, false,
-      overwriteTable)
+      overwriteTable, uuid)
     if (!done) {
       val errorMessage = s"Dataload failed due to failure in table status updation for" +
                          s" ${carbonLoadModel.getTableName}"
@@ -835,7 +853,6 @@ object CarbonDataRDDFactory {
     done
   }
 
-
   /**
    * repartition the input data for partition table.
    */

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
index a0c8f65..8406d8d 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
@@ -32,10 +32,12 @@ import org.apache.carbondata.core.metadata.PartitionMapFileStore.PartitionMapper
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.statusmanager.{LoadMetadataDetails, SegmentStatusManager}
 import org.apache.carbondata.core.util.path.CarbonTablePath
-import org.apache.carbondata.events.{AlterTableCompactionPostEvent, AlterTableCompactionPreEvent, AlterTableCompactionPreStatusUpdateEvent, OperationContext, OperationListenerBus}
+import org.apache.carbondata.events._
+import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadTablePostStatusUpdateEvent, LoadTablePreStatusUpdateEvent}
 import org.apache.carbondata.processing.loading.model.CarbonLoadModel
 import org.apache.carbondata.processing.merger.{CarbonDataMergerUtil, CompactionType}
 import org.apache.carbondata.spark.MergeResultImpl
+import org.apache.carbondata.spark.rdd.CarbonDataRDDFactory.LOGGER
 import org.apache.carbondata.spark.util.CommonUtil
 
 /**
@@ -245,8 +247,33 @@ class CarbonTableCompactor(carbonLoadModel: CarbonLoadModel,
         CarbonDataMergerUtil
           .updateLoadMetadataWithMergeStatus(loadsToMerge, carbonTable.getMetaDataFilepath,
             mergedLoadNumber, carbonLoadModel, compactionType)
-
-      if (!statusFileUpdation) {
+      val compactionLoadStatusPostEvent = AlterTableCompactionPostStatusUpdateEvent(carbonTable,
+        carbonMergerMapping,
+        carbonLoadModel,
+        mergedLoadName)
+      // Used to inform the commit listener that the commit is fired from compaction flow.
+      operationContext.setProperty("isCompaction", "true")
+      val commitComplete = try {
+        // Once main table compaction is done and 0.1, 4.1, 8.1 are created, commit will happen for
+        // all the tables. The commit listener will compact the child tables until no more segments
+        // are left. But 2nd level compaction is yet to happen on the main table, therefore the
+        // compaction flow would again try to commit the child tables, which is wrong. This check
+        // tells the 2nd level compaction flow that the commit for datamaps is already done.
+        val isCommitDone = operationContext.getProperty("commitComplete")
+        if (isCommitDone != null) {
+          isCommitDone.toString.toBoolean
+        } else {
+          OperationListenerBus.getInstance()
+            .fireEvent(compactionLoadStatusPostEvent, operationContext)
+          true
+        }
+      } catch {
+        case ex: Exception =>
+          LOGGER.error(ex, "Problem while committing data maps")
+          false
+      }
+      operationContext.setProperty("commitComplete", commitComplete)
+      if (!statusFileUpdation && !commitComplete) {
         LOGGER.audit(s"Compaction request failed for table ${ carbonLoadModel.getDatabaseName }." +
                      s"${ carbonLoadModel.getTableName }")
         LOGGER.error(s"Compaction request failed for table ${ carbonLoadModel.getDatabaseName }." +

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 870b1f3..40035ce 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -33,7 +33,7 @@ import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.util._
 import org.apache.carbondata.events._
-import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadMetadataEvent, LoadTablePreExecutionEvent, LoadTablePreStatusUpdateEvent}
+import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadMetadataEvent, LoadTablePostStatusUpdateEvent, LoadTablePreExecutionEvent, LoadTablePreStatusUpdateEvent}
 import org.apache.carbondata.spark.rdd.SparkReadSupport
 import org.apache.carbondata.spark.readsupport.SparkRowReadSupportImpl
 
@@ -148,6 +148,8 @@ object CarbonEnv {
         AlterPreAggregateTableCompactionPostListener)
       .addListener(classOf[LoadMetadataEvent], LoadProcessMetaListener)
       .addListener(classOf[LoadMetadataEvent], CompactionProcessMetaListener)
+      .addListener(classOf[LoadTablePostStatusUpdateEvent], CommitPreAggregateListener)
+      .addListener(classOf[AlterTableCompactionPostStatusUpdateEvent], CommitPreAggregateListener)
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
index 226a625..8e6c20e 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution.command.management
 
 import java.text.SimpleDateFormat
 import java.util
+import java.util.UUID
 
 import scala.collection.JavaConverters._
 import scala.collection.mutable
@@ -35,8 +36,8 @@ import org.apache.spark.sql._
 import org.apache.spark.sql.catalyst.{InternalRow, TableIdentifier}
 import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, UnresolvedAttribute}
 import org.apache.spark.sql.catalyst.catalog.CatalogTable
-import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference, Expression}
-import org.apache.spark.sql.catalyst.plans.logical.{InsertIntoTable, LogicalPlan, Project}
+import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}
+import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
 import org.apache.spark.sql.execution.LogicalRDD
 import org.apache.spark.sql.execution.SQLExecution.EXECUTION_ID_KEY
 import org.apache.spark.sql.execution.command.{AtomicRunnableCommand, DataLoadTableFileMapping, UpdateTableModel}
@@ -119,6 +120,7 @@ case class CarbonLoadDataCommand(
     }
     Seq.empty
   }
+
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     val LOGGER: LogService = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
     val carbonProperty: CarbonProperties = CarbonProperties.getInstance()
@@ -176,7 +178,18 @@ case class CarbonLoadDataCommand(
       LOGGER.info(s"Deleting stale folders if present for table $dbName.$tableName")
       TableProcessingOperations.deletePartialLoadDataIfExist(table, false)
       var isUpdateTableStatusRequired = false
+      // if the table is a child, then extract the uuid from the operation context; the parent
+      // would have already generated the UUID.
+      // if it is a parent table, then generate a new UUID, else use empty.
+      val uuid = if (table.isChildDataMap) {
+        Option(operationContext.getProperty("uuid")).getOrElse("").toString
+      } else if (table.hasAggregationDataMap) {
+        UUID.randomUUID().toString
+      } else {
+        ""
+      }
       try {
+        operationContext.setProperty("uuid", uuid)
         val loadTablePreExecutionEvent: LoadTablePreExecutionEvent =
           new LoadTablePreExecutionEvent(
             table.getCarbonTableIdentifier,
@@ -194,7 +207,9 @@ case class CarbonLoadDataCommand(
         DataLoadingUtil.deleteLoadsAndUpdateMetadata(isForceDeletion = false, table)
         // add the start entry for the new load in the table status file
         if (updateModel.isEmpty && !table.isHivePartitionTable) {
-          CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(carbonLoadModel, isOverwriteTable)
+          CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(
+            carbonLoadModel,
+            isOverwriteTable)
           isUpdateTableStatusRequired = true
         }
         if (isOverwriteTable) {
@@ -252,7 +267,7 @@ case class CarbonLoadDataCommand(
         case CausedBy(ex: NoRetryException) =>
           // update the load entry in table status file for changing the status to marked for delete
           if (isUpdateTableStatusRequired) {
-            CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel)
+            CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel, uuid)
           }
           LOGGER.error(ex, s"Dataload failure for $dbName.$tableName")
           throw new RuntimeException(s"Dataload failure for $dbName.$tableName, ${ex.getMessage}")
@@ -263,7 +278,7 @@ case class CarbonLoadDataCommand(
           LOGGER.error(ex)
           // update the load entry in table status file for changing the status to marked for delete
           if (isUpdateTableStatusRequired) {
-            CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel)
+            CarbonLoaderUtil.updateTableStatusForFailure(carbonLoadModel, uuid)
           }
           LOGGER.audit(s"Dataload failure for $dbName.$tableName. Please check the logs")
           throw ex

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
index dbbf90c..3de75c2 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
@@ -205,11 +205,12 @@ case class CreatePreAggregateTableCommand(
       loadCommand.dataFrame = Some(PreAggregateUtil
         .getDataFrame(sparkSession, loadCommand.logicalPlan.get))
       PreAggregateUtil.startDataLoadForDataMap(
-        parentTable,
+        TableIdentifier(parentTable.getTableName, Some(parentTable.getDatabaseName)),
         segmentToLoad = "*",
         validateSegments = true,
-        sparkSession,
-        loadCommand)
+        loadCommand,
+        isOverwrite = false,
+        sparkSession)
     }
     Seq.empty
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateListeners.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateListeners.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateListeners.scala
index 7b273ba..ed6be97 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateListeners.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateListeners.scala
@@ -17,19 +17,26 @@
 
 package org.apache.spark.sql.execution.command.preaaggregate
 
+import java.util.UUID
+
 import scala.collection.JavaConverters._
 import scala.collection.mutable
 
-import org.apache.spark.sql.{SparkSession}
+import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.execution.command.AlterTableModel
 import org.apache.spark.sql.execution.command.management.{CarbonAlterTableCompactionCommand, CarbonLoadDataCommand}
 import org.apache.spark.sql.parser.CarbonSpark2SqlParser
 
-import org.apache.carbondata.core.metadata.schema.table.AggregationDataMapSchema
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFilter}
+import org.apache.carbondata.core.datastore.impl.FileFactory
+import org.apache.carbondata.core.metadata.schema.table.{AggregationDataMapSchema, CarbonTable}
+import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
 import org.apache.carbondata.core.util.CarbonUtil
+import org.apache.carbondata.core.util.path.CarbonTablePath
 import org.apache.carbondata.events._
-import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadMetadataEvent, LoadTablePreExecutionEvent, LoadTablePreStatusUpdateEvent}
+import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadMetadataEvent, LoadTablePostStatusUpdateEvent, LoadTablePreExecutionEvent, LoadTablePreStatusUpdateEvent}
 
 /**
  * below class will be used to create load command for compaction
@@ -71,9 +78,13 @@ object CompactionProcessMetaListener extends OperationEventListener {
           childDataFrame,
           false,
           sparkSession)
+        val uuid = Option(operationContext.getProperty("uuid")).
+          getOrElse(UUID.randomUUID()).toString
+        operationContext.setProperty("uuid", uuid)
         loadCommand.processMetadata(sparkSession)
         operationContext
           .setProperty(dataMapSchema.getChildSchema.getTableName + "_Compaction", loadCommand)
+        loadCommand.operationContext = operationContext
       }
     } else if (table.isChildDataMap) {
       val childTableName = table.getTableName
@@ -95,9 +106,13 @@ object CompactionProcessMetaListener extends OperationEventListener {
         childDataFrame,
         false,
         sparkSession)
+      val uuid = Option(operationContext.getProperty("uuid")).getOrElse("").toString
       loadCommand.processMetadata(sparkSession)
       operationContext.setProperty(table.getTableName + "_Compaction", loadCommand)
+      operationContext.setProperty("uuid", uuid)
+      loadCommand.operationContext = operationContext
     }
+
   }
 }
 
@@ -127,12 +142,17 @@ object LoadProcessMetaListener extends OperationEventListener {
         val sortedList = aggregationDataMapList.sortBy(_.getOrdinal)
         val parentTableName = table.getTableName
         val databaseName = table.getDatabaseName
+        // if the table is a child, then extract the uuid from the operation context; the parent
+        // would have already generated the UUID.
+        // if it is a parent table, then generate a new UUID, else use empty.
+        val uuid =
+          Option(operationContext.getProperty("uuid")).getOrElse(UUID.randomUUID()).toString
         val list = scala.collection.mutable.ListBuffer.empty[AggregationDataMapSchema]
         for (dataMapSchema: AggregationDataMapSchema <- sortedList) {
           val childTableName = dataMapSchema.getRelationIdentifier.getTableName
           val childDatabaseName = dataMapSchema.getRelationIdentifier.getDatabaseName
           val childSelectQuery = if (!dataMapSchema.isTimeseriesDataMap) {
-            PreAggregateUtil.getChildQuery(dataMapSchema)
+            (PreAggregateUtil.getChildQuery(dataMapSchema), "")
           } else {
             // for timeseries rollup policy
             val tableSelectedForRollup = PreAggregateUtil.getRollupDataMapNameForTimeSeries(list,
@@ -140,18 +160,19 @@ object LoadProcessMetaListener extends OperationEventListener {
             list += dataMapSchema
             // if non of the rollup data map is selected hit the maintable and prepare query
             if (tableSelectedForRollup.isEmpty) {
-              PreAggregateUtil.createTimeSeriesSelectQueryFromMain(dataMapSchema.getChildSchema,
+              (PreAggregateUtil.createTimeSeriesSelectQueryFromMain(dataMapSchema.getChildSchema,
                 parentTableName,
-                databaseName)
+                databaseName), "")
             } else {
               // otherwise hit the select rollup datamap schema
-              PreAggregateUtil.createTimeseriesSelectQueryForRollup(dataMapSchema.getChildSchema,
+              (PreAggregateUtil.createTimeseriesSelectQueryForRollup(dataMapSchema.getChildSchema,
                 tableSelectedForRollup.get,
-                databaseName)
+                databaseName),
+                s"$databaseName.${tableSelectedForRollup.get.getChildSchema.getTableName}")
             }
           }
           val childDataFrame = sparkSession.sql(new CarbonSpark2SqlParser().addPreAggLoadFunction(
-            childSelectQuery)).drop("preAggLoad")
+            childSelectQuery._1)).drop("preAggLoad")
           val isOverwrite =
             operationContext.getProperty("isOverwrite").asInstanceOf[Boolean]
           val loadCommand = PreAggregateUtil.createLoadCommandForChild(
@@ -159,7 +180,10 @@ object LoadProcessMetaListener extends OperationEventListener {
             TableIdentifier(childTableName, Some(childDatabaseName)),
             childDataFrame,
             isOverwrite,
-            sparkSession)
+            sparkSession,
+            timeseriesParentTableName = childSelectQuery._2)
+          operationContext.setProperty("uuid", uuid)
+          loadCommand.operationContext.setProperty("uuid", uuid)
           loadCommand.processMetadata(sparkSession)
           operationContext.setProperty(dataMapSchema.getChildSchema.getTableName, loadCommand)
         }
@@ -191,25 +215,172 @@ object LoadPostAggregateListener extends OperationEventListener {
           .asInstanceOf[CarbonLoadDataCommand]
         childLoadCommand.dataFrame = Some(PreAggregateUtil
           .getDataFrame(sparkSession, childLoadCommand.logicalPlan.get))
-        val childOperationContext = new OperationContext
-        childOperationContext
-          .setProperty(dataMapSchema.getChildSchema.getTableName,
-            operationContext.getProperty(dataMapSchema.getChildSchema.getTableName))
         val isOverwrite =
           operationContext.getProperty("isOverwrite").asInstanceOf[Boolean]
-        childOperationContext.setProperty("isOverwrite", isOverwrite)
-        childOperationContext.setProperty(dataMapSchema.getChildSchema.getTableName + "_Compaction",
-          operationContext.getProperty(dataMapSchema.getChildSchema.getTableName + "_Compaction"))
-        childLoadCommand.operationContext = childOperationContext
+        childLoadCommand.operationContext = operationContext
+        val timeseriesParent = childLoadCommand.internalOptions.get("timeseriesParent")
+        val (parentTableIdentifier, segmentToLoad) =
+          if (timeseriesParent.isDefined && timeseriesParent.get.nonEmpty) {
+            val (parentTableDatabase, parentTableName) =
+              (timeseriesParent.get.split('.')(0), timeseriesParent.get.split('.')(1))
+            (TableIdentifier(parentTableName, Some(parentTableDatabase)),
+            operationContext.getProperty(
+              s"${parentTableDatabase}_${parentTableName}_Segment").toString)
+        } else {
+            (TableIdentifier(table.getTableName, Some(table.getDatabaseName)),
+              carbonLoadModel.getSegmentId)
+        }
         PreAggregateUtil.startDataLoadForDataMap(
-            table,
-            carbonLoadModel.getSegmentId,
+            parentTableIdentifier,
+            segmentToLoad,
             validateSegments = false,
-            sparkSession,
-          childLoadCommand)
+            childLoadCommand,
+            isOverwrite,
+            sparkSession)
+        }
+      }
+    }
+}
+
+/**
+ * This listener is used to commit all the child data aggregate tables in one transaction. If one
+ * fails, all will be reverted to the original state.
+ */
+object CommitPreAggregateListener extends OperationEventListener {
+
+  private val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+
+  override protected def onEvent(event: Event,
+      operationContext: OperationContext): Unit = {
+    // The same listener is called for both compaction and load therefore getting the
+    // carbonLoadModel from the appropriate event.
+    val carbonLoadModel = event match {
+      case loadEvent: LoadTablePostStatusUpdateEvent =>
+        loadEvent.getCarbonLoadModel
+      case compactionEvent: AlterTableCompactionPostStatusUpdateEvent =>
+        compactionEvent.carbonLoadModel
+    }
+    val isCompactionFlow = Option(
+      operationContext.getProperty("isCompaction")).getOrElse("false").toString.toBoolean
+    val dataMapSchemas =
+      carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable.getTableInfo.getDataMapSchemaList
+    // extract all child LoadCommands
+    val childLoadCommands = if (!isCompactionFlow) {
+      // If not compaction flow then the key for load commands will be tableName
+        dataMapSchemas.asScala.map { dataMapSchema =>
+          operationContext.getProperty(dataMapSchema.getChildSchema.getTableName)
+            .asInstanceOf[CarbonLoadDataCommand]
+        }
+      } else {
+      // If compaction flow then the key for load commands will be tableName_Compaction
+        dataMapSchemas.asScala.map { dataMapSchema =>
+          operationContext.getProperty(dataMapSchema.getChildSchema.getTableName + "_Compaction")
+            .asInstanceOf[CarbonLoadDataCommand]
+        }
+      }
+     if (dataMapSchemas.size() > 0) {
+       val uuid = operationContext.getProperty("uuid").toString
+      // keep committing until one fails
+      val renamedDataMaps = childLoadCommands.takeWhile { childLoadCommand =>
+        val childCarbonTable = childLoadCommand.table
+        val carbonTablePath =
+          new CarbonTablePath(childCarbonTable.getCarbonTableIdentifier,
+            childCarbonTable.getTablePath)
+        // Generate table status file name with UUID, for example: tablestatus_1
+        val oldTableSchemaPath = carbonTablePath.getTableStatusFilePathWithUUID(uuid)
+        // Generate table status file name without UUID, for example: tablestatus
+        val newTableSchemaPath = carbonTablePath.getTableStatusFilePath
+        renameDataMapTableStatusFiles(oldTableSchemaPath, newTableSchemaPath, uuid)
+      }
+      // if true then the commit for one of the child tables has failed
+      val commitFailed = renamedDataMaps.lengthCompare(dataMapSchemas.size()) != 0
+      if (commitFailed) {
+        LOGGER.warn("Reverting table status file to original state")
+        renamedDataMaps.foreach {
+          loadCommand =>
+            val carbonTable = loadCommand.table
+            val carbonTablePath = new CarbonTablePath(carbonTable.getCarbonTableIdentifier,
+              carbonTable.getTablePath)
+            // rename the backup tablestatus i.e tablestatus_backup_UUID to tablestatus
+            val backupTableSchemaPath = carbonTablePath.getTableStatusFilePath + "_backup_" + uuid
+            val tableSchemaPath = carbonTablePath.getTableStatusFilePath
+            markInProgressSegmentAsDeleted(backupTableSchemaPath, operationContext, loadCommand)
+            renameDataMapTableStatusFiles(backupTableSchemaPath, tableSchemaPath, "")
         }
       }
+      // after success/failure of commit delete all tablestatus files with UUID in their names.
+      // if commit failed then remove the segment directory
+      cleanUpStaleTableStatusFiles(childLoadCommands.map(_.table),
+        operationContext,
+        uuid)
+      if (commitFailed) {
+        sys.error("Failed to update table status for pre-aggregate table")
+      }
+    }
+
+
+  }
+
+  private def markInProgressSegmentAsDeleted(tableStatusFile: String,
+      operationContext: OperationContext,
+      loadDataCommand: CarbonLoadDataCommand): Unit = {
+    val loadMetaDataDetails = SegmentStatusManager.readTableStatusFile(tableStatusFile)
+    val segmentBeingLoaded =
+      operationContext.getProperty(loadDataCommand.table.getTableUniqueName + "_Segment").toString
+    val newDetails = loadMetaDataDetails.collect {
+      case detail if detail.getLoadName.equalsIgnoreCase(segmentBeingLoaded) =>
+        detail.setSegmentStatus(SegmentStatus.MARKED_FOR_DELETE)
+        detail
+      case others => others
+    }
+    SegmentStatusManager.writeLoadDetailsIntoFile(tableStatusFile, newDetails)
+  }
+
+  /**
+   *  Used to rename table status files for commit operation.
+   */
+  private def renameDataMapTableStatusFiles(sourceFileName: String,
+      destinationFileName: String, uuid: String) = {
+    val oldCarbonFile = FileFactory.getCarbonFile(sourceFileName)
+    val newCarbonFile = FileFactory.getCarbonFile(destinationFileName)
+    if (oldCarbonFile.exists() && newCarbonFile.exists()) {
+      val backUpPostFix = if (uuid.nonEmpty) {
+        "_backup_" + uuid
+      } else {
+        ""
+      }
+      LOGGER.info(s"Renaming $newCarbonFile to ${destinationFileName + backUpPostFix}")
+      if (newCarbonFile.renameForce(destinationFileName + backUpPostFix)) {
+        LOGGER.info(s"Renaming $oldCarbonFile to $destinationFileName")
+        oldCarbonFile.renameForce(destinationFileName)
+      } else {
+        LOGGER.info(s"Renaming $newCarbonFile to ${destinationFileName + backUpPostFix} failed")
+        false
+      }
+    } else {
+      false
     }
+  }
+
+  /**
+   * Used to remove table status files with UUID and segment folders.
+   */
+  private def cleanUpStaleTableStatusFiles(
+      childTables: Seq[CarbonTable],
+      operationContext: OperationContext,
+      uuid: String): Unit = {
+    childTables.foreach { childTable =>
+      val carbonTablePath = new CarbonTablePath(childTable.getCarbonTableIdentifier,
+        childTable.getTablePath)
+      val metaDataDir = FileFactory.getCarbonFile(carbonTablePath.getMetadataDirectoryPath)
+      val tableStatusFiles = metaDataDir.listFiles(new CarbonFileFilter {
+        override def accept(file: CarbonFile): Boolean = {
+          file.getName.contains(uuid) || file.getName.contains("backup")
+        }
+      })
+      tableStatusFiles.foreach(_.delete())
+    }
+  }
 }
 
 /**
@@ -226,6 +397,7 @@ object AlterPreAggregateTableCompactionPostListener extends OperationEventListen
     val compactionEvent = event.asInstanceOf[AlterTableCompactionPreStatusUpdateEvent]
     val carbonTable = compactionEvent.carbonTable
     val compactionType = compactionEvent.carbonMergerMapping.campactionType
+    val carbonLoadModel = compactionEvent.carbonLoadModel
     val sparkSession = compactionEvent.sparkSession
     if (CarbonUtil.hasAggregationDataMap(carbonTable)) {
       carbonTable.getTableInfo.getDataMapSchemaList.asScala.foreach { dataMapSchema =>
@@ -236,6 +408,10 @@ object AlterPreAggregateTableCompactionPostListener extends OperationEventListen
           compactionType.toString,
           Some(System.currentTimeMillis()),
           "")
+        operationContext.setProperty(
+          dataMapSchema.getRelationIdentifier.getDatabaseName + "_" +
+          dataMapSchema.getRelationIdentifier.getTableName + "_Segment",
+          carbonLoadModel.getSegmentId)
         CarbonAlterTableCompactionCommand(alterTableModel, operationContext = operationContext)
           .run(sparkSession)
       }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
index dac5d5e..1d4ebec 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
@@ -35,13 +35,16 @@ import org.apache.spark.sql.types.DataType
 
 import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFilter}
+import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.locks.{CarbonLockUtil, ICarbonLock, LockUsage}
 import org.apache.carbondata.core.metadata.converter.ThriftWrapperSchemaConverterImpl
 import org.apache.carbondata.core.metadata.schema.table.{AggregationDataMapSchema, CarbonTable, DataMapSchema, TableSchema}
 import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema
 import org.apache.carbondata.core.util.CarbonUtil
-import org.apache.carbondata.core.util.path.CarbonStorePath
+import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
 import org.apache.carbondata.format.TableInfo
+import org.apache.carbondata.processing.loading.model.CarbonLoadModel
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
 import org.apache.carbondata.spark.util.CommonUtil
 
@@ -581,32 +584,33 @@ object PreAggregateUtil {
    * This method will start load process on the data map
    */
   def startDataLoadForDataMap(
-      parentCarbonTable: CarbonTable,
+      parentTableIdentifier: TableIdentifier,
       segmentToLoad: String,
       validateSegments: Boolean,
-      sparkSession: SparkSession,
-      loadCommand: CarbonLoadDataCommand): Unit = {
+      loadCommand: CarbonLoadDataCommand,
+      isOverwrite: Boolean,
+      sparkSession: SparkSession): Unit = {
     CarbonSession.threadSet(
       CarbonCommonConstants.CARBON_INPUT_SEGMENTS +
-      parentCarbonTable.getDatabaseName + "." +
-      parentCarbonTable.getTableName,
+      parentTableIdentifier.database.getOrElse(sparkSession.catalog.currentDatabase) + "." +
+      parentTableIdentifier.table,
       segmentToLoad)
     CarbonSession.threadSet(
       CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS +
-      parentCarbonTable.getDatabaseName + "." +
-      parentCarbonTable.getTableName, validateSegments.toString)
+      parentTableIdentifier.database.getOrElse(sparkSession.catalog.currentDatabase) + "." +
+      parentTableIdentifier.table, validateSegments.toString)
     CarbonSession.updateSessionInfoToCurrentThread(sparkSession)
     try {
       loadCommand.processData(sparkSession)
     } finally {
       CarbonSession.threadUnset(
         CarbonCommonConstants.CARBON_INPUT_SEGMENTS +
-        parentCarbonTable.getDatabaseName + "." +
-        parentCarbonTable.getTableName)
+        parentTableIdentifier.database.getOrElse(sparkSession.catalog.currentDatabase) + "." +
+        parentTableIdentifier.table)
       CarbonSession.threadUnset(
         CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS +
-        parentCarbonTable.getDatabaseName + "." +
-        parentCarbonTable.getTableName)
+        parentTableIdentifier.database.getOrElse(sparkSession.catalog.currentDatabase) + "." +
+        parentTableIdentifier.table)
     }
   }
 
@@ -885,7 +889,8 @@ object PreAggregateUtil {
       dataMapIdentifier: TableIdentifier,
       dataFrame: DataFrame,
       isOverwrite: Boolean,
-      sparkSession: SparkSession): CarbonLoadDataCommand = {
+      sparkSession: SparkSession,
+      timeseriesParentTableName: String = ""): CarbonLoadDataCommand = {
     val headers = columns.asScala.filter { column =>
       !column.getColumnName.equalsIgnoreCase(CarbonCommonConstants.DEFAULT_INVISIBLE_DUMMY_MEASURE)
     }.sortBy(_.getSchemaOrdinal).map(_.getColumnName).mkString(",")
@@ -896,7 +901,8 @@ object PreAggregateUtil {
       Map("fileheader" -> headers),
       isOverwriteTable = isOverwrite,
       dataFrame = None,
-      internalOptions = Map(CarbonCommonConstants.IS_INTERNAL_LOAD_CALL -> "true"),
+      internalOptions = Map(CarbonCommonConstants.IS_INTERNAL_LOAD_CALL -> "true",
+        "timeseriesParent" -> timeseriesParentTableName),
       logicalPlan = Some(dataFrame.queryExecution.logical))
     loadCommand
   }
@@ -904,4 +910,5 @@ object PreAggregateUtil {
   def getDataFrame(sparkSession: SparkSession, child: LogicalPlan): DataFrame = {
     Dataset.ofRows(sparkSession, child)
   }
+
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java b/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
index 78964e7..190c72c 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
@@ -147,6 +147,19 @@ public class LoadEvents {
       return carbonTable;
     }
   }
+
+  public static class LoadTablePostStatusUpdateEvent extends Event {
+    private CarbonLoadModel carbonLoadModel;
+
+    public LoadTablePostStatusUpdateEvent(CarbonLoadModel carbonLoadModel) {
+      this.carbonLoadModel = carbonLoadModel;
+    }
+
+    public CarbonLoadModel getCarbonLoadModel() {
+      return carbonLoadModel;
+    }
+  }
+
   /**
    * Class for handling clean up in case of any failure and abort the operation.
    */

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
index 12fc5c1..3a83427 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
@@ -150,6 +150,22 @@ public final class CarbonLoaderUtil {
   public static boolean recordNewLoadMetadata(LoadMetadataDetails newMetaEntry,
       CarbonLoadModel loadModel, boolean loadStartEntry, boolean insertOverwrite)
       throws IOException {
+    return recordNewLoadMetadata(newMetaEntry, loadModel, loadStartEntry, insertOverwrite, "");
+  }
+
+  /**
+   * This API will write the load-level metadata for the load management module in order to
+   * manage load and query execution smoothly.
+   *
+   * @param newMetaEntry
+   * @param loadModel
+   * @param uuid
+   * @return boolean which determines whether status update is done or not.
+   * @throws IOException
+   */
+  public static boolean recordNewLoadMetadata(LoadMetadataDetails newMetaEntry,
+      CarbonLoadModel loadModel, boolean loadStartEntry, boolean insertOverwrite, String uuid)
+      throws IOException {
     boolean status = false;
     AbsoluteTableIdentifier absoluteTableIdentifier =
         loadModel.getCarbonDataLoadSchema().getCarbonTable().getAbsoluteTableIdentifier();
@@ -159,7 +175,12 @@ public final class CarbonLoaderUtil {
     if (!FileFactory.isFileExist(metadataPath, fileType)) {
       FileFactory.mkdirs(metadataPath, fileType);
     }
-    String tableStatusPath = carbonTablePath.getTableStatusFilePath();
+    String tableStatusPath;
+    if (loadModel.getCarbonDataLoadSchema().getCarbonTable().isChildDataMap() && !uuid.isEmpty()) {
+      tableStatusPath = carbonTablePath.getTableStatusFilePathWithUUID(uuid);
+    } else {
+      tableStatusPath = carbonTablePath.getTableStatusFilePath();
+    }
     SegmentStatusManager segmentStatusManager = new SegmentStatusManager(absoluteTableIdentifier);
     ICarbonLock carbonLock = segmentStatusManager.getTableStatusLock();
     int retryCount = CarbonLockUtil
@@ -314,7 +335,6 @@ public final class CarbonLoaderUtil {
         new AtomicFileOperationsImpl(dataLoadLocation, FileFactory.getFileType(dataLoadLocation));
 
     try {
-
       dataOutputStream = writeOperation.openForWrite(FileWriteOperation.OVERWRITE);
       brWriter = new BufferedWriter(new OutputStreamWriter(dataOutputStream,
               Charset.forName(CarbonCommonConstants.DEFAULT_CHARSET)));
@@ -367,7 +387,7 @@ public final class CarbonLoaderUtil {
 
 
   public static void readAndUpdateLoadProgressInTableMeta(CarbonLoadModel model,
-      boolean insertOverwrite) throws IOException {
+      boolean insertOverwrite, String uuid) throws IOException {
     LoadMetadataDetails newLoadMetaEntry = new LoadMetadataDetails();
     SegmentStatus status = SegmentStatus.INSERT_IN_PROGRESS;
     if (insertOverwrite) {
@@ -381,18 +401,23 @@ public final class CarbonLoaderUtil {
     }
     CarbonLoaderUtil
         .populateNewLoadMetaEntry(newLoadMetaEntry, status, model.getFactTimeStamp(), false);
-    boolean entryAdded =
-        CarbonLoaderUtil.recordNewLoadMetadata(newLoadMetaEntry, model, true, insertOverwrite);
+    boolean entryAdded = CarbonLoaderUtil
+        .recordNewLoadMetadata(newLoadMetaEntry, model, true, insertOverwrite, uuid);
     if (!entryAdded) {
       throw new IOException("Dataload failed due to failure in table status updation for "
           + model.getTableName());
     }
   }
 
+  public static void readAndUpdateLoadProgressInTableMeta(CarbonLoadModel model,
+      boolean insertOverwrite) throws IOException {
+    readAndUpdateLoadProgressInTableMeta(model, insertOverwrite, "");
+  }
+
   /**
    * This method will update the load failure entry in the table status file
    */
-  public static void updateTableStatusForFailure(CarbonLoadModel model)
+  public static void updateTableStatusForFailure(CarbonLoadModel model, String uuid)
       throws IOException {
     // in case if failure the load status should be "Marked for delete" so that it will be taken
     // care during clean up
@@ -404,14 +429,22 @@ public final class CarbonLoaderUtil {
     }
     CarbonLoaderUtil
         .populateNewLoadMetaEntry(loadMetaEntry, loadStatus, model.getFactTimeStamp(), true);
-    boolean entryAdded =
-        CarbonLoaderUtil.recordNewLoadMetadata(loadMetaEntry, model, false, false);
+    boolean entryAdded = CarbonLoaderUtil.recordNewLoadMetadata(
+        loadMetaEntry, model, false, false, uuid);
     if (!entryAdded) {
       throw new IOException(
           "Failed to update failure entry in table status for " + model.getTableName());
     }
   }
 
+  /**
+   * This method will update the load failure entry in the table status file with empty uuid.
+   */
+  public static void updateTableStatusForFailure(CarbonLoadModel model)
+      throws IOException {
+    updateTableStatusForFailure(model, "");
+  }
+
   public static Dictionary getDictionary(DictionaryColumnUniqueIdentifier columnIdentifier)
       throws IOException {
     Cache<DictionaryColumnUniqueIdentifier, Dictionary> dictCache =


[27/50] [abbrv] carbondata git commit: [CARBONDATA-2120]Fixed is null filter issue

Posted by ra...@apache.org.
[CARBONDATA-2120]Fixed is null filter issue

Problem: The is-null filter fails for numeric data types (no-dictionary columns).

Root cause: Min/max calculation is wrong when the no-dictionary column is not the first column in the table.

Because it is not the first column, a null value can appear in any row, but the min/max for a null value was updated only when the first row was null.

Solution: Update the min/max for every value, null or not, for all data types.
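
A minimal sketch (not CarbonData's LVStringStatsCollector itself; names are illustrative) of the corrected idea: represent a null row as an empty byte value and let it participate in every min/max update, so the statistics no longer depend on whether the first row happens to be null.

object MinMaxSketch {
  // unsigned lexicographic comparison of byte arrays
  private def compare(a: Array[Byte], b: Array[Byte]): Int = {
    var i = 0
    while (i < math.min(a.length, b.length)) {
      val c = (a(i) & 0xff) - (b(i) & 0xff)
      if (c != 0) return c
      i += 1
    }
    a.length - b.length
  }

  // null rows become empty byte arrays and are compared like any other value
  def minMax(values: Seq[Array[Byte]]): (Array[Byte], Array[Byte]) = {
    var min: Array[Byte] = null
    var max: Array[Byte] = null
    values.foreach { v =>
      val value = if (v == null) Array.emptyByteArray else v
      if (min == null) { min = value; max = value }
      else {
        if (compare(min, value) > 0) min = value
        if (compare(max, value) < 0) max = value
      }
    }
    (min, max)
  }
}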

This closes #1912


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6c097cbf
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6c097cbf
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/6c097cbf

Branch: refs/heads/branch-1.3
Commit: 6c097cbf310e8d2199e57cde4fcc417122a8a1ca
Parents: 27ec651
Author: kumarvishal <ku...@gmail.com>
Authored: Fri Feb 2 17:53:57 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 00:23:39 2018 +0530

----------------------------------------------------------------------
 .../page/statistics/LVStringStatsCollector.java | 28 ++++++++---------
 .../core/util/path/CarbonTablePath.java         |  2 +-
 .../src/test/resources/newsample.csv            |  7 +++++
 .../testsuite/filterexpr/TestIsNullFilter.scala | 32 ++++++++++++++++++++
 4 files changed, 53 insertions(+), 16 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java b/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
index 61acec9..23795c5 100644
--- a/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
+++ b/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
@@ -73,28 +73,26 @@ public class LVStringStatsCollector implements ColumnPageStatsCollector {
   @Override
   public void update(byte[] value) {
     // input value is LV encoded
+    byte[] newValue = null;
     assert (value.length >= 2);
     if (value.length == 2) {
       assert (value[0] == 0 && value[1] == 0);
-      if (min == null && max == null) {
-        min = new byte[0];
-        max = new byte[0];
-      }
-      return;
+      newValue = new byte[0];
+    } else {
+      int length = (value[0] << 8) + (value[1] & 0xff);
+      assert (length > 0);
+      newValue = new byte[value.length - 2];
+      System.arraycopy(value, 2, newValue, 0, newValue.length);
     }
-    int length = (value[0] << 8) + (value[1] & 0xff);
-    assert (length > 0);
-    byte[] v = new byte[value.length - 2];
-    System.arraycopy(value, 2, v, 0, v.length);
     if (min == null && max == null) {
-      min = v;
-      max = v;
+      min = newValue;
+      max = newValue;
     } else {
-      if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(min, v) > 0) {
-        min = v;
+      if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(min, newValue) > 0) {
+        min = newValue;
       }
-      if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(max, v) < 0) {
-        max = v;
+      if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(max, newValue) < 0) {
+        max = newValue;
       }
     }
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
index d8c64c4..5a63d2f 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
@@ -75,7 +75,7 @@ public class CarbonTablePath extends Path {
    * @param carbonFilePath
    */
   public static String getFolderContainingFile(String carbonFilePath) {
-    return carbonFilePath.substring(0, carbonFilePath.lastIndexOf(File.separator));
+    return carbonFilePath.substring(0, carbonFilePath.lastIndexOf('/'));
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/integration/spark-common-test/src/test/resources/newsample.csv
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/resources/newsample.csv b/integration/spark-common-test/src/test/resources/newsample.csv
new file mode 100644
index 0000000..38cd3dd
--- /dev/null
+++ b/integration/spark-common-test/src/test/resources/newsample.csv
@@ -0,0 +1,7 @@
+id,name,time
+1,'one',2014-01-01 08:00:00
+2,'two',2014-01-02 08:02:00
+3,'three',2014-01-03 08:03:00
+4,'four',2014-01-04 08:04:00
+null,null,null
+5,'five',2020-10-20 08:01:20
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/TestIsNullFilter.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/TestIsNullFilter.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/TestIsNullFilter.scala
new file mode 100644
index 0000000..9e85c9e
--- /dev/null
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/TestIsNullFilter.scala
@@ -0,0 +1,32 @@
+package org.apache.carbondata.spark.testsuite.filterexpr
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+class TestIsNullFilter extends QueryTest with BeforeAndAfterAll {
+  override def beforeAll: Unit = {
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)
+    sql("drop table if exists main")
+    sql("create table main(id int, name string, time timestamp) STORED BY 'org.apache.carbondata.format'")
+    sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/newsample.csv' into table main OPTIONS('bad_records_action'='force')")
+  }
+
+  test("select * from main where time is null") {
+    checkAnswer(
+      sql("select count(*) from main where time is null"),
+      Seq(Row(1)))
+  }
+
+  override def afterAll: Unit = {
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)
+    sql("drop table if exists main")
+  }
+
+}


[43/50] [abbrv] carbondata git commit: [CARBONDATA-2101]Restrict direct query on pre aggregate and timeseries datamap

Posted by ra...@apache.org.
[CARBONDATA-2101]Restrict direct query on pre aggregate and timeseries datamap

Restricts direct queries on pre-aggregate and timeseries datamaps.
Adds a property to allow direct queries on a datamap for testing purposes:
validate.support.direct.query.on.datamap=true
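
A hedged sketch, using only the constants introduced in this commit, of how a test or session can allow direct queries on a pre-aggregate datamap; the object name is hypothetical and the commented SET statement assumes the dynamic property registered in SessionParams.

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties

object DirectQueryOnDataMapSketch {
  def allowForTests(): Unit = {
    // what the QueryTest classes in this commit do: disable the validation globally
    CarbonProperties.getInstance()
      .addProperty(CarbonCommonConstants.VALIDATE_DIRECT_QUERY_ON_DATAMAP, "false")
  }
  // Alternatively, only for the current session/thread:
  //   SET carbon.query.directQueryOnDataMap.enabled=true
  // after which a direct SELECT on the datamap table no longer fails with
  // "Query On DataMap not supported".
}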

This closes #1888


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/349be007
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/349be007
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/349be007

Branch: refs/heads/branch-1.3
Commit: 349be007fd20fb8c4a39b318e45b47445d2e798c
Parents: 46d9bf9
Author: kumarvishal <ku...@gmail.com>
Authored: Tue Jan 30 20:54:12 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 21:32:08 2018 +0530

----------------------------------------------------------------------
 .../core/constants/CarbonCommonConstants.java   | 10 +++++
 .../carbondata/core/util/SessionParams.java     |  2 +
 .../spark/sql/common/util/QueryTest.scala       |  4 ++
 .../apache/spark/sql/test/util/QueryTest.scala  |  3 ++
 .../spark/rdd/AggregateDataMapCompactor.scala   |  2 +
 .../sql/CarbonDatasourceHadoopRelation.scala    |  1 +
 .../scala/org/apache/spark/sql/CarbonEnv.scala  | 18 +++++++++
 .../preaaggregate/PreAggregateUtil.scala        |  2 +
 .../sql/hive/CarbonPreAggregateRules.scala      |  9 +++++
 .../sql/optimizer/CarbonLateDecodeRule.scala    | 40 +++++++++++++++++++-
 10 files changed, 89 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index a799e51..6e6482d 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1588,6 +1588,16 @@ public final class CarbonCommonConstants {
       "carbon.sort.storage.inmemory.size.inmb";
   public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT = "512";
 
+  @CarbonProperty
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP =
+      "carbon.query.directQueryOnDataMap.enabled";
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = "false";
+
+  @CarbonProperty
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP =
+      "carbon.query.validate.directqueryondatamap";
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = "true";
+
   private CarbonCommonConstants() {
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
index ddc7539..a6ff61e 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
@@ -199,6 +199,8 @@ public class SessionParams implements Serializable {
           }
         } else if (key.startsWith(CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS)) {
           isValid = true;
+        } else if (key.equalsIgnoreCase(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP)) {
+          isValid = true;
         } else {
           throw new InvalidConfigurationException(
               "The key " + key + " not supported for dynamic configuration.");

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
index d80efb8..9c5bc38 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
@@ -33,7 +33,9 @@ import org.apache.spark.sql.test.{ResourceRegisterAndCopier, TestQueryExecutor}
 import org.apache.spark.sql.{CarbonSession, DataFrame, Row, SQLContext}
 import org.scalatest.Suite
 
+import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.datastore.impl.FileFactory
+import org.apache.carbondata.core.util.CarbonProperties
 
 class QueryTest extends PlanTest with Suite {
 
@@ -43,6 +45,8 @@ class QueryTest extends PlanTest with Suite {
 
   // Add Locale setting
   Locale.setDefault(Locale.US)
+  CarbonProperties.getInstance()
+    .addProperty(CarbonCommonConstants.VALIDATE_DIRECT_QUERY_ON_DATAMAP, "false")
 
   /**
    * Runs the plan and makes sure the answer contains all of the keywords, or the

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
index b87473a..6e5630a 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
@@ -41,6 +41,9 @@ class QueryTest extends PlanTest {
   // Add Locale setting
   Locale.setDefault(Locale.US)
 
+  CarbonProperties.getInstance()
+    .addProperty(CarbonCommonConstants.VALIDATE_DIRECT_QUERY_ON_DATAMAP, "false")
+
   /**
    * Runs the plan and makes sure the answer contains all of the keywords, or the
    * none of keywords are listed in the answer

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
index 188e776..c8a6b1d 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/AggregateDataMapCompactor.scala
@@ -70,6 +70,8 @@ class AggregateDataMapCompactor(carbonLoadModel: CarbonLoadModel,
         loadCommand.dataFrame =
                   Some(PreAggregateUtil.getDataFrame(
                     sqlContext.sparkSession, loadCommand.logicalPlan.get))
+        CarbonSession.threadSet(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP,
+          "true")
         loadCommand.processData(sqlContext.sparkSession)
         val newLoadMetaDataDetails = SegmentStatusManager.readLoadMetadata(
           carbonTable.getMetaDataFilepath, uuid)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDatasourceHadoopRelation.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDatasourceHadoopRelation.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDatasourceHadoopRelation.scala
index 39a0d1e..0978fab 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDatasourceHadoopRelation.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDatasourceHadoopRelation.scala
@@ -74,6 +74,7 @@ case class CarbonDatasourceHadoopRelation(
 
     val projection = new CarbonProjection
     requiredColumns.foreach(projection.addColumn)
+    CarbonSession.threadUnset(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP)
     val inputMetricsStats: CarbonInputMetrics = new CarbonInputMetrics
     new CarbonScanRDD(
       sparkSession,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 6b12008..8444d25 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -265,4 +265,22 @@ object CarbonEnv {
       tableName)
   }
 
+  def getThreadParam(key: String, defaultValue: String) : String = {
+    val carbonSessionInfo = ThreadLocalSessionInfo.getCarbonSessionInfo
+    if (null != carbonSessionInfo) {
+      carbonSessionInfo.getThreadParams.getProperty(key, defaultValue)
+    } else {
+      defaultValue
+    }
+  }
+
+  def getSessionParam(key: String, defaultValue: String) : String = {
+    val carbonSessionInfo = ThreadLocalSessionInfo.getCarbonSessionInfo
+    if (null != carbonSessionInfo) {
+      carbonSessionInfo.getThreadParams.getProperty(key, defaultValue)
+    } else {
+      defaultValue
+    }
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
index 1d4ebec..ae9bc9b 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/PreAggregateUtil.scala
@@ -599,6 +599,8 @@ object PreAggregateUtil {
       CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS +
       parentTableIdentifier.database.getOrElse(sparkSession.catalog.currentDatabase) + "." +
       parentTableIdentifier.table, validateSegments.toString)
+    CarbonSession.threadSet(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP,
+      "true")
     CarbonSession.updateSessionInfoToCurrentThread(sparkSession)
     try {
       loadCommand.processData(sparkSession)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
index de58805..7b4bc0d 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
@@ -259,6 +259,7 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
    * @return transformed plan
    */
   def transformPreAggQueryPlan(logicalPlan: LogicalPlan): LogicalPlan = {
+    var isPlanUpdated = false
     val updatedPlan = logicalPlan.transform {
       case agg@Aggregate(
         grExp,
@@ -294,6 +295,7 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
                   childPlan,
                   carbonTable,
                   agg)
+              isPlanUpdated = true
               Aggregate(updatedGroupExp,
                 updatedAggExp,
                 CarbonReflectionUtils
@@ -346,6 +348,7 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
                   childPlan,
                   carbonTable,
                   agg)
+              isPlanUpdated = true
               Aggregate(updatedGroupExp,
                 updatedAggExp,
                 CarbonReflectionUtils
@@ -401,6 +404,7 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
                   childPlan,
                   carbonTable,
                   agg)
+              isPlanUpdated = true
               Aggregate(updatedGroupExp,
                 updatedAggExp,
                 Filter(updatedFilterExpression.get,
@@ -461,6 +465,7 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
                   childPlan,
                   carbonTable,
                   agg)
+              isPlanUpdated = true
               Aggregate(updatedGroupExp,
                 updatedAggExp,
                 Filter(updatedFilterExpression.get,
@@ -481,6 +486,10 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
         }
 
     }
+    if(isPlanUpdated) {
+      CarbonSession.threadSet(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP,
+        "true")
+    }
     updatedPlan
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
index 06ad0ad..0aa7514 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
@@ -35,7 +35,7 @@ import org.apache.spark.sql.types.{IntegerType, StringType}
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.stats.QueryStatistic
-import org.apache.carbondata.core.util.CarbonTimeStatisticsFactory
+import org.apache.carbondata.core.util.{CarbonProperties, CarbonTimeStatisticsFactory, ThreadLocalSessionInfo}
 import org.apache.carbondata.spark.CarbonAliasDecoderRelation
 
 
@@ -66,7 +66,8 @@ class CarbonLateDecodeRule extends Rule[LogicalPlan] with PredicateHelper {
   }
 
   def checkIfRuleNeedToBeApplied(plan: LogicalPlan, removeSubQuery: Boolean = false): Boolean = {
-    relations = collectCarbonRelation(plan);
+    relations = collectCarbonRelation(plan)
+    validateQueryDirectlyOnDataMap(relations)
     if (relations.nonEmpty && !isOptimized(plan)) {
       // In case scalar subquery skip the transformation and update the flag.
       if (relations.exists(_.carbonRelation.isSubquery.nonEmpty)) {
@@ -87,6 +88,41 @@ class CarbonLateDecodeRule extends Rule[LogicalPlan] with PredicateHelper {
     }
   }
 
+  /**
+   * Below method will be used to validate whether the query is fired directly on a
+   * pre-aggregate data map or not
+   * @param relations all relations from the query
+   *
+   */
+  def validateQueryDirectlyOnDataMap(relations: Seq[CarbonDecoderRelation]): Unit = {
+    var isPreAggDataMapExists = false
+    // first check if pre aggregate data map exists or not
+    relations.foreach{relation =>
+      if (relation.carbonRelation.carbonTable.isChildDataMap) {
+        isPreAggDataMapExists = true
+      }
+    }
+    val validateQuery = CarbonProperties.getInstance
+      .getProperty(CarbonCommonConstants.VALIDATE_DIRECT_QUERY_ON_DATAMAP,
+        CarbonCommonConstants.VALIDATE_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE).toBoolean
+    var isThrowException = false
+    // if validate query is enabled and relation contains pre aggregate data map
+    if (validateQuery && isPreAggDataMapExists) {
+      val carbonSessionInfo = ThreadLocalSessionInfo.getCarbonSessionInfo
+      if (null != carbonSessionInfo) {
+        val supportQueryOnDataMap = CarbonEnv.getThreadParam(
+          CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP,
+            CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE).toBoolean
+        if (!supportQueryOnDataMap) {
+          isThrowException = true
+        }
+      }
+    }
+    if(isThrowException) {
+      throw new AnalysisException("Query On DataMap not supported")
+    }
+  }
+
   private def collectCarbonRelation(plan: LogicalPlan): Seq[CarbonDecoderRelation] = {
     plan collect {
       case l: LogicalRelation if l.relation.isInstanceOf[CarbonDatasourceHadoopRelation] =>


[10/50] [abbrv] carbondata git commit: [CARBONDATA-2092] Fix compaction bug to prevent the compaction flow from going through the restructure compaction flow

Posted by ra...@apache.org.
[CARBONDATA-2092] Fix compaction bug to prevent the compaction flow from going through the restructure compaction flow

Problem and analysis:
During data load, the current schema timestamp is written to the carbondata file header. During compaction it is used to decide whether a block is a restructured block or conforms to the latest schema.
Since blocklet information is now stored in the index file, the carbondata file header is not read while loading it into memory, so the schema timestamp is never set on the blocklet information. Because of this, comparing the current schema timestamp with the timestamp stored in the block always mismatches during compaction, and the flow goes through the restructure compaction flow instead of the normal compaction flow.

Impact:
Compaction performance degrades because the restructure compaction flow sorts the data again.

Solution:
Modified the code so that compaction does not go through the restructure compaction flow unless an add or drop column (restructure) operation has actually been performed.
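
A minimal sketch of the two decisions this fix touches (helper and object names are hypothetical; the real logic lives in CarbonCompactionUtil and CarbonMergerRDD below):

object CompactionFlowSketch {
  // Force-read the carbondata file footer only when the blocklet detail info carries no
  // schema-updated timestamp (0L), e.g. for index files written before this field existed.
  def shouldForceReadFooter(schemaUpdatedTimeStamp: Long): Boolean =
    schemaUpdatedTimeStamp == 0L

  // Processor choice made during merge: restructured blocks require re-sorting the data.
  def pickProcessor(restructuredBlockExists: Boolean): String =
    if (restructuredBlockExists) "CompactionResultSortProcessor" // slower, sorts again
    else "RowResultMergerProcessor"                              // normal compaction merge
}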

This closes #1875


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7ed144c5
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/7ed144c5
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/7ed144c5

Branch: refs/heads/branch-1.3
Commit: 7ed144c537b48353de1ee8bf710c884d555c01ce
Parents: f34ea5c
Author: manishgupta88 <to...@gmail.com>
Authored: Tue Jan 23 21:12:39 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Thu Feb 1 12:05:01 2018 +0530

----------------------------------------------------------------------
 .../util/AbstractDataFileFooterConverter.java   |  4 ++++
 .../core/util/CarbonMetadataUtil.java           |  6 ++++-
 .../apache/carbondata/core/util/CarbonUtil.java | 24 +++++++++++++++++---
 .../core/util/CarbonMetadataUtilTest.java       |  3 ++-
 format/src/main/thrift/carbondata_index.thrift  |  1 +
 .../carbondata/hadoop/CarbonInputSplit.java     |  2 ++
 .../carbondata/spark/rdd/CarbonMergerRDD.scala  |  3 +++
 .../CarbonGetTableDetailComandTestCase.scala    |  5 ++--
 .../processing/merger/CarbonCompactionUtil.java | 11 ++++++++-
 .../store/writer/AbstractFactDataWriter.java    |  3 ++-
 10 files changed, 53 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
index 5ebf4cf..c7bc6aa 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
@@ -165,6 +165,10 @@ public abstract class AbstractDataFileFooterConverter {
         dataFileFooter.setBlockInfo(new BlockInfo(tableBlockInfo));
         dataFileFooter.setSegmentInfo(segmentInfo);
         dataFileFooter.setVersionId(tableBlockInfo.getVersion());
+        // In case of old schema time stamp will not be found in the index header
+        if (readIndexHeader.isSetSchema_time_stamp()) {
+          dataFileFooter.setSchemaUpdatedTimeStamp(readIndexHeader.getSchema_time_stamp());
+        }
         if (readBlockIndexInfo.isSetBlocklet_info()) {
           List<BlockletInfo> blockletInfoList = new ArrayList<BlockletInfo>();
           BlockletInfo blockletInfo = new DataFileFooterConverterV3()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
index 0ca0df8..9880b4d 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
@@ -234,10 +234,12 @@ public class CarbonMetadataUtil {
    *
    * @param columnCardinality cardinality of each column
    * @param columnSchemaList  list of column present in the table
+   * @param bucketNumber
+   * @param schemaTimeStamp current timestamp of schema
    * @return Index header object
    */
   public static IndexHeader getIndexHeader(int[] columnCardinality,
-      List<ColumnSchema> columnSchemaList, int bucketNumber) {
+      List<ColumnSchema> columnSchemaList, int bucketNumber, long schemaTimeStamp) {
     // create segment info object
     SegmentInfo segmentInfo = new SegmentInfo();
     // set the number of columns
@@ -254,6 +256,8 @@ public class CarbonMetadataUtil {
     indexHeader.setTable_columns(columnSchemaList);
     // set the bucket number
     indexHeader.setBucket_id(bucketNumber);
+    // set the current schema time stamp, which will be used for deciding whether a block is restructured
+    indexHeader.setSchema_time_stamp(schemaTimeStamp);
     return indexHeader;
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index 600b1c9..e060c84 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -968,8 +968,27 @@ public final class CarbonUtil {
    * Below method will be used to read the data file matadata
    */
   public static DataFileFooter readMetadatFile(TableBlockInfo tableBlockInfo) throws IOException {
+    return getDataFileFooter(tableBlockInfo, false);
+  }
+
+  /**
+   * Below method will be used to read the data file matadata
+   *
+   * @param tableBlockInfo
+   * @param forceReadDataFileFooter flag to decide whether to read the footer of
+   *                                carbon data file forcefully
+   * @return
+   * @throws IOException
+   */
+  public static DataFileFooter readMetadatFile(TableBlockInfo tableBlockInfo,
+      boolean forceReadDataFileFooter) throws IOException {
+    return getDataFileFooter(tableBlockInfo, forceReadDataFileFooter);
+  }
+
+  private static DataFileFooter getDataFileFooter(TableBlockInfo tableBlockInfo,
+      boolean forceReadDataFileFooter) throws IOException {
     BlockletDetailInfo detailInfo = tableBlockInfo.getDetailInfo();
-    if (detailInfo == null) {
+    if (detailInfo == null || forceReadDataFileFooter) {
       AbstractDataFileFooterConverter fileFooterConverter =
           DataFileFooterConverterFactory.getInstance()
               .getDataFileFooterConverter(tableBlockInfo.getVersion());
@@ -977,8 +996,7 @@ public final class CarbonUtil {
     } else {
       DataFileFooter fileFooter = new DataFileFooter();
       fileFooter.setSchemaUpdatedTimeStamp(detailInfo.getSchemaUpdatedTimeStamp());
-      ColumnarFormatVersion version =
-          ColumnarFormatVersion.valueOf(detailInfo.getVersionNumber());
+      ColumnarFormatVersion version = ColumnarFormatVersion.valueOf(detailInfo.getVersionNumber());
       AbstractDataFileFooterConverter dataFileFooterConverter =
           DataFileFooterConverterFactory.getInstance().getDataFileFooterConverter(version);
       List<ColumnSchema> schema = dataFileFooterConverter.getSchema(tableBlockInfo);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/test/java/org/apache/carbondata/core/util/CarbonMetadataUtilTest.java
----------------------------------------------------------------------
diff --git a/core/src/test/java/org/apache/carbondata/core/util/CarbonMetadataUtilTest.java b/core/src/test/java/org/apache/carbondata/core/util/CarbonMetadataUtilTest.java
index 40463a6..da31ea3 100644
--- a/core/src/test/java/org/apache/carbondata/core/util/CarbonMetadataUtilTest.java
+++ b/core/src/test/java/org/apache/carbondata/core/util/CarbonMetadataUtilTest.java
@@ -168,7 +168,8 @@ public class CarbonMetadataUtilTest {
     indexHeader.setSegment_info(segmentInfo);
     indexHeader.setTable_columns(columnSchemaList);
     indexHeader.setBucket_id(0);
-    IndexHeader indexheaderResult = getIndexHeader(columnCardinality, columnSchemaList, 0);
+    indexHeader.setSchema_time_stamp(0L);
+    IndexHeader indexheaderResult = getIndexHeader(columnCardinality, columnSchemaList, 0, 0L);
     assertEquals(indexHeader, indexheaderResult);
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/format/src/main/thrift/carbondata_index.thrift
----------------------------------------------------------------------
diff --git a/format/src/main/thrift/carbondata_index.thrift b/format/src/main/thrift/carbondata_index.thrift
index 60ec769..f77a256 100644
--- a/format/src/main/thrift/carbondata_index.thrift
+++ b/format/src/main/thrift/carbondata_index.thrift
@@ -31,6 +31,7 @@ struct IndexHeader{
   2: required list<schema.ColumnSchema> table_columns;	// Description of columns in this file
   3: required carbondata.SegmentInfo segment_info;	// Segment info (will be same/repeated for all files in this segment)
   4: optional i32 bucket_id; // Bucket number in which file contains
+  5: optional i64 schema_time_stamp; // Timestamp to compare column schema against master schema
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java
----------------------------------------------------------------------
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java
index c1ef14d..5ab6605 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputSplit.java
@@ -181,6 +181,7 @@ public class CarbonInputSplit extends FileSplit
                 split.getSegmentId(), split.getLocations(), split.getLength(), blockletInfos,
                 split.getVersion(), split.getDeleteDeltaFiles());
         blockInfo.setDetailInfo(split.getDetailInfo());
+        blockInfo.setBlockOffset(split.getDetailInfo().getBlockFooterOffset());
         tableBlockInfoList.add(blockInfo);
       } catch (IOException e) {
         throw new RuntimeException("fail to get location of split: " + split, e);
@@ -199,6 +200,7 @@ public class CarbonInputSplit extends FileSplit
               inputSplit.getLength(), blockletInfos, inputSplit.getVersion(),
               inputSplit.getDeleteDeltaFiles());
       blockInfo.setDetailInfo(inputSplit.getDetailInfo());
+      blockInfo.setBlockOffset(inputSplit.getDetailInfo().getBlockFooterOffset());
       return blockInfo;
     } catch (IOException e) {
       throw new RuntimeException("fail to get location of split: " + inputSplit, e);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
index c482a92..f37b0c5 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
@@ -183,6 +183,7 @@ class CarbonMergerRDD[K, V](
           .checkIfAnyRestructuredBlockExists(segmentMapping,
             dataFileMetadataSegMapping,
             carbonTable.getTableLastUpdatedTime)
+        LOGGER.info(s"Restructured block exists: $restructuredBlockExists")
         DataTypeUtil.setDataTypeConverter(new SparkDataTypeConverterImpl)
         exec = new CarbonCompactionExecutor(segmentMapping, segmentProperties,
           carbonTable, dataFileMetadataSegMapping, restructuredBlockExists)
@@ -215,6 +216,7 @@ class CarbonMergerRDD[K, V](
         carbonLoadModel.setPartitionId("0")
         var processor: AbstractResultProcessor = null
         if (restructuredBlockExists) {
+          LOGGER.info("CompactionResultSortProcessor flow is selected")
           processor = new CompactionResultSortProcessor(
             carbonLoadModel,
             carbonTable,
@@ -223,6 +225,7 @@ class CarbonMergerRDD[K, V](
             factTableName,
             partitionNames)
         } else {
+          LOGGER.info("RowResultMergerProcessor flow is selected")
           processor =
             new RowResultMergerProcessor(
               databaseName,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala b/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
index 2d90063..6265d0d 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
@@ -42,9 +42,10 @@ class CarbonGetTableDetailCommandTestCase extends QueryTest with BeforeAndAfterA
 
     assertResult(2)(result.length)
     assertResult("table_info1")(result(0).getString(0))
-    assertResult(2136)(result(0).getLong(1))
+    // 2143 is the size of carbon table
+    assertResult(2143)(result(0).getLong(1))
     assertResult("table_info2")(result(1).getString(0))
-    assertResult(2136)(result(1).getLong(1))
+    assertResult(2143)(result(1).getLong(1))
   }
 
   override def afterAll: Unit = {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionUtil.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionUtil.java b/processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionUtil.java
index d796262..2a69f0d 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionUtil.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionUtil.java
@@ -119,7 +119,16 @@ public class CarbonCompactionUtil {
       DataFileFooter dataFileMatadata = null;
       // check if segId is already present in map
       List<DataFileFooter> metadataList = segmentBlockInfoMapping.get(segId);
-      dataFileMatadata = CarbonUtil.readMetadatFile(blockInfo);
+      // decide whether to forcefully read the carbondata file footer. This gives the schema
+      // last updated time, based on which it is decided whether compaction goes through the
+      // restructure compaction flow or the normal compaction flow.
+      // This decision impacts compaction performance, so it needs to be made carefully
+      if (null != blockInfo.getDetailInfo()
+          && blockInfo.getDetailInfo().getSchemaUpdatedTimeStamp() == 0L) {
+        dataFileMatadata = CarbonUtil.readMetadatFile(blockInfo, true);
+      } else {
+        dataFileMatadata = CarbonUtil.readMetadatFile(blockInfo);
+      }
       if (null == metadataList) {
         // if it is not present
         eachSegmentBlocks.add(dataFileMatadata);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java b/processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java
index d1fc17b..c0b8065 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java
@@ -417,7 +417,8 @@ public abstract class AbstractFactDataWriter implements CarbonFactDataWriter {
   protected void writeIndexFile() throws IOException, CarbonDataWriterException {
     // get the header
     IndexHeader indexHeader = CarbonMetadataUtil
-        .getIndexHeader(localCardinality, thriftColumnSchemaList, model.getBucketId());
+        .getIndexHeader(localCardinality, thriftColumnSchemaList, model.getBucketId(),
+            model.getSchemaUpdatedTimeStamp());
     // get the block index info thrift
     List<BlockIndex> blockIndexThrift = CarbonMetadataUtil.getBlockIndexInfo(blockIndexInfoList);
     // randomly choose a temp location for index file


[40/50] [abbrv] carbondata git commit: [HOTFIX] Fix streaming test case issue for file input source

Posted by ra...@apache.org.
[HOTFIX] Fix streaming test case issue for file input source

Fix streaming test case issue for file input source

This closes #1922


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/36ff9321
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/36ff9321
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/36ff9321

Branch: refs/heads/branch-1.3
Commit: 36ff93216d7acce7bde7287a285d00944065da3b
Parents: 11f2371
Author: QiangCai <qi...@qq.com>
Authored: Sat Feb 3 18:04:49 2018 +0800
Committer: chenliang613 <ch...@huawei.com>
Committed: Sat Feb 3 21:27:31 2018 +0800

----------------------------------------------------------------------
 .../spark/carbondata/TestStreamingTableOperation.scala   | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/36ff9321/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
index e1e41dc..a368cef 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
@@ -233,6 +233,9 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
       sql("select count(*) from streaming.stream_table_file"),
       Seq(Row(25))
     )
+
+    val row = sql("select * from streaming.stream_table_file order by id").head()
+    assertResult(Row(10, "name_10", "city_10", 100000.0))(row)
   }
 
   // bad records
@@ -875,13 +878,7 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
           .add("file", "string")
         var qry: StreamingQuery = null
         try {
-          val readSocketDF = spark.readStream
-            .format("csv")
-            .option("sep", ",")
-            .schema(inputSchema)
-            .option("path", csvDataDir)
-            .option("header", "false")
-            .load()
+          val readSocketDF = spark.readStream.text(csvDataDir)
 
           // Write data from socket stream to carbondata file
           qry = readSocketDF.writeStream


[11/50] [abbrv] carbondata git commit: [CARBONDATA-2111] Fix the decoder issue when multiple joins are present in the TPCH query

Posted by ra...@apache.org.
[CARBONDATA-2111] Fix the decoder issue when multiple joins are present in the TPCH query

Problem
The TPC-H query which has multiple joins fails to return any rows.

Analysis
This is because decoding of dictionary columns does not happen for some joins when the query is a self join and the same column is used in multiple joins.

Solution
If project list attributes are not present among the decoder's to-be-decoded attributes, then add them to the notDecodeCarryForward list; otherwise
there is a chance of skipping the decoding of those columns in join cases. If the left and right plans both use the same attribute, but on the left
side it is not decoded and on the right side it is decoded, then we should decide based on the project list of the plan above.
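
For illustration, a hedged sketch of the affected query shape, assuming the sql() test helper and the TPC-H style tables (supplier, nation, region) created in the test added below; it is a simplified slice, not the exact failing query:

    // Simplified from the TPC-H Q2 style test below: in the full query the
    // nation/region joins on the dictionary columns n_regionkey and
    // r_regionkey are repeated inside a correlated subquery, which is where
    // decoding of those columns could previously be skipped.
    val df = sql(
      """select s_name, n_name
        | from supplier, nation, region
        | where s_nationkey = n_nationkey
        |   and n_regionkey = r_regionkey
        |   and r_name = 'EUROPE'""".stripMargin)
    df.show()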

This closes #1895


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c9a02fc2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c9a02fc2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/c9a02fc2

Branch: refs/heads/branch-1.3
Commit: c9a02fc2a8389288085fae4ba5d7375d11de22ff
Parents: 7ed144c
Author: ravipesala <ra...@gmail.com>
Authored: Wed Jan 31 18:39:05 2018 +0530
Committer: manishgupta88 <to...@gmail.com>
Committed: Thu Feb 1 12:43:48 2018 +0530

----------------------------------------------------------------------
 .../allqueries/AllDataTypesTestCase.scala       | 47 +++++++++++++++++++-
 .../CarbonDecoderOptimizerHelper.scala          |  4 ++
 .../sql/optimizer/CarbonLateDecodeRule.scala    | 26 +++++++++++
 3 files changed, 76 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/c9a02fc2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
index e739091..afff2d0 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
@@ -17,8 +17,10 @@
 
 package org.apache.carbondata.spark.testsuite.allqueries
 
-import org.apache.spark.sql.{Row, SaveMode}
+import org.apache.spark.sql.catalyst.plans.logical.Join
+import org.apache.spark.sql.{CarbonDictionaryCatalystDecoder, Row, SaveMode}
 import org.scalatest.BeforeAndAfterAll
+
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.test.util.QueryTest
@@ -1154,4 +1156,47 @@ class AllDataTypesTestCase extends QueryTest with BeforeAndAfterAll {
 
   }
 
+  test("TPCH query issue with not joining with decoded values") {
+
+    sql("drop table if exists SUPPLIER")
+    sql("drop table if exists PARTSUPP")
+    sql("drop table if exists CUSTOMER")
+    sql("drop table if exists NATION")
+    sql("drop table if exists REGION")
+    sql("drop table if exists PART")
+    sql("drop table if exists LINEITEM")
+    sql("drop table if exists ORDERS")
+    sql("create table if not exists SUPPLIER(S_COMMENT string,S_SUPPKEY string,S_NAME string, S_ADDRESS string, S_NATIONKEY string, S_PHONE string, S_ACCTBAL double) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_EXCLUDE'='S_COMMENT, S_SUPPKEY, S_NAME, S_ADDRESS, S_NATIONKEY, S_PHONE','table_blocksize'='300','SORT_COLUMNS'='')")
+    sql("create table if not exists PARTSUPP (  PS_PARTKEY int,  PS_SUPPKEY  string,  PS_AVAILQTY  int,  PS_SUPPLYCOST  double,  PS_COMMENT  string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_EXCLUDE'='PS_SUPPKEY,PS_COMMENT', 'table_blocksize'='300', 'no_inverted_index'='PS_SUPPKEY, PS_COMMENT','SORT_COLUMNS'='')")
+    sql("create table if not exists CUSTOMER(  C_MKTSEGMENT string,  C_NATIONKEY string,  C_CUSTKEY string,  C_NAME string,  C_ADDRESS string,  C_PHONE string,  C_ACCTBAL double,  C_COMMENT string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='C_MKTSEGMENT,C_NATIONKEY','DICTIONARY_EXCLUDE'='C_CUSTKEY,C_NAME,C_ADDRESS,C_PHONE,C_COMMENT', 'table_blocksize'='300', 'no_inverted_index'='C_CUSTKEY,C_NAME,C_ADDRESS,C_PHONE,C_COMMENT','SORT_COLUMNS'='C_MKTSEGMENT')")
+    sql("create table if not exists NATION (  N_NAME string,  N_NATIONKEY string,  N_REGIONKEY string,  N_COMMENT  string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='N_REGIONKEY','DICTIONARY_EXCLUDE'='N_COMMENT', 'table_blocksize'='300','no_inverted_index'='N_COMMENT','SORT_COLUMNS'='N_NAME')")
+    sql("create table if not exists REGION(  R_NAME string,  R_REGIONKEY string,  R_COMMENT string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='R_NAME,R_REGIONKEY','DICTIONARY_EXCLUDE'='R_COMMENT', 'table_blocksize'='300','no_inverted_index'='R_COMMENT','SORT_COLUMNS'='R_NAME')")
+    sql("create table if not exists PART(  P_BRAND string,  P_SIZE int,  P_CONTAINER string,  P_TYPE string,  P_PARTKEY INT ,  P_NAME string,  P_MFGR string,  P_RETAILPRICE double,  P_COMMENT string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='P_BRAND,P_SIZE,P_CONTAINER,P_MFGR','DICTIONARY_EXCLUDE'='P_NAME, P_COMMENT', 'table_blocksize'='300','no_inverted_index'='P_NAME,P_COMMENT,P_MFGR','SORT_COLUMNS'='P_SIZE,P_TYPE,P_NAME,P_BRAND,P_CONTAINER')")
+    sql("create table if not exists LINEITEM(  L_SHIPDATE date,  L_SHIPMODE string,  L_SHIPINSTRUCT string,  L_RETURNFLAG string,  L_RECEIPTDATE date,  L_ORDERKEY INT ,  L_PARTKEY INT ,  L_SUPPKEY   string,  L_LINENUMBER int,  L_QUANTITY double,  L_EXTENDEDPRICE double,  L_DISCOUNT double,  L_TAX double,  L_LINESTATUS string,  L_COMMITDATE date,  L_COMMENT  string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RECEIPTDATE,L_COMMITDATE,L_RETURNFLAG,L_LINESTATUS','DICTIONARY_EXCLUDE'='L_SUPPKEY, L_COMMENT', 'table_blocksize'='300', 'no_inverted_index'='L_SUPPKEY,L_COMMENT','SORT_COLUMNS'='L_SHIPDATE,L_RETURNFLAG,L_SHIPMODE,L_RECEIPTDATE,L_SHIPINSTRUCT')")
+    sql("create table if not exists ORDERS(  O_ORDERDATE date,  O_ORDERPRIORITY string,  O_ORDERSTATUS string,  O_ORDERKEY int,  O_CUSTKEY string,  O_TOTALPRICE double,  O_CLERK string,  O_SHIPPRIORITY int,  O_COMMENT string) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES ('DICTIONARY_INCLUDE'='O_ORDERDATE,O_ORDERSTATUS','DICTIONARY_EXCLUDE'='O_ORDERPRIORITY, O_CUSTKEY, O_CLERK, O_COMMENT', 'table_blocksize'='300','no_inverted_index'='O_ORDERPRIORITY, O_CUSTKEY, O_CLERK, O_COMMENT', 'SORT_COLUMNS'='O_ORDERDATE')")
+    val df = sql(
+      "select s_acctbal, s_name, n_name, p_partkey, p_mfgr, s_address, s_phone, s_comment from " +
+      "part, supplier, partsupp, nation, region where p_partkey = ps_partkey and s_suppkey = " +
+      "ps_suppkey and p_size = 15 and p_type like '%BRASS' and s_nationkey = n_nationkey and " +
+      "n_regionkey = r_regionkey and r_name = 'EUROPE' and ps_supplycost = ( select min" +
+      "(ps_supplycost) from partsupp, supplier,nation, region where p_partkey = ps_partkey and " +
+      "s_suppkey = ps_suppkey and s_nationkey = n_nationkey and n_regionkey = r_regionkey and " +
+      "r_name = 'EUROPE' ) order by s_acctbal desc, n_name, s_name, p_partkey limit 100")
+
+    val decoders = df.queryExecution.optimizedPlan.collect {
+      case p: CarbonDictionaryCatalystDecoder => p
+    }
+
+    assertResult(5)(decoders.length)
+
+    sql("drop table if exists SUPPLIER")
+    sql("drop table if exists PARTSUPP")
+    sql("drop table if exists CUSTOMER")
+    sql("drop table if exists NATION")
+    sql("drop table if exists REGION")
+    sql("drop table if exists PART")
+    sql("drop table if exists LINEITEM")
+    sql("drop table if exists ORDERS")
+  }
+
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c9a02fc2/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
index fee4b66..e055c86 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
@@ -43,6 +43,9 @@ case class CarbonDictionaryTempDecoder(
     isOuter: Boolean = false,
     aliasMap: Option[CarbonAliasDecoderRelation] = None) extends UnaryNode {
   var processed = false
+  // In case of a join plan, if the project does not include the notDecode attributes then we
+  // should not carry them forward to the plan above.
+  val notDecodeCarryForward = new util.HashSet[AttributeReferenceWrapper]()
 
   def getAttrsNotDecode: util.Set[Attribute] = {
     val set = new util.HashSet[Attribute]()
@@ -111,6 +114,7 @@ class CarbonDecoderProcessor {
       decoderNotDecode: util.HashSet[AttributeReferenceWrapper]): Unit = {
     scalaList.reverseMap {
       case Node(cd: CarbonDictionaryTempDecoder) =>
+        cd.notDecodeCarryForward.asScala.foreach(decoderNotDecode.remove)
         decoderNotDecode.asScala.foreach(cd.attrsNotDecode.add)
         decoderNotDecode.asScala.foreach(cd.attrList.remove)
         decoderNotDecode.addAll(cd.attrList)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c9a02fc2/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
index 764891b..06ad0ad 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonLateDecodeRule.scala
@@ -522,6 +522,30 @@ class CarbonLateDecodeRule extends Rule[LogicalPlan] with PredicateHelper {
           }
       }
 
+
+    transFormedPlan transform {
+      // If project list attributes are not present among the decoder's to-be-decoded attributes
+      // then add them to the notDecodeCarryForward list, otherwise there is a chance of skipping
+      // the decoding of those columns in join cases.
+      // If the left and right plans both use the same attribute, but on the left side it is
+      // not decoded and on the right side it is decoded, then we should decide based on the
+      // project list of the plan above.
+      case project@Project(projectList, child: Join) =>
+        val allAttr = new util.HashSet[AttributeReferenceWrapper]()
+        val allDecoder = child.collect {
+          case cd : CarbonDictionaryTempDecoder =>
+            allAttr.addAll(cd.attrList)
+            cd
+        }
+        if (allDecoder.nonEmpty && !allAttr.isEmpty) {
+          val notForward = allAttr.asScala.filterNot {attrWrapper =>
+            val attr = attrWrapper.attr
+            projectList.exists(f => attr.name.equalsIgnoreCase(f.name) && attr.exprId == f.exprId)
+          }
+          allDecoder.head.notDecodeCarryForward.addAll(notForward.asJava)
+        }
+        project
+    }
     val processor = new CarbonDecoderProcessor
     processor.updateDecoders(processor.getDecoderList(transFormedPlan))
     updateProjection(updateTempDecoder(transFormedPlan, aliasMap, attrMap))
@@ -825,5 +849,7 @@ case class CarbonDecoderRelation(
   }
 
   lazy val dictionaryMap = carbonRelation.carbonRelation.metaData.dictionaryMap
+
+  override def toString: String = carbonRelation.carbonTable.getTableUniqueName
 }
 


[25/50] [abbrv] carbondata git commit: [CARBONDATA-1918] Incorrect data is displayed when String is updated using Sentences

Posted by ra...@apache.org.
[CARBONDATA-1918] Incorrect data is displayed when String is updated using Sentences

Incorrect data is displayed when updating a String column using the sentences UDF. The sentences UDF returns an array, and when a string column is updated with an array, wrong data gets written. Therefore, we have to check for a supported data type before updating.
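
For example, a hedged sketch of the update shape that is now rejected (using the senten table and the sql()/intercept test helpers from the test added below):

    // sentences() returns an array, so instead of silently writing wrong
    // data the update now fails with "Unsupported data type: Array".
    sql("create table senten(name string, comment string) stored by 'carbondata'")
    sql("insert into senten select 'aaa','comment for aaa'")
    intercept[Exception] {
      sql("update senten set(comment)=(sentences('Hello there! How are you?'))").show()
    }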

This closes  #1704


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/2610a609
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/2610a609
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/2610a609

Branch: refs/heads/branch-1.3
Commit: 2610a6091623c271552b7a69d402dded79ba3517
Parents: a9a0201
Author: dhatchayani <dh...@gmail.com>
Authored: Wed Dec 20 18:16:10 2017 +0530
Committer: kumarvishal <ku...@gmail.com>
Committed: Fri Feb 2 21:22:30 2018 +0530

----------------------------------------------------------------------
 .../sdv/generated/DataLoadingIUDTestCase.scala      |  8 ++++----
 .../testsuite/iud/UpdateCarbonTableTestCase.scala   | 13 +++++++++++++
 .../mutation/CarbonProjectForUpdateCommand.scala    | 16 ++++++++++++++++
 3 files changed, 33 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/2610a609/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
index b4459ab..4c232be 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
@@ -1858,13 +1858,13 @@ ignore("IUD-01-01-01_040-23", Include) {
        
 
 //Check for updating carbon table set column value to a value returned by split function
+//Split will give us array value
 test("IUD-01-01-01_040-25", Include) {
    sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
  sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (active_status)= (split('t','a')) """).collect
-  checkAnswer(s""" select active_status from default.t_carbn01  group by active_status """,
-    Seq(Row("t\\")), "DataLoadingIUDTestCase_IUD-01-01-01_040-25")
-   sql(s"""drop table default.t_carbn01  """).collect
+ intercept[Exception] {
+   sql(s"""update default.t_carbn01  set (active_status)= (split('t','a')) """).collect
+ }
 }
        
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/2610a609/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
index cf4fc07..98c9a16 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
@@ -691,6 +691,19 @@ class UpdateCarbonTableTestCase extends QueryTest with BeforeAndAfterAll {
                      CarbonCommonConstants.FILE_SEPARATOR + "Part0")
     assert(f.list().length == 2)
   }
+  test("test sentences func in update statement") {
+    sql("drop table if exists senten")
+    sql("create table senten(name string, comment string) stored by 'carbondata'")
+    sql("insert into senten select 'aaa','comment for aaa'")
+    sql("insert into senten select 'bbb','comment for bbb'")
+    sql("select * from senten").show()
+    val errorMessage = intercept[Exception] {
+      sql("update senten set(comment)=(sentences('Hello there! How are you?'))").show()
+    }.getMessage
+    assert(errorMessage
+      .contains("Unsupported data type: Array"))
+    sql("drop table if exists senten")
+  }
 
   override def afterAll {
     sql("use default")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/2610a609/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
index 2f12bef..318c904 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
@@ -22,6 +22,7 @@ import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
 import org.apache.spark.sql.execution.command._
 import org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand
 import org.apache.spark.sql.execution.datasources.LogicalRelation
+import org.apache.spark.sql.types.ArrayType
 import org.apache.spark.storage.StorageLevel
 
 import org.apache.carbondata.common.logging.LogServiceFactory
@@ -186,6 +187,18 @@ private[sql] case class CarbonProjectForUpdateCommand(
       (tableName == relation.identifier.getCarbonTableIdentifier.getTableName)
     }
 
+    // from the dataFrame schema, iterate through all the columns to be updated and
+    // check the data type; if the data type is complex then throw an exception
+    def checkForUnsupportedDataType(dataFrame: DataFrame): Unit = {
+      dataFrame.schema.foreach(col => {
+        // the new column to be updated will be appended with "-updatedColumn" suffix
+        if (col.name.endsWith(CarbonCommonConstants.UPDATED_COL_EXTENSION) &&
+            col.dataType.isInstanceOf[ArrayType]) {
+          throw new UnsupportedOperationException("Unsupported data type: Array")
+        }
+      })
+    }
+
     def getHeader(relation: CarbonDatasourceHadoopRelation, plan: LogicalPlan): String = {
       var header = ""
       var found = false
@@ -206,6 +219,9 @@ private[sql] case class CarbonProjectForUpdateCommand(
       }
       header
     }
+
+    // check for the data type of the new value to be updated
+    checkForUnsupportedDataType(dataFrame)
     val ex = dataFrame.queryExecution.analyzed
     val res = ex find {
       case relation: LogicalRelation


[34/50] [abbrv] carbondata git commit: [CARBONDATA-2111] Fix self join query with dictionary include

Posted by ra...@apache.org.
[CARBONDATA-2111] Fix self join query with dictionary include
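
For context, a hedged sketch of the failing query shape (mirroring the test added below; sql() is the test helper): a self join on a DICTIONARY_INCLUDE column combined with an RLIKE filter on the same column.

    // Self join on cust_name (a DICTIONARY_INCLUDE column) plus an RLIKE
    // filter on the same column; after the fix the count is greater than zero.
    val count = sql(
      """select b.cust_id, b.cust_name
        | from uniqdata_includedictionary a
        | join uniqdata_includedictionary b on a.cust_name = b.cust_name
        | where a.cust_name rlike '10'""".stripMargin).count()
    assert(count > 0)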

This closes #1918


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6fd778ab
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6fd778ab
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/6fd778ab

Branch: refs/heads/branch-1.3
Commit: 6fd778ab177fba19042b28d15a6e9b395477ec85
Parents: 22f78fa
Author: ravipesala <ra...@gmail.com>
Authored: Fri Feb 2 23:10:48 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Sat Feb 3 17:30:57 2018 +0800

----------------------------------------------------------------------
 .../testsuite/allqueries/AllDataTypesTestCase.scala   | 14 ++++++++++++++
 .../sql/optimizer/CarbonDecoderOptimizerHelper.scala  |  2 +-
 2 files changed, 15 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/6fd778ab/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
index afff2d0..4c6b47a 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
@@ -1199,4 +1199,18 @@ class AllDataTypesTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table if exists ORDERS")
   }
 
+  test("test self join query fail") {
+    sql("DROP TABLE IF EXISTS uniqdata_INCLUDEDICTIONARY")
+
+    sql("CREATE TABLE uniqdata_INCLUDEDICTIONARY (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')")
+    sql(s"LOAD DATA INPATH '${resourcesPath + "/data_with_all_types.csv"}' into table" +
+              " uniqdata_INCLUDEDICTIONARY OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='\"'," +
+              "'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION," +
+              "DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2," +
+              "Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')")
+
+    val count = sql("select b.BIGINT_COLUMN1,b.DECIMAL_COLUMN1,b.Double_COLUMN1,b.DOB,b.CUST_ID,b.CUST_NAME from uniqdata_INCLUDEDICTIONARY a join uniqdata_INCLUDEDICTIONARY b on a.cust_name=b.cust_name and a.cust_name RLIKE '10'").count()
+    assert(count > 0)
+    sql("DROP TABLE IF EXISTS uniqdata_INCLUDEDICTIONARY")
+    }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6fd778ab/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
index e055c86..fe7c35d 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/optimizer/CarbonDecoderOptimizerHelper.scala
@@ -114,10 +114,10 @@ class CarbonDecoderProcessor {
       decoderNotDecode: util.HashSet[AttributeReferenceWrapper]): Unit = {
     scalaList.reverseMap {
       case Node(cd: CarbonDictionaryTempDecoder) =>
-        cd.notDecodeCarryForward.asScala.foreach(decoderNotDecode.remove)
         decoderNotDecode.asScala.foreach(cd.attrsNotDecode.add)
         decoderNotDecode.asScala.foreach(cd.attrList.remove)
         decoderNotDecode.addAll(cd.attrList)
+        cd.notDecodeCarryForward.asScala.foreach(decoderNotDecode.remove)
       case ArrayCarbonNode(children) =>
         children.foreach { child =>
           val notDecode = new util.HashSet[AttributeReferenceWrapper]


[45/50] [abbrv] carbondata git commit: [CARBONDATA-2127] Documentation for Hive Standard Partition

Posted by ra...@apache.org.
[CARBONDATA-2127] Documentation for Hive Standard Partition

This closes #1926


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a7bcc763
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a7bcc763
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a7bcc763

Branch: refs/heads/branch-1.3
Commit: a7bcc763b5d1dea35f5015dadabb37a051a4f881
Parents: 4a251ba
Author: sgururajshetty <sg...@gmail.com>
Authored: Sat Feb 3 21:04:23 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 21:53:38 2018 +0530

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 104 ++++++++++++++++++++++++++++-
 1 file changed, 103 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a7bcc763/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index d9d4420..3acb711 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -20,12 +20,13 @@
 This tutorial is going to introduce all commands and data operations on CarbonData.
 
 * [CREATE TABLE](#create-table)
-* [CREATE DATABASE] (#create-database)
+* [CREATE DATABASE](#create-database)
 * [TABLE MANAGEMENT](#table-management)
 * [LOAD DATA](#load-data)
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
+* [HIVE STANDARD PARTITION](#hive-standard-partition)
 * [PRE-AGGREGATE TABLES](#agg-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
@@ -765,6 +766,107 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition column.
 
+## HIVE STANDARD PARTITION
+
+  Carbon supports a partition feature that is custom implemented by Carbon, but due to compatibility issues it does not allow you to use the Hive partition feature directly. By using Hive standard partitioning, you can use the partition feature available in Hive.
+
+### Create Partition Table
+
+  This command allows you to create table with partition.
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+    [(col_name data_type , ...)]
+    [COMMENT table_comment]
+    [PARTITIONED BY (col_name data_type , ...)]
+    [STORED BY file_format]
+    [TBLPROPERTIES (property_name=property_value, ...)]
+    [AS select_statement];
+  ```
+  
+  Example:
+  ```
+   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                saleQuantity Int,
+                                revenue Int)
+  PARTITIONED BY (productCategory String, productBatch String)
+  STORED BY 'carbondata'
+  ```
+		
+### Load Data Using Static Partition
+
+  This command allows you to load data using static partition.
+  
+  ```
+  LOAD DATA [LOCAL] INPATH 'folder_path' 
+    INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
+    OPTIONS(property_name=property_value, ...)
+  INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) SELECT STATEMENT
+  ```
+  
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+    INTO TABLE locationTable
+    PARTITION (country = 'US', state = 'CA')
+    
+  INSERT INTO TABLE locationTable
+    PARTITION (country = 'US', state = 'AL')
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Load Data Using Dynamic Partition
+
+  This command allows you to load data using dynamic partition. If partition spec is not specified, then the partition is considered as dynamic.
+
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+    INTO TABLE locationTable
+          
+  INSERT INTO TABLE locationTable
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Show Partitions
+
+  This command gets the Hive partition information of the table
+
+  ```
+  SHOW PARTITIONS [db_name.]table_name
+  ```
+
+### Drop Partition
+
+  This command drops the specified Hive partition only.
+  ```
+  ALTER TABLE table_name DROP [IF EXISTS] (PARTITION part_spec, ...)
+  ```
+
+### Insert OVERWRITE
+  
+  This command allows you to overwrite a specific partition using insert or load.
+  
+  ```
+   INSERT OVERWRITE TABLE table_name
+    PARTITION (column = 'partition_name')
+    select_statement
+  ```
+  
+  Example:
+  ```
+  INSERT OVERWRITE TABLE partitioned_user
+    PARTITION (country = 'US')
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US';
+  ```
+
 ## PRE-AGGREGATE TABLES
   Carbondata supports pre aggregating of data so that OLAP kind of queries can fetch data 
   much faster.Aggregate tables are created as datamaps so that the handling is as efficient as 


[47/50] [abbrv] carbondata git commit: [CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case of table having more pages

Posted by ra...@apache.org.
[CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case of table having more pages

Problem: A like% filter throws an ArrayIndexOutOfBoundsException when the table has more pages.
Solution: In RowLevelFilter, the number of rows should be filled based on the rows in each page.
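
For context, a hedged sketch of the triggering scenario (mirroring the test added below; sql() is the test helper): the filter runs over a column added after load, whose default values are filled page by page during the restructure path.

    // Add a column with a default value after loading, then filter on it with
    // like%; before this fix the single-element numberOfRows array was indexed
    // once per page, overflowing when a block had more than one page.
    sql("ALTER TABLE like_filter ADD COLUMNS(filter STRING) " +
        "TBLPROPERTIES('DEFAULT.VALUE.FILTER'='altered column')")
    sql("select count(*) from like_filter where filter like '%column'").show()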

This closes #1909


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/50e2f2c8
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/50e2f2c8
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/50e2f2c8

Branch: refs/heads/branch-1.3
Commit: 50e2f2c8f2cc6ee4b72839b704a038666ae629ba
Parents: 54b7db5
Author: dhatchayani <dh...@gmail.com>
Authored: Fri Feb 2 10:55:19 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 22:55:36 2018 +0530

----------------------------------------------------------------------
 .../executer/RowLevelFilterExecuterImpl.java    | 10 ++++++--
 .../filter/executer/TrueFilterExecutor.java     |  2 +-
 .../filterexpr/FilterProcessorTestCase.scala    | 25 ++++++++++++++++++++
 3 files changed, 34 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
index 224a69f..89489a2 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
@@ -205,7 +205,10 @@ public class RowLevelFilterExecuterImpl implements FilterExecuter {
       } else {
         // specific for restructure case where default values need to be filled
         pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-        numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() };
+        numberOfRows = new int[pageNumbers];
+        for (int i = 0; i < pageNumbers; i++) {
+          numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+        }
       }
     }
     if (msrColEvalutorInfoList.size() > 0) {
@@ -217,7 +220,10 @@ public class RowLevelFilterExecuterImpl implements FilterExecuter {
       } else {
         // specific for restructure case where default values need to be filled
         pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-        numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() };
+        numberOfRows = new int[pageNumbers];
+        for (int i = 0; i < pageNumbers; i++) {
+          numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+        }
       }
     }
     BitSetGroup bitSetGroup = new BitSetGroup(pageNumbers);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
index 92396ae..4b3738a 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
@@ -39,7 +39,7 @@ public class TrueFilterExecutor implements FilterExecuter {
     BitSetGroup group = new BitSetGroup(numberOfPages);
     for (int i = 0; i < numberOfPages; i++) {
       BitSet set = new BitSet();
-      set.flip(0, blockChunkHolder.getDataBlock().nodeSize());
+      set.flip(0, blockChunkHolder.getDataBlock().getPageRowCount(i));
       group.setBitSet(set, i);
     }
     return group;

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
index b92b379..d54906f 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
@@ -21,22 +21,30 @@ import java.sql.Timestamp
 
 import org.apache.spark.sql.Row
 import org.scalatest.BeforeAndAfterAll
+
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.test.util.QueryTest
 
+import org.apache.carbondata.spark.testsuite.datacompaction.CompactionSupportGlobalSortBigFileTest
+
 /**
   * Test Class for filter expression query on String datatypes
   *
   */
 class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
 
+  val file1 = resourcesPath + "/filter/file1.csv"
+
   override def beforeAll {
     sql("drop table if exists filtertestTables")
     sql("drop table if exists filtertestTablesWithDecimal")
     sql("drop table if exists filtertestTablesWithNull")
     sql("drop table if exists filterTimestampDataType")
     sql("drop table if exists noloadtable")
+    sql("drop table if exists like_filter")
+
+    CompactionSupportGlobalSortBigFileTest.createFile(file1, 500000, 0)
 
     sql("CREATE TABLE filtertestTables (ID int, date Timestamp, country String, " +
       "name String, phonetype String, serialname String, salary int) " +
@@ -279,6 +287,21 @@ class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table if exists outofrange")
   }
 
+  test("like% test case with restructure") {
+    sql("drop table if exists like_filter")
+    sql(
+      """
+        | CREATE TABLE like_filter(id INT, name STRING, city STRING, age INT)
+        | STORED BY 'org.apache.carbondata.format'
+        | TBLPROPERTIES('SORT_COLUMNS'='city,name', 'SORT_SCOPE'='GLOBAL_SORT')
+      """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$file1' INTO TABLE like_filter OPTIONS('header'='false')")
+    sql(
+      "ALTER TABLE like_filter ADD COLUMNS(filter STRING) TBLPROPERTIES ('DEFAULT.VALUE" +
+      ".FILTER'='altered column')")
+    checkAnswer(sql("select count(*) from like_filter where filter like '%column'"), Row(500000))
+  }
+
 
 
 
@@ -294,6 +317,8 @@ class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
     sql("DROP TABLE IF EXISTS big_int_basicc_Hive_1")
     sql("DROP TABLE IF EXISTS filtertestTablesWithNull")
     sql("DROP TABLE IF EXISTS filtertestTablesWithNullJoin")
+    sql("drop table if exists like_filter")
+    CompactionSupportGlobalSortBigFileTest.deleteFile(file1)
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "dd-MM-yyyy")
   }


[08/50] [abbrv] carbondata git commit: [CARBONDATA-1616] Add CarbonData Streaming Ingestion Guide

Posted by ra...@apache.org.
[CARBONDATA-1616] Add CarbonData Streaming Ingestion Guide

Add CarbonData Streaming Ingestion Guide

This closes #1880


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/cdff1932
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/cdff1932
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/cdff1932

Branch: refs/heads/branch-1.3
Commit: cdff193255418e56ab4a98c441eb6b809142c9a2
Parents: c8a3eb5
Author: QiangCai <qi...@qq.com>
Authored: Thu Jan 4 11:52:07 2018 +0800
Committer: chenliang613 <ch...@huawei.com>
Committed: Thu Feb 1 10:59:36 2018 +0800

----------------------------------------------------------------------
 README.md               |   1 +
 docs/streaming-guide.md | 169 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 170 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/cdff1932/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 3b6792e..952392b 100644
--- a/README.md
+++ b/README.md
@@ -39,6 +39,7 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
 * [Data Management on CarbonData](https://github.com/apache/carbondata/blob/master/docs/data-management-on-carbondata.md)
 * [Cluster Installation and Deployment](https://github.com/apache/carbondata/blob/master/docs/installation-guide.md)
 * [Configuring Carbondata](https://github.com/apache/carbondata/blob/master/docs/configuration-parameters.md)
+* [Streaming Ingestion](https://github.com/apache/carbondata/blob/master/docs/streaming-guide.md)
 * [FAQ](https://github.com/apache/carbondata/blob/master/docs/faq.md)
 * [Trouble Shooting](https://github.com/apache/carbondata/blob/master/docs/troubleshooting.md)
 * [Useful Tips](https://github.com/apache/carbondata/blob/master/docs/useful-tips-on-carbondata.md)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/cdff1932/docs/streaming-guide.md
----------------------------------------------------------------------
diff --git a/docs/streaming-guide.md b/docs/streaming-guide.md
new file mode 100644
index 0000000..201f8e0
--- /dev/null
+++ b/docs/streaming-guide.md
@@ -0,0 +1,169 @@
+# CarbonData Streaming Ingestion
+
+## Quick example
+Download and unzip spark-2.2.0-bin-hadoop2.7.tgz, and export $SPARK_HOME
+
+Package carbon jar, and copy assembly/target/scala-2.11/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar to $SPARK_HOME/jars
+```shell
+mvn clean package -DskipTests -Pspark-2.2
+```
+
+Start a socket data server in a terminal
+```shell
+ nc -lk 9099
+```
+ type some CSV rows as follows
+```csv
+1,col1
+2,col2
+3,col3
+4,col4
+5,col5
+```
+
+Start spark-shell in new terminal, type :paste, then copy and run the following code.
+```scala
+ import java.io.File
+ import org.apache.spark.sql.{CarbonEnv, SparkSession}
+ import org.apache.spark.sql.CarbonSession._
+ import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
+ import org.apache.carbondata.core.util.path.CarbonStorePath
+ 
+ val warehouse = new File("./warehouse").getCanonicalPath
+ val metastore = new File("./metastore").getCanonicalPath
+ 
+ val spark = SparkSession
+   .builder()
+   .master("local")
+   .appName("StreamExample")
+   .config("spark.sql.warehouse.dir", warehouse)
+   .getOrCreateCarbonSession(warehouse, metastore)
+
+ spark.sparkContext.setLogLevel("ERROR")
+
+ // drop table if exists previously
+ spark.sql(s"DROP TABLE IF EXISTS carbon_table")
+ // Create target carbon table and populate with initial data
+ spark.sql(
+   s"""
+      | CREATE TABLE carbon_table (
+      | col1 INT,
+      | col2 STRING
+      | )
+      | STORED BY 'carbondata'
+      | TBLPROPERTIES('streaming'='true')""".stripMargin)
+
+ val carbonTable = CarbonEnv.getCarbonTable(Some("default"), "carbon_table")(spark)
+ val tablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+ 
+ // batch load
+ var qry: StreamingQuery = null
+ val readSocketDF = spark.readStream
+   .format("socket")
+   .option("host", "localhost")
+   .option("port", 9099)
+   .load()
+
+ // Write data from socket stream to carbondata file
+ qry = readSocketDF.writeStream
+   .format("carbondata")
+   .trigger(ProcessingTime("5 seconds"))
+   .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
+   .option("dbName", "default")
+   .option("tableName", "carbon_table")
+   .start()
+
+ // start new thread to show data
+ new Thread() {
+   override def run(): Unit = {
+     do {
+       spark.sql("select * from carbon_table").show(false)
+       Thread.sleep(10000)
+     } while (true)
+   }
+ }.start()
+
+ qry.awaitTermination()
+```
+
+Continue to type some rows into data server, and spark-shell will show the new data of the table.
+
+## Create table with streaming property
+A streaming table is just a normal carbon table with the "streaming" table property; users can
+create a streaming table using the following DDL.
+```sql
+ CREATE TABLE streaming_table (
+  col1 INT,
+  col2 STRING
+ )
+ STORED BY 'carbondata'
+ TBLPROPERTIES('streaming'='true')
+```
+
+ property name | default | description
+ ---|---|--- 
+ streaming | false |Whether to enable streaming ingest feature for this table <br /> Value range: true, false 
+ 
+ "DESC FORMATTED" command will show streaming property.
+ ```sql
+ DESC FORMATTED streaming_table
+ ```
+ 
+## Alter streaming property
+For an old table, use ALTER TABLE command to set the streaming property.
+```sql
+ALTER TABLE streaming_table SET TBLPROPERTIES('streaming'='true')
+```
+
+## Acquire streaming lock
+At the beginning of streaming ingestion, the system will try to acquire the table-level lock on the streaming.lock file. If the system isn't able to acquire the lock of this table, it will throw an InterruptedException.
+
+## Create streaming segment
+The streaming input data will be ingested into a segment of the CarbonData table whose status is "streaming"; CarbonData calls it a streaming segment. The "tablestatus" file records the segment status and data size. The user can use "SHOW SEGMENTS FOR TABLE tableName" to check the segment status.
+
+After the streaming segment reaches the max size, CarbonData will change the segment status to "streaming finish" from "streaming", and create new "streaming" segment to continue to ingest streaming data.
+
+option | default | description
+--- | --- | ---
+carbon.streaming.segment.max.size | 1024000000 | Unit: byte <br />max size of streaming segment
+
+segment status | description
+--- | ---
+streaming | The segment is running streaming ingestion
+streaming finish | The segment already finished streaming ingestion, <br /> it will be handed off to a segment in the columnar format
+
+## Change segment status
+Use the below command to change the status of a "streaming" segment to "streaming finish".
+```sql
+ALTER TABLE streaming_table FINISH STREAMING
+```
+
+## Handoff "streaming finish" segment to columnar segment
+Use the below command to hand off a "streaming finish" segment to a columnar format segment manually.
+```sql
+ALTER TABLE streaming_table COMPACT 'streaming'
+
+```
+
+## Auto handoff streaming segment
+Configure the property "carbon.streaming.auto.handoff.enabled" to automatically hand off streaming segments. If the value of this property is true, after the streaming segment reaches the max size, CarbonData will change this segment to "streaming finish" status and trigger an automatic handoff of this segment to a columnar format segment in a new thread.
+
+property name | default | description
+--- | --- | ---
+carbon.streaming.auto.handoff.enabled | true | whether to auto trigger handoff operation
+
+## Close streaming table
+Use the below command to hand off all streaming segments to columnar format segments and modify the streaming property to false; the table then becomes a normal table.
+```sql
+ALTER TABLE streaming_table COMPACT 'close_streaming'
+
+```
+
+## Constraint
+1. Setting the streaming property from true to false is rejected.
+2. UPDATE/DELETE commands on the streaming table are rejected.
+3. Creating a pre-aggregation DataMap on the streaming table is rejected.
+4. Adding the streaming property on a table with a pre-aggregation DataMap is rejected.
+5. If the table has dictionary columns, it does not support concurrent data loading.
+6. Deleting a "streaming" segment is blocked while streaming ingestion is running.
+7. Dropping the streaming table is blocked while streaming ingestion is running.


[36/50] [abbrv] carbondata git commit: [CARBONDATA-2112] Fixed bug for select operation on datamap with avg and a column name

Posted by ra...@apache.org.
[CARBONDATA-2112] Fixed bug for select operation on datamap with avg and a column name

Problem: When applying a select operation (having a column name and an aggregate function) on a
table having a datamap, the data was wrong because the group by expression
and the aggregate expression were created incorrectly.

Solution: While creating the aggregate and group by expressions, the child column resolved from
the parent column name was wrong, so a new check was added
there to get the correct child column.
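
For context, a hedged sketch of the affected query shape (mirroring the test added below; sql() is the test helper and the datamap name is illustrative):

    // A group-by column selected alongside avg() on a table with a
    // preaggregate datamap; the rewritten plan must resolve the correct child
    // column for both the group by and the aggregate expressions.
    sql("create datamap agg_age on table maintable using 'preaggregate' " +
        "as select avg(age) from maintable group by age")
    sql("select age, avg(age) from maintable group by age").show()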

This closes #1910


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/91911af2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/91911af2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/91911af2

Branch: refs/heads/branch-1.3
Commit: 91911af231583b9e2b210dd685770836b358bcd0
Parents: 44e70d0
Author: Geetika Gupta <ge...@knoldus.in>
Authored: Fri Feb 2 14:02:38 2018 +0530
Committer: kunal642 <ku...@gmail.com>
Committed: Sat Feb 3 16:25:30 2018 +0530

----------------------------------------------------------------------
 .../metadata/schema/table/AggregationDataMapSchema.java  |  3 ++-
 .../testsuite/preaggregate/TestPreAggregateLoad.scala    | 11 +++++++++++
 2 files changed, 13 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/91911af2/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
index e061812..2a16e1f 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
@@ -151,7 +151,8 @@ public class AggregationDataMapSchema extends DataMapSchema {
       List<ParentColumnTableRelation> parentColumnTableRelations =
           columnSchema.getParentColumnTableRelations();
       if (null != parentColumnTableRelations && parentColumnTableRelations.size() == 1
-          && parentColumnTableRelations.get(0).getColumnName().equals(columName)) {
+          && parentColumnTableRelations.get(0).getColumnName().equals(columName) &&
+          columnSchema.getColumnName().endsWith(columName)) {
         return columnSchema;
       }
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/91911af2/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
index b6b7a17..da1ffb5 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
@@ -405,4 +405,15 @@ test("check load and select for avg double datatype") {
     sql("drop table if exists maintable")
   }
 
+  test("check load and select for avg int datatype and group by") {
+    sql("drop table if exists maintable ")
+    sql("CREATE TABLE maintable(id int, city string, age int) stored by 'carbondata'")
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    val rows = sql("select age,avg(age) from maintable group by age").collect()
+    sql("create datamap maintbl_douoble on table maintable using 'preaggregate' as select avg(age) from maintable group by age")
+    checkAnswer(sql("select age,avg(age) from maintable group by age"), rows)
+  }
+
 }


[29/50] [abbrv] carbondata git commit: [CARBONDATA-2098]Add Documentation for Pre-Aggregate tables

Posted by ra...@apache.org.
[CARBONDATA-2098]Add Documentation for Pre-Aggregate tables

Add Documentation for Pre-Aggregate tables

This closes #1886


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/71f8828b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/71f8828b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/71f8828b

Branch: refs/heads/branch-1.3
Commit: 71f8828be56ae9f3927a5fc4a5047794a740c6d1
Parents: da129d5
Author: Raghunandan S <ca...@gmail.com>
Authored: Mon Jan 29 08:54:49 2018 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Sat Feb 3 15:45:30 2018 +0800

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md           | 245 +++++++++++++++++++
 .../examples/PreAggregateTableExample.scala     | 145 +++++++++++
 .../TimeSeriesPreAggregateTableExample.scala    | 103 ++++++++
 3 files changed, 493 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/71f8828b/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 3119935..0b35ed9 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -25,6 +25,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
+* [PRE-AGGREGATE TABLES](#agg-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
 
@@ -748,6 +749,250 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition column.
 
+## PRE-AGGREGATE TABLES
+  Carbondata supports pre-aggregating data so that OLAP kind of queries can fetch data 
+  much faster. Aggregate tables are created as datamaps so that the handling is as efficient as 
+  other indexing support. Users can create as many aggregate tables as they require as datamaps to 
+  improve their query performance, provided the storage requirements and loading speeds are 
+  acceptable.
+  
+  For main table called **sales** which is defined as 
+  
+  ```
+  CREATE TABLE sales (
+  order_time timestamp,
+  user_id string,
+  sex string,
+  country string,
+  quantity int,
+  price bigint)
+  STORED BY 'carbondata'
+  ```
+  
+  users can create pre-aggregate tables using the DDL
+  
+  ```
+  CREATE DATAMAP agg_sales
+  ON TABLE sales
+  USING "preaggregate"
+  AS
+  SELECT country, sex, sum(quantity), avg(price)
+  FROM sales
+  GROUP BY country, sex
+  ```
+  
+<b><p align="left">Functions supported in pre-aggregate tables</p></b>
+
+| Function | Rollup supported |
+|-----------|----------------|
+| SUM | Yes |
+| AVG | Yes |
+| MAX | Yes |
+| MIN | Yes |
+| COUNT | Yes |
+
+
+##### How pre-aggregate tables are selected
+For the main table **sales** and pre-aggregate table **agg_sales** created above, queries of the 
+kind
+```
+SELECT country, sex, sum(quantity), avg(price) from sales GROUP BY country, sex
+
+SELECT sex, sum(quantity) from sales GROUP BY sex
+
+SELECT sum(price), country from sales GROUP BY country
+``` 
+
+will be transformed by Query Planner to fetch data from pre-aggregate table **agg_sales**
+
+But queries of kind
+```
+SELECT user_id, country, sex, sum(quantity), avg(price) from sales GROUP BY country, sex
+
+SELECT sex, avg(quantity) from sales GROUP BY sex
+
+SELECT max(price), country from sales GROUP BY country
+```
+
+will fetch the data from the main table **sales**
+
+##### Loading data to pre-aggregate tables
+For an existing table with loaded data, data load to the pre-aggregate table will be triggered by 
+the CREATE DATAMAP statement when the user creates the pre-aggregate table.
+For incremental loads after aggregate tables are created, loading data to the main table triggers 
+the load to the pre-aggregate tables once the main table load is complete. These loads are atomic, 
+meaning that data in the main table and aggregate tables is only visible to the user after all 
+tables are loaded (see the example below).
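+
+For example, once the **agg_sales** datamap above exists, a regular load on the main table also 
+populates it automatically (the CSV path below is illustrative):
+```
+LOAD DATA LOCAL INPATH '/path/to/sales_data.csv' INTO TABLE sales
+```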
+
+##### Querying data from pre-aggregate tables
+Pre-aggregate tables cannot be queried directly. Queries are to be made on the main table. 
+Internally, CarbonData checks the pre-aggregate tables associated with the main table, and if a 
+pre-aggregate table satisfies the query condition, the plan is transformed automatically to use 
+that pre-aggregate table to fetch the data.
+
+##### Compacting pre-aggregate tables
+Compaction is an optional operation for pre-aggregate tables. If compaction is performed on the 
+main table but not on the pre-aggregate table, all queries can still benefit from the 
+pre-aggregate table. To further improve performance of the pre-aggregate table, compaction can be 
+triggered on pre-aggregate tables directly; it will merge the segments inside the pre-aggregate 
+table. To do that, use the ALTER TABLE COMPACT command on the pre-aggregate table just like on 
+the main table (see the example after the note below).
+
+  NOTE:
+  * If the aggregate functions used in the pre-aggregate table creation included distinct-count,
+     the pre-aggregate table values are recomputed during compaction. This is a costlier 
+     operation compared to the compaction of pre-aggregate tables containing only other aggregate 
+     functions.
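+
+For example, assuming the table backing the **agg_sales** datamap is named **sales_agg_sales** 
+(following the parenttable_datamapname naming convention), minor compaction can be triggered on 
+it directly:
+```
+ALTER TABLE sales_agg_sales COMPACT 'MINOR'
+```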
+ 
+##### Update/Delete Operations on pre-aggregate tables
+This functionality is not supported.
+
+  NOTE (<b>RESTRICTION</b>):
+  * Update/Delete operations are <b>not supported</b> on a main table which has pre-aggregate 
+  tables created on it. All the pre-aggregate tables <b>will have to be dropped</b> before 
+  update/delete operations can be performed on the main table. Pre-aggregate tables can be 
+  rebuilt manually after the update/delete operations are completed (see the sketch below).
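+
+For example, the **agg_sales** datamap would have to be dropped before updating **sales** and can 
+be recreated afterwards; the UPDATE statement below is only illustrative:
+```
+DROP DATAMAP IF EXISTS agg_sales ON TABLE sales
+
+UPDATE sales SET (country) = ('India') WHERE user_id = 'u1'
+
+CREATE DATAMAP agg_sales
+ON TABLE sales
+USING "preaggregate"
+AS
+SELECT country, sex, sum(quantity), avg(price)
+FROM sales
+GROUP BY country, sex
+```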
+ 
+##### Delete Segment Operations on pre-aggregate tables
+This functionality is not supported.
+
+  NOTE (<b>RESTRICTION</b>):
+  * Delete Segment operations are <b>not supported</b> on a main table which has pre-aggregate 
+  tables created on it. All the pre-aggregate tables <b>will have to be dropped</b> before delete 
+  segment operations can be performed on the main table. Pre-aggregate tables can be rebuilt 
+  manually after the delete segment operations are completed.
+  
+##### Alter Table Operations on pre-aggregate tables
+This functionality is not supported.
+
+  NOTE (<b>RESTRICTION</b>):
+  * Adding a new column to the main table does not have any effect on pre-aggregate tables. 
+  However, if dropping or renaming a column has an impact on a pre-aggregate table, such 
+  operations will be rejected and an error will be thrown. All the pre-aggregate tables 
+  <b>will have to be dropped</b> before Alter Table operations can be performed on the main 
+  table. Pre-aggregate tables can be rebuilt manually after the Alter Table operations are 
+  completed.
+  
+### Supporting timeseries data
+CarbonData has a built-in understanding of the time hierarchy and its levels: year, month, day, hour, minute.
+Multiple pre-aggregate tables can be created for the hierarchy, and CarbonData can do automatic 
+roll-up for the queries on these hierarchies.
+
+  ```
+  CREATE DATAMAP agg_year
+  ON TABLE sales
+  USING "timeseries"
+  DMPROPERTIES (
+  'event_time'='order_time',
+  'year_granularity'='1'
+  ) AS
+  SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+   avg(price) FROM sales GROUP BY order_time, country, sex
+    
+  CREATE DATAMAP agg_month
+  ON TABLE sales
+  USING "timeseries"
+  DMPROPERTIES (
+  'event_time'='order_time',
+  'month_granularity'='1'
+  ) AS
+  SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+   avg(price) FROM sales GROUP BY order_time, country, sex
+    
+  CREATE DATAMAP agg_day
+  ON TABLE sales
+  USING "timeseries"
+  DMPROPERTIES (
+  'event_time'='order_time',
+  'day_granularity'='1'
+  ) AS
+  SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+   avg(price) FROM sales GROUP BY order_time, country, sex
+        
+  CREATE DATAMAP agg_sales_hour
+  ON TABLE sales
+  USING "timeseries"
+  DMPROPERTIES (
+  'event_time'='order_time',
+  'hour_granularity'='1'
+  ) AS
+  SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+   avg(price) FROM sales GROUP BY order_time, country, sex
+  
+  CREATE DATAMAP agg_minute
+  ON TABLE sales
+  USING "timeseries"
+  DMPROPERTIES (
+  'event_time'='order_time',
+  'minute_granularity'='1'
+  ) AS
+  SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+   avg(price) FROM sales GROUP BY order_time, country, sex
+  ```
+  
+  For querying data and automatically rolling up to the desired aggregation level, CarbonData 
+  supports the UDF
+  ```
+  timeseries(timeseries column name, 'aggregation level')
+  ```
+  ```
+  Select timeseries(order_time, 'hour'), sum(quantity) from sales group by timeseries(order_time,
+  'hour')
+  ```
+  
+  It is **not necessary** to create pre-aggregate tables for each granularity unless required by 
+  the queries. CarbonData can roll-up the data and fetch it.
+   
+  For example, for the main table **sales**, if pre-aggregate tables were created as  
+  
+  ```
+  CREATE DATAMAP agg_day
+    ON TABLE sales
+    USING "timeseries"
+    DMPROPERTIES (
+    'event_time'='order_time',
+    'day_granularity'='1'
+    ) AS
+    SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+     avg(price) FROM sales GROUP BY order_time, country, sex
+          
+    CREATE DATAMAP agg_sales_hour
+    ON TABLE sales
+    USING "timeseries"
+    DMPROPERTIES (
+    'event_time'='order_time',
+    'hour_granularity'='1'
+    ) AS
+    SELECT order_time, country, sex, sum(quantity), max(quantity), count(user_id), sum(price),
+     avg(price) FROM sales GROUP BY order_time, country, sex
+  ```
+  
+  Queries like the ones below will be rolled up and fetched from the pre-aggregate tables
+  ```
+  Select timeseries(order_time, 'month'), sum(quantity) from sales group by timeseries(order_time,
+    'month')
+    
+  Select timeseries(order_time, 'year'), sum(quantity) from sales group by timeseries(order_time,
+    'year')
+  ```
+  
+  NOTE (<b>RESTRICTION</b>):
+  * Only a value of 1 is supported for the hierarchy levels. Other granularity values are not 
+  supported.
+  * Pre-aggregate tables for the desired levels need to be created one after the other.
+  * Pre-aggregate tables created for each level need to be dropped separately (see the example 
+  below).
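+  
+  For example, assuming the **agg_day** and **agg_sales_hour** datamaps from the example above, 
+  each one has to be dropped with its own statement:
+  ```
+  DROP DATAMAP IF EXISTS agg_day ON TABLE sales
+  
+  DROP DATAMAP IF EXISTS agg_sales_hour ON TABLE sales
+  ```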
+    
+
 ## BUCKETING
 
   Bucketing feature can be used to distribute/organize the table/partition data into multiple files such

http://git-wip-us.apache.org/repos/asf/carbondata/blob/71f8828b/examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateTableExample.scala
----------------------------------------------------------------------
diff --git a/examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateTableExample.scala b/examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateTableExample.scala
new file mode 100644
index 0000000..fe3a93d
--- /dev/null
+++ b/examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateTableExample.scala
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.spark.sql.SaveMode
+
+/**
+ * This example is for pre-aggregate tables.
+ */
+
+object PreAggregateTableExample {
+
+  def main(args: Array[String]) {
+
+    val rootPath = new File(this.getClass.getResource("/").getPath
+                            + "../../../..").getCanonicalPath
+    val testData = s"$rootPath/integration/spark-common-test/src/test/resources/sample.csv"
+    val spark = ExampleUtils.createCarbonSession("PreAggregateTableExample")
+
+    spark.sparkContext.setLogLevel("ERROR")
+
+    // 1. simple usage for Pre-aggregate tables creation and query
+    spark.sql("DROP TABLE IF EXISTS mainTable")
+    spark.sql("""
+                | CREATE TABLE mainTable
+                | (id Int,
+                | name String,
+                | city String,
+                | age Int)
+                | STORED BY 'org.apache.carbondata.format'
+              """.stripMargin)
+
+    spark.sql(s"""
+       LOAD DATA LOCAL INPATH '$testData' into table mainTable
+       """)
+
+    spark.sql(
+      s"""create datamap preagg_sum on table mainTable using 'preaggregate' as
+         | select id,sum(age) from mainTable group by id"""
+        .stripMargin)
+    spark.sql(
+      s"""create datamap preagg_avg on table mainTable using 'preaggregate' as
+         | select id,avg(age) from mainTable group by id"""
+        .stripMargin)
+    spark.sql(
+      s"""create datamap preagg_count on table mainTable using 'preaggregate' as
+         | select id,count(age) from mainTable group by id"""
+        .stripMargin)
+    spark.sql(
+      s"""create datamap preagg_min on table mainTable using 'preaggregate' as
+         | select id,min(age) from mainTable group by id"""
+        .stripMargin)
+    spark.sql(
+      s"""create datamap preagg_max on table mainTable using 'preaggregate' as
+         | select id,max(age) from mainTable group by id"""
+        .stripMargin)
+
+    spark.sql(
+      s"""
+         | SELECT id,max(age)
+         | FROM mainTable group by id
+      """.stripMargin).show()
+
+    // 2.compare the performance : with pre-aggregate VS main table
+
+    // build test data; if the data set is larger than 100M, it will take 10+ minutes.
+    import spark.implicits._
+
+    import scala.util.Random
+    val r = new Random()
+    val df = spark.sparkContext.parallelize(1 to 10 * 1000 * 1000)
+      .map(x => ("No." + r.nextInt(100000), "name" + x % 8, "city" + x % 50, x % 60))
+      .toDF("ID", "name", "city", "age")
+
+    // Create table with pre-aggregate table
+    df.write.format("carbondata")
+      .option("tableName", "personTable")
+      .option("compress", "true")
+      .mode(SaveMode.Overwrite).save()
+
+    // Create table without pre-aggregate table
+    df.write.format("carbondata")
+      .option("tableName", "personTableWithoutAgg")
+      .option("compress", "true")
+      .mode(SaveMode.Overwrite).save()
+
+    // Create pre-aggregate table
+    spark.sql("""
+       CREATE datamap preagg_avg on table personTable using 'preaggregate' as
+       | select id,avg(age) from personTable group by id
+              """.stripMargin)
+
+    // define time function
+    def time(code: => Unit): Double = {
+      val start = System.currentTimeMillis()
+      code
+      // return time in second
+      (System.currentTimeMillis() - start).toDouble / 1000
+    }
+
+    val time_without_aggTable = time {
+      spark.sql(
+        s"""
+           | SELECT id, avg(age)
+           | FROM personTableWithoutAgg group by id
+      """.stripMargin).count()
+    }
+
+    val time_with_aggTable = time {
+      spark.sql(
+        s"""
+           | SELECT id, avg(age)
+           | FROM personTable group by id
+      """.stripMargin).count()
+    }
+    // scalastyle:off
+    println("time for query on table with pre-aggregate table:" + time_with_aggTable.toString)
+    println("time for query on table without pre-aggregate table:" + time_without_aggTable.toString)
+    // scalastyle:on
+
+    spark.sql("DROP TABLE IF EXISTS mainTable")
+    spark.sql("DROP TABLE IF EXISTS personTable")
+    spark.sql("DROP TABLE IF EXISTS personTableWithoutAgg")
+
+    spark.close()
+
+  }
+}

http://git-wip-us.apache.org/repos/asf/carbondata/blob/71f8828b/examples/spark2/src/main/scala/org/apache/carbondata/examples/TimeSeriesPreAggregateTableExample.scala
----------------------------------------------------------------------
diff --git a/examples/spark2/src/main/scala/org/apache/carbondata/examples/TimeSeriesPreAggregateTableExample.scala b/examples/spark2/src/main/scala/org/apache/carbondata/examples/TimeSeriesPreAggregateTableExample.scala
new file mode 100644
index 0000000..470d9ff
--- /dev/null
+++ b/examples/spark2/src/main/scala/org/apache/carbondata/examples/TimeSeriesPreAggregateTableExample.scala
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.spark.sql.SaveMode
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+/**
+ * This example is for time series pre-aggregate tables.
+ */
+
+object TimeSeriesPreAggregateTableExample {
+
+  def main(args: Array[String]) {
+
+    val rootPath = new File(this.getClass.getResource("/").getPath
+                            + "../../../..").getCanonicalPath
+    val testData = s"$rootPath/integration/spark-common-test/src/test/resources/timeseriestest.csv"
+    val spark = ExampleUtils.createCarbonSession("TimeSeriesPreAggregateTableExample")
+
+    spark.sparkContext.setLogLevel("ERROR")
+
+    import spark.implicits._
+
+    import scala.util.Random
+    val r = new Random()
+    val df = spark.sparkContext.parallelize(1 to 10 * 1000 )
+      .map(x => ("" + 20 + "%02d".format(r.nextInt(20)) + "-" + "%02d".format(r.nextInt(11) + 1) +
+        "-" + "%02d".format(r.nextInt(27) + 1) + " " + "%02d".format(r.nextInt(12)) + ":" +
+        "%02d".format(r.nextInt(59)) + ":" + "%02d".format(r.nextInt(59)), "name" + x % 8,
+        r.nextInt(60))).toDF("mytime", "name", "age")
+
+    // 1. usage for time series Pre-aggregate tables creation and query
+    spark.sql("drop table if exists timeSeriesTable")
+    spark.sql("CREATE TABLE timeSeriesTable(mytime timestamp," +
+      " name string, age int) STORED BY 'org.apache.carbondata.format'")
+    spark.sql(
+      s"""
+         | CREATE DATAMAP agg0_hour ON TABLE timeSeriesTable
+         | USING 'timeSeries'
+         | DMPROPERTIES (
+         | 'EVENT_TIME'='mytime',
+         | 'HOUR_GRANULARITY'='1')
+         | AS SELECT mytime, SUM(age) FROM timeSeriesTable
+         | GROUP BY mytime
+       """.stripMargin)
+    spark.sql(
+      s"""
+         | CREATE DATAMAP agg0_day ON TABLE timeSeriesTable
+         | USING 'timeSeries'
+         | DMPROPERTIES (
+         | 'EVENT_TIME'='mytime',
+         | 'DAY_GRANULARITY'='1')
+         | AS SELECT mytime, SUM(age) FROM timeSeriesTable
+         | GROUP BY mytime
+       """.stripMargin)
+
+
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy-MM-dd HH:mm:ss")
+
+    df.write.format("carbondata")
+      .option("tableName", "timeSeriesTable")
+      .option("compress", "true")
+      .mode(SaveMode.Append).save()
+
+    spark.sql(
+      s"""
+         select sum(age), timeseries(mytime,'hour') from timeSeriesTable group by timeseries(mytime,
+         'hour')
+      """.stripMargin).show()
+
+    spark.sql(
+      s"""
+         select avg(age),timeseries(mytime,'year') from timeSeriesTable group by timeseries(mytime,
+         'year')
+      """.stripMargin).show()
+
+    spark.sql("DROP TABLE IF EXISTS timeSeriesTable")
+
+    spark.close()
+
+  }
+}


[33/50] [abbrv] carbondata git commit: [Documentation] Data types for Dictionary exclude & sort column

Posted by ra...@apache.org.
[Documentation] Data types for Dictionary exclude & sort column

Data types for Dictionary exclude & sort column

This closes #1907


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/22f78fab
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/22f78fab
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/22f78fab

Branch: refs/heads/branch-1.3
Commit: 22f78faba1b7314297434cf23d898aa5346dea37
Parents: e527c05
Author: sgururajshetty <sg...@gmail.com>
Authored: Thu Feb 1 20:30:12 2018 +0530
Committer: manishgupta88 <to...@gmail.com>
Committed: Sat Feb 3 14:17:18 2018 +0530

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/22f78fab/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 0b35ed9..66cc048 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -45,13 +45,15 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   
    - **Dictionary Encoding Configuration**
 
-     Dictionary encoding is turned off for all columns by default from 1.3 onwards, you can use this command for including columns to do dictionary encoding.
+     Dictionary encoding is turned off for all columns by default from 1.3 onwards; you can use this command to include or exclude columns for dictionary encoding.
      Suggested use cases : do dictionary encoding for low cardinality columns, it might help to improve data compression ratio and performance.
 
      ```
      TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
-     ```
+	 ```
      
+	 NOTE: DICTIONARY_EXCLUDE supports only int, string, timestamp, long, bigint, and varchar data types.
+	 
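+     For example, to exclude specific columns from dictionary encoding (the column names below are illustrative):
+
+     ```
+     TBLPROPERTIES ('DICTIONARY_EXCLUDE'='column1, column3')
+     ```
+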
    - **Inverted Index Configuration**
 
      By default inverted index is enabled, it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
@@ -64,8 +66,9 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
    - **Sort Columns Configuration**
 
      This property is for users to specify which columns belong to the MDK(Multi-Dimensions-Key) index.
-     * If users don't specify "SORT_COLUMN" property, by default MDK index be built by using all dimension columns except complex datatype column. 
-     * If this property is specified but with empty argument, then the table will be loaded without sort..
+     * If users don't specify the "SORT_COLUMNS" property, by default the MDK index will be built using all dimension columns except complex data type columns. 
+     * If this property is specified but with empty argument, then the table will be loaded without sort.
+	 * This supports only string, date, timestamp, short, int, long, and boolean data types.
      Suggested use cases : Only build MDK index for required columns,it might help to improve the data loading performance.
 
      ```


[16/50] [abbrv] carbondata git commit: [CARBONDATA-2113] Compatibility fix for V2

Posted by ra...@apache.org.
[CARBONDATA-2113] Compatibility fix for V2

Fixes related to backward compatibility:

Count() issue:
When count() is run on an old store where the file format version is V2, it was unable to get the files: while forming the file path when creating the table block info object from the index file info, the path was formed with a double slash (//), because in V2 a local path was stored, so looking up this path by key failed. Now the correct file path is formed.

Select * issue:
When select * is run, only the data chunk was considered as the whole data, and uncompressing the measure data failed. Now the proper data and data chunk are read before uncompressing.

Read Metadata file:
When readMetadataFile is called, the schema read was explicitly returning null, so the columns were null and a NullPointerException was thrown. Now, instead of returning null, the proper schema is returned by reading the footer.

Version compatibility:
Calculating the number of pages to be filled based on the row count was not handled for the V2 version; this is now handled, since the number of rows per page differs between V2 (120000) and V3 (32000).

This closes #1901


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1202e209
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1202e209
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1202e209

Branch: refs/heads/branch-1.3
Commit: 1202e209eccda172f9671fa0cc2deeebeb4af456
Parents: 1248bd4
Author: akashrn5 <ak...@gmail.com>
Authored: Thu Feb 1 12:42:49 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Thu Feb 1 22:16:39 2018 +0530

----------------------------------------------------------------------
 .../core/constants/CarbonVersionConstants.java       |  5 +++++
 .../v2/CompressedMeasureChunkFileBasedReaderV2.java  | 15 ++++++++++-----
 .../blockletindex/BlockletDataRefNodeWrapper.java    | 13 +++++++++++--
 .../carbondata/core/mutate/CarbonUpdateUtil.java     | 10 ++++------
 .../core/util/AbstractDataFileFooterConverter.java   |  3 ++-
 .../core/util/DataFileFooterConverter2.java          |  2 +-
 6 files changed, 33 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
index 2d58b0b..22fbaf2 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
@@ -49,6 +49,11 @@ public final class CarbonVersionConstants {
    */
   public static final String CARBONDATA_BUILD_DATE;
 
+  /**
+   * number of rows per blocklet column page default value for V2 version
+   */
+  public static final int NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT_V2 = 120000;
+
   static {
     // create input stream for CARBONDATA_VERSION_INFO_FILE
     InputStream resourceStream = Thread.currentThread().getContextClassLoader()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java b/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
index 2ddc202..d61f98a 100644
--- a/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
+++ b/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
@@ -52,7 +52,14 @@ public class CompressedMeasureChunkFileBasedReaderV2 extends AbstractMeasureChun
       throws IOException {
     int dataLength = 0;
     if (measureColumnChunkOffsets.size() - 1 == columnIndex) {
-      dataLength = measureColumnChunkLength.get(columnIndex);
+      DataChunk2 metadataChunk = null;
+      synchronized (fileReader) {
+        metadataChunk = CarbonUtil.readDataChunk(ByteBuffer.wrap(fileReader
+                .readByteArray(filePath, measureColumnChunkOffsets.get(columnIndex),
+                    measureColumnChunkLength.get(columnIndex))), 0,
+            measureColumnChunkLength.get(columnIndex));
+      }
+      dataLength = measureColumnChunkLength.get(columnIndex) + metadataChunk.data_page_length;
     } else {
       long currentMeasureOffset = measureColumnChunkOffsets.get(columnIndex);
       dataLength = (int) (measureColumnChunkOffsets.get(columnIndex + 1) - currentMeasureOffset);
@@ -115,9 +122,7 @@ public class CompressedMeasureChunkFileBasedReaderV2 extends AbstractMeasureChun
     ByteBuffer rawData = measureRawColumnChunk.getRawData();
     DataChunk2 measureColumnChunk = CarbonUtil.readDataChunk(rawData, copyPoint,
         measureColumnChunkLength.get(blockIndex));
-    if (measureColumnChunkOffsets.size() - 1 != blockIndex) {
-      copyPoint += measureColumnChunkLength.get(blockIndex);
-    }
+    copyPoint += measureColumnChunkLength.get(blockIndex);
 
     ColumnPage page = decodeMeasure(measureRawColumnChunk, measureColumnChunk, copyPoint);
     page.setNullBits(getNullBitSet(measureColumnChunk.presence));
@@ -130,7 +135,7 @@ public class CompressedMeasureChunkFileBasedReaderV2 extends AbstractMeasureChun
     List<ByteBuffer> encoder_meta = measureColumnChunk.getEncoder_meta();
     byte[] encodedMeta = encoder_meta.get(0).array();
 
-    ValueEncoderMeta meta = CarbonUtil.deserializeEncoderMetaV3(encodedMeta);
+    ValueEncoderMeta meta = CarbonUtil.deserializeEncoderMetaV2(encodedMeta);
     ColumnPageDecoder codec = encodingFactory.createDecoderLegacy(meta);
     byte[] rawData = measureRawColumnChunk.getRawData().array();
     return codec.decode(rawData, copyPoint, measureColumnChunk.data_page_length);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
index b672c58..4e49ede 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
@@ -21,6 +21,7 @@ import java.util.List;
 
 import org.apache.carbondata.core.cache.update.BlockletLevelDeleteDeltaDataCache;
 import org.apache.carbondata.core.constants.CarbonV3DataFormatConstants;
+import org.apache.carbondata.core.constants.CarbonVersionConstants;
 import org.apache.carbondata.core.datastore.DataRefNode;
 import org.apache.carbondata.core.datastore.FileHolder;
 import org.apache.carbondata.core.datastore.block.TableBlockInfo;
@@ -56,8 +57,16 @@ public class BlockletDataRefNodeWrapper implements DataRefNode {
       detailInfo.getBlockletInfo().setNumberOfPages(detailInfo.getPagesCount());
       detailInfo.setBlockletId(blockInfo.getDetailInfo().getBlockletId());
       int[] pageRowCount = new int[detailInfo.getPagesCount()];
-      int numberOfPagesCompletelyFilled = detailInfo.getRowCount()
-          / CarbonV3DataFormatConstants.NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT;
+      int numberOfPagesCompletelyFilled = detailInfo.getRowCount();
+      // the number of rows per page is 120000 in V2 and 32000 in V3; use the matching value to
+      // compute the number of completely filled pages
+      if (blockInfo.getVersion() == ColumnarFormatVersion.V2) {
+        numberOfPagesCompletelyFilled /=
+            CarbonVersionConstants.NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT_V2;
+      } else {
+        numberOfPagesCompletelyFilled /=
+            CarbonV3DataFormatConstants.NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT;
+      }
       int lastPageRowCount = detailInfo.getRowCount()
           % CarbonV3DataFormatConstants.NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT;
       for (int i = 0; i < numberOfPagesCompletelyFilled; i++) {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java b/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
index 0e4eec7..c5f61c2 100644
--- a/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
@@ -587,26 +587,24 @@ public class CarbonUpdateUtil {
    */
   private static void deleteStaleCarbonDataFiles(LoadMetadataDetails segment,
       CarbonFile[] allSegmentFiles, SegmentUpdateStatusManager updateStatusManager) {
-    boolean doForceDelete = true;
-    boolean isAbortedFile = true;
     CarbonFile[] invalidUpdateDeltaFiles = updateStatusManager
         .getUpdateDeltaFilesList(segment.getLoadName(), false,
             CarbonCommonConstants.UPDATE_DELTA_FILE_EXT, true, allSegmentFiles,
-            isAbortedFile);
+            true);
     // now for each invalid delta file need to check the query execution time out
     // and then delete.
     for (CarbonFile invalidFile : invalidUpdateDeltaFiles) {
-      compareTimestampsAndDelete(invalidFile, doForceDelete, false);
+      compareTimestampsAndDelete(invalidFile, true, false);
     }
     // do the same for the index files.
     CarbonFile[] invalidIndexFiles = updateStatusManager
         .getUpdateDeltaFilesList(segment.getLoadName(), false,
             CarbonCommonConstants.UPDATE_INDEX_FILE_EXT, true, allSegmentFiles,
-            isAbortedFile);
+            true);
     // now for each invalid index file need to check the query execution time out
     // and then delete.
     for (CarbonFile invalidFile : invalidIndexFiles) {
-      compareTimestampsAndDelete(invalidFile, doForceDelete, false);
+      compareTimestampsAndDelete(invalidFile, true, false);
     }
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
index c7bc6aa..94a041a 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
@@ -208,7 +208,8 @@ public abstract class AbstractDataFileFooterConverter {
     if (fileName.lastIndexOf("/") > 0) {
       fileName = fileName.substring(fileName.lastIndexOf("/"));
     }
-    tableBlockInfo.setFilePath(parentPath + "/" + fileName);
+    fileName = (CarbonCommonConstants.FILE_SEPARATOR + fileName).replaceAll("//", "/");
+    tableBlockInfo.setFilePath(parentPath + fileName);
     return tableBlockInfo;
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverter2.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverter2.java b/core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverter2.java
index 8cd437f..863e1df 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverter2.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverter2.java
@@ -141,6 +141,6 @@ public class DataFileFooterConverter2 extends AbstractDataFileFooterConverter {
   }
 
   @Override public List<ColumnSchema> getSchema(TableBlockInfo tableBlockInfo) throws IOException {
-    return null;
+    return new DataFileFooterConverter().getSchema(tableBlockInfo);
   }
 }


[42/50] [abbrv] carbondata git commit: [CARBONDATA-2123] Refactor datamap schema thrift and datamap provider to use short name and classname

Posted by ra...@apache.org.
[CARBONDATA-2123] Refactor datamap schema thrift and datamap provider to use short name and classname

Update the schema thrift file for the datamap schema to correct the typo errors and update the names.
Add the class name to the schema file and a short name for each enum value.
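
As a sketch of what the short name / class name mapping enables (statements adapted from the documentation examples in this repository, not part of this commit), USING can be given either the short name or the fully qualified class name:

  CREATE DATAMAP agg_sales ON TABLE sales
  USING 'preaggregate'
  AS SELECT country, sum(quantity) FROM sales GROUP BY country

  CREATE DATAMAP agg_sales2 ON TABLE sales
  USING 'org.apache.carbondata.core.datamap.AggregateDataMap'
  AS SELECT country, avg(price) FROM sales GROUP BY country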

This closes #1919


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/46d9bf96
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/46d9bf96
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/46d9bf96

Branch: refs/heads/branch-1.3
Commit: 46d9bf966910afb98a4e4e9cf879f2a9beef5b72
Parents: 4677fc6
Author: ravipesala <ra...@gmail.com>
Authored: Sat Feb 3 00:18:10 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Sat Feb 3 22:05:37 2018 +0800

----------------------------------------------------------------------
 .../core/constants/CarbonCommonConstants.java   |  2 -
 .../ThriftWrapperSchemaConverterImpl.java       | 10 ++---
 .../schema/datamap/DataMapProvider.java         | 39 +++++++++++++++++++-
 .../schema/table/DataMapSchemaFactory.java      | 12 +++---
 format/src/main/thrift/schema.thrift            | 14 ++++---
 .../preaggregate/TestPreAggCreateCommand.scala  |  3 +-
 .../timeseries/TestTimeSeriesCreateTable.scala  |  4 +-
 .../datamap/CarbonCreateDataMapCommand.scala    | 28 ++++++++------
 .../CreatePreAggregateTableCommand.scala        |  5 ++-
 9 files changed, 80 insertions(+), 37 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 8480758..a799e51 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1455,8 +1455,6 @@ public final class CarbonCommonConstants {
 
   public static final String BITSET_PIPE_LINE_DEFAULT = "true";
 
-
-  public static final String AGGREGATIONDATAMAPSCHEMA = "AggregateDataMapHandler";
   /*
    * The total size of carbon data
    */

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java b/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
index e9c5505..21ab797 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
@@ -343,7 +343,7 @@ public class ThriftWrapperSchemaConverterImpl implements SchemaConverter {
             .setDatabaseName(wrapperChildSchema.getRelationIdentifier().getDatabaseName());
         relationIdentifier.setTableName(wrapperChildSchema.getRelationIdentifier().getTableName());
         relationIdentifier.setTableId(wrapperChildSchema.getRelationIdentifier().getTableId());
-        thriftChildSchema.setRelationIdentifire(relationIdentifier);
+        thriftChildSchema.setChildTableIdentifier(relationIdentifier);
       }
       thriftChildSchema.setProperties(wrapperChildSchema.getProperties());
       thriftChildSchema.setClassName(wrapperChildSchema.getClassName());
@@ -648,11 +648,11 @@ public class ThriftWrapperSchemaConverterImpl implements SchemaConverter {
     DataMapSchema childSchema = new DataMapSchema(thriftDataMapSchema.getDataMapName(),
         thriftDataMapSchema.getClassName());
     childSchema.setProperties(thriftDataMapSchema.getProperties());
-    if (null != thriftDataMapSchema.getRelationIdentifire()) {
+    if (null != thriftDataMapSchema.getChildTableIdentifier()) {
       RelationIdentifier relationIdentifier =
-          new RelationIdentifier(thriftDataMapSchema.getRelationIdentifire().getDatabaseName(),
-              thriftDataMapSchema.getRelationIdentifire().getTableName(),
-              thriftDataMapSchema.getRelationIdentifire().getTableId());
+          new RelationIdentifier(thriftDataMapSchema.getChildTableIdentifier().getDatabaseName(),
+              thriftDataMapSchema.getChildTableIdentifier().getTableName(),
+              thriftDataMapSchema.getChildTableIdentifier().getTableId());
       childSchema.setRelationIdentifier(relationIdentifier);
       childSchema.setChildSchema(
           fromExternalToWrapperTableSchema(thriftDataMapSchema.getChildTableSchema(),

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/metadata/schema/datamap/DataMapProvider.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/schema/datamap/DataMapProvider.java b/core/src/main/java/org/apache/carbondata/core/metadata/schema/datamap/DataMapProvider.java
index 65578b1..0052428 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/schema/datamap/DataMapProvider.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/schema/datamap/DataMapProvider.java
@@ -27,6 +27,41 @@ package org.apache.carbondata.core.metadata.schema.datamap;
  */
 
 public enum DataMapProvider {
-  PREAGGREGATE,
-  TIMESERIES;
+  PREAGGREGATE("org.apache.carbondata.core.datamap.AggregateDataMap", "preaggregate"),
+  TIMESERIES("org.apache.carbondata.core.datamap.TimeSeriesDataMap", "timeseries");
+
+  /**
+   * Fully qualified class name of datamap
+   */
+  private String className;
+
+  /**
+   * Short name representation of datamap
+   */
+  private String shortName;
+
+  DataMapProvider(String className, String shortName) {
+    this.className = className;
+    this.shortName = shortName;
+  }
+
+  public String getClassName() {
+    return className;
+  }
+
+  private boolean isEqual(String dataMapClass) {
+    return (dataMapClass != null &&
+        (dataMapClass.equals(className) ||
+        dataMapClass.equalsIgnoreCase(shortName)));
+  }
+
+  public static DataMapProvider getDataMapProvider(String dataMapClass) {
+    if (TIMESERIES.isEqual(dataMapClass)) {
+      return TIMESERIES;
+    } else if (PREAGGREGATE.isEqual(dataMapClass)) {
+      return PREAGGREGATE;
+    } else {
+      throw new UnsupportedOperationException("Unknown datamap provider/class " + dataMapClass);
+    }
+  }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/DataMapSchemaFactory.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/DataMapSchemaFactory.java b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/DataMapSchemaFactory.java
index 5729959..d0c7386 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/DataMapSchemaFactory.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/DataMapSchemaFactory.java
@@ -16,7 +16,7 @@
  */
 package org.apache.carbondata.core.metadata.schema.table;
 
-import static org.apache.carbondata.core.constants.CarbonCommonConstants.AGGREGATIONDATAMAPSCHEMA;
+import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider;
 
 public class DataMapSchemaFactory {
   public static final DataMapSchemaFactory INSTANCE = new DataMapSchemaFactory();
@@ -28,11 +28,11 @@ public class DataMapSchemaFactory {
    * @return data map schema
    */
   public DataMapSchema getDataMapSchema(String dataMapName, String className) {
-    switch (className) {
-      case AGGREGATIONDATAMAPSCHEMA:
-        return new AggregationDataMapSchema(dataMapName, className);
-      default:
-        return new DataMapSchema(dataMapName, className);
+    if (DataMapProvider.PREAGGREGATE.getClassName().equals(className) ||
+        DataMapProvider.TIMESERIES.getClassName().equals(className)) {
+      return new AggregationDataMapSchema(dataMapName, className);
+    } else {
+      return new DataMapSchema(dataMapName, className);
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/format/src/main/thrift/schema.thrift
----------------------------------------------------------------------
diff --git a/format/src/main/thrift/schema.thrift b/format/src/main/thrift/schema.thrift
index a924009..b44fe19 100644
--- a/format/src/main/thrift/schema.thrift
+++ b/format/src/main/thrift/schema.thrift
@@ -192,13 +192,15 @@ struct DataMapSchema  {
     1: required string dataMapName;
     // class name
     2: required string className;
-    // relation indentifier
-    3: optional RelationIdentifier relationIdentifire;
-    // in case of preaggregate it will be used to maintain the child schema
+    // to maintain properties which are mentioned in DMPROPERTIES of DDL and also it
+    // stores properties of select query, query type like groupby, join in
+    // case of preaggregate/timeseries
+    3: optional map<string, string> properties;
+    // relation identifier of a table which stores data of datamaps like preaggregate/timeseries.
+    4: optional RelationIdentifier childTableIdentifier;
+    // in case of preaggregate/timeseries datamap it will be used to maintain the child schema
     // which will be usefull in case of query and data load
-    4: optional TableSchema childTableSchema;
-    // to maintain properties like select query, query type like groupby, join
-    5: optional map<string, string> properties;
+    5: optional TableSchema childTableSchema;
 }
 
 struct TableInfo{

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 5d0f61b..6988adc 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -296,7 +296,8 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
           | GROUP BY column3,column5,column2
         """.stripMargin)
     }
-    assert(e.getMessage.contains("Unknown data map type abc"))
+    assert(e.getMessage.contains(
+      s"Unknown datamap provider/class abc"))
     sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
index f3bbcaf..3d991a9 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
@@ -201,7 +201,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
           | GROUP BY dataTime
         """.stripMargin)
     }
-    assert(e.getMessage.equals("Unknown data map type abc"))
+    assert(e.getMessage.equals("Unknown datamap provider/class abc"))
   }
 
   test("test timeseries create table 12: USING and catch MalformedCarbonCommandException") {
@@ -216,7 +216,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
           | GROUP BY dataTime
         """.stripMargin)
     }
-    assert(e.getMessage.equals("Unknown data map type abc"))
+    assert(e.getMessage.equals("Unknown datamap provider/class abc"))
   }
 
   test("test timeseries create table 13: Only one granularity level can be defined 1") {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
index 242087e..f2f001e 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
@@ -24,6 +24,7 @@ import org.apache.spark.sql.execution.command.preaaggregate.CreatePreAggregateTa
 import org.apache.spark.sql.execution.command.timeseries.TimeSeriesUtil
 
 import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider
 import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider._
 import org.apache.carbondata.spark.exception.{MalformedCarbonCommandException, MalformedDataMapCommandException}
 
@@ -60,7 +61,14 @@ case class CarbonCreateDataMapCommand(
     } else {
       dmProperties
     }
-
+    val dataMapProvider = {
+      try {
+        DataMapProvider.getDataMapProvider(dmClassName)
+      } catch {
+        case e: UnsupportedOperationException =>
+          throw new MalformedDataMapCommandException(e.getMessage)
+      }
+    }
     if (sparkSession.sessionState.catalog.listTables(dbName)
       .exists(_.table.equalsIgnoreCase(tableName))) {
       LOGGER.audit(
@@ -70,16 +78,16 @@ case class CarbonCreateDataMapCommand(
       if (!ifNotExistsSet) {
         throw new TableAlreadyExistsException(dbName, tableName)
       }
-    } else if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
-      dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
+    } else {
       TimeSeriesUtil.validateTimeSeriesGranularity(newDmProperties, dmClassName)
-      createPreAggregateTableCommands = if (dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
+      createPreAggregateTableCommands = if (dataMapProvider == TIMESERIES) {
         val details = TimeSeriesUtil
           .getTimeSeriesGranularityDetails(newDmProperties, dmClassName)
         val updatedDmProperties = newDmProperties - details._1
-        CreatePreAggregateTableCommand(dataMapName,
+        CreatePreAggregateTableCommand(
+          dataMapName,
           tableIdentifier,
-          dmClassName,
+          dataMapProvider,
           updatedDmProperties,
           queryString.get,
           Some(details._1),
@@ -88,14 +96,12 @@ case class CarbonCreateDataMapCommand(
         CreatePreAggregateTableCommand(
           dataMapName,
           tableIdentifier,
-          dmClassName,
+          dataMapProvider,
           newDmProperties,
           queryString.get,
           ifNotExistsSet = ifNotExistsSet)
       }
       createPreAggregateTableCommands.processMetadata(sparkSession)
-    } else {
-      throw new MalformedDataMapCommandException("Unknown data map type " + dmClassName)
     }
     LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${tableIdentifier.table}")
     Seq.empty
@@ -110,7 +116,7 @@ case class CarbonCreateDataMapCommand(
         Seq.empty
       }
     } else {
-      throw new MalformedDataMapCommandException("Unknown data map type " + dmClassName)
+      throw new MalformedDataMapCommandException("Unknown datamap provider/class " + dmClassName)
     }
   }
 
@@ -123,7 +129,7 @@ case class CarbonCreateDataMapCommand(
         Seq.empty
       }
     } else {
-      throw new MalformedDataMapCommandException("Unknown data map type " + dmClassName)
+      throw new MalformedDataMapCommandException("Unknown datamap provider/class " + dmClassName)
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
index 31a3403..231a001 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
@@ -30,6 +30,7 @@ import org.apache.spark.sql.execution.command.timeseries.TimeSeriesUtil
 import org.apache.spark.sql.parser.CarbonSpark2SqlParser
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider
 import org.apache.carbondata.core.metadata.schema.table.AggregationDataMapSchema
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
@@ -46,7 +47,7 @@ import org.apache.carbondata.spark.util.DataLoadingUtil
 case class CreatePreAggregateTableCommand(
     dataMapName: String,
     parentTableIdentifier: TableIdentifier,
-    dmClassName: String,
+    dataMapProvider: DataMapProvider,
     dmProperties: Map[String, String],
     queryString: String,
     timeSeriesFunction: Option[String] = None,
@@ -125,7 +126,7 @@ case class CreatePreAggregateTableCommand(
     // child schema object which will be updated on parent table about the
     val childSchema = tableInfo.getFactTable.buildChildSchema(
       dataMapName,
-      CarbonCommonConstants.AGGREGATIONDATAMAPSCHEMA,
+      dataMapProvider.getClassName,
       tableInfo.getDatabaseName,
       queryString,
       "AGGREGATION")


[07/50] [abbrv] carbondata git commit: [HOTFIX] Correct the order of dropping pre-aggregate tables. Pre-aggregate tables are to be dropped before the main table is dropped

Posted by ra...@apache.org.
[HOTFIX] Correct the order of dropping pre-aggregate tables. Pre-aggregate tables are to be dropped before the main table is dropped

This closes #1900


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c8a3eb5c
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c8a3eb5c
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/c8a3eb5c

Branch: refs/heads/branch-1.3
Commit: c8a3eb5cf79a3e68e7b4f19311f35ceca95b3003
Parents: 3dff273
Author: Raghunandan S <ca...@gmail.com>
Authored: Thu Feb 1 07:42:09 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Thu Feb 1 10:37:17 2018 +0800

----------------------------------------------------------------------
 .../standardpartition/StandardPartitionTableQueryTestCase.scala   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/c8a3eb5c/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
index b1fc0a7..0a86dee 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
@@ -274,7 +274,6 @@ test("Creation of partition table should fail if the colname in table schema and
 
   test("drop partition on preAggregate table should fail"){
     sql("drop table if exists partitionTable")
-    sql("drop datamap if exists preaggTable on table partitionTable")
     sql("create table partitionTable (id int,city string,age int) partitioned by(name string) stored by 'carbondata'".stripMargin)
     sql(
       s"""create datamap preaggTable on table partitionTable using 'preaggregate' as select id,sum(age) from partitionTable group by id"""
@@ -285,6 +284,7 @@ test("Creation of partition table should fail if the colname in table schema and
     intercept[Exception]{
       sql("alter table partitionTable drop PARTITION(name='John')")
     }
+    sql("drop datamap if exists preaggTable on table partitionTable")
   }
 
 
@@ -318,7 +318,6 @@ test("Creation of partition table should fail if the colname in table schema and
     sql("drop table if exists badrecordsPartitionintnull")
     sql("drop table if exists badrecordsPartitionintnullalt")
     sql("drop table if exists partitionTable")
-    sql("drop datamap if exists preaggTable on table partitionTable")
   }
 
 }


[05/50] [abbrv] carbondata git commit: [CARBONDATA-2089] SQL exception is masked because assert(false) sits inside try/catch and the catch block always asserts true

Posted by ra...@apache.org.
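
The anti-pattern these diffs remove, next to its replacement, as a minimal ScalaTest sketch. The failingCall() below is an illustrative stand-in for a sql(...) call that is expected to throw; everything else is plain ScalaTest.

import org.scalatest.FunSuite

class ExceptionAssertionSketch extends FunSuite {

  def failingCall(): Unit = throw new RuntimeException("simulated SQL failure")

  // Old form: any exception, including a real regression surfaced by the SQL,
  // lands in the catch block and becomes assert(true), so the test can never
  // fail and the original exception is lost.
  test("old form masks the SQL exception") {
    try {
      failingCall()
      assert(false)
    } catch {
      case _: Throwable => assert(true)
    }
  }

  // New form: intercept fails the test if no exception is thrown, so the
  // expected failure is asserted explicitly and nothing is masked.
  test("new form asserts the exception explicitly") {
    intercept[Exception] {
      failingCall()
    }
  }
}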
http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
index d6fa3ca..b4459ab 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
@@ -20,6 +20,8 @@ package org.apache.carbondata.cluster.sdv.generated
 
 import java.sql.Timestamp
 
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util._
 import org.scalatest.{BeforeAndAfter, BeforeAndAfterAll, BeforeAndAfterEach}
@@ -60,6 +62,9 @@ class DataLoadingIUDTestCase extends QueryTest with BeforeAndAfterAll with Befor
     sql("drop table if exists t_carbn01b").collect
     sql("drop table if exists T_Hive1").collect
     sql("drop table if exists T_Hive6").collect
+    sql(s"""create table default.t_carbn01b(Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' INTO table default.t_carbn01B options ('DELIMITER'=',', 'QUOTECHAR'='\', 'FILEHEADER'='Active_status,Item_type_cd,Qty_day_avg,Qty_total,Sell_price,Sell_pricep,Discount_price,Profit,Item_code,Item_name,Outlet_name,Update_time,Create_date')""").collect
+
   }
 
   override def before(fun: => Any) {
@@ -75,9 +80,7 @@ class DataLoadingIUDTestCase extends QueryTest with BeforeAndAfterAll with Befor
 
 //NA
 test("IUD-01-01-01_001-001", Include) {
-   sql(s"""create table default.t_carbn01b(Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' INTO table default.t_carbn01B options ('DELIMITER'=',', 'QUOTECHAR'='\', 'FILEHEADER'='Active_status,Item_type_cd,Qty_day_avg,Qty_total,Sell_price,Sell_pricep,Discount_price,Profit,Item_code,Item_name,Outlet_name,Update_time,Create_date')""").collect
-  sql("create table T_Hive1(Active_status BOOLEAN, Item_type_cd TINYINT, Qty_day_avg SMALLINT, Qty_total INT, Sell_price BIGINT, Sell_pricep FLOAT, Discount_price DOUBLE , Profit DECIMAL(3,2), Item_code STRING, Item_name VARCHAR(50), Outlet_name CHAR(100), Update_time TIMESTAMP, Create_date DATE) row format delimited fields terminated by ',' collection items terminated by '$'")
+   sql("create table T_Hive1(Active_status BOOLEAN, Item_type_cd TINYINT, Qty_day_avg SMALLINT, Qty_total INT, Sell_price BIGINT, Sell_pricep FLOAT, Discount_price DOUBLE , Profit DECIMAL(3,2), Item_code STRING, Item_name VARCHAR(50), Outlet_name CHAR(100), Update_time TIMESTAMP, Create_date DATE) row format delimited fields terminated by ',' collection items terminated by '$'")
  sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' overwrite into table T_Hive1""").collect
  sql("create table T_Hive6(Item_code STRING, Sub_item_cd ARRAY<string>)row format delimited fields terminated by ',' collection items terminated by '$'")
  sql(s"""load data inpath '$resourcesPath/Data/InsertData/T_Hive1.csv' overwrite into table T_Hive6""").collect
@@ -115,16 +118,13 @@ test("IUD-01-01-01_001-02", Include) {
 
 //Check for update Carbon table using a data value on a string column without giving values in semi quote
 test("IUD-01-01-01_001-03", Include) {
-  try {
+  intercept[Exception] {
    sql(s"""drop table IF EXISTS default.t_carbn01""").collect
  sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
  sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
  sql(s"""update default.t_carbn01  set (active_status) = (NO) """).collect
     sql(s"""NA""").collect
     
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
    sql(s"""drop table default.t_carbn01  """).collect
 }
@@ -204,18 +204,14 @@ test("IUD-01-01-01_001-11", Include) {
 
 //Check for update Carbon table for a column where column  name is mentioned incorrectly
 test("IUD-01-01-01_001-14", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (item_status_cd)  = ('10')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set (item_status_cd)  = ('10')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -245,35 +241,27 @@ test("IUD-01-01-01_001-16", Include) {
 
 //Check for update Carbon table for a numeric value column using string value
 test("IUD-01-01-01_001-17", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (item_type_cd)  = ('Orange')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set (item_type_cd)  = ('Orange')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table for a numeric value column using decimal value
 test("IUD-01-01-01_001-18", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (item_type_cd)  = ('10.11')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set (item_type_cd)  = ('10.11')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -303,18 +291,14 @@ test("IUD-01-01-01_001-20", Include) {
 
 //Check for update Carbon table for a numeric Int value column using large numeric value which is beyond 32 bit
 test("IUD-01-01-01_001-21", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (item_type_cd)  = (-2147483649)""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set (item_type_cd)  = (-2147483649)""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -380,18 +364,14 @@ test("IUD-01-01-01_001-26", Include) {
 
 //Check for update Carbon table for a decimal value column using String value
 test("IUD-01-01-01_001-27", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (profit)  = ('hakshk')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set (profit)  = ('hakshk')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -445,86 +425,66 @@ test("IUD-01-01-01_001-31", Include) {
 
 //Check for update Carbon table for a time stamp  value column using date timestamp all formats.
 test("IUD-01-01-01_001-35", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(update_time) = ('04-11-20004 18:13:59.113')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(update_time) = ('04-11-20004 18:13:59.113')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table for a time stamp  value column using string value
 test("IUD-01-01-01_001-32", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(update_time) = ('fhjfhjfdshf')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(update_time) = ('fhjfhjfdshf')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table for a time stamp  value column using numeric
 test("IUD-01-01-01_001-33", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(update_time) = (56546)""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(update_time) = (56546)""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table for a time stamp  value column using date 
 test("IUD-01-01-01_001-34", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(update_time) = ('2016-11-04')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(update_time) = ('2016-11-04')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table for a time stamp  value column using date timestamp
 test("IUD-01-01-01_001-36", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(update_time) = ('2016-11-04 18:63:59.113')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(update_time) = ('2016-11-04 18:63:59.113')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -554,18 +514,14 @@ test("IUD-01-01-01_001-40", Include) {
 
 //Check update Carbon table using a / operation on a column value
 test("IUD-01-01-01_001-41", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set(item_type_cd)= (item_type_cd/1)""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set(item_type_cd)= (item_type_cd/1)""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -821,18 +777,14 @@ test("IUD-01-01-01_004-05", Include) {
 
 //Check for update Carbon table where source table is having big int and target is having int value column for update
 test("IUD-01-01-01_004-06", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  a set (a.item_type_cd) = (select b.sell_price from default.t_carbn01b b where b.sell_price=200000343430000000)""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  a set (a.item_type_cd) = (select b.sell_price from default.t_carbn01b b where b.sell_price=200000343430000000)""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -850,35 +802,27 @@ test("IUD-01-01-01_004-07", Include) {
 
 //Check for update Carbon table where source table is having string and target is having decimal value column for update
 test("IUD-01-01-01_004-08", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  a set (a.profit) = (select b.item_code from default.t_carbn01b b where b.item_code='DE3423ee')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  a set (a.profit) = (select b.item_code from default.t_carbn01b b where b.item_code='DE3423ee')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
 //Check for update Carbon table where source table is having string and target is having timestamp column for update
 test("IUD-01-01-01_004-09", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  a set (a.update_time) = (select b.item_code from default.t_carbn01b b where b.item_code='DE3423ee')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  a set (a.update_time) = (select b.item_code from default.t_carbn01b b where b.item_code='DE3423ee')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -968,17 +912,21 @@ test("IUD-01-01-01_005-12", Include) {
 
 //Check for update Carbon table where a update column is dimension and is defined with exclude dictionary. 
 test("IUD-01-01-01_005-13", Include) {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Item_type_cd INT, Profit DECIMAL(3,2))STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_INCLUDE'='Item_type_cd')""").collect
- sql(s"""insert into default.t_carbn01  select item_type_cd, profit from default.t_carbn01b""").collect
-
-  try {
+  sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+  sql(s"""create table default.t_carbn01 (Item_type_cd INT, Profit DECIMAL(3,2))STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_INCLUDE'='Item_type_cd')""").collect
+  sql(s"""insert into default.t_carbn01  select item_type_cd, profit from default.t_carbn01b""").collect
+  val currProperty = CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+    .CARBON_BAD_RECORDS_ACTION);
+  CarbonProperties.getInstance()
+    .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "FAIL")
+  intercept[Exception] {
     sql(s"""update default.t_carbn01  set (item_type_cd) = ('ASASDDD')""").collect
-    assert(false)
-  } catch {
-    case _ => assert(true)
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, currProperty)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  CarbonProperties.getInstance()
+    .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, currProperty)
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -1061,18 +1009,14 @@ test("IUD-01-01-01_009-01", Include) {
 
 //Check update on carbon table using incorrect data value
 test("IUD-01-01-01_010-01", Include) {
-  try {
-   sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
- sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set Update_time = '11-11-2012 77:77:77') where item_code='ASD423ee')""").collect
+  intercept[Exception] {
+    sql(s"""drop table IF EXISTS default.t_carbn01 """).collect
+    sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""update default.t_carbn01  set Update_time = '11-11-2012 77:77:77') where item_code='ASD423ee')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -1586,17 +1530,13 @@ test("IUD-01-01-02_009-01", Include) {
 
 //Check update on carbon table where a column being updated with incorrect data type.
 test("IUD-01-01-02_011-01", Include) {
-  try {
-   sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""Update T_Carbn04 set (Item_type_cd) = ('Banana')""").collect
+  intercept[Exception] {
+    sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""Update T_Carbn04 set (Item_type_cd) = ('Banana')""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -1613,17 +1553,13 @@ test("IUD-01-01-01_022-01", Include) {
 
 //Check update on carbon table where multiple values are returned in expression.
 test("IUD-01-01-01_023-00", Include) {
-  try {
-   sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""Update default.t_carbn01  set Item_type_cd = (select Item_type_cd from default.t_carbn01b )""").collect
+  intercept[Exception] {
+    sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""Update default.t_carbn01  set Item_type_cd = (select Item_type_cd from default.t_carbn01b )""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 
@@ -1643,17 +1579,13 @@ test("IUD-01-01-02_023-01", Include) {
 
 //Check update on carbon table where non matching values are returned from expression.
 test("IUD-01-01-01_024-01", Include) {
-  try {
-   sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
- sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
- sql(s"""Update default.t_carbn01  set Item_type_cd = (select Item_code from default.t_carbn01b)""").collect
+  intercept[Exception] {
+    sql(s"""create table if not exists default.t_carbn01 (Active_status String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 'org.apache.carbondata.format'""").collect
+    sql(s"""insert into default.t_carbn01  select * from default.t_carbn01b""").collect
+    sql(s"""Update default.t_carbn01  set Item_type_cd = (select Item_code from default.t_carbn01b)""").collect
     sql(s"""NA""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-   sql(s"""drop table default.t_carbn01  """).collect
+  sql(s"""drop table default.t_carbn01  """).collect
 }
        
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
index 8ff47af..52396ee 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
@@ -124,7 +124,7 @@ class DataLoadingTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Data load-->Empty BadRecords Parameters
   test("BadRecord_Dataload_011", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""CREATE TABLE badrecords_test1 (ID int,CUST_ID int,sal int,cust_name string) STORED BY 'org.apache.carbondata.format'""")
 
         .collect
@@ -133,11 +133,8 @@ class DataLoadingTestCase extends QueryTest with BeforeAndAfterAll {
       checkAnswer(
         s"""select count(*) from badrecords_test1""",
         Seq(Row(0)), "DataLoadingTestCase-BadRecord_Dataload_011")
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table badrecords_test1""").collect
+    sql(s"""drop table badrecords_test1""").collect
   }
 
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/InvertedindexTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/InvertedindexTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/InvertedindexTestCase.scala
index bae0124..d9d35fb 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/InvertedindexTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/InvertedindexTestCase.scala
@@ -886,17 +886,13 @@ class InvertedindexTestCase extends QueryTest with BeforeAndAfterAll {
   //to check alter drop column for no_inverted
   test("NoInvertedindex-TC097", Include) {
     sql(s"""drop table if exists uniqdata""").collect
-    try {
-     sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('COLUMN_GROUPS'='(CUST_NAME,ACTIVE_EMUI_VERSION)','DICTIONARY_INCLUDE'='CUST_ID','NO_INVERTED_INDEX'='CUST_NAME')""").collect
-   sql(s"""Alter table uniqdata drop columns(BIGINT_COLUMN1)""").collect
-   sql(s"""LOAD DATA INPATH '$resourcesPath/Data/noinverted.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('COLUMN_GROUPS'='(CUST_NAME,ACTIVE_EMUI_VERSION)','DICTIONARY_INCLUDE'='CUST_ID','NO_INVERTED_INDEX'='CUST_NAME')""").collect
+      sql(s"""Alter table uniqdata drop columns(BIGINT_COLUMN1)""").collect
+      sql(s"""LOAD DATA INPATH '$resourcesPath/Data/noinverted.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
       sql(s"""select BIGINT_COLUMN1 from uniqdata""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists uniqdata""").collect
+    sql(s"""drop table if exists uniqdata""").collect
   }
 
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery1TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery1TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery1TestCase.scala
index d93b2ee..e213e49 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery1TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery1TestCase.scala
@@ -44,15 +44,9 @@ test("OffHeapQuery-001-TC_001", Include) {
 
 //To check select query with limit as string
 test("OffHeapQuery-001-TC_002", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 limit """"").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -112,57 +106,33 @@ test("OffHeapQuery-001-TC_008", Include) {
 
 //To check where clause with OR and no operand
 test("OffHeapQuery-001-TC_009", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id > 1 OR """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check OR clause with LHS and RHS having no arguments
 test("OffHeapQuery-001-TC_010", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where OR """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check OR clause with LHS having no arguments
 test("OffHeapQuery-001-TC_011", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where OR cust_id > "1"""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check incorrect query 
 test("OffHeapQuery-001-TC_013", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id > 0 OR name  """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -231,15 +201,9 @@ test("OffHeapQuery-001-TC_020", Include) {
 
 //To check select count and distinct query execution 
 test("OffHeapQuery-001-TC_021", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select count(cust_id),distinct(cust_name) from uniqdataquery1""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -281,15 +245,9 @@ test("OffHeapQuery-001-TC_025", Include) {
 
 //To check query execution with IN operator without paranthesis
 test("OffHeapQuery-001-TC_027", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id IN 9000,9005""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -304,15 +262,9 @@ test("OffHeapQuery-001-TC_028", Include) {
 
 //To check query execution with IN operator with out specifying any field.
 test("OffHeapQuery-001-TC_029", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where IN(1,2)""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -354,15 +306,9 @@ test("OffHeapQuery-001-TC_033", Include) {
 
 //To check AND with using booleans in invalid syntax
 test("OffHeapQuery-001-TC_034", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where AND true""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -386,15 +332,9 @@ test("OffHeapQuery-001-TC_036", Include) {
 
 //To check AND using 0 and 1 treated as boolean values
 test("OffHeapQuery-001-TC_037", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where true aNd 0""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -418,29 +358,17 @@ test("OffHeapQuery-001-TC_039", Include) {
 
 //To check '='operator without Passing any value
 test("OffHeapQuery-001-TC_040", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id=""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check '='operator without Passing columnname and value.
 test("OffHeapQuery-001-TC_041", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where =""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -455,15 +383,9 @@ test("OffHeapQuery-001-TC_042", Include) {
 
 //To check '!='operator by keeping space between them
 test("OffHeapQuery-001-TC_043", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id !   = 9001""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -478,29 +400,17 @@ test("OffHeapQuery-001-TC_044", Include) {
 
 //To check '!='operator without providing any value
 test("OffHeapQuery-001-TC_045", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id != """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check '!='operator without providing any column name
 test("OffHeapQuery-001-TC_046", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where  != false""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -542,43 +452,25 @@ test("OffHeapQuery-001-TC_050", Include) {
 
 //To check 'NOT' operator in nested way
 test("OffHeapQuery-001-TC_051", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id NOT (NOT(true))""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check 'NOT' operator with parenthesis.
 test("OffHeapQuery-001-TC_052", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id NOT ()""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check 'NOT' operator without condition.
 test("OffHeapQuery-001-TC_053", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id NOT""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -593,29 +485,17 @@ test("OffHeapQuery-001-TC_054", Include) {
 
 //To check '>' operator without specifying column
 test("OffHeapQuery-001-TC_055", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where > 20""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check '>' operator without specifying value
 test("OffHeapQuery-001-TC_056", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id > """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -648,15 +528,9 @@ test("OffHeapQuery-001-TC_059", Include) {
 
 //To check '<' operator without specifying column
 test("OffHeapQuery-001-TC_060", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where < 5""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -680,29 +554,17 @@ test("OffHeapQuery-001-TC_062", Include) {
 
 //To check '<=' operator without specifying column
 test("OffHeapQuery-001-TC_063", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where  <= 2""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check '<=' operator without providing value
 test("OffHeapQuery-001-TC_064", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where  cust_id <= """).collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -717,29 +579,17 @@ test("OffHeapQuery-001-TC_065", Include) {
 
 //To check '<=' operator adding space between'<' and  '='
 test("OffHeapQuery-001-TC_066", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id < =  9002""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check 'BETWEEN' operator without providing range
 test("OffHeapQuery-001-TC_067", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where age between""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -799,29 +649,17 @@ test("OffHeapQuery-001-TC_073", Include) {
 
 //To check  'IS NULL' without providing column
 test("OffHeapQuery-001-TC_074", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where Is NulL""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check  'IS NOT NULL' without providing column
 test("OffHeapQuery-001-TC_075", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where IS NOT NULL""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -854,29 +692,17 @@ test("OffHeapQuery-001-TC_078", Include) {
 
 //To check Limit clause with where condition and no argument
 test("OffHeapQuery-001-TC_079", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id=10987 limit""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
 //To check Limit clause with where condition and decimal argument
 test("OffHeapQuery-001-TC_080", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id=10987 limit 0.0""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -927,15 +753,9 @@ test("OffHeapQuery-001-TC_085", Include) {
 
 //To check Full join 
 test("OffHeapQuery-001-TC_086", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select uniqdataquery1.CUST_ID from uniqdataquery1 FULL JOIN uniqdataquery11 where CUST_ID""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -1022,15 +842,9 @@ test("OffHeapQuery-001-TC_096", Include) {
 
 //To check SORT using 'AND' on multiple column 
 test("OffHeapQuery-001-TC_097", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select * from uniqdataquery1 where cust_id > 10544 sort by cust_name desc and cust_id asc""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -1054,15 +868,9 @@ test("OffHeapQuery-001-TC_099", Include) {
 
 //To check average aggregate function with no arguments
 test("OffHeapQuery-001-TC_100", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select cust_id,avg() from uniqdataquery1 group by cust_id""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -1077,15 +885,9 @@ test("OffHeapQuery-001-TC_101", Include) {
 
 //To check nested  average aggregate function
 test("OffHeapQuery-001-TC_102", Include) {
-  try {
-  
+  intercept[Exception] {
     sql(s"""select cust_id,avg(count(cust_id)) from uniqdataquery1 group by cust_id""").collect
-    
-    assert(false)
-  } catch {
-    case _ => assert(true)
   }
-  
 }
        
 
@@ -1172,15 +974,9 @@ test("OffHeapQuery-001-TC_108", Include) {
 
   //To check Order by without column name
   test("OffHeapQuery-001-TC_112", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 order by ASC""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1222,15 +1018,9 @@ test("OffHeapQuery-001-TC_108", Include) {
 
   //To check Using window without partition
   test("OffHeapQuery-001-TC_117", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name, sum(bigint_column1) OVER w from uniqdataquery1 WINDOW w""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1245,13 +1035,8 @@ test("OffHeapQuery-001-TC_108", Include) {
 
   //To check Using ROLLUP without group by clause
   test("OffHeapQuery-001-TC_119", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name from uniqdataquery1 with ROLLUP""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table uniqdataquery1""").collect
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery2TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery2TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery2TestCase.scala
index 10a9866..888070f 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery2TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapQuery2TestCase.scala
@@ -44,15 +44,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check select query with limit as string
   test("OffHeapQuery-002-TC_121", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 limit """"").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -112,57 +106,33 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check where clause with OR and no operand
   test("OffHeapQuery-002-TC_128", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id > 1 OR """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check OR clause with LHS and RHS having no arguments
   test("OffHeapQuery-002-TC_129", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where OR """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check OR clause with LHS having no arguments
   test("OffHeapQuery-002-TC_130", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where OR cust_id > "1"""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check incorrect query
   test("OffHeapQuery-002-TC_132", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id > 0 OR name  """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -231,15 +201,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check select count and distinct query execution
   test("OffHeapQuery-002-TC_140", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select count(cust_id),distinct(cust_name) from uniqdataquery2""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -281,15 +245,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check query execution with IN operator without paranthesis
   test("OffHeapQuery-002-TC_146", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id IN 9000,9005""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -304,15 +262,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check query execution with IN operator with out specifying any field.
   test("OffHeapQuery-002-TC_148", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where IN(1,2)""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -354,15 +306,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check AND with using booleans in invalid syntax
   test("OffHeapQuery-002-TC_153", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where AND true""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -386,15 +332,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check AND using 0 and 1 treated as boolean values
   test("OffHeapQuery-002-TC_156", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where true aNd 0""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -418,29 +358,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '='operator without Passing any value
   test("OffHeapQuery-002-TC_159", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id=""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '='operator without Passing columnname and value.
   test("OffHeapQuery-002-TC_160", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where =""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -455,15 +383,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '!='operator by keeping space between them
   test("OffHeapQuery-002-TC_162", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id !   = 9001""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -478,29 +400,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '!='operator without providing any value
   test("OffHeapQuery-002-TC_164", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id != """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '!='operator without providing any column name
   test("OffHeapQuery-002-TC_165", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where  != false""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -542,43 +452,25 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check 'NOT' operator in nested way
   test("OffHeapQuery-002-TC_170", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id NOT (NOT(true))""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check 'NOT' operator with parenthesis.
   test("OffHeapQuery-002-TC_171", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id NOT ()""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check 'NOT' operator without condition.
   test("OffHeapQuery-002-TC_172", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id NOT""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -593,29 +485,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '>' operator without specifying column
   test("OffHeapQuery-002-TC_174", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where > 20""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '>' operator without specifying value
   test("OffHeapQuery-002-TC_175", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id > """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -648,15 +528,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<' operator without specifying column
   test("OffHeapQuery-002-TC_179", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where < 5""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -680,29 +554,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<=' operator without specifying column
   test("OffHeapQuery-002-TC_182", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where  <= 2""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '<=' operator without providing value
   test("OffHeapQuery-002-TC_183", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where  cust_id <= """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -717,13 +579,8 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<=' operator adding space between'<' and  '='
   test("OffHeapQuery-002-TC_185", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id < =  9002""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
 
   }
@@ -731,15 +588,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check 'BETWEEN' operator without providing range
   test("OffHeapQuery-002-TC_186", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where age between""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -799,29 +650,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check  'IS NULL' without providing column
   test("OffHeapQuery-002-TC_193", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where Is NulL""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check  'IS NOT NULL' without providing column
   test("OffHeapQuery-002-TC_194", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where IS NOT NULL""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -854,29 +693,17 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Limit clause with where condition and no argument
   test("OffHeapQuery-002-TC_198", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id=10987 limit""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check Limit clause with where condition and decimal argument
   test("OffHeapQuery-002-TC_199", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id=10987 limit 0.0""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -928,15 +755,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Full join
   test("OffHeapQuery-002-TC_205", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select uniqdataquery2.CUST_ID from uniqdataquery2 FULL JOIN uniqdataquery22 where CUST_ID""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1023,15 +844,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check SORT using 'AND' on multiple column
   test("OffHeapQuery-002-TC_216", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 where cust_id > 10544 sort by cust_name desc and cust_id asc""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1055,15 +870,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check average aggregate function with no arguments
   test("OffHeapQuery-002-TC_219", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_id,avg() from uniqdataquery2 group by cust_id""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1078,15 +887,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check nested  average aggregate function
   test("OffHeapQuery-002-TC_221", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_id,avg(count(cust_id)) from uniqdataquery2 group by cust_id""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1173,15 +976,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Order by without column name
   test("OffHeapQuery-002-TC_231", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery2 order by ASC""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1223,15 +1020,9 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Using window without partition
   test("OffHeapQuery-002-TC_236", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name, sum(bigint_column1) OVER w from uniqdataquery2 WINDOW w""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1246,13 +1037,8 @@ class OffheapQuery2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Using ROLLUP without group by clause
   test("OffHeapQuery-002-TC_238", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name from uniqdataquery2 with ROLLUP""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table uniqdataquery2""").collect
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort1TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort1TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort1TestCase.scala
index 44287a2..b1cafee 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort1TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort1TestCase.scala
@@ -74,13 +74,10 @@ class OffheapSort1TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To load data after setting offheap memory in carbon property file without folder path in load
   test("OffHeapSort_001-TC_004", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""CREATE TABLE uniqdata13 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
 
       sql(s"""LOAD DATA  into table uniqdata13 OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
     sql(s"""drop table uniqdata13""").collect
 
@@ -90,13 +87,10 @@ class OffheapSort1TestCase extends QueryTest with BeforeAndAfterAll {
   //To load data after setting offheap memory in carbon property file without table_name in load
   test("OffHeapSort_001-TC_005", Include) {
     sql(s"""drop table if exists uniqdata14""").collect
-    try {
+    intercept[Exception] {
       sql(s"""CREATE TABLE uniqdata14 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
 
       sql(s"""LOAD DATA  INPATH '$resourcesPath/Data/HeapVector/2000_UniqData.csv' into table OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
 
     sql(s"""drop table if exists uniqdata14""").collect

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort2TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort2TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort2TestCase.scala
index b21ec20..21c74c9 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort2TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/OffheapSort2TestCase.scala
@@ -70,14 +70,11 @@ class OffheapSort2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To load data after setting offheap memory in carbon property file without folder path in load
   test("OffHeapSort_002-TC_018", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""drop table if exists uniqdata213""").collect
       sql(s"""CREATE TABLE uniqdata213 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
 
       sql(s"""LOAD DATA  into table uniqdata213 OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
 
     sql(s"""drop table if exists uniqdata213""").collect
@@ -87,14 +84,11 @@ class OffheapSort2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To load data after setting offheap memory in carbon property file without table_name in load
   test("OffHeapSort_002-TC_019", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""drop table if exists uniqdata214""").collect
       sql(s"""CREATE TABLE uniqdata214 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
 
       sql(s"""LOAD DATA  INPATH '$resourcesPath/Data/HeapVector/2000_UniqData.csv' into table OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
 
     sql(s"""drop table if exists uniqdata214""").collect

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/PartitionTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/PartitionTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/PartitionTestCase.scala
index b89c353..31ec14e 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/PartitionTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/PartitionTestCase.scala
@@ -31,12 +31,9 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if column in partitioned by is already specified in table schema
   test("Partition-Local-sort_TC001", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (INTEGER_COLUMN1 int)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='List','LIST_INFO'='1,3')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -60,38 +57,31 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if List info is not given with List type partition
   test("Partition-Local-sort_TC004", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='List')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
 
 
-  //Verify exception if Partition type is not given
+  //Exception should not be thrown if Partition type is not given
   test("Partition-Local-sort_TC005", Include) {
     try {
-       sql(s"""drop table if exists uniqdata""").collect
+      sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('LIST_INFO'='1,2')""").collect
-      assert(false)
+      sql(s"""drop table if exists uniqdata""").collect
     } catch {
-      case _ => assert(true)
+      case _ => assert(false)
     }
-     sql(s"""drop table if exists uniqdata""").collect
   }
 
 
   //Verify exception if Partition type is 'range' and LIST_INFO Is provided
   test("Partition-Local-sort_TC006", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='RANGE', 'LIST_INFO'='1,2')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -99,12 +89,9 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if Partition type is 'range' and NUM_PARTITIONS Is provided
   test("Partition-Local-sort_TC007", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='RANGE', 'NUM_PARTITIONS'='1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -128,12 +115,9 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if Partition type is 'LIST' and NUM_PARTITIONS Is provided
   test("Partition-Local-sort_TC010", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ int)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'NUM_PARTITIONS'='1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -141,12 +125,9 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if Partition type is 'LIST' and RANGE_INFO Is provided
   test("Partition-Local-sort_TC011", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'RANGE_INFO'='20160302,20150302')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -154,12 +135,9 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if datatype is not provided with partition column
   test("Partition-Local-sort_TC012", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='20160302,20150302')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -167,28 +145,23 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if a non existent file header  is provided in partition
   test("Partition-Local-sort_TC013", Include) {
-    try {
-       sql(s"""drop table if exists uniqdata""").collect
-      sql(s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='20160302,20150302')
+    intercept[Exception] {
+      sql(s"""drop table if exists uniqdata""").collect
+      sql(
+        s"""CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (DOJ timestamp)STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='20160302,20150302')
 
   LOAD DATA INPATH  '$resourcesPath/Data/partition/2000_UniqData_partition.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='CUST_NAME,ACTIVE_EMUI_VERSION,DOJ,DOB,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1,DOJ,CUST_ID')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists uniqdata""").collect
+    sql(s"""drop table if exists uniqdata""").collect
   }
 
 
   //Verify exception if Partition By Is empty
   test("Partition-Local-sort_TC014", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int, DOJ timestamp) PARTITIONED BY ()STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='0,1')
   """).collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -235,13 +208,10 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception if 2 partition columns are provided
   test("Partition-Local-sort_TC018", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""
   CREATE TABLE uniqdata (CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY (CUST_ID int , DOJ timestamp) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='0,1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }
@@ -384,16 +354,13 @@ class PartitionTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verify exception is thrown if partition column is dropped
   test("Partition-Local-sort_TC029", Include) {
-    try {
+    intercept[Exception] {
        sql(s"""drop table if exists uniqdata""").collect
       sql(s"""CREATE TABLE uniqdata (CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int, DOJ timestamp) PARTITIONED BY (CUST_ID int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='0,1')
 
   alter table uniqdata drop columns(CUST_ID)
 
   """).collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists uniqdata""").collect
   }


[50/50] [abbrv] carbondata git commit: [CARBONDATA-1454]false expression handling and block pruning

Posted by ra...@apache.org.
[CARBONDATA-1454]false expression handling and block pruning

Issue :- When a wrong or invalid filter value is given for a timestamp or date data type column, all blocks are identified for scan.

Solution :- Add false expression handling and a FalseFilterExecutor, which can be used to handle invalid filter values.
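As a minimal illustration (not part of the patch, and assuming a Spark session with CarbonData plus the date_test table created in FilterProcessorTestCase below), the fix means an unparseable date literal now prunes every block instead of scanning them all:

// Sketch only: mirrors the new tests added below.
val invalidOnly = sql("select * from date_test where dob = ''")
assert(invalidOnly.count() == 0)        // false filter => no blocks scanned, no rows

val withValidBranch = sql("select * from date_test where dob = '' or age = 13")
assert(withValidBranch.count() == 1)    // only the valid branch of the OR matches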

This closes #1915


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e16e8781
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e16e8781
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e16e8781

Branch: refs/heads/branch-1.3
Commit: e16e878189baa82bee5ca8af8d1229b7733b454a
Parents: fa6cd8d
Author: BJangir <ba...@gmail.com>
Authored: Fri Feb 2 16:33:45 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sun Feb 4 00:59:25 2018 +0530

----------------------------------------------------------------------
 .../scan/filter/FilterExpressionProcessor.java  |  3 +-
 .../carbondata/core/scan/filter/FilterUtil.java |  3 +
 .../filter/executer/FalseFilterExecutor.java    | 60 ++++++++++++++++
 .../scan/filter/intf/FilterExecuterType.java    |  2 +-
 .../FalseConditionalResolverImpl.java           | 61 ++++++++++++++++
 .../filterexpr/FilterProcessorTestCase.scala    | 74 +++++++++++++++++++-
 .../apache/spark/sql/CarbonBoundReference.scala |  4 ++
 .../execution/CastExpressionOptimization.scala  | 60 +++++++++++++---
 .../strategy/CarbonLateDecodeStrategy.scala     |  2 +
 .../spark/sql/optimizer/CarbonFilters.scala     |  4 ++
 10 files changed, 259 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
index 5a1b7df..3e23aa3 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
@@ -63,6 +63,7 @@ import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
 import org.apache.carbondata.core.scan.filter.resolver.LogicalFilterResolverImpl;
 import org.apache.carbondata.core.scan.filter.resolver.RowLevelFilterResolverImpl;
 import org.apache.carbondata.core.scan.filter.resolver.RowLevelRangeFilterResolverImpl;
+import org.apache.carbondata.core.scan.filter.resolver.resolverinfo.FalseConditionalResolverImpl;
 import org.apache.carbondata.core.scan.filter.resolver.resolverinfo.TrueConditionalResolverImpl;
 import org.apache.carbondata.core.scan.partition.PartitionUtil;
 import org.apache.carbondata.core.scan.partition.Partitioner;
@@ -398,7 +399,7 @@ public class FilterExpressionProcessor implements FilterProcessor {
     ConditionalExpression condExpression = null;
     switch (filterExpressionType) {
       case FALSE:
-        return new RowLevelFilterResolverImpl(expression, false, false, tableIdentifier);
+        return new FalseConditionalResolverImpl(expression, false, false, tableIdentifier);
       case TRUE:
         return new TrueConditionalResolverImpl(expression, false, false, tableIdentifier);
       case EQUALS:

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
index 3268ca3..a08edc0 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
@@ -74,6 +74,7 @@ import org.apache.carbondata.core.scan.filter.executer.AndFilterExecuterImpl;
 import org.apache.carbondata.core.scan.filter.executer.DimColumnExecuterFilterInfo;
 import org.apache.carbondata.core.scan.filter.executer.ExcludeColGroupFilterExecuterImpl;
 import org.apache.carbondata.core.scan.filter.executer.ExcludeFilterExecuterImpl;
+import org.apache.carbondata.core.scan.filter.executer.FalseFilterExecutor;
 import org.apache.carbondata.core.scan.filter.executer.FilterExecuter;
 import org.apache.carbondata.core.scan.filter.executer.ImplicitIncludeFilterExecutorImpl;
 import org.apache.carbondata.core.scan.filter.executer.IncludeColGroupFilterExecuterImpl;
@@ -176,6 +177,8 @@ public final class FilterUtil {
                   .getFilterRangeValues(segmentProperties), segmentProperties);
         case TRUE:
           return new TrueFilterExecutor();
+        case FALSE:
+          return new FalseFilterExecutor();
         case ROWLEVEL:
         default:
           return new RowLevelFilterExecuterImpl(

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/FalseFilterExecutor.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/FalseFilterExecutor.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/FalseFilterExecutor.java
new file mode 100644
index 0000000..2d2a15c
--- /dev/null
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/FalseFilterExecutor.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.scan.filter.executer;
+
+import java.io.IOException;
+import java.util.BitSet;
+
+import org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException;
+import org.apache.carbondata.core.scan.filter.intf.RowIntf;
+import org.apache.carbondata.core.scan.processor.BlocksChunkHolder;
+import org.apache.carbondata.core.util.BitSetGroup;
+
+/**
+ * Filter executor for the FALSE expression type.
+ * It applies the filter by returning an empty bitset for every page, so no row
+ * is selected and isScanRequired never marks a block for scanning.
+ * Exceptions declared by the FilterExecuter interface are never thrown here.
+ */
+public class FalseFilterExecutor implements FilterExecuter {
+
+  @Override
+  public BitSetGroup applyFilter(BlocksChunkHolder blocksChunkHolder, boolean useBitsetPipeline)
+      throws FilterUnsupportedException, IOException {
+    int numberOfPages = blocksChunkHolder.getDataBlock().numberOfPages();
+    BitSetGroup group = new BitSetGroup(numberOfPages);
+    for (int i = 0; i < numberOfPages; i++) {
+      BitSet set = new BitSet();
+      group.setBitSet(set, i);
+    }
+    return group;
+  }
+
+  @Override public boolean applyFilter(RowIntf value, int dimOrdinalMax)
+      throws FilterUnsupportedException, IOException {
+    return false;
+  }
+
+  @Override public BitSet isScanRequired(byte[][] blockMaxValue, byte[][] blockMinValue) {
+
+    return new BitSet();
+  }
+
+  @Override public void readBlocks(BlocksChunkHolder blockChunkHolder) throws IOException {
+    // Do Nothing
+  }
+}
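As a rough illustration of the behaviour above, using only plain java.util.BitSet from Scala (no Carbon classes): a bitset with no bits set is how "no row matches" and "no block needs scanning" are expressed, and that is all this executor ever returns.

import java.util.BitSet

val pageBits = new BitSet()              // applyFilter: one empty bitset per page
assert(pageBits.cardinality() == 0)      // no row on the page is selected
assert(!pageBits.get(0))                 // row 0, like every other row, is excluded

val scanRequired = new BitSet()          // isScanRequired: empty => skip every block
assert(scanRequired.isEmpty)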

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/intf/FilterExecuterType.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/intf/FilterExecuterType.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/intf/FilterExecuterType.java
index 42defc6..d10b2e5 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/intf/FilterExecuterType.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/intf/FilterExecuterType.java
@@ -21,6 +21,6 @@ import java.io.Serializable;
 public enum FilterExecuterType implements Serializable {
 
   INCLUDE, EXCLUDE, OR, AND, RESTRUCTURE, ROWLEVEL, RANGE, ROWLEVEL_GREATERTHAN,
-  ROWLEVEL_GREATERTHAN_EQUALTO, ROWLEVEL_LESSTHAN_EQUALTO, ROWLEVEL_LESSTHAN, TRUE
+  ROWLEVEL_GREATERTHAN_EQUALTO, ROWLEVEL_LESSTHAN_EQUALTO, ROWLEVEL_LESSTHAN, TRUE, FALSE
 
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/resolver/resolverinfo/FalseConditionalResolverImpl.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/resolver/resolverinfo/FalseConditionalResolverImpl.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/resolver/resolverinfo/FalseConditionalResolverImpl.java
new file mode 100644
index 0000000..eccda1e
--- /dev/null
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/resolver/resolverinfo/FalseConditionalResolverImpl.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.scan.filter.resolver.resolverinfo;
+
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.scan.expression.Expression;
+import org.apache.carbondata.core.scan.filter.TableProvider;
+import org.apache.carbondata.core.scan.filter.intf.FilterExecuterType;
+import org.apache.carbondata.core.scan.filter.resolver.ConditionalFilterResolverImpl;
+
+/* A filter expression resolved to FALSE is handled by setting an empty bitset. */
+public class FalseConditionalResolverImpl extends ConditionalFilterResolverImpl {
+
+  private static final long serialVersionUID = 4599541011924324041L;
+
+  public FalseConditionalResolverImpl(Expression exp, boolean isExpressionResolve,
+      boolean isIncludeFilter, AbsoluteTableIdentifier tableIdentifier) {
+    super(exp, isExpressionResolve, isIncludeFilter, tableIdentifier, false);
+  }
+
+  @Override public void resolve(AbsoluteTableIdentifier absoluteTableIdentifier,
+      TableProvider tableProvider) {
+  }
+
+  /**
+   * This method provides the executer type to the callee in order to identify
+   * the executer type for the filter resolution. A false expression will not execute anything;
+   * it will return an empty bitset.
+   */
+  @Override public FilterExecuterType getFilterExecuterType() {
+    return FilterExecuterType.FALSE;
+  }
+
+  /**
+   * Method will read the filter expression corresponding to the resolver.
+   * This method is required by the row level executer in order to evaluate the filter
+   * expression against Spark; as mentioned above, row level is a special type of
+   * filter resolver.
+   *
+   * @return Expression
+   */
+  public Expression getFilterExpresion() {
+    return exp;
+  }
+
+}
+
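A simplified Scala sketch of the dispatch chain this commit introduces (the names below are abbreviations, not the real Carbon APIs): a FALSE expression resolves to the resolver above, whose executer type then selects FalseFilterExecutor in FilterUtil.

sealed trait ExecuterType
case object FALSE extends ExecuterType
case object ROWLEVEL extends ExecuterType

// FilterExpressionProcessor: a FALSE expression gets its own resolver,
// which reports FilterExecuterType.FALSE (see getFilterExecuterType above).
def executerTypeFor(expressionType: String): ExecuterType = expressionType match {
  case "FALSE" => FALSE
  case _       => ROWLEVEL           // other expression types elided
}

// FilterUtil: the FALSE executer type maps to the executor that prunes everything.
def executorFor(t: ExecuterType): String = t match {
  case FALSE    => "FalseFilterExecutor"
  case ROWLEVEL => "RowLevelFilterExecuterImpl"
}

assert(executorFor(executerTypeFor("FALSE")) == "FalseFilterExecutor")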

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
index d54906f..147756f 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
@@ -17,9 +17,9 @@
 
 package org.apache.carbondata.spark.testsuite.filterexpr
 
-import java.sql.Timestamp
+import java.sql.{Date, Timestamp}
 
-import org.apache.spark.sql.Row
+import org.apache.spark.sql.{DataFrame, Row}
 import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
@@ -132,6 +132,11 @@ class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
     sql(s"""LOAD DATA INPATH '$resourcesPath/big_int_Decimal.csv'  INTO TABLE big_int_basicc_1 options ('DELIMITER'=',', 'QUOTECHAR'='\"', 'COMPLEX_DELIMITER_LEVEL_1'='$$','COMPLEX_DELIMITER_LEVEL_2'=':', 'FILEHEADER'= '')""")
     sql(s"load data local inpath '$resourcesPath/big_int_Decimal.csv' into table big_int_basicc_Hive")
     sql(s"load data local inpath '$resourcesPath/big_int_Decimal.csv' into table big_int_basicc_Hive_1")
+
+    sql("create table if not exists date_test(name String, age int, dob date,doj timestamp) stored by 'carbondata' ")
+    sql("insert into date_test select 'name1',12,'2014-01-01','2014-01-01 00:00:00' ")
+    sql("insert into date_test select 'name2',13,'2015-01-01','2015-01-01 00:00:00' ")
+    sql("insert into date_test select 'name3',14,'2016-01-01','2016-01-01 00:00:00' ")
   }
 
   test("Is not null filter") {
@@ -287,6 +292,70 @@ class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table if exists outofrange")
   }
 
+  test("check invalid  date value") {
+    val df=sql("select * from date_test where dob=''")
+    assert(df.count()==0,"Wrong data are displayed on invalid date ")
+  }
+
+  test("check invalid  date with and filter value ") {
+    val df=sql("select * from date_test where dob='' and age=13")
+    assert(df.count()==0,"Wrong data are displayed on invalid date ")
+  }
+
+  test("check invalid  date with or filter value ") {
+    val df=sql("select * from date_test where dob='' or age=13")
+    checkAnswer(df,Seq(Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0"))))
+  }
+
+  test("check invalid  date Geaterthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0")),
+      Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0")),
+      Row("name3",14,Date.valueOf("2016-01-01"),Timestamp.valueOf("2016-01-01 00:00:00.0"))))
+  }
+  test("check invalid  date Geaterthan and lessthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' and doj < '2015-01-01' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0"))))
+  }
+  test("check invalid  date Geaterthan or lessthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' or doj < '2015-01-01' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0")),
+      Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0")),
+      Row("name3",14,Date.valueOf("2016-01-01"),Timestamp.valueOf("2016-01-01 00:00:00.0"))))
+  }
+
+  test("check invalid  timestamp value") {
+    val df=sql("select * from date_test where dob=''")
+    assert(df.count()==0,"Wrong data are displayed on invalid timestamp ")
+  }
+
+  test("check invalid  timestamp with and filter value ") {
+    val df=sql("select * from date_test where doj='' and age=13")
+    assert(df.count()==0,"Wrong data are displayed on invalid timestamp ")
+  }
+
+  test("check invalid  timestamp with or filter value ") {
+    val df=sql("select * from date_test where doj='' or age=13")
+    checkAnswer(df,Seq(Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0"))))
+  }
+
+  test("check invalid  timestamp Geaterthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0")),
+      Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0")),
+    Row("name3",14,Date.valueOf("2016-01-01"),Timestamp.valueOf("2016-01-01 00:00:00.0"))))
+  }
+  test("check invalid  timestamp Geaterthan and lessthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' and doj < '2015-01-01 00:00:00' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0"))))
+  }
+  test("check invalid  timestamp Geaterthan or lessthan filter value ") {
+    val df=sql("select * from date_test where doj > '0' or doj < '2015-01-01 00:00:00' ")
+    checkAnswer(df,Seq(Row("name1",12,Date.valueOf("2014-01-01"),Timestamp.valueOf("2014-01-01 00:00:00.0")),
+      Row("name2",13,Date.valueOf("2015-01-01"),Timestamp.valueOf("2015-01-01 00:00:00.0")),
+      Row("name3",14,Date.valueOf("2016-01-01"),Timestamp.valueOf("2016-01-01 00:00:00.0"))))
+  }
+
   test("like% test case with restructure") {
     sql("drop table if exists like_filter")
     sql(
@@ -319,6 +388,7 @@ class FilterProcessorTestCase extends QueryTest with BeforeAndAfterAll {
     sql("DROP TABLE IF EXISTS filtertestTablesWithNullJoin")
     sql("drop table if exists like_filter")
     CompactionSupportGlobalSortBigFileTest.deleteFile(file1)
+    sql("drop table if exists date_test")
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "dd-MM-yyyy")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonBoundReference.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonBoundReference.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonBoundReference.scala
index a043342..aa650e0 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonBoundReference.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonBoundReference.scala
@@ -29,6 +29,10 @@ case class CastExpr(expr: Expression) extends Filter {
   override def references: Array[String] = null
 }
 
+case class FalseExpr() extends Filter {
+  override def references: Array[String] = null
+}
+
 case class CarbonBoundReference(colExp: ColumnExpression, dataType: DataType, nullable: Boolean)
   extends LeafExpression with NamedExpression with CodegenFallback {
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
index 2ff8c42..2de3fe6 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
@@ -25,6 +25,7 @@ import scala.collection.JavaConverters._
 
 import org.apache.spark.sql.catalyst.expressions.{Attribute, EmptyRow, EqualTo, Expression, GreaterThan, GreaterThanOrEqual, In, LessThan, LessThanOrEqual, Literal, Not}
 import org.apache.spark.sql.CastExpr
+import org.apache.spark.sql.FalseExpr
 import org.apache.spark.sql.sources
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.CarbonExpressions.{MatchCast => Cast}
@@ -48,7 +49,7 @@ object CastExpressionOptimization {
           CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT))
       parser.setTimeZone(TimeZone.getTimeZone("GMT"))
     } else {
-      throw new UnsupportedOperationException ("Unsupported DataType being evaluated.")
+      throw new UnsupportedOperationException("Unsupported DataType being evaluated.")
     }
     try {
       val value = parser.parse(v.toString).getTime() * 1000L
@@ -123,6 +124,7 @@ object CastExpressionOptimization {
       tempList.asScala
     }
   }
+
   /**
    * This routines tries to apply rules on Cast Filter Predicates and if the rules applied and the
    * values can be toss back to native datatypes the cast is removed. Current two rules are applied
@@ -238,7 +240,7 @@ object CastExpressionOptimization {
       case c@GreaterThan(Cast(a: Attribute, _), Literal(v, t)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -248,7 +250,7 @@ object CastExpressionOptimization {
       case c@GreaterThan(Literal(v, t), Cast(a: Attribute, _)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -258,7 +260,7 @@ object CastExpressionOptimization {
       case c@LessThan(Cast(a: Attribute, _), Literal(v, t)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -268,7 +270,7 @@ object CastExpressionOptimization {
       case c@LessThan(Literal(v, t), Cast(a: Attribute, _)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -278,7 +280,7 @@ object CastExpressionOptimization {
       case c@GreaterThanOrEqual(Cast(a: Attribute, _), Literal(v, t)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -288,7 +290,7 @@ object CastExpressionOptimization {
       case c@GreaterThanOrEqual(Literal(v, t), Cast(a: Attribute, _)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -298,7 +300,7 @@ object CastExpressionOptimization {
       case c@LessThanOrEqual(Cast(a: Attribute, _), Literal(v, t)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -308,7 +310,7 @@ object CastExpressionOptimization {
       case c@LessThanOrEqual(Literal(v, t), Cast(a: Attribute, _)) =>
         a.dataType match {
           case ts@(_: DateType | _: TimestampType) if t.sameType(StringType) =>
-            updateFilterForTimeStamp(v, c, ts)
+            updateFilterForNonEqualTimeStamp(v, c, updateFilterForTimeStamp(v, c, ts))
           case i: IntegerType if t.sameType(DoubleType) =>
             updateFilterForInt(v, c)
           case s: ShortType if t.sameType(IntegerType) =>
@@ -320,6 +322,7 @@ object CastExpressionOptimization {
 
   /**
    * the method removes the cast for short type columns
+   *
    * @param actualValue
    * @param exp
    * @return
@@ -350,6 +353,41 @@ object CastExpressionOptimization {
   }
 
   /**
+   * For non-equality timestamp filters: if the literal could not be parsed as a
+   * timestamp but is still a valid numeric value, the filter is handed back to
+   * Spark as a CastExpr instead of a FalseExpr.
+   *
+   * @param actualValue actual value of filter
+   * @param exp         expression
+   * @param filter      Filter Expression
+   * @return a CastExpr or the same Filter
+   */
+  def updateFilterForNonEqualTimeStamp(actualValue: Any, exp: Expression, filter: Option[Filter]):
+  Option[sources.Filter] = {
+    filter.get match {
+      case FalseExpr() if (validTimeComparisionForSpark(actualValue)) =>
+        Some(CastExpr(exp))
+      case _ =>
+        filter
+    }
+  }
+
+  /**
+   * Spark also compares timestamp data against numeric literals as double.
+   * Ex. select * ... where time > 0 would return all data,
+   * so such filters are better handed back to Spark as a Cast Expression.
+   *
+   * @param numericTimeValue
+   * @return true if the value is a valid double, else false
+   */
+  def validTimeComparisionForSpark(numericTimeValue: Any): Boolean = {
+    try {
+      numericTimeValue.toString.toDouble
+      true
+    } catch {
+      case _ => false
+    }
+  }
+
+
+  /**
    * the method removes the cast for timestamp type columns
    *
    * @param actualValue
@@ -362,10 +400,12 @@ object CastExpressionOptimization {
     if (!newValue.equals(actualValue)) {
       updateFilterBasedOnFilterType(exp, newValue)
     } else {
-      Some(CastExpr(exp))
+      Some(FalseExpr())
     }
+
   }
 
+
   /**
    * the method removes the cast for the respective filter type
    *
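
For readers skimming the hunks above: updateFilterForTimeStamp now collapses an unparsable timestamp literal to FalseExpr(), and for the non-equality comparisons updateFilterForNonEqualTimeStamp downgrades that back to a CastExpr when the literal is still a valid number, since Spark compares such values as doubles (e.g. time >= 0 should return all rows rather than none). Below is a minimal, self-contained sketch of that decision; PushedFilter, FalseFilter, CastFilter and the method names are illustrative stand-ins, not CarbonData's actual classes.

    // Illustrative sketch only -- simplified stand-ins for CarbonData's pushed-down filters.
    sealed trait PushedFilter
    case object FalseFilter extends PushedFilter               // matches no rows
    case class CastFilter(expr: String) extends PushedFilter   // let Spark evaluate the cast

    object TimestampFilterSketch {
      // Mirrors validTimeComparisionForSpark: a literal Spark can still compare as a double.
      def isNumericLiteral(value: Any): Boolean =
        scala.util.Try(value.toString.toDouble).isSuccess

      // Mirrors updateFilterForNonEqualTimeStamp: keep the filter unless it collapsed to
      // "false" while the literal is numeric, in which case hand Spark a cast expression.
      def forNonEqualComparison(value: Any, expr: String, filter: PushedFilter): PushedFilter =
        filter match {
          case FalseFilter if isNumericLiteral(value) => CastFilter(expr)
          case other => other
        }

      def main(args: Array[String]): Unit = {
        // 'time >= 0': not a parsable timestamp string, but numeric, so Spark gets a cast.
        println(forNonEqualComparison(0, "time >= 0", FalseFilter))         // CastFilter(time >= 0)
        // 'time >= abc': neither a timestamp nor a number, so it stays a match-nothing filter.
        println(forNonEqualComparison("abc", "time >= 'abc'", FalseFilter)) // FalseFilter
      }
    }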

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala
index 4b1d11b..544c494 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala
@@ -616,6 +616,8 @@ private[sql] class CarbonLateDecodeStrategy extends SparkStrategy {
         Some(CarbonEndsWith(c))
       case c@Contains(a: Attribute, Literal(v, t)) =>
         Some(CarbonContainsWith(c))
+      case c@Literal(v, t) if (v == null) =>
+        Some(FalseExpr())
       case others => None
     }
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonFilters.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonFilters.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonFilters.scala
index 4d91375..c7767ce 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonFilters.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonFilters.scala
@@ -135,6 +135,8 @@ object CarbonFilters {
           }))
         case CastExpr(expr: Expression) =>
           Some(transformExpression(expr))
+        case FalseExpr() =>
+          Some(new FalseExpression(null))
         case _ => None
       }
     }
@@ -269,6 +271,8 @@ object CarbonFilters {
           Some(CarbonContainsWith(c))
         case c@Cast(a: Attribute, _) =>
           Some(CastExpr(c))
+        case c@Literal(v, t) if v == null =>
+          Some(FalseExpr())
         case others =>
           if (!or) {
             others.collect {
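
Taken together, the CarbonLateDecodeStrategy and CarbonFilters hunks above translate a bare null literal in a pushed-down predicate into an always-false filter (FalseExpr, later turned into FalseExpression(null)) instead of leaving it untranslated. A minimal sketch of that mapping follows, assuming simplified stand-ins for Spark's Literal and Carbon's filter marker; Literal, CarbonPushedFilter, FalseMarker and translate are illustrative names, not the real types.

    // Illustrative only: a null literal can never match a row, so it becomes an always-false filter.
    case class Literal(value: Any)
    sealed trait CarbonPushedFilter
    case object FalseMarker extends CarbonPushedFilter  // stands in for FalseExpr / FalseExpression(null)

    object NullLiteralSketch {
      def translate(lit: Literal): Option[CarbonPushedFilter] = lit match {
        case Literal(null) => Some(FalseMarker)
        case _             => None                      // other literals are handled by other cases
      }

      def main(args: Array[String]): Unit = {
        println(translate(Literal(null)))  // Some(FalseMarker)
        println(translate(Literal(42)))    // None
      }
    }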


[06/50] [abbrv] carbondata git commit: [CARBONDATA-2089] SQL exception is masked due to assert(false) inside try/catch, and the exception block always asserts true

Posted by ra...@apache.org.
[CARBONDATA-2089] SQL exception is masked due to assert(false) inside try/catch, and the exception block always asserts true

Correct all SDV test cases to use intercept[Exception]

This closes #1871


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/3dff273b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/3dff273b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/3dff273b

Branch: refs/heads/branch-1.3
Commit: 3dff273b4f1308fa76a91f6f22bb40eb2d2d9553
Parents: b2139ca
Author: Raghunandan S <ca...@gmail.com>
Authored: Sat Jan 27 20:49:47 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Wed Jan 31 19:28:09 2018 +0800

----------------------------------------------------------------------
 .../sdv/generated/AlterTableTestCase.scala      | 250 ++++++---------
 .../sdv/generated/BatchSortLoad1TestCase.scala  |  39 +--
 .../sdv/generated/BatchSortLoad2TestCase.scala  |  32 +-
 .../sdv/generated/BatchSortQueryTestCase.scala  | 290 +++--------------
 .../sdv/generated/BucketingTestCase.scala       |  12 +-
 .../sdv/generated/ColumndictTestCase.scala      |  60 +---
 .../sdv/generated/DataLoadingIUDTestCase.scala  | 318 ++++++++-----------
 .../sdv/generated/DataLoadingTestCase.scala     |   7 +-
 .../sdv/generated/InvertedindexTestCase.scala   |  14 +-
 .../sdv/generated/OffheapQuery1TestCase.scala   | 287 +++--------------
 .../sdv/generated/OffheapQuery2TestCase.scala   | 286 +++--------------
 .../sdv/generated/OffheapSort1TestCase.scala    |  10 +-
 .../sdv/generated/OffheapSort2TestCase.scala    |  10 +-
 .../sdv/generated/PartitionTestCase.scala       |  71 ++---
 .../sdv/generated/SinglepassTestCase.scala      |  76 ++---
 15 files changed, 423 insertions(+), 1339 deletions(-)
----------------------------------------------------------------------
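
The pattern corrected throughout the diffs below, as a minimal ScalaTest sketch; the suite, the sql stub and the SQL statements are made up for illustration, while FunSuite and intercept are the plain ScalaTest facilities these suites already use. With the old style, assert(false) inside try/catch throws a TestFailedException that the catch-all swallows, so the test passes whether or not the expected SQL error occurred and the real exception is masked; intercept[Exception] instead fails the test when nothing is thrown and returns the exception for inspection.

    import org.scalatest.FunSuite

    // Illustrative only: contrasts the masked-assert anti-pattern with intercept.
    class InterceptSketch extends FunSuite {

      // Stand-in for the SQL engine: always fails, as the negative tests expect.
      def sql(statement: String): Unit =
        throw new RuntimeException(s"parse error in: $statement")

      // Old style: assert(false) throws TestFailedException, the catch-all swallows it,
      // so the test passes whether or not the expected SQL error happened.
      test("masked failure (old style)") {
        try {
          sql("alter table test1 RENAME TO test2 test3")
          assert(false)
        } catch {
          case _: Throwable => assert(true)
        }
      }

      // New style: intercept fails the test if no exception is thrown and returns
      // the thrown exception so its message can be checked.
      test("unmasked failure (new style)") {
        val e = intercept[Exception] {
          sql("alter table test1 RENAME TO test2 test3")
        }
        assert(e.getMessage.contains("parse error"))
      }
    }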


http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index b1a0f34..8899f5c 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -120,141 +120,107 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check alter table when the altered name is already present in the database
   test("RenameTable_001_08", Include) {
-    try {
-       sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""create table test2 (name string, id int) stored by 'carbondata'""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx',1""").collect
+      sql(s"""create table test2 (name string, id int) stored by 'carbondata'""").collect
       sql(s"""alter table test1 RENAME TO test2""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
-   sql(s"""drop table if exists test2""").collect
+
+    sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test2""").collect
   }
 
 
   //Check alter table when the altered name is given multiple times
   test("RenameTable_001_09", Include) {
-    try {
-       sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx',1""").collect
       sql(s"""alter table test1 RENAME TO test2 test3""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for dimension column
   test("DeleteCol_001_01", Include) {
-    try {
-     sql(s"""create table test1 (name string, id int) stored by 'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='id') """).collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""alter table test1 drop columns (name)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='id') """).collect
+      sql(s"""insert into test1 select 'xx',1""").collect
+      sql(s"""alter table test1 drop columns (name)""").collect
       sql(s"""select name from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for measure column
   test("DeleteCol_001_02", Include) {
-    try {
-     sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""alter table test1 drop columns (id)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx',1""").collect
+      sql(s"""alter table test1 drop columns (id)""").collect
       sql(s"""select id from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for measure and dimension column
   test("DeleteCol_001_03", Include) {
-    try {
-     sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
-   sql(s"""alter table test1 drop columns (id,name)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
+      sql(s"""alter table test1 drop columns (id,name)""").collect
       sql(s"""select id,name  from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for multiple column
   test("DeleteCol_001_04", Include) {
-    try {
-     sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'  TBLPROPERTIES('DICTIONARY_INCLUDE'='id')""").collect
-   sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
-   sql(s"""alter table test1 drop columns (name, upd_time)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'  TBLPROPERTIES('DICTIONARY_INCLUDE'='id')""").collect
+      sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
+      sql(s"""alter table test1 drop columns (name, upd_time)""").collect
       sql(s"""select name, upd_time from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for all columns
   test("DeleteCol_001_05", Include) {
-    try {
-       sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
-      sql(s"""alter table test1 drop columns (name, upd_time, country,id)""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
-    }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
+    sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
+    sql(s"""alter table test1 drop columns (name, upd_time, country,id)""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for include dictionary column
   test("DeleteCol_001_06", Include) {
-    try {
-     sql(s"""create table test1 (name string, id int) stored by 'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='id')""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""alter table test1 drop columns (id)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='id')""").collect
+      sql(s"""insert into test1 select 'xx',1""").collect
+      sql(s"""alter table test1 drop columns (id)""").collect
       sql(s"""select id from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for timestamp column
   test("DeleteCol_001_08", Include) {
-    try {
-     sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
-   sql(s"""alter table test1 drop columns (upd_time)""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
+      sql(s"""alter table test1 drop columns (upd_time)""").collect
       sql(s"""select upd_time from test1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
@@ -272,17 +238,13 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check the drop of added column will remove the column from table
   test("DeleteCol_001_09_2", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""create table test1 (name string, country string, upd_time timestamp, id int) stored by 'carbondata'""").collect
      sql(s"""insert into test1 select 'xx','yy',current_timestamp,1""").collect
      sql(s"""alter table test1 add columns (name2 string)""").collect
      sql(s"""insert into test1 select 'xx','yy',current_timestamp,1,'abc'""").collect
      sql(s"""alter table test1 drop columns (name2)""").collect
      sql(s"""select count(id) from test1 where name2 = 'abc'""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists test1""").collect
   }
@@ -451,16 +413,13 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check add column with option default value is given for an existing column
   test("AddColumn_001_14", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""drop table if exists test1""").collect
       sql(s"""create table test1 (name string) stored by 'carbondata'""").collect
       sql(s"""insert into test1 select 'xx'""").collect
       sql(s"""ALTER TABLE test1 ADD COLUMNS (Id int) TBLPROPERTIES('DICTIONARY_INCLUDE'='id','default.value.name'='yy')""").collect
-      assert(false)
-      sql(s"""drop table if exists test1""").collect
-    } catch {
-      case _ => assert(true)
     }
+    sql(s"""drop table if exists test1""").collect
   }
 
 
@@ -489,17 +448,14 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //check drop table after table rename using old name
   test("DropTable_001_02", Include) {
-    try {
+    intercept[Exception] {
       sql(s"""drop table if exists test1""").collect
-     sql(s"""create table test1 (name string, price decimal(3,2)) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1.2""").collect
-   sql(s"""alter table test1 rename to test2""").collect
+      sql(s"""create table test1 (name string, price decimal(3,2)) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx',1.2""").collect
+      sql(s"""alter table test1 rename to test2""").collect
       sql(s"""drop table test1""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test2""").collect
+    sql(s"""drop table if exists test2""").collect
   }
 
 
@@ -734,15 +690,12 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check show segments on old table After altering the Table name.
   test("Showsegme_001_01", Include) {
-    try {
-       sql(s"""create table test1 (country string, id int) stored by 'carbondata'""").collect
-   sql(s"""alter table test1 rename to test2""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (country string, id int) stored by 'carbondata'""").collect
+      sql(s"""alter table test1 rename to test2""").collect
       sql(s"""show segments for table test1""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test2""").collect
+    sql(s"""drop table if exists test2""").collect
   }
 
 
@@ -828,65 +781,53 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check vertical compaction when all segments are created before drop column, check dropped column is not used in the compation
   test("Compaction_001_06", Include) {
-    try {
-     sql(s"""drop table if exists test1""").collect
-   sql(s"""drop table if exists test2""").collect
-   sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','china',1""").collect
-   sql(s"""insert into test1 select 'xe','china',2""").collect
-   sql(s"""insert into test1 select 'xe','china',3""").collect
-   sql(s"""alter table test1 drop columns (country)""").collect
-   sql(s"""alter table test1 compact 'minor'""").collect
+    intercept[Exception] {
+      sql(s"""drop table if exists test1""").collect
+      sql(s"""drop table if exists test2""").collect
+      sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx','china',1""").collect
+      sql(s"""insert into test1 select 'xe','china',2""").collect
+      sql(s"""insert into test1 select 'xe','china',3""").collect
+      sql(s"""alter table test1 drop columns (country)""").collect
+      sql(s"""alter table test1 compact 'minor'""").collect
       sql(s"""select country from test1 where country='china'""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check vertical compaction when some of the segments are created before drop column, check dropped column is not used in the compation
   test("Compaction_001_07", Include) {
-    try {
-     sql(s"""drop table if exists test1""").collect
-   sql(s"""drop table if exists test2""").collect
-   sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','china',1""").collect
-   sql(s"""insert into test1 select 'xe','china',2""").collect
-   sql(s"""alter table test1 drop columns (country)""").collect
-   sql(s"""insert into test1 select 'xe',3""").collect
-   sql(s"""alter table test1 compact 'minor'""").collect
+    intercept[Exception] {
+      sql(s"""drop table if exists test1""").collect
+      sql(s"""drop table if exists test2""").collect
+      sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx','china',1""").collect
+      sql(s"""insert into test1 select 'xe','china',2""").collect
+      sql(s"""alter table test1 drop columns (country)""").collect
+      sql(s"""insert into test1 select 'xe',3""").collect
+      sql(s"""alter table test1 compact 'minor'""").collect
       sql(s"""select country from test1 where country='china'""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check vertical compaction for multiple drop column, check dropped column is not used in the compation
   test("Compaction_001_08", Include) {
-    try {
-     sql(s"""drop table if exists test1""").collect
-   sql(s"""drop table if exists test2""").collect
-   sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx','china',1""").collect
-   sql(s"""alter table test1 drop columns (country)""").collect
-   sql(s"""insert into test1 select 'xe',3""").collect
-   sql(s"""alter table test1 drop columns (id)""").collect
-   sql(s"""insert into test1 select 'xe'""").collect
-   sql(s"""alter table test1 compact 'minor'""").collect
+    intercept[Exception] {
+      sql(s"""drop table if exists test1""").collect
+      sql(s"""drop table if exists test2""").collect
+      sql(s"""create table test1(name string, country string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx','china',1""").collect
+      sql(s"""alter table test1 drop columns (country)""").collect
+      sql(s"""insert into test1 select 'xe',3""").collect
+      sql(s"""alter table test1 drop columns (id)""").collect
+      sql(s"""insert into test1 select 'xe'""").collect
+      sql(s"""alter table test1 compact 'minor'""").collect
       sql(s"""select country from test1 where id=1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test1""").collect
+    sql(s"""drop table if exists test1""").collect
   }
 
 
@@ -989,17 +930,14 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check delete segment is not allowed on old table name when table name is altered
   test("DeleteSeg_001_01", Include) {
-    try {
-       sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""insert into test1 select 'xx',12""").collect
-   sql(s"""alter table test1 rename to test2""").collect
+    intercept[Exception] {
+      sql(s"""create table test1 (name string, id int) stored by 'carbondata'""").collect
+      sql(s"""insert into test1 select 'xx',1""").collect
+      sql(s"""insert into test1 select 'xx',12""").collect
+      sql(s"""alter table test1 rename to test2""").collect
       sql(s"""delete from table test1 where segment.id in (0)""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table if exists test2""").collect
+    sql(s"""drop table if exists test2""").collect
   }
 
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad1TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad1TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad1TestCase.scala
index 9eb5dec..d301218 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad1TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad1TestCase.scala
@@ -68,27 +68,21 @@ class BatchSortLoad1TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To load data after setting sort scope and sort size in carbon property file without folder path in load
   test("Batch_sort_Loading_001-01-01-01_001-TC_004", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata13 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata13 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
       sql(s"""LOAD DATA  into table uniqdata13 OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdata13""").collect
+    sql(s"""drop table uniqdata13""").collect
   }
 
 
   //To load data after setting sort scope and sort size in carbon property file without table_name in load
   test("Batch_sort_Loading_001-01-01-01_001-TC_005", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata14 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata14 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
       sql(s"""LOAD DATA  INPATH '$resourcesPath/Data/uniqdata/2000_UniqData.csv' into table OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdata14""").collect
+    sql(s"""drop table uniqdata14""").collect
   }
 
 
@@ -232,14 +226,11 @@ class BatchSortLoad1TestCase extends QueryTest with BeforeAndAfterAll {
   //To load data after setting sort scope and sort size in carbon property file with ALL_DICTIONARY_PATH
   test("Batch_sort_Loading_001-01-01-01_001-TC_019", Include) {
     sql(s"""drop table if exists t3""").collect
-    try {
+    intercept[Exception] {
       sql(s"""CREATE TABLE t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/batchsort/data.csv' into table t3 options('ALL_DICTIONARY_PATH'='resourcesPath/Data/batchsort/data.dictionary')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table t3""").collect
+    sql(s"""drop table t3""").collect
   }
 
 
@@ -260,22 +251,16 @@ class BatchSortLoad1TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check sort_scope option with a wrong value
   test("Batch_sort_Loading_001-01-01-01_001-TC_023", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata' TBLPROPERTIES('SORT_SCOPE'='ABCXYZ')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata' TBLPROPERTIES('SORT_SCOPE'='ABCXYZ')""").collect
     }
   }
 
 
   //To check sort_scope option with null value
   test("Batch_sort_Loading_001-01-01-01_001-TC_024", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata' TBLPROPERTIES('SORT_SCOPE'='null')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata' TBLPROPERTIES('SORT_SCOPE'='null')""").collect
     }
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad2TestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad2TestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad2TestCase.scala
index 5fa6594..d3ff6aa 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad2TestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortLoad2TestCase.scala
@@ -69,27 +69,21 @@ class BatchSortLoad2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To load data after setting only sort scope in carbon property file without folder path in load
   test("Batch_sort_Loading_001-01-01-01_001-TC_030", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata13 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata13 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
       sql(s"""LOAD DATA  into table uniqdata13 OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdata13""").collect
+    sql(s"""drop table uniqdata13""").collect
   }
 
 
   //To load data after setting only sort scope in carbon property file without table_name in load
   test("Batch_sort_Loading_001-01-01-01_001-TC_031", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata14 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata14 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
       sql(s"""LOAD DATA  INPATH '$resourcesPath/Data/uniqdata/2000_UniqData.csv' into table OPTIONS('DELIMITER'=',' , 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdata14""").collect
+    sql(s"""drop table uniqdata14""").collect
   }
 
 
@@ -255,12 +249,9 @@ class BatchSortLoad2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check sort_scope option with a wrong value
   test("Batch_sort_Loading_001-01-01-01_001-TC_049", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/uniqdata/7000_UniqData.csv' into table uniqdata20a OPTIONS('DELIMITER'=',' , 'SORT_SCOPE'='ABCXYZ',‘SINGLE_PASS’=’true’,'QUOTECHAR'='"','COMMENTCHAR'='#','MULTILINE'='true','ESCAPECHAR'='\','BAD_RECORDS_ACTION'='REDIRECT','BAD_RECORDS_LOGGER_ENABLE'='TRUE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table uniqdata20a""").collect
   }
@@ -268,14 +259,11 @@ class BatchSortLoad2TestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check sort_scope option with null value
   test("Batch_sort_Loading_001-01-01-01_001-TC_050", Include) {
-    try {
-     sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata'""").collect
+    intercept[Exception] {
+      sql(s"""CREATE TABLE uniqdata20a (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/uniqdata/7000_UniqData.csv' into table uniqdata20a OPTIONS('DELIMITER'=',' , 'SORT_SCOPE'='null',‘SINGLE_PASS’=’true’,'QUOTECHAR'='"','COMMENTCHAR'='#','MULTILINE'='true','ESCAPECHAR'='\','BAD_RECORDS_ACTION'='REDIRECT','BAD_RECORDS_LOGGER_ENABLE'='TRUE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdata20a""").collect
+    sql(s"""drop table uniqdata20a""").collect
   }
 
   val prop = CarbonProperties.getInstance()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortQueryTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortQueryTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortQueryTestCase.scala
index cdebf51..11b060a 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortQueryTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BatchSortQueryTestCase.scala
@@ -44,15 +44,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check select query with limit as string
   test("Batch_sort_Querying_001-01-01-01_001-TC_002", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 limit """"").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -110,57 +104,33 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check where clause with OR and no operand
   test("Batch_sort_Querying_001-01-01-01_001-TC_009", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id > 1 OR """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check OR clause with LHS and RHS having no arguments
   test("Batch_sort_Querying_001-01-01-01_001-TC_010", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where OR """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check OR clause with LHS having no arguments
   test("Batch_sort_Querying_001-01-01-01_001-TC_011", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where OR cust_id > "1"""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check incorrect query
   test("Batch_sort_Querying_001-01-01-01_001-TC_013", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id > 0 OR name  """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -229,15 +199,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check select count and distinct query execution
   test("Batch_sort_Querying_001-01-01-01_001-TC_021", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select count(cust_id),distinct(cust_name) from uniqdataquery1""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -279,15 +243,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check query execution with IN operator without paranthesis
   test("Batch_sort_Querying_001-01-01-01_001-TC_027", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id IN 9000,9005""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -302,15 +260,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check query execution with IN operator with out specifying any field.
   test("Batch_sort_Querying_001-01-01-01_001-TC_029", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where IN(1,2)""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -352,15 +304,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check AND with using booleans in invalid syntax
   test("Batch_sort_Querying_001-01-01-01_001-TC_034", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where AND true""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -384,15 +330,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check AND using 0 and 1 treated as boolean values
   test("Batch_sort_Querying_001-01-01-01_001-TC_037", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where true aNd 0""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -416,29 +356,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '='operator without Passing any value
   test("Batch_sort_Querying_001-01-01-01_001-TC_040", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id=""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '='operator without Passing columnname and value.
   test("Batch_sort_Querying_001-01-01-01_001-TC_041", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where =""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -453,15 +381,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '!='operator by keeping space between them
   test("Batch_sort_Querying_001-01-01-01_001-TC_043", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id !   = 9001""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -476,29 +398,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '!='operator without providing any value
   test("Batch_sort_Querying_001-01-01-01_001-TC_045", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id != """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '!='operator without providing any column name
   test("Batch_sort_Querying_001-01-01-01_001-TC_046", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where  != false""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -540,43 +450,25 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check 'NOT' operator in nested way
   test("Batch_sort_Querying_001-01-01-01_001-TC_051", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id NOT (NOT(true))""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check 'NOT' operator with parenthesis.
   test("Batch_sort_Querying_001-01-01-01_001-TC_052", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id NOT ()""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check 'NOT' operator without condition.
   test("Batch_sort_Querying_001-01-01-01_001-TC_053", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id NOT""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -591,29 +483,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '>' operator without specifying column
   test("Batch_sort_Querying_001-01-01-01_001-TC_055", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where > 20""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '>' operator without specifying value
   test("Batch_sort_Querying_001-01-01-01_001-TC_056", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id > """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -646,15 +526,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<' operator without specifying column
   test("Batch_sort_Querying_001-01-01-01_001-TC_060", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where < 5""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -678,29 +552,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<=' operator without specifying column
   test("Batch_sort_Querying_001-01-01-01_001-TC_063", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where  <= 2""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check '<=' operator without providing value
   test("Batch_sort_Querying_001-01-01-01_001-TC_064", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where  cust_id <= """).collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -715,29 +577,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check '<=' operator adding space between'<' and  '='
   test("Batch_sort_Querying_001-01-01-01_001-TC_066", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id < =  9002""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check 'BETWEEN' operator without providing range
   test("Batch_sort_Querying_001-01-01-01_001-TC_067", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where age between""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -797,29 +647,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check  'IS NULL' without providing column
   test("Batch_sort_Querying_001-01-01-01_001-TC_074", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where Is NulL""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check  'IS NOT NULL' without providing column
   test("Batch_sort_Querying_001-01-01-01_001-TC_075", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where IS NOT NULL""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -852,29 +690,17 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Limit clause with where condition and no argument
   test("Batch_sort_Querying_001-01-01-01_001-TC_079", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id=10987 limit""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check Limit clause with where condition and decimal argument
   test("Batch_sort_Querying_001-01-01-01_001-TC_080", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id=10987 limit 0.0""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -927,15 +753,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Full join
   test("Batch_sort_Querying_001-01-01-01_001-TC_086", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select uniqdataquery1.CUST_ID from uniqdataquery1 FULL JOIN uniqdataquery11 where CUST_ID""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1022,15 +842,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check SORT using 'AND' on multiple column
   test("Batch_sort_Querying_001-01-01-01_001-TC_097", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 where cust_id > 10544 sort by cust_name desc and cust_id asc""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1054,15 +868,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check average aggregate function with no arguments
   test("Batch_sort_Querying_001-01-01-01_001-TC_100", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_id,avg() from uniqdataquery1 group by cust_id""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1077,15 +885,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check nested  average aggregate function
   test("Batch_sort_Querying_001-01-01-01_001-TC_102", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_id,avg(count(cust_id)) from uniqdataquery1 group by cust_id""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1172,15 +974,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Order by without column name
   test("Batch_sort_Querying_001-01-01-01_001-TC_112", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select * from uniqdataquery1 order by ASC""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1222,15 +1018,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Using window without partition
   test("Batch_sort_Querying_001-01-01-01_001-TC_117", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name, sum(bigint_column1) OVER w from uniqdataquery1 WINDOW w""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -1245,15 +1035,9 @@ class BatchSortQueryTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check Using ROLLUP without group by clause
   test("Batch_sort_Querying_001-01-01-01_001-TC_119", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""select cust_name from uniqdataquery1 with ROLLUP""").collect
-
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-     sql(s"""drop table uniqdataquery1""").collect
+    sql(s"""drop table uniqdataquery1""").collect
   }
-
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BucketingTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BucketingTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BucketingTestCase.scala
index 78f8945..501b089 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BucketingTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/BucketingTestCase.scala
@@ -40,28 +40,20 @@ class BucketingTestCase extends QueryTest with BeforeAndAfterAll {
   }
 
   test("test exception if bucketcolumns be measure column") {
-    try {
+    intercept[Exception] {
       sql("DROP TABLE IF EXISTS bucket_table")
       sql("CREATE TABLE bucket_table (ID Int, date Timestamp, country String, name String, phonetype String," +
           "serialname String, salary Int) STORED BY 'carbondata' TBLPROPERTIES " +
           "('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='ID')")
-      assert(false)
-    }
-    catch {
-      case _ => assert(true)
     }
   }
 
   test("test exception if bucketcolumns be complex data type column") {
-    try {
+    intercept[Exception] {
       sql("DROP TABLE IF EXISTS bucket_table")
       sql("CREATE TABLE bucket_table (Id int, number double, name string, " +
           "gamePoint array<double>, mac struct<num:double>) STORED BY 'carbondata' TBLPROPERTIES" +
           "('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='gamePoint')")
-      assert(false)
-    }
-    catch {
-      case _ => assert(true)
     }
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/ColumndictTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/ColumndictTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/ColumndictTestCase.scala
index f702254..c8e8f1b 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/ColumndictTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/ColumndictTestCase.scala
@@ -55,12 +55,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Load using external columndict for CSV having incomplete/wrong data/no data/null data
   test("Columndict-TC004", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/data.csv' into table t3 options('COLUMNDICT'='country:$resourcesPath/Data/columndict/inValidData.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -197,12 +194,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Load using external columndict for table with measure and tableproperties(DICTIONARY_EXCLUDE, DICTIONARY_INCLUDE, BLOCKSIZE)
   test("Columndict-TC020", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata' TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB','DICTIONARY_EXCLUDE'='country')""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/data.csv' into table t3 options('COLUMNDICT'='country:'resourcesPath/Data/columndict/country.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -210,12 +204,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Columndict parameter name validation
   ignore("Columndict-TC021", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata' TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB','DICTIONARY_EXCLUDE'='country')""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/data.csv' into table t3 options('COLUMNDICT'='countries:$resourcesPath/Data/columndict/country.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -223,12 +214,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Columndict parameter value validation
   test("Columndict-TC022", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/data.csv' into table t3 options('COLUMNDICT'='salary:$resourcesPath/Data/columndict/country.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -236,12 +224,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for data validation in csv(empty/null/wrong data) for all_dictionary_path
   ignore("Columndict-TC023", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('ALL_DICTIONARY_PATH'='$resourcesPath/Data/columndict/inValidData.dictionary', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -249,12 +234,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for data validation in csv(empty/null/wrong data) for columndict
   test("Columndict-TC024", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('COLUMNDICT'='country:'resourcesPath/Data/columndict/inValidData.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -262,12 +244,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for validation of external all_dictionary_path folder with incorrect path
   test("Columndict-TC025", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('ALL_DICTIONARY_PATH'=''resourcesPath/Data/*.dictionary', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -275,12 +254,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for validation of external all_dictionary_path folder with correct path
   test("Columndict-TC026", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('ALL_DICTIONARY_PATH'='$resourcesPath/Data/columndict/*.dictionary', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -288,12 +264,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for validation of external columndict folder with correct path
   test("Columndict-TC027", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('COLUMNDICT'='country:'resourcesPath/Data/columndict/*.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -301,12 +274,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for validation of external all_dictionary_path file( missing /wrong path / wrong name)
   test("Columndict-TC028", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('ALL_DICTIONARY_PATH'=''resourcesPath/Data/columndict/wrongName.dictionary', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -314,12 +284,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for validation of external columndict file( missing /wrong path / wrong name)
   test("Columndict-TC029", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('COLUMNDICT'='country:'resourcesPath/Data/columndict/wrongName.csv', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
@@ -335,12 +302,9 @@ class ColumndictTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Check for different dictionary file extensions for columndict
   test("Columndict-TC031", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""CREATE TABLE IF NOT EXISTS t3 (ID Int, country String, name String, phonetype String, serialname String, salary Int,floatField float) STORED BY 'carbondata'""").collect
       sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/Data/columndict/inValidData.csv' into table t3 options('COLUMNDICT'='country:$resourcesPath/Data/columndict/country.txt', 'SINGLE_PASS'='true')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table if exists t3""").collect
   }
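
A note on the pattern applied throughout this file: ScalaTest's `intercept[T]` fails the test if the body completes without throwing, and otherwise returns the caught exception, which is exactly what the removed try/catch/assert(false) blocks emulated by hand. A minimal, self-contained sketch of the idiom (the table, load statement and `sql` helper below are placeholders, not the real QueryTest fixtures):

```
import org.scalatest.FunSuite

class InterceptPatternSketch extends FunSuite {

  // Placeholder standing in for the QueryTest.sql helper used in the suite above;
  // it simply fails, simulating a bad COLUMNDICT path.
  private def sql(statement: String): Unit =
    throw new RuntimeException(s"simulated load failure for: $statement")

  test("load with an invalid COLUMNDICT path should fail") {
    // intercept fails the test if no exception is thrown and returns the
    // exception otherwise, so no assert(false)/catch boilerplate is needed.
    val ex = intercept[Exception] {
      sql("LOAD DATA LOCAL INPATH '/no/such/file.csv' INTO TABLE t3 " +
        "OPTIONS('COLUMNDICT'='country:/no/such/dict.csv', 'SINGLE_PASS'='true')")
    }
    assert(ex.getMessage.contains("simulated load failure"))
  }
}
```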


[44/50] [abbrv] carbondata git commit: [CARBONDATA-2128] Documentation for table path while creating the table

Posted by ra...@apache.org.
[CARBONDATA-2128] Documentation for table path while creating the table

This closes #1927


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a251ba1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a251ba1
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a251ba1

Branch: refs/heads/branch-1.3
Commit: 4a251ba168236ea1d19c5e15ea6877145952d301
Parents: 349be00
Author: sgururajshetty <sg...@gmail.com>
Authored: Sat Feb 3 21:20:41 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 21:49:11 2018 +0530

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a251ba1/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index fef2371..d9d4420 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -32,12 +32,13 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 
 ## CREATE TABLE
 
-  This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
+  This command can be used to create a CarbonData table by specifying the list of fields along with the table properties. You can also specify the location where the table needs to be stored.
   
   ```
   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)]
   STORED BY 'carbondata'
   [TBLPROPERTIES (property_name=property_value, ...)]
+  [LOCATION 'path']
   ```  
   
 ### Usage Guidelines
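
To make the new clause concrete, here is a hedged sketch of creating a table at an explicit location (the session, table name, column list and HDFS path are invented for illustration; only the `STORED BY 'carbondata' ... LOCATION 'path'` shape comes from the grammar above):

```
// Assumes a CarbonData-enabled SparkSession is available as `spark`.
spark.sql(
  s"""
     | CREATE TABLE IF NOT EXISTS sales_external (
     |   id INT,
     |   name STRING,
     |   amount INT)
     | STORED BY 'carbondata'
     | LOCATION 'hdfs://namenode:9000/user/carbon/sales_external'
   """.stripMargin)
```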


[49/50] [abbrv] carbondata git commit: [HOTFIX] Some basic fix for 1.3.0 release

Posted by ra...@apache.org.
[HOTFIX] Some basic fix for 1.3.0 release

This closes #1924


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/fa6cd8d5
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/fa6cd8d5
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/fa6cd8d5

Branch: refs/heads/branch-1.3
Commit: fa6cd8d58632357cd29731d59398d1a43b282447
Parents: 4a2a2d1
Author: chenliang613 <ch...@huawei.com>
Authored: Sat Feb 3 21:06:55 2018 +0800
Committer: ravipesala <ra...@gmail.com>
Committed: Sun Feb 4 00:33:13 2018 +0530

----------------------------------------------------------------------
 docs/configuration-parameters.md                |   2 +-
 docs/data-management-on-carbondata.md           | 216 ++++++++-----------
 .../examples/StandardPartitionExample.scala     |  11 +-
 integration/spark2/pom.xml                      |   3 +
 4 files changed, 107 insertions(+), 125 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 621574d..91f6cf5 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -61,7 +61,7 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must to be configured by the user if bad record logger is enabled or bad record action redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the performance of select queries as it fetch columnar batch of size 4*1024 rows instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of the blocklet group. Higher value results in better sequential IO access.The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB). |  |
-| carbon.task.distribution | block | **block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small partitions to a size of (128 MB) during querying. The small partitions are combined to a map task to reduce the number of read task. This enhances the performance. | | 
+| carbon.task.distribution | block | **block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small partitions to a size of (128 MB is the default value of "spark.sql.files.maxPartitionBytes",it is configurable) during querying. The small partitions are combined to a map task to reduce the number of read task. This enhances the performance. | | 
 
 * **Compaction Configuration**
   

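As a quick illustration of the clarified `merge_small_files` row above, the setting can be applied through the same `CarbonProperties` API used elsewhere in this release (property name and value are taken from the table; the surrounding driver code is only a sketch):

```
import org.apache.carbondata.core.util.CarbonProperties

// Merge small partitions during querying; the merge target follows Spark's
// spark.sql.files.maxPartitionBytes setting (128 MB by default).
CarbonProperties.getInstance()
  .addProperty("carbon.task.distribution", "merge_small_files")
```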
http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 3acb711..9bb6c20 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -26,8 +26,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
-* [HIVE STANDARD PARTITION](#hive-standard-partition)
-* [PRE-AGGREGATE TABLES](#agg-tables)
+* [PRE-AGGREGATE TABLES](#pre-aggregate-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
 
@@ -54,8 +53,6 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
      ```
      TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
 	 ```
-     
-	 NOTE: DICTIONARY_EXCLUDE supports only int, string, timestamp, long, bigint, and varchar data types.
 	 
    - **Inverted Index Configuration**
 
@@ -603,34 +600,109 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   CLEAN FILES FOR TABLE carbon_table
   ```
 
-## STANDARD PARTITION
+## PARTITION
+
+### STANDARD PARTITION
+
+  The partition is similar to Spark and Hive partitions; any column can be used to build the partition:
+  
+#### Create Partition Table
 
-  The partition is same as Spark, the creation partition command as below:
+  This command allows you to create table with partition.
   
   ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type , ...)]
-  PARTITIONED BY (partition_col_name data_type)
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+    [(col_name data_type , ...)]
+    [COMMENT table_comment]
+    [PARTITIONED BY (col_name data_type , ...)]
+    [STORED BY file_format]
+    [TBLPROPERTIES (property_name=property_value, ...)]
+  ```
+  
+  Example:
+  ```
+   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                saleQuantity Int,
+                                revenue Int)
+  PARTITIONED BY (productCategory String, productBatch String)
   STORED BY 'carbondata'
-  [TBLPROPERTIES (property_name=property_value, ...)]
   ```
+		
+#### Load Data Using Static Partition 
+
+  This command allows you to load data using static partition.
+  
+  ```
+  LOAD DATA [LOCAL] INPATH 'folder_path' 
+    INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
+    OPTIONS(property_name=property_value, ...)
+  INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) SELECT STATEMENT
+  ```
+  
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+    INTO TABLE locationTable
+    PARTITION (country = 'US', state = 'CA')
+    
+  INSERT INTO TABLE locationTable
+    PARTITION (country = 'US', state = 'AL')
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+#### Load Data Using Dynamic Partition
+
+  This command allows you to load data using dynamic partition. If partition spec is not specified, then the partition is considered as dynamic.
 
   Example:
   ```
-  CREATE TABLE partitiontable0
-                  (id Int,
-                  vin String,
-                  phonenumber Long,
-                  area String,
-                  salary Int)
-                  PARTITIONED BY (country String)
-                  STORED BY 'org.apache.carbondata.format'
-                  TBLPROPERTIES('SORT_COLUMNS'='id,vin')
-                  )
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+    INTO TABLE locationTable
+          
+  INSERT INTO TABLE locationTable
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US' AND au.state = 'AL';
   ```
 
+#### Show Partitions
+
+  This command gets the Hive partition information of the table
 
-## CARBONDATA PARTITION(HASH,RANGE,LIST)
+  ```
+  SHOW PARTITIONS [db_name.]table_name
+  ```
+
+#### Drop Partition
+
+  This command drops the specified Hive partition only.
+  ```
+  ALTER TABLE table_name DROP [IF EXISTS] (PARTITION part_spec, ...)
+  ```
+
+#### Insert OVERWRITE
+  
+  This command allows you to insert or load data, overwriting a specific partition.
+  
+  ```
+   INSERT OVERWRITE TABLE table_name
+    PARTITION (column = 'partition_name')
+    select_statement
+  ```
+  
+  Example:
+  ```
+  INSERT OVERWRITE TABLE partitioned_user
+    PARTITION (country = 'US')
+    SELECT * FROM another_user au 
+    WHERE au.country = 'US';
+  ```
+
+### CARBONDATA PARTITION(HASH,RANGE,LIST) -- Alpha feature, this partition type does not support update and delete operations.
 
   The partition supports three type:(Hash,Range,List), similar to other system's partition features, CarbonData's partition feature can be used to improve query performance by filtering on the partition column.
 
@@ -766,106 +838,6 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition column.
 
-## HIVE STANDARD PARTITION
-
-  Carbon supports the partition which is custom implemented by carbon but due to compatibility issue does not allow you to use the feature of Hive. By using this function, you can use the feature available in Hive.
-
-### Create Partition Table
-
-  This command allows you to create table with partition.
-  
-  ```
-  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-    [(col_name data_type , ...)]
-    [COMMENT table_comment]
-    [PARTITIONED BY (col_name data_type , ...)]
-    [STORED BY file_format]
-    [TBLPROPERTIES (property_name=property_value, ...)]
-    [AS select_statement];
-  ```
-  
-  Example:
-  ```
-   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber Int,
-                                productName String,
-                                storeCity String,
-                                storeProvince String,
-                                saleQuantity Int,
-                                revenue Int)
-  PARTITIONED BY (productCategory String, productBatch String)
-  STORED BY 'carbondata'
-  ```
-		
-### Load Data Using Static Partition
-
-  This command allows you to load data using static partition.
-  
-  ```
-  LOAD DATA [LOCAL] INPATH 'folder_path' 
-    INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
-    OPTIONS(property_name=property_value, ...)
-  NSERT INTO INTO TABLE [db_name.]table_name PARTITION (partition_spec) SELECT STATMENT 
-  ```
-  
-  Example:
-  ```
-  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
-    INTO TABLE locationTable
-    PARTITION (country = 'US', state = 'CA')
-    
-  INSERT INTO TABLE locationTable
-    PARTITION (country = 'US', state = 'AL')
-    SELECT * FROM another_user au 
-    WHERE au.country = 'US' AND au.state = 'AL';
-  ```
-
-### Load Data Using Dynamic Partition
-
-  This command allows you to load data using dynamic partition. If partition spec is not specified, then the partition is considered as dynamic.
-
-  Example:
-  ```
-  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
-    INTO TABLE locationTable
-          
-  INSERT INTO TABLE locationTable
-    SELECT * FROM another_user au 
-    WHERE au.country = 'US' AND au.state = 'AL';
-  ```
-
-### Show Partitions
-
-  This command gets the Hive partition information of the table
-
-  ```
-  SHOW PARTITIONS [db_name.]table_name
-  ```
-
-### Drop Partition
-
-  This command drops the specified Hive partition only.
-  ```
-  ALTER TABLE table_name DROP [IF EXISTS] (PARTITION part_spec, ...)
-  ```
-
-### Insert OVERWRITE
-  
-  This command allows you to insert or load overwrite on a spcific partition.
-  
-  ```
-   INSERT OVERWRITE TABLE table_name
-    PARTITION (column = 'partition_name')
-    select_statement
-  ```
-  
-  Example:
-  ```
-  INSERT OVERWRITE TABLE partitioned_user
-    PARTITION (country = 'US')
-    SELECT * FROM another_user au 
-    WHERE au.country = 'US';
-  ```
 
 ## PRE-AGGREGATE TABLES
   Carbondata supports pre aggregating of data so that OLAP kind of queries can fetch data 
@@ -989,7 +961,7 @@ This functionality is not supported.
   before Alter Operations can be performed on the main table.Pre-aggregate tables can be rebuilt 
   manually after Alter Table operations are completed
   
-### Supporting timeseries data
+### Supporting timeseries data (Alpha feature in 1.3.0)
 Carbondata has built-in understanding of time hierarchy and levels: year, month, day, hour, minute.
 Multiple pre-aggregate tables can be created for the hierarchy and Carbondata can do automatic 
 roll-up for the queries on these hierarchies.

http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
----------------------------------------------------------------------
diff --git a/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala b/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
index 1126ecc..20570a2 100644
--- a/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
+++ b/examples/spark2/src/main/scala/org/apache/carbondata/examples/StandardPartitionExample.scala
@@ -56,7 +56,14 @@ object StandardPartitionExample {
 
     spark.sql(
       s"""
-         | SELECT country,id,vin,phonenumver,area,salary
+         | SELECT country,id,vin,phonenumber,area,salary
+         | FROM partitiontable0
+      """.stripMargin).show()
+
+    spark.sql("UPDATE partitiontable0 SET (salary) = (88888) WHERE country='UK'").show()
+    spark.sql(
+      s"""
+         | SELECT country,id,vin,phonenumber,area,salary
          | FROM partitiontable0
       """.stripMargin).show()
 
@@ -66,7 +73,7 @@ object StandardPartitionExample {
     import scala.util.Random
     import spark.implicits._
     val r = new Random()
-    val df = spark.sparkContext.parallelize(1 to 10 * 1000 * 10)
+    val df = spark.sparkContext.parallelize(1 to 10 * 100 * 1000)
       .map(x => ("No." + r.nextInt(1000), "country" + x % 8, "city" + x % 50, x % 300))
       .toDF("ID", "country", "city", "population")
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/integration/spark2/pom.xml
----------------------------------------------------------------------
diff --git a/integration/spark2/pom.xml b/integration/spark2/pom.xml
index 60cb61f..9edb50e 100644
--- a/integration/spark2/pom.xml
+++ b/integration/spark2/pom.xml
@@ -209,6 +209,9 @@
     </profile>
     <profile>
     <id>spark-2.2</id>
+    <activation>
+      <activeByDefault>true</activeByDefault>
+    </activation>
     <properties>
       <spark.version>2.2.1</spark.version>
       <scala.binary.version>2.11</scala.binary.version>
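
Tying the StandardPartitionExample change together, the flow it now exercises is roughly the following (a sketch assuming a CarbonData-enabled `spark` session; the schema mirrors the example's `partitiontable0`, and the data load step is omitted for brevity):

```
// Create a standard (Hive-style) partitioned table, then update rows in one partition.
spark.sql(
  s"""
     | CREATE TABLE IF NOT EXISTS partitiontable0 (
     |   id INT, vin STRING, phonenumber LONG, area STRING, salary INT)
     | PARTITIONED BY (country STRING)
     | STORED BY 'carbondata'
   """.stripMargin)

// Standard partitions support UPDATE, which is what the example now demonstrates.
spark.sql("UPDATE partitiontable0 SET (salary) = (88888) WHERE country='UK'").show()

spark.sql("SELECT country, id, vin, phonenumber, area, salary FROM partitiontable0").show()
```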


[21/50] [abbrv] carbondata git commit: [Compatibility] Added changes for backward compatibility

Posted by ra...@apache.org.
[Compatibility] Added changes for backward compatibility

This PR will fix the issues related to old version and new version compatibility.
Issues fixed:
1. Schema file name was different in one of the previous versions.
2. Bucket number was not supported in the previous versions.
3. Table parameters were stored in lower case, while the current version reads them in camel case.

This closes #1747


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/02eefca1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/02eefca1
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/02eefca1

Branch: refs/heads/branch-1.3
Commit: 02eefca15862a8667d53e247272afb68efe7af60
Parents: 1b224a4
Author: kunal642 <ku...@gmail.com>
Authored: Mon Nov 20 20:36:54 2017 +0530
Committer: manishgupta88 <to...@gmail.com>
Committed: Fri Feb 2 12:08:50 2018 +0530

----------------------------------------------------------------------
 .../core/util/path/CarbonTablePath.java         | 22 ++++++++-
 .../carbondata/spark/util/CarbonScalaUtil.scala | 47 ++++++++++++++++++++
 .../org/apache/spark/sql/CarbonSource.scala     | 33 ++++++++------
 3 files changed, 87 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/02eefca1/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
index fab6289..d8c64c4 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
@@ -233,7 +233,7 @@ public class CarbonTablePath extends Path {
    * @return absolute path of schema file
    */
   public String getSchemaFilePath() {
-    return getMetaDataDir() + File.separator + SCHEMA_FILE;
+    return getActualSchemaFilePath(tablePath);
   }
 
   /**
@@ -242,7 +242,22 @@ public class CarbonTablePath extends Path {
    * @return schema file path
    */
   public static String getSchemaFilePath(String tablePath) {
-    return tablePath + File.separator + METADATA_DIR + File.separator + SCHEMA_FILE;
+    return getActualSchemaFilePath(tablePath);
+  }
+
+  private static String getActualSchemaFilePath(String tablePath) {
+    String metaPath = tablePath + CarbonCommonConstants.FILE_SEPARATOR + METADATA_DIR;
+    CarbonFile carbonFile = FileFactory.getCarbonFile(metaPath);
+    CarbonFile[] schemaFile = carbonFile.listFiles(new CarbonFileFilter() {
+      @Override public boolean accept(CarbonFile file) {
+        return file.getName().startsWith(SCHEMA_FILE);
+      }
+    });
+    if (schemaFile != null && schemaFile.length > 0) {
+      return schemaFile[0].getAbsolutePath();
+    } else {
+      return metaPath + CarbonCommonConstants.FILE_SEPARATOR + SCHEMA_FILE;
+    }
   }
 
   /**
@@ -351,6 +366,9 @@ public class CarbonTablePath extends Path {
 
   private static String getCarbonIndexFileName(String taskNo, int bucketNumber,
       String factUpdatedtimeStamp) {
+    if (bucketNumber == -1) {
+      return taskNo + "-" + factUpdatedtimeStamp + INDEX_FILE_EXT;
+    }
     return taskNo + "-" + bucketNumber + "-" + factUpdatedtimeStamp + INDEX_FILE_EXT;
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/02eefca1/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
index 86d25b4..262adf2 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
@@ -404,4 +404,51 @@ object CarbonScalaUtil {
       })
     otherFields
   }
+
+  /**
+   * If the table is from an old store then the table parameters are in lowercase. In the current
+   * code we are reading the parameters as camel case.
+   * This method will convert all the schema parts to camel case
+   *
+   * @param parameters
+   * @return
+   */
+  def getDeserializedParameters(parameters: Map[String, String]): Map[String, String] = {
+    val keyParts = parameters.getOrElse("spark.sql.sources.options.keys.numparts", "0").toInt
+    if (keyParts == 0) {
+      parameters
+    } else {
+      val keyStr = 0 until keyParts map {
+        i => parameters(s"spark.sql.sources.options.keys.part.$i")
+      }
+      val finalProperties = scala.collection.mutable.Map.empty[String, String]
+      keyStr foreach {
+        key =>
+          var value = ""
+          for (numValues <- 0 until parameters(key.toLowerCase() + ".numparts").toInt) {
+            value += parameters(key.toLowerCase() + ".part" + numValues)
+          }
+          finalProperties.put(key, value)
+      }
+      // Database name would be extracted from the parameter first. There can be a scenario where
+      // the dbName is not written to the old schema therefore to be on a safer side we are
+      // extracting dbName from tableName if it exists.
+      val dbAndTableName = finalProperties("tableName").split(".")
+      if (dbAndTableName.length > 1) {
+        finalProperties.put("dbName", dbAndTableName(0))
+        finalProperties.put("tableName", dbAndTableName(1))
+      } else {
+        finalProperties.put("tableName", dbAndTableName(0))
+      }
+      // Overriding the tablePath in case tablepath already exists. This will happen when old
+      // table schema is updated by the new code then both `path` and `tablepath` will exist. In
+      // this case use tablepath
+      parameters.get("tablepath") match {
+        case Some(tablePath) => finalProperties.put("tablePath", tablePath)
+        case None =>
+      }
+      finalProperties.toMap
+    }
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/02eefca1/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonSource.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonSource.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonSource.scala
index e61b636..7d70534 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonSource.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonSource.scala
@@ -42,6 +42,7 @@ import org.apache.carbondata.core.metadata.schema.table.TableInfo
 import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
 import org.apache.carbondata.spark.CarbonOption
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.util.CarbonScalaUtil
 import org.apache.carbondata.streaming.{CarbonStreamException, StreamSinkFactory}
 
 /**
@@ -59,16 +60,20 @@ class CarbonSource extends CreatableRelationProvider with RelationProvider
     CarbonEnv.getInstance(sqlContext.sparkSession)
     // if path is provided we can directly create Hadoop relation. \
     // Otherwise create datasource relation
-    parameters.get("tablePath") match {
+    val newParameters = CarbonScalaUtil.getDeserializedParameters(parameters)
+    newParameters.get("tablePath") match {
       case Some(path) => CarbonDatasourceHadoopRelation(sqlContext.sparkSession,
         Array(path),
-        parameters,
+        newParameters,
         None)
       case _ =>
-        val options = new CarbonOption(parameters)
+        val options = new CarbonOption(newParameters)
         val tablePath =
           CarbonEnv.getTablePath(options.dbName, options.tableName)(sqlContext.sparkSession)
-        CarbonDatasourceHadoopRelation(sqlContext.sparkSession, Array(tablePath), parameters, None)
+        CarbonDatasourceHadoopRelation(sqlContext.sparkSession,
+          Array(tablePath),
+          newParameters,
+          None)
     }
   }
 
@@ -79,13 +84,14 @@ class CarbonSource extends CreatableRelationProvider with RelationProvider
       parameters: Map[String, String],
       data: DataFrame): BaseRelation = {
     CarbonEnv.getInstance(sqlContext.sparkSession)
+    val newParameters = CarbonScalaUtil.getDeserializedParameters(parameters)
     // User should not specify path since only one store is supported in carbon currently,
     // after we support multi-store, we can remove this limitation
-    require(!parameters.contains("path"), "'path' should not be specified, " +
+    require(!newParameters.contains("path"), "'path' should not be specified, " +
                                           "the path to store carbon file is the 'storePath' " +
                                           "specified when creating CarbonContext")
 
-    val options = new CarbonOption(parameters)
+    val options = new CarbonOption(newParameters)
     val tablePath = new Path(
       CarbonEnv.getTablePath(options.dbName, options.tableName)(sqlContext.sparkSession))
     val isExists = tablePath.getFileSystem(sqlContext.sparkContext.hadoopConfiguration)
@@ -108,12 +114,12 @@ class CarbonSource extends CreatableRelationProvider with RelationProvider
 
     if (doSave) {
       // save data when the save mode is Overwrite.
-      new CarbonDataFrameWriter(sqlContext, data).saveAsCarbonFile(parameters)
+      new CarbonDataFrameWriter(sqlContext, data).saveAsCarbonFile(newParameters)
     } else if (doAppend) {
-      new CarbonDataFrameWriter(sqlContext, data).appendToCarbonFile(parameters)
+      new CarbonDataFrameWriter(sqlContext, data).appendToCarbonFile(newParameters)
     }
 
-    createRelation(sqlContext, parameters, data.schema)
+    createRelation(sqlContext, newParameters, data.schema)
   }
 
   // called by DDL operation with a USING clause
@@ -123,9 +129,10 @@ class CarbonSource extends CreatableRelationProvider with RelationProvider
       dataSchema: StructType): BaseRelation = {
     CarbonEnv.getInstance(sqlContext.sparkSession)
     addLateDecodeOptimization(sqlContext.sparkSession)
+    val newParameters = CarbonScalaUtil.getDeserializedParameters(parameters)
     val dbName: String =
-      CarbonEnv.getDatabaseName(parameters.get("dbName"))(sqlContext.sparkSession)
-    val tableOption: Option[String] = parameters.get("tableName")
+      CarbonEnv.getDatabaseName(newParameters.get("dbName"))(sqlContext.sparkSession)
+    val tableOption: Option[String] = newParameters.get("tableName")
     if (tableOption.isEmpty) {
       CarbonException.analysisException("Table creation failed. Table name is not specified")
     }
@@ -136,9 +143,9 @@ class CarbonSource extends CreatableRelationProvider with RelationProvider
     }
     val (path, updatedParams) = if (sqlContext.sparkSession.sessionState.catalog.listTables(dbName)
       .exists(_.table.equalsIgnoreCase(tableName))) {
-        getPathForTable(sqlContext.sparkSession, dbName, tableName, parameters)
+        getPathForTable(sqlContext.sparkSession, dbName, tableName, newParameters)
     } else {
-        createTableIfNotExists(sqlContext.sparkSession, parameters, dataSchema)
+        createTableIfNotExists(sqlContext.sparkSession, newParameters, dataSchema)
     }
 
     CarbonDatasourceHadoopRelation(sqlContext.sparkSession, Array(path), updatedParams,
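
To illustrate what the deserialization above copes with, here is a small, self-contained sketch of the lower-case, split-into-parts option layout an old store persists (the key scheme follows the `spark.sql.sources.options.keys.*` names read by `getDeserializedParameters`; the concrete path and part values are invented):

```
object OldStoreOptionsSketch extends App {
  // Invented example of options written by an older CarbonData version:
  // keys are listed under spark.sql.sources.options.keys.part.N and each
  // value is stored in parts under the lower-cased key.
  val oldStoreOptions = Map(
    "spark.sql.sources.options.keys.numparts" -> "1",
    "spark.sql.sources.options.keys.part.0"   -> "tablePath",
    "tablepath.numparts" -> "2",
    "tablepath.part0"    -> "/user/hive/warehouse/",
    "tablepath.part1"    -> "olddb/oldtable"
  )

  // Hand-rolled reassembly of one key, mirroring the loop in the helper above:
  // the value parts are concatenated in order under the lower-cased key.
  val parts = oldStoreOptions("tablepath.numparts").toInt
  val tablePath = (0 until parts).map(i => oldStoreOptions(s"tablepath.part$i")).mkString

  println(tablePath) // prints /user/hive/warehouse/olddb/oldtable
}
```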


[18/50] [abbrv] carbondata git commit: [CARBONDATA-2043] Configurable wait time for requesting executors and minimum registered executors ratio to continue the block distribution - carbon.dynamicAllocation.schedulerTimeout : to configure wait time. defal

Posted by ra...@apache.org.
[CARBONDATA-2043] Configurable wait time for requesting executors and minimum registered executors ratio to continue the block distribution
- carbon.dynamicAllocation.schedulerTimeout: configures the wait time. Default 5 sec, min 5 sec, max 15 sec.
- carbon.scheduler.minRegisteredResourcesRatio: configures the minimum registered executors ratio. Min 0.1, max 1.0, default 0.8.

This closes #1822


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/473bd319
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/473bd319
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/473bd319

Branch: refs/heads/branch-1.3
Commit: 473bd3197a69e3c0574f8c07f04c29e43f7a023d
Parents: 54a381c
Author: mohammadshahidkhan <mo...@gmail.com>
Authored: Fri Dec 22 17:30:31 2017 +0530
Committer: Venkata Ramana G <ra...@huawei.com>
Committed: Fri Feb 2 11:10:23 2018 +0530

----------------------------------------------------------------------
 .../core/constants/CarbonCommonConstants.java   | 71 ++++++++++-----
 .../carbondata/core/util/CarbonProperties.java  | 90 +++++++++++++++-----
 .../core/CarbonPropertiesValidationTest.java    | 42 +++++++++
 .../spark/sql/hive/DistributionUtil.scala       | 67 ++++++++++-----
 4 files changed, 205 insertions(+), 65 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/473bd319/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 7ae3034..87eec8a 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1149,29 +1149,6 @@ public final class CarbonCommonConstants {
    */
   public static final int DEFAULT_MAX_NUMBER_OF_COLUMNS = 20000;
 
-  /**
-   * Maximum waiting time (in seconds) for a query for requested executors to be started
-   */
-  @CarbonProperty
-  public static final String CARBON_EXECUTOR_STARTUP_TIMEOUT =
-      "carbon.max.executor.startup.timeout";
-
-  /**
-   * default value for executor start up waiting time out
-   */
-  public static final String CARBON_EXECUTOR_WAITING_TIMEOUT_DEFAULT = "5";
-
-  /**
-   * Max value. If value configured by user is more than this than this value will value will be
-   * considered
-   */
-  public static final int CARBON_EXECUTOR_WAITING_TIMEOUT_MAX = 60;
-
-  /**
-   * time for which thread will sleep and check again if the requested number of executors
-   * have been started
-   */
-  public static final int CARBON_EXECUTOR_STARTUP_THREAD_SLEEP_TIME = 250;
 
   /**
    * to enable unsafe column page in write step
@@ -1537,6 +1514,54 @@ public final class CarbonCommonConstants {
   public static final long HANDOFF_SIZE_DEFAULT = 1024L * 1024 * 1024;
 
   /**
+   * minimum required registered resource for starting block distribution
+   */
+  @CarbonProperty
+  public static final String CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO =
+      "carbon.scheduler.minregisteredresourcesratio";
+  /**
+   * default minimum required registered resource for starting block distribution
+   */
+  public static final String CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT = "0.8d";
+  /**
+   * minimum required registered resource for starting block distribution
+   */
+  public static final double CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MIN = 0.1d;
+  /**
+   * max minimum required registered resource for starting block distribution
+   */
+  public static final double CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MAX = 1.0d;
+
+  /**
+   * To define how much time scheduler should wait for the
+   * resource in dynamic allocation.
+   */
+  public static final String CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT =
+      "carbon.dynamicallocation.schedulertimeout";
+
+  /**
+   * default scheduler wait time
+   */
+  public static final String CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT = "5";
+
+  /**
+   * minimum value for the scheduler wait time
+   */
+  public static final int CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MIN = 5;
+
+  /**
+   * Max value. If the value configured by the user is more than this, then this value will be
+   * considered.
+   */
+  public static final int CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MAX = 15;
+
+  /**
+   * time for which thread will sleep and check again if the requested number of executors
+   * have been started
+   */
+  public static final int CARBON_DYNAMIC_ALLOCATION_SCHEDULER_THREAD_SLEEP_TIME = 250;
+
+  /**
    * It allows queries on hive metastore directly along with filter information, otherwise first
    * fetches all partitions from hive and apply filters on it.
    */

http://git-wip-us.apache.org/repos/asf/carbondata/blob/473bd319/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
index fd78efc..39a0b80 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
@@ -35,7 +35,33 @@ import org.apache.carbondata.core.constants.CarbonCommonConstants;
 import org.apache.carbondata.core.constants.CarbonLoadOptionConstants;
 import org.apache.carbondata.core.constants.CarbonV3DataFormatConstants;
 import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
-import static org.apache.carbondata.core.constants.CarbonCommonConstants.*;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.BLOCKLET_SIZE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_CUSTOM_BLOCK_DISTRIBUTION;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_DATA_FILE_VERSION;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_DATE_FORMAT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_PREFETCH_BUFFERSIZE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MAX;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MIN;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_SORT_FILE_WRITE_BUFFER_SIZE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TASK_DISTRIBUTION;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_BLOCK;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_BLOCKLET;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_CUSTOM;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.CSV_READ_BUFFER_SIZE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.ENABLE_AUTO_HANDOFF;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.ENABLE_UNSAFE_SORT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.ENABLE_VECTOR_READER;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.HANDOFF_SIZE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.LOCK_TYPE;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.NUM_CORES;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.NUM_CORES_BLOCK_SORT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.SORT_INTERMEDIATE_FILES_LIMIT;
+import static org.apache.carbondata.core.constants.CarbonCommonConstants.SORT_SIZE;
 import static org.apache.carbondata.core.constants.CarbonV3DataFormatConstants.BLOCKLET_SIZE_IN_MB;
 import static org.apache.carbondata.core.constants.CarbonV3DataFormatConstants.NUMBER_OF_COLUMN_TO_READ_IN_IO;
 
@@ -106,8 +132,8 @@ public final class CarbonProperties {
       case CARBON_DATA_FILE_VERSION:
         validateCarbonDataFileVersion();
         break;
-      case CARBON_EXECUTOR_STARTUP_TIMEOUT:
-        validateExecutorStartUpTime();
+      case CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT:
+        validateDynamicSchedulerTimeOut();
         break;
       case CARBON_PREFETCH_BUFFERSIZE:
         validatePrefetchBufferSize();
@@ -156,6 +182,9 @@ public final class CarbonProperties {
       case ENABLE_AUTO_HANDOFF:
         validateHandoffSize();
         break;
+      case CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO:
+        validateSchedulerMinRegisteredRatio();
+        break;
       // TODO : Validation for carbon.lock.type should be handled for addProperty flow
       default:
         // none
@@ -171,7 +200,7 @@ public final class CarbonProperties {
     validateNumCoresBlockSort();
     validateSortSize();
     validateCarbonDataFileVersion();
-    validateExecutorStartUpTime();
+    validateDynamicSchedulerTimeOut();
     validatePrefetchBufferSize();
     validateBlockletGroupSizeInMB();
     validateNumberOfColumnPerIORead();
@@ -193,6 +222,7 @@ public final class CarbonProperties {
     validateSortFileWriteBufferSize();
     validateSortIntermediateFilesLimit();
     validateEnableAutoHandoff();
+    validateSchedulerMinRegisteredRatio();
   }
 
   /**
@@ -253,6 +283,36 @@ public final class CarbonProperties {
   }
 
   /**
+   * minimum required registered resource for starting block distribution
+   */
+  private void validateSchedulerMinRegisteredRatio() {
+    String value = carbonProperties
+        .getProperty(CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO,
+            CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT);
+    try {
+      double minRegisteredResourceRatio = java.lang.Double.parseDouble(value);
+      if (minRegisteredResourceRatio < CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MIN
+          || minRegisteredResourceRatio > CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MAX) {
+        LOGGER.warn("The value \"" + value
+            + "\" configured for key " + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO
+            + "\" is not in range. Valid range is (byte) \""
+            + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MIN + " to \""
+            + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MAX + ". Using the default value \""
+            + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT);
+        carbonProperties.setProperty(CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO,
+            CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT);
+      }
+    } catch (NumberFormatException e) {
+      LOGGER.warn("The value \"" + value
+          + "\" configured for key " + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO
+          + "\" is invalid. Using the default value \""
+          + CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT);
+      carbonProperties.setProperty(CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO,
+          CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT);
+    }
+  }
+
+  /**
    * The method validate the validity of configured carbon.date.format value
    * and reset to default value if validation fail
    */
@@ -984,23 +1044,11 @@ public final class CarbonProperties {
   /**
    * This method will validate and set the value for executor start up waiting time out
    */
-  private void validateExecutorStartUpTime() {
-    int executorStartUpTimeOut = 0;
-    try {
-      executorStartUpTimeOut = Integer.parseInt(carbonProperties
-          .getProperty(CARBON_EXECUTOR_STARTUP_TIMEOUT,
-              CarbonCommonConstants.CARBON_EXECUTOR_WAITING_TIMEOUT_DEFAULT));
-      // If value configured by user is more than max value of time out then consider the max value
-      if (executorStartUpTimeOut > CarbonCommonConstants.CARBON_EXECUTOR_WAITING_TIMEOUT_MAX) {
-        executorStartUpTimeOut = CarbonCommonConstants.CARBON_EXECUTOR_WAITING_TIMEOUT_MAX;
-      }
-    } catch (NumberFormatException ne) {
-      executorStartUpTimeOut =
-          Integer.parseInt(CarbonCommonConstants.CARBON_EXECUTOR_WAITING_TIMEOUT_DEFAULT);
-    }
-    carbonProperties.setProperty(CARBON_EXECUTOR_STARTUP_TIMEOUT,
-        String.valueOf(executorStartUpTimeOut));
-    LOGGER.info("Executor start up wait time: " + executorStartUpTimeOut);
+  private void validateDynamicSchedulerTimeOut() {
+    validateRange(CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT,
+        CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT,
+        CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MIN,
+        CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MAX);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/473bd319/core/src/test/java/org/apache/carbondata/core/CarbonPropertiesValidationTest.java
----------------------------------------------------------------------
diff --git a/core/src/test/java/org/apache/carbondata/core/CarbonPropertiesValidationTest.java b/core/src/test/java/org/apache/carbondata/core/CarbonPropertiesValidationTest.java
index daf6db0..bbfe26c 100644
--- a/core/src/test/java/org/apache/carbondata/core/CarbonPropertiesValidationTest.java
+++ b/core/src/test/java/org/apache/carbondata/core/CarbonPropertiesValidationTest.java
@@ -205,4 +205,46 @@ public class CarbonPropertiesValidationTest extends TestCase {
     assertTrue(CarbonCommonConstants.SORT_INTERMEDIATE_FILES_LIMIT_DEFAULT_VALUE
         .equalsIgnoreCase(valueAfterValidation));
   }
+
+  @Test public void testValidateDynamicSchedulerTimeOut() {
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT, "2");
+    String valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT);
+    assertTrue(valueAfterValidation
+        .equals(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT));
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT, "16");
+    valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT);
+    assertTrue(valueAfterValidation
+        .equals(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT));
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT, "15");
+    valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT);
+    assertTrue(valueAfterValidation
+        .equals("15"));
+
+  }
+  @Test public void testValidateSchedulerMinRegisteredRatio() {
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO, "0.0");
+    String valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO);
+    assertTrue(valueAfterValidation
+        .equals(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT));
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO, "-0.1");
+    valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO);
+    assertTrue(valueAfterValidation
+        .equals(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT));
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO, "0.1");
+    valueAfterValidation = carbonProperties
+        .getProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO);
+    assertTrue(valueAfterValidation.equals("0.1"));
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/473bd319/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
index 37b722f..1958d61 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
@@ -20,6 +20,7 @@ package org.apache.spark.sql.hive
 import java.net.{InetAddress, InterfaceAddress, NetworkInterface}
 
 import scala.collection.JavaConverters._
+import scala.util.control.Breaks._
 
 import org.apache.spark.SparkContext
 import org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend
@@ -33,6 +34,26 @@ import org.apache.carbondata.processing.util.CarbonLoaderUtil
 object DistributionUtil {
   @transient
   val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+  /*
+   *  minimum required registered resource for starting block distribution
+   */
+  lazy val minRegisteredResourceRatio: Double = {
+    val value: String = CarbonProperties.getInstance()
+      .getProperty(CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO,
+        CarbonCommonConstants.CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT)
+    java.lang.Double.parseDouble(value)
+  }
+
+  /*
+   * node registration wait time
+   */
+  lazy val dynamicAllocationSchTimeOut: Integer = {
+    val value: String = CarbonProperties.getInstance()
+      .getProperty(CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT,
+        CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT)
+    // milli second
+    java.lang.Integer.parseInt(value) * 1000
+  }
 
   /*
    * This method will return the list of executers in the cluster.
@@ -202,18 +223,25 @@ object DistributionUtil {
     var nodes = DistributionUtil.getNodeList(sparkContext)
     // calculate the number of times loop has to run to check for starting
     // the requested number of executors
-    val threadSleepTime = CarbonCommonConstants.CARBON_EXECUTOR_STARTUP_THREAD_SLEEP_TIME
-    val loopCounter = calculateCounterBasedOnExecutorStartupTime(threadSleepTime)
-    var maxTimes = loopCounter
-    while (nodes.length < requiredExecutors && maxTimes > 0) {
-      Thread.sleep(threadSleepTime)
-      nodes = DistributionUtil.getNodeList(sparkContext)
-      maxTimes = maxTimes - 1
+    val threadSleepTime =
+    CarbonCommonConstants.CARBON_DYNAMIC_ALLOCATION_SCHEDULER_THREAD_SLEEP_TIME
+    val maxRetryCount = calculateMaxRetry
+    var maxTimes = maxRetryCount
+    breakable {
+      while (nodes.length < requiredExecutors && maxTimes > 0) {
+        Thread.sleep(threadSleepTime);
+        nodes = DistributionUtil.getNodeList(sparkContext)
+        maxTimes = maxTimes - 1;
+        val resourceRatio = (nodes.length.toDouble / requiredExecutors)
+        if (resourceRatio.compareTo(minRegisteredResourceRatio) >= 0) {
+          break
+        }
+      }
     }
     val timDiff = System.currentTimeMillis() - startTime
     LOGGER.info(s"Total Time taken to ensure the required executors : $timDiff")
     LOGGER.info(s"Time elapsed to allocate the required executors: " +
-      s"${(loopCounter - maxTimes) * threadSleepTime}")
+      s"${(maxRetryCount - maxTimes) * threadSleepTime}")
     nodes.distinct.toSeq
   }
 
@@ -245,21 +273,18 @@ object DistributionUtil {
   /**
    * This method will calculate how many times a loop will run with an interval of given sleep
    * time to wait for requested executors to come up
-   *
-   * @param threadSleepTime
-   * @return
+    *
+    * @return The max retry count
    */
-  private def calculateCounterBasedOnExecutorStartupTime(threadSleepTime: Int): Int = {
-    var executorStartUpTimeOut = CarbonProperties.getInstance
-      .getProperty(CarbonCommonConstants.CARBON_EXECUTOR_STARTUP_TIMEOUT,
-        CarbonCommonConstants.CARBON_EXECUTOR_WAITING_TIMEOUT_DEFAULT).toInt
-    // convert seconds into milliseconds for loop counter calculation
-    executorStartUpTimeOut = executorStartUpTimeOut * 1000
-    // make executor start up time exactly divisible by thread sleep time
-    val remainder = executorStartUpTimeOut % threadSleepTime
+  def calculateMaxRetry(): Int = {
+    val remainder = dynamicAllocationSchTimeOut % CarbonCommonConstants
+      .CARBON_DYNAMIC_ALLOCATION_SCHEDULER_THREAD_SLEEP_TIME
+    val retryCount: Int = dynamicAllocationSchTimeOut / CarbonCommonConstants
+      .CARBON_DYNAMIC_ALLOCATION_SCHEDULER_THREAD_SLEEP_TIME
     if (remainder > 0) {
-      executorStartUpTimeOut = executorStartUpTimeOut + threadSleepTime - remainder
+      retryCount + 1
+    } else {
+      retryCount
     }
-    executorStartUpTimeOut / threadSleepTime
   }
 }
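
For reference, the new wait logic reduces to a ceiling division of the scheduler timeout by the thread sleep interval, plus an early exit once the fraction of registered executors reaches the configured ratio. A minimal standalone Scala sketch of just that arithmetic follows; the timeout, sleep interval and ratio are assumed example values here, whereas DistributionUtil reads the real ones from CarbonProperties.

object SchedulerWaitSketch {

  // Assumed example values; DistributionUtil reads the real ones from CarbonProperties.
  val schedulerTimeoutMillis: Int = 17 * 1000   // node registration wait time
  val threadSleepTimeMillis: Int = 250          // sleep between node-list checks
  val minRegisteredResourceRatio: Double = 0.8  // "enough executors" threshold

  // Ceiling division: one extra retry covers a partial sleep interval.
  def calculateMaxRetry(timeoutMillis: Int, sleepMillis: Int): Int =
    if (timeoutMillis % sleepMillis > 0) timeoutMillis / sleepMillis + 1
    else timeoutMillis / sleepMillis

  // Early-exit condition checked inside the wait loop.
  def enoughExecutors(registered: Int, required: Int): Boolean =
    registered.toDouble / required >= minRegisteredResourceRatio

  def main(args: Array[String]): Unit = {
    println(calculateMaxRetry(schedulerTimeoutMillis, threadSleepTimeMillis)) // 68
    println(enoughExecutors(registered = 4, required = 5))                    // true
  }
}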


[02/50] [abbrv] carbondata git commit: [CARBONDATA-2096] Add query test case for 'merge_small_files' distribution

Posted by ra...@apache.org.
[CARBONDATA-2096] Add query test case for 'merge_small_files' distribution

Add query test case for 'merge_small_files' distribution
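
The pattern used by the new test is to switch the task distribution property to merge-small-files for the duration of the query and restore the default afterwards. A small helper sketch of that pattern, using the same constant names as the test and assuming the CarbonData core classes are on the classpath:

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties

object TaskDistributionSketch {
  // Run body with the merge-small-files task distribution enabled,
  // then restore the default, mirroring the try/finally in the test below.
  def withMergeSmallFiles[T](body: => T): T = {
    val props = CarbonProperties.getInstance()
    props.addProperty(
      CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
      CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES)
    try {
      body
    } finally {
      props.addProperty(
        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_DEFAULT)
    }
  }
}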

This closes #1882


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d90280af
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d90280af
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d90280af

Branch: refs/heads/branch-1.3
Commit: d90280afc8adcab741c7aa29a99b450af78cd8e9
Parents: 24ba2fe
Author: QiangCai <qi...@qq.com>
Authored: Tue Jan 30 17:07:24 2018 +0800
Committer: Jacky Li <ja...@qq.com>
Committed: Wed Jan 31 19:21:04 2018 +0800

----------------------------------------------------------------------
 .../dataload/TestGlobalSortDataLoad.scala       | 27 ++++++++++++++++++--
 .../apache/spark/sql/test/util/QueryTest.scala  |  1 +
 2 files changed, 26 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d90280af/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
index 9ce9675..50a38f1 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
@@ -25,14 +25,15 @@ import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
 import org.apache.spark.sql.Row
+import org.apache.spark.sql.execution.BatchedDataSourceScanExec
 import org.apache.spark.sql.test.TestQueryExecutor.projectPath
 import org.apache.spark.sql.test.util.QueryTest
 import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
 
 import org.apache.carbondata.core.indexstore.blockletindex.SegmentIndexFileStore
 import org.apache.carbondata.core.metadata.CarbonMetadata
-import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.util.path.CarbonStorePath
+import org.apache.carbondata.spark.rdd.CarbonScanRDD
 
 class TestGlobalSortDataLoad extends QueryTest with BeforeAndAfterEach with BeforeAndAfterAll {
   var filePath: String = s"$resourcesPath/globalsort"
@@ -272,7 +273,29 @@ class TestGlobalSortDataLoad extends QueryTest with BeforeAndAfterEach with Befo
     val carbonTable = CarbonMetadata.getInstance().getCarbonTable("default", "carbon_globalsort")
     val carbonTablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
     val segmentDir = carbonTablePath.getSegmentDir("0", "0")
-    assertResult(5)(new File(segmentDir).listFiles().length)
+    assertResult(Math.max(4, defaultParallelism) + 1)(new File(segmentDir).listFiles().length)
+  }
+
+  test("Query with small files") {
+    try {
+      CarbonProperties.getInstance().addProperty(
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES)
+      for (i <- 0 until 10) {
+        sql(s"insert into carbon_globalsort select $i, 'name_$i', 'city_$i', ${ i % 100 }")
+      }
+      val df = sql("select * from carbon_globalsort")
+      val scanRdd = df.queryExecution.sparkPlan.collect {
+        case b: BatchedDataSourceScanExec if b.rdd.isInstanceOf[CarbonScanRDD] =>
+          b.rdd.asInstanceOf[CarbonScanRDD]
+      }.head
+      assertResult(defaultParallelism)(scanRdd.getPartitions.length)
+      assertResult(10)(df.count)
+    } finally {
+      CarbonProperties.getInstance().addProperty(
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_DEFAULT)
+    }
   }
 
   // ----------------------------------- INSERT INTO -----------------------------------

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d90280af/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
index 0079d1e..b87473a 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
@@ -107,6 +107,7 @@ class QueryTest extends PlanTest {
   val metastoredb = TestQueryExecutor.metastoredb
   val integrationPath = TestQueryExecutor.integrationPath
   val dblocation = TestQueryExecutor.location
+  val defaultParallelism = sqlContext.sparkContext.defaultParallelism
 }
 
 object QueryTest {


[31/50] [abbrv] carbondata git commit: [CARBONDATA-2115] Documentation - Scenarios in which aggregate query does not fetch data from the aggregate table

Posted by ra...@apache.org.
[CARBONDATA-2115] Documentation - Scenarios in which aggregate query does not fetch data from the aggregate table

Added an FAQ entry describing the scenarios in which an aggregate query does not fetch data from the aggregate table

This closes #1905


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/88757754
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/88757754
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/88757754

Branch: refs/heads/branch-1.3
Commit: 88757754e26423d741bb51f3f5c0222fea8de9f5
Parents: b48a8c2
Author: sgururajshetty <sg...@gmail.com>
Authored: Thu Feb 1 17:59:17 2018 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Sat Feb 3 16:10:17 2018 +0800

----------------------------------------------------------------------
 docs/faq.md | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/88757754/docs/faq.md
----------------------------------------------------------------------
diff --git a/docs/faq.md b/docs/faq.md
index 6bbd4f7..baa46cc 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -25,6 +25,7 @@
 * [What is Carbon Lock Type?](#what-is-carbon-lock-type)
 * [How to resolve Abstract Method Error?](#how-to-resolve-abstract-method-error)
 * [How Carbon will behave when execute insert operation in abnormal scenarios?](#how-carbon-will-behave-when-execute-insert-operation-in-abnormal-scenarios)
+* [Why aggregate query is not fetching data from aggregate table?](#why-aggregate-query-is-not-fetching-data-from-aggregate-table)
 
 ## What are Bad Records?
 Records that fail to get loaded into the CarbonData due to data type incompatibility or are empty or have incompatible format are classified as Bad Records.
@@ -141,4 +142,40 @@ INSERT INTO TABLE carbon_table SELECT id, city FROM source_table;
 
 When the column type in carbon table is different from the column specified in select statement. The insert operation will still success, but you may get NULL in result, because NULL will be substitute value when conversion type failed.
 
+## Why aggregate query is not fetching data from aggregate table?
+Following are the scenarios in which an aggregate query will not fetch data from the aggregate table:
+
+- **Scenario 1**:
+When a subquery predicate is present in the query.
+
+Example:
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, sum(gdp) from gdp21 group by cntry;
+select ctry from pop1 where ctry in (select cntry from gdp21 group by cntry)
+```
+
+- **Scenario 2**:
+When an aggregate function is used along with an 'in' filter.
+
+Example:
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, sum(gdp) from gdp21 group by cntry;
+select cntry, sum(gdp) from gdp21 where cntry in (select ctry from pop1) group by cntry;
+```
+
+- **Scenario 3**:
+When an aggregate function is used with a 'join' and an equal filter.
+
+Example:
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, sum(gdp) from gdp21 group by cntry;
+select cntry, sum(gdp) from gdp21, pop1 where cntry = ctry group by cntry;
+```
+
 


[46/50] [abbrv] carbondata git commit: [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

Posted by ra...@apache.org.
[CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

Problem:
The load model was not getting deserialized in the executor, due to which two different CarbonTable objects were being created.
Solution:
Reconstruct carbonTable from tableInfo if not already created.
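
One way to read the fix: keep a transient "already updated" flag next to the serialized table and rebuild the table info lazily on first access after deserialization. A minimal Scala sketch of that pattern (the names below are placeholders; the actual change is in the Java class CarbonDataLoadSchema, shown in the diff that follows):

// Placeholder standing in for CarbonData's TableInfo/CarbonTable pair.
final class TableInfoSketch(val tableName: String) extends Serializable

class LoadSchemaSketch(val tableInfo: TableInfoSketch) extends Serializable {

  // Transient: after deserialization on an executor this is false again,
  // so the rebuild step runs exactly once per deserialized copy.
  @transient private var updated: Boolean = false

  def carbonTable: TableInfoSketch = {
    if (!updated) {
      rebuild(tableInfo) // CarbonData calls CarbonTable.updateTableInfo(tableInfo) here
      updated = true
    }
    tableInfo
  }

  private def rebuild(info: TableInfoSketch): Unit = {
    // Reconstruct derived state from the serialized table info.
  }
}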

This closes #1911


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/54b7db51
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/54b7db51
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/54b7db51

Branch: refs/heads/branch-1.3
Commit: 54b7db51906340d6d7b417058f9665731fa51a21
Parents: a7bcc76
Author: kunal642 <ku...@gmail.com>
Authored: Fri Feb 2 17:37:51 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 22:02:54 2018 +0530

----------------------------------------------------------------------
 .../core/metadata/schema/table/CarbonTable.java          |  2 +-
 .../processing/loading/model/CarbonDataLoadSchema.java   | 11 ++++++++++-
 2 files changed, 11 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
index 4bb0d20..09ff440 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
@@ -141,7 +141,7 @@ public class CarbonTable implements Serializable {
    *
    * @param tableInfo
    */
-  private static void updateTableInfo(TableInfo tableInfo) {
+  public static void updateTableInfo(TableInfo tableInfo) {
     List<DataMapSchema> dataMapSchemas = new ArrayList<>();
     for (DataMapSchema dataMapSchema : tableInfo.getDataMapSchemaList()) {
       DataMapSchema newDataMapSchema = DataMapSchemaFactory.INSTANCE

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
index d7aa103..a9d7bd8 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
@@ -37,6 +37,11 @@ public class CarbonDataLoadSchema implements Serializable {
   private CarbonTable carbonTable;
 
   /**
+   * Used to determine if the dataTypes have already been updated or not.
+   */
+  private transient boolean updatedDataTypes;
+
+  /**
    * CarbonDataLoadSchema constructor which takes CarbonTable
    *
    * @param carbonTable
@@ -51,7 +56,11 @@ public class CarbonDataLoadSchema implements Serializable {
    * @return carbonTable
    */
   public CarbonTable getCarbonTable() {
+    if (!updatedDataTypes) {
+      CarbonTable.updateTableInfo(carbonTable.getTableInfo());
+      updatedDataTypes = true;
+    }
     return carbonTable;
   }
 
-}
+}
\ No newline at end of file


[13/50] [abbrv] carbondata git commit: [CARBONDATA-2107] Fixed query failure in the average aggregation case

Posted by ra...@apache.org.
[CARBONDATA-2107] Fixed query failure in the average aggregation case

Average queries were failing when the pre-aggregate data map contained both sum(column) and avg(column) on the same column
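
To reproduce the scenario, condensed from the test added below and assuming a SparkSession named spark that has CarbonData support:

import org.apache.spark.sql.SparkSession

object AvgOnPreAggSketch {
  def run(spark: SparkSession): Unit = {
    spark.sql(
      "CREATE TABLE mainTableavg(id int, name string, city string, age bigint) " +
        "STORED BY 'org.apache.carbondata.format'")
    spark.sql(
      "CREATE DATAMAP agg0 ON TABLE mainTableavg USING 'preaggregate' " +
        "AS SELECT name, sum(age), avg(age) FROM mainTableavg GROUP BY name")
    // Served by mainTableavg_agg0. Before the fix, rewriting avg() as a divide
    // of two sums failed on the bigint column; the fix casts both sum operands
    // to double (see the CarbonPreAggregateRules change below).
    spark.sql("SELECT avg(age) FROM mainTableavg").show()
  }
}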

This closes #1894


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/19fdd4d7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/19fdd4d7
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/19fdd4d7

Branch: refs/heads/branch-1.3
Commit: 19fdd4d7581477557f2771909cf54a95a0b6665d
Parents: d680e9c
Author: kumarvishal <ku...@gmail.com>
Authored: Wed Jan 31 17:44:55 2018 +0530
Committer: kunal642 <ku...@gmail.com>
Committed: Thu Feb 1 17:44:21 2018 +0530

----------------------------------------------------------------------
 .../preaggregate/TestPreAggregateTableSelection.scala    | 11 ++++++++++-
 .../apache/spark/sql/hive/CarbonPreAggregateRules.scala  |  8 ++++++--
 2 files changed, 16 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/19fdd4d7/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
index f9ac354..5fb7b02 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
@@ -29,6 +29,7 @@ class TestPreAggregateTableSelection extends QueryTest with BeforeAndAfterAll {
 
   override def beforeAll: Unit = {
     sql("drop table if exists mainTable")
+    sql("drop table if exists mainTableavg")
     sql("drop table if exists agg0")
     sql("drop table if exists agg1")
     sql("drop table if exists agg2")
@@ -47,7 +48,10 @@ class TestPreAggregateTableSelection extends QueryTest with BeforeAndAfterAll {
     sql("create datamap agg6 on table mainTable using 'preaggregate' as select name,min(age) from mainTable group by name")
     sql("create datamap agg7 on table mainTable using 'preaggregate' as select name,max(age) from mainTable group by name")
     sql("create datamap agg8 on table maintable using 'preaggregate' as select name, sum(id), avg(id) from maintable group by name")
+    sql("CREATE TABLE mainTableavg(id int, name string, city string, age bigint) STORED BY 'org.apache.carbondata.format'")
+    sql("create datamap agg0 on table mainTableavg using 'preaggregate' as select name,sum(age), avg(age) from mainTableavg group by name")
     sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/measureinsertintotest.csv' into table mainTable")
+    sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/measureinsertintotest.csv' into table mainTableavg")
   }
 
   test("test sum and avg on same column should give proper results") {
@@ -191,7 +195,6 @@ class TestPreAggregateTableSelection extends QueryTest with BeforeAndAfterAll {
     preAggTableValidator(df.queryExecution.analyzed, "maintable")
   }
 
-
   def preAggTableValidator(plan: LogicalPlan, actualTableName: String) : Unit ={
     var isValidPlan = false
     plan.transform {
@@ -312,8 +315,14 @@ test("test PreAggregate table selection with timeseries and normal together") {
 
     sql("select var_samp(name) from maintabletime  where name='Mikka' ")
   }
+
+  test("test PreAggregate table selection For Sum And Avg in aggregate table with bigint") {
+    val df = sql("select avg(age) from mainTableavg")
+    preAggTableValidator(df.queryExecution.analyzed, "mainTableavg_agg0")
+  }
   override def afterAll: Unit = {
     sql("drop table if exists mainTable")
+    sql("drop table if exists mainTable_avg")
     sql("drop table if exists lineitem")
     sql("DROP TABLE IF EXISTS maintabletime")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/19fdd4d7/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
index 79cbe05..de58805 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
@@ -1023,10 +1023,14 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
       // with aggregation sum and count.
       // Then add divide(sum(column with sum), sum(column with count)).
       case Average(exp: Expression) =>
-        Divide(AggregateExpression(Sum(attrs.head),
+        Divide(AggregateExpression(Sum(Cast(
+          attrs.head,
+          DoubleType)),
           aggExp.mode,
           isDistinct = false),
-          AggregateExpression(Sum(attrs.last),
+          AggregateExpression(Sum(Cast(
+            attrs.last,
+            DoubleType)),
             aggExp.mode,
             isDistinct = false))
     }


[41/50] [abbrv] carbondata git commit: [CARBONDATA-2126] Documentation for create database and custom location

Posted by ra...@apache.org.
[CARBONDATA-2126] Documentation for create database and custom location

Documentation for create database and custom location

This closes #1923


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4677fc6b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4677fc6b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4677fc6b

Branch: refs/heads/branch-1.3
Commit: 4677fc6b437deadc47a234ae535a5017c6a2c4d8
Parents: 36ff932
Author: sgururajshetty <sg...@gmail.com>
Authored: Sat Feb 3 18:38:10 2018 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Sat Feb 3 21:54:59 2018 +0800

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4677fc6b/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 66cc048..fef2371 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -20,6 +20,7 @@
 This tutorial is going to introduce all commands and data operations on CarbonData.
 
 * [CREATE TABLE](#create-table)
+* [CREATE DATABASE](#create-database)
 * [TABLE MANAGEMENT](#table-management)
 * [LOAD DATA](#load-data)
 * [UPDATE AND DELETE](#update-and-delete)
@@ -149,6 +150,17 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
                    'ALLOWED_COMPACTION_DAYS'='5')
    ```
 
+## CREATE DATABASE 
+  This command creates a new database. By default the database is created in the Carbon store location, but you can also specify a custom location.
+  ```
+  CREATE DATABASE [IF NOT EXISTS] database_name [LOCATION path];
+  ```
+  
+### Example
+  ```
+  CREATE DATABASE carbon LOCATION "hdfs://name_cluster/dir1/carbonstore";
+  ```
+
 ## CREATE TABLE As SELECT
   This function allows you to create a Carbon table from any of the Parquet/Hive/Carbon table. This is beneficial when the user wants to create Carbon table from any other Parquet/Hive table and use the Carbon query engine to query and achieve better query results for cases where Carbon is faster than other file formats. Also this feature can be used for backing up the data.
   ```


[22/50] [abbrv] carbondata git commit: [CARBONDATA-2078][CARBONDATA-1516] Add 'if not exists' for creating datamap

Posted by ra...@apache.org.
[CARBONDATA-2078][CARBONDATA-1516] Add 'if not exists' for creating datamap

Add support for 'if not exists' when creating a datamap
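
The resulting syntax, condensed from the tests below; the sketch assumes a SparkSession named spark with CarbonData support and an existing maintable with id and age columns:

import org.apache.spark.sql.SparkSession

object CreateDataMapIfNotExistsSketch {
  def run(spark: SparkSession): Unit = {
    // Running the same statement twice: the second call is now a no-op instead
    // of failing with TableAlreadyExistsException.
    for (_ <- 1 to 2) {
      spark.sql(
        """CREATE DATAMAP IF NOT EXISTS preagg_sum ON TABLE maintable
          |USING 'preaggregate'
          |AS SELECT id, sum(age) FROM maintable GROUP BY id""".stripMargin)
    }
    // Note: 'CREATE DATAMAP IF EXISTS ...' is not part of the grammar and is rejected.
  }
}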

This closes #1861


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f9606e9d
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f9606e9d
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/f9606e9d

Branch: refs/heads/branch-1.3
Commit: f9606e9d03d55bf57925c2ac176e92553c213d49
Parents: 02eefca
Author: xubo245 <60...@qq.com>
Authored: Thu Jan 25 21:27:27 2018 +0800
Committer: kunal642 <ku...@gmail.com>
Committed: Fri Feb 2 12:08:54 2018 +0530

----------------------------------------------------------------------
 .../preaggregate/TestPreAggCreateCommand.scala  | 55 ++++++++++-
 .../preaggregate/TestPreAggregateLoad.scala     | 96 ++++++++++++++++++-
 .../timeseries/TestTimeSeriesCreateTable.scala  | 85 +++++++++++++----
 .../timeseries/TestTimeseriesDataLoad.scala     | 99 +++++++++++++++++++-
 .../datamap/CarbonCreateDataMapCommand.scala    | 38 ++++++--
 .../CreatePreAggregateTableCommand.scala        |  5 +-
 .../sql/parser/CarbonSpark2SqlParser.scala      |  9 +-
 7 files changed, 353 insertions(+), 34 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index f1d7396..0cb1045 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -19,7 +19,7 @@ package org.apache.carbondata.integration.spark.testsuite.preaggregate
 
 import scala.collection.JavaConverters._
 
-import org.apache.spark.sql.CarbonDatasourceHadoopRelation
+import org.apache.spark.sql.{AnalysisException, CarbonDatasourceHadoopRelation}
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.execution.datasources.LogicalRelation
 import org.apache.spark.sql.hive.CarbonRelation
@@ -321,7 +321,60 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
     checkExistence(sql("show tables"), false, "tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
   }
 
+  test("test pre agg create table 21: should support 'if not exists'") {
+    try {
+      sql(
+        """
+          | CREATE DATAMAP IF NOT EXISTS agg0 ON TABLE mainTable
+          | USING 'preaggregate'
+          | AS SELECT
+          |   column3,
+          |   sum(column3),
+          |   column5,
+          |   sum(column5)
+          | FROM maintable
+          | GROUP BY column3,column5,column2
+        """.stripMargin)
+
+      sql(
+        """
+          | CREATE DATAMAP IF NOT EXISTS agg0 ON TABLE mainTable
+          | USING 'preaggregate'
+          | AS SELECT
+          |   column3,
+          |   sum(column3),
+          |   column5,
+          |   sum(column5)
+          | FROM maintable
+          | GROUP BY column3,column5,column2
+        """.stripMargin)
+      assert(true)
+    } catch {
+      case _: Exception =>
+        assert(false)
+    }
+    sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
+  }
 
+  test("test pre agg create table 22: don't support 'create datamap if exists'") {
+    val e: Exception = intercept[AnalysisException] {
+      sql(
+        """
+          | CREATE DATAMAP IF EXISTS agg0 ON TABLE mainTable
+          | USING 'preaggregate'
+          | AS SELECT
+          |   column3,
+          |   sum(column3),
+          |   column5,
+          |   sum(column5)
+          | FROM maintable
+          | GROUP BY column3,column5,column2
+        """.stripMargin)
+      assert(true)
+    }
+    assert(e.getMessage.contains("identifier matching regex"))
+    sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
+  }
 
   def getCarbontable(plan: LogicalPlan) : CarbonTable ={
     var carbonTable : CarbonTable = null

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
index 4ebf150..b6b7a17 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
@@ -21,9 +21,9 @@ import org.apache.spark.sql.Row
 import org.apache.spark.sql.test.util.QueryTest
 import org.apache.spark.util.SparkUtil4Test
 import org.scalatest.{BeforeAndAfterAll, Ignore}
-
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 
 class TestPreAggregateLoad extends QueryTest with BeforeAndAfterAll {
 
@@ -310,5 +310,99 @@ test("check load and select for avg double datatype") {
     checkAnswer(sql("select name,avg(salary) from maintbl group by name"), rows)
   }
 
+  test("create datamap with 'if not exists' after load data into mainTable and create datamap") {
+    sql("DROP TABLE IF EXISTS maintable")
+    sql(
+      """
+        | CREATE TABLE maintable(id int, name string, city string, age int)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    sql(
+      s"""
+         | create datamap preagg_sum
+         | on table maintable
+         | using 'preaggregate'
+         | as select id,sum(age) from maintable
+         | group by id
+       """.stripMargin)
+
+    sql(
+      s"""
+         | create datamap if not exists preagg_sum
+         | on table maintable
+         | using 'preaggregate'
+         | as select id,sum(age) from maintable
+         | group by id
+       """.stripMargin)
+
+    checkAnswer(sql(s"select * from maintable_preagg_sum"),
+      Seq(Row(1, 31), Row(2, 27), Row(3, 70), Row(4, 55)))
+    sql("drop table if exists maintable")
+  }
+
+  test("create datamap with 'if not exists' after create datamap and load data into mainTable") {
+    sql("DROP TABLE IF EXISTS maintable")
+    sql(
+      """
+        | CREATE TABLE maintable(id int, name string, city string, age int)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+
+    sql(
+      s"""
+         | create datamap preagg_sum
+         | on table maintable
+         | using 'preaggregate'
+         | as select id,sum(age) from maintable
+         | group by id
+       """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    sql(
+      s"""
+         | create datamap if not exists preagg_sum
+         | on table maintable
+         | using 'preaggregate'
+         | as select id,sum(age) from maintable
+         | group by id
+       """.stripMargin)
+
+    checkAnswer(sql(s"select * from maintable_preagg_sum"),
+      Seq(Row(1, 31), Row(2, 27), Row(3, 70), Row(4, 55)))
+    sql("drop table if exists maintable")
+  }
+
+  test("create datamap without 'if not exists' after load data into mainTable and create datamap") {
+    sql("DROP TABLE IF EXISTS maintable")
+    sql(
+      """
+        | CREATE TABLE maintable(id int, name string, city string, age int)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+    sql(
+      s"""
+         | create datamap preagg_sum
+         | on table maintable
+         | using 'preaggregate'
+         | as select id,sum(age) from maintable
+         | group by id
+       """.stripMargin)
+
+    val e: Exception = intercept[TableAlreadyExistsException] {
+      sql(
+        s"""
+           | create datamap preagg_sum
+           | on table maintable
+           | using 'preaggregate'
+           | as select id,sum(age) from maintable
+           | group by id
+       """.stripMargin)
+    }
+    assert(e.getMessage.contains("already exists in database"))
+    checkAnswer(sql(s"select * from maintable_preagg_sum"),
+      Seq(Row(1, 31), Row(2, 27), Row(3, 70), Row(4, 55)))
+    sql("drop table if exists maintable")
+  }
 
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
index 0ca7cb9..b63fd53 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
@@ -16,6 +16,7 @@
  */
 package org.apache.carbondata.integration.spark.testsuite.timeseries
 
+import org.apache.spark.sql.AnalysisException
 import org.apache.spark.sql.test.util.QueryTest
 import org.scalatest.BeforeAndAfterAll
 
@@ -81,29 +82,29 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
        """.stripMargin)
   }
 
-  test("test timeseries create table Zero") {
+  test("test timeseries create table 1") {
     checkExistence(sql("DESCRIBE FORMATTED mainTable_agg0_second"), true, "maintable_agg0_second")
     sql("drop datamap agg0_second on table mainTable")
   }
 
-  test("test timeseries create table One") {
+  test("test timeseries create table 2") {
     checkExistence(sql("DESCRIBE FORMATTED mainTable_agg0_hour"), true, "maintable_agg0_hour")
     sql("drop datamap agg0_hour on table mainTable")
   }
-  test("test timeseries create table two") {
+  test("test timeseries create table 3") {
     checkExistence(sql("DESCRIBE FORMATTED maintable_agg0_day"), true, "maintable_agg0_day")
     sql("drop datamap agg0_day on table mainTable")
   }
-  test("test timeseries create table three") {
+  test("test timeseries create table 4") {
     checkExistence(sql("DESCRIBE FORMATTED mainTable_agg0_month"), true, "maintable_agg0_month")
     sql("drop datamap agg0_month on table mainTable")
   }
-  test("test timeseries create table four") {
+  test("test timeseries create table 5") {
     checkExistence(sql("DESCRIBE FORMATTED mainTable_agg0_year"), true, "maintable_agg0_year")
     sql("drop datamap agg0_year on table mainTable")
   }
 
-  test("test timeseries create table five") {
+  test("test timeseries create table 6") {
     intercept[Exception] {
       sql(
         s"""
@@ -118,7 +119,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     }
   }
 
-  test("test timeseries create table Six") {
+  test("test timeseries create table 7") {
     intercept[Exception] {
       sql(
         s"""
@@ -133,7 +134,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     }
   }
 
-  test("test timeseries create table seven") {
+  test("test timeseries create table 8") {
     intercept[Exception] {
       sql(
         s"""
@@ -158,7 +159,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     }
   }
 
-  test("test timeseries create table Eight") {
+  test("test timeseries create table 9") {
     intercept[Exception] {
       sql(
         s"""
@@ -173,7 +174,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     }
   }
 
-  test("test timeseries create table Nine") {
+  test("test timeseries create table 10") {
     intercept[Exception] {
       sql(
         s"""
@@ -188,7 +189,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     }
   }
 
-  test("test timeseries create table: USING") {
+  test("test timeseries create table 11: USING") {
     val e: Exception = intercept[MalformedDataMapCommandException] {
       sql(
         """CREATE DATAMAP agg1 ON TABLE mainTable
@@ -203,7 +204,7 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Unknown data map type abc"))
   }
 
-  test("test timeseries create table: USING and catch MalformedCarbonCommandException") {
+  test("test timeseries create table 12: USING and catch MalformedCarbonCommandException") {
     val e: Exception = intercept[MalformedCarbonCommandException] {
       sql(
         """CREATE DATAMAP agg1 ON TABLE mainTable
@@ -218,7 +219,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Unknown data map type abc"))
   }
 
-  test("test timeseries create table: Only one granularity level can be defined 1") {
+  test("test timeseries create table 13: Only one granularity level can be defined 1") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e: Exception = intercept[MalformedCarbonCommandException] {
       sql(
         s"""
@@ -238,7 +240,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Only one granularity level can be defined"))
   }
 
-  test("test timeseries create table: Only one granularity level can be defined 2") {
+  test("test timeseries create table 14: Only one granularity level can be defined 2") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e: Exception = intercept[MalformedDataMapCommandException] {
       sql(
         s"""
@@ -255,7 +258,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Only one granularity level can be defined"))
   }
 
-  test("test timeseries create table: Only one granularity level can be defined 3") {
+  test("test timeseries create table 15: Only one granularity level can be defined 3") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e: Exception = intercept[MalformedDataMapCommandException] {
       sql(
         s"""
@@ -272,7 +276,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Only one granularity level can be defined"))
   }
 
-  test("test timeseries create table: Granularity only support 1") {
+  test("test timeseries create table 16: Granularity only support 1") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e = intercept[MalformedDataMapCommandException] {
       sql(
         s"""
@@ -288,7 +293,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Granularity only support 1"))
   }
 
-  test("test timeseries create table: Granularity only support 1 and throw Exception") {
+  test("test timeseries create table 17: Granularity only support 1 and throw Exception") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e = intercept[MalformedCarbonCommandException] {
       sql(
         s"""
@@ -304,7 +310,8 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals("Granularity only support 1"))
   }
 
-  test("test timeseries create table: timeSeries should define time granularity") {
+  test("test timeseries create table 18: timeSeries should define time granularity") {
+    sql("DROP DATAMAP IF EXISTS agg0_second ON TABLE mainTable")
     val e = intercept[MalformedDataMapCommandException] {
       sql(
         s"""
@@ -319,6 +326,48 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.equals(s"$timeSeries should define time granularity"))
   }
 
+  test("test timeseries create table 19: should support if not exists") {
+    sql("DROP DATAMAP IF EXISTS agg1 ON TABLE mainTable")
+
+    sql(
+      s"""
+         | CREATE DATAMAP agg1 ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='dataTime',
+         |   'MONTH_GRANULARITY'='1')
+         | AS SELECT dataTime, SUM(age) FROM mainTable
+         | GROUP BY dataTime
+        """.stripMargin)
+    sql(
+      s"""
+         | CREATE DATAMAP IF NOT EXISTS agg1 ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='dataTime',
+         |   'MONTH_GRANULARITY'='1')
+         |AS SELECT dataTime, SUM(age) FROM mainTable
+         |GROUP BY dataTime
+        """.stripMargin)
+    checkExistence(sql("SHOW DATAMAP ON TABLE mainTable"), true, "agg1")
+    checkExistence(sql("DESC FORMATTED mainTable_agg1"), true, "maintable_age_sum")
+  }
+
+  test("test timeseries create table 20: don't support 'create datamap if exists'") {
+    val e: Exception = intercept[AnalysisException] {
+      sql(
+        s"""CREATE DATAMAP IF EXISTS agg2 ON TABLE mainTable
+          | USING '$timeSeries'
+          | DMPROPERTIES (
+          |   'EVENT_TIME'='dataTime',
+          |   'MONTH_GRANULARITY'='1')
+          | AS SELECT dataTime, SUM(age) FROM mainTable
+          | GROUP BY dataTime
+        """.stripMargin)
+    }
+    assert(e.getMessage.contains("identifier matching regex"))
+  }
+
   override def afterAll: Unit = {
     sql("DROP TABLE IF EXISTS mainTable")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesDataLoad.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesDataLoad.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesDataLoad.scala
index 8bcdfc9..b43b93b 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesDataLoad.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesDataLoad.scala
@@ -19,9 +19,10 @@ package org.apache.carbondata.integration.spark.testsuite.timeseries
 import java.sql.Timestamp
 
 import org.apache.spark.sql.Row
+import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 import org.apache.spark.sql.test.util.QueryTest
 import org.apache.spark.util.SparkUtil4Test
-import org.scalatest.{BeforeAndAfterAll, Ignore}
+import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider.TIMESERIES
@@ -239,6 +240,102 @@ class TestTimeseriesDataLoad extends QueryTest with BeforeAndAfterAll {
         Row(Timestamp.valueOf("2016-02-23 01:02:50.0"),50)))
   }
 
+  test("create datamap without 'if not exists' after load data into mainTable and create datamap") {
+    sql("drop table if exists mainTable")
+    sql(
+      """
+        | CREATE TABLE mainTable(
+        |   mytime timestamp,
+        |   name string,
+        |   age int)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/timeseriestest.csv' into table mainTable")
+
+    sql(
+      s"""
+         | CREATE DATAMAP agg0_second ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='mytime',
+         |   'second_granularity'='1')
+         | AS SELECT mytime, SUM(age) FROM mainTable
+         | GROUP BY mytime
+        """.stripMargin)
+
+    checkAnswer(sql("select * from maintable_agg0_second"),
+      Seq(Row(Timestamp.valueOf("2016-02-23 01:01:30.0"), 10),
+        Row(Timestamp.valueOf("2016-02-23 01:01:40.0"), 20),
+        Row(Timestamp.valueOf("2016-02-23 01:01:50.0"), 30),
+        Row(Timestamp.valueOf("2016-02-23 01:02:30.0"), 40),
+        Row(Timestamp.valueOf("2016-02-23 01:02:40.0"), 50),
+        Row(Timestamp.valueOf("2016-02-23 01:02:50.0"), 50)))
+    val e: Exception = intercept[TableAlreadyExistsException] {
+      sql(
+        s"""
+           | CREATE DATAMAP agg0_second ON TABLE mainTable
+           | USING '$timeSeries'
+           | DMPROPERTIES (
+           |   'EVENT_TIME'='mytime',
+           |   'second_granularity'='1')
+           | AS SELECT mytime, SUM(age) FROM mainTable
+           | GROUP BY mytime
+        """.stripMargin)
+    }
+    assert(e.getMessage.contains("already exists in database"))
+  }
+
+  test("create datamap with 'if not exists' after load data into mainTable and create datamap") {
+    sql("drop table if exists mainTable")
+    sql(
+      """
+        | CREATE TABLE mainTable(
+        |   mytime timestamp,
+        |   name string,
+        |   age int)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/timeseriestest.csv' into table mainTable")
+
+    sql(
+      s"""
+         | CREATE DATAMAP agg0_second ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='mytime',
+         |   'second_granularity'='1')
+         | AS SELECT mytime, SUM(age) FROM mainTable
+         | GROUP BY mytime
+        """.stripMargin)
+
+    checkAnswer(sql("select * from maintable_agg0_second"),
+      Seq(Row(Timestamp.valueOf("2016-02-23 01:01:30.0"), 10),
+        Row(Timestamp.valueOf("2016-02-23 01:01:40.0"), 20),
+        Row(Timestamp.valueOf("2016-02-23 01:01:50.0"), 30),
+        Row(Timestamp.valueOf("2016-02-23 01:02:30.0"), 40),
+        Row(Timestamp.valueOf("2016-02-23 01:02:40.0"), 50),
+        Row(Timestamp.valueOf("2016-02-23 01:02:50.0"), 50)))
+
+    sql(
+      s"""
+         | CREATE DATAMAP IF NOT EXISTS  agg0_second ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='mytime',
+         |   'second_granularity'='1')
+         | AS SELECT mytime, SUM(age) FROM mainTable
+         | GROUP BY mytime
+        """.stripMargin)
+
+    checkAnswer(sql("select * from maintable_agg0_second"),
+      Seq(Row(Timestamp.valueOf("2016-02-23 01:01:30.0"), 10),
+        Row(Timestamp.valueOf("2016-02-23 01:01:40.0"), 20),
+        Row(Timestamp.valueOf("2016-02-23 01:01:50.0"), 30),
+        Row(Timestamp.valueOf("2016-02-23 01:02:30.0"), 40),
+        Row(Timestamp.valueOf("2016-02-23 01:02:40.0"), 50),
+        Row(Timestamp.valueOf("2016-02-23 01:02:50.0"), 50)))
+  }
+
   override def afterAll: Unit = {
     sql("drop table if exists mainTable")
     sql("drop table if exists table_03")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
index c4d32b4..da20ac5 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
@@ -17,6 +17,7 @@
 package org.apache.spark.sql.execution.command.datamap
 
 import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.execution.command._
 import org.apache.spark.sql.execution.command.preaaggregate.CreatePreAggregateTableCommand
@@ -35,10 +36,12 @@ case class CarbonCreateDataMapCommand(
     tableIdentifier: TableIdentifier,
     dmClassName: String,
     dmproperties: Map[String, String],
-    queryString: Option[String])
+    queryString: Option[String],
+    ifNotExistsSet: Boolean = false)
   extends AtomicRunnableCommand {
 
   var createPreAggregateTableCommands: CreatePreAggregateTableCommand = _
+  var tableIsExists: Boolean = false
 
   override def processMetadata(sparkSession: SparkSession): Seq[Row] = {
     // since streaming segment does not support building index and pre-aggregate yet,
@@ -49,10 +52,22 @@ case class CarbonCreateDataMapCommand(
       throw new MalformedCarbonCommandException("Streaming table does not support creating datamap")
     }
     val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+    val dbName = tableIdentifier.database.getOrElse("default")
+    val tableName = tableIdentifier.table + "_" + dataMapName
 
-    if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
+    if (sparkSession.sessionState.catalog.listTables(dbName)
+      .exists(_.table.equalsIgnoreCase(tableName))) {
+      LOGGER.audit(
+        s"Table creation with Database name [$dbName] and Table name [$tableName] failed. " +
+          s"Table [$tableName] already exists under database [$dbName]")
+      tableIsExists = true
+      if (!ifNotExistsSet) {
+        throw new TableAlreadyExistsException(dbName, tableName)
+      }
+    } else if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
       dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
       TimeSeriesUtil.validateTimeSeriesGranularity(dmproperties, dmClassName)
+
       createPreAggregateTableCommands = if (dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
         val details = TimeSeriesUtil
           .getTimeSeriesGranularityDetails(dmproperties, dmClassName)
@@ -62,15 +77,16 @@ case class CarbonCreateDataMapCommand(
           dmClassName,
           updatedDmProperties,
           queryString.get,
-          Some(details._1))
+          Some(details._1),
+          ifNotExistsSet = ifNotExistsSet)
       } else {
         CreatePreAggregateTableCommand(
           dataMapName,
           tableIdentifier,
           dmClassName,
           dmproperties,
-          queryString.get
-        )
+          queryString.get,
+          ifNotExistsSet = ifNotExistsSet)
       }
       createPreAggregateTableCommands.processMetadata(sparkSession)
     } else {
@@ -83,7 +99,11 @@ case class CarbonCreateDataMapCommand(
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
       dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
-      createPreAggregateTableCommands.processData(sparkSession)
+      if (!tableIsExists) {
+        createPreAggregateTableCommands.processData(sparkSession)
+      } else {
+        Seq.empty
+      }
     } else {
       throw new MalformedDataMapCommandException("Unknown data map type " + dmClassName)
     }
@@ -92,7 +112,11 @@ case class CarbonCreateDataMapCommand(
   override def undoMetadata(sparkSession: SparkSession, exception: Exception): Seq[Row] = {
     if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
       dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
-      createPreAggregateTableCommands.undoMetadata(sparkSession, exception)
+      if (!tableIsExists) {
+        createPreAggregateTableCommands.undoMetadata(sparkSession, exception)
+      } else {
+        Seq.empty
+      }
     } else {
       throw new MalformedDataMapCommandException("Unknown data map type " + dmClassName)
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
index 3de75c2..31a3403 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/preaaggregate/CreatePreAggregateTableCommand.scala
@@ -49,7 +49,8 @@ case class CreatePreAggregateTableCommand(
     dmClassName: String,
     dmProperties: Map[String, String],
     queryString: String,
-    timeSeriesFunction: Option[String] = None)
+    timeSeriesFunction: Option[String] = None,
+    ifNotExistsSet: Boolean = false)
   extends AtomicRunnableCommand {
 
   var parentTable: CarbonTable = _
@@ -86,7 +87,7 @@ case class CreatePreAggregateTableCommand(
         parentTableIdentifier.database)
     // prepare table model of the collected tokens
     val tableModel: TableModel = new CarbonSpark2SqlParser().prepareTableModel(
-      ifNotExistPresent = false,
+      ifNotExistPresent = ifNotExistsSet,
       new CarbonSpark2SqlParser().convertDbNameToLowerCase(tableIdentifier.database),
       tableIdentifier.table.toLowerCase,
       fields,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala
index 4045478..7addd26 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala
@@ -142,17 +142,18 @@ class CarbonSpark2SqlParser extends CarbonDDLSqlParser {
 
   /**
    * The syntax of datamap creation is as follows.
-   * CREATE DATAMAP datamapName ON TABLE tableName USING 'DataMapClassName'
+   * CREATE DATAMAP IF NOT EXISTS datamapName ON TABLE tableName USING 'DataMapClassName'
    * DMPROPERTIES('KEY'='VALUE') AS SELECT COUNT(COL1) FROM tableName
    */
   protected lazy val createDataMap: Parser[LogicalPlan] =
-    CREATE ~> DATAMAP ~> ident ~ (ON ~ TABLE) ~  (ident <~ ".").? ~ ident ~
+    CREATE ~> DATAMAP ~> opt(IF ~> NOT ~> EXISTS) ~ ident ~
+    (ON ~ TABLE) ~  (ident <~ ".").? ~ ident ~
     (USING ~> stringLit) ~ (DMPROPERTIES ~> "(" ~> repsep(loadOptions, ",") <~ ")").? ~
     (AS ~> restInput).? <~ opt(";") ^^ {
-      case dmname ~ ontable ~ dbName ~ tableName ~ className ~ dmprops ~ query =>
+      case ifnotexists ~ dmname ~ ontable ~ dbName ~ tableName ~ className ~ dmprops ~ query =>
         val map = dmprops.getOrElse(List[(String, String)]()).toMap[String, String]
         CarbonCreateDataMapCommand(
-          dmname, TableIdentifier(tableName, dbName), className, map, query)
+          dmname, TableIdentifier(tableName, dbName), className, map, query, ifnotexists.isDefined)
     }
 
   /**


[28/50] [abbrv] carbondata git commit: [CARBONDATA-2110] Deprecate the 'tempCSV' option of DataFrame load

Posted by ra...@apache.org.
[CARBONDATA-2110] Deprecate the 'tempCSV' option of DataFrame load

Deprecate the 'tempCSV' option of DataFrame load; no temporary CSV file is generated on HDFS any more, regardless of the value of tempCSV
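
A DataFrame write after this change, condensed from the new test and assuming a SparkSession named spark with CarbonData support; the tempCSV option is still accepted but no longer produces an intermediate CSV on HDFS:

import org.apache.spark.sql.{SaveMode, SparkSession}

object DataFrameLoadSketch {
  def run(spark: SparkSession): Unit = {
    import spark.implicits._
    val df = spark.sparkContext.parallelize(1 to 3)
      .map(x => (s"name_$x", x))
      .toDF("c1", "c2")
    df.write
      .format("carbondata")
      .option("tableName", "carbon12")
      .option("tempCSV", "true") // deprecated: ignored, data is loaded directly
      .mode(SaveMode.Overwrite)
      .save()
  }
}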

This closes #1916


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/da129d52
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/da129d52
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/da129d52

Branch: refs/heads/branch-1.3
Commit: da129d5277babe498fa5686fe53d01433d112bab
Parents: 6c097cb
Author: qiuchenjian <80...@qq.com>
Authored: Sat Feb 3 00:14:07 2018 +0800
Committer: Jacky Li <ja...@qq.com>
Committed: Sat Feb 3 15:29:08 2018 +0800

----------------------------------------------------------------------
 .../testsuite/dataload/TestLoadDataFrame.scala  | 19 ++++
 .../spark/sql/CarbonDataFrameWriter.scala       | 98 +-------------------
 2 files changed, 20 insertions(+), 97 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/da129d52/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
index 6f03493..693c145 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
@@ -29,6 +29,7 @@ class TestLoadDataFrame extends QueryTest with BeforeAndAfterAll {
   var df: DataFrame = _
   var dataFrame: DataFrame = _
   var df2: DataFrame = _
+  var df3: DataFrame = _
   var booldf:DataFrame = _
 
 
@@ -52,6 +53,10 @@ class TestLoadDataFrame extends QueryTest with BeforeAndAfterAll {
       .map(x => ("key_" + x, "str_" + x, x, x * 2, x * 3))
       .toDF("c1", "c2", "c3", "c4", "c5")
 
+    df3 = sqlContext.sparkContext.parallelize(1 to 3)
+      .map(x => (x.toString + "te,s\nt", x))
+      .toDF("c1", "c2")
+
     val boolrdd = sqlContext.sparkContext.parallelize(
       Row("anubhav",true) ::
         Row("prince",false) :: Nil)
@@ -74,6 +79,7 @@ class TestLoadDataFrame extends QueryTest with BeforeAndAfterAll {
     sql("DROP TABLE IF EXISTS carbon9")
     sql("DROP TABLE IF EXISTS carbon10")
     sql("DROP TABLE IF EXISTS carbon11")
+    sql("DROP TABLE IF EXISTS carbon12")
     sql("DROP TABLE IF EXISTS df_write_sort_column_not_specified")
     sql("DROP TABLE IF EXISTS df_write_specify_sort_column")
     sql("DROP TABLE IF EXISTS df_write_empty_sort_column")
@@ -261,6 +267,19 @@ test("test the boolean data type"){
     val isStreaming: String = descResult.collect().find(row=>row(0).asInstanceOf[String].trim.equalsIgnoreCase("streaming")).get.get(1).asInstanceOf[String]
     assert(isStreaming.contains("true"))
   }
+
+  test("test datasource table with specified char") {
+
+    df3.write
+      .format("carbondata")
+      .option("tableName", "carbon12")
+      .option("tempCSV", "true")
+      .mode(SaveMode.Overwrite)
+      .save()
+    checkAnswer(
+      sql("select count(*) from carbon12"), Row(3)
+    )
+  }
   private def getSortColumnValue(tableName: String): Array[String] = {
     val desc = sql(s"desc formatted $tableName")
     val sortColumnRow = desc.collect.find(r =>

http://git-wip-us.apache.org/repos/asf/carbondata/blob/da129d52/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
index 2b06375..2be89b1 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
@@ -17,16 +17,12 @@
 
 package org.apache.spark.sql
 
-import org.apache.hadoop.fs.Path
-import org.apache.hadoop.io.compress.GzipCodec
 import org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.util.CarbonException
 
 import org.apache.carbondata.common.logging.LogServiceFactory
-import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.metadata.datatype.{DataTypes => CarbonType}
-import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.spark.CarbonOption
 
 class CarbonDataFrameWriter(sqlContext: SQLContext, val dataFrame: DataFrame) {
@@ -46,90 +42,8 @@ class CarbonDataFrameWriter(sqlContext: SQLContext, val dataFrame: DataFrame) {
 
   private def writeToCarbonFile(parameters: Map[String, String] = Map()): Unit = {
     val options = new CarbonOption(parameters)
-    if (options.tempCSV) {
-      loadTempCSV(options)
-    } else {
-      loadDataFrame(options)
-    }
+    loadDataFrame(options)
   }
-
-  /**
-   * Firstly, saving DataFrame to CSV files
-   * Secondly, load CSV files
-   * @param options
-   */
-  private def loadTempCSV(options: CarbonOption): Unit = {
-    // temporary solution: write to csv file, then load the csv into carbon
-    val storePath = CarbonProperties.getStorePath
-    val tempCSVFolder = new StringBuilder(storePath).append(CarbonCommonConstants.FILE_SEPARATOR)
-      .append("tempCSV")
-      .append(CarbonCommonConstants.UNDERSCORE)
-      .append(CarbonEnv.getDatabaseName(options.dbName)(sqlContext.sparkSession))
-      .append(CarbonCommonConstants.UNDERSCORE)
-      .append(options.tableName)
-      .append(CarbonCommonConstants.UNDERSCORE)
-      .append(System.nanoTime())
-      .toString
-    writeToTempCSVFile(tempCSVFolder, options)
-
-    val tempCSVPath = new Path(tempCSVFolder)
-    val fs = tempCSVPath.getFileSystem(dataFrame.sqlContext.sparkContext.hadoopConfiguration)
-
-    def countSize(): Double = {
-      var size: Double = 0
-      val itor = fs.listFiles(tempCSVPath, true)
-      while (itor.hasNext) {
-        val f = itor.next()
-        if (f.getPath.getName.startsWith("part-")) {
-          size += f.getLen
-        }
-      }
-      size
-    }
-
-    LOGGER.info(s"temporary CSV file size: ${countSize / 1024 / 1024} MB")
-
-    try {
-      sqlContext.sql(makeLoadString(tempCSVFolder, options))
-    } finally {
-      fs.delete(tempCSVPath, true)
-    }
-  }
-
-  private def writeToTempCSVFile(tempCSVFolder: String, options: CarbonOption): Unit = {
-    val strRDD = dataFrame.rdd.mapPartitions { case iter =>
-      new Iterator[String] {
-        override def hasNext = iter.hasNext
-
-        def convertToCSVString(seq: Seq[Any]): String = {
-          val build = new java.lang.StringBuilder()
-          if (seq.head != null) {
-            build.append(seq.head.toString)
-          }
-          val itemIter = seq.tail.iterator
-          while (itemIter.hasNext) {
-            build.append(CarbonCommonConstants.COMMA)
-            val value = itemIter.next()
-            if (value != null) {
-              build.append(value.toString)
-            }
-          }
-          build.toString
-        }
-
-        override def next: String = {
-          convertToCSVString(iter.next.toSeq)
-        }
-      }
-    }
-
-    if (options.compress) {
-      strRDD.saveAsTextFile(tempCSVFolder, classOf[GzipCodec])
-    } else {
-      strRDD.saveAsTextFile(tempCSVFolder)
-    }
-  }
-
   /**
    * Loading DataFrame directly without saving DataFrame to CSV files.
    * @param options
@@ -189,14 +103,4 @@ class CarbonDataFrameWriter(sqlContext: SQLContext, val dataFrame: DataFrame) {
      """.stripMargin
   }
 
-  private def makeLoadString(csvFolder: String, options: CarbonOption): String = {
-    val dbName = CarbonEnv.getDatabaseName(options.dbName)(sqlContext.sparkSession)
-    s"""
-       | LOAD DATA INPATH '$csvFolder'
-       | INTO TABLE $dbName.${options.tableName}
-       | OPTIONS ('FILEHEADER' = '${dataFrame.columns.mkString(",")}',
-       | 'SINGLE_PASS' = '${options.singlePass}')
-     """.stripMargin
-  }
-
 }


[15/50] [abbrv] carbondata git commit: Problem: For old store the measure min and max values are written opposite (i.e min in place of max and max in place of min). Due to this computing of measure filter with current code is impacted. This problem speci

Posted by ra...@apache.org.
Problem:
For the old store, the measure min and max values are written in the opposite positions (i.e. min in place of max and max in place of min). Because of this, measure filter computation with the current code is impacted.
This problem specifically occurs when the measure data contains negative values.

Impact
Filter queries on measure columns

Solution
To stay in sync with the current min and max semantics, the measure min and max values are swapped back for the old store, using an old-store flag.
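
A minimal Scala sketch of the idea (illustrative only; the actual fix lives in the Java core classes changed below, and this helper is hypothetical):

  // For blocks flagged as coming from the old store, the persisted measure
  // min/max arrays are swapped back before filters are evaluated; blocks from
  // the current store are left untouched.
  def correctedMeasureMinMax(
      isDataBlockFromOldStore: Boolean,
      minValues: Array[Array[Byte]],
      maxValues: Array[Array[Byte]]): (Array[Array[Byte]], Array[Array[Byte]]) = {
    if (isDataBlockFromOldStore) (maxValues, minValues) else (minValues, maxValues)
  }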

This closes #1879


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1248bd4b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1248bd4b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1248bd4b

Branch: refs/heads/branch-1.3
Commit: 1248bd4b7ff4bb45392106082011dde7f9db460f
Parents: ee1c4d4
Author: manishgupta88 <to...@gmail.com>
Authored: Tue Jan 30 08:56:13 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Thu Feb 1 22:13:36 2018 +0530

----------------------------------------------------------------------
 .../core/datastore/block/TableBlockInfo.java    | 14 +++++
 .../blockletindex/BlockletDMComparator.java     |  2 +-
 .../blockletindex/BlockletDataMap.java          | 61 +++++---------------
 .../BlockletDataRefNodeWrapper.java             | 39 ++++++++++++-
 .../executor/impl/AbstractQueryExecutor.java    | 11 ++++
 .../core/scan/filter/ColumnFilterInfo.java      |  9 ++-
 .../apache/carbondata/core/util/CarbonUtil.java | 47 +++++++++++++++
 .../carbondata/core/util/CarbonUtilTest.java    | 46 +++++++++++++++
 8 files changed, 178 insertions(+), 51 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
index c3cc551..b27b5fc 100644
--- a/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
+++ b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
@@ -72,6 +72,12 @@ public class TableBlockInfo implements Distributable, Serializable {
   private String[] locations;
 
   private ColumnarFormatVersion version;
+
+  /**
+   * flag to determine whether the data block is from old store (version 1.1)
+   * or current store
+   */
+  private boolean isDataBlockFromOldStore;
   /**
    * The class holds the blockletsinfo
    */
@@ -410,4 +416,12 @@ public class TableBlockInfo implements Distributable, Serializable {
   public void setBlockletId(String blockletId) {
     this.blockletId = blockletId;
   }
+
+  public boolean isDataBlockFromOldStore() {
+    return isDataBlockFromOldStore;
+  }
+
+  public void setDataBlockFromOldStore(boolean dataBlockFromOldStore) {
+    isDataBlockFromOldStore = dataBlockFromOldStore;
+  }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
index fccbda8..9a50600 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
@@ -63,7 +63,7 @@ public class BlockletDMComparator implements Comparator<DataMapRow> {
     int compareResult = 0;
     int processedNoDictionaryColumn = numberOfNoDictSortColumns;
     byte[][] firstBytes = splitKey(first.getByteArray(0));
-    byte[][] secondBytes = splitKey(second.getByteArray(0));
+    byte[][] secondBytes = splitKey(first.getByteArray(0));
     byte[] firstNoDictionaryKeys = firstBytes[1];
     ByteBuffer firstNoDictionaryKeyBuffer = ByteBuffer.wrap(firstNoDictionaryKeys);
     byte[] secondNoDictionaryKeys = secondBytes[1];

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
index b097c66..699f9e1 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
@@ -26,7 +26,6 @@ import java.io.UnsupportedEncodingException;
 import java.math.BigDecimal;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.BitSet;
 import java.util.Comparator;
 import java.util.List;
@@ -48,7 +47,6 @@ import org.apache.carbondata.core.indexstore.UnsafeMemoryDMStore;
 import org.apache.carbondata.core.indexstore.row.DataMapRow;
 import org.apache.carbondata.core.indexstore.row.DataMapRowImpl;
 import org.apache.carbondata.core.indexstore.schema.CarbonRowSchema;
-import org.apache.carbondata.core.keygenerator.KeyGenException;
 import org.apache.carbondata.core.memory.MemoryException;
 import org.apache.carbondata.core.metadata.blocklet.BlockletInfo;
 import org.apache.carbondata.core.metadata.blocklet.DataFileFooter;
@@ -64,6 +62,7 @@ import org.apache.carbondata.core.scan.filter.executer.FilterExecuter;
 import org.apache.carbondata.core.scan.filter.executer.ImplicitColumnFilterExecutor;
 import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
 import org.apache.carbondata.core.util.ByteUtil;
+import org.apache.carbondata.core.util.CarbonUtil;
 import org.apache.carbondata.core.util.DataFileFooterConverter;
 import org.apache.carbondata.core.util.DataTypeUtil;
 
@@ -298,18 +297,23 @@ public class BlockletDataMap implements DataMap, Cacheable {
 
     BlockletMinMaxIndex minMaxIndex = blockletIndex.getMinMaxIndex();
     byte[][] minValues = updateMinValues(minMaxIndex.getMinValues(), minMaxLen);
-    row.setRow(addMinMax(minMaxLen, schema[ordinal], minValues), ordinal);
+    byte[][] maxValues = updateMaxValues(minMaxIndex.getMaxValues(), minMaxLen);
+    // update min max values in case of old store
+    byte[][] updatedMinValues =
+        CarbonUtil.updateMinMaxValues(fileFooter, maxValues, minValues, true);
+    byte[][] updatedMaxValues =
+        CarbonUtil.updateMinMaxValues(fileFooter, maxValues, minValues, false);
+    row.setRow(addMinMax(minMaxLen, schema[ordinal], updatedMinValues), ordinal);
     // compute and set task level min values
     addTaskMinMaxValues(summaryRow, minMaxLen,
-        unsafeMemorySummaryDMStore.getSchema()[taskMinMaxOrdinal], minValues,
+        unsafeMemorySummaryDMStore.getSchema()[taskMinMaxOrdinal], updatedMinValues,
         TASK_MIN_VALUES_INDEX, true);
     ordinal++;
     taskMinMaxOrdinal++;
-    byte[][] maxValues = updateMaxValues(minMaxIndex.getMaxValues(), minMaxLen);
-    row.setRow(addMinMax(minMaxLen, schema[ordinal], maxValues), ordinal);
+    row.setRow(addMinMax(minMaxLen, schema[ordinal], updatedMaxValues), ordinal);
     // compute and set task level max values
     addTaskMinMaxValues(summaryRow, minMaxLen,
-        unsafeMemorySummaryDMStore.getSchema()[taskMinMaxOrdinal], maxValues,
+        unsafeMemorySummaryDMStore.getSchema()[taskMinMaxOrdinal], updatedMaxValues,
         TASK_MAX_VALUES_INDEX, false);
     ordinal++;
 
@@ -624,42 +628,7 @@ public class BlockletDataMap implements DataMap, Cacheable {
     if (unsafeMemoryDMStore.getRowCount() == 0) {
       return new ArrayList<>();
     }
-    // getting the start and end index key based on filter for hitting the
-    // selected block reference nodes based on filter resolver tree.
-    if (LOGGER.isDebugEnabled()) {
-      LOGGER.debug("preparing the start and end key for finding"
-          + "start and end block as per filter resolver");
-    }
     List<Blocklet> blocklets = new ArrayList<>();
-    Comparator<DataMapRow> comparator =
-        new BlockletDMComparator(segmentProperties.getColumnsValueSize(),
-            segmentProperties.getNumberOfSortColumns(),
-            segmentProperties.getNumberOfNoDictSortColumns());
-    List<IndexKey> listOfStartEndKeys = new ArrayList<IndexKey>(2);
-    FilterUtil
-        .traverseResolverTreeAndGetStartAndEndKey(segmentProperties, filterExp, listOfStartEndKeys);
-    // reading the first value from list which has start key
-    IndexKey searchStartKey = listOfStartEndKeys.get(0);
-    // reading the last value from list which has end key
-    IndexKey searchEndKey = listOfStartEndKeys.get(1);
-    if (null == searchStartKey && null == searchEndKey) {
-      try {
-        // TODO need to handle for no dictionary dimensions
-        searchStartKey = FilterUtil.prepareDefaultStartIndexKey(segmentProperties);
-        // TODO need to handle for no dictionary dimensions
-        searchEndKey = FilterUtil.prepareDefaultEndIndexKey(segmentProperties);
-      } catch (KeyGenException e) {
-        return null;
-      }
-    }
-    if (LOGGER.isDebugEnabled()) {
-      LOGGER.debug(
-          "Successfully retrieved the start and end key" + "Dictionary Start Key: " + Arrays
-              .toString(searchStartKey.getDictionaryKeys()) + "No Dictionary Start Key " + Arrays
-              .toString(searchStartKey.getNoDictionaryKeys()) + "Dictionary End Key: " + Arrays
-              .toString(searchEndKey.getDictionaryKeys()) + "No Dictionary End Key " + Arrays
-              .toString(searchEndKey.getNoDictionaryKeys()));
-    }
     if (filterExp == null) {
       int rowCount = unsafeMemoryDMStore.getRowCount();
       for (int i = 0; i < rowCount; i++) {
@@ -667,11 +636,13 @@ public class BlockletDataMap implements DataMap, Cacheable {
         blocklets.add(createBlocklet(safeRow, safeRow.getShort(BLOCKLET_ID_INDEX)));
       }
     } else {
-      int startIndex = findStartIndex(convertToRow(searchStartKey), comparator);
-      int endIndex = findEndIndex(convertToRow(searchEndKey), comparator);
+      // Remove B-tree jump logic as start and end key prepared is not
+      // correct for old store scenarios
+      int startIndex = 0;
+      int endIndex = unsafeMemoryDMStore.getRowCount();
       FilterExecuter filterExecuter =
           FilterUtil.getFilterExecuterTree(filterExp, segmentProperties, null);
-      while (startIndex <= endIndex) {
+      while (startIndex < endIndex) {
         DataMapRow safeRow = unsafeMemoryDMStore.getUnsafeRow(startIndex).convertToSafeRow();
         int blockletId = safeRow.getShort(BLOCKLET_ID_INDEX);
         String filePath = new String(safeRow.getByteArray(FILE_PATH_INDEX),

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
index 097dd8c..b672c58 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataRefNodeWrapper.java
@@ -132,13 +132,48 @@ public class BlockletDataRefNodeWrapper implements DataRefNode {
   public MeasureRawColumnChunk[] getMeasureChunks(FileHolder fileReader, int[][] blockIndexes)
       throws IOException {
     MeasureColumnChunkReader measureColumnChunkReader = getMeasureColumnChunkReader(fileReader);
-    return measureColumnChunkReader.readRawMeasureChunks(fileReader, blockIndexes);
+    MeasureRawColumnChunk[] measureRawColumnChunks =
+        measureColumnChunkReader.readRawMeasureChunks(fileReader, blockIndexes);
+    updateMeasureRawColumnChunkMinMaxValues(measureRawColumnChunks);
+    return measureRawColumnChunks;
   }
 
   @Override public MeasureRawColumnChunk getMeasureChunk(FileHolder fileReader, int blockIndex)
       throws IOException {
     MeasureColumnChunkReader measureColumnChunkReader = getMeasureColumnChunkReader(fileReader);
-    return measureColumnChunkReader.readRawMeasureChunk(fileReader, blockIndex);
+    MeasureRawColumnChunk measureRawColumnChunk =
+        measureColumnChunkReader.readRawMeasureChunk(fileReader, blockIndex);
+    updateMeasureRawColumnChunkMinMaxValues(measureRawColumnChunk);
+    return measureRawColumnChunk;
+  }
+
+  /**
+   * This method is written specifically for the old store, where the measure min and max values
+   * are written opposite (i.e. min in place of max and max in place of min). Because of this,
+   * computing measure filters with the current code is impacted. To sync with the current min and
+   * max semantics, the values are swapped, but only for measures and only for old-store blocks.
+   *
+   * @param measureRawColumnChunk
+   */
+  private void updateMeasureRawColumnChunkMinMaxValues(
+      MeasureRawColumnChunk measureRawColumnChunk) {
+    if (blockInfos.get(index).isDataBlockFromOldStore()) {
+      byte[][] maxValues = measureRawColumnChunk.getMaxValues();
+      byte[][] minValues = measureRawColumnChunk.getMinValues();
+      measureRawColumnChunk.setMaxValues(minValues);
+      measureRawColumnChunk.setMinValues(maxValues);
+    }
+  }
+
+  private void updateMeasureRawColumnChunkMinMaxValues(
+      MeasureRawColumnChunk[] measureRawColumnChunks) {
+    if (blockInfos.get(index).isDataBlockFromOldStore()) {
+      for (int i = 0; i < measureRawColumnChunks.length; i++) {
+        if (null != measureRawColumnChunks[i]) {
+          updateMeasureRawColumnChunkMinMaxValues(measureRawColumnChunks[i]);
+        }
+      }
+    }
   }
 
   private DimensionColumnChunkReader getDimensionColumnChunkReader(FileHolder fileReader) {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java b/core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java
index c33d5ac..6875f35 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java
@@ -225,9 +225,20 @@ public abstract class AbstractQueryExecutor<E> implements QueryExecutor<E> {
       TableBlockInfo info = blockInfo.copy();
       BlockletDetailInfo detailInfo = info.getDetailInfo();
       detailInfo.setRowCount(blockletInfo.getNumberOfRows());
+      // update min and max values in case of old store for measures as min and max is written
+      // opposite for measures in old store
+      byte[][] maxValues = CarbonUtil.updateMinMaxValues(fileFooter,
+          blockletInfo.getBlockletIndex().getMinMaxIndex().getMaxValues(),
+          blockletInfo.getBlockletIndex().getMinMaxIndex().getMinValues(), false);
+      byte[][] minValues = CarbonUtil.updateMinMaxValues(fileFooter,
+          blockletInfo.getBlockletIndex().getMinMaxIndex().getMaxValues(),
+          blockletInfo.getBlockletIndex().getMinMaxIndex().getMinValues(), true);
+      blockletInfo.getBlockletIndex().getMinMaxIndex().setMaxValues(maxValues);
+      blockletInfo.getBlockletIndex().getMinMaxIndex().setMinValues(minValues);
       detailInfo.setBlockletInfo(blockletInfo);
       detailInfo.setPagesCount((short) blockletInfo.getNumberOfPages());
       detailInfo.setBlockletId(count);
+      info.setDataBlockFromOldStore(true);
       tableBlockInfos.add(info);
       count++;
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/scan/filter/ColumnFilterInfo.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/filter/ColumnFilterInfo.java b/core/src/main/java/org/apache/carbondata/core/scan/filter/ColumnFilterInfo.java
index b5b6017..75ec35e 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/ColumnFilterInfo.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/ColumnFilterInfo.java
@@ -35,7 +35,7 @@ public class ColumnFilterInfo implements Serializable {
   /**
    * Implicit column filter values to be used for block and blocklet pruning
    */
-  private List<String> implicitColumnFilterList;
+  private Set<String> implicitColumnFilterList;
   private transient Set<String> implicitDriverColumnFilterList;
   private List<Integer> excludeFilterList;
   /**
@@ -85,12 +85,15 @@ public class ColumnFilterInfo implements Serializable {
   public void setExcludeFilterList(List<Integer> excludeFilterList) {
     this.excludeFilterList = excludeFilterList;
   }
-  public List<String> getImplicitColumnFilterList() {
+  public Set<String> getImplicitColumnFilterList() {
     return implicitColumnFilterList;
   }
 
   public void setImplicitColumnFilterList(List<String> implicitColumnFilterList) {
-    this.implicitColumnFilterList = implicitColumnFilterList;
+    // this is done to improve query performance. As the size of the list grows, searching it gets
+    // slower because List.contains uses an equals check internally, whereas a Set lookup is very
+    // fast as it uses the hash code directly to find the bucket and search it
+    this.implicitColumnFilterList = new HashSet<>(implicitColumnFilterList);
   }
 
   public List<Object> getMeasuresFilterValuesList() {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index e060c84..b62b77d 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -85,6 +85,8 @@ import org.apache.carbondata.core.statusmanager.LoadMetadataDetails;
 import org.apache.carbondata.core.statusmanager.SegmentStatus;
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
 import org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager;
+import org.apache.carbondata.core.util.comparator.Comparator;
+import org.apache.carbondata.core.util.comparator.SerializableComparator;
 import org.apache.carbondata.core.util.path.CarbonStorePath;
 import org.apache.carbondata.core.util.path.CarbonTablePath;
 import org.apache.carbondata.format.BlockletHeader;
@@ -2397,5 +2399,50 @@ public final class CarbonUtil {
     return Base64.decodeBase64(objectString.getBytes(CarbonCommonConstants.DEFAULT_CHARSET));
   }
 
+  /**
+   * This method will be used to update the min and max values and this will be used in case of
+   * old store where min and max values for measures are written opposite
+   * (i.e max values in place of min and min in place of max values)
+   *
+   * @param dataFileFooter
+   * @param maxValues
+   * @param minValues
+   * @param isMinValueComparison
+   * @return
+   */
+  public static byte[][] updateMinMaxValues(DataFileFooter dataFileFooter, byte[][] maxValues,
+      byte[][] minValues, boolean isMinValueComparison) {
+    byte[][] updatedMinMaxValues = new byte[maxValues.length][];
+    if (isMinValueComparison) {
+      System.arraycopy(minValues, 0, updatedMinMaxValues, 0, minValues.length);
+    } else {
+      System.arraycopy(maxValues, 0, updatedMinMaxValues, 0, maxValues.length);
+    }
+    for (int i = 0; i < maxValues.length; i++) {
+      // update min and max values only for measures
+      if (!dataFileFooter.getColumnInTable().get(i).isDimensionColumn()) {
+        DataType dataType = dataFileFooter.getColumnInTable().get(i).getDataType();
+        SerializableComparator comparator = Comparator.getComparator(dataType);
+        int compare;
+        if (isMinValueComparison) {
+          compare = comparator
+              .compare(DataTypeUtil.getMeasureObjectFromDataType(maxValues[i], dataType),
+                  DataTypeUtil.getMeasureObjectFromDataType(minValues[i], dataType));
+          if (compare < 0) {
+            updatedMinMaxValues[i] = maxValues[i];
+          }
+        } else {
+          compare = comparator
+              .compare(DataTypeUtil.getMeasureObjectFromDataType(minValues[i], dataType),
+                  DataTypeUtil.getMeasureObjectFromDataType(maxValues[i], dataType));
+          if (compare > 0) {
+            updatedMinMaxValues[i] = minValues[i];
+          }
+        }
+      }
+    }
+    return updatedMinMaxValues;
+  }
+
 }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/test/java/org/apache/carbondata/core/util/CarbonUtilTest.java
----------------------------------------------------------------------
diff --git a/core/src/test/java/org/apache/carbondata/core/util/CarbonUtilTest.java b/core/src/test/java/org/apache/carbondata/core/util/CarbonUtilTest.java
index fdb5310..984efdb 100644
--- a/core/src/test/java/org/apache/carbondata/core/util/CarbonUtilTest.java
+++ b/core/src/test/java/org/apache/carbondata/core/util/CarbonUtilTest.java
@@ -1045,6 +1045,52 @@ public class CarbonUtilTest {
     Assert.assertTrue(schemaString.length() > schema.length());
   }
 
+  @Test
+  public void testUpdateMinMaxValues() {
+    // create dimension and measure column schema
+    ColumnSchema dimensionColumnSchema = createColumnSchema(DataTypes.STRING, true);
+    ColumnSchema measureColumnSchema = createColumnSchema(DataTypes.DOUBLE, false);
+    List<ColumnSchema> columnSchemas = new ArrayList<>(2);
+    columnSchemas.add(dimensionColumnSchema);
+    columnSchemas.add(measureColumnSchema);
+    // create data file footer object
+    DataFileFooter fileFooter = new DataFileFooter();
+    fileFooter.setColumnInTable(columnSchemas);
+    // initialise the expected values
+    int expectedMaxValue = 5;
+    int expectedMinValue = 2;
+    double expectedMeasureMaxValue = 28.74;
+    double expectedMeasureMinValue = -21.46;
+    // initialise the minValues
+    byte[][] minValues = new byte[2][];
+    minValues[0] = new byte[] { 2 };
+    ByteBuffer buffer = ByteBuffer.allocate(8);
+    minValues[1] = (byte[]) buffer.putDouble(28.74).flip().array();
+    buffer = ByteBuffer.allocate(8);
+    // initialise the maxValues
+    byte[][] maxValues = new byte[2][];
+    maxValues[0] = new byte[] { 5 };
+    maxValues[1] = (byte[]) buffer.putDouble(-21.46).flip().array();
+    byte[][] updateMaxValues =
+        CarbonUtil.updateMinMaxValues(fileFooter, maxValues, minValues, false);
+    byte[][] updateMinValues =
+        CarbonUtil.updateMinMaxValues(fileFooter, maxValues, minValues, true);
+    // compare max values
+    assert (expectedMaxValue == ByteBuffer.wrap(updateMaxValues[0]).get());
+    assert (expectedMeasureMaxValue == ByteBuffer.wrap(updateMaxValues[1]).getDouble());
+
+    // compare min values
+    assert (expectedMinValue == ByteBuffer.wrap(updateMinValues[0]).get());
+    assert (expectedMeasureMinValue == ByteBuffer.wrap(updateMinValues[1]).getDouble());
+  }
+
+  private ColumnSchema createColumnSchema(DataType dataType, boolean isDimensionColumn) {
+    ColumnSchema columnSchema = new ColumnSchema();
+    columnSchema.setDataType(dataType);
+    columnSchema.setDimensionColumn(isDimensionColumn);
+    return columnSchema;
+  }
+
   private String generateString(int length) {
     StringBuilder builder = new StringBuilder();
     for (int i = 0; i < length; i++) {


[39/50] [abbrv] carbondata git commit: [CARBONDATA-2105] Fixed bug for null values when group by column is present as dictionary_include

Posted by ra...@apache.org.
[CARBONDATA-2105] Fixed bug for null values when group by column is present as dictionary_include

Refactored the code to resolve the issue of null values when the group-by column is specified as DICTIONARY_INCLUDE.
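
The scenario, mirroring the test added below (table name, inserted values and the expected result Row(10, 10.0) are taken from that test; a sql helper as in QueryTest is assumed):

  // Main table with the group-by column "year" declared as DICTIONARY_INCLUDE,
  // plus an avg pre-aggregate datamap; the rollup query should return 10.0, not null.
  sql("create table maintabledict(year int,month int,name string,salary int,dob string) " +
      "stored by 'carbondata' tblproperties('DICTIONARY_INCLUDE'='year')")
  sql("insert into maintabledict select 10,11,'x',12,'2014-01-01 00:00:00'")
  sql("create datamap aggdict on table maintabledict using 'preaggregate' " +
      "as select year,avg(year) from maintabledict group by year")
  sql("select year,avg(year) from maintabledict group by year").show()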

This closes  #1917


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/11f23714
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/11f23714
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/11f23714

Branch: refs/heads/branch-1.3
Commit: 11f23714cd7ff49759a57e89ae947fde56a40c06
Parents: 55bffbe
Author: SangeetaGulia <sa...@knoldus.in>
Authored: Fri Feb 2 22:41:24 2018 +0530
Committer: kumarvishal <ku...@gmail.com>
Committed: Sat Feb 3 18:21:18 2018 +0530

----------------------------------------------------------------------
 .../preaggregate/TestPreAggCreateCommand.scala     |  4 ++--
 .../TestPreAggregateTableSelection.scala           | 17 +++++++++++++++++
 .../command/carbonTableSchemaCommon.scala          |  3 ++-
 3 files changed, 21 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/11f23714/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 38ab9ae..5d0f61b 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -200,8 +200,8 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
     sql("create datamap agg0 on table mainTable using 'preaggregate' as select column1, count(column1),column6, count(column6) from maintable group by column6,column1")
     val df = sql("select * from maintable_agg0")
     val carbontable = getCarbontable(df.queryExecution.analyzed)
-    assert(carbontable.getAllMeasures.size()==1)
-    assert(carbontable.getAllDimensions.size()==4)
+    assert(carbontable.getAllMeasures.size()==2)
+    assert(carbontable.getAllDimensions.size()==2)
     carbontable.getAllDimensions.asScala.foreach{ f =>
       assert(f.getEncoder.contains(Encoding.DICTIONARY))
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/11f23714/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
index 5fb7b02..19d4abe 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
@@ -38,6 +38,7 @@ class TestPreAggregateTableSelection extends QueryTest with BeforeAndAfterAll {
     sql("drop table if exists agg5")
     sql("drop table if exists agg6")
     sql("drop table if exists agg7")
+    sql("DROP TABLE IF EXISTS maintabledict")
     sql("CREATE TABLE mainTable(id int, name string, city string, age string) STORED BY 'org.apache.carbondata.format'")
     sql("create datamap agg0 on table mainTable using 'preaggregate' as select name from mainTable group by name")
     sql("create datamap agg1 on table mainTable using 'preaggregate' as select name,sum(age) from mainTable group by name")
@@ -320,11 +321,27 @@ test("test PreAggregate table selection with timeseries and normal together") {
     val df = sql("select avg(age) from mainTableavg")
     preAggTableValidator(df.queryExecution.analyzed, "mainTableavg_agg0")
   }
+
+  test("test PreAggregate table selection for avg with maintable containing dictionary include for group by column") {
+    sql(
+      "create table maintabledict(year int,month int,name string,salary int,dob string) stored" +
+      " by 'carbondata' tblproperties('DICTIONARY_INCLUDE'='year')")
+    sql("insert into maintabledict select 10,11,'x',12,'2014-01-01 00:00:00'")
+    sql("insert into maintabledict select 10,11,'x',12,'2014-01-01 00:00:00'")
+    sql(
+      "create datamap aggdict on table maintabledict using 'preaggregate' as select year,avg(year) from " +
+      "maintabledict group by year")
+    val df = sql("select year,avg(year) from maintabledict group by year")
+    checkAnswer(df, Seq(Row(10,10.0)))
+  }
+
+
   override def afterAll: Unit = {
     sql("drop table if exists mainTable")
     sql("drop table if exists mainTable_avg")
     sql("drop table if exists lineitem")
     sql("DROP TABLE IF EXISTS maintabletime")
+    sql("DROP TABLE IF EXISTS maintabledict")
   }
 
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/11f23714/integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
index 9a0098e..bc84e04 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
@@ -544,7 +544,8 @@ class TableNewProcessor(cm: TableModel) {
       val encoders = if (getEncoderFromParent(field)) {
         isAggFunPresent =
           cm.dataMapRelation.get.get(field).get.aggregateFunction.equalsIgnoreCase("sum") ||
-          cm.dataMapRelation.get.get(field).get.aggregateFunction.equals("avg")
+          cm.dataMapRelation.get.get(field).get.aggregateFunction.equals("avg") ||
+          cm.dataMapRelation.get.get(field).get.aggregateFunction.equals("count")
         if(!isAggFunPresent) {
           cm.parentTable.get.getColumnByName(
             cm.parentTable.get.getTableName,


[14/50] [abbrv] carbondata git commit: [CARBONDATA-2094] Filter DataMap Tables in Show Table Command

Posted by ra...@apache.org.
[CARBONDATA-2094] Filter DataMap Tables in Show Table Command

Currently the SHOW TABLES command lists datamap tables (aggregate tables), but it should not. Solution: handle the SHOW TABLES command on the Carbon side, filter out the datamap tables, and return only the remaining tables.
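
For example (behaviour taken from the tests added below; the column list is trimmed for brevity), aggregate child tables disappear from SHOW TABLES but remain visible through SHOW DATAMAP:

  // The pre-aggregate child table tbl_1_agg1 is filtered out of SHOW TABLES,
  // while SHOW DATAMAP ON TABLE still lists it.
  sql("create table tbl_1(imei string, age int, mac string) stored by 'carbondata'")
  sql("create datamap agg1 on table tbl_1 using 'preaggregate' " +
      "as select mac, sum(age) from tbl_1 group by mac")
  sql("show tables").show()                  // shows tbl_1 but not tbl_1_agg1
  sql("show datamap on table tbl_1").show()  // lists the agg1 datamap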

This closes #1089


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/ee1c4d42
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/ee1c4d42
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/ee1c4d42

Branch: refs/heads/branch-1.3
Commit: ee1c4d42fc0837e515ac222c676bd46fe93795d5
Parents: 19fdd4d
Author: BJangir <ba...@gmail.com>
Authored: Mon Jan 29 23:46:56 2018 +0530
Committer: kumarvishal <ku...@gmail.com>
Committed: Thu Feb 1 18:42:05 2018 +0530

----------------------------------------------------------------------
 .../preaggregate/TestPreAggCreateCommand.scala  | 36 +++++++++
 .../preaggregate/TestPreAggregateDrop.scala     |  9 ++-
 .../command/table/CarbonShowTablesCommand.scala | 82 ++++++++++++++++++++
 .../spark/sql/hive/CarbonSessionState.scala     | 11 ++-
 .../spark/sql/hive/CarbonSessionState.scala     | 11 ++-
 5 files changed, 142 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 23132de..f1d7396 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -233,6 +233,20 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
   }
 
   val timeSeries = TIMESERIES.toString
+  test("remove agg tables from show table command") {
+    sql("DROP TABLE IF EXISTS tbl_1")
+    sql("DROP TABLE IF EXISTS sparktable")
+    sql("create table if not exists  tbl_1(imei string,age int,mac string ,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 'carbondata' ")
+    sql("create table if not exists sparktable(a int,b string)")
+    sql(
+      s"""create datamap preagg_sum on table tbl_1 using 'preaggregate' as select mac,avg(age) from tbl_1 group by mac"""
+        .stripMargin)
+    sql(
+      "create datamap agg2 on table tbl_1 using 'preaggregate' DMPROPERTIES ('timeseries" +
+      ".eventTime'='prodate', 'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select prodate," +
+      "mac from tbl_1 group by prodate,mac")
+    checkExistence(sql("show tables"), false, "tbl_1_preagg_sum","tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
+  }
 
   test("test pre agg  create table 21: create with preaggregate and hierarchy") {
     sql("DROP TABLE IF EXISTS maintabletime")
@@ -287,6 +301,28 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
     sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
   }
 
+  test("remove  agg tables from show table command") {
+    sql("DROP TABLE IF EXISTS tbl_1")
+    sql("create table if not exists  tbl_1(imei string,age int,mac string ,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 'carbondata' ")
+    sql("create datamap agg1 on table tbl_1 using 'preaggregate' as select mac, sum(age) from tbl_1 group by mac")
+    sql("create table if not exists  sparktable(imei string,age int,mac string ,prodate timestamp,update timestamp,gamepoint double,contrid double) ")
+    checkExistence(sql("show tables"), false, "tbl_1_agg1")
+    checkExistence(sql("show tables"), true, "sparktable","tbl_1")
+  }
+
+
+  test("remove TimeSeries agg tables from show table command") {
+    sql("DROP TABLE IF EXISTS tbl_1")
+    sql("create table if not exists  tbl_1(imei string,age int,mac string ,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 'carbondata' ")
+    sql(
+      "create datamap agg2 on table tbl_1 using 'preaggregate' DMPROPERTIES ('timeseries" +
+      ".eventTime'='prodate', 'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select prodate," +
+      "mac from tbl_1 group by prodate,mac")
+    checkExistence(sql("show tables"), false, "tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
+  }
+
+
+
   def getCarbontable(plan: LogicalPlan) : CarbonTable ={
     var carbonTable : CarbonTable = null
     plan.transform {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala
index 1138adf..911a725 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala
@@ -46,8 +46,9 @@ class TestPreAggregateDrop extends QueryTest with BeforeAndAfterAll {
       " a,sum(c) from maintable group by a")
     sql("drop datamap if exists preagg2 on table maintable")
     val showTables = sql("show tables")
+    val showdatamaps =sql("show datamap on table maintable")
     checkExistence(showTables, false, "maintable_preagg2")
-    checkExistence(showTables, true, "maintable_preagg1")
+    checkExistence(showdatamaps, true, "maintable_preagg1")
   }
 
   test("drop datamap which is not existed") {
@@ -66,8 +67,9 @@ class TestPreAggregateDrop extends QueryTest with BeforeAndAfterAll {
 
     sql("drop datamap preagg_same on table maintable")
     var showTables = sql("show tables")
+    val showdatamaps =sql("show datamap on table maintable1")
     checkExistence(showTables, false, "maintable_preagg_same")
-    checkExistence(showTables, true, "maintable1_preagg_same")
+    checkExistence(showdatamaps, true, "maintable1_preagg_same")
     sql("drop datamap preagg_same on table maintable1")
     showTables = sql("show tables")
     checkExistence(showTables, false, "maintable1_preagg_same")
@@ -84,7 +86,8 @@ class TestPreAggregateDrop extends QueryTest with BeforeAndAfterAll {
     sql("create datamap preagg_same1 on table maintable using 'preaggregate' as select" +
         " a,sum(c) from maintable group by a")
     showTables = sql("show tables")
-    checkExistence(showTables, true, "maintable_preagg_same1")
+    val showdatamaps =sql("show datamap on table maintable")
+    checkExistence(showdatamaps, true, "maintable_preagg_same1")
     sql("drop datamap preagg_same1 on table maintable")
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonShowTablesCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonShowTablesCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonShowTablesCommand.scala
new file mode 100644
index 0000000..c2a91d8
--- /dev/null
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonShowTablesCommand.scala
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.table
+
+import scala.collection.JavaConverters._
+
+import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.execution.command.MetadataCommand
+import org.apache.spark.sql.types.{BooleanType, StringType}
+
+
+private[sql] case class CarbonShowTablesCommand ( databaseName: Option[String],
+    tableIdentifierPattern: Option[String])  extends MetadataCommand{
+
+  // The result of SHOW TABLES has three columns: database, tableName and isTemporary.
+  override val output: Seq[Attribute] = {
+    AttributeReference("database", StringType, nullable = false)() ::
+    AttributeReference("tableName", StringType, nullable = false)() ::
+    AttributeReference("isTemporary", BooleanType, nullable = false)() :: Nil
+  }
+
+  override def processMetadata(sparkSession: SparkSession): Seq[Row] = {
+    // Since we need to return a Seq of rows, we will call getTables directly
+    // instead of calling tables in sparkSession.
+    // the filterDataMaps method filters out the datamap (aggregate) tables.
+    val catalog = sparkSession.sessionState.catalog
+    val db = databaseName.getOrElse(catalog.getCurrentDatabase)
+    var tables =
+      tableIdentifierPattern.map(catalog.listTables(db, _)).getOrElse(catalog.listTables(db))
+    tables = filterDataMaps(tables, sparkSession)
+    tables.map { tableIdent =>
+      val isTemp = catalog.isTemporaryTable(tableIdent)
+      Row(tableIdent.database.getOrElse("default"), tableIdent.table, isTemp)
+    }
+  }
+
+  /**
+   *
+   * @param tables table identifiers
+   * @param sparkSession spark session
+   * @return tables remaining after filtering out datamap tables
+   */
+  private def filterDataMaps(tables: Seq[TableIdentifier],
+      sparkSession: SparkSession): Seq[TableIdentifier] = {
+    // Filter the carbon tables, then get each CarbonTable's datamap list and filter against it.
+    // As of now two lookups happen (filter carbon table, getDataMapSchemaList).
+    // TODO : add another PR (CARBONDATA-2103) to improve this to a single lookup
+    val allDatamapTable = tables.filter { table =>
+      CarbonEnv.getInstance(sparkSession).carbonMetastore
+        .tableExists(table)(sparkSession)
+    }.map { table =>
+      val ctable = CarbonEnv.getCarbonTable(table.database, table.table)(sparkSession)
+      ctable.getTableInfo.getDataMapSchemaList.asScala
+    }
+    val alldamrelation = allDatamapTable
+      .flatMap { table =>
+        table.map(eachtable => eachtable.getRelationIdentifier.toString)
+      }
+    tables
+      .filter { table =>
+        !alldamrelation
+          .contains(table.database.getOrElse("default") + "." + table.identifier)
+      }
+  }
+}

http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark2/src/main/spark2.1/org/apache/spark/sql/hive/CarbonSessionState.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/spark2.1/org/apache/spark/sql/hive/CarbonSessionState.scala b/integration/spark2/src/main/spark2.1/org/apache/spark/sql/hive/CarbonSessionState.scala
index 0fe0f96..0b62e10 100644
--- a/integration/spark2/src/main/spark2.1/org/apache/spark/sql/hive/CarbonSessionState.scala
+++ b/integration/spark2/src/main/spark2.1/org/apache/spark/sql/hive/CarbonSessionState.scala
@@ -22,11 +22,12 @@ import org.apache.spark.sql.catalyst.catalog.{CatalogTablePartition, FunctionRes
 import org.apache.spark.sql.catalyst.expressions.{And, AttributeReference, BoundReference, Expression, InterpretedPredicate, PredicateSubquery, ScalarSubquery}
 import org.apache.spark.sql.catalyst.optimizer.Optimizer
 import org.apache.spark.sql.catalyst.parser.ParserInterface
-import org.apache.spark.sql.catalyst.parser.ParserUtils._
-import org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext
+import org.apache.spark.sql.catalyst.parser.ParserUtils.{string, _}
+import org.apache.spark.sql.catalyst.parser.SqlBaseParser.{CreateTableContext, ShowTablesContext}
 import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan, SubqueryAlias}
 import org.apache.spark.sql.catalyst.rules.Rule
 import org.apache.spark.sql.catalyst.{CatalystConf, TableIdentifier}
+import org.apache.spark.sql.execution.command.table.CarbonShowTablesCommand
 import org.apache.spark.sql.execution.datasources._
 import org.apache.spark.sql.execution.strategy.{CarbonLateDecodeStrategy, DDLStrategy, StreamingTableStrategy}
 import org.apache.spark.sql.execution.{SparkOptimizer, SparkSqlAstBuilder}
@@ -336,4 +337,10 @@ class CarbonSqlAstBuilder(conf: SQLConf, parser: CarbonSpark2SqlParser, sparkSes
       super.visitCreateTable(ctx)
     }
   }
+
+  override def visitShowTables(ctx: ShowTablesContext): LogicalPlan = withOrigin(ctx) {
+    CarbonShowTablesCommand(
+      Option(ctx.db).map(_.getText),
+      Option(ctx.pattern).map(string))
+  }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark2/src/main/spark2.2/org/apache/spark/sql/hive/CarbonSessionState.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/spark2.2/org/apache/spark/sql/hive/CarbonSessionState.scala b/integration/spark2/src/main/spark2.2/org/apache/spark/sql/hive/CarbonSessionState.scala
index 3c151f0..baadd04 100644
--- a/integration/spark2/src/main/spark2.2/org/apache/spark/sql/hive/CarbonSessionState.scala
+++ b/integration/spark2/src/main/spark2.2/org/apache/spark/sql/hive/CarbonSessionState.scala
@@ -27,12 +27,13 @@ import org.apache.spark.sql.catalyst.catalog._
 import org.apache.spark.sql.catalyst.expressions.{And, AttributeReference, BoundReference, Exists, Expression, In, InterpretedPredicate, ListQuery, ScalarSubquery}
 import org.apache.spark.sql.catalyst.optimizer.Optimizer
 import org.apache.spark.sql.catalyst.parser.ParserInterface
-import org.apache.spark.sql.catalyst.parser.ParserUtils.string
-import org.apache.spark.sql.catalyst.parser.SqlBaseParser.{AddTableColumnsContext, ChangeColumnContext, CreateHiveTableContext, CreateTableContext}
+import org.apache.spark.sql.catalyst.parser.ParserUtils.{string, withOrigin}
+import org.apache.spark.sql.catalyst.parser.SqlBaseParser.{AddTableColumnsContext, ChangeColumnContext, CreateHiveTableContext, CreateTableContext, ShowTablesContext}
 import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan, SubqueryAlias}
 import org.apache.spark.sql.catalyst.rules.Rule
 import org.apache.spark.sql.execution.command._
 import org.apache.spark.sql.execution.command.schema.{CarbonAlterTableAddColumnCommand, CarbonAlterTableDataTypeChangeCommand}
+import org.apache.spark.sql.execution.command.table.CarbonShowTablesCommand
 import org.apache.spark.sql.execution.datasources.{FindDataSourceTable, LogicalRelation, PreWriteCheck, ResolveSQLOnFile, _}
 import org.apache.spark.sql.execution.strategy.{CarbonLateDecodeStrategy, DDLStrategy, StreamingTableStrategy}
 import org.apache.spark.sql.execution.{SparkOptimizer, SparkSqlAstBuilder}
@@ -395,4 +396,10 @@ class CarbonSqlAstBuilder(conf: SQLConf, parser: CarbonSpark2SqlParser, sparkSes
   override def visitCreateTable(ctx: CreateTableContext): LogicalPlan = {
     super.visitCreateTable(ctx)
   }
+
+  override def visitShowTables(ctx: ShowTablesContext): LogicalPlan = withOrigin(ctx) {
+    CarbonShowTablesCommand(
+      Option(ctx.db).map(_.getText),
+      Option(ctx.pattern).map(string))
+  }
 }


[26/50] [abbrv] carbondata git commit: [CARBONDATA-2108]Updated unsafe sort memory configuration

Posted by ra...@apache.org.
[CARBONDATA-2108]Updated unsafe sort memory configuration

Deprecated old property: sort.inmemory.size.inmb
Added new property: carbon.sort.storage.inmemory.size.inmb
If the user has configured the old property, it is internally converted to the new one.
For example, if sort.inmemory.size.inmb is configured, 20% of that memory is used as working memory and the rest as storage memory.
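
A rough Scala sketch of the described conversion (an illustrative helper only; the real validation is added to CarbonProperties below):

  // If only the deprecated sort.inmemory.size.inmb is configured, roughly 20% of
  // it is used as unsafe working memory and the remaining 80% becomes the value
  // of carbon.sort.storage.inmemory.size.inmb.
  def splitDeprecatedSortMemory(sortInMemorySizeInMB: Long): (Long, Long) = {
    val workingMemoryMB = sortInMemorySizeInMB * 20 / 100
    val storageMemoryMB = sortInMemorySizeInMB - workingMemoryMB
    (workingMemoryMB, storageMemoryMB)
  }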

This closes #1896


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/27ec6515
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/27ec6515
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/27ec6515

Branch: refs/heads/branch-1.3
Commit: 27ec6515a143dc3b697ac914bfcd4cfe10a49e17
Parents: 2610a60
Author: kumarvishal <ku...@gmail.com>
Authored: Wed Jan 31 18:43:02 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Fri Feb 2 23:18:21 2018 +0530

----------------------------------------------------------------------
 .../core/constants/CarbonCommonConstants.java   |  5 +
 .../core/memory/UnsafeMemoryManager.java        |  2 +-
 .../core/memory/UnsafeSortMemoryManager.java    |  6 +-
 .../carbondata/core/util/CarbonProperties.java  | 99 ++++++++++++++++++++
 4 files changed, 108 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 87eec8a..8480758 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1585,6 +1585,11 @@ public final class CarbonCommonConstants {
 
   public static final String CARBON_ENABLE_PAGE_LEVEL_READER_IN_COMPACTION_DEFAULT = "true";
 
+  @CarbonProperty
+  public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB =
+      "carbon.sort.storage.inmemory.size.inmb";
+  public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT = "512";
+
   private CarbonCommonConstants() {
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
index 4222e14..d3b9b48 100644
--- a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
@@ -47,7 +47,7 @@ public class UnsafeMemoryManager {
           .getProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB,
               CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT));
     } catch (Exception e) {
-      size = Long.parseLong(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+      size = Long.parseLong(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT);
       LOGGER.info("Wrong memory size given, "
           + "so setting default value to " + size);
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
index c63b320..67bb6cc 100644
--- a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
@@ -75,10 +75,10 @@ public class UnsafeSortMemoryManager {
     long size;
     try {
       size = Long.parseLong(CarbonProperties.getInstance()
-          .getProperty(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB,
-              CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT));
+          .getProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB,
+              CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT));
     } catch (Exception e) {
-      size = Long.parseLong(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+      size = Long.parseLong(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT);
       LOGGER.info("Wrong memory size given, " + "so setting default value to " + size);
     }
     if (size < 1024) {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
index 39a0b80..3dc7b8f 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
@@ -223,6 +223,9 @@ public final class CarbonProperties {
     validateSortIntermediateFilesLimit();
     validateEnableAutoHandoff();
     validateSchedulerMinRegisteredRatio();
+    validateSortMemorySizeInMB();
+    validateWorkingMemory();
+    validateSortStorageMemory();
   }
 
   /**
@@ -1252,4 +1255,100 @@ public final class CarbonProperties {
   public void addPropertyToPropertySet(Set<String> externalPropertySet) {
     propertySet.addAll(externalPropertySet);
   }
+
+  private void validateSortMemorySizeInMB() {
+    int sortMemorySizeInMBDefault =
+        Integer.parseInt(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+    int sortMemorySizeInMB = 0;
+    try {
+      sortMemorySizeInMB = Integer.parseInt(
+          carbonProperties.getProperty(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB));
+    } catch (NumberFormatException e) {
+      LOGGER.error(
+          "The specified value for property " + CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB
+              + "is Invalid." + " Taking the default value."
+              + CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+      sortMemorySizeInMB = sortMemorySizeInMBDefault;
+    }
+    if (sortMemorySizeInMB < sortMemorySizeInMBDefault) {
+      LOGGER.error(
+          "The specified value for property " + CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB
+              + "is less than default value." + ". Taking the default value."
+              + CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+      sortMemorySizeInMB = sortMemorySizeInMBDefault;
+    }
+    String unsafeWorkingMemoryString =
+        carbonProperties.getProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB);
+    String unsafeSortStorageMemoryString =
+        carbonProperties.getProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB);
+    int workingMemory = 512;
+    int sortStorageMemory;
+    if (null == unsafeWorkingMemoryString && null == unsafeSortStorageMemoryString) {
+      workingMemory = workingMemory > ((sortMemorySizeInMB * 20) / 100) ?
+          workingMemory :
+          ((sortMemorySizeInMB * 20) / 100);
+      sortStorageMemory = sortMemorySizeInMB - workingMemory;
+      carbonProperties
+          .setProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB, workingMemory + "");
+      carbonProperties.setProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB,
+          sortStorageMemory + "");
+    } else if (null != unsafeWorkingMemoryString && null == unsafeSortStorageMemoryString) {
+      carbonProperties.setProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB,
+          sortMemorySizeInMB + "");
+    } else if (null == unsafeWorkingMemoryString && null != unsafeSortStorageMemoryString) {
+      carbonProperties
+          .setProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB, sortMemorySizeInMB + "");
+    }
+  }
+
+  private void validateWorkingMemory() {
+    int unsafeWorkingMemoryDefault =
+        Integer.parseInt(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT);
+    int unsafeWorkingMemory = 0;
+    try {
+      unsafeWorkingMemory = Integer.parseInt(
+          carbonProperties.getProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB));
+    } catch (NumberFormatException e) {
+      LOGGER.error("The specified value for property "
+          + CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT + "is invalid."
+          + " Taking the default value."
+          + CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT);
+      unsafeWorkingMemory = unsafeWorkingMemoryDefault;
+    }
+    if (unsafeWorkingMemory < unsafeWorkingMemoryDefault) {
+      LOGGER.error("The specified value for property "
+          + CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT
+          + "is less than the default value." + ". Taking the default value."
+          + CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT);
+      unsafeWorkingMemory = unsafeWorkingMemoryDefault;
+    }
+    carbonProperties
+        .setProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB, unsafeWorkingMemory + "");
+  }
+
+  private void validateSortStorageMemory() {
+    int unsafeSortStorageMemoryDefault =
+        Integer.parseInt(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT);
+    int unsafeSortStorageMemory = 0;
+    try {
+      unsafeSortStorageMemory = Integer.parseInt(carbonProperties
+          .getProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB));
+    } catch (NumberFormatException e) {
+      LOGGER.error("The specified value for property "
+          + CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB + "is invalid."
+          + " Taking the default value."
+          + CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT);
+      unsafeSortStorageMemory = unsafeSortStorageMemoryDefault;
+    }
+    if (unsafeSortStorageMemory < unsafeSortStorageMemoryDefault) {
+      LOGGER.error("The specified value for property "
+          + CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB
+          + "is less than the default value." + " Taking the default value."
+          + CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT);
+      unsafeSortStorageMemory = unsafeSortStorageMemoryDefault;
+    }
+    carbonProperties.setProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB,
+        unsafeSortStorageMemory + "");
+  }
+
 }


[20/50] [abbrv] carbondata git commit: [CARBONDATA-2116] Documentation for CTAS

Posted by ra...@apache.org.
[CARBONDATA-2116] Documentation for CTAS

Added the documentation for CTAS
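
For illustration, a hedged usage sketch (the `carbon` session value and both table names are hypothetical); the canonical syntax and examples are in the documentation diff below:

  // Assumes a CarbonSession-enabled SparkSession named `carbon` and an existing
  // Parquet source table `parquet_src`; both names are illustrative.
  carbon.sql(
    """
      | CREATE TABLE IF NOT EXISTS ctas_example
      | STORED BY 'carbondata'
      | AS SELECT id, name FROM parquet_src
    """.stripMargin)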

This closes #1906


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1b224a4a
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1b224a4a
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1b224a4a

Branch: refs/heads/branch-1.3
Commit: 1b224a4a971597c36e931eb8e17ccbd24cea642e
Parents: a3638ad
Author: sgururajshetty <sg...@gmail.com>
Authored: Thu Feb 1 20:04:54 2018 +0530
Committer: manishgupta88 <to...@gmail.com>
Committed: Fri Feb 2 11:48:04 2018 +0530

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1b224a4a/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index d7954e1..3119935 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -144,7 +144,18 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 				   'streaming'='true',
                    'ALLOWED_COMPACTION_DAYS'='5')
    ```
-        
+
+## CREATE TABLE As SELECT
+  This function allows you to create a Carbon table from any Parquet/Hive/Carbon table. It is beneficial when the user wants to create a Carbon table from another Parquet/Hive table and then use the Carbon query engine, achieving better query performance in cases where Carbon is faster than the other file formats. This feature can also be used for backing up the data.
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name STORED BY 'carbondata' [TBLPROPERTIES (key1=val1, key2=val2, ...)] AS select_statement;
+  ```
+
+### Examples
+  ```
+  CREATE TABLE ctas_select_parquet STORED BY 'carbondata' as select * from parquet_ctas_test;
+  ```
+   
 ## TABLE MANAGEMENT  
 
 ### SHOW TABLE


[04/50] [abbrv] carbondata git commit: [CARBONDATA-2089]SQL exception is masked due to assert(false) inside try catch and exception block always asserting true

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
index dab6e41..c57bd04 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
@@ -21,9 +21,9 @@ package org.apache.carbondata.cluster.sdv.generated
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util._
 import org.scalatest.BeforeAndAfterAll
-
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.spark.sql.test.TestQueryExecutor
 
 /**
  * Test Class for singlepassTestCase to verify all scenerios
@@ -55,80 +55,51 @@ class SinglepassTestCase extends QueryTest with BeforeAndAfterAll {
 
   //To check data loading from CSV with incomplete data
   test("Loading-004-01-01-01_001-TC_003", Include) {
-    try {
+    intercept[Exception] {
      sql(s"""drop table if exists uniqdata""").collect
    sql(s"""CREATE TABLE if not exists uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check data loading from CSV with bad records
   test("Loading-004-01-01-01_001-TC_004", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_badrec.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check data loading from CSV with no data
   test("Loading-004-01-01-01_001-TC_005", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_nodata.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check data loading from CSV with incomplete data
   test("Loading-004-01-01-01_001-TC_006", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check data loading from CSV with wrong data
   test("Loading-004-01-01-01_001-TC_007", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
   //To check data loading from CSV with no data and 'SINGLEPASS' = 'FALSE'
   test("Loading-004-01-01-01_001-TC_008", Include) {
-    try {
-
+    intercept[Exception] {
       sql(s"""LOAD DATA INPATH '$resourcesPath/Data/singlepass/2000_UniqData_nodata.csv.csv' INTO TABLE uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 'FILEHEADER'= 'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-      assert(false)
-    } catch {
-      case _ => assert(true)
     }
-
   }
 
 
@@ -555,22 +526,35 @@ class SinglepassTestCase extends QueryTest with BeforeAndAfterAll {
   //Verifying load data with single Pass true and BAD_RECORDS_ACTION= ='FAIL
   test("Loading-004-01-01-01_001-TC_067", Include) {
     sql(s"""drop table if exists uniqdata""").collect
-    try {
-
-      sql(s"""CREATE TABLE if not exists uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""")
+    intercept[Exception] {
+      sql(s"""
+             | CREATE TABLE uniqdata(
+             | shortField SHORT,
+             | booleanField BOOLEAN,
+             | intField INT,
+             | bigintField LONG,
+             | doubleField DOUBLE,
+             | stringField STRING,
+             | decimalField DECIMAL(18,2),
+             | charField CHAR(5),
+             | floatField FLOAT,
+             | complexData ARRAY<STRING>,
+             | booleanField2 BOOLEAN
+             | )
+             | STORED BY 'carbondata'
+       """.stripMargin)
 
         .collect
 
 
-      sql(s"""LOAD DATA INPATH  '$resourcesPath/Data/singlepass/data/2000_UniqData.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 'BAD_RECORDS_ACTION'='FAIL','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_Pass'='true')""")
+      sql(
+        s"""LOAD DATA INPATH  '${TestQueryExecutor
+          .integrationPath}/spark2/src/test/resources/bool/supportBooleanBadRecords.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 'BAD_RECORDS_ACTION'='FAIL','FILEHEADER'='shortField,booleanField,intField,bigintField,doubleField,stringField,timestampField,decimalField,dateField,charField,floatField,complexData,booleanField2','SINGLE_Pass'='true')""".stripMargin)
         .collect
       checkAnswer(
         s"""select count(*) from uniqdata""",
         Seq(Row(2013)),
         "singlepassTestCase_Loading-004-01-01-01_001-TC_067")
-      assert(false)
-  } catch {
-    case _ => assert(true)
   }
      sql(s"""drop table uniqdata""").collect
   }
@@ -578,6 +562,7 @@ class SinglepassTestCase extends QueryTest with BeforeAndAfterAll {
 
   //Verifying load data with single Pass true and BAD_RECORDS_ACTION= ='REDIRECT'
   test("Loading-004-01-01-01_001-TC_071", Include) {
+    sql(s"""drop table if exists uniqdata""").collect
      sql(s"""CREATE TABLE if not exists uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'""").collect
 
 
@@ -717,7 +702,7 @@ class SinglepassTestCase extends QueryTest with BeforeAndAfterAll {
   //Verifying load data with single pass=false and column dictionary path
   test("Loading-004-01-01-01_001-TC_084", Include) {
     dropTable("uniqdata")
-    try {
+    intercept[Exception] {
       sql(s"""CREATE TABLE if not exists uniqdata (CUST_ID int,CUST_NAME String, DOB timestamp) STORED BY 'org.apache.carbondata.format'""")
 
         .collect
@@ -727,9 +712,6 @@ class SinglepassTestCase extends QueryTest with BeforeAndAfterAll {
         s"""select count(*) from uniqdata""",
         Seq(Row(10)),
         "singlepassTestCase_Loading-004-01-01-01_001-TC_084")
-      assert(false)
-  } catch {
-      case _ => assert(true)
     }
      sql(s"""drop table uniqdata""").collect
   }


[37/50] [abbrv] carbondata git commit: [CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and other command

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
index f38304e..13d6274 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
@@ -22,8 +22,7 @@ import scala.collection.JavaConverters._
 import org.apache.spark.sql.{CarbonEnv, Row, SparkSession, _}
 import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 import org.apache.spark.sql.execution.SQLExecution.EXECUTION_ID_KEY
-import org.apache.spark.sql.execution.command.{Field, MetadataCommand, TableModel, TableNewProcessor}
-import org.apache.spark.sql.util.CarbonException
+import org.apache.spark.sql.execution.command.MetadataCommand
 
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
@@ -79,7 +78,7 @@ case class CarbonCreateTableCommand(
       }
 
       if (tableInfo.getFactTable.getListOfColumns.size <= 0) {
-        CarbonException.analysisException("Table should have at least one column.")
+        throwMetadataException(dbName, tableName, "Table should have at least one column.")
       }
 
       val operationContext = new OperationContext
@@ -125,7 +124,7 @@ case class CarbonCreateTableCommand(
             val msg = s"Create table'$tableName' in database '$dbName' failed"
             LOGGER.audit(msg.concat(", ").concat(e.getMessage))
             LOGGER.error(e, msg)
-            CarbonException.analysisException(msg.concat(", ").concat(e.getMessage))
+            throwMetadataException(dbName, tableName, msg)
         }
       }
       val createTablePostExecutionEvent: CreateTablePostExecutionEvent =

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
index 9c0eb57..7c895ab 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
@@ -20,11 +20,10 @@ package org.apache.spark.sql.execution.command.table
 import scala.collection.JavaConverters._
 import scala.collection.mutable.ListBuffer
 
-import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row, SparkSession}
+import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 import org.apache.spark.sql.execution.command.AtomicRunnableCommand
-import org.apache.spark.sql.util.CarbonException
 
 import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
 import org.apache.carbondata.core.cache.dictionary.ManageDictionaryAndBTree
@@ -34,6 +33,7 @@ import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.core.util.CarbonUtil
 import org.apache.carbondata.events._
+import org.apache.carbondata.spark.exception.{ConcurrentOperationException, ProcessMetaDataException}
 
 case class CarbonDropTableCommand(
     ifExistsSet: Boolean,
@@ -55,8 +55,11 @@ case class CarbonDropTableCommand(
       locksToBeAcquired foreach {
         lock => carbonLocks += CarbonLockUtil.getLockObject(identifier, lock)
       }
-      LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
       carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
+      if (SegmentStatusManager.isLoadInProgressInTable(carbonTable)) {
+        throw new ConcurrentOperationException(carbonTable, "loading", "drop table")
+      }
+      LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
       if (carbonTable.isStreamingTable) {
         // streaming table should acquire streaming.lock
         carbonLocks += CarbonLockUtil.getLockObject(identifier, LockUsage.STREAMING_LOCK)
@@ -65,8 +68,9 @@ case class CarbonDropTableCommand(
       if (relationIdentifiers != null && !relationIdentifiers.isEmpty) {
         if (!dropChildTable) {
           if (!ifExistsSet) {
-            throw new Exception("Child table which is associated with datamap cannot " +
-                                "be dropped, use DROP DATAMAP command to drop")
+            throwMetadataException(dbName, tableName,
+              "Child table which is associated with datamap cannot be dropped, " +
+              "use DROP DATAMAP command to drop")
           } else {
             return Seq.empty
           }
@@ -79,10 +83,7 @@ case class CarbonDropTableCommand(
           ifExistsSet,
           sparkSession)
       OperationListenerBus.getInstance.fireEvent(dropTablePreEvent, operationContext)
-      if (SegmentStatusManager.checkIfAnyLoadInProgressForTable(carbonTable)) {
-        throw new AnalysisException(s"Data loading is in progress for table $tableName, drop " +
-                                    s"table operation is not allowed")
-      }
+
       CarbonEnv.getInstance(sparkSession).carbonMetastore.dropTable(identifier)(sparkSession)
 
       if (carbonTable.hasDataMapSchema) {
@@ -122,10 +123,12 @@ case class CarbonDropTableCommand(
         if (!ifExistsSet) {
           throw ex
         }
+      case ex: ConcurrentOperationException =>
+        throw ex
       case ex: Exception =>
-        LOGGER.error(ex, s"Dropping table $dbName.$tableName failed")
-        CarbonException.analysisException(
-          s"Dropping table $dbName.$tableName failed: ${ ex.getMessage }")
+        val msg = s"Dropping table $dbName.$tableName failed: ${ex.getMessage}"
+        LOGGER.error(ex, msg)
+        throwMetadataException(dbName, tableName, msg)
     } finally {
       if (carbonLocks.nonEmpty) {
         val unlocked = carbonLocks.forall(_.unlock())

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
index 44204d4..e1e41dc 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
@@ -34,7 +34,8 @@ import org.scalatest.BeforeAndAfterAll
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.statusmanager.{FileFormat, SegmentStatus}
 import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
-import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.exception.{MalformedCarbonCommandException, ProcessMetaDataException}
+import org.apache.carbondata.streaming.CarbonStreamException
 
 class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
 
@@ -201,13 +202,10 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
       val future = pool.submit(thread2)
       Thread.sleep(1000)
       thread1.interrupt()
-      try {
+      val msg = intercept[Exception] {
         future.get()
-        assert(false)
-      } catch {
-        case ex =>
-          assert(ex.getMessage.contains("is not a streaming table"))
       }
+      assert(msg.getMessage.contains("is not a streaming table"))
     } finally {
       if (server != null) {
         server.close()
@@ -655,10 +653,10 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
       thread1.start()
       thread2.start()
       Thread.sleep(1000)
-      val msg = intercept[Exception] {
+      val msg = intercept[ProcessMetaDataException] {
         sql(s"drop table streaming.stream_table_drop")
       }
-      assertResult("Dropping table streaming.stream_table_drop failed: Acquire table lock failed after retry, please try after some time;")(msg.getMessage)
+      assert(msg.getMessage.contains("Dropping table streaming.stream_table_drop failed: Acquire table lock failed after retry, please try after some time"))
       thread1.interrupt()
       thread2.interrupt()
     } finally {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/register/TestRegisterCarbonTable.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/register/TestRegisterCarbonTable.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/register/TestRegisterCarbonTable.scala
index 389f2cd..fe7df23 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/register/TestRegisterCarbonTable.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/register/TestRegisterCarbonTable.scala
@@ -24,6 +24,7 @@ import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row}
 import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
 
 /**
  *
@@ -150,7 +151,7 @@ class TestRegisterCarbonTable extends QueryTest with BeforeAndAfterAll {
     sql("drop table carbontable")
     if (!CarbonEnv.getInstance(sqlContext.sparkSession).carbonMetastore.isReadFromHiveMetaStore) {
       restoreData(dblocation, "carbontable")
-      intercept[AnalysisException] {
+      intercept[ProcessMetaDataException] {
         sql("refresh table carbontable")
       }
       restoreData(dblocation, "carbontable_preagg1")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableRevertTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableRevertTestCase.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableRevertTestCase.scala
index 9a6efbe..4d5f88c 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableRevertTestCase.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableRevertTestCase.scala
@@ -24,7 +24,9 @@ import org.apache.spark.sql.common.util.Spark2QueryTest
 import org.apache.spark.sql.test.TestQueryExecutor
 import org.apache.spark.util.AlterTableUtil
 import org.scalatest.BeforeAndAfterAll
+
 import org.apache.carbondata.core.metadata.CarbonMetadata
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
 
 class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
 
@@ -38,7 +40,7 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
   }
 
   test("test to revert new added columns on failure") {
-    intercept[RuntimeException] {
+    intercept[ProcessMetaDataException] {
       hiveClient.runSqlHive("set hive.security.authorization.enabled=true")
       sql(
         "Alter table reverttest add columns(newField string) TBLPROPERTIES" +
@@ -51,7 +53,7 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
   }
 
   test("test to revert table name on failure") {
-    val exception = intercept[RuntimeException] {
+    val exception = intercept[ProcessMetaDataException] {
       new File(TestQueryExecutor.warehouse + "/reverttest_fail").mkdir()
       sql("alter table reverttest rename to reverttest_fail")
       new File(TestQueryExecutor.warehouse + "/reverttest_fail").delete()
@@ -62,7 +64,7 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
   }
 
   test("test to revert drop columns on failure") {
-    intercept[Exception] {
+    intercept[ProcessMetaDataException] {
       hiveClient.runSqlHive("set hive.security.authorization.enabled=true")
       sql("Alter table reverttest drop columns(decimalField)")
       hiveClient.runSqlHive("set hive.security.authorization.enabled=false")
@@ -71,7 +73,7 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
   }
 
   test("test to revert changed datatype on failure") {
-    intercept[Exception] {
+    intercept[ProcessMetaDataException] {
       hiveClient.runSqlHive("set hive.security.authorization.enabled=true")
       sql("Alter table reverttest change intField intfield bigint")
       hiveClient.runSqlHive("set hive.security.authorization.enabled=false")
@@ -81,7 +83,7 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
   }
 
   test("test to check if dictionary files are deleted for new column if query fails") {
-    intercept[RuntimeException] {
+    intercept[ProcessMetaDataException] {
       hiveClient.runSqlHive("set hive.security.authorization.enabled=true")
       sql(
         "Alter table reverttest add columns(newField string) TBLPROPERTIES" +
@@ -100,11 +102,12 @@ class AlterTableRevertTestCase extends Spark2QueryTest with BeforeAndAfterAll {
     val locks = AlterTableUtil
       .validateTableAndAcquireLock("default", "reverttest", List("meta.lock"))(sqlContext
         .sparkSession)
-    val exception = intercept[RuntimeException] {
+    val exception = intercept[ProcessMetaDataException] {
       sql("alter table reverttest rename to revert")
     }
     AlterTableUtil.releaseLocks(locks)
-    assert(exception.getMessage == "Alter table rename table operation failed: Acquire table lock failed after retry, please try after some time")
+    assert(exception.getMessage.contains(
+      "Alter table rename table operation failed: Acquire table lock failed after retry, please try after some time"))
   }
 
   override def afterAll() {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
index e89efdb..c88302d 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
@@ -27,6 +27,7 @@ import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
 
 class AlterTableValidationTestCase extends Spark2QueryTest with BeforeAndAfterAll {
 
@@ -337,13 +338,13 @@ class AlterTableValidationTestCase extends Spark2QueryTest with BeforeAndAfterAl
     checkExistence(sql("desc restructure"), true, "intfield", "bigint")
     sql("alter table default.restructure change decimalfield deciMalfield Decimal(11,3)")
     sql("alter table default.restructure change decimalfield deciMalfield Decimal(12,3)")
-    intercept[RuntimeException] {
+    intercept[ProcessMetaDataException] {
       sql("alter table default.restructure change decimalfield deciMalfield Decimal(12,3)")
     }
-    intercept[RuntimeException] {
+    intercept[ProcessMetaDataException] {
       sql("alter table default.restructure change decimalfield deciMalfield Decimal(13,1)")
     }
-    intercept[RuntimeException] {
+    intercept[ProcessMetaDataException] {
       sql("alter table default.restructure change decimalfield deciMalfield Decimal(13,5)")
     }
     sql("alter table default.restructure change decimalfield deciMalfield Decimal(13,4)")
@@ -516,10 +517,10 @@ class AlterTableValidationTestCase extends Spark2QueryTest with BeforeAndAfterAl
     sql(
       "create datamap preagg1 on table PreAggMain using 'preaggregate' as select" +
       " a,sum(b) from PreAggMain group by a")
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preAggmain_preagg1 rename to preagg2")
     }.getMessage.contains("Rename operation for pre-aggregate table is not supported."))
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preaggmain rename to preaggmain_new")
     }.getMessage.contains("Rename operation is not supported for table with pre-aggregate tables"))
     sql("drop table if exists preaggMain")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AddColumnTestCases.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AddColumnTestCases.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AddColumnTestCases.scala
index d36dd26..ac10b9a 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AddColumnTestCases.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AddColumnTestCases.scala
@@ -28,7 +28,7 @@ import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
-import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.exception.{MalformedCarbonCommandException, ProcessMetaDataException}
 
 class AddColumnTestCases extends Spark2QueryTest with BeforeAndAfterAll {
 
@@ -649,7 +649,7 @@ class AddColumnTestCases extends Spark2QueryTest with BeforeAndAfterAll {
     sql(
       "create datamap preagg1 on table PreAggMain using 'preaggregate' as select" +
       " a,sum(b) from PreAggMain group by a")
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preaggmain_preagg1 add columns(d string)")
     }.getMessage.contains("Cannot add columns"))
     sql("drop table if exists preaggMain")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/ChangeDataTypeTestCases.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/ChangeDataTypeTestCases.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/ChangeDataTypeTestCases.scala
index 0124716..f92d613 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/ChangeDataTypeTestCases.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/ChangeDataTypeTestCases.scala
@@ -23,6 +23,8 @@ import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util.Spark2QueryTest
 import org.scalatest.BeforeAndAfterAll
 
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
+
 class ChangeDataTypeTestCases extends Spark2QueryTest with BeforeAndAfterAll {
 
   override def beforeAll {
@@ -154,10 +156,10 @@ class ChangeDataTypeTestCases extends Spark2QueryTest with BeforeAndAfterAll {
     sql(
       "create datamap preagg1 on table PreAggMain using 'preaggregate' as select" +
       " a,sum(b) from PreAggMain group by a")
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preaggmain change a a long").show
     }.getMessage.contains("exists in a pre-aggregate table"))
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preaggmain_preagg1 change a a long").show
     }.getMessage.contains("Cannot change data type for columns in pre-aggregate table"))
     sql("drop table if exists preaggMain")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/DropColumnTestCases.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/DropColumnTestCases.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/DropColumnTestCases.scala
index 662d9d8..58c4821 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/DropColumnTestCases.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/DropColumnTestCases.scala
@@ -21,6 +21,8 @@ import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util.Spark2QueryTest
 import org.scalatest.BeforeAndAfterAll
 
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
+
 class DropColumnTestCases extends Spark2QueryTest with BeforeAndAfterAll {
 
   override def beforeAll {
@@ -103,7 +105,7 @@ class DropColumnTestCases extends Spark2QueryTest with BeforeAndAfterAll {
       " a,sum(b) from PreAggMain group by a")
     sql("alter table preaggmain drop columns(c)")
 //    checkExistence(sql("desc table preaggmain"), false, "c")
-    assert(intercept[RuntimeException] {
+    assert(intercept[ProcessMetaDataException] {
       sql("alter table preaggmain_preagg1 drop columns(preaggmain_b_sum)").show
     }.getMessage.contains("Cannot drop columns in pre-aggreagate table"))
     sql("drop table if exists preaggMain")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
index 3a83427..00f13a5 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
@@ -221,12 +221,12 @@ public final class CarbonLoaderUtil {
           // is triggered
           for (LoadMetadataDetails entry : listOfLoadFolderDetails) {
             if (entry.getSegmentStatus() == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS
-                && segmentStatusManager.checkIfValidLoadInProgress(
+                && SegmentStatusManager.isLoadInProgress(
                     absoluteTableIdentifier, entry.getLoadName())) {
               throw new RuntimeException("Already insert overwrite is in progress");
             } else if (newMetaEntry.getSegmentStatus() == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS
                 && entry.getSegmentStatus() == SegmentStatus.INSERT_IN_PROGRESS
-                && segmentStatusManager.checkIfValidLoadInProgress(
+                && SegmentStatusManager.isLoadInProgress(
                     absoluteTableIdentifier, entry.getLoadName())) {
               throw new RuntimeException("Already insert into or load is in progress");
             }


[35/50] [abbrv] carbondata git commit: [CARBONDATA-2117]Fixed Synchronization issue in CarbonEnv

Posted by ra...@apache.org.
[CARBONDATA-2117]Fixed Synchronization issue in CarbonEnv

Problem: When creating multiple sessions (100), session initialisation fails with the error below:

java.lang.IllegalArgumentException: requirement failed: Config entry enable.unsafe.sort already registered!

Solution: CarbonEnv currently updates the global (shared) configuration and the location configuration inside a class-level synchronized block. With multiple sessions a class-level lock is not sufficient, so a global-level lock is needed to ensure that only one thread updates the global configuration.
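
A minimal sketch of the difference (class and member names are illustrative, not the actual CarbonEnv code): an instance-level synchronized block uses a different monitor for every session's instance, whereas synchronizing on a companion-object member serializes all sessions on one shared monitor, which is what the diff below does with CarbonEnv.carbonEnvMap:

  class Env {
    def initGlobalConfig(): Unit = Env.sharedLock.synchronized {
      // only one thread across all sessions registers the shared config entry
      if (!Env.registeredKeys.contains("enable.unsafe.sort")) {
        Env.registeredKeys += "enable.unsafe.sort"
      }
    }
  }

  object Env {
    // one lock object shared by every Env instance, i.e. by every session
    val sharedLock = new Object
    val registeredKeys = scala.collection.mutable.Set[String]()
  }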

This closes #1908


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/44e70d08
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/44e70d08
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/44e70d08

Branch: refs/heads/branch-1.3
Commit: 44e70d08e0c73e2c65e9a0d147cbbbe965aaf9f7
Parents: 6fd778a
Author: kumarvishal <ku...@gmail.com>
Authored: Thu Feb 1 23:13:54 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Sat Feb 3 17:36:40 2018 +0800

----------------------------------------------------------------------
 .../spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala    | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/44e70d08/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 40035ce..6b12008 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -68,7 +68,8 @@ class CarbonEnv {
 
     // added for handling timeseries function like hour, minute, day , month , year
     sparkSession.udf.register("timeseries", new TimeSeriesFunction)
-    synchronized {
+    // acquiring global level lock so global configuration will be updated by only one thread
+    CarbonEnv.carbonEnvMap.synchronized {
       if (!initialized) {
         // update carbon session parameters , preserve thread parameters
         val currentThreadSesssionInfo = ThreadLocalSessionInfo.getCarbonSessionInfo


[48/50] [abbrv] carbondata git commit: [CARBONDATA-2122] Add validation for empty bad record path

Posted by ra...@apache.org.
[CARBONDATA-2122] Add validation for empty bad record path

A data load using bad records action REDIRECT with an empty bad records location should throw an Invalid Path exception.
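
For illustration, a hedged load sketch in the style of the new tests below (the table name, CSV path, and bad records path are hypothetical; `sql` is assumed to be the usual Spark test helper):

  // With BAD_RECORDS_ACTION=REDIRECT, a non-empty, existing bad records location
  // must be supplied, either via carbon.badRecords.location in carbon.properties
  // or through the BAD_RECORD_PATH load option as shown here.
  sql(
    s"""LOAD DATA LOCAL INPATH '/tmp/sample.csv' INTO TABLE sales
       |OPTIONS('BAD_RECORDS_ACTION'='REDIRECT',
       |        'BAD_RECORD_PATH'='/tmp/carbon/badrecords')""".stripMargin)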

This closes #1914


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a2a2d1b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a2a2d1b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a2a2d1b

Branch: refs/heads/branch-1.3
Commit: 4a2a2d1b74901f96efc4ecf9cc16e9804884b929
Parents: 50e2f2c
Author: Jatin <ja...@knoldus.in>
Authored: Fri Feb 2 19:55:16 2018 +0530
Committer: kunal642 <ku...@gmail.com>
Committed: Sun Feb 4 00:23:19 2018 +0530

----------------------------------------------------------------------
 .../apache/carbondata/core/util/CarbonUtil.java |  7 +-
 .../sdv/generated/AlterTableTestCase.scala      |  2 -
 .../sdv/generated/DataLoadingTestCase.scala     |  5 +-
 .../badrecordloger/BadRecordActionTest.scala    | 71 +++++++++++++++++++-
 .../badrecordloger/BadRecordEmptyDataTest.scala |  5 --
 .../badrecordloger/BadRecordLoggerTest.scala    |  5 --
 .../StandardPartitionBadRecordLoggerTest.scala  |  5 --
 .../carbondata/spark/util/DataLoadingUtil.scala |  2 +-
 .../spark/sql/test/TestQueryExecutor.scala      | 16 ++---
 .../BadRecordPathLoadOptionTest.scala           | 11 ++-
 .../DataLoadFailAllTypeSortTest.scala           | 28 +-------
 .../NumericDimensionBadRecordTest.scala         |  6 +-
 .../AlterTableValidationTestCase.scala          |  3 -
 .../carbon/datastore/BlockIndexStoreTest.java   |  2 -
 14 files changed, 93 insertions(+), 75 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index b62b77d..c208154 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -98,6 +98,7 @@ import com.google.gson.GsonBuilder;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -1891,7 +1892,11 @@ public final class CarbonUtil {
    * @return
    */
   public static boolean isValidBadStorePath(String badRecordsLocation) {
-    return !(null == badRecordsLocation || badRecordsLocation.length() == 0);
+    if (StringUtils.isEmpty(badRecordsLocation)) {
+      return false;
+    } else {
+      return isFileExists(checkAndAppendHDFSUrl(badRecordsLocation));
+    }
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index 8899f5c..4e53ea3 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -1016,8 +1016,6 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
     prop.addProperty("carbon.compaction.level.threshold", "2,1")
     prop.addProperty("carbon.enable.auto.load.merge", "false")
     prop.addProperty("carbon.bad.records.action", "FORCE")
-    prop.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-      TestQueryExecutor.warehouse+"/baaaaaaadrecords")
   }
 
   override def afterAll: Unit = {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
index 52396ee..24a5aa4 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
@@ -27,6 +27,7 @@ import org.apache.spark.sql.test.TestQueryExecutor
 import org.scalatest.BeforeAndAfterAll
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.util.CarbonProperties
 
 /**
@@ -1469,7 +1470,5 @@ class DataLoadingTestCase extends QueryTest with BeforeAndAfterAll {
 
   override protected def beforeAll(): Unit = {
     sql(s"""drop table if exists uniqdata""").collect
-    CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-      TestQueryExecutor.warehouse + "/baaaaaaadrecords")
-  }
+     }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordActionTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordActionTest.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordActionTest.scala
index 0249ddf..d85ee49 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordActionTest.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordActionTest.scala
@@ -1,21 +1,29 @@
 package org.apache.carbondata.spark.testsuite.badrecordloger
 
+import java.io.File
+
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.test.util.QueryTest
 import org.scalatest.BeforeAndAfterAll
 
+import org.apache.carbondata.common.constants.LoggerAction
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 
-class BadRecordActionTest extends QueryTest with BeforeAndAfterAll  {
+class BadRecordActionTest extends QueryTest with BeforeAndAfterAll {
 
 
   val csvFilePath = s"$resourcesPath/badrecords/datasample.csv"
+  def currentPath: String = new File(this.getClass.getResource("/").getPath + "../../")
+    .getCanonicalPath
+  val badRecordFilePath: File =new File(currentPath + "/target/test/badRecords")
 
   override def beforeAll = {
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
-
+    CarbonProperties.getInstance().addProperty(
+      CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, LoggerAction.FORCE.name())
+        badRecordFilePath.mkdirs()
     sql("drop table if exists sales")
   }
 
@@ -92,6 +100,65 @@ class BadRecordActionTest extends QueryTest with BeforeAndAfterAll  {
       Seq(Row(2)))
   }
 
+  test("test bad record REDIRECT but not having location should throw exception") {
+    sql("drop table if exists sales")
+    sql(
+      """CREATE TABLE IF NOT EXISTS sales(ID BigInt, date Timestamp, country String,
+          actual_price Double, Quantity int, sold_price Decimal(19,2)) STORED BY 'carbondata'""")
+    val exMessage = intercept[Exception] {
+      sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE sales OPTIONS" +
+          "('bad_records_action'='REDIRECT', 'DELIMITER'=" +
+          " ',', 'QUOTECHAR'= '\"', 'BAD_RECORD_PATH'='')")
+    }
+    assert(exMessage.getMessage.contains("Invalid bad records location."))
+  }
+
+  test("test bad record REDIRECT but not having empty location in option should throw exception") {
+    val badRecordLocation = CarbonProperties.getInstance()
+      .getProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC)
+    CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+      CarbonCommonConstants.CARBON_BADRECORDS_LOC_DEFAULT_VAL)
+    sql("drop table if exists sales")
+    try {
+      sql(
+        """CREATE TABLE IF NOT EXISTS sales(ID BigInt, date Timestamp, country String,
+          actual_price Double, Quantity int, sold_price Decimal(19,2)) STORED BY 'carbondata'""")
+      val exMessage = intercept[Exception] {
+        sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE sales OPTIONS" +
+            "('bad_records_action'='REDIRECT', 'DELIMITER'=" +
+            " ',', 'QUOTECHAR'= '\"')")
+      }
+      assert(exMessage.getMessage.contains("Invalid bad records location."))
+    }
+    finally {
+      CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        badRecordLocation)
+    }
+  }
+
+  test("test bad record is REDIRECT with location in carbon properties should pass") {
+    sql("drop table if exists sales")
+      sql(
+        """CREATE TABLE IF NOT EXISTS sales(ID BigInt, date Timestamp, country String,
+          actual_price Double, Quantity int, sold_price Decimal(19,2)) STORED BY 'carbondata'""")
+      sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE sales OPTIONS" +
+          "('bad_records_action'='REDIRECT', 'DELIMITER'=" +
+          " ',', 'QUOTECHAR'= '\"')")
+  }
+
+  test("test bad record is redirect with location in option while data loading should pass") {
+    sql("drop table if exists sales")
+         sql(
+        """CREATE TABLE IF NOT EXISTS sales(ID BigInt, date Timestamp, country String,
+          actual_price Double, Quantity int, sold_price Decimal(19,2)) STORED BY 'carbondata'""")
+      sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE sales OPTIONS" +
+          "('bad_records_action'='REDIRECT', 'DELIMITER'=" +
+          " ',', 'QUOTECHAR'= '\"', 'BAD_RECORD_PATH'='" + {badRecordFilePath.getCanonicalPath} +
+          "')")
+      checkAnswer(sql("select count(*) from sales"),
+        Seq(Row(2)))
+  }
+
   override def afterAll() = {
     sql("drop table if exists sales")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordEmptyDataTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordEmptyDataTest.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordEmptyDataTest.scala
index 4c6cc21..999fb6a 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordEmptyDataTest.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordEmptyDataTest.scala
@@ -49,11 +49,6 @@ class BadRecordEmptyDataTest extends QueryTest with BeforeAndAfterAll {
       sql("drop table IF EXISTS bigtab")
       CarbonProperties.getInstance().addProperty(
         CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, LoggerAction.FORCE.name())
-      CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       var csvFilePath = ""
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordLoggerTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordLoggerTest.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordLoggerTest.scala
index 797a972..694d25b 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordLoggerTest.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/badrecordloger/BadRecordLoggerTest.scala
@@ -56,11 +56,6 @@ class BadRecordLoggerTest extends QueryTest with BeforeAndAfterAll {
           actual_price Double, Quantity int, sold_price Decimal(19,2)) STORED BY 'carbondata'""")
 
       CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       var csvFilePath = s"$resourcesPath/badrecords/datasample.csv"
       sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE sales OPTIONS"

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionBadRecordLoggerTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionBadRecordLoggerTest.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionBadRecordLoggerTest.scala
index f916c5e..e44ccd6 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionBadRecordLoggerTest.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionBadRecordLoggerTest.scala
@@ -38,11 +38,6 @@ class StandardPartitionBadRecordLoggerTest extends QueryTest with BeforeAndAfter
   override def beforeAll {
     drop()
     CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-        new File("./target/test/badRecords")
-          .getCanonicalPath)
-
-    CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
index 8b4c232..3696e23 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
@@ -248,10 +248,10 @@ object DataLoadingUtil {
 
     if (bad_records_logger_enable.toBoolean ||
         LoggerAction.REDIRECT.name().equalsIgnoreCase(bad_records_action)) {
-      bad_record_path = CarbonUtil.checkAndAppendHDFSUrl(bad_record_path)
       if (!CarbonUtil.isValidBadStorePath(bad_record_path)) {
         CarbonException.analysisException("Invalid bad records location.")
       }
+      bad_record_path = CarbonUtil.checkAndAppendHDFSUrl(bad_record_path)
     }
     carbonLoadModel.setBadRecordsLocation(bad_record_path)
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/TestQueryExecutor.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/TestQueryExecutor.scala b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/TestQueryExecutor.scala
index 78214ae..f582145 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/TestQueryExecutor.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/TestQueryExecutor.scala
@@ -52,8 +52,6 @@ object TestQueryExecutor {
   val integrationPath = s"$projectPath/integration"
   val metastoredb = s"$integrationPath/spark-common/target"
   val location = s"$integrationPath/spark-common/target/dbpath"
-  val badStoreLocation = s"$integrationPath/spark-common/target/bad_store"
-  createDirectory(badStoreLocation)
   val masterUrl = {
     val property = System.getProperty("spark.master.url")
     if (property == null) {
@@ -62,13 +60,6 @@ object TestQueryExecutor {
       property
     }
   }
-  val badStorePath = s"$integrationPath/spark-common-test/target/badrecord";
-  try {
-    FileFactory.mkdirs(badStorePath, FileFactory.getFileType(badStorePath))
-  } catch {
-    case e : Exception =>
-      throw e;
-  }
   val hdfsUrl = {
     val property = System.getProperty("hdfs.url")
     if (property == null) {
@@ -106,6 +97,13 @@ object TestQueryExecutor {
     s"$integrationPath/spark-common/target/warehouse"
   }
 
+  val badStoreLocation = if (hdfsUrl.startsWith("hdfs://")) {
+       s"$hdfsUrl/bad_store_" + System.nanoTime()
+      } else {
+        s"$integrationPath/spark-common/target/bad_store"
+      }
+    createDirectory(badStoreLocation)
+
   val hiveresultpath = if (hdfsUrl.startsWith("hdfs://")) {
     val p = s"$hdfsUrl/hiveresultpath"
     FileFactory.mkdirs(p, FileFactory.getFileType(p))

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark2/src/test/scala/org/apache/spark/carbondata/BadRecordPathLoadOptionTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/BadRecordPathLoadOptionTest.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/BadRecordPathLoadOptionTest.scala
index 8bec6f6..a59ae67 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/BadRecordPathLoadOptionTest.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/BadRecordPathLoadOptionTest.scala
@@ -35,12 +35,10 @@ import org.apache.carbondata.core.util.CarbonProperties
  */
 class BadRecordPathLoadOptionTest extends Spark2QueryTest with BeforeAndAfterAll {
   var hiveContext: HiveContext = _
-  var badRecordPath: String = null
+
   override def beforeAll {
     try {
-       badRecordPath = new File("./target/test/badRecords")
-        .getCanonicalPath.replaceAll("\\\\","/")
-      sql("drop table IF EXISTS salestest")
+            sql("drop table IF EXISTS salestest")
     }
   }
 
@@ -51,7 +49,6 @@ class BadRecordPathLoadOptionTest extends Spark2QueryTest with BeforeAndAfterAll
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
     val csvFilePath = s"$resourcesPath/badrecords/datasample.csv"
-    sql(s"set ${CarbonLoadOptionConstants.CARBON_OPTIONS_BAD_RECORD_PATH}=${badRecordPath}")
     sql("LOAD DATA local inpath '" + csvFilePath + "' INTO TABLE salestest OPTIONS" +
         "('bad_records_logger_enable'='true','bad_records_action'='redirect', 'DELIMITER'=" +
         " ',', 'QUOTECHAR'= '\"')")
@@ -66,7 +63,9 @@ class BadRecordPathLoadOptionTest extends Spark2QueryTest with BeforeAndAfterAll
   }
 
   def isFilesWrittenAtBadStoreLocation: Boolean = {
-    val badStorePath = badRecordPath + "/default/salestest/0/0"
+    val badStorePath = CarbonProperties.getInstance()
+                         .getProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC) +
+                       "/default/salestest/0/0"
     val carbonFile: CarbonFile = FileFactory
       .getCarbonFile(badStorePath, FileFactory.getFileType(badStorePath))
     var exists: Boolean = carbonFile.exists()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark2/src/test/scala/org/apache/spark/carbondata/DataLoadFailAllTypeSortTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/DataLoadFailAllTypeSortTest.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/DataLoadFailAllTypeSortTest.scala
index 48519fd..121150c 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/DataLoadFailAllTypeSortTest.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/DataLoadFailAllTypeSortTest.scala
@@ -46,10 +46,6 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
   test("dataload with parallel merge with bad_records_action='FAIL'") {
     try {
       CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "FAIL");
@@ -76,10 +72,6 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
   test("dataload with ENABLE_UNSAFE_SORT='true' with bad_records_action='FAIL'") {
     try {
       CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.ENABLE_UNSAFE_SORT, "true");
@@ -109,11 +101,7 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
 
   test("dataload with LOAD_USE_BATCH_SORT='true' with bad_records_action='FAIL'") {
     try {
-      CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
+        CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.LOAD_SORT_SCOPE, "batch_sort")
@@ -143,10 +131,6 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
   test("dataload with LOAD_USE_BATCH_SORT='true' with bad_records_action='FORCE'") {
     try {
       CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.LOAD_SORT_SCOPE, "BATCH_SORT")
@@ -177,11 +161,7 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
 
   test("dataload with LOAD_USE_BATCH_SORT='true' with bad_records_action='REDIRECT'") {
     try {
-      CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
+        CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.LOAD_SORT_SCOPE, "BATCH_SORT")
@@ -211,10 +191,6 @@ class DataLoadFailAllTypeSortTest extends Spark2QueryTest with BeforeAndAfterAll
   test("dataload with table bucketing with bad_records_action='FAIL'") {
     try {
       CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "FAIL")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark2/src/test/scala/org/apache/spark/carbondata/datatype/NumericDimensionBadRecordTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/datatype/NumericDimensionBadRecordTest.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/datatype/NumericDimensionBadRecordTest.scala
index b1e0bde..44fea03 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/datatype/NumericDimensionBadRecordTest.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/datatype/NumericDimensionBadRecordTest.scala
@@ -43,11 +43,7 @@ class NumericDimensionBadRecordTest extends Spark2QueryTest with BeforeAndAfterA
       sql("drop table IF EXISTS floatDataType")
       sql("drop table IF EXISTS bigDecimalDataType")
       sql("drop table IF EXISTS stringDataType")
-      CarbonProperties.getInstance()
-        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-          new File("./target/test/badRecords")
-            .getCanonicalPath)
-      CarbonProperties.getInstance()
+       CarbonProperties.getInstance()
         .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
       var csvFilePath = ""
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
index c88302d..b62e3c9 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
@@ -32,9 +32,6 @@ import org.apache.carbondata.spark.exception.ProcessMetaDataException
 class AlterTableValidationTestCase extends Spark2QueryTest with BeforeAndAfterAll {
 
   override def beforeAll {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-        new File("./target/test/badRecords").getCanonicalPath)
 
     sql("drop table if exists restructure")
     sql("drop table if exists table1")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/processing/src/test/java/org/apache/carbondata/carbon/datastore/BlockIndexStoreTest.java
----------------------------------------------------------------------
diff --git a/processing/src/test/java/org/apache/carbondata/carbon/datastore/BlockIndexStoreTest.java b/processing/src/test/java/org/apache/carbondata/carbon/datastore/BlockIndexStoreTest.java
index 7925b35..63320ef 100644
--- a/processing/src/test/java/org/apache/carbondata/carbon/datastore/BlockIndexStoreTest.java
+++ b/processing/src/test/java/org/apache/carbondata/carbon/datastore/BlockIndexStoreTest.java
@@ -49,8 +49,6 @@ public class BlockIndexStoreTest extends TestCase {
           LogServiceFactory.getLogService(BlockIndexStoreTest.class.getName());
 
   @BeforeClass public void setUp() {
-    CarbonProperties.getInstance().
-        addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC, "/tmp/carbon/badrecords");
     StoreCreator.createCarbonStore();
     CarbonProperties.getInstance().
         addProperty(CarbonCommonConstants.CARBON_MAX_DRIVER_LRU_CACHE_SIZE, "10");


[03/50] [abbrv] carbondata git commit: [CARBONDATA-2021] Fix clean up issue when update operation is abruptly stopped

Posted by ra...@apache.org.
[CARBONDATA-2021] Fix clean up issue when update operation is abruptly stopped

When the delete step succeeds but the update fails while writing the status file, a stale carbondata file is left behind.
This change removes that file during the next clean up and excludes it when resolving files for a query.

This closes #1793
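A minimal sketch of the timestamp rule this change relies on (illustrative names only, not CarbonData's API; DeltaFile and the two timestamps stand in for the delta-file metadata and the block's committed update window): files stamped after the last committed update never made it into the status file and belong to an aborted update, so they are force-deleted, while files stamped before the committed start remain subject to the usual query-timeout check.

// Illustrative sketch only: classify a block's update delta files by comparing the
// timestamp encoded in each file name against the block's committed update window.
case class DeltaFile(name: String, timestamp: Long)

def classifyDeltaFiles(files: Seq[DeltaFile],
    deltaStartTimestamp: Long,
    deltaEndTimestamp: Long): (Seq[DeltaFile], Seq[DeltaFile]) = {
  // Written after the last committed update: the update never reached the status
  // file, so these are aborted (stale) files and can be force-deleted immediately.
  val aborted = files.filter(_.timestamp > deltaEndTimestamp)
  // Written before the committed start: superseded deltas, deleted only once the
  // query-execution timeout has expired.
  val superseded = files.filter(_.timestamp < deltaStartTimestamp)
  (aborted, superseded)
}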


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b2139cab
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/b2139cab
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/b2139cab

Branch: refs/heads/branch-1.3
Commit: b2139cabe8cdeb7c241e30a525d754578cfa5ec6
Parents: d90280a
Author: akashrn5 <ak...@gmail.com>
Authored: Wed Jan 10 20:29:43 2018 +0530
Committer: Jacky Li <ja...@qq.com>
Committed: Wed Jan 31 19:23:55 2018 +0800

----------------------------------------------------------------------
 .../core/mutate/CarbonUpdateUtil.java           | 64 ++++++++++++++++++--
 .../SegmentUpdateStatusManager.java             | 27 +++++++--
 .../apache/carbondata/core/util/CarbonUtil.java | 10 +++
 .../processing/util/CarbonLoaderUtil.java       |  6 +-
 4 files changed, 93 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/b2139cab/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java b/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
index f4566ac..0e4eec7 100644
--- a/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
@@ -427,6 +427,10 @@ public class CarbonUpdateUtil {
 
     String validUpdateStatusFile = "";
 
+    boolean isAbortedFile = true;
+
+    boolean isInvalidFile = false;
+
     // scan through each segment.
 
     for (LoadMetadataDetails segment : details) {
@@ -450,10 +454,14 @@ public class CarbonUpdateUtil {
         SegmentUpdateStatusManager updateStatusManager =
                 new SegmentUpdateStatusManager(table.getAbsoluteTableIdentifier());
 
+        // delete stale files left behind by an aborted update.
+        deleteStaleCarbonDataFiles(segment, allSegmentFiles, updateStatusManager);
+
         // get Invalid update  delta files.
         CarbonFile[] invalidUpdateDeltaFiles = updateStatusManager
-                .getUpdateDeltaFilesList(segment.getLoadName(), false,
-                        CarbonCommonConstants.UPDATE_DELTA_FILE_EXT, true, allSegmentFiles);
+            .getUpdateDeltaFilesList(segment.getLoadName(), false,
+                CarbonCommonConstants.UPDATE_DELTA_FILE_EXT, true, allSegmentFiles,
+                isInvalidFile);
 
         // now for each invalid delta file need to check the query execution time out
         // and then delete.
@@ -465,8 +473,9 @@ public class CarbonUpdateUtil {
 
         // do the same for the index files.
         CarbonFile[] invalidIndexFiles = updateStatusManager
-                .getUpdateDeltaFilesList(segment.getLoadName(), false,
-                        CarbonCommonConstants.UPDATE_INDEX_FILE_EXT, true, allSegmentFiles);
+            .getUpdateDeltaFilesList(segment.getLoadName(), false,
+                CarbonCommonConstants.UPDATE_INDEX_FILE_EXT, true, allSegmentFiles,
+                isInvalidFile);
 
         // now for each invalid index file need to check the query execution time out
         // and then delete.
@@ -492,11 +501,20 @@ public class CarbonUpdateUtil {
             continue;
           }
 
+          // aborted scenario.
+          invalidDeleteDeltaFiles = updateStatusManager
+              .getDeleteDeltaInvalidFilesList(segment.getLoadName(), block, false,
+                  allSegmentFiles, isAbortedFile);
+          for (CarbonFile invalidFile : invalidDeleteDeltaFiles) {
+            boolean doForceDelete = true;
+            compareTimestampsAndDelete(invalidFile, doForceDelete, false);
+          }
+
           // case 1
           if (CarbonUpdateUtil.isBlockInvalid(block.getSegmentStatus())) {
             completeListOfDeleteDeltaFiles = updateStatusManager
                     .getDeleteDeltaInvalidFilesList(segment.getLoadName(), block, true,
-                            allSegmentFiles);
+                            allSegmentFiles, isInvalidFile);
             for (CarbonFile invalidFile : completeListOfDeleteDeltaFiles) {
 
               compareTimestampsAndDelete(invalidFile, forceDelete, false);
@@ -518,7 +536,7 @@ public class CarbonUpdateUtil {
           } else {
             invalidDeleteDeltaFiles = updateStatusManager
                     .getDeleteDeltaInvalidFilesList(segment.getLoadName(), block, false,
-                            allSegmentFiles);
+                            allSegmentFiles, isInvalidFile);
             for (CarbonFile invalidFile : invalidDeleteDeltaFiles) {
 
               compareTimestampsAndDelete(invalidFile, forceDelete, false);
@@ -559,6 +577,40 @@ public class CarbonUpdateUtil {
   }
 
   /**
+   * Deletes all stale carbondata files during clean up before an update operation.
+   * One scenario: if an update operation is abruptly stopped before the table status is updated,
+   * then the carbondata file created during that update is stale, and it will be deleted by
+   * this function during the next update operation.
+   * @param segment
+   * @param allSegmentFiles
+   * @param updateStatusManager
+   */
+  private static void deleteStaleCarbonDataFiles(LoadMetadataDetails segment,
+      CarbonFile[] allSegmentFiles, SegmentUpdateStatusManager updateStatusManager) {
+    boolean doForceDelete = true;
+    boolean isAbortedFile = true;
+    CarbonFile[] invalidUpdateDeltaFiles = updateStatusManager
+        .getUpdateDeltaFilesList(segment.getLoadName(), false,
+            CarbonCommonConstants.UPDATE_DELTA_FILE_EXT, true, allSegmentFiles,
+            isAbortedFile);
+    // now for each invalid delta file need to check the query execution time out
+    // and then delete.
+    for (CarbonFile invalidFile : invalidUpdateDeltaFiles) {
+      compareTimestampsAndDelete(invalidFile, doForceDelete, false);
+    }
+    // do the same for the index files.
+    CarbonFile[] invalidIndexFiles = updateStatusManager
+        .getUpdateDeltaFilesList(segment.getLoadName(), false,
+            CarbonCommonConstants.UPDATE_INDEX_FILE_EXT, true, allSegmentFiles,
+            isAbortedFile);
+    // now for each invalid index file need to check the query execution time out
+    // and then delete.
+    for (CarbonFile invalidFile : invalidIndexFiles) {
+      compareTimestampsAndDelete(invalidFile, doForceDelete, false);
+    }
+  }
+
+  /**
    * This will tell whether the max query timeout has been expired or not.
    * @param fileTimestamp
    * @return

http://git-wip-us.apache.org/repos/asf/carbondata/blob/b2139cab/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java
index df7eedd..e0e7b70 100644
--- a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java
@@ -469,7 +469,7 @@ public class SegmentUpdateStatusManager {
    */
   public CarbonFile[] getUpdateDeltaFilesList(String segmentId, final boolean validUpdateFiles,
       final String fileExtension, final boolean excludeOriginalFact,
-      CarbonFile[] allFilesOfSegment) {
+      CarbonFile[] allFilesOfSegment, boolean isAbortedFile) {
 
     CarbonTablePath carbonTablePath = CarbonStorePath
         .getCarbonTablePath(absoluteTableIdentifier.getTablePath(),
@@ -528,7 +528,12 @@ public class SegmentUpdateStatusManager {
           }
         } else {
           // invalid cases.
-          if (Long.compare(timestamp, startTimeStampFinal) < 0) {
+          if (isAbortedFile) {
+            if (Long.compare(timestamp, endTimeStampFinal) > 0) {
+              listOfCarbonFiles.add(eachFile);
+            }
+          } else if (Long.compare(timestamp, startTimeStampFinal) < 0
+              || Long.compare(timestamp, endTimeStampFinal) > 0) {
             listOfCarbonFiles.add(eachFile);
           }
         }
@@ -934,11 +939,14 @@ public class SegmentUpdateStatusManager {
    */
   public CarbonFile[] getDeleteDeltaInvalidFilesList(final String segmentId,
       final SegmentUpdateDetails block, final boolean needCompleteList,
-      CarbonFile[] allSegmentFiles) {
+      CarbonFile[] allSegmentFiles, boolean isAbortedFile) {
 
     final long deltaStartTimestamp =
         getStartTimeOfDeltaFile(CarbonCommonConstants.DELETE_DELTA_FILE_EXT, block);
 
+    final long deltaEndTimestamp =
+        getEndTimeOfDeltaFile(CarbonCommonConstants.DELETE_DELTA_FILE_EXT, block);
+
     List<CarbonFile> files =
         new ArrayList<>(CarbonCommonConstants.DEFAULT_COLLECTION_SIZE);
 
@@ -956,9 +964,16 @@ public class SegmentUpdateStatusManager {
         long timestamp = CarbonUpdateUtil.getTimeStampAsLong(
             CarbonTablePath.DataFileUtil.getTimeStampFromDeleteDeltaFile(fileName));
 
-        if (block.getBlockName().equalsIgnoreCase(blkName) && (
-            Long.compare(timestamp, deltaStartTimestamp) < 0)) {
-          files.add(eachFile);
+        if (block.getBlockName().equalsIgnoreCase(blkName)) {
+
+          if (isAbortedFile) {
+            if (Long.compare(timestamp, deltaEndTimestamp) > 0) {
+              files.add(eachFile);
+            }
+          } else if (Long.compare(timestamp, deltaStartTimestamp) < 0
+              || Long.compare(timestamp, deltaEndTimestamp) > 0) {
+            files.add(eachFile);
+          }
         }
       }
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/b2139cab/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index 5d7a09f..600b1c9 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -1701,6 +1701,16 @@ public final class CarbonUtil {
               && blockTimeStamp < invalidBlockVOForSegmentId.getUpdateDeltaStartTimestamp()))) {
         return true;
       }
+      // aborted files case.
+      if (invalidBlockVOForSegmentId.getLatestUpdateTimestamp() != null
+          && blockTimeStamp > invalidBlockVOForSegmentId.getLatestUpdateTimestamp()) {
+        return true;
+      }
+      // the update delta start timestamp is empty for the first update, so compare against the fact timestamp.
+      if (null == invalidBlockVOForSegmentId.getUpdateDeltaStartTimestamp()
+          && blockTimeStamp > invalidBlockVOForSegmentId.getFactTimestamp()) {
+        return true;
+      }
     }
     return false;
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/b2139cab/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
index fdc2cc3..12fc5c1 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java
@@ -375,8 +375,10 @@ public final class CarbonLoaderUtil {
     }
 
     // reading the start time of data load.
-    long loadStartTime = CarbonUpdateUtil.readCurrentTime();
-    model.setFactTimeStamp(loadStartTime);
+    if (model.getFactTimeStamp() == 0) {
+      long loadStartTime = CarbonUpdateUtil.readCurrentTime();
+      model.setFactTimeStamp(loadStartTime);
+    }
     CarbonLoaderUtil
         .populateNewLoadMetaEntry(newLoadMetaEntry, status, model.getFactTimeStamp(), false);
     boolean entryAdded =


[19/50] [abbrv] carbondata git commit: [CARBONDATA-1626] Documentation for adding data size and index size to the table status file

Posted by ra...@apache.org.
[CARBONDATA-1626] Documentation for adding data size and index size to the table status file

Documents the parameter that adds data size and index size to the table status file.

This closes #1897
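As a hedged illustration (the property key is the one documented below; the call mirrors how CarbonProperties is used elsewhere in this branch), the size calculation can also be switched on programmatically:

import org.apache.carbondata.core.util.CarbonProperties

// Enable calculation of .carbondata/.carbonindex sizes so that every load updates
// the table status file and DESCRIBE FORMATTED can report the totals.
CarbonProperties.getInstance()
  .addProperty("carbon.enable.calculate.size", "true")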


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a3638adb
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a3638adb
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a3638adb

Branch: refs/heads/branch-1.3
Commit: a3638adbc392c9a12e7858f5a61427266a8937a1
Parents: 473bd31
Author: sgururajshetty <sg...@gmail.com>
Authored: Wed Jan 31 19:14:06 2018 +0530
Committer: manishgupta88 <to...@gmail.com>
Committed: Fri Feb 2 11:33:53 2018 +0530

----------------------------------------------------------------------
 docs/configuration-parameters.md | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a3638adb/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index fe207f2..b68a2d1 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -111,6 +111,8 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.tempstore.location | /opt/Carbon/TempStoreLoc | Temporary store location. By default it takes System.getProperty("java.io.tmpdir"). |
 | carbon.load.log.counter | 500000 | Data loading records count logger. |
 | carbon.skip.empty.line | false | Setting this property ignores the empty lines in the CSV file during the data load |
+| carbon.enable.calculate.size | true | **For Load Operation**: Setting this property to true calculates the size of the carbon data file (.carbondata) and carbon index file (.carbonindex) for every load and updates the table status file. **For Describe Formatted**: Setting this property to true calculates the total size of the carbon data files and carbon index files for the respective table and displays it in the DESCRIBE FORMATTED command. |
+
 
 
 * **Compaction Configuration**


[24/50] [abbrv] carbondata git commit: [CARBONDATA-2082] Timeseries pre-aggregate table should support blank spaces

Posted by ra...@apache.org.
[CARBONDATA-2082] Timeseries pre-aggregate table should support blank spaces

Timeseries pre-aggregate tables should support blank spaces in DMPROPERTIES, including the event_time key/value and the different granularity keys/values.

This closes #1902
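A small, illustrative sketch of the idea (the actual patch, shown below, trims only the EVENT_TIME value in CarbonCreateDataMapCommand and the granularity value in TimeSeriesUtil; trimDmProperties is a hypothetical helper, not part of the code base):

// Hypothetical helper: strip surrounding blanks from DMPROPERTIES keys and values
// before validation, so ' event_time '=' dataTime ' behaves like 'event_time'='dataTime'.
def trimDmProperties(dmProperties: Map[String, String]): Map[String, String] =
  dmProperties.map { case (key, value) => (key.trim, value.trim) }

// e.g. trimDmProperties(Map(" event_time " -> " dataTime ", "MONTH_GRANULARITY " -> " 1"))
//      returns Map("event_time" -> "dataTime", "MONTH_GRANULARITY" -> "1")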


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a9a0201b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a9a0201b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a9a0201b

Branch: refs/heads/branch-1.3
Commit: a9a0201b468505c79d1881607fb0673ee588d85a
Parents: d3b228f
Author: xubo245 <60...@qq.com>
Authored: Thu Feb 1 15:32:36 2018 +0800
Committer: kumarvishal <ku...@gmail.com>
Committed: Fri Feb 2 18:38:44 2018 +0530

----------------------------------------------------------------------
 .../timeseries/TestTimeSeriesCreateTable.scala  | 76 ++++++++++++++++++++
 .../datamap/CarbonCreateDataMapCommand.scala    | 17 +++--
 .../command/timeseries/TimeSeriesUtil.scala     | 11 ++-
 3 files changed, 92 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a9a0201b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
index b63fd53..f3bbcaf 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
@@ -368,6 +368,82 @@ class TestTimeSeriesCreateTable extends QueryTest with BeforeAndAfterAll {
     assert(e.getMessage.contains("identifier matching regex"))
   }
 
+  test("test timeseries create table 33: support event_time and granularity key with space") {
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+    sql(
+      s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+         |USING '$timeSeries'
+         |DMPROPERTIES (
+         |   ' event_time '='dataTime',
+         |   ' MONTH_GRANULARITY '='1')
+         |AS SELECT dataTime, SUM(age) FROM mainTable
+         |GROUP BY dataTime
+        """.stripMargin)
+    checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, "maintable_agg1_month")
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+
+  test("test timeseries create table 34: support event_time value with space") {
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+    sql(
+      s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+         |USING '$timeSeries'
+         |DMPROPERTIES (
+         |   'event_time '=' dataTime',
+         |   'MONTH_GRANULARITY '='1')
+         |AS SELECT dataTime, SUM(age) FROM mainTable
+         |GROUP BY dataTime
+        """.stripMargin)
+    checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, "maintable_agg1_month")
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+  test("test timeseries create table 35: support granularity value with space") {
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+    sql(
+      s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+         |USING '$timeSeries'
+         |DMPROPERTIES (
+         |   'event_time '='dataTime',
+         |   'MONTH_GRANULARITY '=' 1')
+         |AS SELECT dataTime, SUM(age) FROM mainTable
+         |GROUP BY dataTime
+        """.stripMargin)
+    checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, "maintable_agg1_month")
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+  test("test timeseries create table 36: support event_time and granularity value with space") {
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+    sql(
+      s"""
+         | CREATE DATAMAP agg1_month ON TABLE mainTable
+         | USING '$timeSeries'
+         | DMPROPERTIES (
+         |   'EVENT_TIME'='dataTime   ',
+         |   'MONTH_GRANULARITY'=' 1  ')
+         | AS SELECT dataTime, SUM(age) FROM mainTable
+         | GROUP BY dataTime
+        """.stripMargin)
+    checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, "maintable_agg1_month")
+  }
+
+  test("test timeseries create table 37:  unsupport event_time error value") {
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+    intercept[NullPointerException] {
+      sql(
+        s"""CREATE DATAMAP agg1_month ON TABLE mainTable USING '$timeSeries'
+           |DMPROPERTIES (
+           |   'event_time'='data Time',
+           |   'MONTH_GRANULARITY'='1')
+           |AS SELECT dataTime, SUM(age) FROM mainTable
+           |GROUP BY dataTime
+        """.stripMargin)
+    }
+    sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
   override def afterAll: Unit = {
     sql("DROP TABLE IF EXISTS mainTable")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/a9a0201b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
index da20ac5..242087e 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala
@@ -35,7 +35,7 @@ case class CarbonCreateDataMapCommand(
     dataMapName: String,
     tableIdentifier: TableIdentifier,
     dmClassName: String,
-    dmproperties: Map[String, String],
+    dmProperties: Map[String, String],
     queryString: Option[String],
     ifNotExistsSet: Boolean = false)
   extends AtomicRunnableCommand {
@@ -54,6 +54,12 @@ case class CarbonCreateDataMapCommand(
     val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
     val dbName = tableIdentifier.database.getOrElse("default")
     val tableName = tableIdentifier.table + "_" + dataMapName
+    val newDmProperties = if (dmProperties.get(TimeSeriesUtil.TIMESERIES_EVENTTIME).isDefined) {
+      dmProperties.updated(TimeSeriesUtil.TIMESERIES_EVENTTIME,
+        dmProperties.get(TimeSeriesUtil.TIMESERIES_EVENTTIME).get.trim)
+    } else {
+      dmProperties
+    }
 
     if (sparkSession.sessionState.catalog.listTables(dbName)
       .exists(_.table.equalsIgnoreCase(tableName))) {
@@ -66,12 +72,11 @@ case class CarbonCreateDataMapCommand(
       }
     } else if (dmClassName.equalsIgnoreCase(PREAGGREGATE.toString) ||
       dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
-      TimeSeriesUtil.validateTimeSeriesGranularity(dmproperties, dmClassName)
-
+      TimeSeriesUtil.validateTimeSeriesGranularity(newDmProperties, dmClassName)
       createPreAggregateTableCommands = if (dmClassName.equalsIgnoreCase(TIMESERIES.toString)) {
         val details = TimeSeriesUtil
-          .getTimeSeriesGranularityDetails(dmproperties, dmClassName)
-        val updatedDmProperties = dmproperties - details._1
+          .getTimeSeriesGranularityDetails(newDmProperties, dmClassName)
+        val updatedDmProperties = newDmProperties - details._1
         CreatePreAggregateTableCommand(dataMapName,
           tableIdentifier,
           dmClassName,
@@ -84,7 +89,7 @@ case class CarbonCreateDataMapCommand(
           dataMapName,
           tableIdentifier,
           dmClassName,
-          dmproperties,
+          newDmProperties,
           queryString.get,
           ifNotExistsSet = ifNotExistsSet)
       }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/a9a0201b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala
index 987d4fe..45767da 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala
@@ -46,7 +46,7 @@ object TimeSeriesUtil {
     if (!eventTime.isDefined) {
       throw new MalformedCarbonCommandException("event_time not defined in time series")
     } else {
-      val carbonColumn = parentTable.getColumnByName(parentTable.getTableName, eventTime.get)
+      val carbonColumn = parentTable.getColumnByName(parentTable.getTableName, eventTime.get.trim)
       if (carbonColumn.getDataType != DataTypes.TIMESTAMP) {
         throw new MalformedCarbonCommandException(
           "Timeseries event time is only supported on Timestamp " +
@@ -110,7 +110,7 @@ object TimeSeriesUtil {
     val defaultValue = "1"
     for (granularity <- Granularity.values()) {
       if (dmProperties.get(granularity.getName).isDefined &&
-        dmProperties.get(granularity.getName).get.equalsIgnoreCase(defaultValue)) {
+        dmProperties.get(granularity.getName).get.trim.equalsIgnoreCase(defaultValue)) {
         return (granularity.toString.toLowerCase, dmProperties.get(granularity.getName).get)
       }
     }
@@ -168,10 +168,9 @@ object TimeSeriesUtil {
   /**
    * Below method will be used to validate whether timeseries column present in
    * select statement or not
-   * @param fieldMapping
-   *                     fields from select plan
-   * @param timeSeriesColumn
-   *                         timeseries column name
+   *
+   * @param fieldMapping     fields from select plan
+   * @param timeSeriesColumn timeseries column name
    */
   def validateEventTimeColumnExitsInSelect(fieldMapping: scala.collection.mutable
   .LinkedHashMap[Field, DataMapField],


[32/50] [abbrv] carbondata git commit: [CARBONDATA-2093] Use small file feature of global sort to minimise the carbondata file count

Posted by ra...@apache.org.
[CARBONDATA-2093] Use small file feature of global sort to minimise the carbondata file count

This closes #1876
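The gist of the change is reusing the CSV split bin-packing of the global-sort loader for partitioned loads, so many small input files end up in few tasks and therefore few carbondata files. Below is a simplified, illustrative sketch of that packing (the real csvFileScanRDD, moved into DataLoadingUtil by this commit, builds Spark FilePartition/PartitionedFile objects and reads through CSVInputFormat):

// Illustrative sketch: greedily pack CSV splits into partitions of at most
// maxSplitBytes, charging openCostInBytes per file so tiny files still count.
case class Split(path: String, length: Long)

def packSplits(splits: Seq[Split],
    maxSplitBytes: Long,
    openCostInBytes: Long): Seq[Seq[Split]] = {
  val partitions = scala.collection.mutable.ArrayBuffer[Seq[Split]]()
  val current = scala.collection.mutable.ArrayBuffer[Split]()
  var currentSize = 0L

  def closePartition(): Unit = {
    if (current.nonEmpty) partitions += current.toList
    current.clear()
    currentSize = 0L
  }

  // Largest splits first, so many small CSV files are packed together
  // instead of each producing its own task and carbondata file.
  splits.sortBy(-_.length).foreach { split =>
    if (currentSize + split.length > maxSplitBytes) closePartition()
    currentSize += split.length + openCostInBytes
    current += split
  }
  closePartition()
  partitions
}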


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e527c059
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e527c059
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e527c059

Branch: refs/heads/branch-1.3
Commit: e527c059e81e58503d568c82a3e7ac822a8a5b47
Parents: 8875775
Author: ravipesala <ra...@gmail.com>
Authored: Sun Jan 28 20:37:21 2018 +0530
Committer: QiangCai <qi...@qq.com>
Committed: Sat Feb 3 16:36:30 2018 +0800

----------------------------------------------------------------------
 .../StandardPartitionTableLoadingTestCase.scala |  77 ++++++++++-
 .../load/DataLoadProcessBuilderOnSpark.scala    | 130 +------------------
 .../carbondata/spark/util/DataLoadingUtil.scala | 127 ++++++++++++++++++
 .../management/CarbonLoadDataCommand.scala      |  94 ++++++--------
 .../sort/sortdata/SortParameters.java           |   4 +
 5 files changed, 249 insertions(+), 183 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
index 16f252b..669d6e7 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
@@ -16,11 +16,12 @@
  */
 package org.apache.carbondata.spark.testsuite.standardpartition
 
-import java.io.{File, IOException}
+import java.io.{File, FileWriter, IOException}
 import java.util
 import java.util.concurrent.{Callable, ExecutorService, Executors}
 
 import org.apache.commons.io.FileUtils
+import org.apache.spark.sql.execution.BatchedDataSourceScanExec
 import org.apache.spark.sql.test.util.QueryTest
 import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row}
 import org.scalatest.BeforeAndAfterAll
@@ -30,7 +31,8 @@ import org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFi
 import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.metadata.CarbonMetadata
 import org.apache.carbondata.core.util.CarbonProperties
-import org.apache.carbondata.core.util.path.CarbonTablePath
+import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
+import org.apache.carbondata.spark.rdd.CarbonScanRDD
 
 class StandardPartitionTableLoadingTestCase extends QueryTest with BeforeAndAfterAll {
   var executorService: ExecutorService = _
@@ -409,6 +411,75 @@ class StandardPartitionTableLoadingTestCase extends QueryTest with BeforeAndAfte
       sql("select * from  casesensitivepartition where empno=17"))
   }
 
+  test("Partition LOAD with small files") {
+    sql("DROP TABLE IF EXISTS smallpartitionfiles")
+    sql(
+      """
+        | CREATE TABLE smallpartitionfiles(id INT, name STRING, age INT) PARTITIONED BY(city STRING)
+        | STORED BY 'org.apache.carbondata.format'
+      """.stripMargin)
+    val inputPath = new File("target/small_files").getCanonicalPath
+    val folder = new File(inputPath)
+    if (folder.exists()) {
+      FileUtils.deleteDirectory(folder)
+    }
+    folder.mkdir()
+    for (i <- 0 to 100) {
+      val file = s"$folder/file$i.csv"
+      val writer = new FileWriter(file)
+      writer.write("id,name,city,age\n")
+      writer.write(s"$i,name_$i,city_${i % 5},${ i % 100 }")
+      writer.close()
+    }
+    sql(s"LOAD DATA LOCAL INPATH '$inputPath' INTO TABLE smallpartitionfiles")
+    FileUtils.deleteDirectory(folder)
+    val carbonTable = CarbonMetadata.getInstance().getCarbonTable("default", "smallpartitionfiles")
+    val carbonTablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+    val segmentDir = carbonTablePath.getSegmentDir("0", "0")
+    assert(new File(segmentDir).listFiles().length < 50)
+  }
+
+  test("verify partition read with small files") {
+    try {
+      CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES)
+      sql("DROP TABLE IF EXISTS smallpartitionfilesread")
+      sql(
+        """
+          | CREATE TABLE smallpartitionfilesread(id INT, name STRING, age INT) PARTITIONED BY
+          | (city STRING)
+          | STORED BY 'org.apache.carbondata.format'
+        """.stripMargin)
+      val inputPath = new File("target/small_files").getCanonicalPath
+      val folder = new File(inputPath)
+      if (folder.exists()) {
+        FileUtils.deleteDirectory(folder)
+      }
+      folder.mkdir()
+      for (i <- 0 until 100) {
+        val file = s"$folder/file$i.csv"
+        val writer = new FileWriter(file)
+        writer.write("id,name,city,age\n")
+        writer.write(s"$i,name_$i,city_${ i },${ i % 100 }")
+        writer.close()
+      }
+      sql(s"LOAD DATA LOCAL INPATH '$inputPath' INTO TABLE smallpartitionfilesread")
+      FileUtils.deleteDirectory(folder)
+      val dataFrame = sql("select * from smallpartitionfilesread")
+      val scanRdd = dataFrame.queryExecution.sparkPlan.collect {
+        case b: BatchedDataSourceScanExec if b.rdd.isInstanceOf[CarbonScanRDD] => b.rdd
+          .asInstanceOf[CarbonScanRDD]
+      }.head
+      assert(scanRdd.getPartitions.length < 10)
+      assertResult(100)(dataFrame.count)
+    } finally {
+      CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TASK_DISTRIBUTION ,
+        CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_DEFAULT)
+    }
+  }
+
+
+
   def restoreData(dblocation: String, tableName: String) = {
     val destination = dblocation + CarbonCommonConstants.FILE_SEPARATOR + tableName
     val source = dblocation+ "_back" + CarbonCommonConstants.FILE_SEPARATOR + tableName
@@ -435,6 +506,8 @@ class StandardPartitionTableLoadingTestCase extends QueryTest with BeforeAndAfte
 
 
   override def afterAll = {
+    CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TASK_DISTRIBUTION ,
+      CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_DEFAULT)
     dropTable
     if (executorService != null && !executorService.isShutdown) {
       executorService.shutdownNow()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
index 781b484..8be70a9 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
@@ -17,26 +17,12 @@
 
 package org.apache.carbondata.spark.load
 
-import java.text.SimpleDateFormat
-import java.util.{Comparator, Date, Locale}
-
-import scala.collection.mutable.ArrayBuffer
+import java.util.Comparator
 
 import org.apache.hadoop.conf.Configuration
-import org.apache.hadoop.fs.Path
-import org.apache.hadoop.mapred.JobConf
-import org.apache.hadoop.mapreduce.{TaskAttemptID, TaskType}
-import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, FileSplit}
-import org.apache.hadoop.mapreduce.task.{JobContextImpl, TaskAttemptContextImpl}
 import org.apache.spark.TaskContext
-import org.apache.spark.deploy.SparkHadoopUtil
-import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.{DataFrame, SparkSession}
-import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
-import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.execution.command.ExecutionErrors
-import org.apache.spark.sql.execution.datasources.{FilePartition, FileScanRDD, PartitionedFile}
-import org.apache.spark.sql.util.SparkSQLUtil.sessionState
 import org.apache.spark.storage.StorageLevel
 
 import org.apache.carbondata.common.logging.LogServiceFactory
@@ -45,12 +31,10 @@ import org.apache.carbondata.core.datastore.row.CarbonRow
 import org.apache.carbondata.core.statusmanager.{LoadMetadataDetails, SegmentStatus}
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.processing.loading.{DataLoadProcessBuilder, FailureCauses}
-import org.apache.carbondata.processing.loading.csvinput.CSVInputFormat
 import org.apache.carbondata.processing.loading.model.CarbonLoadModel
 import org.apache.carbondata.processing.sort.sortdata.{NewRowComparator, NewRowComparatorForNormalDims, SortParameters}
 import org.apache.carbondata.processing.util.CarbonDataProcessorUtil
-import org.apache.carbondata.spark.rdd.SerializableConfiguration
-import org.apache.carbondata.spark.util.CommonUtil
+import org.apache.carbondata.spark.util.DataLoadingUtil
 
 /**
  * Use sortBy operator in spark to load the data
@@ -68,7 +52,7 @@ object DataLoadProcessBuilderOnSpark {
     } else {
       // input data from files
       val columnCount = model.getCsvHeaderColumns.length
-      csvFileScanRDD(sparkSession, model, hadoopConf)
+      DataLoadingUtil.csvFileScanRDD(sparkSession, model, hadoopConf)
         .map(DataLoadProcessorStepOnSpark.toStringArrayRow(_, columnCount))
     }
 
@@ -166,112 +150,4 @@ object DataLoadProcessBuilderOnSpark {
       Array((uniqueLoadStatusId, (loadMetadataDetails, executionErrors)))
     }
   }
-
-  /**
-   * creates a RDD that does reading of multiple CSV files
-   */
-  def csvFileScanRDD(
-      spark: SparkSession,
-      model: CarbonLoadModel,
-      hadoopConf: Configuration
-  ): RDD[InternalRow] = {
-    // 1. partition
-    val defaultMaxSplitBytes = sessionState(spark).conf.filesMaxPartitionBytes
-    val openCostInBytes = sessionState(spark).conf.filesOpenCostInBytes
-    val defaultParallelism = spark.sparkContext.defaultParallelism
-    CommonUtil.configureCSVInputFormat(hadoopConf, model)
-    hadoopConf.set(FileInputFormat.INPUT_DIR, model.getFactFilePath)
-    val jobConf = new JobConf(hadoopConf)
-    SparkHadoopUtil.get.addCredentials(jobConf)
-    val jobContext = new JobContextImpl(jobConf, null)
-    val inputFormat = new CSVInputFormat()
-    val rawSplits = inputFormat.getSplits(jobContext).toArray
-    val splitFiles = rawSplits.map { split =>
-      val fileSplit = split.asInstanceOf[FileSplit]
-      PartitionedFile(
-        InternalRow.empty,
-        fileSplit.getPath.toString,
-        fileSplit.getStart,
-        fileSplit.getLength,
-        fileSplit.getLocations)
-    }.sortBy(_.length)(implicitly[Ordering[Long]].reverse)
-    val totalBytes = splitFiles.map(_.length + openCostInBytes).sum
-    val bytesPerCore = totalBytes / defaultParallelism
-
-    val maxSplitBytes = Math.min(defaultMaxSplitBytes, Math.max(openCostInBytes, bytesPerCore))
-    LOGGER.info(s"Planning scan with bin packing, max size: $maxSplitBytes bytes, " +
-                s"open cost is considered as scanning $openCostInBytes bytes.")
-
-    val partitions = new ArrayBuffer[FilePartition]
-    val currentFiles = new ArrayBuffer[PartitionedFile]
-    var currentSize = 0L
-
-    def closePartition(): Unit = {
-      if (currentFiles.nonEmpty) {
-        val newPartition =
-          FilePartition(
-            partitions.size,
-            currentFiles.toArray.toSeq)
-        partitions += newPartition
-      }
-      currentFiles.clear()
-      currentSize = 0
-    }
-
-    splitFiles.foreach { file =>
-      if (currentSize + file.length > maxSplitBytes) {
-        closePartition()
-      }
-      // Add the given file to the current partition.
-      currentSize += file.length + openCostInBytes
-      currentFiles += file
-    }
-    closePartition()
-
-    // 2. read function
-    val serializableConfiguration = new SerializableConfiguration(jobConf)
-    val readFunction = new (PartitionedFile => Iterator[InternalRow]) with Serializable {
-      override def apply(file: PartitionedFile): Iterator[InternalRow] = {
-        new Iterator[InternalRow] {
-          val hadoopConf = serializableConfiguration.value
-          val jobTrackerId: String = {
-            val formatter = new SimpleDateFormat("yyyyMMddHHmmss", Locale.US)
-            formatter.format(new Date())
-          }
-          val attemptId = new TaskAttemptID(jobTrackerId, 0, TaskType.MAP, 0, 0)
-          val hadoopAttemptContext = new TaskAttemptContextImpl(hadoopConf, attemptId)
-          val inputSplit =
-            new FileSplit(new Path(file.filePath), file.start, file.length, file.locations)
-          var finished = false
-          val inputFormat = new CSVInputFormat()
-          val reader = inputFormat.createRecordReader(inputSplit, hadoopAttemptContext)
-          reader.initialize(inputSplit, hadoopAttemptContext)
-
-          override def hasNext: Boolean = {
-            if (!finished) {
-              if (reader != null) {
-                if (reader.nextKeyValue()) {
-                  true
-                } else {
-                  finished = true
-                  reader.close()
-                  false
-                }
-              } else {
-                finished = true
-                false
-              }
-            } else {
-              false
-            }
-          }
-
-          override def next(): InternalRow = {
-            new GenericInternalRow(reader.getCurrentValue.get().asInstanceOf[Array[Any]])
-          }
-        }
-      }
-    }
-    new FileScanRDD(spark, readFunction, partitions)
-  }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
index 5e9f7fe..8b4c232 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataLoadingUtil.scala
@@ -17,12 +17,28 @@
 
 package org.apache.carbondata.spark.util
 
+import java.text.SimpleDateFormat
+import java.util.{Date, Locale}
+
 import scala.collection.{immutable, mutable}
 import scala.collection.JavaConverters._
+import scala.collection.mutable.ArrayBuffer
 
 import org.apache.commons.lang3.StringUtils
 import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.mapred.JobConf
+import org.apache.hadoop.mapreduce.{TaskAttemptID, TaskType}
+import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, FileSplit}
+import org.apache.hadoop.mapreduce.task.{JobContextImpl, TaskAttemptContextImpl}
+import org.apache.spark.deploy.SparkHadoopUtil
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
+import org.apache.spark.sql.execution.datasources.{FilePartition, FileScanRDD, PartitionedFile}
 import org.apache.spark.sql.util.CarbonException
+import org.apache.spark.sql.util.SparkSQLUtil.sessionState
 
 import org.apache.carbondata.common.constants.LoggerAction
 import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
@@ -32,10 +48,13 @@ import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
 import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
 import org.apache.carbondata.processing.loading.constants.DataLoadProcessorConstants
+import org.apache.carbondata.processing.loading.csvinput.CSVInputFormat
 import org.apache.carbondata.processing.loading.model.{CarbonDataLoadSchema, CarbonLoadModel}
 import org.apache.carbondata.processing.util.{CarbonLoaderUtil, DeleteLoadFolders, TableOptionConstant}
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.load.DataLoadProcessBuilderOnSpark.LOGGER
 import org.apache.carbondata.spark.load.ValidateUtil
+import org.apache.carbondata.spark.rdd.SerializableConfiguration
 
 /**
  * the util object of data loading
@@ -403,4 +422,112 @@ object DataLoadingUtil {
     }
   }
 
+  /**
+   * Creates an RDD that reads multiple CSV files
+   */
+  def csvFileScanRDD(
+      spark: SparkSession,
+      model: CarbonLoadModel,
+      hadoopConf: Configuration
+  ): RDD[InternalRow] = {
+    // 1. partition
+    val defaultMaxSplitBytes = sessionState(spark).conf.filesMaxPartitionBytes
+    val openCostInBytes = sessionState(spark).conf.filesOpenCostInBytes
+    val defaultParallelism = spark.sparkContext.defaultParallelism
+    CommonUtil.configureCSVInputFormat(hadoopConf, model)
+    hadoopConf.set(FileInputFormat.INPUT_DIR, model.getFactFilePath)
+    val jobConf = new JobConf(hadoopConf)
+    SparkHadoopUtil.get.addCredentials(jobConf)
+    val jobContext = new JobContextImpl(jobConf, null)
+    val inputFormat = new CSVInputFormat()
+    val rawSplits = inputFormat.getSplits(jobContext).toArray
+    val splitFiles = rawSplits.map { split =>
+      val fileSplit = split.asInstanceOf[FileSplit]
+      PartitionedFile(
+        InternalRow.empty,
+        fileSplit.getPath.toString,
+        fileSplit.getStart,
+        fileSplit.getLength,
+        fileSplit.getLocations)
+    }.sortBy(_.length)(implicitly[Ordering[Long]].reverse)
+    val totalBytes = splitFiles.map(_.length + openCostInBytes).sum
+    val bytesPerCore = totalBytes / defaultParallelism
+
+    val maxSplitBytes = Math.min(defaultMaxSplitBytes, Math.max(openCostInBytes, bytesPerCore))
+    LOGGER.info(s"Planning scan with bin packing, max size: $maxSplitBytes bytes, " +
+                s"open cost is considered as scanning $openCostInBytes bytes.")
+
+    val partitions = new ArrayBuffer[FilePartition]
+    val currentFiles = new ArrayBuffer[PartitionedFile]
+    var currentSize = 0L
+
+    def closePartition(): Unit = {
+      if (currentFiles.nonEmpty) {
+        val newPartition =
+          FilePartition(
+            partitions.size,
+            currentFiles.toArray.toSeq)
+        partitions += newPartition
+      }
+      currentFiles.clear()
+      currentSize = 0
+    }
+
+    splitFiles.foreach { file =>
+      if (currentSize + file.length > maxSplitBytes) {
+        closePartition()
+      }
+      // Add the given file to the current partition.
+      currentSize += file.length + openCostInBytes
+      currentFiles += file
+    }
+    closePartition()
+
+    // 2. read function
+    val serializableConfiguration = new SerializableConfiguration(jobConf)
+    val readFunction = new (PartitionedFile => Iterator[InternalRow]) with Serializable {
+      override def apply(file: PartitionedFile): Iterator[InternalRow] = {
+        new Iterator[InternalRow] {
+          val hadoopConf = serializableConfiguration.value
+          val jobTrackerId: String = {
+            val formatter = new SimpleDateFormat("yyyyMMddHHmmss", Locale.US)
+            formatter.format(new Date())
+          }
+          val attemptId = new TaskAttemptID(jobTrackerId, 0, TaskType.MAP, 0, 0)
+          val hadoopAttemptContext = new TaskAttemptContextImpl(hadoopConf, attemptId)
+          val inputSplit =
+            new FileSplit(new Path(file.filePath), file.start, file.length, file.locations)
+          var finished = false
+          val inputFormat = new CSVInputFormat()
+          val reader = inputFormat.createRecordReader(inputSplit, hadoopAttemptContext)
+          reader.initialize(inputSplit, hadoopAttemptContext)
+
+          override def hasNext: Boolean = {
+            if (!finished) {
+              if (reader != null) {
+                if (reader.nextKeyValue()) {
+                  true
+                } else {
+                  finished = true
+                  reader.close()
+                  false
+                }
+              } else {
+                finished = true
+                false
+              }
+            } else {
+              false
+            }
+          }
+
+          override def next(): InternalRow = {
+            new GenericInternalRow(reader.getCurrentValue.get().asInstanceOf[Array[Any]])
+          }
+        }
+      }
+    }
+    new FileScanRDD(spark, readFunction, partitions)
+  }
+
 }
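
For orientation, the bin packing in csvFileScanRDD above caps each FilePartition at min(filesMaxPartitionBytes, max(filesOpenCostInBytes, totalBytes / defaultParallelism)), where the two conf values correspond to Spark's spark.sql.files.maxPartitionBytes and spark.sql.files.openCostInBytes. Below is a minimal sketch of that arithmetic with invented byte figures, purely for illustration; the real code additionally adds the open cost to every file's length before summing totalBytes.

import scala.math.{max, min}

// Illustrative only: mirrors the maxSplitBytes formula used by csvFileScanRDD above.
// The byte values in main() are invented for the example, not CarbonData or Spark defaults.
object SplitSizeSketch {
  def maxSplitBytes(
      defaultMaxSplitBytes: Long,
      openCostInBytes: Long,
      totalBytes: Long,
      defaultParallelism: Int): Long = {
    val bytesPerCore = totalBytes / defaultParallelism
    min(defaultMaxSplitBytes, max(openCostInBytes, bytesPerCore))
  }

  def main(args: Array[String]): Unit = {
    val mb = 1024L * 1024
    // 2 GB of CSV across 8 cores: bytesPerCore = 256 MB, so the 128 MB cap wins
    println(maxSplitBytes(128 * mb, 4 * mb, 2048 * mb, 8) / mb)   // 128
    // 64 MB of CSV across 8 cores: bytesPerCore = 8 MB drives smaller partitions
    println(maxSplitBytes(128 * mb, 4 * mb, 64 * mb, 8) / mb)     // 8
  }
}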

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
index 8e6c20e..7d49c11 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala
@@ -26,22 +26,18 @@ import scala.collection.mutable
 
 import org.apache.commons.lang3.StringUtils
 import org.apache.hadoop.conf.Configuration
-import org.apache.hadoop.io.NullWritable
-import org.apache.hadoop.mapred.JobConf
-import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-import org.apache.spark.deploy.SparkHadoopUtil
-import org.apache.spark.rdd.{NewHadoopRDD, RDD}
+import org.apache.spark.rdd.RDD
 import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}
 import org.apache.spark.sql._
 import org.apache.spark.sql.catalyst.{InternalRow, TableIdentifier}
 import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, UnresolvedAttribute}
 import org.apache.spark.sql.catalyst.catalog.CatalogTable
-import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}
+import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression, GenericInternalRow}
 import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
 import org.apache.spark.sql.execution.LogicalRDD
 import org.apache.spark.sql.execution.SQLExecution.EXECUTION_ID_KEY
 import org.apache.spark.sql.execution.command.{AtomicRunnableCommand, DataLoadTableFileMapping, UpdateTableModel}
-import org.apache.spark.sql.execution.datasources.{CarbonFileFormat, CatalogFileIndex, HadoopFsRelation, LogicalRelation}
+import org.apache.spark.sql.execution.datasources.{CarbonFileFormat, CatalogFileIndex, FindDataSourceTable, HadoopFsRelation, LogicalRelation}
 import org.apache.spark.sql.hive.CarbonRelation
 import org.apache.spark.sql.optimizer.CarbonFilters
 import org.apache.spark.sql.types.{StringType, StructField, StructType}
@@ -60,22 +56,21 @@ import org.apache.carbondata.core.metadata.schema.table.{CarbonTable, TableInfo}
 import org.apache.carbondata.core.mutate.{CarbonUpdateUtil, TupleIdEnum}
 import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
 import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
-import org.apache.carbondata.core.util.path.{CarbonStorePath}
+import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.events.{OperationContext, OperationListenerBus}
 import org.apache.carbondata.events.exception.PreEventException
 import org.apache.carbondata.hadoop.util.ObjectSerializationUtil
 import org.apache.carbondata.processing.exception.DataLoadingException
 import org.apache.carbondata.processing.loading.TableProcessingOperations
-import org.apache.carbondata.processing.loading.csvinput.{CSVInputFormat, StringArrayWritable}
 import org.apache.carbondata.processing.loading.events.LoadEvents.{LoadMetadataEvent, LoadTablePostExecutionEvent, LoadTablePreExecutionEvent}
-import org.apache.carbondata.processing.loading.exception.{NoRetryException}
-import org.apache.carbondata.processing.loading.model.{CarbonLoadModel}
+import org.apache.carbondata.processing.loading.exception.NoRetryException
+import org.apache.carbondata.processing.loading.model.CarbonLoadModel
 import org.apache.carbondata.processing.util.CarbonLoaderUtil
 import org.apache.carbondata.spark.dictionary.provider.SecureDictionaryServiceProvider
 import org.apache.carbondata.spark.dictionary.server.SecureDictionaryServer
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
 import org.apache.carbondata.spark.rdd.{CarbonDataRDDFactory, CarbonDropPartitionCommitRDD, CarbonDropPartitionRDD}
-import org.apache.carbondata.spark.util.{CarbonScalaUtil, CommonUtil, DataLoadingUtil, GlobalDictionaryUtil}
+import org.apache.carbondata.spark.util.{CarbonScalaUtil, DataLoadingUtil, GlobalDictionaryUtil}
 
 case class CarbonLoadDataCommand(
     databaseNameOp: Option[String],
@@ -95,6 +90,10 @@ case class CarbonLoadDataCommand(
 
   var table: CarbonTable = _
 
+  var logicalPartitionRelation: LogicalRelation = _
+
+  var sizeInBytes: Long = _
+
   override def processMetadata(sparkSession: SparkSession): Seq[Row] = {
     val LOGGER: LogService = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
     val dbName = CarbonEnv.getDatabaseName(databaseNameOp)(sparkSession)
@@ -113,6 +112,15 @@ case class CarbonLoadDataCommand(
         }
         relation.carbonTable
       }
+    if (table.isHivePartitionTable) {
+      logicalPartitionRelation =
+        new FindDataSourceTable(sparkSession).apply(
+          sparkSession.sessionState.catalog.lookupRelation(
+            TableIdentifier(tableName, databaseNameOp))).collect {
+          case l: LogicalRelation => l
+        }.head
+      sizeInBytes = logicalPartitionRelation.relation.sizeInBytes
+    }
     operationContext.setProperty("isOverwrite", isOverwriteTable)
     if(CarbonUtil.hasAggregationDataMap(table)) {
       val loadMetadataEvent = new LoadMetadataEvent(table, false)
@@ -500,20 +508,7 @@ case class CarbonLoadDataCommand(
       operationContext: OperationContext) = {
     val table = carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
     val identifier = TableIdentifier(table.getTableName, Some(table.getDatabaseName))
-    val logicalPlan =
-      sparkSession.sessionState.catalog.lookupRelation(
-        identifier)
-    val catalogTable: CatalogTable = logicalPlan.collect {
-      case l: LogicalRelation => l.catalogTable.get
-      case c // To make compatabile with spark 2.1 and 2.2 we need to compare classes
-        if c.getClass.getName.equals("org.apache.spark.sql.catalyst.catalog.CatalogRelation") ||
-            c.getClass.getName.equals("org.apache.spark.sql.catalyst.catalog.HiveTableRelation") ||
-            c.getClass.getName.equals(
-              "org.apache.spark.sql.catalyst.catalog.UnresolvedCatalogRelation") =>
-        CarbonReflectionUtils.getFieldOfCatalogTable(
-          "tableMeta",
-          c).asInstanceOf[CatalogTable]
-    }.head
+    val catalogTable: CatalogTable = logicalPartitionRelation.catalogTable.get
     val currentPartitions =
       CarbonFilters.getPartitions(Seq.empty[Expression], sparkSession, identifier)
     // Clean up the already dropped partitioned data
@@ -581,10 +576,6 @@ case class CarbonLoadDataCommand(
 
       } else {
         // input data from csv files. Convert to logical plan
-        CommonUtil.configureCSVInputFormat(hadoopConf, carbonLoadModel)
-        hadoopConf.set(FileInputFormat.INPUT_DIR, carbonLoadModel.getFactFilePath)
-        val jobConf = new JobConf(hadoopConf)
-        SparkHadoopUtil.get.addCredentials(jobConf)
         val attributes =
           StructType(carbonLoadModel.getCsvHeaderColumns.map(
             StructField(_, StringType))).toAttributes
@@ -603,28 +594,27 @@ case class CarbonLoadDataCommand(
         }
         val len = rowDataTypes.length
         var rdd =
-          new NewHadoopRDD[NullWritable, StringArrayWritable](
-            sparkSession.sparkContext,
-            classOf[CSVInputFormat],
-            classOf[NullWritable],
-            classOf[StringArrayWritable],
-            jobConf).map { case (key, value) =>
-            val data = new Array[Any](len)
-            var i = 0
-            val input = value.get()
-            val inputLen = Math.min(input.length, len)
-            while (i < inputLen) {
-              data(i) = UTF8String.fromString(input(i))
-              // If partition column then update empty value with special string otherwise spark
-              // makes it as null so we cannot internally handle badrecords.
-              if (partitionColumns(i)) {
-                if (input(i) != null && input(i).isEmpty) {
-                  data(i) = UTF8String.fromString(CarbonCommonConstants.MEMBER_DEFAULT_VAL)
+          DataLoadingUtil.csvFileScanRDD(
+            sparkSession,
+            model = carbonLoadModel,
+            hadoopConf)
+            .map { row =>
+              val data = new Array[Any](len)
+              var i = 0
+              val input = row.asInstanceOf[GenericInternalRow].values.asInstanceOf[Array[String]]
+              val inputLen = Math.min(input.length, len)
+              while (i < inputLen) {
+                data(i) = UTF8String.fromString(input(i))
+                // For partition columns, replace empty values with a special string; otherwise
+                // Spark treats them as null and bad records cannot be handled internally.
+                if (partitionColumns(i)) {
+                  if (input(i) != null && input(i).isEmpty) {
+                    data(i) = UTF8String.fromString(CarbonCommonConstants.MEMBER_DEFAULT_VAL)
+                  }
                 }
+                i = i + 1
               }
-              i = i + 1
-            }
-            InternalRow.fromSeq(data)
+              InternalRow.fromSeq(data)
 
           }
         // Only select the required columns
@@ -638,10 +628,6 @@ case class CarbonLoadDataCommand(
         }
         Project(output, LogicalRDD(attributes, rdd)(sparkSession))
       }
-      // TODO need to find a way to avoid double lookup
-      val sizeInBytes =
-        CarbonEnv.getInstance(sparkSession).carbonMetastore.lookupRelation(
-          catalogTable.identifier)(sparkSession).asInstanceOf[CarbonRelation].sizeInBytes
       val convertRelation = convertToLogicalRelation(
         catalogTable,
         sizeInBytes,

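The rewritten row mapping above converts each parsed CSV row to UTF8String values and, for partition columns, substitutes CarbonCommonConstants.MEMBER_DEFAULT_VAL for empty strings so that Spark does not coerce them to null and bypass bad-record handling. The standalone sketch below illustrates only that substitution; the column flags and the "<default>" marker are stand-ins, since the real code works on GenericInternalRow/UTF8String and reads the marker from CarbonCommonConstants.

object PartitionValueSketch {
  // Illustration only: replace empty values of partition columns with a default marker.
  def substituteEmptyPartitionValues(
      input: Array[String],
      partitionColumns: Array[Boolean],
      memberDefaultVal: String): Array[String] = {
    input.zipWithIndex.map { case (value, i) =>
      if (partitionColumns(i) && value != null && value.isEmpty) memberDefaultVal else value
    }
  }

  def main(args: Array[String]): Unit = {
    val out = substituteEmptyPartitionValues(
      Array("1", "", "abc"), Array(false, true, false), "<default>")
    println(out.mkString(", "))   // 1, <default>, abc
  }
}
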
http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/processing/src/main/java/org/apache/carbondata/processing/sort/sortdata/SortParameters.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/sort/sortdata/SortParameters.java b/processing/src/main/java/org/apache/carbondata/processing/sort/sortdata/SortParameters.java
index a2248ee..98d150e 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/sort/sortdata/SortParameters.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/sort/sortdata/SortParameters.java
@@ -403,6 +403,10 @@ public class SortParameters implements Serializable {
     LOGGER.info("temp file location: " + StringUtils.join(parameters.getTempFileLocation(), ","));
 
     int numberOfCores = carbonProperties.getNumberOfCores() / 2;
+    // When loading from a partition table, use the writing cores count from the configuration
+    if (configuration.getWritingCoresCount() > 0) {
+      numberOfCores = configuration.getWritingCoresCount();
+    }
     parameters.setNumberOfCores(numberOfCores > 0 ? numberOfCores : 1);
 
     parameters.setFileWriteBufferSize(Integer.parseInt(carbonProperties


[30/50] [abbrv] carbondata git commit: [CARBONDATA-1880] Documentation for merging small files

Posted by ra...@apache.org.
[CARBONDATA-1880] Documentation for merging small files

Documentation for merging small files

This closes #1903


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b48a8c21
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/b48a8c21
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/b48a8c21

Branch: refs/heads/branch-1.3
Commit: b48a8c21f75d642c5729bdc3f147a50685447f65
Parents: 71f8828
Author: sgururajshetty <sg...@gmail.com>
Authored: Wed Jan 31 19:25:16 2018 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Sat Feb 3 16:05:56 2018 +0800

----------------------------------------------------------------------
 docs/configuration-parameters.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/b48a8c21/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index b68a2d1..621574d 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -61,6 +61,7 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must to be configured by the user if bad record logger is enabled or bad record action redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the performance of select queries as it fetch columnar batch of size 4*1024 rows instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of the blocklet group. Higher value results in better sequential IO access.The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB). |  |
+| carbon.task.distribution | block | **block**: Setting this value launches one task per block. This setting is suggested for concurrent queries and queries with heavy shuffling. **custom**: Setting this value groups the blocks and distributes them uniformly across the available resources in the cluster. This enhances query performance but is not suggested for concurrent queries and queries with heavy shuffling. **blocklet**: Setting this value launches one task per blocklet. This setting is suggested for concurrent queries and queries with heavy shuffling. **merge_small_files**: Setting this value merges all small partitions to a size of 128 MB during querying. The small partitions are combined into a map task to reduce the number of read tasks, which enhances performance. | |
 
 * **Compaction Configuration**
   

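As a usage note for the parameter documented above, carbon.task.distribution is a carbon property and can also be set programmatically before running queries. The sketch below assumes only the property key and the merge_small_files value shown in the table row; whether the setting applies per session or only globally is not specified by this change.

import org.apache.carbondata.core.util.CarbonProperties

object TaskDistributionSketch {
  def main(args: Array[String]): Unit = {
    // Switch task distribution to merge_small_files; the key and value are taken
    // from the documentation row added in this commit.
    CarbonProperties.getInstance()
      .addProperty("carbon.task.distribution", "merge_small_files")
  }
}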

[38/50] [abbrv] carbondata git commit: [CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and other command

Posted by ra...@apache.org.
[CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and other command

More test cases are added for concurrent execution of insert overwrite and other commands.
Fix a bug where delete segment and clean files were allowed while an insert overwrite was in progress.
Change all command processMetadata implementations to throw ProcessMetaDataException instead of calling sys.error.

This closes #1891


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/55bffbe2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/55bffbe2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/55bffbe2

Branch: refs/heads/branch-1.3
Commit: 55bffbe2dbe880bb2f7a1e51cf02d49e828098d9
Parents: 91911af
Author: Jacky Li <ja...@qq.com>
Authored: Wed Jan 31 13:10:11 2018 +0800
Committer: ravipesala <ra...@gmail.com>
Committed: Sat Feb 3 17:24:47 2018 +0530

----------------------------------------------------------------------
 .../statusmanager/SegmentStatusManager.java     |  59 ++--
 .../sdv/register/TestRegisterCarbonTable.scala  |   7 +-
 .../preaggregate/TestPreAggCreateCommand.scala  |   7 +-
 .../TestLoadTableConcurrentScenario.scala       |  78 -----
 .../iud/InsertOverwriteConcurrentTest.scala     | 204 -------------
 .../TestInsertAndOtherCommandConcurrent.scala   | 304 +++++++++++++++++++
 .../partition/TestShowPartitions.scala          |   9 +-
 .../StandardPartitionTableQueryTestCase.scala   |   4 +-
 .../command/CarbonTableSchemaCommonSuite.scala  |  11 +-
 .../exception/ConcurrentOperationException.java |  28 +-
 .../exception/ProcessMetaDataException.java     |  26 ++
 .../datamap/CarbonDropDataMapCommand.scala      |   2 +-
 .../CarbonAlterTableCompactionCommand.scala     |  15 +-
 .../CarbonAlterTableFinishStreaming.scala       |   4 +-
 .../management/CarbonCleanFilesCommand.scala    |   7 +
 .../CarbonDeleteLoadByIdCommand.scala           |   9 +-
 .../CarbonDeleteLoadByLoadDateCommand.scala     |   8 +
 .../management/RefreshCarbonTableCommand.scala  |   3 +-
 .../CarbonProjectForDeleteCommand.scala         |   8 +-
 .../CarbonProjectForUpdateCommand.scala         |   8 +-
 .../spark/sql/execution/command/package.scala   |   6 +
 ...rbonAlterTableDropHivePartitionCommand.scala |   2 +-
 .../CarbonAlterTableDropPartitionCommand.scala  |  16 +-
 .../CarbonAlterTableSplitPartitionCommand.scala |  11 +-
 .../CarbonShowCarbonPartitionsCommand.scala     |   7 +-
 .../CarbonAlterTableAddColumnCommand.scala      |   3 +-
 .../CarbonAlterTableDataTypeChangeCommand.scala |  15 +-
 .../CarbonAlterTableDropColumnCommand.scala     |  13 +-
 .../schema/CarbonAlterTableRenameCommand.scala  |  16 +-
 .../schema/CarbonAlterTableUnsetCommand.scala   |   4 +-
 .../table/CarbonCreateTableCommand.scala        |   7 +-
 .../command/table/CarbonDropTableCommand.scala  |  27 +-
 .../TestStreamingTableOperation.scala           |  14 +-
 .../register/TestRegisterCarbonTable.scala      |   3 +-
 .../restructure/AlterTableRevertTestCase.scala  |  17 +-
 .../AlterTableValidationTestCase.scala          |  11 +-
 .../vectorreader/AddColumnTestCases.scala       |   4 +-
 .../vectorreader/ChangeDataTypeTestCases.scala  |   6 +-
 .../vectorreader/DropColumnTestCases.scala      |   4 +-
 .../processing/util/CarbonLoaderUtil.java       |   4 +-
 40 files changed, 532 insertions(+), 459 deletions(-)
----------------------------------------------------------------------
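
The processMetadata change called out above replaces sys.error with a typed exception so that callers and the new tests can intercept a specific failure. The body of the newly added ProcessMetaDataException is not shown in this mail, so the sketch below uses a stand-in class to illustrate the pattern rather than the real constructor.

import org.apache.spark.sql.Row

object ProcessMetadataSketch {
  // Stand-in for the ProcessMetaDataException added by this commit; its real
  // constructor lives in ProcessMetaDataException.java and is not assumed here.
  class ProcessMetaDataExceptionSketch(dbName: String, tableName: String, msg: String)
    extends RuntimeException(s"operation failed for $dbName.$tableName: $msg")

  def processMetadataSketch(tableExists: Boolean, dbName: String, tableName: String): Seq[Row] = {
    if (!tableExists) {
      // Previously the commands called sys.error(...), which surfaces as a bare
      // RuntimeException; a dedicated exception type lets tests write
      // intercept[ProcessMetaDataException] { ... } as seen later in this patch.
      throw new ProcessMetaDataExceptionSketch(dbName, tableName, "table does not exist")
    }
    Seq.empty
  }
}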


http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
index 01f810e..9d14c62 100755
--- a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
@@ -509,13 +509,13 @@ public class SegmentStatusManager {
             invalidLoadIds.add(loadId);
             return invalidLoadIds;
           } else if (SegmentStatus.INSERT_IN_PROGRESS == segmentStatus
-              && checkIfValidLoadInProgress(absoluteTableIdentifier, loadId)) {
+              && isLoadInProgress(absoluteTableIdentifier, loadId)) {
             // if the segment status is in progress then no need to delete that.
             LOG.error("Cannot delete the segment " + loadId + " which is load in progress");
             invalidLoadIds.add(loadId);
             return invalidLoadIds;
           } else if (SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS == segmentStatus
-              && checkIfValidLoadInProgress(absoluteTableIdentifier, loadId)) {
+              && isLoadInProgress(absoluteTableIdentifier, loadId)) {
             // if the segment status is overwrite in progress, then no need to delete that.
             LOG.error("Cannot delete the segment " + loadId + " which is load overwrite " +
                     "in progress");
@@ -572,12 +572,12 @@ public class SegmentStatusManager {
         } else if (SegmentStatus.STREAMING == segmentStatus) {
           LOG.info("Ignoring the segment : " + loadMetadata.getLoadName()
               + "as the segment is streaming in progress.");
-        } else if (SegmentStatus.INSERT_IN_PROGRESS == segmentStatus && checkIfValidLoadInProgress(
+        } else if (SegmentStatus.INSERT_IN_PROGRESS == segmentStatus && isLoadInProgress(
             absoluteTableIdentifier, loadMetadata.getLoadName())) {
           LOG.info("Ignoring the segment : " + loadMetadata.getLoadName()
               + "as the segment is insert in progress.");
         } else if (SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS == segmentStatus
-            && checkIfValidLoadInProgress(absoluteTableIdentifier, loadMetadata.getLoadName())) {
+            && isLoadInProgress(absoluteTableIdentifier, loadMetadata.getLoadName())) {
           LOG.info("Ignoring the segment : " + loadMetadata.getLoadName()
               + "as the segment is insert overwrite in progress.");
         } else if (SegmentStatus.MARKED_FOR_DELETE != segmentStatus) {
@@ -714,21 +714,23 @@ public class SegmentStatusManager {
   }
 
   /**
-   * This function checks if any load or insert overwrite is in progress before dropping that table
-   * @return
+   * Return true if any load or insert overwrite is in progress for specified table
    */
-  public static Boolean checkIfAnyLoadInProgressForTable(CarbonTable carbonTable) {
-    Boolean loadInProgress = false;
+  public static Boolean isLoadInProgressInTable(CarbonTable carbonTable) {
+    if (carbonTable == null) {
+      return false;
+    }
+    boolean loadInProgress = false;
     String metaPath = carbonTable.getMetaDataFilepath();
     LoadMetadataDetails[] listOfLoadFolderDetailsArray =
               SegmentStatusManager.readLoadMetadata(metaPath);
     if (listOfLoadFolderDetailsArray.length != 0) {
       for (LoadMetadataDetails loaddetail :listOfLoadFolderDetailsArray) {
         SegmentStatus segmentStatus = loaddetail.getSegmentStatus();
-        if (segmentStatus == SegmentStatus.INSERT_IN_PROGRESS ||
-                segmentStatus == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS) {
+        if (segmentStatus == SegmentStatus.INSERT_IN_PROGRESS
+            || segmentStatus == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS) {
           loadInProgress =
-              checkIfValidLoadInProgress(carbonTable.getAbsoluteTableIdentifier(),
+              isLoadInProgress(carbonTable.getAbsoluteTableIdentifier(),
                   loaddetail.getLoadName());
         }
       }
@@ -737,15 +739,36 @@ public class SegmentStatusManager {
   }
 
   /**
-   * This method will check for valid IN_PROGRESS segments.
-   * Tries to acquire a lock on the segment and decide on the stale segments
-   * @param absoluteTableIdentifier
-   *
+   * Return true if insert overwrite is in progress for specified table
+   */
+  public static Boolean isOverwriteInProgressInTable(CarbonTable carbonTable) {
+    if (carbonTable == null) {
+      return false;
+    }
+    boolean loadInProgress = false;
+    String metaPath = carbonTable.getMetaDataFilepath();
+    LoadMetadataDetails[] listOfLoadFolderDetailsArray =
+        SegmentStatusManager.readLoadMetadata(metaPath);
+    if (listOfLoadFolderDetailsArray.length != 0) {
+      for (LoadMetadataDetails loaddetail :listOfLoadFolderDetailsArray) {
+        SegmentStatus segmentStatus = loaddetail.getSegmentStatus();
+        if (segmentStatus == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS) {
+          loadInProgress =
+              isLoadInProgress(carbonTable.getAbsoluteTableIdentifier(),
+                  loaddetail.getLoadName());
+        }
+      }
+    }
+    return loadInProgress;
+  }
+
+  /**
+   * Return true if the specified `loadName` is in progress, by checking the load lock.
    */
-  public static Boolean checkIfValidLoadInProgress(AbsoluteTableIdentifier absoluteTableIdentifier,
-      String loadId) {
+  public static Boolean isLoadInProgress(AbsoluteTableIdentifier absoluteTableIdentifier,
+      String loadName) {
     ICarbonLock segmentLock = CarbonLockFactory.getCarbonLockObj(absoluteTableIdentifier,
-        CarbonTablePath.addSegmentPrefix(loadId) + LockUsage.LOCK);
+        CarbonTablePath.addSegmentPrefix(loadName) + LockUsage.LOCK);
     try {
       return !segmentLock.lockWithRetries(1, 0);
     } finally {

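The renamed helpers above (isLoadInProgress, isLoadInProgressInTable, isOverwriteInProgressInTable) are what the command classes touched by this commit consult before mutating a table. Below is a hedged sketch of such a guard from Scala; the message text is illustrative, and the concrete exception type each command throws (ConcurrentOperationException in this patch) is not reproduced here.

import org.apache.carbondata.core.metadata.schema.table.CarbonTable
import org.apache.carbondata.core.statusmanager.SegmentStatusManager

object OverwriteGuardSketch {
  // Illustrative guard, not the exact body of any command changed in this commit.
  def assertNoOverwriteInProgress(carbonTable: CarbonTable, operation: String): Unit = {
    if (SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
      throw new RuntimeException(
        s"insert overwrite is in progress for table ${carbonTable.getDatabaseName}." +
          s"${carbonTable.getTableName}, $operation operation is not allowed")
    }
  }
}
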
http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/register/TestRegisterCarbonTable.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/register/TestRegisterCarbonTable.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/register/TestRegisterCarbonTable.scala
index e3620f7..bf07bd6 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/register/TestRegisterCarbonTable.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/register/TestRegisterCarbonTable.scala
@@ -26,6 +26,7 @@ import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row}
 
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.datastore.impl.FileFactory
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
 
 /**
  *
@@ -168,12 +169,8 @@ class TestRegisterCarbonTable extends QueryTest with BeforeAndAfterAll {
       backUpData(dbLocationCustom, "carbontable_preagg1")
       sql("drop table carbontable")
       restoreData(dbLocationCustom, "carbontable")
-      try {
+      intercept[ProcessMetaDataException] {
         sql("refresh table carbontable")
-        assert(false)
-      } catch {
-        case e: AnalysisException =>
-          assert(true)
       }
       restoreData(dbLocationCustom, "carbontable_preagg1")
     }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 0cb1045..38ab9ae 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -29,7 +29,7 @@ import org.scalatest.BeforeAndAfterAll
 import org.apache.carbondata.core.metadata.encoder.Encoding
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.metadata.schema.datamap.DataMapProvider.TIMESERIES
-import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.exception.{MalformedCarbonCommandException, MalformedDataMapCommandException}
 
 class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
 
@@ -286,7 +286,7 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
   test("test pre agg create table 22: using invalid datamap provider") {
     sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
 
-    val e: Exception = intercept[Exception] {
+    val e: Exception = intercept[MalformedDataMapCommandException] {
       sql(
         """
           | CREATE DATAMAP agg0 ON TABLE mainTable
@@ -296,8 +296,7 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
           | GROUP BY column3,column5,column2
         """.stripMargin)
     }
-    assert(e.getMessage.contains(
-      s"Unknown data map type abc"))
+    assert(e.getMessage.contains("Unknown data map type abc"))
     sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/concurrent/TestLoadTableConcurrentScenario.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/concurrent/TestLoadTableConcurrentScenario.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/concurrent/TestLoadTableConcurrentScenario.scala
index 6af28c3..e69de29 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/concurrent/TestLoadTableConcurrentScenario.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/concurrent/TestLoadTableConcurrentScenario.scala
@@ -1,78 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.carbondata.spark.testsuite.concurrent
-
-import org.apache.carbondata.core.metadata.schema.table.CarbonTable
-import org.apache.carbondata.core.statusmanager.{SegmentStatus, SegmentStatusManager}
-import org.apache.spark.sql.CarbonEnv
-import org.apache.spark.sql.test.util.QueryTest
-import org.scalatest.BeforeAndAfterAll
-
-class TestLoadTableConcurrentScenario extends QueryTest with BeforeAndAfterAll {
-
-  var carbonTable: CarbonTable = _
-  var metaPath: String = _
-
-  override def beforeAll {
-    sql("use default")
-    sql("drop table if exists drop_concur")
-    sql("drop table if exists rename_concur")
-  }
-
-  test("do not allow drop table when load is in progress") {
-    sql("create table drop_concur(id int, name string) stored by 'carbondata'")
-    sql("insert into drop_concur select 1,'abc'")
-    sql("insert into drop_concur select 1,'abc'")
-    sql("insert into drop_concur select 1,'abc'")
-
-    carbonTable = CarbonEnv.getCarbonTable(Option("default"), "drop_concur")(sqlContext.sparkSession)
-    metaPath = carbonTable.getMetaDataFilepath
-    val listOfLoadFolderDetailsArray = SegmentStatusManager.readLoadMetadata(metaPath)
-    listOfLoadFolderDetailsArray(1).setSegmentStatus(SegmentStatus.INSERT_IN_PROGRESS)
-
-    try {
-      sql("drop table drop_concur")
-    } catch {
-      case ex: Throwable => assert(ex.getMessage.contains("Cannot drop table, load or insert overwrite is in progress"))
-    }
-  }
-
-  test("do not allow rename table when load is in progress") {
-    sql("create table rename_concur(id int, name string) stored by 'carbondata'")
-    sql("insert into rename_concur select 1,'abc'")
-    sql("insert into rename_concur select 1,'abc'")
-
-    carbonTable = CarbonEnv.getCarbonTable(Option("default"), "rename_concur")(sqlContext.sparkSession)
-    metaPath = carbonTable.getMetaDataFilepath
-    val listOfLoadFolderDetailsArray = SegmentStatusManager.readLoadMetadata(metaPath)
-    listOfLoadFolderDetailsArray(1).setSegmentStatus(SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS)
-
-    try {
-      sql("alter table rename_concur rename to rename_concur1")
-    } catch {
-      case ex: Throwable => assert(ex.getMessage.contains("alter rename failed, load, insert or insert overwrite " +
-        "is in progress for the table"))
-    }
-  }
-
-  override def afterAll: Unit = {
-    sql("use default")
-    sql("drop table if exists drop_concur")
-    sql("drop table if exists rename_concur")
-  }
-}

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/InsertOverwriteConcurrentTest.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/InsertOverwriteConcurrentTest.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/InsertOverwriteConcurrentTest.scala
deleted file mode 100644
index 25bdf7b..0000000
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/InsertOverwriteConcurrentTest.scala
+++ /dev/null
@@ -1,204 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.carbondata.spark.testsuite.iud
-
-import java.text.SimpleDateFormat
-import java.util
-import java.util.concurrent.{Callable, ExecutorService, Executors, Future}
-
-import scala.collection.JavaConverters._
-
-import org.apache.spark.sql.test.util.QueryTest
-import org.apache.spark.sql.{DataFrame, SaveMode}
-import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
-
-import org.apache.carbondata.core.constants.CarbonCommonConstants
-import org.apache.carbondata.core.datamap.dev.{DataMap, DataMapFactory, DataMapWriter}
-import org.apache.carbondata.core.datamap.{DataMapDistributable, DataMapMeta, DataMapStoreManager}
-import org.apache.carbondata.core.datastore.page.ColumnPage
-import org.apache.carbondata.core.indexstore.schema.FilterType
-import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier
-import org.apache.carbondata.core.util.CarbonProperties
-import org.apache.carbondata.events.Event
-import org.apache.carbondata.spark.testsuite.datamap.C2DataMapFactory
-
-class InsertOverwriteConcurrentTest extends QueryTest with BeforeAndAfterAll with BeforeAndAfterEach {
-  private val executorService: ExecutorService = Executors.newFixedThreadPool(10)
-  var df: DataFrame = _
-
-  override def beforeAll {
-    dropTable()
-    buildTestData()
-
-    // register hook to the table to sleep, thus the other command will be executed
-    DataMapStoreManager.getInstance().createAndRegisterDataMap(
-      AbsoluteTableIdentifier.from(storeLocation + "/orders", "default", "orders"),
-      classOf[WaitingDataMap].getName,
-      "test")
-  }
-
-  private def buildTestData(): Unit = {
-
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "yyyy-MM-dd")
-
-    // Simulate data and write to table orders
-    import sqlContext.implicits._
-
-    val sdf = new SimpleDateFormat("yyyy-MM-dd")
-    df = sqlContext.sparkSession.sparkContext.parallelize(1 to 150000)
-      .map(value => (value, new java.sql.Date(sdf.parse("2015-07-" + (value % 10 + 10)).getTime),
-        "china", "aaa" + value, "phone" + 555 * value, "ASD" + (60000 + value), 14999 + value,
-        "ordersTable" + value))
-      .toDF("o_id", "o_date", "o_country", "o_name",
-        "o_phonetype", "o_serialname", "o_salary", "o_comment")
-    createTable("orders")
-    createTable("orders_overwrite")
-  }
-
-  private def createTable(tableName: String): Unit = {
-    df.write
-      .format("carbondata")
-      .option("tableName", tableName)
-      .option("tempCSV", "false")
-      .mode(SaveMode.Overwrite)
-      .save()
-  }
-
-  override def afterAll {
-    executorService.shutdownNow()
-    dropTable()
-  }
-
-  override def beforeEach(): Unit = {
-    Global.overwriteRunning = false
-  }
-
-  private def dropTable() = {
-    sql("DROP TABLE IF EXISTS orders")
-    sql("DROP TABLE IF EXISTS orders_overwrite")
-  }
-
-  // run the input SQL and block until it is running
-  private def runSqlAsync(sql: String): Future[String] = {
-    assert(!Global.overwriteRunning)
-    var count = 0
-    val future = executorService.submit(
-      new QueryTask(sql)
-    )
-    while (!Global.overwriteRunning && count < 1000) {
-      Thread.sleep(10)
-      // to avoid dead loop in case WaitingDataMap is not invoked
-      count += 1
-    }
-    future
-  }
-
-  test("compaction should fail if insert overwrite is in progress") {
-    val future = runSqlAsync("insert overWrite table orders select * from orders_overwrite")
-    val ex = intercept[Exception]{
-      sql("alter table orders compact 'MINOR'")
-    }
-    assert(future.get.contains("PASS"))
-    assert(ex.getMessage.contains("Cannot run data loading and compaction on same table concurrently"))
-  }
-
-  test("update should fail if insert overwrite is in progress") {
-    val future = runSqlAsync("insert overWrite table orders select * from orders_overwrite")
-    val ex = intercept[Exception] {
-      sql("update orders set (o_country)=('newCountry') where o_country='china'").show
-    }
-    assert(future.get.contains("PASS"))
-    assert(ex.getMessage.contains("Cannot run data loading and update on same table concurrently"))
-  }
-
-  test("delete should fail if insert overwrite is in progress") {
-    val future = runSqlAsync("insert overWrite table orders select * from orders_overwrite")
-    val ex = intercept[Exception] {
-      sql("delete from orders where o_country='china'").show
-    }
-    assert(future.get.contains("PASS"))
-    assert(ex.getMessage.contains("Cannot run data loading and delete on same table concurrently"))
-  }
-
-  test("drop table should fail if insert overwrite is in progress") {
-    val future = runSqlAsync("insert overWrite table orders select * from orders_overwrite")
-    val ex = intercept[Exception] {
-      sql("drop table if exists orders")
-    }
-    assert(future.get.contains("PASS"))
-    assert(ex.getMessage.contains("Data loading is in progress for table orders, drop table operation is not allowed"))
-  }
-
-  class QueryTask(query: String) extends Callable[String] {
-    override def call(): String = {
-      var result = "PASS"
-      try {
-        sql(query).collect()
-      } catch {
-        case exception: Exception => LOGGER.error(exception.getMessage)
-          result = "FAIL"
-      }
-      result
-    }
-  }
-
-}
-
-object Global {
-  var overwriteRunning = false
-}
-
-class WaitingDataMap() extends DataMapFactory {
-
-  override def init(identifier: AbsoluteTableIdentifier, dataMapName: String): Unit = { }
-
-  override def fireEvent(event: Event): Unit = ???
-
-  override def clear(segmentId: String): Unit = {}
-
-  override def clear(): Unit = {}
-
-  override def getDataMaps(distributable: DataMapDistributable): java.util.List[DataMap] = ???
-
-  override def getDataMaps(segmentId: String): util.List[DataMap] = ???
-
-  override def createWriter(segmentId: String): DataMapWriter = {
-    new DataMapWriter {
-      override def onPageAdded(blockletId: Int, pageId: Int, pages: Array[ColumnPage]): Unit = { }
-
-      override def onBlockletEnd(blockletId: Int): Unit = { }
-
-      override def onBlockEnd(blockId: String): Unit = { }
-
-      override def onBlockletStart(blockletId: Int): Unit = { }
-
-      override def onBlockStart(blockId: String): Unit = {
-        // trigger the second SQL to execute
-        Global.overwriteRunning = true
-
-        // wait for 1 second to let second SQL to finish
-        Thread.sleep(1000)
-      }
-    }
-  }
-
-  override def getMeta: DataMapMeta = new DataMapMeta(List("o_country").asJava, FilterType.EQUALTO)
-
-  override def toDistributable(segmentId: String): util.List[DataMapDistributable] = ???
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestInsertAndOtherCommandConcurrent.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestInsertAndOtherCommandConcurrent.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestInsertAndOtherCommandConcurrent.scala
new file mode 100644
index 0000000..7067ef8
--- /dev/null
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestInsertAndOtherCommandConcurrent.scala
@@ -0,0 +1,304 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.iud
+
+import java.text.SimpleDateFormat
+import java.util
+import java.util.concurrent.{Callable, ExecutorService, Executors, Future}
+
+import scala.collection.JavaConverters._
+
+import org.apache.spark.sql.test.util.QueryTest
+import org.apache.spark.sql.{DataFrame, SaveMode}
+import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.datamap.dev.{DataMap, DataMapFactory, DataMapWriter}
+import org.apache.carbondata.core.datamap.{DataMapDistributable, DataMapMeta, DataMapStoreManager}
+import org.apache.carbondata.core.datastore.page.ColumnPage
+import org.apache.carbondata.core.indexstore.schema.FilterType
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.events.Event
+import org.apache.carbondata.spark.exception.ConcurrentOperationException
+import org.apache.carbondata.spark.testsuite.datamap.C2DataMapFactory
+
+// This test suite tests insert and insert overwrite executed concurrently with other commands
+class TestInsertAndOtherCommandConcurrent extends QueryTest with BeforeAndAfterAll with BeforeAndAfterEach {
+  private val executorService: ExecutorService = Executors.newFixedThreadPool(10)
+  var df: DataFrame = _
+
+  override def beforeAll {
+    dropTable()
+    buildTestData()
+
+    // register a hook on the table that sleeps during load, so the other command runs while the load is in progress
+    DataMapStoreManager.getInstance().createAndRegisterDataMap(
+      AbsoluteTableIdentifier.from(storeLocation + "/orders", "default", "orders"),
+      classOf[WaitingDataMap].getName,
+      "test")
+  }
+
+  private def buildTestData(): Unit = {
+
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "yyyy-MM-dd")
+
+    // Simulate data and write to table orders
+    import sqlContext.implicits._
+
+    val sdf = new SimpleDateFormat("yyyy-MM-dd")
+    df = sqlContext.sparkSession.sparkContext.parallelize(1 to 150000)
+      .map(value => (value, new java.sql.Date(sdf.parse("2015-07-" + (value % 10 + 10)).getTime),
+        "china", "aaa" + value, "phone" + 555 * value, "ASD" + (60000 + value), 14999 + value,
+        "ordersTable" + value))
+      .toDF("o_id", "o_date", "o_country", "o_name",
+        "o_phonetype", "o_serialname", "o_salary", "o_comment")
+    createTable("orders")
+    createTable("orders_overwrite")
+  }
+
+  private def createTable(tableName: String): Unit = {
+    df.write
+      .format("carbondata")
+      .option("tableName", tableName)
+      .option("tempCSV", "false")
+      .mode(SaveMode.Overwrite)
+      .save()
+  }
+
+  override def afterAll {
+    executorService.shutdownNow()
+    dropTable()
+  }
+
+  override def beforeEach(): Unit = {
+    Global.overwriteRunning = false
+  }
+
+  private def dropTable() = {
+    sql("DROP TABLE IF EXISTS orders")
+    sql("DROP TABLE IF EXISTS orders_overwrite")
+  }
+
+  // run the input SQL and block until it is running
+  private def runSqlAsync(sql: String): Future[String] = {
+    assert(!Global.overwriteRunning)
+    var count = 0
+    val future = executorService.submit(
+      new QueryTask(sql)
+    )
+    while (!Global.overwriteRunning && count < 1000) {
+      Thread.sleep(10)
+      // avoid an infinite loop in case WaitingDataMap is never invoked
+      count += 1
+    }
+    future
+  }
+
+  // ----------- INSERT OVERWRITE --------------
+
+  test("compaction should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException]{
+      sql("alter table orders compact 'MINOR'")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, compaction operation is not allowed"))
+  }
+
+  test("update should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("update orders set (o_country)=('newCountry') where o_country='china'").show
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, data update operation is not allowed"))
+  }
+
+  test("delete should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("delete from orders where o_country='china'").show
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, data delete operation is not allowed"))
+  }
+
+  test("drop table should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("drop table if exists orders")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, drop table operation is not allowed"))
+  }
+
+  test("alter rename table should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("alter table orders rename to other")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, alter table rename operation is not allowed"))
+  }
+
+  test("delete segment by id should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("DELETE FROM TABLE orders WHERE SEGMENT.ID IN (0)")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "insert overwrite is in progress for table default.orders, delete segment operation is not allowed"))
+  }
+
+  test("delete segment by date should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("DELETE FROM TABLE orders WHERE SEGMENT.STARTTIME BEFORE '2099-06-01 12:05:06' ")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "insert overwrite is in progress for table default.orders, delete segment operation is not allowed"))
+  }
+
+  test("clean file should fail if insert overwrite is in progress") {
+    val future = runSqlAsync("insert overwrite table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("clean files for table  orders")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "insert overwrite is in progress for table default.orders, clean file operation is not allowed"))
+  }
+
+  // ----------- INSERT  --------------
+
+  test("compaction should fail if insert is in progress") {
+    val future = runSqlAsync("insert into table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException]{
+      sql("alter table orders compact 'MINOR'")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, compaction operation is not allowed"))
+  }
+
+  test("update should fail if insert is in progress") {
+    val future = runSqlAsync("insert into table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("update orders set (o_country)=('newCountry') where o_country='china'").show
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, data update operation is not allowed"))
+  }
+
+  test("delete should fail if insert is in progress") {
+    val future = runSqlAsync("insert into table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("delete from orders where o_country='china'").show
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, data delete operation is not allowed"))
+  }
+
+  test("drop table should fail if insert is in progress") {
+    val future = runSqlAsync("insert into table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("drop table if exists orders")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, drop table operation is not allowed"))
+  }
+
+  test("alter rename table should fail if insert is in progress") {
+    val future = runSqlAsync("insert into table orders select * from orders_overwrite")
+    val ex = intercept[ConcurrentOperationException] {
+      sql("alter table orders rename to other")
+    }
+    assert(future.get.contains("PASS"))
+    assert(ex.getMessage.contains(
+      "loading is in progress for table default.orders, alter table rename operation is not allowed"))
+  }
+
+  class QueryTask(query: String) extends Callable[String] {
+    override def call(): String = {
+      var result = "PASS"
+      try {
+        sql(query).collect()
+      } catch {
+        case exception: Exception => LOGGER.error(exception.getMessage)
+          result = "FAIL"
+      }
+      result
+    }
+  }
+
+}
+
+object Global {
+  var overwriteRunning = false
+}
+
+class WaitingDataMap() extends DataMapFactory {
+
+  override def init(identifier: AbsoluteTableIdentifier, dataMapName: String): Unit = { }
+
+  override def fireEvent(event: Event): Unit = ???
+
+  override def clear(segmentId: String): Unit = {}
+
+  override def clear(): Unit = {}
+
+  override def getDataMaps(distributable: DataMapDistributable): java.util.List[DataMap] = ???
+
+  override def getDataMaps(segmentId: String): util.List[DataMap] = ???
+
+  override def createWriter(segmentId: String): DataMapWriter = {
+    new DataMapWriter {
+      override def onPageAdded(blockletId: Int, pageId: Int, pages: Array[ColumnPage]): Unit = { }
+
+      override def onBlockletEnd(blockletId: Int): Unit = { }
+
+      override def onBlockEnd(blockId: String): Unit = { }
+
+      override def onBlockletStart(blockletId: Int): Unit = { }
+
+      override def onBlockStart(blockId: String): Unit = {
+        // trigger the second SQL to execute
+        Global.overwriteRunning = true
+
+        // wait for 1 second to let the second SQL finish
+        Thread.sleep(1000)
+      }
+    }
+  }
+
+  override def getMeta: DataMapMeta = new DataMapMeta(List("o_country").asJava, FilterType.EQUALTO)
+
+  override def toDistributable(segmentId: String): util.List[DataMapDistributable] = ???
+}
\ No newline at end of file
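
The test above coordinates the two SQL statements through a shared flag: WaitingDataMap.onBlockStart flips Global.overwriteRunning once loading starts, while runSqlAsync polls that flag with a bounded retry before the concurrent command is issued. Below is a minimal, self-contained Scala sketch of the same hand-off pattern; the names (ConcurrencySketch, slowLoad) are illustrative only and are not part of this patch.

import java.util.concurrent.{Callable, Executors, TimeUnit}

object ConcurrencySketch {
  @volatile var running = false        // plays the role of Global.overwriteRunning

  def main(args: Array[String]): Unit = {
    val pool = Executors.newSingleThreadExecutor()
    // submit the long-running "load" on a separate thread
    val slowLoad = pool.submit(new Callable[String] {
      override def call(): String = {
        running = true                 // signal that the load has started
        Thread.sleep(1000)             // keep the load in progress for a while
        "PASS"
      }
    })
    // bounded polling, mirroring runSqlAsync: give up after ~10 seconds
    var count = 0
    while (!running && count < 1000) {
      Thread.sleep(10)
      count += 1
    }
    // at this point the test would issue the concurrent operation and expect it to fail
    println("load result: " + slowLoad.get())
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
  }
}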

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
index 86305e7..4825968 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
@@ -25,6 +25,8 @@ import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.test.util.QueryTest
 
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
+
 class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
   override def beforeAll = {
 
@@ -136,10 +138,11 @@ class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
   }
 
   test("show partition table: exception when show not partition table") {
-    val errorMessage =
-      intercept[AnalysisException] { sql("show partitions notPartitionTable").show() }
+    val errorMessage = intercept[ProcessMetaDataException] {
+      sql("show partitions notPartitionTable").show()
+    }
     assert(errorMessage.getMessage.contains(
-      "SHOW PARTITIONS is not allowed on a table that is not partitioned: notpartitiontable"))
+      "SHOW PARTITIONS is not allowed on a table that is not partitioned"))
   }
 
   test("show partition table: hash table") {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
index 0a86dee..95345de 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
@@ -276,8 +276,8 @@ test("Creation of partition table should fail if the colname in table schema and
     sql("drop table if exists partitionTable")
     sql("create table partitionTable (id int,city string,age int) partitioned by(name string) stored by 'carbondata'".stripMargin)
     sql(
-      s"""create datamap preaggTable on table partitionTable using 'preaggregate' as select id,sum(age) from partitionTable group by id"""
-        .stripMargin)
+    s"""create datamap preaggTable on table partitionTable using 'preaggregate' as select id,sum(age) from partitionTable group by id"""
+      .stripMargin)
     sql("insert into partitionTable select 1,'Bangalore',30,'John'")
     sql("insert into partitionTable select 2,'Chennai',20,'Huawei'")
     checkAnswer(sql("show partitions partitionTable"), Seq(Row("name=John"),Row("name=Huawei")))

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common-test/src/test/scala/org/apache/spark/sql/execution/command/CarbonTableSchemaCommonSuite.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/spark/sql/execution/command/CarbonTableSchemaCommonSuite.scala b/integration/spark-common-test/src/test/scala/org/apache/spark/sql/execution/command/CarbonTableSchemaCommonSuite.scala
index 67dfa8f..bd02917 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/spark/sql/execution/command/CarbonTableSchemaCommonSuite.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/spark/sql/execution/command/CarbonTableSchemaCommonSuite.scala
@@ -22,6 +22,8 @@ import org.apache.spark.sql.test.util.QueryTest
 import org.junit.Assert
 import org.scalatest.BeforeAndAfterAll
 
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
+
 class CarbonTableSchemaCommonSuite extends QueryTest with BeforeAndAfterAll {
 
   test("Creating table: Duplicate dimensions found with name, it should throw AnalysisException") {
@@ -53,20 +55,15 @@ class CarbonTableSchemaCommonSuite extends QueryTest with BeforeAndAfterAll {
          | STORED BY 'carbondata'
        """.stripMargin)
 
-    try {
+    val ex = intercept[ProcessMetaDataException] {
       sql(
         s"""
            | alter TABLE carbon_table add columns(
            | bb char(10)
             )
        """.stripMargin)
-      Assert.assertTrue(false)
-    } catch {
-      case _: RuntimeException => Assert.assertTrue(true)
-      case _: Exception => Assert.assertTrue(false)
-    } finally {
-      sql("DROP TABLE IF EXISTS carbon_table")
     }
+    sql("DROP TABLE IF EXISTS carbon_table")
   }
 
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ConcurrentOperationException.java
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ConcurrentOperationException.java b/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ConcurrentOperationException.java
index 1f3c07d..cc0047f 100644
--- a/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ConcurrentOperationException.java
+++ b/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ConcurrentOperationException.java
@@ -17,28 +17,22 @@
 
 package org.apache.carbondata.spark.exception;
 
-public class ConcurrentOperationException extends Exception {
+import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
 
-  /**
-   * The Error message.
-   */
-  private String msg = "";
+public class ConcurrentOperationException extends MalformedCarbonCommandException {
 
-  /**
-   * Constructor
-   *
-   * @param msg The error message for this exception.
-   */
-  public ConcurrentOperationException(String msg) {
-    super(msg);
-    this.msg = msg;
+  public ConcurrentOperationException(String dbName, String tableName, String command1,
+      String command2) {
+    super(command1 + " is in progress for table " + dbName + "." + tableName + ", " + command2 +
+      " operation is not allowed");
+  }
+
+  public ConcurrentOperationException(CarbonTable table, String command1, String command2) {
+    this(table.getDatabaseName(), table.getTableName(), command1, command2);
   }
 
-  /**
-   * getMessage
-   */
   public String getMessage() {
-    return this.msg;
+    return super.getMessage();
   }
 
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ProcessMetaDataException.java
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ProcessMetaDataException.java b/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ProcessMetaDataException.java
new file mode 100644
index 0000000..3e06bde
--- /dev/null
+++ b/integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/ProcessMetaDataException.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.exception;
+
+// This exception will be thrown when processMetaData fails in
+// Carbon's RunnableCommand
+public class ProcessMetaDataException extends MalformedCarbonCommandException {
+  public ProcessMetaDataException(String dbName, String tableName, String msg) {
+    super("operation failed for " + dbName + "." + tableName + ": " + msg);
+  }
+}
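
For reference, here is a small sketch (not part of the patch; it assumes only that the spark-common module is on the classpath) of the messages the two new exception constructors produce, matching the strings asserted in the tests above:

import org.apache.carbondata.spark.exception.{ConcurrentOperationException, ProcessMetaDataException}

object ExceptionMessageSketch {
  def main(args: Array[String]): Unit = {
    val concurrent = new ConcurrentOperationException("default", "orders", "loading", "compaction")
    // loading is in progress for table default.orders, compaction operation is not allowed
    println(concurrent.getMessage)

    val metadata = new ProcessMetaDataException("default", "orders", "table not found")
    // operation failed for default.orders: table not found
    println(metadata.getMessage)
  }
}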

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala
index 1fa2494..bc55988 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala
@@ -123,7 +123,7 @@ case class CarbonDropDataMapCommand(
         throw e
       case ex: Exception =>
         LOGGER.error(ex, s"Dropping datamap $dataMapName failed")
-        throw new MalformedCarbonCommandException(
+        throwMetadataException(dbName, tableName,
           s"Dropping datamap $dataMapName failed: ${ex.getMessage}")
     }
     finally {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
index 2a77826..667d550 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
@@ -83,14 +83,13 @@ case class CarbonAlterTableCompactionCommand(
       val loadMetadataEvent = new LoadMetadataEvent(table, true)
       OperationListenerBus.getInstance().fireEvent(loadMetadataEvent, operationContext)
     }
+    if (SegmentStatusManager.isLoadInProgressInTable(table)) {
+      throw new ConcurrentOperationException(table, "loading", "compaction")
+    }
     Seq.empty
   }
 
   override def processData(sparkSession: SparkSession): Seq[Row] = {
-    val LOGGER: LogService =
-      LogServiceFactory.getLogService(this.getClass.getName)
-    val tableName = alterTableModel.tableName.toLowerCase
-    val databaseName = alterTableModel.dbName.getOrElse(sparkSession.catalog.currentDatabase)
     operationContext.setProperty("compactionException", "true")
     var compactionType: CompactionType = null
     var compactionException = "true"
@@ -111,14 +110,6 @@ case class CarbonAlterTableCompactionCommand(
     } else if (compactionException.equalsIgnoreCase("false")) {
       Seq.empty
     } else {
-      val isLoadInProgress = SegmentStatusManager.checkIfAnyLoadInProgressForTable(table)
-      if (isLoadInProgress) {
-        val message = "Cannot run data loading and compaction on same table concurrently. " +
-                      "Please wait for load to finish"
-        LOGGER.error(message)
-        throw new ConcurrentOperationException(message)
-      }
-
       val carbonLoadModel = new CarbonLoadModel()
       carbonLoadModel.setTableName(table.getTableName)
       val dataLoadSchema = new CarbonDataLoadSchema(table)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableFinishStreaming.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableFinishStreaming.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableFinishStreaming.scala
index 59cc0c4..a477167 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableFinishStreaming.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableFinishStreaming.scala
@@ -17,8 +17,6 @@
 
 package org.apache.spark.sql.execution.command.management
 
-import java.io.IOException
-
 import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.execution.command.MetadataCommand
 
@@ -46,7 +44,7 @@ case class CarbonAlterTableFinishStreaming(
         val msg = "Failed to finish streaming, because streaming is locked for table " +
                   carbonTable.getDatabaseName() + "." + carbonTable.getTableName()
         LOGGER.error(msg)
-        throw new IOException(msg)
+        throwMetadataException(carbonTable.getDatabaseName, carbonTable.getTableName, msg)
       }
     } finally {
       if (streamingLock.unlock()) {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala
index 4b68bd0..4f90fb5 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala
@@ -26,7 +26,9 @@ import org.apache.spark.sql.optimizer.CarbonFilters
 import org.apache.carbondata.api.CarbonStore
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.events.{CleanFilesPostEvent, CleanFilesPreEvent, OperationContext, OperationListenerBus}
+import org.apache.carbondata.spark.exception.ConcurrentOperationException
 import org.apache.carbondata.spark.util.CommonUtil
 
 /**
@@ -45,6 +47,11 @@ case class CarbonCleanFilesCommand(
 
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName.get)(sparkSession)
+    // if insert overwrite is in progress, do not allow clean files
+    if (SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
+      throw new ConcurrentOperationException(carbonTable, "insert overwrite", "clean file")
+    }
+
     val operationContext = new OperationContext
     val cleanFilesPreEvent: CleanFilesPreEvent =
       CleanFilesPreEvent(carbonTable,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByIdCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByIdCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByIdCommand.scala
index a2819cc..0861c63 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByIdCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByIdCommand.scala
@@ -21,7 +21,9 @@ import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.execution.command.{Checker, DataCommand}
 
 import org.apache.carbondata.api.CarbonStore
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.events.{DeleteSegmentByIdPostEvent, DeleteSegmentByIdPreEvent, OperationContext, OperationListenerBus}
+import org.apache.carbondata.spark.exception.ConcurrentOperationException
 
 case class CarbonDeleteLoadByIdCommand(
     loadIds: Seq[String],
@@ -32,8 +34,13 @@ case class CarbonDeleteLoadByIdCommand(
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     Checker.validateTableExists(databaseNameOp, tableName, sparkSession)
     val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
-    val operationContext = new OperationContext
 
+    // if insert overwrite is in progress, do not allow delete segment
+    if (SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
+      throw new ConcurrentOperationException(carbonTable, "insert overwrite", "delete segment")
+    }
+
+    val operationContext = new OperationContext
     val deleteSegmentByIdPreEvent: DeleteSegmentByIdPreEvent =
       DeleteSegmentByIdPreEvent(carbonTable,
         loadIds,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByLoadDateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByLoadDateCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByLoadDateCommand.scala
index 490bb58..dcbc6ce 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByLoadDateCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonDeleteLoadByLoadDateCommand.scala
@@ -21,7 +21,9 @@ import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.execution.command.{Checker, DataCommand}
 
 import org.apache.carbondata.api.CarbonStore
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.events.{DeleteSegmentByDatePostEvent, DeleteSegmentByDatePreEvent, OperationContext, OperationListenerBus}
+import org.apache.carbondata.spark.exception.ConcurrentOperationException
 
 case class CarbonDeleteLoadByLoadDateCommand(
     databaseNameOp: Option[String],
@@ -33,6 +35,12 @@ case class CarbonDeleteLoadByLoadDateCommand(
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     Checker.validateTableExists(databaseNameOp, tableName, sparkSession)
     val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
+
+    // if insert overwrite is in progress, do not allow delete segment
+    if (SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
+      throw new ConcurrentOperationException(carbonTable, "insert overwrite", "delete segment")
+    }
+
     val operationContext = new OperationContext
     val deleteSegmentByDatePreEvent: DeleteSegmentByDatePreEvent =
       DeleteSegmentByDatePreEvent(carbonTable,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala
index 2983ea4..72ed051 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala
@@ -26,7 +26,6 @@ import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
 import org.apache.spark.sql.execution.command.{AlterTableAddPartitionCommand, MetadataCommand}
 import org.apache.spark.sql.execution.command.table.CarbonCreateTableCommand
-import org.apache.spark.sql.util.CarbonException
 
 import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
 import org.apache.carbondata.core.datastore.impl.FileFactory
@@ -89,7 +88,7 @@ case class RefreshCarbonTableCommand(
                       s"[$tableName] failed. All the aggregate Tables for table [$tableName] is" +
                       s" not copied under database [$databaseName]"
             LOGGER.audit(msg)
-            CarbonException.analysisException(msg)
+            throwMetadataException(databaseName, tableName, msg)
           }
           // 2.2.1 Register the aggregate tables to hive
           registerAggregates(databaseName, dataMapSchemaList)(sparkSession)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForDeleteCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForDeleteCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForDeleteCommand.scala
index 874d416..a37d6dc 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForDeleteCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForDeleteCommand.scala
@@ -44,12 +44,8 @@ private[sql] case class CarbonProjectForDeleteCommand(
   override def processData(sparkSession: SparkSession): Seq[Row] = {
     val LOGGER = LogServiceFactory.getLogService(this.getClass.getName)
     val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
-    val isLoadInProgress = SegmentStatusManager.checkIfAnyLoadInProgressForTable(carbonTable)
-    if (isLoadInProgress) {
-      val errorMessage = "Cannot run data loading and delete on same table concurrently. Please " +
-                         "wait for load to finish"
-      LOGGER.error(errorMessage)
-      throw new ConcurrentOperationException(errorMessage)
+    if (SegmentStatusManager.isLoadInProgressInTable(carbonTable)) {
+      throw new ConcurrentOperationException(carbonTable, "loading", "data delete")
     }
 
     IUDCommonUtil.checkIfSegmentListIsSet(sparkSession, plan)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
index 318c904..756d120 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/mutation/CarbonProjectForUpdateCommand.scala
@@ -55,12 +55,8 @@ private[sql] case class CarbonProjectForUpdateCommand(
       return Seq.empty
     }
     val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
-    val isLoadInProgress = SegmentStatusManager.checkIfAnyLoadInProgressForTable(carbonTable)
-    if (isLoadInProgress) {
-      val errorMessage = "Cannot run data loading and update on same table concurrently. Please " +
-                         "wait for load to finish"
-      LOGGER.error(errorMessage)
-      throw new ConcurrentOperationException(errorMessage)
+    if (SegmentStatusManager.isLoadInProgressInTable(carbonTable)) {
+      throw new ConcurrentOperationException(carbonTable, "loading", "data update")
     }
 
     // trigger event for Update table

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala
index 4e983a2..4224efa 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala
@@ -24,6 +24,7 @@ import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 
 import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.spark.exception.ProcessMetaDataException
 
 object Checker {
   def validateTableExists(
@@ -45,6 +46,11 @@ object Checker {
  */
 trait MetadataProcessOpeation {
   def processMetadata(sparkSession: SparkSession): Seq[Row]
+
+  // call this to throw an exception when processMetadata fails
+  def throwMetadataException(dbName: String, tableName: String, msg: String): Unit = {
+    throw new ProcessMetaDataException(dbName, tableName, msg)
+  }
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
index 0158a32..cb4dece 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
@@ -72,7 +72,7 @@ case class CarbonAlterTableDropHivePartitionCommand(
       } catch {
         case e: Exception =>
           if (!ifExists) {
-            throw e
+            throwMetadataException(table.getDatabaseName, table.getTableName, e.getMessage)
           } else {
             log.warn(e.getMessage)
             return Seq.empty[Row]

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropPartitionCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropPartitionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropPartitionCommand.scala
index 114c25d..7fe2658 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropPartitionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropPartitionCommand.scala
@@ -38,7 +38,7 @@ import org.apache.carbondata.core.metadata.converter.ThriftWrapperSchemaConverte
 import org.apache.carbondata.core.metadata.schema.partition.PartitionType
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager
-import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
+import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.processing.loading.TableProcessingOperations
 import org.apache.carbondata.processing.loading.model.{CarbonDataLoadSchema, CarbonLoadModel}
@@ -62,17 +62,13 @@ case class CarbonAlterTableDropPartitionCommand(
       .asInstanceOf[CarbonRelation]
     val tablePath = relation.carbonTable.getTablePath
     carbonMetaStore.checkSchemasModifiedTimeAndReloadTable(TableIdentifier(tableName, Some(dbName)))
-    if (relation == null) {
-      sys.error(s"Table $dbName.$tableName does not exist")
-    }
-    if (null == CarbonMetadata.getInstance.getCarbonTable(dbName, tableName)) {
-      LOGGER.error(s"Alter table failed. table not found: $dbName.$tableName")
-      sys.error(s"Alter table failed. table not found: $dbName.$tableName")
+    if (relation == null || CarbonMetadata.getInstance.getCarbonTable(dbName, tableName) == null) {
+      throwMetadataException(dbName, tableName, "table not found")
     }
     val table = relation.carbonTable
     val partitionInfo = table.getPartitionInfo(tableName)
     if (partitionInfo == null) {
-      sys.error(s"Table $tableName is not a partition table.")
+      throwMetadataException(dbName, tableName, "table is not a partition table")
     }
     val partitionIds = partitionInfo.getPartitionIds.asScala.map(_.asInstanceOf[Int]).toList
     // keep a copy of partitionIdList before update partitionInfo.
@@ -92,11 +88,11 @@ case class CarbonAlterTableDropPartitionCommand(
         listInfo.remove(listToRemove)
         partitionInfo.setListInfo(listInfo)
       case PartitionType.RANGE_INTERVAL =>
-        sys.error(s"Dropping range interval partition isn't support yet!")
+        throwMetadataException(dbName, tableName,
+          "Dropping range interval partition is unsupported")
     }
     partitionInfo.dropPartition(partitionIndex)
     val carbonTablePath = CarbonStorePath.getCarbonTablePath(table.getAbsoluteTableIdentifier)
-    val schemaFilePath = carbonTablePath.getSchemaFilePath
     // read TableInfo
     val tableInfo = carbonMetaStore.getThriftTableInfo(carbonTablePath)(sparkSession)
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableSplitPartitionCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableSplitPartitionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableSplitPartitionCommand.scala
index bafc96a..020a72c 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableSplitPartitionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableSplitPartitionCommand.scala
@@ -40,7 +40,7 @@ import org.apache.carbondata.core.metadata.schema.PartitionInfo
 import org.apache.carbondata.core.metadata.schema.partition.PartitionType
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager
-import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
+import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.processing.loading.TableProcessingOperations
 import org.apache.carbondata.processing.loading.model.{CarbonDataLoadSchema, CarbonLoadModel}
@@ -65,12 +65,12 @@ case class CarbonAlterTableSplitPartitionCommand(
       .asInstanceOf[CarbonRelation]
     val tablePath = relation.carbonTable.getTablePath
     if (relation == null) {
-      sys.error(s"Table $dbName.$tableName does not exist")
+      throwMetadataException(dbName, tableName, "table not found")
     }
     carbonMetaStore.checkSchemasModifiedTimeAndReloadTable(TableIdentifier(tableName, Some(dbName)))
     if (null == CarbonMetadata.getInstance.getCarbonTable(dbName, tableName)) {
       LOGGER.error(s"Alter table failed. table not found: $dbName.$tableName")
-      sys.error(s"Alter table failed. table not found: $dbName.$tableName")
+      throwMetadataException(dbName, tableName, "table not found")
     }
     val table = relation.carbonTable
     val partitionInfo = table.getPartitionInfo(tableName)
@@ -80,16 +80,15 @@ case class CarbonAlterTableSplitPartitionCommand(
     oldPartitionIds.addAll(partitionIds.asJava)
 
     if (partitionInfo == null) {
-      sys.error(s"Table $tableName is not a partition table.")
+      throwMetadataException(dbName, tableName, "Table is not a partition table.")
     }
     if (partitionInfo.getPartitionType == PartitionType.HASH) {
-      sys.error(s"Hash partition table cannot be added or split!")
+      throwMetadataException(dbName, tableName, "Hash partition table cannot be added or split!")
     }
 
     updatePartitionInfo(partitionInfo, partitionIds)
 
     val carbonTablePath = CarbonStorePath.getCarbonTablePath(table.getAbsoluteTableIdentifier)
-    val schemaFilePath = carbonTablePath.getSchemaFilePath
     // read TableInfo
     val tableInfo = carbonMetaStore.getThriftTableInfo(carbonTablePath)(sparkSession)
     val schemaConverter = new ThriftWrapperSchemaConverterImpl()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonShowCarbonPartitionsCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonShowCarbonPartitionsCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonShowCarbonPartitionsCommand.scala
index ed50835..9419b00 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonShowCarbonPartitionsCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonShowCarbonPartitionsCommand.scala
@@ -17,7 +17,7 @@
 
 package org.apache.spark.sql.execution.command.partition
 
-import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row, SparkSession}
+import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.expressions.Attribute
 import org.apache.spark.sql.execution.command.MetadataCommand
@@ -39,12 +39,11 @@ private[sql] case class CarbonShowCarbonPartitionsCommand(
     val relation = CarbonEnv.getInstance(sparkSession).carbonMetastore
       .lookupRelation(tableIdentifier)(sparkSession).asInstanceOf[CarbonRelation]
     val carbonTable = relation.carbonTable
-    val tableName = carbonTable.getTableName
     val partitionInfo = carbonTable.getPartitionInfo(
       carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
     if (partitionInfo == null) {
-      throw new AnalysisException(
-        s"SHOW PARTITIONS is not allowed on a table that is not partitioned: $tableName")
+      throwMetadataException(carbonTable.getDatabaseName, carbonTable.getTableName,
+        "SHOW PARTITIONS is not allowed on a table that is not partitioned")
     }
     val partitionType = partitionInfo.getPartitionType
     val columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableAddColumnCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableAddColumnCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableAddColumnCommand.scala
index f3f01bb..4b43ea7 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableAddColumnCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableAddColumnCommand.scala
@@ -110,7 +110,8 @@ private[sql] case class CarbonAlterTableAddColumnCommand(
             carbonTable.getAbsoluteTableIdentifier).collect()
           AlterTableUtil.revertAddColumnChanges(dbName, tableName, timeStamp)(sparkSession)
         }
-        sys.error(s"Alter table add operation failed: ${e.getMessage}")
+        throwMetadataException(dbName, tableName,
+          s"Alter table add operation failed: ${e.getMessage}")
     } finally {
       // release lock after command execution completion
       AlterTableUtil.releaseLocks(locks)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDataTypeChangeCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDataTypeChangeCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDataTypeChangeCommand.scala
index 9bea935..571e23f 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDataTypeChangeCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDataTypeChangeCommand.scala
@@ -62,7 +62,7 @@ private[sql] case class CarbonAlterTableDataTypeChangeCommand(
       if (!carbonColumns.exists(_.getColName.equalsIgnoreCase(columnName))) {
         LOGGER.audit(s"Alter table change data type request has failed. " +
                      s"Column $columnName does not exist")
-        sys.error(s"Column does not exist: $columnName")
+        throwMetadataException(dbName, tableName, s"Column does not exist: $columnName")
       }
       val carbonColumn = carbonColumns.filter(_.getColName.equalsIgnoreCase(columnName))
       if (carbonColumn.size == 1) {
@@ -71,11 +71,11 @@ private[sql] case class CarbonAlterTableDataTypeChangeCommand(
       } else {
         LOGGER.audit(s"Alter table change data type request has failed. " +
                      s"Column $columnName is invalid")
-        sys.error(s"Invalid Column: $columnName")
+        throwMetadataException(dbName, tableName, s"Invalid Column: $columnName")
       }
       // read the latest schema file
-      val carbonTablePath = CarbonStorePath
-        .getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+      val carbonTablePath =
+        CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
       val tableInfo: TableInfo = metastore.getThriftTableInfo(carbonTablePath)(sparkSession)
       // maintain the added column for schema evolution history
       var addColumnSchema: ColumnSchema = null
@@ -107,12 +107,13 @@ private[sql] case class CarbonAlterTableDataTypeChangeCommand(
       LOGGER.info(s"Alter table for data type change is successful for table $dbName.$tableName")
       LOGGER.audit(s"Alter table for data type change is successful for table $dbName.$tableName")
     } catch {
-      case e: Exception => LOGGER
-        .error("Alter table change datatype failed : " + e.getMessage)
+      case e: Exception =>
+        LOGGER.error("Alter table change datatype failed : " + e.getMessage)
         if (carbonTable != null) {
           AlterTableUtil.revertDataTypeChanges(dbName, tableName, timeStamp)(sparkSession)
         }
-        sys.error(s"Alter table data type change operation failed: ${e.getMessage}")
+        throwMetadataException(dbName, tableName,
+          s"Alter table data type change operation failed: ${e.getMessage}")
     } finally {
       // release lock after command execution completion
       AlterTableUtil.releaseLocks(locks)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDropColumnCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDropColumnCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDropColumnCommand.scala
index 0319d9e..780ac8f 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDropColumnCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableDropColumnCommand.scala
@@ -63,7 +63,7 @@ private[sql] case class CarbonAlterTableDropColumnCommand(
           tableColumn => partitionColumnSchemaList.contains(tableColumn)
         }
         if (partitionColumns.nonEmpty) {
-          throw new UnsupportedOperationException("Partition columns cannot be dropped: " +
+          throwMetadataException(dbName, tableName, "Partition columns cannot be dropped: " +
                                                   s"$partitionColumns")
         }
       }
@@ -85,7 +85,8 @@ private[sql] case class CarbonAlterTableDropColumnCommand(
           }
         }
         if (!columnExist) {
-          sys.error(s"Column $column does not exists in the table $dbName.$tableName")
+          throwMetadataException(dbName, tableName,
+            s"Column $column does not exist in the table $dbName.$tableName")
         }
       }
 
@@ -137,13 +138,13 @@ private[sql] case class CarbonAlterTableDropColumnCommand(
       LOGGER.info(s"Alter table for drop columns is successful for table $dbName.$tableName")
       LOGGER.audit(s"Alter table for drop columns is successful for table $dbName.$tableName")
     } catch {
-      case e: Exception => LOGGER
-        .error("Alter table drop columns failed : " + e.getMessage)
+      case e: Exception =>
+        LOGGER.error("Alter table drop columns failed : " + e.getMessage)
         if (carbonTable != null) {
           AlterTableUtil.revertDropColumnChanges(dbName, tableName, timeStamp)(sparkSession)
         }
-        e.printStackTrace()
-        sys.error(s"Alter table drop column operation failed: ${e.getMessage}")
+        throwMetadataException(dbName, tableName,
+          s"Alter table drop column operation failed: ${e.getMessage}")
     } finally {
       // release lock after command execution completion
       AlterTableUtil.releaseLocks(locks)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala
index dd34f08..c8f64e1 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala
@@ -17,8 +17,6 @@
 
 package org.apache.spark.sql.execution.command.schema
 
-import org.apache.hadoop.fs.Path
-import org.apache.spark.sql._
 import org.apache.spark.sql.{CarbonEnv, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.execution.command.{AlterTableRenameModel, MetadataCommand}
@@ -37,7 +35,7 @@ import org.apache.carbondata.core.util.CarbonUtil
 import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.events.{AlterTableRenamePostEvent, AlterTableRenamePreEvent, OperationContext, OperationListenerBus}
 import org.apache.carbondata.format.SchemaEvolutionEntry
-import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
+import org.apache.carbondata.spark.exception.{ConcurrentOperationException, MalformedCarbonCommandException}
 
 private[sql] case class CarbonAlterTableRenameCommand(
     alterTableRenameModel: AlterTableRenameModel)
@@ -70,7 +68,7 @@ private[sql] case class CarbonAlterTableRenameCommand(
     if (relation == null) {
       LOGGER.audit(s"Rename table request has failed. " +
                    s"Table $oldDatabaseName.$oldTableName does not exist")
-      sys.error(s"Table $oldDatabaseName.$oldTableName does not exist")
+      throwMetadataException(oldDatabaseName, oldTableName, "Table does not exist")
     }
     val locksToBeAcquired = List(LockUsage.METADATA_LOCK,
       LockUsage.COMPACTION_LOCK,
@@ -90,9 +88,8 @@ private[sql] case class CarbonAlterTableRenameCommand(
         .asInstanceOf[CarbonRelation].carbonTable
       carbonTableLockFilePath = carbonTable.getTablePath
       // if any load is in progress for table, do not allow rename table
-      if (SegmentStatusManager.checkIfAnyLoadInProgressForTable(carbonTable)) {
-        throw new AnalysisException(s"Data loading is in progress for table $oldTableName, alter " +
-                                    s"table rename operation is not allowed")
+      if (SegmentStatusManager.isLoadInProgressInTable(carbonTable)) {
+        throw new ConcurrentOperationException(carbonTable, "loading", "alter table rename")
       }
       // invalid data map for the old table, see CARBON-1690
       val oldTableIdentifier = carbonTable.getAbsoluteTableIdentifier
@@ -160,6 +157,8 @@ private[sql] case class CarbonAlterTableRenameCommand(
       LOGGER.audit(s"Table $oldTableName has been successfully renamed to $newTableName")
       LOGGER.info(s"Table $oldTableName has been successfully renamed to $newTableName")
     } catch {
+      case e: ConcurrentOperationException =>
+        throw e
       case e: Exception =>
         LOGGER.error(e, "Rename table failed: " + e.getMessage)
         if (carbonTable != null) {
@@ -172,7 +171,8 @@ private[sql] case class CarbonAlterTableRenameCommand(
               sparkSession)
           renameBadRecords(newTableName, oldTableName, oldDatabaseName)
         }
-        sys.error(s"Alter table rename table operation failed: ${e.getMessage}")
+        throwMetadataException(oldDatabaseName, oldTableName,
+          s"Alter table rename table operation failed: ${e.getMessage}")
     } finally {
       // case specific to rename table as after table rename old table path will not be found
       if (carbonTable != null) {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableUnsetCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableUnsetCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableUnsetCommand.scala
index 5be5b2c..2490f9e 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableUnsetCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableUnsetCommand.scala
@@ -17,14 +17,12 @@
 
 package org.apache.spark.sql.execution.command.schema
 
-import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
+import org.apache.spark.sql.{Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.execution.command._
 import org.apache.spark.sql.hive.CarbonSessionCatalog
 import org.apache.spark.util.AlterTableUtil
 
-import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
-import org.apache.carbondata.format.TableInfo
 
 private[sql] case class CarbonAlterTableUnsetCommand(
     tableIdentifier: TableIdentifier,


[23/50] [abbrv] carbondata git commit: [CARBONDATA-2062] Configure the temp directory to be used for streaming handoff

Posted by ra...@apache.org.
[CARBONDATA-2062] Configure the temp directory to be used for streaming handoff

This closes #1841


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d3b228fb
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d3b228fb
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d3b228fb

Branch: refs/heads/branch-1.3
Commit: d3b228fb8cde5bace2fc932124ee68b8b2e4ee8c
Parents: f9606e9
Author: Raghunandan S <ca...@gmail.com>
Authored: Mon Jan 22 11:47:28 2018 +0530
Committer: QiangCai <qi...@qq.com>
Committed: Fri Feb 2 14:52:05 2018 +0800

----------------------------------------------------------------------
 .../spark/rdd/AlterTableLoadPartitionRDD.scala  | 34 ++-------------
 .../carbondata/spark/rdd/CarbonMergerRDD.scala  | 31 ++------------
 .../carbondata/spark/util/CommonUtil.scala      | 44 +++++++++++++++++++-
 .../carbondata/streaming/StreamHandoffRDD.scala |  4 ++
 4 files changed, 52 insertions(+), 61 deletions(-)
----------------------------------------------------------------------
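
The diffs below replace per-RDD copies of the temp-directory selection logic with a single call to CommonUtil.setTempStoreLocation. As a rough sketch of what that selection does (reconstructed only from the removed code visible below; the helper's real signature lives in CommonUtil.scala and is not reproduced here), assuming the caller already knows whether carbon.use.local.dir is enabled and which executor-local directories are configured:

import scala.util.Random

object TempStoreLocationSketch {
  // prefer a configured executor-local dir when carbon.use.local.dir is true,
  // otherwise fall back to java.io.tmpdir, then append a unique per-task suffix
  def chooseTempStoreLocation(useLocalDir: Boolean,
      configuredLocalDirs: Seq[String],
      splitIndex: Int): String = {
    val base =
      if (useLocalDir && configuredLocalDirs.nonEmpty) {
        configuredLocalDirs(Random.nextInt(configuredLocalDirs.length))
      } else {
        System.getProperty("java.io.tmpdir")
      }
    base + '/' + System.nanoTime() + '/' + splitIndex
  }

  def main(args: Array[String]): Unit = {
    println(chooseTempStoreLocation(useLocalDir = false, Seq.empty, splitIndex = 0))
  }
}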


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d3b228fb/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
index 35a8ea7..76c99f2 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
@@ -18,21 +18,19 @@
 package org.apache.carbondata.spark.rdd
 
 import scala.collection.JavaConverters._
-import scala.util.Random
 
-import org.apache.spark.{Partition, SparkContext, SparkEnv, TaskContext}
+import org.apache.spark.{Partition, TaskContext}
 import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.execution.command.AlterPartitionModel
 import org.apache.spark.util.PartitionUtils
 
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier
-import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.processing.loading.TableProcessingOperations
 import org.apache.carbondata.processing.partition.spliter.RowResultProcessor
 import org.apache.carbondata.processing.util.{CarbonDataProcessorUtil, CarbonLoaderUtil}
 import org.apache.carbondata.spark.AlterPartitionResult
-import org.apache.carbondata.spark.util.Util
+import org.apache.carbondata.spark.util.CommonUtil
 
 class AlterTableLoadPartitionRDD[K, V](alterPartitionModel: AlterPartitionModel,
     result: AlterPartitionResult[K, V],
@@ -65,33 +63,7 @@ class AlterTableLoadPartitionRDD[K, V](alterPartitionModel: AlterPartitionModel,
             carbonLoadModel.setTaskNo(String.valueOf(partitionId))
             carbonLoadModel.setSegmentId(segmentId)
             carbonLoadModel.setPartitionId("0")
-            val tempLocationKey = CarbonDataProcessorUtil
-              .getTempStoreLocationKey(carbonLoadModel.getDatabaseName,
-                  carbonLoadModel.getTableName,
-                  segmentId,
-                  carbonLoadModel.getTaskNo,
-                  false,
-                  true)
-            // this property is used to determine whether temp location for carbon is inside
-            // container temp dir or is yarn application directory.
-            val carbonUseLocalDir = CarbonProperties.getInstance()
-              .getProperty("carbon.use.local.dir", "false")
-
-            if (carbonUseLocalDir.equalsIgnoreCase("true")) {
-
-                val storeLocations = Util.getConfiguredLocalDirs(SparkEnv.get.conf)
-                if (null != storeLocations && storeLocations.nonEmpty) {
-                    storeLocation = storeLocations(Random.nextInt(storeLocations.length))
-                }
-                if (storeLocation == null) {
-                    storeLocation = System.getProperty("java.io.tmpdir")
-                }
-            } else {
-                storeLocation = System.getProperty("java.io.tmpdir")
-            }
-            storeLocation = storeLocation + '/' + System.nanoTime() + '/' + split.index
-            CarbonProperties.getInstance().addProperty(tempLocationKey, storeLocation)
-            LOGGER.info(s"Temp storeLocation taken is $storeLocation")
+            CommonUtil.setTempStoreLocation(split.index, carbonLoadModel, false, true)
 
             val tempStoreLoc = CarbonDataProcessorUtil.getLocalDataFolderLocation(databaseName,
                 factTableName,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d3b228fb/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
index f37b0c5..0859f2e 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
@@ -24,7 +24,6 @@ import java.util.concurrent.atomic.AtomicInteger
 
 import scala.collection.JavaConverters._
 import scala.collection.mutable
-import scala.util.Random
 
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.mapred.JobConf
@@ -55,7 +54,7 @@ import org.apache.carbondata.processing.merger._
 import org.apache.carbondata.processing.splits.TableSplit
 import org.apache.carbondata.processing.util.{CarbonDataProcessorUtil, CarbonLoaderUtil}
 import org.apache.carbondata.spark.MergeResult
-import org.apache.carbondata.spark.util.{SparkDataTypeConverterImpl, Util}
+import org.apache.carbondata.spark.util.{CommonUtil, SparkDataTypeConverterImpl, Util}
 
 class CarbonMergerRDD[K, V](
     sc: SparkContext,
@@ -93,24 +92,7 @@ class CarbonMergerRDD[K, V](
       } else {
         null
       }
-      // this property is used to determine whether temp location for carbon is inside
-      // container temp dir or is yarn application directory.
-      val carbonUseLocalDir = CarbonProperties.getInstance()
-        .getProperty("carbon.use.local.dir", "false")
 
-      if (carbonUseLocalDir.equalsIgnoreCase("true")) {
-
-        val storeLocations = Util.getConfiguredLocalDirs(SparkEnv.get.conf)
-        if (null != storeLocations && storeLocations.nonEmpty) {
-          storeLocation = storeLocations(Random.nextInt(storeLocations.length))
-        }
-        if (storeLocation == null) {
-          storeLocation = System.getProperty("java.io.tmpdir")
-        }
-      } else {
-        storeLocation = System.getProperty("java.io.tmpdir")
-      }
-      storeLocation = storeLocation + '/' + "carbon" + System.nanoTime() + '_' + theSplit.index
       var mergeStatus = false
       var mergeNumber = ""
       var exec: CarbonCompactionExecutor = null
@@ -156,15 +138,8 @@ class CarbonMergerRDD[K, V](
           )
         }
         carbonLoadModel.setSegmentId(mergeNumber)
-        val tempLocationKey = CarbonDataProcessorUtil
-          .getTempStoreLocationKey(carbonLoadModel.getDatabaseName,
-            carbonLoadModel.getTableName,
-            carbonLoadModel.getSegmentId,
-            carbonLoadModel.getTaskNo,
-            true,
-            false)
-        CarbonProperties.getInstance().addProperty(tempLocationKey, storeLocation)
-        LOGGER.info(s"Temp storeLocation taken is $storeLocation")
+        CommonUtil.setTempStoreLocation(theSplit.index, carbonLoadModel, true, false)
+
         // get destination segment properties as sent from driver which is of last segment.
         val segmentProperties = new SegmentProperties(
           carbonMergerMapping.maxSegmentColumnSchemaList.asJava,

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d3b228fb/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
index b44a0fb..64e4bb1 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
@@ -24,11 +24,12 @@ import java.util.regex.{Matcher, Pattern}
 
 import scala.collection.JavaConverters._
 import scala.collection.mutable.Map
+import scala.util.Random
 
 import org.apache.commons.lang3.StringUtils
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-import org.apache.spark.SparkContext
+import org.apache.spark.{SparkContext, SparkEnv}
 import org.apache.spark.sql.{Row, RowFactory}
 import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
 import org.apache.spark.sql.execution.command.{ColumnProperty, Field, PartitionerField}
@@ -53,9 +54,10 @@ import org.apache.carbondata.core.util.path.CarbonStorePath
 import org.apache.carbondata.processing.loading.csvinput.CSVInputFormat
 import org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException
 import org.apache.carbondata.processing.loading.model.CarbonLoadModel
-import org.apache.carbondata.processing.util.{CarbonDataProcessorUtil}
+import org.apache.carbondata.processing.util.CarbonDataProcessorUtil
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
 
+
 object CommonUtil {
   private val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
 
@@ -890,4 +892,42 @@ object CommonUtil {
     (Integer.parseInt(scaleAndPrecision(0).trim), Integer.parseInt(scaleAndPrecision(1).trim))
   }
 
+
+  def setTempStoreLocation(
+      index: Int,
+      carbonLoadModel: CarbonLoadModel,
+      isCompactionFlow: Boolean,
+      isAltPartitionFlow: Boolean) : Unit = {
+    var storeLocation: String = null
+
+    // this property is used to determine whether temp location for carbon is inside
+    // container temp dir or is yarn application directory.
+    val carbonUseLocalDir = CarbonProperties.getInstance()
+      .getProperty("carbon.use.local.dir", "false")
+
+    if (carbonUseLocalDir.equalsIgnoreCase("true")) {
+
+      val storeLocations = Util.getConfiguredLocalDirs(SparkEnv.get.conf)
+      if (null != storeLocations && storeLocations.nonEmpty) {
+        storeLocation = storeLocations(Random.nextInt(storeLocations.length))
+      }
+      if (storeLocation == null) {
+        storeLocation = System.getProperty("java.io.tmpdir")
+      }
+    } else {
+      storeLocation = System.getProperty("java.io.tmpdir")
+    }
+    storeLocation = storeLocation + CarbonCommonConstants.FILE_SEPARATOR + "carbon" +
+      System.nanoTime() + CarbonCommonConstants.UNDERSCORE + index
+
+    val tempLocationKey = CarbonDataProcessorUtil
+      .getTempStoreLocationKey(carbonLoadModel.getDatabaseName,
+        carbonLoadModel.getTableName,
+        carbonLoadModel.getSegmentId,
+        carbonLoadModel.getTaskNo,
+        isCompactionFlow,
+        isAltPartitionFlow)
+    CarbonProperties.getInstance().addProperty(tempLocationKey, storeLocation)
+  }
+
 }
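
The helper above centralizes the temp-directory selection that was previously duplicated in AlterTableLoadPartitionRDD and CarbonMergerRDD, and is reused for streaming handoff in the next diff. For reference, a minimal task-side sketch of how it is meant to be used; the wrapper object and method names and the loadModel/splitIndex parameters are illustrative only, while the CommonUtil, CarbonDataProcessorUtil and CarbonProperties calls are the ones appearing in this patch:

    import org.apache.carbondata.core.util.CarbonProperties
    import org.apache.carbondata.processing.loading.model.CarbonLoadModel
    import org.apache.carbondata.processing.util.CarbonDataProcessorUtil
    import org.apache.carbondata.spark.util.CommonUtil

    object TempStoreExample {
      // Illustrative sketch, not part of the patch: register the task-local temp directory and
      // resolve it again through the same property key, as the compaction flow does above.
      def prepareTempStore(splitIndex: Int, loadModel: CarbonLoadModel): String = {
        // registers <local-dir or java.io.tmpdir>/carbon<nanoTime>_<splitIndex> under a
        // task-specific property key (compaction-flow flags used here)
        CommonUtil.setTempStoreLocation(splitIndex, loadModel, true, false)
        val tempLocationKey = CarbonDataProcessorUtil.getTempStoreLocationKey(
          loadModel.getDatabaseName,
          loadModel.getTableName,
          loadModel.getSegmentId,
          loadModel.getTaskNo,
          true,   // isCompactionFlow
          false)  // isAltPartitionFlow
        // downstream writers look up the same key to find the temp directory
        CarbonProperties.getInstance().getProperty(tempLocationKey)
      }
    }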

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d3b228fb/streaming/src/main/scala/org/apache/carbondata/streaming/StreamHandoffRDD.scala
----------------------------------------------------------------------
diff --git a/streaming/src/main/scala/org/apache/carbondata/streaming/StreamHandoffRDD.scala b/streaming/src/main/scala/org/apache/carbondata/streaming/StreamHandoffRDD.scala
index a96ab32..41dfa50 100644
--- a/streaming/src/main/scala/org/apache/carbondata/streaming/StreamHandoffRDD.scala
+++ b/streaming/src/main/scala/org/apache/carbondata/streaming/StreamHandoffRDD.scala
@@ -46,6 +46,8 @@ import org.apache.carbondata.processing.merger.{CompactionResultSortProcessor, C
 import org.apache.carbondata.processing.util.CarbonLoaderUtil
 import org.apache.carbondata.spark.{HandoffResult, HandoffResultImpl}
 import org.apache.carbondata.spark.rdd.CarbonRDD
+import org.apache.carbondata.spark.util.CommonUtil
+
 
 /**
  * partition of the handoff segment
@@ -111,6 +113,8 @@ class StreamHandoffRDD[K, V](
     CarbonMetadata.getInstance().addCarbonTable(carbonTable)
     // the input iterator is using raw row
     val iteratorList = prepareInputIterator(split, carbonTable)
+
+    CommonUtil.setTempStoreLocation(split.index, carbonLoadModel, true, false)
     // use CompactionResultSortProcessor to sort data and write to columnar files
     val processor = prepareHandoffProcessor(carbonTable)
     val status = processor.execute(iteratorList)


[09/50] [abbrv] carbondata git commit: [CARBONDATA-1840]Updated configuration-parameters.md for V3 format

Posted by ra...@apache.org.
[CARBONDATA-1840]Updated configuration-parameters.md for V3 format

Updated configuration-parameters.md for V3 format

This closes #1883


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f34ea5c7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f34ea5c7
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/f34ea5c7

Branch: refs/heads/branch-1.3
Commit: f34ea5c70b38ac6934d9203264de4626d22f68e4
Parents: cdff193
Author: vandana <va...@gmail.com>
Authored: Tue Jan 30 15:12:34 2018 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Thu Feb 1 11:03:29 2018 +0800

----------------------------------------------------------------------
 docs/configuration-parameters.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/f34ea5c7/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 522d222..fe207f2 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -35,7 +35,7 @@ This section provides the details of all the configurations required for the Car
 | carbon.storelocation | /user/hive/warehouse/carbon.store | Location where CarbonData will create the store, and write the data in its own format. NOTE: Store location should be in HDFS. |
 | carbon.ddl.base.hdfs.url | hdfs://hacluster/opt/data | This property is used to configure the HDFS relative path, the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS. If this path is configured, then user need not pass the complete path while dataload. For example: If absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload user can specify the csv path as /2016/xyz.csv. |
 | carbon.badRecords.location | /opt/Carbon/Spark/badrecords | Path where the bad records are stored. |
-| carbon.data.file.version | 2 | If this parameter value is set to 1, then CarbonData will support the data load which is in old format(0.x version). If the value is set to 2(1.x onwards version), then CarbonData will support the data load of new format only.|
+| carbon.data.file.version | 3 | If this parameter is set to 1, CarbonData supports loading data in the old format (0.x version). If it is set to 2 (1.x onwards), CarbonData supports loading data in the new format only. The default value is 3, the latest version, which improves query performance by roughly 20% to 50%. To configure the V3 format explicitly, add carbon.data.file.version = V3 in the carbon.properties file. |
 | carbon.streaming.auto.handoff.enabled | true | If this parameter value is set to true, auto trigger handoff function will be enabled.|
 | carbon.streaming.segment.max.size | 1024000000 | This parameter defines the maximum size of the streaming segment. Setting this parameter to appropriate value will avoid impacting the streaming ingestion. The value is in bytes.|
 
@@ -60,6 +60,7 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.options.is.empty.data.bad.record | false | If false, then empty ("" or '' or ,,) data will not be considered as bad record and vice versa. | |
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must to be configured by the user if bad record logger is enabled or bad record action redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the performance of select queries as it fetch columnar batch of size 4*1024 rows instead of fetching data row by row. | |
+| carbon.blockletgroup.size.in.mb | 64 MB | Data is read as a group of blocklets, called a blocklet group. This parameter specifies the size of the blocklet group. A higher value gives better sequential IO access. The minimum value is 16 MB; any value lower than 16 MB is reset to the default (64 MB). |  |
 
 * **Compaction Configuration**
   

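As a usage note (not part of this documentation change), both parameters described above can also be set programmatically through the CarbonProperties API that the code base already uses. The snippet below is a minimal sketch under that assumption, with the key names and defaults taken from the table above:

    import org.apache.carbondata.core.util.CarbonProperties

    object DataFileVersionConfig {
      // Sketch only: apply the V3 file format and the default blocklet group size before loading.
      def applyV3Defaults(): Unit = {
        val props = CarbonProperties.getInstance()
        // latest columnar format; the table above quotes a ~20% to 50% query-time improvement
        props.addProperty("carbon.data.file.version", "V3")
        // blocklet group size in MB; values below 16 MB fall back to the 64 MB default
        props.addProperty("carbon.blockletgroup.size.in.mb", "64")
      }
    }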

[17/50] [abbrv] carbondata git commit: [CARBONDATA-2064] Add compaction listener

Posted by ra...@apache.org.
[CARBONDATA-2064] Add compaction listener

This closes #1847


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/54a381c2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/54a381c2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/54a381c2

Branch: refs/heads/branch-1.3
Commit: 54a381c27024ece07d400a4a1d36917bd3ca09f9
Parents: 1202e20
Author: dhatchayani <dh...@gmail.com>
Authored: Tue Jan 23 15:26:26 2018 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Thu Feb 1 22:20:33 2018 +0530

----------------------------------------------------------------------
 .../core/constants/CarbonCommonConstants.java   |   7 -
 .../hadoop/api/CarbonOutputCommitter.java       |  32 ++--
 .../sdv/generated/MergeIndexTestCase.scala      |  30 ++--
 .../CarbonIndexFileMergeTestCase.scala          |  48 +++---
 .../dataload/TestGlobalSortDataLoad.scala       |   2 +-
 .../StandardPartitionTableLoadingTestCase.scala |   5 -
 .../carbondata/events/AlterTableEvents.scala    |  14 +-
 .../spark/rdd/CarbonMergeFilesRDD.scala         |  84 ----------
 .../carbondata/spark/util/CommonUtil.scala      |  51 ------
 .../spark/rdd/CarbonDataRDDFactory.scala        |  14 --
 .../spark/rdd/CarbonTableCompactor.scala        |   2 -
 .../CarbonAlterTableCompactionCommand.scala     | 165 +++++++++----------
 .../sql/execution/strategy/DDLStrategy.scala    |  17 --
 .../CarbonGetTableDetailComandTestCase.scala    |   6 +-
 .../processing/loading/events/LoadEvents.java   |  12 ++
 .../processing/merger/CompactionType.java       |   1 -
 16 files changed, 155 insertions(+), 335 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 77e8db8..7ae3034 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1478,13 +1478,6 @@ public final class CarbonCommonConstants {
 
   public static final String BITSET_PIPE_LINE_DEFAULT = "true";
 
-  /**
-   * It is internal configuration and used only for test purpose.
-   * It will merge the carbon index files with in the segment to single segment.
-   */
-  public static final String CARBON_MERGE_INDEX_IN_SEGMENT = "carbon.merge.index.in.segment";
-
-  public static final String CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT = "true";
 
   public static final String AGGREGATIONDATAMAPSCHEMA = "AggregateDataMapHandler";
   /*

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
----------------------------------------------------------------------
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
index 9cca1bb..555ddd2 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
@@ -25,18 +25,15 @@ import java.util.Set;
 
 import org.apache.carbondata.common.logging.LogService;
 import org.apache.carbondata.common.logging.LogServiceFactory;
-import org.apache.carbondata.core.constants.CarbonCommonConstants;
 import org.apache.carbondata.core.metadata.PartitionMapFileStore;
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
 import org.apache.carbondata.core.statusmanager.LoadMetadataDetails;
 import org.apache.carbondata.core.statusmanager.SegmentStatus;
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
-import org.apache.carbondata.core.util.CarbonProperties;
 import org.apache.carbondata.core.util.CarbonSessionInfo;
 import org.apache.carbondata.core.util.ThreadLocalSessionInfo;
 import org.apache.carbondata.core.util.path.CarbonTablePath;
-import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter;
 import org.apache.carbondata.events.OperationContext;
 import org.apache.carbondata.events.OperationListenerBus;
 import org.apache.carbondata.processing.loading.events.LoadEvents;
@@ -126,7 +123,16 @@ public class CarbonOutputCommitter extends FileOutputCommitter {
         }
       }
       CarbonLoaderUtil.recordNewLoadMetadata(newMetaEntry, loadModel, false, overwriteSet);
-      mergeCarbonIndexFiles(segmentPath);
+      if (operationContext != null) {
+        LoadEvents.LoadTableMergePartitionEvent loadTableMergePartitionEvent =
+            new LoadEvents.LoadTableMergePartitionEvent(segmentPath);
+        try {
+          OperationListenerBus.getInstance()
+              .fireEvent(loadTableMergePartitionEvent, (OperationContext) operationContext);
+        } catch (Exception e) {
+          throw new IOException(e);
+        }
+      }
       String updateTime =
           context.getConfiguration().get(CarbonTableOutputFormat.UPADTE_TIMESTAMP, null);
       String segmentsToBeDeleted =
@@ -158,24 +164,6 @@ public class CarbonOutputCommitter extends FileOutputCommitter {
   }
 
   /**
-   * Merge index files to a new single file.
-   */
-  private void mergeCarbonIndexFiles(String segmentPath) throws IOException {
-    boolean mergeIndex = false;
-    try {
-      mergeIndex = Boolean.parseBoolean(CarbonProperties.getInstance().getProperty(
-          CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT,
-          CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT));
-    } catch (Exception e) {
-      mergeIndex = Boolean.parseBoolean(
-          CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT);
-    }
-    if (mergeIndex) {
-      new CarbonIndexFileMergeWriter().mergeCarbonIndexFilesOfSegment(segmentPath);
-    }
-  }
-
-  /**
    * Update the tablestatus as fail if any fail happens.
    *
    * @param context

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/MergeIndexTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/MergeIndexTestCase.scala b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/MergeIndexTestCase.scala
index cb0d02c..8e71257 100644
--- a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/MergeIndexTestCase.scala
+++ b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/MergeIndexTestCase.scala
@@ -29,6 +29,7 @@ import org.apache.carbondata.core.indexstore.blockletindex.SegmentIndexFileStore
 import org.apache.carbondata.core.metadata.{AbsoluteTableIdentifier, CarbonMetadata}
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
+import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter
 
 /**
  * Test Class for AlterTableTestCase to verify all scenarios
@@ -40,34 +41,30 @@ class MergeIndexTestCase extends QueryTest with BeforeAndAfterAll {
   override protected def afterAll(): Unit = {
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql("DROP TABLE IF EXISTS indexmerge")
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
   }
 
   test("Verify correctness of index merge sdv") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql(s"""drop table if exists carbon_automation_nonmerge""").collect
 
     sql(s"""create table carbon_automation_nonmerge (imei string,deviceInformationId int,MAC string,deviceColor string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode string,internalModels string, deliveryTime string, channelsId string, channelsName string , deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, Active_phoneP
 ADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY int, Latest_HOUR string, Latest_areaId string, Latest_country string, Latest_province string, Latest_city string, Latest_district string, Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber double) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('DICTIONARY_INCLUDE'='deviceInformationId,Latest_YEAR,Latest_MONTH,Latest_DAY')""").collect
 
     sql(s"""LOAD DATA INPATH '$resourcesPath/Data/VmaLL100' INTO TABLE carbon_automation_nonmerge OPTIONS('DELIMITER'=',','QUOTECHAR'='"','FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSy
 sVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription')""").collect
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "0") == 2)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
     sql("DROP TABLE IF EXISTS carbon_automation_merge")
     sql(s"""create table carbon_automation_merge (imei string,deviceInformationId int,MAC string,deviceColor string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode string,internalModels string, deliveryTime string, channelsId string, channelsName string , deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, Active_phonePADP
 artitionedVersions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY int, Latest_HOUR string, Latest_areaId string, Latest_country string, Latest_province string, Latest_city string, Latest_district string, Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber double) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('DICTIONARY_INCLUDE'='deviceInformationId,Latest_YEAR,Latest_MONTH,Latest_DAY')""").collect
 
     sql(s"""LOAD DATA INPATH '$resourcesPath/Data/VmaLL100' INTO TABLE carbon_automation_merge OPTIONS('DELIMITER'=',','QUOTECHAR'='"','FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVe
 rsion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription')""").collect
 
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","carbon_automation_merge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0"), false)
     assert(getIndexFileCount("default", "carbon_automation_merge", "0") == 0)
     checkAnswer(sql("""Select count(*) from carbon_automation_nonmerge"""),
       sql("""Select count(*) from carbon_automation_merge"""))
   }
 
   test("Verify command of index merge  sdv") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql(s"""drop table if exists carbon_automation_nonmerge""").collect
 
     sql(s"""create table carbon_automation_nonmerge (imei string,deviceInformationId int,MAC string,deviceColor string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode string,internalModels string, deliveryTime string, channelsId string, channelsName string , deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, Active_phoneP
 ADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY int, Latest_HOUR string, Latest_areaId string, Latest_country string, Latest_province string, Latest_city string, Latest_district string, Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber double) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('DICTIONARY_INCLUDE'='deviceInformationId,Latest_YEAR,Latest_MONTH,Latest_DAY')""").collect
@@ -77,17 +74,18 @@ class MergeIndexTestCase extends QueryTest with BeforeAndAfterAll {
     val rows = sql("""Select count(*) from carbon_automation_nonmerge""").collect()
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "0") == 2)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "1") == 2)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
-    sql("ALTER TABLE carbon_automation_nonmerge COMPACT 'SEGMENT_INDEX'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","carbon_automation_nonmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0"), false)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","1"), false)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "0") == 0)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "1") == 0)
     checkAnswer(sql("""Select count(*) from carbon_automation_nonmerge"""), rows)
   }
 
   test("Verify index index merge with compaction  sdv") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql(s"""drop table if exists carbon_automation_nonmerge""").collect
 
     sql(s"""create table carbon_automation_nonmerge (imei string,deviceInformationId int,MAC string,deviceColor string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode string,internalModels string, deliveryTime string, channelsId string, channelsName string , deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, Active_phoneP
 ADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, Latest_DAY int, Latest_HOUR string, Latest_areaId string, Latest_country string, Latest_province string, Latest_city string, Latest_district string, Latest_street string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber double) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('DICTIONARY_INCLUDE'='deviceInformationId,Latest_YEAR,Latest_MONTH,Latest_DAY')""").collect
@@ -99,9 +97,11 @@ class MergeIndexTestCase extends QueryTest with BeforeAndAfterAll {
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "0") == 2)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "1") == 2)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "1") == 2)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
     sql("ALTER TABLE carbon_automation_nonmerge COMPACT 'minor'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","carbon_automation_nonmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0.1"), false)
     assert(getIndexFileCount("default", "carbon_automation_nonmerge", "0.1") == 0)
     checkAnswer(sql("""Select count(*) from carbon_automation_nonmerge"""), rows)
   }
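
With the CARBON_MERGE_INDEX_IN_SEGMENT property removed, the tests above trigger the index merge directly instead of toggling a carbon property. A condensed sketch of that pattern follows; the wrapper object and the default.my_table usage example are hypothetical, while the CarbonMetadata, CarbonTablePath and CarbonIndexFileMergeWriter calls are the ones used in the updated tests:

    import org.apache.carbondata.core.metadata.CarbonMetadata
    import org.apache.carbondata.core.util.path.CarbonTablePath
    import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter

    object MergeSegmentIndexExample {
      // Merge the carbonindex files of one segment of an already-loaded table, e.g.
      // MergeSegmentIndexExample.merge("default", "my_table", "0")
      def merge(dbName: String, tableName: String, segmentNo: String): Unit = {
        val table = CarbonMetadata.getInstance().getCarbonTable(dbName, tableName)
        val tablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
        // the leading "0" argument to getSegmentDir mirrors the updated tests above
        new CarbonIndexFileMergeWriter()
          .mergeCarbonIndexFilesOfSegment(tablePath.getSegmentDir("0", segmentNo), false)
      }
    }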

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/CarbonIndexFileMergeTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/CarbonIndexFileMergeTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/CarbonIndexFileMergeTestCase.scala
index c66107f..895b0b5 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/CarbonIndexFileMergeTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/CarbonIndexFileMergeTestCase.scala
@@ -20,12 +20,11 @@ package org.apache.carbondata.spark.testsuite.datacompaction
 import org.apache.spark.sql.test.util.QueryTest
 import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
 
-import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFilter}
 import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.metadata.CarbonMetadata
-import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.core.util.path.CarbonTablePath
+import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter
 
 class CarbonIndexFileMergeTestCase
   extends QueryTest with BeforeAndAfterEach with BeforeAndAfterAll {
@@ -40,13 +39,9 @@ class CarbonIndexFileMergeTestCase
     CompactionSupportGlobalSortBigFileTest.deleteFile(file2)
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql("DROP TABLE IF EXISTS indexmerge")
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
   }
 
   test("Verify correctness of index merge") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql(
       """
@@ -57,8 +52,6 @@ class CarbonIndexFileMergeTestCase
     sql(s"LOAD DATA LOCAL INPATH '$file2' INTO TABLE nonindexmerge OPTIONS('header'='false', " +
         s"'GLOBAL_SORT_PARTITIONS'='100')")
     assert(getIndexFileCount("default_nonindexmerge", "0") == 100)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
     sql("DROP TABLE IF EXISTS indexmerge")
     sql(
       """
@@ -68,14 +61,16 @@ class CarbonIndexFileMergeTestCase
       """.stripMargin)
     sql(s"LOAD DATA LOCAL INPATH '$file2' INTO TABLE indexmerge OPTIONS('header'='false', " +
         s"'GLOBAL_SORT_PARTITIONS'='100')")
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","indexmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0"), false)
     assert(getIndexFileCount("default_indexmerge", "0") == 0)
     checkAnswer(sql("""Select count(*) from nonindexmerge"""),
       sql("""Select count(*) from indexmerge"""))
   }
 
   test("Verify command of index merge") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql(
       """
@@ -90,17 +85,18 @@ class CarbonIndexFileMergeTestCase
     val rows = sql("""Select count(*) from nonindexmerge""").collect()
     assert(getIndexFileCount("default_nonindexmerge", "0") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 100)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
-    sql("ALTER TABLE nonindexmerge COMPACT 'SEGMENT_INDEX'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","nonindexmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0"), false)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","1"), false)
     assert(getIndexFileCount("default_nonindexmerge", "0") == 0)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 0)
     checkAnswer(sql("""Select count(*) from nonindexmerge"""), rows)
   }
 
   test("Verify command of index merge without enabling property") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql(
       """
@@ -115,15 +111,18 @@ class CarbonIndexFileMergeTestCase
     val rows = sql("""Select count(*) from nonindexmerge""").collect()
     assert(getIndexFileCount("default_nonindexmerge", "0") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 100)
-    sql("ALTER TABLE nonindexmerge COMPACT 'SEGMENT_INDEX'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","nonindexmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0"), false)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","1"), false)
     assert(getIndexFileCount("default_nonindexmerge", "0") == 0)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 0)
     checkAnswer(sql("""Select count(*) from nonindexmerge"""), rows)
   }
 
   test("Verify index index merge with compaction") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql(
       """
@@ -141,16 +140,16 @@ class CarbonIndexFileMergeTestCase
     assert(getIndexFileCount("default_nonindexmerge", "0") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 100)
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "true")
     sql("ALTER TABLE nonindexmerge COMPACT 'minor'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","nonindexmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0.1"), false)
     assert(getIndexFileCount("default_nonindexmerge", "0.1") == 0)
     checkAnswer(sql("""Select count(*) from nonindexmerge"""), rows)
   }
 
   test("Verify index index merge for compacted segments") {
-    CarbonProperties.getInstance()
-      .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql("DROP TABLE IF EXISTS nonindexmerge")
     sql(
       """
@@ -172,7 +171,10 @@ class CarbonIndexFileMergeTestCase
     assert(getIndexFileCount("default_nonindexmerge", "2") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "3") == 100)
     sql("ALTER TABLE nonindexmerge COMPACT 'minor'").collect()
-    sql("ALTER TABLE nonindexmerge COMPACT 'segment_index'").collect()
+    val table = CarbonMetadata.getInstance().getCarbonTable("default","nonindexmerge")
+    val carbonTablePath = new CarbonTablePath(table.getCarbonTableIdentifier, table.getTablePath)
+    new CarbonIndexFileMergeWriter()
+      .mergeCarbonIndexFilesOfSegment(carbonTablePath.getSegmentDir("0","0.1"), false)
     assert(getIndexFileCount("default_nonindexmerge", "0") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "1") == 100)
     assert(getIndexFileCount("default_nonindexmerge", "2") == 100)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
index 50a38f1..0d9e0fd 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
@@ -273,7 +273,7 @@ class TestGlobalSortDataLoad extends QueryTest with BeforeAndAfterEach with Befo
     val carbonTable = CarbonMetadata.getInstance().getCarbonTable("default", "carbon_globalsort")
     val carbonTablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
     val segmentDir = carbonTablePath.getSegmentDir("0", "0")
-    assertResult(Math.max(4, defaultParallelism) + 1)(new File(segmentDir).listFiles().length)
+    assertResult(Math.max(7, defaultParallelism) + 1)(new File(segmentDir).listFiles().length)
   }
 
   test("Query with small files") {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
index 31d2598..16f252b 100644
--- a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
+++ b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
@@ -319,8 +319,6 @@ class StandardPartitionTableLoadingTestCase extends QueryTest with BeforeAndAfte
   }
 
   test("merge carbon index disable data loading for partition table for three partition column") {
-    CarbonProperties.getInstance.addProperty(
-      CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
     sql(
       """
         | CREATE TABLE mergeindexpartitionthree (empno int, doj Timestamp,
@@ -340,9 +338,6 @@ class StandardPartitionTableLoadingTestCase extends QueryTest with BeforeAndAfte
     val files = carbonFile.listFiles(new CarbonFileFilter {
       override def accept(file: CarbonFile): Boolean = CarbonTablePath.isCarbonIndexFile(file.getName)
     })
-    CarbonProperties.getInstance.addProperty(
-      CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT,
-      CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT)
     assert(files.length == 10)
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
index ca1948a..671e132 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/events/AlterTableEvents.scala
@@ -17,7 +17,7 @@
 package org.apache.carbondata.events
 
 import org.apache.spark.sql.SparkSession
-import org.apache.spark.sql.execution.command.{AlterTableAddColumnsModel, AlterTableDataTypeChangeModel, AlterTableDropColumnModel, AlterTableRenameModel, CarbonMergerMapping}
+import org.apache.spark.sql.execution.command._
 
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.processing.loading.model.CarbonLoadModel
@@ -203,3 +203,15 @@ case class AlterTableCompactionAbortEvent(sparkSession: SparkSession,
     carbonTable: CarbonTable,
     carbonMergerMapping: CarbonMergerMapping,
     mergedLoadName: String) extends Event with AlterTableCompactionEventInfo
+
+
+/**
+ * Compaction Event for handling exception in compaction
+ *
+ * @param sparkSession
+ * @param carbonTable
+ * @param alterTableModel
+ */
+case class AlterTableCompactionExceptionEvent(sparkSession: SparkSession,
+    carbonTable: CarbonTable,
+    alterTableModel: AlterTableModel) extends Event with AlterTableCompactionEventInfo
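
A minimal sketch of how this new event might be fired on the listener bus, mirroring the way CarbonOutputCommitter above fires LoadTableMergePartitionEvent; the wrapper object, the point at which it is called and the availability of an OperationContext are assumptions of this example, not code from the patch:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.command.AlterTableModel

    import org.apache.carbondata.core.metadata.schema.table.CarbonTable
    import org.apache.carbondata.events.{AlterTableCompactionExceptionEvent, OperationContext, OperationListenerBus}

    object CompactionExceptionNotifier {
      // Sketch only: build the event defined above and hand it to any registered listeners.
      def fire(
          sparkSession: SparkSession,
          carbonTable: CarbonTable,
          alterTableModel: AlterTableModel,
          operationContext: OperationContext): Unit = {
        val event = AlterTableCompactionExceptionEvent(sparkSession, carbonTable, alterTableModel)
        OperationListenerBus.getInstance().fireEvent(event, operationContext)
      }
    }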

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergeFilesRDD.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergeFilesRDD.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergeFilesRDD.scala
deleted file mode 100644
index 1087ea7..0000000
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergeFilesRDD.scala
+++ /dev/null
@@ -1,84 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.carbondata.spark.rdd
-
-import org.apache.spark.{Partition, SparkContext, TaskContext}
-
-import org.apache.carbondata.core.util.path.CarbonTablePath
-import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter
-
-case class CarbonMergeFilePartition(rddId: Int, idx: Int, segmentPath: String)
-  extends Partition {
-
-  override val index: Int = idx
-
-  override def hashCode(): Int = 41 * (41 + rddId) + idx
-}
-
-/**
- * RDD to merge all carbonindex files of each segment to carbonindex file into the same segment.
- * @param sc
- * @param tablePath
- * @param segments segments to be merged
- */
-class CarbonMergeFilesRDD(
-    sc: SparkContext,
-    tablePath: String,
-    segments: Seq[String],
-    readFileFooterFromCarbonDataFile: Boolean)
-  extends CarbonRDD[String](sc, Nil) {
-
-  override def getPartitions: Array[Partition] = {
-    segments.zipWithIndex.map {s =>
-      CarbonMergeFilePartition(id, s._2, CarbonTablePath.getSegmentPath(tablePath, s._1))
-    }.toArray
-  }
-
-  override def internalCompute(theSplit: Partition, context: TaskContext): Iterator[String] = {
-    val iter = new Iterator[String] {
-      val split = theSplit.asInstanceOf[CarbonMergeFilePartition]
-      logInfo("Merging carbon index files of segment : " + split.segmentPath)
-
-      new CarbonIndexFileMergeWriter()
-        .mergeCarbonIndexFilesOfSegment(split.segmentPath, readFileFooterFromCarbonDataFile)
-
-      var havePair = false
-      var finished = false
-
-      override def hasNext: Boolean = {
-        if (!finished && !havePair) {
-          finished = true
-          havePair = !finished
-        }
-        !finished
-      }
-
-      override def next(): String = {
-        if (!hasNext) {
-          throw new java.util.NoSuchElementException("End of stream")
-        }
-        havePair = false
-        ""
-      }
-
-    }
-    iter
-  }
-
-}
-

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
index d96a051..b44a0fb 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
@@ -55,7 +55,6 @@ import org.apache.carbondata.processing.loading.exception.CarbonDataLoadingExcep
 import org.apache.carbondata.processing.loading.model.CarbonLoadModel
 import org.apache.carbondata.processing.util.{CarbonDataProcessorUtil}
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
-import org.apache.carbondata.spark.rdd.CarbonMergeFilesRDD
 
 object CommonUtil {
   private val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
@@ -891,54 +890,4 @@ object CommonUtil {
     (Integer.parseInt(scaleAndPrecision(0).trim), Integer.parseInt(scaleAndPrecision(1).trim))
   }
 
-  /**
-   * Merge the carbonindex files with in the segment to carbonindexmerge file inside same segment
-   *
-   * @param sparkContext
-   * @param segmentIds
-   * @param tablePath
-   * @param carbonTable
-   * @param mergeIndexProperty
-   * @param readFileFooterFromCarbonDataFile flag to read file footer information from carbondata
-   *                                         file. This will used in case of upgrade from version
-   *                                         which do not store the blocklet info to current version
-   */
-  def mergeIndexFiles(sparkContext: SparkContext,
-      segmentIds: Seq[String],
-      tablePath: String,
-      carbonTable: CarbonTable,
-      mergeIndexProperty: Boolean,
-      readFileFooterFromCarbonDataFile: Boolean = false): Unit = {
-    if (mergeIndexProperty) {
-      new CarbonMergeFilesRDD(
-        sparkContext,
-        carbonTable.getTablePath,
-        segmentIds,
-        readFileFooterFromCarbonDataFile).collect()
-    } else {
-      try {
-        CarbonProperties.getInstance()
-          .getProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT).toBoolean
-        if (CarbonProperties.getInstance().getProperty(
-          CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT,
-          CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT).toBoolean) {
-          new CarbonMergeFilesRDD(
-            sparkContext,
-            carbonTable.getTablePath,
-            segmentIds,
-            readFileFooterFromCarbonDataFile).collect()
-        }
-      } catch {
-        case _: Exception =>
-          if (CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT.toBoolean) {
-            new CarbonMergeFilesRDD(
-              sparkContext,
-              carbonTable.getTablePath,
-              segmentIds,
-              readFileFooterFromCarbonDataFile).collect()
-          }
-      }
-    }
-  }
-
 }
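
The mergeIndexFiles helper removed here read CARBON_MERGE_INDEX_IN_SEGMENT with toBoolean
inside a try/catch and fell back to the default when the property was missing or malformed.
The same intent, as a small self-contained sketch (hypothetical helper, not part of
CarbonProperties):

    // Parse a boolean property value with an explicit fallback instead of try/catch.
    def booleanProperty(raw: String, default: Boolean): Boolean =
      Option(raw).map(_.trim.toLowerCase) match {
        case Some("true")  => true
        case Some("false") => false
        case _             => default
      }

    // usage: booleanProperty(rawValueFromCarbonProperties, defaultFromCarbonCommonConstants)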

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
index 3de0e70..5c43d58 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
@@ -103,18 +103,6 @@ object CarbonDataRDDFactory {
       LOGGER.info(s"Acquired the compaction lock for table ${ carbonLoadModel.getDatabaseName }" +
           s".${ carbonLoadModel.getTableName }")
       try {
-        if (compactionType == CompactionType.SEGMENT_INDEX) {
-          // Just launch job to merge index and return
-          CommonUtil.mergeIndexFiles(
-            sqlContext.sparkContext,
-            CarbonDataMergerUtil.getValidSegmentList(
-              carbonTable.getAbsoluteTableIdentifier).asScala,
-            carbonLoadModel.getTablePath,
-            carbonTable,
-            true)
-          lock.unlock()
-          return
-        }
         startCompactionThreads(
           sqlContext,
           carbonLoadModel,
@@ -359,8 +347,6 @@ object CarbonDataRDDFactory {
           } else {
             loadDataFile(sqlContext, carbonLoadModel, hadoopConf)
           }
-          CommonUtil.mergeIndexFiles(sqlContext.sparkContext,
-            Seq(carbonLoadModel.getSegmentId), storePath, carbonTable, false)
           val newStatusMap = scala.collection.mutable.Map.empty[String, SegmentStatus]
           if (status.nonEmpty) {
             status.foreach { eachLoadStatus =>
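
The SEGMENT_INDEX branch removed from CarbonDataRDDFactory above unlocked the compaction
lock by hand right before its early return. A minimal sketch of a shape that cannot leak
the lock on any exit path, with illustrative names rather than the ICarbonLock API:

    // Acquire, run the body, and always release, even on an early return or an exception.
    def withLock[T](acquire: () => Boolean, release: () => Unit)(body: => T): T = {
      if (!acquire()) {
        throw new RuntimeException("could not acquire the compaction lock")
      }
      try body finally release()
    }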

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
index 8406d8d..bfe4e41 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
@@ -221,8 +221,6 @@ class CarbonTableCompactor(carbonLoadModel: CarbonLoadModel,
 
     if (finalMergeStatus) {
       val mergedLoadNumber = CarbonDataMergerUtil.getLoadNumberFromLoadName(mergedLoadName)
-      CommonUtil.mergeIndexFiles(
-        sc.sparkContext, Seq(mergedLoadNumber), tablePath, carbonTable, false)
       new PartitionMapFileStore().mergePartitionMapFiles(
         CarbonTablePath.getSegmentPath(tablePath, mergedLoadNumber),
         carbonLoadModel.getFactTimeStamp + "")

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
index fb0f9fe..2a77826 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAlterTableCompactionCommand.scala
@@ -34,16 +34,17 @@ import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.locks.{CarbonLockFactory, LockUsage}
+import org.apache.carbondata.core.metadata.CarbonMetadata
 import org.apache.carbondata.core.metadata.schema.table.{CarbonTable, TableInfo}
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.core.util.{CarbonProperties, CarbonUtil}
 import org.apache.carbondata.core.util.path.CarbonStorePath
-import org.apache.carbondata.events.{AlterTableCompactionPostEvent, AlterTableCompactionPreEvent, AlterTableCompactionPreStatusUpdateEvent, OperationContext, OperationListenerBus}
+import org.apache.carbondata.events._
 import org.apache.carbondata.processing.loading.events.LoadEvents.LoadMetadataEvent
 import org.apache.carbondata.processing.loading.model.{CarbonDataLoadSchema, CarbonLoadModel}
 import org.apache.carbondata.processing.merger.{CarbonDataMergerUtil, CompactionType}
-import org.apache.carbondata.spark.exception.ConcurrentOperationException
+import org.apache.carbondata.spark.exception.{ConcurrentOperationException, MalformedCarbonCommandException}
 import org.apache.carbondata.spark.rdd.CarbonDataRDDFactory
 import org.apache.carbondata.spark.util.CommonUtil
 import org.apache.carbondata.streaming.StreamHandoffRDD
@@ -90,52 +91,74 @@ case class CarbonAlterTableCompactionCommand(
       LogServiceFactory.getLogService(this.getClass.getName)
     val tableName = alterTableModel.tableName.toLowerCase
     val databaseName = alterTableModel.dbName.getOrElse(sparkSession.catalog.currentDatabase)
-    val isLoadInProgress = SegmentStatusManager.checkIfAnyLoadInProgressForTable(table)
-    if (isLoadInProgress) {
-      val message = "Cannot run data loading and compaction on same table concurrently. " +
-                    "Please wait for load to finish"
-      LOGGER.error(message)
-      throw new ConcurrentOperationException(message)
-    }
-    val carbonLoadModel = new CarbonLoadModel()
-    carbonLoadModel.setTableName(table.getTableName)
-    val dataLoadSchema = new CarbonDataLoadSchema(table)
-    // Need to fill dimension relation
-    carbonLoadModel.setCarbonDataLoadSchema(dataLoadSchema)
-    carbonLoadModel.setTableName(table.getTableName)
-    carbonLoadModel.setDatabaseName(table.getDatabaseName)
-    carbonLoadModel.setTablePath(table.getTablePath)
-
-    var storeLocation = CarbonProperties.getInstance.getProperty(
-      CarbonCommonConstants.STORE_LOCATION_TEMP_PATH,
-      System.getProperty("java.io.tmpdir"))
-    storeLocation = storeLocation + "/carbonstore/" + System.nanoTime()
-    // trigger event for compaction
-    val alterTableCompactionPreEvent: AlterTableCompactionPreEvent =
-      AlterTableCompactionPreEvent(sparkSession, table, null, null)
-    OperationListenerBus.getInstance.fireEvent(alterTableCompactionPreEvent, operationContext)
+    operationContext.setProperty("compactionException", "true")
+    var compactionType: CompactionType = null
+    var compactionException = "true"
     try {
-      alterTableForCompaction(
-        sparkSession.sqlContext,
-        alterTableModel,
-        carbonLoadModel,
-        storeLocation,
-        operationContext)
+      compactionType = CompactionType.valueOf(alterTableModel.compactionType.toUpperCase)
     } catch {
-      case e: Exception =>
-        if (null != e.getMessage) {
-          CarbonException.analysisException(
-            s"Compaction failed. Please check logs for more info. ${ e.getMessage }")
-        } else {
-          CarbonException.analysisException(
-            "Exception in compaction. Please check logs for more info.")
-        }
+      case _: Exception =>
+        val alterTableCompactionExceptionEvent: AlterTableCompactionExceptionEvent =
+          AlterTableCompactionExceptionEvent(sparkSession, table, alterTableModel)
+        OperationListenerBus.getInstance
+          .fireEvent(alterTableCompactionExceptionEvent, operationContext)
+        compactionException = operationContext.getProperty("compactionException").toString
+    }
+
+    if (compactionException.equalsIgnoreCase("true") && null == compactionType) {
+      throw new MalformedCarbonCommandException(
+        "Unsupported alter operation on carbon table")
+    } else if (compactionException.equalsIgnoreCase("false")) {
+      Seq.empty
+    } else {
+      val isLoadInProgress = SegmentStatusManager.checkIfAnyLoadInProgressForTable(table)
+      if (isLoadInProgress) {
+        val message = "Cannot run data loading and compaction on same table concurrently. " +
+                      "Please wait for load to finish"
+        LOGGER.error(message)
+        throw new ConcurrentOperationException(message)
+      }
+
+      val carbonLoadModel = new CarbonLoadModel()
+      carbonLoadModel.setTableName(table.getTableName)
+      val dataLoadSchema = new CarbonDataLoadSchema(table)
+      // Need to fill dimension relation
+      carbonLoadModel.setCarbonDataLoadSchema(dataLoadSchema)
+      carbonLoadModel.setTableName(table.getTableName)
+      carbonLoadModel.setDatabaseName(table.getDatabaseName)
+      carbonLoadModel.setTablePath(table.getTablePath)
+
+      var storeLocation = CarbonProperties.getInstance.getProperty(
+        CarbonCommonConstants.STORE_LOCATION_TEMP_PATH,
+        System.getProperty("java.io.tmpdir"))
+      storeLocation = storeLocation + "/carbonstore/" + System.nanoTime()
+      // trigger event for compaction
+      val alterTableCompactionPreEvent: AlterTableCompactionPreEvent =
+        AlterTableCompactionPreEvent(sparkSession, table, null, null)
+      OperationListenerBus.getInstance.fireEvent(alterTableCompactionPreEvent, operationContext)
+      try {
+        alterTableForCompaction(
+          sparkSession.sqlContext,
+          alterTableModel,
+          carbonLoadModel,
+          storeLocation,
+          operationContext)
+      } catch {
+        case e: Exception =>
+          if (null != e.getMessage) {
+            CarbonException.analysisException(
+              s"Compaction failed. Please check logs for more info. ${ e.getMessage }")
+          } else {
+            CarbonException.analysisException(
+              "Exception in compaction. Please check logs for more info.")
+          }
+      }
+      // trigger event for compaction
+      val alterTableCompactionPostEvent: AlterTableCompactionPostEvent =
+        AlterTableCompactionPostEvent(sparkSession, table, null, null)
+      OperationListenerBus.getInstance.fireEvent(alterTableCompactionPostEvent, operationContext)
+      Seq.empty
     }
-    // trigger event for compaction
-    val alterTableCompactionPostEvent: AlterTableCompactionPostEvent =
-      AlterTableCompactionPostEvent(sparkSession, table, null, null)
-    OperationListenerBus.getInstance.fireEvent(alterTableCompactionPostEvent, operationContext)
-    Seq.empty
   }
 
   private def alterTableForCompaction(sqlContext: SQLContext,
@@ -225,50 +248,14 @@ case class CarbonAlterTableCompactionCommand(
         LOGGER.info("Acquired the compaction lock for table" +
                     s" ${ carbonLoadModel.getDatabaseName }.${ carbonLoadModel.getTableName }")
         try {
-          if (compactionType == CompactionType.SEGMENT_INDEX) {
-            // Just launch job to merge index and return
-            CommonUtil.mergeIndexFiles(
-              sqlContext.sparkContext,
-              CarbonDataMergerUtil.getValidSegmentList(
-                carbonTable.getAbsoluteTableIdentifier).asScala,
-              carbonLoadModel.getTablePath,
-              carbonTable,
-              mergeIndexProperty = true,
-              readFileFooterFromCarbonDataFile = true)
-
-            val carbonMergerMapping = CarbonMergerMapping(carbonTable.getTablePath,
-              carbonTable.getMetaDataFilepath,
-              "",
-              carbonTable.getDatabaseName,
-              carbonTable.getTableName,
-              Array(),
-              carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableId,
-              compactionType,
-              maxSegmentColCardinality = null,
-              maxSegmentColumnSchemaList = null,
-              compactionModel.currentPartitions,
-              null)
-
-            // trigger event for compaction
-            val alterTableCompactionPreStatusUpdateEvent: AlterTableCompactionPreStatusUpdateEvent =
-              AlterTableCompactionPreStatusUpdateEvent(sqlContext.sparkSession,
-                carbonTable,
-                carbonMergerMapping,
-                carbonLoadModel,
-                "")
-            OperationListenerBus.getInstance
-              .fireEvent(alterTableCompactionPreStatusUpdateEvent, operationContext)
-          lock.unlock()
-          } else {
-            CarbonDataRDDFactory.startCompactionThreads(
-              sqlContext,
-              carbonLoadModel,
-              storeLocation,
-              compactionModel,
-              lock,
-              operationContext
-            )
-          }
+          CarbonDataRDDFactory.startCompactionThreads(
+            sqlContext,
+            carbonLoadModel,
+            storeLocation,
+            compactionModel,
+            lock,
+            operationContext
+          )
         } catch {
           case e: Exception =>
             LOGGER.error(s"Exception in start compaction thread. ${ e.getMessage }")
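
The rewritten processData above is easiest to read as a handshake over the operation
context: when CompactionType.valueOf rejects the requested type, the command fires the
exception event with "compactionException" preset to "true" and treats the request as
unsupported only if no listener flipped the flag to "false". A self-contained model of that
handshake (illustrative, not the OperationListenerBus API):

    import scala.collection.mutable

    class MiniContext { val props: mutable.Map[String, String] = mutable.Map() }

    // Returns true when some listener claimed the non-built-in compaction type.
    def listenerHandledCompaction(ctx: MiniContext, listeners: Seq[MiniContext => Unit]): Boolean = {
      ctx.props("compactionException") = "true"    // assume "unsupported" until a listener objects
      listeners.foreach(_(ctx))                    // stand-in for firing AlterTableCompactionExceptionEvent
      ctx.props("compactionException") == "false"  // a listener set "false" => it will handle the request
    }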

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala
index b174b94..83831e3 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala
@@ -100,24 +100,7 @@ class DDLStrategy(sparkSession: SparkSession) extends SparkStrategy {
           .tableExists(TableIdentifier(altertablemodel.tableName,
             altertablemodel.dbName))(sparkSession)
         if (isCarbonTable) {
-          var compactionType: CompactionType = null
-          try {
-            compactionType = CompactionType.valueOf(altertablemodel.compactionType.toUpperCase)
-          } catch {
-            case _: Exception =>
-              throw new MalformedCarbonCommandException(
-                "Unsupported alter operation on carbon table")
-          }
-          if (CompactionType.MINOR == compactionType ||
-              CompactionType.MAJOR == compactionType ||
-              CompactionType.SEGMENT_INDEX == compactionType ||
-              CompactionType.STREAMING == compactionType ||
-              CompactionType.CLOSE_STREAMING == compactionType) {
             ExecutedCommandExec(alterTable) :: Nil
-          } else {
-            throw new MalformedCarbonCommandException(
-              "Unsupported alter operation on carbon table")
-          }
         } else {
           throw new MalformedCarbonCommandException(
             "Operation not allowed : " + altertablemodel.alterSql)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
----------------------------------------------------------------------
diff --git a/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala b/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
index 6265d0d..48733dc 100644
--- a/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
+++ b/integration/spark2/src/test/scala/org/apache/spark/sql/CarbonGetTableDetailComandTestCase.scala
@@ -42,10 +42,10 @@ class CarbonGetTableDetailCommandTestCase extends QueryTest with BeforeAndAfterA
 
     assertResult(2)(result.length)
     assertResult("table_info1")(result(0).getString(0))
-    // 2143 is the size of carbon table
-    assertResult(2143)(result(0).getLong(1))
+    // 2096 is the size of carbon table
+    assertResult(2096)(result(0).getLong(1))
     assertResult("table_info2")(result(1).getString(0))
-    assertResult(2143)(result(1).getLong(1))
+    assertResult(2096)(result(1).getLong(1))
   }
 
   override def afterAll: Unit = {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java b/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
index 190c72c..a3fa292 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/loading/events/LoadEvents.java
@@ -181,4 +181,16 @@ public class LoadEvents {
       return carbonLoadModel;
     }
   }
+
+  public static class LoadTableMergePartitionEvent extends Event {
+    private String segmentPath;
+
+    public LoadTableMergePartitionEvent(String segmentPath) {
+      this.segmentPath = segmentPath;
+    }
+
+    public String getSegmentPath() {
+      return segmentPath;
+    }
+  }
 }
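
The new LoadTableMergePartitionEvent carries only the segment path, as the constructor and
getter above show. A listener that merges index files would consume it roughly like this
(Scala sketch; the segment path is made up and listener registration through the
OperationListenerBus is omitted):

    import org.apache.carbondata.processing.loading.events.LoadEvents.LoadTableMergePartitionEvent

    val event = new LoadTableMergePartitionEvent("/store/db/tbl/Fact/Part0/Segment_0")  // illustrative path
    println(s"merge carbon index files under: ${event.getSegmentPath}")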

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/processing/src/main/java/org/apache/carbondata/processing/merger/CompactionType.java
----------------------------------------------------------------------
diff --git a/processing/src/main/java/org/apache/carbondata/processing/merger/CompactionType.java b/processing/src/main/java/org/apache/carbondata/processing/merger/CompactionType.java
index 39f56a2..9ed87fc 100644
--- a/processing/src/main/java/org/apache/carbondata/processing/merger/CompactionType.java
+++ b/processing/src/main/java/org/apache/carbondata/processing/merger/CompactionType.java
@@ -27,7 +27,6 @@ public enum CompactionType {
     MAJOR,
     IUD_UPDDEL_DELTA,
     IUD_DELETE_DELTA,
-    SEGMENT_INDEX,
     STREAMING,
     CLOSE_STREAMING,
     NONE