Posted to issues@carbondata.apache.org by "Ramakrishna S (JIRA)" <ji...@apache.org> on 2017/11/20 07:57:00 UTC

[jira] [Updated] (CARBONDATA-1777) Carbon1.3.0-Pre-AggregateTable - Pre-aggregate tables created in Spark-shell sessions are not used in the beeline session

     [ https://issues.apache.org/jira/browse/CARBONDATA-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ramakrishna S updated CARBONDATA-1777:
--------------------------------------
    Description: 
Steps:
Beeline:
1. Create a table and load data into it
Spark-shell:
2. Create a pre-aggregate table (datamap)
Beeline:
3. Run the aggregate query

*+Expected:+* Pre-aggregate table should be used in the aggregate query 
*+Actual:+* Pre-aggregate table is not used


1. (Beeline)
create table if not exists lineitem1(L_SHIPDATE string,L_SHIPMODE string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.5" into table lineitem1 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
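
Step 2 below is run in spark-shell through a "carbon" session handle. For reference, a minimal sketch of how such a session is typically obtained in CarbonData 1.3 - the master and app name here are illustrative assumptions; only the store path is taken from the logs further down, and it must match the one the beeline JDBCServer uses:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Illustrative values; only the store location comes from the logs below.
val carbon = SparkSession.builder()
  .master("yarn")
  .appName("preagg-repro")
  .getOrCreateCarbonSession("hdfs://hacluster/user/test2")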

2. (Spark-shell)

 carbon.sql("create datamap agr1_lineitem1 ON TABLE lineitem1 USING 'org.apache.carbondata.datamap.AggregateDataMapHandler' as select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 group by l_returnflag, l_linestatus").show();
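
A quick sanity check from the same shell session (not part of the original steps) is to list the tables and confirm the child table was registered:

// Should list both lineitem1 and the child table lineitem1_agr1_lineitem1
carbon.sql("show tables").show(false)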

3. (Beeline)
select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus;
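
Whether the rewrite actually used the pre-aggregate table can be checked from beeline with a plain Spark SQL EXPLAIN (a sketch, not part of the original report); if the datamap were applied, the scan in the physical plan would reference lineitem1_agr1_lineitem1 instead of lineitem1:

explain select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus;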

Actual:
0: jdbc:hive2://10.18.98.136:23040> show tables;
+-----------+---------------------------+--------------+--+
| database  |         tableName         | isTemporary  |
+-----------+---------------------------+--------------+--+
| test_db2  | lineitem1                 | false        |
| test_db2  | lineitem1_agr1_lineitem1  | false        |
+-----------+---------------------------+--------------+--+
2 rows selected (0.047 seconds)
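
Both tables are visible to beeline, so the schema itself has synced; the symptom points at the beeline session holding a stale view of the parent table's datamap metadata. One hedged thing to try from beeline - a standard Spark SQL command, not a confirmed fix for this bug - is to refresh the cached relation before re-running the query:

refresh table lineitem1;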

Logs:
2017-11-20 15:46:48,314 | INFO  | [pool-23-thread-53] | Running query 'select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus' with 7f3091a8-4d7b-40ac-840f-9db6f564c9cf | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,314 | INFO  | [pool-23-thread-53] | Parsing command: select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,353 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,353 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,354 | INFO  | [pool-23-thread-53] | 55: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:589)
2017-11-20 15:46:48,355 | INFO  | [pool-23-thread-53] | ObjectStore, initialize called | org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:289)
2017-11-20 15:46:48,360 | INFO  | [pool-23-thread-53] | Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing | org.datanucleus.util.Log4JLogger.info(Log4JLogger.java:77)
2017-11-20 15:46:48,362 | INFO  | [pool-23-thread-53] | Using direct SQL, underlying DB is MYSQL | org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:139)
2017-11-20 15:46:48,362 | INFO  | [pool-23-thread-53] | Initialized ObjectStore | org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:272)
2017-11-20 15:46:48,376 | INFO  | [pool-23-thread-53] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,399 | INFO  | [pool-23-thread-53] | Schema changes have been detected for table: `lineitem1` | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,399 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,400 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,413 | INFO  | [pool-23-thread-53] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,415 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,415 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,428 | INFO  | [pool-23-thread-53] | Parsing command: array<string> | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,431 | INFO  | [pool-23-thread-53] | 55: get_database: test_db2 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,431 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_database: test_db2	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,434 | INFO  | [pool-23-thread-53] | 55: get_database: test_db2 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,434 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_database: test_db2	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,437 | INFO  | [pool-23-thread-53] | 55: get_tables: db=test_db2 pat=* | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2017-11-20 15:46:48,437 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr	cmd=get_tables: db=test_db2 pat=*	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2017-11-20 15:46:48,522 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Starting to optimize plan | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2017-11-20 15:46:48,536 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Skip CarbonOptimizer | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2017-11-20 15:46:48,679 | INFO  | [pool-23-thread-53] | Code generated in 41.000919 ms | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,766 | INFO  | [pool-23-thread-53] | Code generated in 61.651832 ms | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2017-11-20 15:46:48,821 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Table block size not specified for test_db2_lineitem1. Therefore considering the default value 1024 MB | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2017-11-20 15:46:48,872 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Time taken to load blocklet datamap from file : hdfs://hacluster/user/test2/lineitem1/Fact/Part0/Segment_0/1_batchno0-0-1511163544085.carbonindexis 2 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2017-11-20 15:46:48,873 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Time taken to load blocklet datamap from file : hdfs://hacluster/user/test2/lineitem1/Fact/Part0/Segment_0/0_batchno0-0-1511163544085.carbonindexis 1 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2017-11-20 15:46:48,884 | INFO  | [pool-23-thread-53] | 
 Identified no.of.blocks: 2,
 no.of.tasks: 2,
 no.of.nodes: 0,
 parallelism: 2


  was:
Steps:
1. Create a table and load data into it
2. Run an update query on the table - this takes the table meta lock
3. In parallel, run the pre-aggregate table create step - this is not allowed while the table lock is held
4. Rerun the pre-aggregate table create step

*+Expected:+* Pre-aggregate table should be created
*+Actual:+* Pre-aggregate table creation fails

+Create, Load & Update+:
0: jdbc:hive2://10.18.98.136:23040> create table if not exists lineitem4(L_SHIPDATE string,L_SHIPMODE string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.266 seconds)
0: jdbc:hive2://10.18.98.136:23040> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.5" into table lineitem4 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (6.331 seconds)
0: jdbc:hive2://10.18.98.136:23040> update lineitem4 set (l_linestatus) = ('xx');

+Create Datamap:+
0: jdbc:hive2://10.18.98.136:23040> create datamap agr_lineitem4 ON TABLE lineitem4 USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem4  group by l_returnflag, l_linestatus;
Error: java.lang.RuntimeException: Acquire table lock failed after retry, please try after some time (state=,code=0)
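
The create failed while the update still held the table lock. If supported by the deployed build - these property names are an assumption taken from CarbonData configuration docs, not verified against this 1.3.0 build - the lock retry behaviour can be tuned in carbon.properties before retrying:

# Hypothetical tuning; verify these keys exist in the deployed version.
carbon.lock.retries=5
carbon.lock.retry.timeout.sec=10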
0: jdbc:hive2://10.18.98.136:23040> select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem4 group by l_returnflag, l_linestatus;
+---------------+---------------+------------------+---------------------+--------------------+--+
| l_returnflag  | l_linestatus  | sum(l_quantity)  |   avg(l_quantity)   | count(l_quantity)  |
+---------------+---------------+------------------+---------------------+--------------------+--+
| N             | xx            | 1.2863213E7      | 25.48745561614304   | 504688             |
| A             | xx            | 6318125.0        | 25.506342144783375  | 247708             |
| R             | xx            | 6321939.0        | 25.532459087898417  | 247604             |
+---------------+---------------+------------------+---------------------+--------------------+--+
3 rows selected (1.033 seconds)
0: jdbc:hive2://10.18.98.136:23040> create datamap agr_lineitem4 ON TABLE lineitem4 USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem4  group by l_returnflag, l_linestatus;
Error: java.lang.RuntimeException: Table [lineitem4_agr_lineitem4] already exists under database [test_db1] (state=,code=0)
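
Because the retry fails with "already exists", the first failed attempt evidently left the child table behind. A hedged cleanup before re-creating the datamap - assuming DROP DATAMAP is supported in this 1.3.0 build - would be:

drop datamap if exists agr_lineitem4 on table lineitem4;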



> Carbon1.3.0-Pre-AggregateTable - Pre-aggregate tables created in Spark-shell sessions are not used in the beeline session
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1777
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1777
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-load
>    Affects Versions: 1.3.0
>         Environment: Test - 3 node ant cluster
>            Reporter: Ramakrishna S
>            Assignee: Kunal Kapoor
>              Labels: DFX
>             Fix For: 1.3.0
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)