Posted to issues@drill.apache.org by "Neeraja (JIRA)" <ji...@apache.org> on 2014/12/08 06:50:12 UTC

[jira] [Created] (DRILL-1823) Parquet write log messages after closing JDBC connection

Neeraja created DRILL-1823:
------------------------------

             Summary: Parquet write log messages after closing JDBC connection
                 Key: DRILL-1823
                 URL: https://issues.apache.org/jira/browse/DRILL-1823
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Neeraja


When using CTAS to create Parquet files, a number of log messages usually show up after the JDBC connection is closed (i.e., after quitting sqlline), even though verbose logging is not enabled. Example below; a minimal JDBC reproduction sketch follows the transcript.
----

0: jdbc:drill:zk=local> create  table dfs.tmp.sampleparquet4 as (select trans_id, cast(`date` as date) transdate,cast(`time` as time) transtime, cast(amount as double) amount,`user_info`,`marketing_info`, `trans_info` from dfs.`/Users/nrentachintala/Downloads/sample.json` )
. . . . . . . . . . . > ;
+------------+---------------------------+
|  Fragment  | Number of records written |
+------------+---------------------------+
| 0_0        | 5                         |
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
+------------+---------------------------+
1 row selected (1.4 seconds)
0: jdbc:drill:zk=local> !quit
Closing: org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 77B for [trans_id] INT64: 5 values, 46B raw, 36B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [transdate] INT32: 5 values, 26B raw, 28B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 61B for [transtime] INT32: 5 values, 26B raw, 28B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 84B for [amount] DOUBLE: 5 values, 46B raw, 43B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 83B for [user_info, cust_id] INT64: 5 values, 47B raw, 42B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 69B for [user_info, device] BINARY: 5 values, 49B raw, 34B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 65B for [user_info, state] BINARY: 5 values, 37B raw, 36B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 73B for [marketing_info, camp_id] INT64: 5 values, 47B raw, 32B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 200B for [marketing_info, keywords] BINARY: 19 values, 173B raw, 162B comp, 1 pages, encodings: [PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 151B for [trans_info, prod_id] INT64: 20 values, 171B raw, 108B comp, 1 pages, encodings: [PLAIN, RLE]
Dec 7, 2014 9:47:14 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 74B for [trans_info, purch_flag] BINARY: 5 values, 51B raw, 40B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
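
For context, a minimal reproduction sketch of the scenario above, assuming the Drill JDBC driver (e.g. drill-jdbc-all) is on the classpath. The connection URL and the CTAS statement are taken from the sqlline transcript, the driver class name is assumed from the package shown in the "Closing:" line, and the JSON source path is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Drill1823Repro {
    public static void main(String[] args) throws Exception {
        // Explicit driver load; harmless if the driver already self-registers via JDBC 4.
        Class.forName("org.apache.drill.jdbc.Driver");

        // Same embedded-ZooKeeper URL as the sqlline session above.
        try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
             Statement stmt = conn.createStatement()) {
            // CTAS copied from the transcript; adjust the path to a local sample.json.
            stmt.execute(
                "create table dfs.tmp.sampleparquet4 as "
              + "(select trans_id, cast(`date` as date) transdate, "
              + "cast(`time` as time) transtime, cast(amount as double) amount, "
              + "`user_info`, `marketing_info`, `trans_info` "
              + "from dfs.`/path/to/sample.json`)");
        }
        // The connection closes here (try-with-resources); the parquet.hadoop
        // ColumnChunkPageWriteStore INFO lines were observed at or after this point.
        System.out.println("connection closed");
    }
}

The "Dec 7, 2014 ... INFO:" format of the trailing lines looks like java.util.logging output from the bundled parquet-mr library rather than Drill's SLF4J-based logging (consistent with the NOP-binder warning in the transcript), which would explain why they appear even with verbose logging disabled; that is an inference from the log format, not a confirmed root cause.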



