Posted to user@spark.apache.org by "Yu, Yucai" <yu...@intel.com> on 2016/10/21 06:49:33 UTC

Can we disable parquet logs in Spark?

Hi,

I see lots of Parquet logs in the container logs (YARN mode), like below:

stdout:
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 8,448B for [ss_promo_sk] INT32: 5,996 values, 8,513B raw, 8,409B comp, 1 pages, encodings: [PLAIN_DICTIONARY, BIT_PACKED, RLE], dic { 1,475 entries, 5,900B raw, 1,475B comp}
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 1,376B for [ss_ticket_number] INT32: 5,996 values, 1,730B raw, 1,340B comp, 1 pages, encodings: [PLAIN_DICTIONARY, BIT_PACKED, RLE], dic { 524 entries, 2,096B raw, 524B comp}
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 5,516B for [ss_quantity] INT32: 5,996 values, 5,567B raw, 5,479B comp, 1 pages, encodings: [PLAIN_DICTIONARY, BIT_PACKED, RLE], dic { 100 entries, 400B raw, 100B comp}
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 14,385B for [ss_wholesale_cost] INT32: 5,996 values, 23,931B raw, 14,346B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 15,043B for [ss_list_price] INT32: 5,996 values, 23,871B raw, 15,004B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 14,442B for [ss_sales_price] INT32: 5,996 values, 23,896B raw, 14,403B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 3,538B for [ss_ext_discount_amt] INT32: 5,996 values, 7,317B raw, 3,501B comp, 1 pages, encodings: [PLAIN_DICTIONARY, BIT_PACKED, RLE], dic { 1,139 entries, 4,556B raw, 1,139B comp}
Oct 21, 2016 2:27:30 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 18,052B for [ss_ext_sales_price] INT32: 5,996 values, 23,907B raw, 18,013B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
...

I tried the below in log4j.properties, but it does not work.
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
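
Perhaps these messages are emitted through java.util.logging rather than log4j — that would explain why the settings above are ignored. If so, maybe a separate JUL configuration file is needed; a sketch (unverified, file name is my own choice):

# conf/parquet-logging.properties
# Raise the Parquet logger to SEVERE so the INFO writer messages are dropped.
org.apache.parquet.level = SEVERE
# Older Parquet releases used the bare "parquet" logger name.
parquet.level = SEVERE

This would have to reach the executor JVMs, e.g. with something like -Djava.util.logging.config.file=parquet-logging.properties in spark.executor.extraJavaOptions, though I have not verified that path.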

Is there a way to disable them?

Thanks a lot!

Yucai

RE: Can we disable parquet logs in Spark?

Posted by "Yu, Yucai" <yu...@intel.com>.
I set "log4j.rootCategory=ERROR, console" and used "-file conf/log4j.properties" to suppress most of the logs, but the org.apache.parquet logs still exist.

Any way to disable them also?
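
One guess (not confirmed): the Parquet writer may log through java.util.logging rather than log4j, which would explain why log4j.properties has no effect on it. If so, raising the JUL level for the Parquet loggers programmatically, early in the application, might work — a sketch:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceParquetLogs {
    // Hold a strong reference: java.util.logging keeps loggers weakly,
    // so a configured logger with no live reference can be garbage-collected
    // and silently lose its level setting.
    private static final Logger PARQUET_LOGGER =
            Logger.getLogger("org.apache.parquet");

    public static void main(String[] args) {
        // SEVERE drops INFO-level records such as the
        // ColumnChunkPageWriteStore messages; child loggers
        // (e.g. org.apache.parquet.hadoop.*) inherit this effective
        // level unless they set their own.
        PARQUET_LOGGER.setLevel(Level.SEVERE);

        Logger child = Logger.getLogger(
                "org.apache.parquet.hadoop.ColumnChunkPageWriteStore");
        System.out.println(child.isLoggable(Level.INFO)); // prints "false"
    }
}
```

On YARN this would need to run inside each executor JVM, not just the driver, so it is only a sketch of the idea rather than a complete fix.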

Thanks,
Yucai

From: Yu, Yucai [mailto:yucai.yu@intel.com]
Sent: Friday, October 21, 2016 2:50 PM
To: user@spark.apache.org
Subject: Can we disable parquet logs in Spark?
