Posted to issues@spark.apache.org by "Cheng Lian (JIRA)" <ji...@apache.org> on 2014/11/18 16:50:34 UTC
[jira] [Commented] (SPARK-4258) NPE with new Parquet Filters
[ https://issues.apache.org/jira/browse/SPARK-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14216329#comment-14216329 ]
Cheng Lian commented on SPARK-4258:
-----------------------------------
Reproduced this issue with the following test case added in {{ParquetQuerySuite}}:
{code}
test("empty row group filter pushdown") {
  val oldConf = parquetFilterPushDown
  val location = Utils.createTempDir()
  val path = s"$location/empty.parquet"
  setConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED, "true")
  try {
    sparkContext.makeRDD((1 to 3).map(i => TestRDDEntry(i, null))).saveAsParquetFile(path)
    parquetFile(path).registerTempTable("empty_pq")
    val query = sql("""SELECT * FROM empty_pq WHERE value = "foo"""")
    query.collect()
  } finally {
    setConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED, oldConf.toString)
    Utils.deleteRecursively(location)
  }
}
{code}
This is a bug in Parquet: for a string column, if all values in a single column chunk are null, then the {{min}} and {{max}} values in the column chunk statistics are null as well. However, the code that checks these statistics for column chunk pruning is missing a null check, which causes this exception. The corresponding code can be found [here|https://github.com/apache/incubator-parquet-mr/blob/251a495d2a72de7e892ade7f64980f51f2fcc0dd/parquet-hadoop/src/main/java/parquet/filter2/statisticslevel/StatisticsFilter.java#L97-L100].
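The failure pattern can be sketched with a toy model outside Parquet (the {{Stats}} class and method names below are illustrative stand-ins, not Parquet's actual API; the fix shown is one plausible guard, not necessarily the exact patch parquet-mr will take):
{code:java}
// Minimal sketch of the StatisticsFilter failure mode: a predicate value is
// compared against column-chunk statistics whose min/max are null because
// every value in the chunk is null.
public class StatisticsNullCheck {
    // Hypothetical stand-in for a column chunk's string statistics.
    static class Stats {
        final String min; // null when all values in the chunk are null
        final String max;
        Stats(String min, String max) { this.min = min; this.max = max; }
        boolean hasNonNullValue() { return min != null && max != null; }
    }

    // Buggy shape: String.compareTo(null) throws NullPointerException,
    // matching the stack trace in this issue.
    static boolean canDropBuggy(Stats stats, String value) {
        return value.compareTo(stats.min) < 0 || value.compareTo(stats.max) > 0;
    }

    // Guarded shape: when the statistics carry no non-null values we cannot
    // prove anything about the chunk, so keep the row group (return false).
    static boolean canDropFixed(Stats stats, String value) {
        if (!stats.hasNonNullValue()) {
            return false;
        }
        return value.compareTo(stats.min) < 0 || value.compareTo(stats.max) > 0;
    }

    public static void main(String[] args) {
        Stats allNull = new Stats(null, null);
        try {
            canDropBuggy(allNull, "foo");
        } catch (NullPointerException e) {
            System.out.println("NPE, as in SPARK-4258");
        }
        System.out.println(canDropFixed(allNull, "foo"));          // false: row group kept
        System.out.println(canDropFixed(new Stats("a", "c"), "foo")); // true: "foo" > max "c"
    }
}
{code}
Note that returning {{false}} (do not drop) is the conservative choice: pruning a row group is only safe when the statistics positively rule out matches.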
> NPE with new Parquet Filters
> ----------------------------
>
> Key: SPARK-4258
> URL: https://issues.apache.org/jira/browse/SPARK-4258
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: Michael Armbrust
> Assignee: Cheng Lian
> Priority: Blocker
>
> {code}
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 21.0 failed 4 times, most recent failure: Lost task 0.3 in stage 21.0 (TID 160, ip-10-0-247-144.us-west-2.compute.internal): java.lang.NullPointerException:
> parquet.io.api.Binary$ByteArrayBackedBinary.compareTo(Binary.java:206)
> parquet.io.api.Binary$ByteArrayBackedBinary.compareTo(Binary.java:162)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:100)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:47)
> parquet.filter2.predicate.Operators$Eq.accept(Operators.java:162)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:210)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:47)
> parquet.filter2.predicate.Operators$Or.accept(Operators.java:302)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:201)
> parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:47)
> parquet.filter2.predicate.Operators$And.accept(Operators.java:290)
> parquet.filter2.statisticslevel.StatisticsFilter.canDrop(StatisticsFilter.java:52)
> parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:46)
> parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:22)
> parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:108)
> parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:28)
> parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:158)
> parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:138)
> {code}
> This occurs when reading Parquet data encoded with an older version of the library, for TPC-DS query 34. Will work on coming up with a smaller reproduction.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)