Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2019/02/13 05:12:00 UTC

[jira] [Comment Edited] (SPARK-26865) ORC filter pushdown should be case insensitive by default

    [ https://issues.apache.org/jira/browse/SPARK-26865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16766781#comment-16766781 ] 

Dongjoon Hyun edited comment on SPARK-26865 at 2/13/19 5:11 AM:
----------------------------------------------------------------

Hi, [~cloud_fan]. The following are the results from Apache Spark 2.4.0 and 2.3.2. Did you use any additional configuration?

{code}
scala> spark.range(10).write.orc("/tmp/o1")

scala> spark.read.schema("ID long").orc("/tmp/o1").filter("id > 5").show
+---+
| ID|
+---+
|  8|
|  7|
|  6|
|  9|
+---+

scala> sc.version
res2: String = 2.4.0
{code}

{code}
scala> spark.range(10).write.mode("overwrite").orc("/tmp/o1")

scala> spark.read.schema("ID long").orc("/tmp/o1").filter("id > 5").show
+---+
| ID|
+---+
|  6|
|  9|
|  8|
|  7|
+---+

scala> sc.version
res3: String = 2.3.2
{code}
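One plausible source of divergence is the {{spark.sql.caseSensitive}} setting, which defaults to {{false}}; the pushdown lookup in the reported stack trace only matters under case-insensitive analysis. It can be checked in the same shell with the standard {{spark}} session:

{code}
scala> spark.conf.get("spark.sql.caseSensitive")
{code}

If this returns {{true}} in the environment where the exception was reproduced, that would explain the different behavior.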



> ORC filter pushdown should be case insensitive by default
> ---------------------------------------------------------
>
>                 Key: SPARK-26865
>                 URL: https://issues.apache.org/jira/browse/SPARK-26865
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Wenchen Fan
>            Priority: Major
>
> Steps to reproduce:
> {code}
> spark.range(10).write.orc("/tmp/o1")
> spark.read.schema("ID long").orc("/tmp/o1").filter("id > 5").show
> java.util.NoSuchElementException: key not found: id
>   at scala.collection.immutable.Map$Map1.apply(Map.scala:114)
>   at org.apache.spark.sql.execution.datasources.orc.OrcFilters$.createBuilder(OrcFilters.scala:263)
>   at org.apache.spark.sql.execution.datasources.orc.OrcFilters$.buildSearchArgument(OrcFilters.scala:153)
>   at org.apache.spark.sql.execution.datasources.orc.OrcFilters$.$anonfun$convertibleFilters$1(OrcFilters.scala:99)
>   at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:244)
>   at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
>   at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:39)
>   at scala.collection.TraversableLike.flatMap(TraversableLike.scala:244)
>   at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:241)
>   at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
>   at org.apache.spark.sql.execution.datasources.orc.OrcFilters$.convertibleFilters(OrcFilters.scala:98)
>   at org.apache.spark.sql.execution.datasources.orc.OrcFilters$.createFilter(OrcFilters.scala:87)
>   at org.apache.spark.sql.execution.datasources.v2.orc.OrcScanBuilder.pushFilters(OrcScanBuilder.scala:50)
> {code}
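The failing frame is a plain case-sensitive {{Map}} lookup ({{Map$Map1.apply}} with "key not found: id"), so a filter attribute spelled one way misses a schema field spelled another. A minimal standalone Scala sketch of the case-insensitive resolution the issue title asks for (illustrative only, not Spark's actual OrcFilters code; all names here are invented for the example):

```scala
// Sketch: resolve a pushed-down attribute name against schema field names
// case-insensitively, matching Spark's default spark.sql.caseSensitive=false.
object CaseInsensitiveResolution {
  // Field names as they appear in the (hypothetical) schema.
  val fields = Seq("id", "value")

  // Case-sensitive map: this is the kind of lookup that throws for "ID"
  // vs "id" in the stack trace above.
  val strict: Map[String, Int] = fields.zipWithIndex.toMap

  // Case-insensitive map: key every field by its lower-cased name.
  val relaxed: Map[String, Int] =
    fields.zipWithIndex.map { case (f, i) => f.toLowerCase -> i }.toMap

  // Lookup that tolerates any casing of the attribute name.
  def resolve(attr: String): Option[Int] = relaxed.get(attr.toLowerCase)
}
```

With this scheme, `strict.get("ID")` is empty while `resolve("ID")` finds the `id` field, which is the behavior the default case-insensitive analysis would expect from the pushdown path.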



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
