Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2016/09/28 23:26:21 UTC

[jira] [Resolved] (SPARK-17641) collect_set should ignore null values

     [ https://issues.apache.org/jira/browse/SPARK-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin resolved SPARK-17641.
---------------------------------
       Resolution: Fixed
         Assignee: Xiangrui Meng
    Fix Version/s: 2.1.0
                   2.0.1

> collect_set should ignore null values
> -------------------------------------
>
>                 Key: SPARK-17641
>                 URL: https://issues.apache.org/jira/browse/SPARK-17641
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>             Fix For: 2.0.1, 2.1.0
>
>
> `collect_set` throws the following exception when the input contains null values. It should ignore nulls to be consistent with other aggregate functions.
> {code}
> select collect_set(null) from (select 1) tmp;
> java.lang.IllegalArgumentException: Flat hash tables cannot contain null elements.
> 	at scala.collection.mutable.FlatHashTable$HashUtils$class.elemHashCode(FlatHashTable.scala:390)
> 	at scala.collection.mutable.HashSet.elemHashCode(HashSet.scala:41)
> 	at scala.collection.mutable.FlatHashTable$class.addEntry(FlatHashTable.scala:136)
> 	at scala.collection.mutable.HashSet.addEntry(HashSet.scala:41)
> 	at scala.collection.mutable.HashSet.$plus$eq(HashSet.scala:60)
> 	at scala.collection.mutable.HashSet.$plus$eq(HashSet.scala:41)
> 	at org.apache.spark.sql.catalyst.expressions.aggregate.Collect.update(collect.scala:64)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1$$anonfun$applyOrElse$1.apply(AggregationIterator.scala:170)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1$$anonfun$applyOrElse$1.apply(AggregationIterator.scala:170)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateProcessRow$1.apply(AggregationIterator.scala:186)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateProcessRow$1.apply(AggregationIterator.scala:180)
> 	at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.processCurrentSortedGroup(SortBasedAggregationIterator.scala:115)
> 	at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:150)
> 	at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:232)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:225)
> {code}
> cc: [~yhuai]
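
The fixed semantics can be sketched outside Spark. This is a minimal, illustrative Python model of a null-ignoring `collect_set` aggregation, not Spark's actual implementation (which lives in `org.apache.spark.sql.catalyst.expressions.aggregate.Collect`): null inputs are skipped rather than inserted into the backing hash set, so an all-null input produces an empty set instead of raising.

```python
def collect_set(values):
    """Sketch of collect_set after SPARK-17641: nulls (None) are ignored."""
    result = set()
    for v in values:
        if v is None:
            continue  # skip nulls, consistent with other aggregate functions
        result.add(v)
    return result

# An all-null input now yields an empty set instead of an exception.
print(collect_set([None]))                    # set()
print(sorted(collect_set([1, None, 2, 1])))   # [1, 2]
```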



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org