Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2023/01/04 23:53:00 UTC

[jira] [Resolved] (SPARK-41833) DataFrame.collect() output parity with pyspark

     [ https://issues.apache.org/jira/browse/SPARK-41833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-41833.
----------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

Issue resolved by pull request 39386
[https://github.com/apache/spark/pull/39386]

> DataFrame.collect() output parity with pyspark
> ----------------------------------------------
>
>                 Key: SPARK-41833
>                 URL: https://issues.apache.org/jira/browse/SPARK-41833
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect
>    Affects Versions: 3.4.0
>            Reporter: Sandeep Singh
>            Assignee: Ruifeng Zheng
>            Priority: Major
>             Fix For: 3.4.0
>
>
> {code:java}
> **********************************************************************          
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1117, in pyspark.sql.connect.functions.array
> Failed example:
>     df.select(array('age', 'age').alias("arr")).collect()
> Expected:
>     [Row(arr=[2, 2]), Row(arr=[5, 5])]
> Got:
>     [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1119, in pyspark.sql.connect.functions.array
> Failed example:
>     df.select(array([df.age, df.age]).alias("arr")).collect()
> Expected:
>     [Row(arr=[2, 2]), Row(arr=[5, 5])]
> Got:
>     [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1124, in pyspark.sql.connect.functions.array_distinct
> Failed example:
>     df.select(array_distinct(df.data)).collect()
> Expected:
>     [Row(array_distinct(data)=[1, 2, 3]), Row(array_distinct(data)=[4, 5])]
> Got:
>     [Row(array_distinct(data)=array([1, 2, 3])), Row(array_distinct(data)=array([4, 5]))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1135, in pyspark.sql.connect.functions.array_except
> Failed example:
>     df.select(array_except(df.c1, df.c2)).collect()
> Expected:
>     [Row(array_except(c1, c2)=['b'])]
> Got:
>     [Row(array_except(c1, c2)=array(['b'], dtype=object))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1142, in pyspark.sql.connect.functions.array_intersect
> Failed example:
>     df.select(array_intersect(df.c1, df.c2)).collect()
> Expected:
>     [Row(array_intersect(c1, c2)=['a', 'c'])]
> Got:
>     [Row(array_intersect(c1, c2)=array(['a', 'c'], dtype=object))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1180, in pyspark.sql.connect.functions.array_remove
> Failed example:
>     df.select(array_remove(df.data, 1)).collect()
> Expected:
>     [Row(array_remove(data, 1)=[2, 3]), Row(array_remove(data, 1)=[])]
> Got:
>     [Row(array_remove(data, 1)=array([2, 3])), Row(array_remove(data, 1)=array([], dtype=int64))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1187, in pyspark.sql.connect.functions.array_repeat
> Failed example:
>     df.select(array_repeat(df.data, 3).alias('r')).collect()
> Expected:
>     [Row(r=['ab', 'ab', 'ab'])]
> Got:
>     [Row(r=array(['ab', 'ab', 'ab'], dtype=object))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1204, in pyspark.sql.connect.functions.array_sort
> Failed example:
>     df.select(array_sort(df.data).alias('r')).collect()
> Expected:
>     [Row(r=[1, 2, 3, None]), Row(r=[1]), Row(r=[])]
> Got:
>     [Row(r=array([ 1.,  2.,  3., nan])), Row(r=array([1])), Row(r=array([], dtype=int64))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1207, in pyspark.sql.connect.functions.array_sort
> Failed example:
>     df.select(array_sort(
>         "data",
>         lambda x, y: when(x.isNull() | y.isNull(), lit(0)).otherwise(length(y) - length(x))
>     ).alias("r")).collect()
> Expected:
>     [Row(r=['foobar', 'foo', None, 'bar']), Row(r=['foo']), Row(r=[])]
> Got:
>     [Row(r=array(['foobar', 'foo', None, 'bar'], dtype=object)), Row(r=array(['foo'], dtype=object)), Row(r=array([], dtype=object))]
> **********************************************************************
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1209, in pyspark.sql.connect.functions.array_union
> Failed example:
>     df.select(array_union(df.c1, df.c2)).collect()
> Expected:
>     [Row(array_union(c1, c2)=['b', 'a', 'c', 'd', 'f'])]
> Got:
>     [Row(array_union(c1, c2)=array(['b', 'a', 'c', 'd', 'f'], dtype=object))]{code}
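Every failure above is the same mismatch: Spark Connect's `collect()` returns numpy `ndarray` values (and numpy scalars) where vanilla PySpark returns plain Python lists. As an illustration only — this is a hedged sketch of the kind of normalization involved, not the actual fix in pull request 39386 — numpy values can be converted back to native Python objects like this (`normalize` is a hypothetical helper name):

```python
import numpy as np


def normalize(value):
    """Convert numpy containers/scalars to native Python objects.

    Hypothetical helper for illustration; not Spark's implementation.
    """
    if isinstance(value, np.ndarray):
        # ndarray.tolist() recursively yields native Python scalars,
        # so array([2, 2]) becomes [2, 2] and
        # array(['b'], dtype=object) becomes ['b'].
        return value.tolist()
    if isinstance(value, np.generic):
        # numpy scalar (e.g. np.int64) -> native Python scalar
        return value.item()
    return value


print(normalize(np.array([2, 2])))                  # [2, 2]
print(normalize(np.array(["b"], dtype=object)))     # ['b']
print(normalize(np.int64(5)))                       # 5
```

Note that this sketch does not cover every case in the doctests — for instance, `array([1., 2., 3., nan]).tolist()` yields `nan` rather than the `None` the expected output shows, so a faithful fix also needs null handling on top of the array-to-list conversion.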



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org