Posted to issues@spark.apache.org by "Nicholas Hakobian (JIRA)" <ji...@apache.org> on 2017/11/15 21:31:00 UTC

[jira] [Created] (SPARK-22532) Spark SQL function 'drop_duplicates' throws error when passing in a column that is an element of a struct

Nicholas Hakobian created SPARK-22532:
-----------------------------------------

             Summary: Spark SQL function 'drop_duplicates' throws error when passing in a column that is an element of a struct
                 Key: SPARK-22532
                 URL: https://issues.apache.org/jira/browse/SPARK-22532
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.0, 2.1.0
         Environment: Attempted on the following versions:
* Spark 2.1 (CDH 5.9.2 w/ SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904)
* Spark 2.1 (installed via homebrew)
* Spark 2.2 (installed via homebrew)

Also tried on Spark 1.6 that comes with CDH 5.9.2 and it works correctly; this appears to be a regression.
            Reporter: Nicholas Hakobian


When attempting to use drop_duplicates with a subset of columns that exist within a struct, the following error is raised:

{noformat}
AnalysisException: u'Cannot resolve column name "header.eventId.lo" among (header);'
{noformat}

A complete example (using the old sqlContext syntax so the same code can also be run with Spark 1.x):
{noformat}
from pyspark.sql import Row
from pyspark.sql.functions import *

data = [
    Row(header=Row(eventId=Row(lo=0, hi=1))),
    Row(header=Row(eventId=Row(lo=0, hi=1))),
    Row(header=Row(eventId=Row(lo=1, hi=2))),
    Row(header=Row(eventId=Row(lo=2, hi=3))),
]

df = sqlContext.createDataFrame(data)

df.drop_duplicates(['header.eventId.lo', 'header.eventId.hi']).show()
{noformat}

produces the following stack trace:

{noformat}
---------------------------------------------------------------------------
AnalysisException                         Traceback (most recent call last)
<ipython-input-1-d44c25c1919c> in <module>()
     11 df = sqlContext.createDataFrame(data)
     12
---> 13 df.drop_duplicates(['header.eventId.lo', 'header.eventId.hi']).show()

/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/sql/dataframe.py in dropDuplicates(self, subset)
   1243             jdf = self._jdf.dropDuplicates()
   1244         else:
-> 1245             jdf = self._jdf.dropDuplicates(self._jseq(subset))
   1246         return DataFrame(jdf, self.sql_ctx)
   1247

/usr/local/Cellar/apache-spark/2.2.0/libexec/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134
   1135         for temp_arg in temp_args:

/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: u'Cannot resolve column name "header.eventId.lo" among (header);'
{noformat}

This works _correctly_ in Spark 1.6, but fails in 2.1 (via homebrew and CDH) and 2.2 (via homebrew).
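
For comparison, the same nested field should resolve fine in ordinary column expressions on the 2.x versions listed above (a minimal sketch using the df defined above), which suggests the failure is specific to the column-name resolution inside dropDuplicates:
{noformat}
from pyspark.sql.functions import col

# Both of these resolve 'header.eventId.lo' as expected on 2.x:
df.select(col('header.eventId.lo')).show()
df.filter(col('header.eventId.lo') == 0).show()
{noformat}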

An inconvenient but working workaround is the following:
{noformat}
(
    df
    .withColumn('lo', col('header.eventId.lo'))
    .withColumn('hi', col('header.eventId.hi'))
    .drop_duplicates(['lo', 'hi'])
    .drop('lo')
    .drop('hi')
    .show()
)
{noformat}
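
The same idea can be written a bit more compactly by appending the aliased helper columns in a single select (a sketch along the same lines, only tried mentally against 2.x):
{noformat}
(
    df
    .select('*',
            col('header.eventId.lo').alias('lo'),
            col('header.eventId.hi').alias('hi'))
    .drop_duplicates(['lo', 'hi'])
    .drop('lo')
    .drop('hi')
    .show()
)
{noformat}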



