Posted to issues@spark.apache.org by "Marco Gaido (JIRA)" <ji...@apache.org> on 2019/01/29 11:14:00 UTC

[jira] [Comment Edited] (SPARK-26767) Filter on a dropDuplicates dataframe gives inconsistent results

    [ https://issues.apache.org/jira/browse/SPARK-26767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16754882#comment-16754882 ] 

Marco Gaido edited comment on SPARK-26767 at 1/29/19 11:13 AM:
---------------------------------------------------------------

IIRC a similar JIRA was already reported: SPARK-25420. The problem may be the same; please check the comments there.


was (Author: mgaido):
IIRC a similar JIRA was already reported. Could you please try a newer version (ideally the current branch-2.3)? This may have been fixed.

> Filter on a dropDuplicates dataframe gives inconsistent results
> ---------------------------------------------------------------
>
>                 Key: SPARK-26767
>                 URL: https://issues.apache.org/jira/browse/SPARK-26767
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>         Environment: To reproduce the problem:
> (1) create a CSV file with records holding the same values for a subset of columns (e.g. colA, colB, colC).
> (2) read the CSV file as a Spark dataframe, then dedup on that subset of columns with dropDuplicates (i.e. dropDuplicates(["colA", "colB", "colC"])).
> (3) select from the resulting dataframe with a where clause, e.g. df.where("colA = 'A' and colB = 'B' and colG = 'G' and colH = 'H'").show(100, False).
>  
> => When (3) is rerun, it returns a different number of rows each time.
>            Reporter: Jeffrey
>            Priority: Major
>
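The behavior described is consistent with dropDuplicates keeping an arbitrary row within each group of duplicates: a later filter on a column *outside* the dedup subset (colG, colH) can then match a different surviving row on each run. A minimal plain-Python sketch of that effect, using the hypothetical column names from the report, with random.shuffle standing in for Spark's nondeterministic partition/task ordering:

```python
import random

# Hypothetical rows: same dedup key (colA, colB, colC), different colG.
rows = [
    {"colA": "A", "colB": "B", "colC": "C", "colG": "G"},
    {"colA": "A", "colB": "B", "colC": "C", "colG": "X"},
]

def drop_duplicates(rows, subset):
    """Keep one arbitrary row per key, mimicking dropDuplicates:
    which row survives depends on the order rows are encountered."""
    shuffled = rows[:]
    random.shuffle(shuffled)  # models nondeterministic task ordering
    seen = {}
    for row in shuffled:
        key = tuple(row[c] for c in subset)
        seen.setdefault(key, row)  # first row seen for the key wins
    return list(seen.values())

deduped = drop_duplicates(rows, ["colA", "colB", "colC"])
# Filtering on colG (not part of the dedup subset) matches 0 or 1 rows,
# depending on which duplicate survived this particular run.
matches = [r for r in deduped if r["colG"] == "G"]
```

Here len(deduped) is always 1, but len(matches) varies between runs, which is the inconsistent row count the reporter sees.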



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
