Posted to issues@spark.apache.org by "Punit Shah (Jira)" <ji...@apache.org> on 2020/09/15 09:44:00 UTC

[jira] [Created] (SPARK-32888) reading a parallelized rdd with two identical records results in a zero-count df when read via spark.read.csv

Punit Shah created SPARK-32888:
----------------------------------

             Summary: reading a parallelized rdd with two identical records results in a zero-count df when read via spark.read.csv
                 Key: SPARK-32888
                 URL: https://issues.apache.org/jira/browse/SPARK-32888
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 3.0.1, 3.0.0, 2.4.7, 2.4.6, 2.4.5
            Reporter: Punit Shah


* Imagine a two-row csv file, test.csv, in which the header line and the single data record are identical:

aaa,bbb
aaa,bbb
 * The following is pyspark code (a full reproduction sketch follows this list)
 * create a parallelized rdd like: prdd = spark.read.text("test.csv").rdd.flatMap(lambda x : x)
 * create a df like so: mydf = spark.read.csv(prdd, header=True)
 * mydf.count() will result in a record count of zero (when it should be 1)
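
For reference, a minimal self-contained sketch of the steps above, assuming a local SparkSession and write access to the working directory for the test.csv file used in the report:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-32888-repro").getOrCreate()

    # Write the two-line file in which the header and the only data row are identical.
    with open("test.csv", "w") as f:
        f.write("aaa,bbb\naaa,bbb\n")

    # Read the file as plain text and flatten each Row into its single string value,
    # producing an RDD of the raw CSV lines.
    prdd = spark.read.text("test.csv").rdd.flatMap(lambda x: x)

    # Parse that RDD as CSV, treating the first line as the header.
    mydf = spark.read.csv(prdd, header=True)

    # Expected: 1 (the single data row); observed on the affected versions: 0.
    print(mydf.count())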


