Posted to issues@spark.apache.org by "anju (Jira)" <ji...@apache.org> on 2021/07/27 03:34:00 UTC

[jira] [Comment Edited] (SPARK-36277) Issue with record count of data frame while reading in DropMalformed mode

    [ https://issues.apache.org/jira/browse/SPARK-36277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387733#comment-17387733 ] 

anju edited comment on SPARK-36277 at 7/27/21, 3:33 AM:
--------------------------------------------------------

[~hyukjin.kwon] Sure, let me check and update. Which version would you suggest?


was (Author: datumgirl):
Sure let me check and update

> Issue with record count of data frame while reading in DropMalformed mode
> -------------------------------------------------------------------------
>
>                 Key: SPARK-36277
>                 URL: https://issues.apache.org/jira/browse/SPARK-36277
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.4.3
>            Reporter: anju
>            Priority: Major
>         Attachments: 111.PNG, Inputfile.PNG, sample.csv
>
>
> Steps to reproduce the issue with the "count" PySpark API when using mode "DROPMALFORMED":
> I have a sample CSV file in an S3 bucket, which I read with the PySpark CSV API into two different dataframes: once without a schema, and once with a schema using mode "DROPMALFORMED". When I display the "with schema, DROPMALFORMED" dataframe, the display looks correct and does not show the malformed records. But when I apply the count API to that dataframe, it returns the record count of the whole file. I expect it to return only the count of valid records.
> here is the code used:-
> {code}
> from pyspark.sql.types import StructType, StructField, StringType, IntegerType
>
> without_schema_df = spark.read.csv("s3://noa-poc-lakeformation/data/test_files/sample.csv", header=True)
> schema = StructType([ \
>     StructField("firstname",StringType(),True), \
>     StructField("middlename",StringType(),True), \
>     StructField("lastname",StringType(),True), \
>     StructField("id", StringType(), True), \
>     StructField("gender", StringType(), True), \
>     StructField("salary", IntegerType(), True) \
>   ])
> with_schema_df = spark.read.csv("s3://noa-poc-lakeformation/data/test_files/sample.csv",header=True,schema=schema,mode="DROPMALFORMED")
> print("The dataframe with schema")
> with_schema_df.show()
> print("The dataframe without schema")
> without_schema_df.show()
> cnt_with_schema = with_schema_df.count()
> print("The record count from the with-schema df: " + str(cnt_with_schema))
> cnt_without_schema = without_schema_df.count()
> print("The record count from the without-schema df: " + str(cnt_without_schema))
> {code}
> Here are the outputs: the screenshot 111.PNG shows the output of the code, and Inputfile.PNG shows the input file.
>  
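A likely explanation, though not confirmed in this thread: since Spark 2.4, a CSV row is treated as malformed only when the bad values sit in columns the query actually reads, and count() reads no columns, so malformed rows are never parsed or dropped. Forcing a full parse (for example, caching the dataframe before counting, or disabling spark.sql.csv.parser.columnPruning.enabled) is the usual workaround. The sketch below is plain Python, not Spark: it mimics the count the reporter expects DROPMALFORMED to produce by dropping rows whose salary field does not parse as an integer. The sample rows are hypothetical stand-ins, not the attached sample.csv.

```python
import csv
import io

# Hypothetical rows standing in for sample.csv (the attached file is not
# reproduced here); the middle row has a non-integer salary.
SAMPLE = """firstname,middlename,lastname,id,gender,salary
James,,Smith,36636,M,3000
Michael,Rose,,40288,M,not_a_number
Robert,,Williams,42114,M,4000
"""

def dropmalformed_count(text):
    """Count rows satisfying the schema: salary must parse as an integer."""
    count = 0
    for row in csv.DictReader(io.StringIO(text)):
        try:
            int(row["salary"])  # IntegerType column in the reported schema
        except (TypeError, ValueError):
            continue  # malformed row: dropped, as DROPMALFORMED displays
        count += 1
    return count

print(dropmalformed_count(SAMPLE))  # 2: the malformed row is excluded
```

This is the count show() reflects; the reported bug is that count() on the real dataframe returns 3 here, because the pruned count path never parses the salary column.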



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org