Posted to issues@spark.apache.org by "Chen Zhang (Jira)" <ji...@apache.org> on 2020/09/22 13:54:00 UTC
[jira] [Commented] (SPARK-32956) Duplicate Columns in a csv file
[ https://issues.apache.org/jira/browse/SPARK-32956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17200092#comment-17200092 ]
Chen Zhang commented on SPARK-32956:
------------------------------------
Since SPARK-16896, when the CSV data has duplicate column headers, Spark appends each duplicate column's index as a suffix.
In this case, _Sale_Amount_ is a duplicate column header.
Original column header:
{code:none}
Id, Product, Sale_Amount, Sale_Units, Sale_Amount2, Sale_Amount, Sale_Price{code}
Column header after adding index suffix:
{code:none}
Id, Product, Sale_Amount2, Sale_Units, Sale_Amount2, Sale_Amount5, Sale_Price{code}
After adding the suffix, _Sale_Amount2_ still collides with another column header.
Maybe we can apply the suffix again whenever a new duplicate column header appears:
{code:none}
Id, Product, Sale_Amount22, Sale_Units, Sale_Amount24, Sale_Amount5, Sale_Price{code}
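A minimal sketch of that repeated-suffixing idea (not Spark's actual implementation; the function name is hypothetical):

```python
from collections import Counter

def deduplicate_headers(headers):
    """Append each duplicate column's positional index as a suffix,
    repeating until all names are unique (sketch of the proposal above)."""
    names = list(headers)
    while True:
        counts = Counter(names)
        dup = {n for n, c in counts.items() if c > 1}
        if not dup:
            return names
        # Only columns that currently collide get their index appended.
        names = [f"{n}{i}" if n in dup else n for i, n in enumerate(names)]
```

Running it on the header above yields the result shown: the first pass produces _Sale_Amount2_ and _Sale_Amount5_, the second pass resolves the new collision into _Sale_Amount22_ and _Sale_Amount24_.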
> Duplicate Columns in a csv file
> -------------------------------
>
> Key: SPARK-32956
> URL: https://issues.apache.org/jira/browse/SPARK-32956
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.4.3, 2.4.4, 2.4.5, 2.4.6, 2.4.7, 3.0.0, 3.0.1
> Reporter: Punit Shah
> Priority: Major
>
> Imagine a csv file shaped like:
> ========================================================
> Id,Product,Sale_Amount,Sale_Units,Sale_Amount2,Sale_Amount,Sale_Price
> 1,P,"6,40,728","6,40,728","6,40,728","6,40,728","6,40,728"
> 2,P,"5,81,644","5,81,644","5,81,644","5,81,644","5,81,644"
> =========================================================
> Reading this with header=True results in a stacktrace.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org