Posted to issues@spark.apache.org by "abhijeet dada mote (Jira)" <ji...@apache.org> on 2020/08/07 04:17:00 UTC
[jira] [Updated] (SPARK-32562) Pyspark drop duplicate columns
[ https://issues.apache.org/jira/browse/SPARK-32562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
abhijeet dada mote updated SPARK-32562:
---------------------------------------
Description:
Hi All,
One suggestion: could we have a feature in PySpark to remove duplicate columns?
I have come up with some small code for that:
{code:python}
def drop_duplicate_columns(_rdd_df):
    """Drop every column whose name occurs more than once in the DataFrame."""
    column_names = _rdd_df.columns
    # Names that appear more than once in the schema.
    duplicate_columns = {x for x in column_names if column_names.count(x) > 1}
    # Note: drop() by name removes *all* columns with that name,
    # so both copies of a duplicated column are dropped.
    _rdd_df = _rdd_df.drop(*duplicate_columns)
    return _rdd_df
{code}
Your suggestions are appreciated, and I can work on this PR; this would be my first contribution (PR) to PySpark if you agree with it.
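As a quick illustration of the detection step, here is a minimal sketch using a plain Python list in place of DataFrame.columns, so it runs without a Spark session (the column names are made up for the example):

```python
# Stand-in for _rdd_df.columns on a DataFrame where "id" appears twice.
column_names = ["id", "name", "id", "value"]

# Same expression as in the proposed function: keep names seen more than once.
duplicate_columns = {x for x in column_names if column_names.count(x) > 1}
print(duplicate_columns)  # {'id'}
```

One point worth discussing in the PR: since DataFrame.drop("id") removes every column named "id", the proposed function drops both copies of a duplicated column rather than keeping one, which may or may not be the desired semantics.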
was:
Hi All,
One suggestion: could we have a feature in PySpark to remove duplicate columns?
I have come up with some small code for that:
<code>
def drop_duplicate_columns(_rdd_df):
    column_names = _rdd_df.columns
    duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
    _rdd_df = _rdd_df.drop(*duplicate_columns)
    return _rdd_df
</code>
Your suggestions are appreciated, and I can work on this PR; this would be my first contribution (PR) to PySpark if you agree with it.
> Pyspark drop duplicate columns
> ------------------------------
>
> Key: SPARK-32562
> URL: https://issues.apache.org/jira/browse/SPARK-32562
> Project: Spark
> Issue Type: Improvement
> Components: PySpark
> Affects Versions: 3.0.0
> Reporter: abhijeet dada mote
> Priority: Major
> Labels: newbie, starter
> Fix For: 3.0.0
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> Hi All,
> One suggestion: could we have a feature in PySpark to remove duplicate columns?
> I have come up with some small code for that:
> {code:python}
> def drop_duplicate_columns(_rdd_df):
>     column_names = _rdd_df.columns
>     duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
>     _rdd_df = _rdd_df.drop(*duplicate_columns)
>     return _rdd_df
> {code}
> Your suggestions are appreciated, and I can work on this PR; this would be my first contribution (PR) to PySpark if you agree with it.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org