Posted to reviews@spark.apache.org by felixcheung <gi...@git.apache.org> on 2017/03/01 05:58:15 UTC

[GitHub] spark issue #17105: [SPARK-19773][SparkR] SparkDataFrame should not allow du...

Github user felixcheung commented on the issue:

    https://github.com/apache/spark/pull/17105
  
    @actuaryzhang there's a bit of history behind this... but long story short, Spark does support DataFrames with multiple columns having the same name, for example:
    ```
    # in pyspark: duplicate column names are accepted at creation time
    >>> data = [(1, 2, 'Foo')]
    >>> df = spark.createDataFrame(data, ("key", "key", "value"))
    >>> df
    DataFrame[key: bigint, key: bigint, value: string]
    ```
    
    And each column gets a unique internal id, so under the covers they are not actually "duplicates".
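
    You can see those ids in the analyzed plan. A rough sketch of the output (the exact plan text varies across Spark versions, so treat this as illustrative):
    ```
    >>> df.explain(True)
    ...
    == Analyzed Logical Plan ==
    key: bigint, key: bigint, value: string
    LogicalRDD [key#0L, key#1L, value#2]
    ...
    ```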
    
    You can in fact end up with columns of the same name when doing a self-join, for instance:
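
    Here's a minimal pyspark sketch of that (the `people` data is made up for illustration; the alias/join pattern itself is standard):
    ```
    >>> from pyspark.sql.functions import col
    >>> people = spark.createDataFrame([(1, 'Foo')], ("key", "value"))
    >>> joined = people.alias("l").join(people.alias("r"), col("l.key") == col("r.key"))
    >>> joined
    DataFrame[key: bigint, value: string, key: bigint, value: string]
    >>> joined.select(col("l.key"))  # qualify by alias to disambiguate
    DataFrame[key: bigint]
    ```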
    
    Now, the reason you are getting an error with `df$a = df$a * 2.0` is that "a" by itself is not a fully unique id. You get the same error in Python:
    
    ```
    >>> from pyspark.sql.functions import col
    >>> df.select(col("key"))
    ...
        raise AnalysisException(s.split(': ', 1)[1], stackTrace)
    pyspark.sql.utils.AnalysisException: u"Reference 'key' is ambiguous, could be: key#0L, key#1L.;"
    ```
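
    If you need to work around that in Python, one option (a sketch using `DataFrame.toDF`, which renames all columns positionally) is to assign unique names first:
    ```
    >>> df2 = df.toDF("key1", "key2", "value")  # rename columns by position
    >>> df2.select(col("key1"))
    DataFrame[key1: bigint]
    ```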
    
    And in R, `df$a` is essentially a shortcut for that same lookup, so it fails in the same way.
    
    As for why it is disallowed in `mutate` - that is just an artifact of the current implementation. I think we could potentially extend it to support duplicated names.



