Posted to issues@spark.apache.org by "Wenchen Fan (JIRA)" <ji...@apache.org> on 2017/07/10 08:01:00 UTC

[jira] [Assigned] (SPARK-20460) Make it more consistent to handle column name duplication

     [ https://issues.apache.org/jira/browse/SPARK-20460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan reassigned SPARK-20460:
-----------------------------------

    Assignee: Takeshi Yamamuro

> Make it more consistent to handle column name duplication
> ---------------------------------------------------------
>
>                 Key: SPARK-20460
>                 URL: https://issues.apache.org/jira/browse/SPARK-20460
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Takeshi Yamamuro
>            Assignee: Takeshi Yamamuro
>            Priority: Trivial
>             Fix For: 2.3.0
>
>
> In the current master, error handling is different when hitting column name duplication.
> {code}
> // json
> scala> val schema = StructType(StructField("a", IntegerType) :: StructField("a", IntegerType) :: Nil)
> scala> Seq("""{"a":1, "a":1}"""""").toDF().coalesce(1).write.mode("overwrite").text("/tmp/data")
> scala> spark.read.format("json").schema(schema).load("/tmp/data").show
> org.apache.spark.sql.AnalysisException: Reference 'a' is ambiguous, could be: a#12, a#13.;
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:287)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:181)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolve$1.apply(LogicalPlan.scala:153)
> scala> spark.read.format("json").load("/tmp/data").show
> org.apache.spark.sql.AnalysisException: Duplicate column(s) : "a" found, cannot save to JSON format;
>   at org.apache.spark.sql.execution.datasources.json.JsonDataSource.checkConstraints(JsonDataSource.scala:81)
>   at org.apache.spark.sql.execution.datasources.json.JsonDataSource.inferSchema(JsonDataSource.scala:63)
>   at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.inferSchema(JsonFileFormat.scala:57)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:176)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:176)
> // csv
> scala> val schema = StructType(StructField("a", IntegerType) :: StructField("a", IntegerType) :: Nil)
> scala> Seq("a,a", "1,1").toDF().coalesce(1).write.mode("overwrite").text("/tmp/data")
> scala> spark.read.format("csv").schema(schema).option("header", false).load("/tmp/data").show
> org.apache.spark.sql.AnalysisException: Reference 'a' is ambiguous, could be: a#41, a#42.;
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:287)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:181)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolve$1.apply(LogicalPlan.scala:153)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolve$1.apply(LogicalPlan.scala:152)
> // If the schema is inferred from the header (no user-specified schema), the CSV format is duplicate-safe (see SPARK-16896)
> scala> spark.read.format("csv").option("header", true).load("/tmp/data").show
> +---+---+
> | a0| a1|
> +---+---+
> |  1|  1|
> +---+---+
> // parquet
> scala> val schema = StructType(StructField("a", IntegerType) :: StructField("a", IntegerType) :: Nil)
> scala> Seq((1, 1)).toDF("a", "b").coalesce(1).write.mode("overwrite").parquet("/tmp/data")
> scala> spark.read.format("parquet").schema(schema).option("header", false).load("/tmp/data").show
> org.apache.spark.sql.AnalysisException: Reference 'a' is ambiguous, could be: a#110, a#111.;
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:287)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:181)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolve$1.apply(LogicalPlan.scala:153)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolve$1.apply(LogicalPlan.scala:152)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> {code}
> To make the error reason clearer, IMO we'd better make the handling of column name duplication more consistent across data sources; a rough sketch of a shared check follows.
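> A minimal sketch of the kind of shared, format-agnostic check this could converge on (the helper name `checkColumnNameDuplication`, its signature, and the exception type are assumptions for illustration, not the actual implementation):
> {code}
> import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
>
> // Hypothetical shared helper: every data source would run this once on the
> // effective schema, so duplicate column names always fail with the same
> // message regardless of format.
> def checkColumnNameDuplication(
>     columnNames: Seq[String],
>     context: String,
>     caseSensitive: Boolean): Unit = {
>   val names = if (caseSensitive) columnNames else columnNames.map(_.toLowerCase)
>   val dups = names.groupBy(identity).collect { case (name, group) if group.size > 1 => name }
>   if (dups.nonEmpty) {
>     // The real check would raise an AnalysisException; a plain exception keeps the sketch self-contained.
>     throw new IllegalArgumentException(
>       s"Found duplicate column(s) $context: ${dups.mkString("`", "`, `", "`")}")
>   }
> }
>
> // Usage with the user-specified schema from the examples above:
> val schema = StructType(StructField("a", IntegerType) :: StructField("a", IntegerType) :: Nil)
> checkColumnNameDuplication(schema.fieldNames.toSeq, "in the data schema", caseSensitive = false)
> // java.lang.IllegalArgumentException: Found duplicate column(s) in the data schema: `a`
> {code}
> Centralizing the check would let JSON, CSV, and Parquet all report the same duplicate-column error up front, instead of some paths failing later with the ambiguous-reference error shown above.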



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org