Posted to reviews@spark.apache.org by sr11231 <gi...@git.apache.org> on 2017/11/05 09:42:05 UTC

[GitHub] spark issue #17758: [SPARK-20460][SQL] Make it more consistent to handle col...

Github user sr11231 commented on the issue:

    https://github.com/apache/spark/pull/17758
  
    Still, when you load the JSON file through a Dataset[String] by doing `spark.read.json(spark.read.textFile("json.file"))`, Spark does not throw any error and you get a DataFrame with duplicate columns. Is that expected behaviour and a feature, or is it actually a bug?
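
    To make the scenario concrete, here is a minimal sketch (not from the original thread) of how duplicate JSON keys can slip through when the data is routed through a Dataset[String]. It assumes Spark 2.2+, where `spark.read.json` accepts a `Dataset[String]`; the sample record and object name are hypothetical:

    ```scala
    import org.apache.spark.sql.SparkSession

    object DuplicateColumnRepro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DuplicateColumnRepro")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical JSON record with a duplicate key "a".
        val ds = Seq("""{"a": 1, "a": 2}""").toDS()

        // Reading through a Dataset[String]; per the comment above, this
        // reportedly does not raise an error for the duplicated column,
        // unlike reading the same content from a file path.
        val df = spark.read.json(ds)
        df.printSchema()   // expected to show two columns named "a"
        df.show()

        spark.stop()
      }
    }
    ```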


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org