Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2016/11/03 01:26:58 UTC

[jira] [Closed] (SPARK-15174) DataFrame does not have correct number of rows after dropDuplicates

     [ https://issues.apache.org/jira/browse/SPARK-15174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon closed SPARK-15174.
--------------------------------
    Resolution: Cannot Reproduce

I can't reproduce this in the current master, so I am resolving it as Cannot Reproduce. Please revert this action if that is inappropriate.

{code}
scala> val df1 = spark.read.json(input)
org.apache.spark.sql.AnalysisException: Unable to infer schema for JSON at empty. It must be specified manually;
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$17.apply(DataSource.scala:438)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$17.apply(DataSource.scala:438)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:437)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:297)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:250)
  ... 48 elided

scala> val df2 = spark.read.json(input).dropDuplicates
org.apache.spark.sql.AnalysisException: Unable to infer schema for JSON at empty. It must be specified manually;
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$17.apply(DataSource.scala:438)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$17.apply(DataSource.scala:438)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:437)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:297)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:250)
  ... 48 elided
{code}
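
For reference, a minimal sketch (not from the original report) of how the reported symptom could still be probed now that schema inference fails on empty JSON input: supply the schema explicitly. The field name {{value}} is illustrative; the path is the empty directory from the report below.

{code}
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Supplying a schema explicitly avoids the inference error on an empty directory.
val schema = StructType(Seq(StructField("value", StringType)))
val emptyDf = spark.read.schema(schema).json("hdfs:///some/empty/directory")

emptyDf.count                   // expected: 0
emptyDf.dropDuplicates().count  // expected: 0 if the reported behaviour no longer occurs
{code}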

> DataFrame does not have correct number of rows after dropDuplicates
> -------------------------------------------------------------------
>
>                 Key: SPARK-15174
>                 URL: https://issues.apache.org/jira/browse/SPARK-15174
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: Ian Hellstrom
>
> If you read an empty file/folder with {{SQLContext.read()}} and then call {{DataFrame.dropDuplicates()}}, the resulting DataFrame reports an incorrect number of rows: the count should be 0 but comes back as 1.
> {code}
> val input = "hdfs:///some/empty/directory"
> val df1 = sqlContext.read.json(input)
> val df2 = sqlContext.read.json(input).dropDuplicates
> df1.count == 0 // true
> df1.rdd.isEmpty // true
> df2.count == 0 // false: it's actually reported as 1
> df2.rdd.isEmpty // false
> {code}
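
As a side note (my reading of the symptom, not stated in the report): the behaviour is consistent with {{dropDuplicates}} over a DataFrame with no columns. An empty JSON source infers an empty schema, so deduplicating over zero columns degenerates into a global aggregate, which emits a single row even for empty input. A hedged sketch of that reading:

{code}
// spark.emptyDataFrame has zero columns and zero rows.
val noColumns = spark.emptyDataFrame
noColumns.count                    // 0
noColumns.dropDuplicates().count   // may be reported as 1 on affected versions
{code}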



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org