Posted to issues@spark.apache.org by "Liang-Chi Hsieh (JIRA)" <ji...@apache.org> on 2018/07/08 03:23:00 UTC

[jira] [Commented] (SPARK-24756) Incorrect Statistics

    [ https://issues.apache.org/jira/browse/SPARK-24756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535976#comment-16535976 ] 

Liang-Chi Hsieh commented on SPARK-24756:
-----------------------------------------

Because there does not yet seem to be a suitable approach to estimating the size of RDDs, we use {{defaultSizeInBytes}} as the statistics for data from an RDD for now.
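For context, the 8.0 EB figure in the report matches the default value of {{spark.sql.defaultSizeInBytes}}, which is {{Long.MaxValue}}. A quick standalone arithmetic check (a hypothetical snippet, not Spark code; the class name is made up):

```java
// Spark's spark.sql.defaultSizeInBytes defaults to Long.MaxValue.
// Long.MaxValue bytes is ~8.0 exbibytes (2^60 bytes per EiB), which is
// why the plan statistics print as "sizeInBytes=8.0 EB".
public class DefaultSizeCheck {
    public static void main(String[] args) {
        double eb = (double) Long.MAX_VALUE / (1L << 60);
        System.out.printf("%.1f EB%n", eb); // prints "8.0 EB"
    }
}
```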

> Incorrect Statistics
> --------------------
>
>                 Key: SPARK-24756
>                 URL: https://issues.apache.org/jira/browse/SPARK-24756
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Nick Jordan
>            Priority: Major
>
> I'm getting some odd results when looking at the statistics for a simple data frame:
> {code:java}
> val df = spark.sparkContext.parallelize(Seq("y")).toDF("y")
> df.queryExecution.stringWithStats{code}
> {noformat}
> == Optimized Logical Plan ==
> Project [value#7 AS y#9], Statistics(sizeInBytes=8.0 EB, hints=none)
> +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7], Statistics(sizeInBytes=8.0 EB, hints=none)
>    +- ExternalRDD [obj#6], Statistics(sizeInBytes=8.0 EB, hints=none)
> {noformat}
> 8.0 exabytes is clearly not right here. It is worth noting that if I don't parallelize the Seq, I get the expected results.
> This surfaced when I was running unit tests that verify a broadcast hint is preserved; they failed because of the incorrect statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org