Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/01/06 20:53:35 UTC

[jira] [Commented] (SPARK-5112) Expose SizeEstimator as a developer API

    [ https://issues.apache.org/jira/browse/SPARK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266646#comment-14266646 ] 

Apache Spark commented on SPARK-5112:
-------------------------------------

User 'sryza' has created a pull request for this issue:
https://github.com/apache/spark/pull/3913

> Expose SizeEstimator as a developer API
> ---------------------------------------
>
>                 Key: SPARK-5112
>                 URL: https://issues.apache.org/jira/browse/SPARK-5112
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Sandy Ryza
>            Assignee: Sandy Ryza
>
> "The best way to size the amount of memory consumption your dataset will require is to create an RDD, put it into cache, and look at the SparkContext logs on your driver program. The logs will tell you how much memory each partition is consuming, which you can aggregate to get the total size of the RDD."
> -the Tuning Spark page
> This is a pain. It would be much nicer to simply expose functionality for understanding the memory footprint of a Java object.
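
For context, a minimal sketch of what calling such an API could look like, assuming SizeEstimator is exposed roughly as proposed (the exact package, object name, and signature are not confirmed by this ticket and may differ in the final pull request):

    // Hypothetical usage sketch, assuming an estimate(obj: AnyRef): Long entry point.
    import org.apache.spark.util.SizeEstimator

    object SizeEstimatorExample {
      def main(args: Array[String]): Unit = {
        // Estimate the in-memory footprint of a flat array.
        val longs = new Array[Long](1000)
        println(s"Array[Long](1000): ${SizeEstimator.estimate(longs)} bytes")

        // Estimate an object graph; the estimator would need to walk references.
        val nested = Map("a" -> Seq(1, 2, 3), "b" -> Seq(4, 5, 6))
        println(s"Nested map: ${SizeEstimator.estimate(nested)} bytes")
      }
    }

This would replace the cache-and-read-the-driver-logs workflow quoted above with a single direct call against any object, such as a sample record or partition.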



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org