Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/06/17 17:23:01 UTC

[jira] [Resolved] (SPARK-6698) RandomForest.scala (et al) hardcodes usage of StorageLevel.MEMORY_AND_DISK

     [ https://issues.apache.org/jira/browse/SPARK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-6698.
------------------------------
    Resolution: Won't Fix

Closing per PR

> RandomForest.scala (et al) hardcodes usage of StorageLevel.MEMORY_AND_DISK
> --------------------------------------------------------------------------
>
>                 Key: SPARK-6698
>                 URL: https://issues.apache.org/jira/browse/SPARK-6698
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.3.0
>            Reporter: Michael Bieniosek
>            Priority: Minor
>
> In RandomForest.scala the feature input is persisted with StorageLevel.MEMORY_AND_DISK during the bagging phase, even if the bagging rate is set at 100%. This forces the RDD to be stored as deserialized objects, which causes major JVM GC headaches if the RDD is sizable.
> Something similar happens in NodeIdCache.scala, though I believe in that case the RDD is smaller.
> A simple fix would be to use the same StorageLevel as the input RDD.
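A minimal sketch of the fix suggested above, reusing the input RDD's storage level rather than a hardcoded one. The helper name persistLikeInput and its placement are illustrative assumptions, not the actual MLlib internals; only RDD.getStorageLevel, RDD.persist, and the StorageLevel constants are real Spark APIs:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.storage.StorageLevel

    object StorageLevelSketch {
      // Hypothetical helper mirroring the bagging step: persist the derived
      // RDD at the same level as the input instead of hardcoding
      // MEMORY_AND_DISK.
      def persistLikeInput[T](input: RDD[T], derived: RDD[T]): RDD[T] = {
        // Fall back to MEMORY_AND_DISK only when the input is not persisted.
        val level =
          if (input.getStorageLevel == StorageLevel.NONE) StorageLevel.MEMORY_AND_DISK
          else input.getStorageLevel
        derived.persist(level)
      }
    }

Alternatively, persisting with a serialized level such as StorageLevel.MEMORY_AND_DISK_SER keeps the data as serialized bytes on the heap, which avoids the deserialized-object GC pressure described above at the cost of deserialization on access.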



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
