Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:03:13 UTC
[jira] [Updated] (SPARK-3728) RandomForest: Learn models too large to store in memory
[ https://issues.apache.org/jira/browse/SPARK-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-3728:
--------------------------------
Labels: bulk-closed (was: )
> RandomForest: Learn models too large to store in memory
> -------------------------------------------------------
>
> Key: SPARK-3728
> URL: https://issues.apache.org/jira/browse/SPARK-3728
> Project: Spark
> Issue Type: Improvement
> Components: MLlib
> Reporter: Joseph K. Bradley
> Priority: Minor
> Labels: bulk-closed
>
> Proposal: Write trees to disk as they are learned.
> RandomForest currently uses a FIFO queue of nodes to split, which means all trees are trained at once, breadth-first. Using a FILO (stack-like) queue would encourage the code to finish one tree before moving on to the next, which would allow each completed tree to be written to disk as it is learned.
> Note: It would also be possible to write nodes to disk as they are learned using a FIFO queue, once the example-node mapping is cached [JIRA]. The [Sequoia Forest package] does this. However, it could be useful to learn trees progressively, so that future functionality such as early stopping (training fewer trees than expected) could be supported.
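The queue-order idea can be illustrated with a minimal, self-contained Scala sketch. This is not Spark's RandomForest implementation; the names QueueOrderSketch, NodeTask, and processOrder are made up for illustration. It only shows that a FIFO queue interleaves node splits across all trees, while a FILO queue drains one tree's nodes completely before starting the next, so each finished tree could be serialized to disk and dropped from memory.

import scala.collection.mutable

// A toy illustration of the FIFO-vs-FILO node-queue idea described above.
object QueueOrderSketch {

  // One unit of work: split a node at `depth` in tree `treeIndex`. (Hypothetical name.)
  case class NodeTask(treeIndex: Int, depth: Int)

  // Process node tasks either FIFO (breadth-first across the whole forest) or
  // FILO (drain one tree before starting the next). Returns the tree index of
  // each processed task, in processing order.
  def processOrder(useFilo: Boolean, numTrees: Int, maxDepth: Int): Seq[Int] = {
    val pending = mutable.ArrayBuffer.empty[NodeTask]
    val order   = mutable.ArrayBuffer.empty[Int]
    (0 until numTrees).foreach(t => pending += NodeTask(t, 0))  // each tree's root

    while (pending.nonEmpty) {
      val task =
        if (useFilo) pending.remove(pending.length - 1)  // pop newest (stack behavior)
        else pending.remove(0)                           // dequeue oldest (queue behavior)
      order += task.treeIndex
      if (task.depth < maxDepth) {                       // "split": enqueue two children
        pending += NodeTask(task.treeIndex, task.depth + 1)
        pending += NodeTask(task.treeIndex, task.depth + 1)
      }
    }
    order
  }

  def main(args: Array[String]): Unit = {
    // FIFO interleaves the trees; FILO finishes one tree completely before the
    // other, so a finished tree could be written to disk and freed immediately.
    println("FIFO: " + processOrder(useFilo = false, numTrees = 2, maxDepth = 2).mkString(" "))
    println("FILO: " + processOrder(useFilo = true,  numTrees = 2, maxDepth = 2).mkString(" "))
  }
}

Running the sketch prints an interleaved sequence of tree indices for FIFO and a fully grouped sequence for FILO, which is the property the proposal relies on for writing trees to disk as they are learned.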
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org