Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/08/30 01:34:20 UTC

[jira] [Assigned] (SPARK-3162) Train DecisionTree locally when possible

     [ https://issues.apache.org/jira/browse/SPARK-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-3162:
-----------------------------------

    Assignee: Apache Spark

> Train DecisionTree locally when possible
> ----------------------------------------
>
>                 Key: SPARK-3162
>                 URL: https://issues.apache.org/jira/browse/SPARK-3162
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>            Reporter: Joseph K. Bradley
>            Assignee: Apache Spark
>            Priority: Critical
>
> Improvement: communication
> Currently, every level of a DecisionTree is trained in a distributed manner.  However, at deeper levels of the tree, the subset of the training data that reaches any given node may be small.  If a node's training data fits in one machine's memory, it may be more efficient to shuffle that data onto a single machine and train the rest of the subtree rooted at that node locally.
> Note: Local training may become feasible at different levels in different branches of the tree.  There are multiple options for handling this case:
> (1) Train in a distributed fashion until all remaining nodes can be trained locally.  This would entail training multiple levels at once (locally).
> (2) Train branches locally as soon as each one fits, interleaving this with distributed training of the remaining branches (a sketch of this check follows below).
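
Below is a minimal sketch, in Scala against the Spark RDD API, of the switch-over check behind option (2). All of the names are assumptions introduced for illustration: TaggedPoint (a row carrying the id of the tree node it currently reaches), trainSubtreeLocally (a stand-in for a single-machine subtree learner), and the constants in estimatedBytes. None of them come from the Spark codebase.

import org.apache.spark.rdd.RDD

object LocalSubtreeTraining extends Serializable {

  // Hypothetical row type: label, features, and the id of the tree node
  // this row currently reaches during top-down training.
  case class TaggedPoint(nodeId: Int, label: Double, features: Array[Double])

  // Placeholder for a trained subtree rooted at a given node.
  case class Subtree(rootNodeId: Int)

  // Hypothetical single-machine learner over an in-memory set of rows;
  // stands in for whatever local decision-tree trainer would be used.
  def trainSubtreeLocally(nodeId: Int, rows: Array[TaggedPoint]): Subtree =
    Subtree(nodeId)

  // Rough per-row footprint (8 bytes per double feature plus some object
  // overhead); the constants are assumptions, not measured values.
  def estimatedBytes(numRows: Long, numFeatures: Int): Long =
    numRows * (numFeatures * 8L + 16L)

  // One round of the interleaved strategy: find the nodes whose data now
  // fits within maxLocalBytes, shuffle each such node's rows together, and
  // finish those subtrees locally on the executors.  The remaining nodes
  // continue with the usual distributed, level-by-level training.
  def trainSmallNodesLocally(data: RDD[TaggedPoint],
                             numFeatures: Int,
                             maxLocalBytes: Long): (Array[Subtree], Set[Int]) = {
    // Count rows per node in one distributed pass.
    val countsByNode: Map[Int, Long] =
      data.map(p => (p.nodeId, 1L)).reduceByKey(_ + _).collect().toMap

    // Nodes whose data is small enough for single-machine training.
    val localNodeIds: Set[Int] = countsByNode.collect {
      case (id, n) if estimatedBytes(n, numFeatures) <= maxLocalBytes => id
    }.toSet

    // groupBy shuffles each small node's rows onto one executor, where the
    // rest of that subtree is trained without further communication.
    val localModels: Array[Subtree] = data
      .filter(p => localNodeIds.contains(p.nodeId))
      .groupBy(_.nodeId)
      .map { case (id, rows) => trainSubtreeLocally(id, rows.toArray) }
      .collect()

    (localModels, countsByNode.keySet -- localNodeIds)
  }
}

Note that groupBy incurs one shuffle per round, and the size check is exactly what guarantees that each grouped node's rows fit on a single executor. Option (1) would instead defer the shuffle until every remaining node passes the check, then finish all subtrees locally in a single pass.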



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org