Posted to issues@spark.apache.org by "Seth Hendrickson (JIRA)" <ji...@apache.org> on 2015/08/19 19:26:45 UTC

[jira] [Comment Edited] (SPARK-4240) Refine Tree Predictions in Gradient Boosting to Improve Prediction Accuracy.

    [ https://issues.apache.org/jira/browse/SPARK-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703402#comment-14703402 ] 

Seth Hendrickson edited comment on SPARK-4240 at 8/19/15 5:26 PM:
------------------------------------------------------------------

[~pprett] MLlib's current implementation of Gradient Boosted Trees does not perform a terminal node prediction update. Instead, the predicted value for each terminal node is determined by the impurity used to train the decision tree. The {{Variance}} impurity, for example, simply averages the labels in the terminal node. Terminal node predictions should instead be determined by the loss function used for gradient boosting (e.g. AbsoluteError or SquaredError).

I'd like to work on this if no one else has started it.
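
For concreteness, here is a minimal sketch of what the refinement could look like for two losses. The object and method names below are illustrative only, not MLlib's actual API: for squared error the variance-minimizing mean is already the loss-minimizing constant, while for absolute error the loss-minimizing constant is the median of the residuals in the leaf.

{code:scala}
// Illustrative sketch only, not MLlib's actual API. After a regression
// tree is fit to the pseudo-residuals, each leaf's prediction can be
// replaced by the constant that minimizes the actual boosting loss over
// the residuals (y_i - F(x_i)) of the training points in that leaf.
object LeafRefinementSketch {

  // SquaredError: the mean minimizes sum((r - c)^2), so the current
  // variance-based prediction is already optimal for this loss.
  def squaredErrorLeafValue(residuals: Array[Double]): Double =
    residuals.sum / residuals.length

  // AbsoluteError: the median minimizes sum(|r - c|), so the refined
  // prediction generally differs from the mean used today.
  def absoluteErrorLeafValue(residuals: Array[Double]): Double = {
    val sorted = residuals.sorted
    val n = sorted.length
    if (n % 2 == 1) sorted(n / 2)
    else (sorted(n / 2 - 1) + sorted(n / 2)) / 2.0
  }
}
{code}

Note that only the leaf values change under this refinement; the splits found while growing the tree are left as-is.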


> Refine Tree Predictions in Gradient Boosting to Improve Prediction Accuracy.
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-4240
>                 URL: https://issues.apache.org/jira/browse/SPARK-4240
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>    Affects Versions: 1.3.0
>            Reporter: Sung Chung
>
> The gradient boosting implementation currently estimates the loss gradient in each iteration using regression trees. At every iteration, the regression trees are trained/split to minimize the variance of the predicted gradients. Additionally, the terminal node predictions are computed to minimize the prediction variance.
> However, such predictions won't be optimal for loss functions other than squared error. The TreeBoost refinement can mitigate this issue by modifying the terminal node prediction values so that they directly minimize the actual loss function. Although the tree splits are still chosen through variance reduction, refining the terminal predictions should improve the gradient estimates and thus overall performance (see the sketch after this description).
> The details can be found in the vignette for the R gbm package, linked below, which also shows how to refine the terminal node predictions.
> http://www.saedsayad.com/docs/gbm2.pdf
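
As a sketch of the generic update described above (all names here are hypothetical, not from the Spark codebase): for a differentiable loss L, the refined value for a leaf is the constant gamma minimizing sum_i L(y_i, F(x_i) + gamma) over the points in that leaf, which can be approximated with a single Newton step, as the gbm vignette does.

{code:scala}
// Hypothetical sketch of the generic terminal-node update. A single
// Newton step approximates argmin over gamma of sum_i L(y_i, F(x_i) + gamma)
// as gamma = -sum(g_i) / sum(h_i), where g_i and h_i are the first and
// second derivatives of L with respect to F, evaluated at F(x_i).
object NewtonLeafSketch {
  case class LossDerivs(g: Double, h: Double)

  def newtonLeafUpdate(leafPoints: Seq[LossDerivs]): Double = {
    val gSum = leafPoints.map(_.g).sum
    val hSum = leafPoints.map(_.h).sum
    if (hSum == 0.0) 0.0 else -gSum / hSum // guard against a zero Hessian
  }
}
{code}

For squared error this step reproduces the leaf mean, consistent with the point above that the current variance-based predictions are only suboptimal for other losses.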



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org