Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2018/07/31 18:38:00 UTC
[jira] [Resolved] (SPARK-24609) PySpark/SparkR doc doesn't explain RandomForestClassifier.featureSubsetStrategy well
[ https://issues.apache.org/jira/browse/SPARK-24609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-24609.
-------------------------------
Resolution: Fixed
Fix Version/s: 2.4.0
Issue resolved by pull request 21788
[https://github.com/apache/spark/pull/21788]
> PySpark/SparkR doc doesn't explain RandomForestClassifier.featureSubsetStrategy well
> ------------------------------------------------------------------------------------
>
> Key: SPARK-24609
> URL: https://issues.apache.org/jira/browse/SPARK-24609
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.3.1
> Reporter: Xiangrui Meng
> Assignee: zhengruifeng
> Priority: Major
> Fix For: 2.4.0
>
>
> In the Scala doc ([https://spark.apache.org/docs/2.3.0/api/scala/index.html#org.apache.spark.ml.classification.RandomForestClassifier]), we have:
>
> {quote}The number of features to consider for splits at each tree node. Supported options:
> * "auto": Choose automatically for task: If numTrees == 1, set to "all". If numTrees > 1 (forest), set to "sqrt" for classification and to "onethird" for regression.
> * "all": use all features
> * "onethird": use 1/3 of the features
> * "sqrt": use sqrt(number of features)
> * "log2": use log2(number of features)
> * "n": when n is in the range (0, 1.0], use n * number of features. When n is in the range (1, number of features), use n features. (default = "auto")
> These various settings are based on the following references:
> * log2: tested in Breiman (2001)
> * sqrt: recommended by Breiman manual for random forests
> * The defaults of sqrt (classification) and onethird (regression) match the R randomForest package.{quote}
>
> This entire paragraph is missing from the PySpark doc ([https://spark.apache.org/docs/2.3.0/api/python/pyspark.ml.html#pyspark.ml.classification.RandomForestClassifier.featureSubsetStrategy]). The same issue applies to SparkR (https://github.com/apache/spark/blob/master/R/pkg/R/mllib_tree.R#L365).
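> For reference, the documented semantics of featureSubsetStrategy can be sketched as a small standalone Python helper. This is illustrative only: the function name, signature, and rounding choices (ceil) are assumptions for the sketch, not Spark's actual implementation.
>
> ```python
> import math
>
> def resolve_feature_subset(strategy, num_features, num_trees=20,
>                            is_classification=True):
>     """Return how many features are considered for splits at each tree
>     node, following the strategy descriptions in the Scala doc quoted
>     above. Illustrative helper, not part of the Spark API."""
>     s = str(strategy).lower()
>     if s == "auto":
>         # Doc: numTrees == 1 -> "all"; a forest -> "sqrt" for
>         # classification, "onethird" for regression.
>         if num_trees == 1:
>             s = "all"
>         else:
>             s = "sqrt" if is_classification else "onethird"
>     if s == "all":
>         return num_features
>     if s == "onethird":
>         return max(1, math.ceil(num_features / 3.0))
>     if s == "sqrt":
>         return max(1, math.ceil(math.sqrt(num_features)))
>     if s == "log2":
>         return max(1, math.ceil(math.log2(num_features)))
>     # Numeric "n": in (0, 1.0] it is a fraction of the features;
>     # in (1, number of features) it is an absolute count.
>     n = float(s)
>     if 0 < n <= 1.0:
>         return max(1, math.ceil(n * num_features))
>     if 1 < n < num_features:
>         return int(n)
>     raise ValueError(f"unsupported featureSubsetStrategy: {strategy}")
> ```
>
> For example, with 100 features a classification forest under "auto" would consider sqrt(100) = 10 features per split, while "0.5" would consider 50.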
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org