Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:12:56 UTC

[jira] [Resolved] (SPARK-21340) Bring PySpark MLLib evaluation metrics to parity with Scala API

     [ https://issues.apache.org/jira/browse/SPARK-21340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-21340.
----------------------------------
    Resolution: Incomplete

> Bring PySpark MLLib evaluation metrics to parity with Scala API
> ---------------------------------------------------------------
>
>                 Key: SPARK-21340
>                 URL: https://issues.apache.org/jira/browse/SPARK-21340
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 2.1.1
>            Reporter: Jake Charland
>            Priority: Major
>              Labels: bulk-closed
>
> This JIRA is a request to bring PySpark's MLlib evaluation metrics to parity with the Scala API. For example, BinaryClassificationMetrics exposes only two evaluation metrics to PySpark, areaUnderROC and areaUnderPR, while Scala supports a much wider set, including precision-recall curves and the ability to compute precision and recall values at different thresholds. These evaluation metrics are critical for understanding the performance of trained models and should be available to users of the PySpark APIs. A sketch of the gap follows.
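>
> A minimal sketch against the Spark 2.1.x RDD-based MLlib API (the toy data and app name are illustrative, not from the original report):
> {code:python}
> from pyspark import SparkContext
> from pyspark.mllib.evaluation import BinaryClassificationMetrics
>
> sc = SparkContext(appName="metrics-parity-demo")  # hypothetical app name
>
> # Toy (score, label) pairs from a binary classifier.
> score_and_labels = sc.parallelize([
>     (0.1, 0.0), (0.4, 0.0), (0.6, 1.0), (0.8, 1.0),
> ])
> metrics = BinaryClassificationMetrics(score_and_labels)
>
> # The only evaluation metrics PySpark exposes today:
> print(metrics.areaUnderROC)
> print(metrics.areaUnderPR)
>
> # The Scala BinaryClassificationMetrics additionally offers, e.g.:
> #   metrics.roc(), metrics.pr(), metrics.thresholds(),
> #   metrics.precisionByThreshold(), metrics.recallByThreshold(),
> #   metrics.fMeasureByThreshold()
> # None of these have PySpark counterparts as of 2.1.1.
> {code}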



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org