Posted to issues@spark.apache.org by "Jake Charland (JIRA)" <ji...@apache.org> on 2017/07/07 14:19:00 UTC
[jira] [Created] (SPARK-21340) Bring PySpark MLLib evaluation metrics to parity with Scala API
Jake Charland created SPARK-21340:
-------------------------------------
Summary: Bring PySpark MLLib evaluation metrics to parity with Scala API
Key: SPARK-21340
URL: https://issues.apache.org/jira/browse/SPARK-21340
Project: Spark
Issue Type: Improvement
Components: MLlib
Affects Versions: 2.1.1
Reporter: Jake Charland
This JIRA is a request to bring PySpark's MLlib evaluation metrics to parity with the Scala API. For example, in BinaryClassificationMetrics only two evaluation metrics are exposed to PySpark, areaUnderROC and areaUnderPR, while Scala supports a much wider set, including precision-recall curves and per-threshold precision and recall values. These evaluation metrics are critical for understanding the performance of trained models and should be available to users of the PySpark APIs.
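To illustrate the metrics in question, the following is a minimal pure-Python sketch (no Spark dependency) of per-threshold precision and recall, analogous to what Scala's BinaryClassificationMetrics exposes via precisionByThreshold and recallByThreshold. The function name and sample data are illustrative, not part of any Spark API.

```python
# Pure-Python sketch of the per-threshold metrics available in Scala's
# BinaryClassificationMetrics (precisionByThreshold / recallByThreshold)
# but not exposed in pyspark.mllib at the time of this issue.
# Input: (score, label) pairs, label 1.0 = positive, 0.0 = negative.

def precision_recall_by_threshold(score_and_labels):
    """For each distinct score treated as a decision threshold,
    return (threshold, precision, recall) tuples."""
    total_pos = sum(1 for _, label in score_and_labels if label == 1.0)
    results = []
    # Walk thresholds from highest score to lowest.
    for threshold in sorted({s for s, _ in score_and_labels}, reverse=True):
        predicted_pos = [(s, l) for s, l in score_and_labels if s >= threshold]
        tp = sum(1 for _, l in predicted_pos if l == 1.0)
        precision = tp / len(predicted_pos)
        recall = tp / total_pos
        results.append((threshold, precision, recall))
    return results

# Hypothetical model scores and ground-truth labels.
data = [(0.9, 1.0), (0.8, 1.0), (0.6, 0.0), (0.4, 1.0), (0.2, 0.0)]
for t, p, r in precision_recall_by_threshold(data):
    print(f"threshold={t:.1f} precision={p:.2f} recall={r:.2f}")
```

With a Scala RDD of (score, label) pairs, the equivalent results come directly from `new BinaryClassificationMetrics(scoreAndLabels).precisionByThreshold()` and `.recallByThreshold()`; the sketch above simply shows what those curves contain.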
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org