Posted to issues@spark.apache.org by "Ajay Saini (JIRA)" <ji...@apache.org> on 2017/07/26 23:05:00 UTC
[jira] [Created] (SPARK-21542) Helper functions for custom Python Persistence
Ajay Saini created SPARK-21542:
----------------------------------
Summary: Helper functions for custom Python Persistence
Key: SPARK-21542
URL: https://issues.apache.org/jira/browse/SPARK-21542
Project: Spark
Issue Type: New Feature
Components: ML, PySpark
Affects Versions: 2.2.0
Reporter: Ajay Saini
Currently, there is no way to easily persist JSON-serializable parameters in Python only. All parameters in Python are persisted by converting them to Java objects and using the Java persistence implementation. To facilitate the creation of custom Python-only pipeline stages, it would be good to have a Python-only persistence framework so that these stages do not need to be implemented in Scala just to support persistence.
This task involves:
- Adding implementations for DefaultParamsReadable, DefaultParamsWritable, DefaultParamsReader, and DefaultParamsWriter in PySpark.
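To illustrate the idea, here is a minimal, self-contained sketch of the kind of JSON-based param persistence being proposed. This is not Spark's actual implementation: the mixin and class names (`ParamsWritable`, `ParamsReadable`, `MyPythonStage`) are hypothetical stand-ins for the DefaultParams* helpers, and real Spark stages would write metadata to a path via a writer/reader object rather than returning a string.

```python
import json

class ParamsWritable:
    """Hypothetical mixin: serialize JSON-serializable params to metadata,
    with no round-trip through Java objects."""
    def write_metadata(self):
        return json.dumps({
            "class": type(self).__name__,
            "paramMap": self.params,
        })

class ParamsReadable:
    """Hypothetical mixin: rebuild a stage from its JSON metadata."""
    @classmethod
    def read_metadata(cls, metadata_json):
        metadata = json.loads(metadata_json)
        stage = cls()
        stage.params = metadata["paramMap"]
        return stage

class MyPythonStage(ParamsWritable, ParamsReadable):
    """A Python-only pipeline stage whose params are plain JSON values."""
    def __init__(self, threshold=0.5):
        self.params = {"threshold": threshold}

# Save and restore the stage entirely in Python.
saved = MyPythonStage(threshold=0.8).write_metadata()
restored = MyPythonStage.read_metadata(saved)
print(restored.params["threshold"])  # 0.8
```

The point of the sketch: because the param map is already JSON-serializable, persistence needs no Scala counterpart for the stage, which is exactly what the DefaultParams* helpers would provide in pyspark.ml.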
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org