Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:16:33 UTC

[jira] [Resolved] (SPARK-17048) ML model read for custom transformers in a pipeline does not work

     [ https://issues.apache.org/jira/browse/SPARK-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-17048.
----------------------------------
    Resolution: Incomplete

> ML model read for custom transformers in a pipeline does not work 
> ------------------------------------------------------------------
>
>                 Key: SPARK-17048
>                 URL: https://issues.apache.org/jira/browse/SPARK-17048
>             Project: Spark
>          Issue Type: Bug
>          Components: ML
>    Affects Versions: 2.0.0
>         Environment: Spark 2.0.0
> Java API
>            Reporter: Taras Matyashovskyy
>            Priority: Major
>              Labels: bulk-closed, easyfix, features
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> 0. Use the Java API :(
> 1. Create any custom ML transformer
> 2. Make it MLReadable and MLWritable
> 3. Add it to a pipeline
> 4. Fit and evaluate a model, e.g. a CrossValidatorModel, and save the result to disk
> 5. For the custom transformer you can use DefaultParamsReader and DefaultParamsWriter, for instance
> 6. Load the model from the saved directory
> 7. All out-of-the-box objects load successfully, e.g. Pipeline, Evaluator, etc.
> 8. Your custom transformer fails with a NullPointerException (NPE); a workaround sketch follows below
> Reason:
> ReadWrite.scala:447
> cls.getMethod("read").invoke(null).asInstanceOf[MLReader[T]].load(path)
> Method.invoke(null) only works for static methods; invoking an instance
> method with a null receiver throws a NullPointerException.
> A Java class that implements MLReadable or MLWritable exposes read() as an
> instance method, so this reflective call fails unless the class also
> declares its own static read() (a minimal demo follows below).
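
A minimal, self-contained Java demo of the failure mode described above (not
part of the original report; the class names InvokeNullDemo, InstanceRead,
and StaticRead are illustrative): Method.invoke(null) succeeds when read()
is static, because the receiver argument is ignored for static methods, and
throws a NullPointerException when read() is an instance method.

    import java.lang.reflect.Method;

    public class InvokeNullDemo {

        // Stand-in for a class whose read() is an instance method, as it is
        // when a Java class merely implements MLReadable.
        public static class InstanceRead {
            public String read() { return "instance read"; }
        }

        // Stand-in for a class exposing the static read() that the
        // reflective call in ReadWrite.scala actually needs.
        public static class StaticRead {
            public static String read() { return "static read"; }
        }

        public static void main(String[] args) throws Exception {
            Method staticRead = StaticRead.class.getMethod("read");
            System.out.println(staticRead.invoke(null)); // ok: receiver ignored

            Method instanceRead = InstanceRead.class.getMethod("read");
            instanceRead.invoke(null); // throws NullPointerException
        }
    }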
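And a hedged sketch of the workaround implied by step 8, assuming the Spark
2.x ML Java APIs: declare read() (and, conventionally, load()) as static
methods directly on the custom transformer class, so the reflective lookup
finds a method it can invoke without an instance. MyTransformer and its
reader/writer stubs are hypothetical names, not from the report.

    import java.io.IOException;

    import org.apache.spark.ml.Transformer;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.util.MLReader;
    import org.apache.spark.ml.util.MLWritable;
    import org.apache.spark.ml.util.MLWriter;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.types.StructType;

    public class MyTransformer extends Transformer implements MLWritable {

        private final String uid;

        public MyTransformer(String uid) { this.uid = uid; }

        @Override public String uid() { return uid; }

        // Identity transform, for illustration only.
        @Override public Dataset<Row> transform(Dataset<?> dataset) {
            return dataset.toDF();
        }

        @Override public StructType transformSchema(StructType schema) {
            return schema;
        }

        @Override public Transformer copy(ParamMap extra) {
            return defaultCopy(extra);
        }

        @Override public MLWriter write() { return new MyTransformerWriter(this); }

        // With Scala-2.11-built Spark, concrete trait methods are not Java
        // default methods, so MLWritable's save() must be implemented here too.
        @Override public void save(String path) throws IOException {
            write().save(path);
        }

        // The crucial part: read() must be STATIC so that Spark's reflective
        // cls.getMethod("read").invoke(null) finds a method it can call with
        // a null receiver. An instance-level read() alone causes the NPE.
        public static MLReader<MyTransformer> read() {
            return new MyTransformerReader();
        }

        public static MyTransformer load(String path) { return read().load(path); }

        // Minimal writer/reader stubs; the DefaultParamsWriter/Reader
        // mentioned in the report can replace these where visibility allows.
        private static class MyTransformerWriter extends MLWriter {
            private final MyTransformer instance; // used by a real saveImpl
            MyTransformerWriter(MyTransformer instance) { this.instance = instance; }
            @Override public void saveImpl(String path) {
                // persist uid and params under `path`, e.g. as JSON metadata
            }
        }

        private static class MyTransformerReader extends MLReader<MyTransformer> {
            @Override public MyTransformer load(String path) {
                // restore uid and params from `path`; fixed uid is illustrative
                return new MyTransformer("myCustomTransformer");
            }
        }
    }

This is the same shape that Scala code gets for free: a companion object's
read() is compiled with a static forwarder on the class, which is why the
reflective call works for Spark's built-in pipeline stages.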


