Posted to issues@spark.apache.org by "Tomas Nykodym (JIRA)" <ji...@apache.org> on 2018/01/26 23:19:00 UTC
[jira] [Created] (SPARK-23244) Incorrect handling of default values
when deserializing python wrappers of scala transformers
Tomas Nykodym created SPARK-23244:
-------------------------------------
Summary: Incorrect handling of default values when deserializing python wrappers of scala transformers
Key: SPARK-23244
URL: https://issues.apache.org/jira/browse/SPARK-23244
Project: Spark
Issue Type: Bug
Components: MLlib
Affects Versions: 2.2.1
Reporter: Tomas Nykodym
Default values are not handled properly when serializing/deserializing python transformers which are wrappers around scala objects. It looks like after deserialization the default values which were based on the uid do not get properly restored, and values which were not explicitly set end up marked as set to the original instance's default values.
Here's a simple code example using Bucketizer:
```python
>>> from pyspark.ml.feature import Bucketizer
>>> a = Bucketizer()
>>> a.save("bucketizer0")
>>> b = Bucketizer.load("bucketizer0")
>>> a._defaultParamMap[a.outputCol]
u'Bucketizer_440bb49206c148989db7__output'
>>> b._defaultParamMap[b.outputCol]
u'Bucketizer_41cf9afbc559ca2bfc9a__output'
>>> a.isSet(a.outputCol)
False
>>> b.isSet(b.outputCol)
True
>>> a.getOutputCol()
u'Bucketizer_440bb49206c148989db7__output'
>>> b.getOutputCol()
u'Bucketizer_440bb49206c148989db7__output'
```
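To make the failure mode easier to see without a Spark installation, here is a minimal pure-Python sketch of the pattern described above. The `Transformer` class and `buggy_load` function are illustrative stand-ins invented for this sketch, not pyspark API; they only mimic how a uid-derived default, once written out on save, can come back on load as an explicitly *set* param carrying the old instance's uid:

```python
import uuid

class Transformer:
    """Illustrative stand-in for a pyspark.ml Params object (not real API)."""
    def __init__(self):
        self.uid = "Bucketizer_" + uuid.uuid4().hex[:20]
        # Default output column is derived from the uid, as in pyspark.ml.
        self._defaults = {"outputCol": self.uid + "__output"}
        self._set = {}  # params the user explicitly set

    def is_set(self, name):
        return name in self._set

    def get(self, name):
        return self._set.get(name, self._defaults[name])

def buggy_load(saved_params):
    """Mimics the faulty restore path: every saved value, including the
    old uid-based default, is applied to the new instance as an
    explicitly set param, instead of being re-derived as a default."""
    t = Transformer()  # fresh instance gets its own uid and defaults
    for name, value in saved_params.items():
        t._set[name] = value  # old default wrongly promoted to "set"
    return t

a = Transformer()
saved = {"outputCol": a.get("outputCol")}   # what save() would write out
b = buggy_load(saved)

assert not a.is_set("outputCol")            # original: default, not set
assert b.is_set("outputCol")                # restored copy: wrongly set
assert b.get("outputCol") == a.get("outputCol")        # stale old-uid value
assert b._defaults["outputCol"] != a._defaults["outputCol"]  # defaults differ
```

This reproduces the shape of the pyspark transcript above: the loaded object's `_defaultParamMap` differs from the original's (fresh uid), yet `isSet` flips to `True` and the getter returns the old instance's uid-based default.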
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)