Posted to issues@spark.apache.org by "Bago Amirbekian (JIRA)" <ji...@apache.org> on 2018/02/27 21:18:00 UTC

[jira] [Commented] (SPARK-19947) RFormulaModel always throws Exception on transforming data with NULL or Unseen labels

    [ https://issues.apache.org/jira/browse/SPARK-19947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379282#comment-16379282 ] 

Bago Amirbekian commented on SPARK-19947:
-----------------------------------------

I think this was resolved by https://github.com/apache/spark/pull/18496 and https://github.com/apache/spark/pull/18613.
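
For anyone landing here later, a minimal sketch of the handleInvalid parameter those PRs exposed on RFormula (assuming Spark 2.3.0 or later; the DataFrames and column names below are illustrative, not taken from this issue):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.ml.feature.RFormula

    val spark = SparkSession.builder().appName("rformula-handle-invalid").getOrCreate()
    import spark.implicits._

    // Toy data (illustrative): category "b" never appears in training.
    val train = Seq(("a", 1.0, 0.0), ("a", 2.0, 1.0)).toDF("cat", "x", "label")
    val score = Seq(("a", 3.0, 0.0), ("b", 4.0, 1.0)).toDF("cat", "x", "label")

    // "keep" puts unseen string values into an extra index bucket instead
    // of throwing at transform time; "skip" drops such rows; "error" is
    // the old (default) behaviour.
    val formula = new RFormula()
      .setFormula("label ~ cat + x")
      .setHandleInvalid("keep")

    val model = formula.fit(train)
    model.transform(score).show()

Note that, depending on the Spark version, NULLs in numeric feature columns may still need to be imputed or filtered before assembling.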

> RFormulaModel always throws Exception on transforming data with NULL or Unseen labels
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-19947
>                 URL: https://issues.apache.org/jira/browse/SPARK-19947
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>    Affects Versions: 2.1.0
>            Reporter: Andrey Yatsuk
>            Priority: Major
>
> I have a trained ML model and a big table stored in Parquet. I want to add a new column with predicted values to this table. I can't lose any rows, but the table may contain null values.
> RFormula.fit() creates a new StringIndexer with the default parameter handleInvalid="error". VectorAssembler also throws an exception on NULL values. So I must call df.na.drop() before transforming this DataFrame, and I don't want to do that.
> We need to add a new parameter to RFormula, like handleInvalid in StringIndexer.
> Alternatively, add a transform(Seq<Column>): Vector method that users could apply as a UDF, e.g. df.withColumn("predicted", functions.callUDF(rFormulaModel::transform, Seq<Column>)).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org