Posted to issues@spark.apache.org by "Vinod KC (JIRA)" <ji...@apache.org> on 2015/09/01 08:57:45 UTC
[jira] [Comment Edited] (SPARK-10199) Avoid using reflections for parquet model save
[ https://issues.apache.org/jira/browse/SPARK-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724459#comment-14724459 ]
Vinod KC edited comment on SPARK-10199 at 9/1/15 6:57 AM:
----------------------------------------------------------
[~mengxr]
1) I measured only the schema-inference part.
Now I will measure the entire save/load operation and the schema-inference part separately.
2) Also, I will run the tests multiple times and share the results.
> Avoid using reflections for parquet model save
> ----------------------------------------------
>
> Key: SPARK-10199
> URL: https://issues.apache.org/jira/browse/SPARK-10199
> Project: Spark
> Issue Type: Improvement
> Components: ML, MLlib
> Reporter: Feynman Liang
> Priority: Minor
>
> These items are not high priority since the overhead of writing to Parquet is much greater than that of runtime reflection.
> Multiple model save/load implementations in MLlib use case classes to infer a schema for the data frame saved to Parquet. However, inferring a schema from case classes or tuples uses [runtime reflection|https://github.com/apache/spark/blob/d7b4c095271c36fcc7f9ded267ecf5ec66fac803/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L361], which is unnecessary since the types are already known at the time {{save}} is called.
> It would be better to just specify the schema for the data frame directly using {{sqlContext.createDataFrame(dataRDD, schema)}}.
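> A minimal sketch of the proposed approach (field names and types are illustrative, not the actual MLlib model schema; {{weights}} is simplified to an array of doubles rather than a {{Vector}}):

```scala
import org.apache.spark.sql.types._

// Reflection-based (current) approach: the schema is inferred at runtime from
// a case class or tuple, e.g.
//   sqlContext.createDataFrame(Seq((model.weights, model.intercept))).write.parquet(path)

// Proposed approach: declare the schema explicitly, since the types are
// already known when `save` is called.
val schema = StructType(Seq(
  StructField("weights", ArrayType(DoubleType, containsNull = false), nullable = false),
  StructField("intercept", DoubleType, nullable = false)
))

// dataRDD is an RDD[Row] whose rows match the schema above (hypothetical):
//   val dataRDD = sc.parallelize(Seq(Row(model.weights.toArray.toSeq, model.intercept)))
//   sqlContext.createDataFrame(dataRDD, schema).write.parquet(path)
```

> This skips the reflective schema inference entirely: {{createDataFrame(dataRDD, schema)}} takes the schema as given rather than deriving it from runtime type information.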
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org