Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/04/24 02:28:40 UTC

[jira] [Updated] (SPARK-6748) QueryPlan.schema should be a lazy val to avoid creating excessive duplicate StructType objects

     [ https://issues.apache.org/jira/browse/SPARK-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-6748:
-----------------------------
    Assignee: Cheng Lian

> QueryPlan.schema should be a lazy val to avoid creating excessive duplicate StructType objects
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-6748
>                 URL: https://issues.apache.org/jira/browse/SPARK-6748
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>             Fix For: 1.4.0
>
>
> Spotted this issue while trying to do a simple micro benchmark:
> {code}
> // note: toDF requires "import sqlContext.implicits._" if it is not already in scope
> sc.parallelize(1 to 10000000).
>   map(i => (i, s"val_$i")).
>   toDF("key", "value").
>   saveAsParquetFile("file:///tmp/src.parquet")
> sqlContext.parquetFile("file:///tmp/src.parquet").collect()
> {code}
> YJP profiling results showed that *10 million {{StructType}}, 10 million {{StructField\[\]}}, and 20 million {{StructField}} objects were allocated*.
> It turned out that {{DataFrame.collect()}} calls {{SparkPlan.executeCollect()}}, which consists of a single line:
> {code}
> execute().map(ScalaReflection.convertRowToScala(_, schema)).collect()
> {code}
> The problem is that {{QueryPlan.schema}} is a {{def}} rather than a {{lazy val}}, and since 1.3.0 {{convertRowToScala}} returns a {{GenericRowWithSchema}}. Because the placeholder expression {{convertRowToScala(_, schema)}} expands to {{row => convertRowToScala(row, schema)}}, {{schema}} is re-evaluated inside the per-row closure, so the 10 million rows each end up carrying a separately allocated schema object.
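> A minimal sketch (not the actual Spark source; the object and member names below are made up for illustration) of the difference a {{lazy val}} makes when the schema expression sits inside a per-row closure:
> {code}
> import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
>
> object SchemaAllocationSketch {
>   // Mimics the current QueryPlan.schema: a def, so every access builds a
>   // fresh StructType plus its backing StructField array.
>   def schemaAsDef: StructType = StructType(Seq(
>     StructField("key", IntegerType),
>     StructField("value", StringType)))
>
>   // The proposed fix: a lazy val is computed once and shared by all rows.
>   lazy val schemaAsLazyVal: StructType = StructType(Seq(
>     StructField("key", IntegerType),
>     StructField("value", StringType)))
>
>   def main(args: Array[String]): Unit = {
>     val rows = 1 to 1000
>     // Same shape as execute().map(ScalaReflection.convertRowToScala(_, schema)):
>     // the schema expression is evaluated once per element of the collection.
>     val withDef     = rows.map(_ => schemaAsDef).map(System.identityHashCode).distinct.size
>     val withLazyVal = rows.map(_ => schemaAsLazyVal).map(System.identityHashCode).distinct.size
>     println(s"distinct StructType instances via def:      $withDef")     // ~1000
>     println(s"distinct StructType instances via lazy val: $withLazyVal") // 1
>   }
> }
> {code}
> Scaled to the 10-million-row benchmark above, the {{def}} version accounts for the duplicate {{StructType}} and {{StructField}} allocations seen in the profile, while the {{lazy val}} version allocates a single schema that every {{GenericRowWithSchema}} can share.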



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org