Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/08/24 19:34:20 UTC

[jira] [Resolved] (SPARK-17092) DataFrame with large number of columns causing code generation error

     [ https://issues.apache.org/jira/browse/SPARK-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17092.
-------------------------------
    Resolution: Duplicate

> DataFrame with large number of columns causing code generation error
> --------------------------------------------------------------------
>
>                 Key: SPARK-17092
>                 URL: https://issues.apache.org/jira/browse/SPARK-17092
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>         Environment: Vanilla Spark (Hadoop 2.7, Scala 2.11) on CentOS Linux; a 9-slave cluster of Amazon AWS m3.2xlarge nodes.
>            Reporter: Aris Vlasakakis
>
> On vanilla Spark (Hadoop 2.7, Scala 2.11):
> When I use randomSplit on a DataFrame with several hundred columns, I get Janino code generation errors. The smallest number of columns that triggers the bug is around 500, possibly fewer.
> The error message:
> ```
> Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method "(Lorg/apache/spark/sql/catalyst/InternalRow;Lorg/apache/spark/sql/catalyst/InternalRow;)I" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificOrdering" grows beyond 64 KB
> ```
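> For context: the JVM caps the bytecode of any single method at 64 KB. In Spark 2.0, randomSplit internally sorts the DataFrame by every output column so that the split is deterministic, so the generated SpecificOrdering.compare method has to compare all ~500 columns inline in one method body, which overflows that limit.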
> Here is a small code sample that reproduces it in spark-shell:
> ```
> import org.apache.spark.sql.types.{DoubleType, StructType}
> import org.apache.spark.sql.{Row, SparkSession}
> val COLMAX: Int = 500   // number of columns; ~500 is enough to trigger the error
> val ROWSIZE: Int = 1000 // number of rows
> // Every row holds the values 1.0 to COLMAX (the Int argument is unused).
> val intToRow: Int => Row = (_: Int) => Row.fromSeq(Range.Double.inclusive(1.0, COLMAX.toDouble, 1.0).toSeq)
> val schema: StructType = (1 to COLMAX).foldLeft(new StructType())((s, i) => s.add(i.toString, DoubleType, nullable = true))
> val rdds = spark.sparkContext.parallelize((1 to ROWSIZE).map(intToRow))
> val df = spark.createDataFrame(rdds, schema)
> val Array(left, right) = df.randomSplit(Array(0.8, 0.2))
> // Forcing evaluation crashes with the JaninoRuntimeException above.
> left.count
> ```
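> A possible workaround (a sketch only; the helper column name __split and the seed are arbitrary choices) is to derive the split from a rand() column instead of calling randomSplit, since that avoids generating an ordering over all columns:
> ```
> import org.apache.spark.sql.functions.rand
> // Tag each row with a uniform random number, then filter on it.
> // Cache first: rand() is re-evaluated on recomputation, and caching keeps
> // the two halves consistent with each other.
> val withRand = df.withColumn("__split", rand(42L)).cache()
> val left = withRand.filter(withRand("__split") < 0.8).drop("__split")
> val right = withRand.filter(withRand("__split") >= 0.8).drop("__split")
> left.count
> ```
> As with randomSplit, the 0.8/0.2 proportions are only approximate.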



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org