Posted to issues@spark.apache.org by "Siddharth Dangi (JIRA)" <ji...@apache.org> on 2019/05/09 19:58:00 UTC

[jira] [Commented] (SPARK-23986) CompileException when using too many avg aggregation after joining

    [ https://issues.apache.org/jira/browse/SPARK-23986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836658#comment-16836658 ] 

Siddharth Dangi commented on SPARK-23986:
-----------------------------------------

[~pedromorfeu] I tried the workaround you mentioned above, but still encountered this issue (my code is below).

Since I don't have access to Spark 2.3.1, is there another workaround I can try with Spark 2.3.0?
{code:java}
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.{col, sum}

val input: DataFrame = ...                       // source DataFrame
val groupByColNames = List[String](...)          // grouping column names
val sumColNames = List[String](...)              // list of 25 Strings
val sumCols: List[Column] = sumColNames.map(name => sum(col(name)))

// code that causes the error
val output = input
  .groupBy(groupByColNames.map(col): _*)
  .agg(sumCols.head, sumCols.tail: _*)

// workaround I tried: split the 25 sums in half, aggregate each half
// separately, then join the two results back on the grouping columns
val middleIdx = sumCols.length / 2
val sumColsFirstHalf = sumCols.slice(0, middleIdx)
val sumColsSecondHalf = sumCols.slice(middleIdx, sumCols.length)

val grouped = input.groupBy(groupByColNames.map(col): _*)
val data1 = grouped.agg(sumColsFirstHalf.head, sumColsFirstHalf.tail: _*)
val data2 = grouped.agg(sumColsSecondHalf.head, sumColsSecondHalf.tail: _*)
val output2 = data1.join(data2, groupByColNames)
{code}
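
A fallback that might be worth trying on 2.3.0 (untested against this particular bug): the CompileException happens while compiling whole-stage-generated code, so disabling whole-stage codegen should route the plan through the interpreted path instead, at some performance cost.
{code:java}
// Untested idea: turn off whole-stage code generation so the problematic
// doConsume method is never generated (slower, but sidesteps the compile error).
sparkSession.conf.set("spark.sql.codegen.wholeStage", "false")
{code}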

> CompileException when using too many avg aggregation after joining
> ------------------------------------------------------------------
>
>                 Key: SPARK-23986
>                 URL: https://issues.apache.org/jira/browse/SPARK-23986
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Michel Davit
>            Assignee: Marco Gaido
>            Priority: Major
>             Fix For: 2.3.1, 2.4.0
>
>         Attachments: spark-generated.java
>
>
> Considering the following code:
> {code:java}
>     import org.apache.spark.sql.functions.avg
>     import sparkSession.implicits._  // needed for .toDF on the RDDs below
>     val df1: DataFrame = sparkSession.sparkContext
>       .makeRDD(Seq((0, 1, 2, 3, 4, 5, 6)))
>       .toDF("key", "col1", "col2", "col3", "col4", "col5", "col6")
>     val df2: DataFrame = sparkSession.sparkContext
>       .makeRDD(Seq((0, "val1", "val2")))
>       .toDF("key", "dummy1", "dummy2")
>     val agg = df1
>       .join(df2, df1("key") === df2("key"), "leftouter")
>       .groupBy(df1("key"))
>       .agg(
>         avg("col2").as("avg2"),
>         avg("col3").as("avg3"),
>         avg("col4").as("avg4"),
>         avg("col1").as("avg1"),
>         avg("col5").as("avg5"),
>         avg("col6").as("avg6")
>       )
>     val head = agg.take(1)
> {code}
> This logs the following exception:
> {code:java}
> ERROR CodeGenerator: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 467, Column 28: Redefinition of parameter "agg_expr_11"
> {code}
> I am not a Spark expert, but after some investigation I realized that the generated {{doConsume}} method is responsible for the exception.
> Indeed, {{avg}} causes {{org.apache.spark.sql.execution.CodegenSupport.constructDoConsumeFunction}} to be called several times: the 1st time with the 'avg' Exprs and a second time for the base aggregation Exprs (count and sum).
> The problem comes from the generation of parameters in CodeGenerator:
> {code:java}
>   /**
>    * Returns a term name that is unique within this instance of a `CodegenContext`.
>    */
>   def freshName(name: String): String = synchronized {
>     val fullName = if (freshNamePrefix == "") {
>       name
>     } else {
>       s"${freshNamePrefix}_$name"
>     }
>     if (freshNameIds.contains(fullName)) {
>       val id = freshNameIds(fullName)
>       freshNameIds(fullName) = id + 1
>       s"$fullName$id"
>     } else {
>       freshNameIds += fullName -> 1
>       fullName
>     }
>   }
> {code}
> The {{freshNameIds}} map already contains {{agg_expr_[1..6]}} from the 1st call.
>  The second call is made with {{agg_expr_[1..12]}} and generates the following names:
>  {{agg_expr_[11|21|31|41|51|61|11|12]}}. We then have a parameter name conflict in the generated code: {{agg_expr_11}}.
> Appending the 'id' in {{s"$fullName$id"}} to generate a unique term name is the source of the conflict. Maybe simply inserting an underscore would solve the issue: {{s"${fullName}_$id"}}.
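
To make the collision above easier to follow, here is a small self-contained sketch (simplified from the {{freshName}} snippet quoted above; {{freshNamePrefix}} handling and synchronization are omitted, and the two passes only approximate the real {{doConsume}} calls) that reproduces the duplicated {{agg_expr_11}}:
{code:java}
import scala.collection.mutable

object FreshNameDemo extends App {
  val freshNameIds = mutable.HashMap.empty[String, Int]

  // Same bookkeeping as the CodegenContext.freshName quoted above.
  def freshName(name: String): String =
    if (freshNameIds.contains(name)) {
      val id = freshNameIds(name)
      freshNameIds(name) = id + 1
      s"$name$id" // "agg_expr_1" with id 1 becomes "agg_expr_11"
    } else {
      freshNameIds += name -> 1
      name
    }

  // 1st pass registers agg_expr_1..6 (the 'avg' Exprs).
  (1 to 6).foreach(i => freshName(s"agg_expr_$i"))

  // 2nd pass asks for agg_expr_1..12 (count and sum for each avg).
  val second = (1 to 12).map(i => freshName(s"agg_expr_$i"))
  println(second.mkString(", "))

  // "agg_expr_1" is renamed to "agg_expr_11", yet "agg_expr_11" itself is
  // later handed out verbatim, so the same name appears twice.
  assert(second.distinct.size < second.size)
}
{code}
With the underscore scheme suggested above, the renamed value would be {{agg_expr_1_1}}, which cannot clash with any requested {{agg_expr_N}} base name.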



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
