Posted to reviews@spark.apache.org by mgaido91 <gi...@git.apache.org> on 2017/10/26 09:08:48 UTC

[GitHub] spark pull request #19563: [SPARK-22284][SQL] Fix 64KB JVM bytecode limit pr...

Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19563#discussion_r147084523
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/hash.scala ---
    @@ -389,9 +389,15 @@ abstract class HashExpression[E] extends Expression {
           input: String,
           result: String,
           fields: Array[StructField]): String = {
    -    fields.zipWithIndex.map { case (field, index) =>
    +    val hashes = fields.zipWithIndex.map { case (field, index) =>
           nullSafeElementHash(input, index.toString, field.nullable, field.dataType, result, ctx)
    -    }.mkString("\n")
    +    }
    +    val args = if (ctx.INPUT_ROW != null) {
    +      Seq(("InternalRow", input), ("InternalRow", ctx.INPUT_ROW))
    --- End diff --
    
    Sorry, I don't understand why you need to pass `ctx.INPUT_ROW` as an argument; could you please explain? Thanks.
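    For context, the question is about scoping: once the per-field hash snippets are split out of one large generated method to stay under the 64KB bytecode limit, they can no longer see the locals of that method, so anything they reference, including the current input row, has to be passed in as a parameter. A minimal sketch of the pattern in plain Scala, with hypothetical names rather than Spark's actual generated code:

        object SplitCodegenSketch {
          final case class InputRow(values: Array[Any])

          // Before splitting, this body lived inline where `row` was a local of the
          // enclosing method; after extraction it can only see `row` as a parameter.
          private def hashElement(row: InputRow, index: Int, seed: Long): Long = {
            val v = row.values(index)
            if (v == null) seed else 31L * seed + v.hashCode.toLong
          }

          def computeHash(row: InputRow): Long =
            row.values.indices.foldLeft(42L)((acc, i) => hashElement(row, i, acc))
        }

    This is presumably why `ctx.INPUT_ROW` appears in `args` alongside `input`: the nested hash code may refer to the top-level row, which is only a local of the outer generated method.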


---
