Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2018/01/30 08:39:00 UTC

[jira] [Created] (SPARK-23267) Increase spark.sql.codegen.hugeMethodLimit to 65535

Xiao Li created SPARK-23267:
-------------------------------

             Summary: Increase spark.sql.codegen.hugeMethodLimit to 65535
                 Key: SPARK-23267
                 URL: https://issues.apache.org/jira/browse/SPARK-23267
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.0
            Reporter: Xiao Li
            Assignee: Xiao Li


We are still seeing the performance regression introduced by `spark.sql.codegen.hugeMethodLimit` in our internal workloads. There are two major issues in the current solution.
 * The size of the compiled bytecode is not identical to the bytecode size of the generated method, so the detection is still not accurate.
 * The bytecode size of a single operator (e.g., `SerializeFromObject`) can still exceed the 8K limit, and we saw a performance regression in that scenario.

Since we are close to the 2.3 release, we decided to increase the limit to 64K to avoid the perf regression.
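For illustration, a minimal sketch of how the limit can be raised at the session level (the config key is the one discussed above; the application name is a placeholder, and 65535 corresponds to the JVM's hard cap on a single method's bytecode size, since `code_length` is an unsigned 16-bit field in the class file format):

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: override spark.sql.codegen.hugeMethodLimit for one session.
// Generated methods whose compiled size exceeds this threshold cause Spark
// to fall back from whole-stage codegen; 65535 effectively disables the
// fallback short of the JVM's own method-size limit.
val spark = SparkSession.builder()
  .appName("hugeMethodLimit-demo") // placeholder name
  .config("spark.sql.codegen.hugeMethodLimit", 65535)
  .getOrCreate()
```

The same override can also be passed via `--conf spark.sql.codegen.hugeMethodLimit=65535` on `spark-submit`, which avoids a code change.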



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org