Posted to issues@spark.apache.org by "Davies Liu (JIRA)" <ji...@apache.org> on 2016/04/11 19:36:25 UTC

[jira] [Created] (SPARK-14538) Increase the default stack size of spark shell

Davies Liu created SPARK-14538:
----------------------------------

             Summary: Increase the default stack size of spark shell
                 Key: SPARK-14538
                 URL: https://issues.apache.org/jira/browse/SPARK-14538
             Project: Spark
          Issue Type: Bug
          Components: SQL
            Reporter: Davies Liu
            Assignee: Davies Liu


The default stack size of a 64-bit JVM is 1M, so the new SQL parser may fail with a StackOverflowError if the SQL query is very long. The analyzer/optimizer also use very deep stacks, so they may fail on complicated queries as well.
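A minimal sketch of how to hit this in spark-shell (the 10000-term count is just an example; the actual threshold depends on the JVM and platform):

    // build a WHERE clause with thousands of OR'ed predicates so the
    // parser/analyzer recurse deeply; 10000 terms is an arbitrary guess
    sqlContext.range(100).registerTempTable("t")
    val predicates = (1 to 10000).map(i => s"id = $i").mkString(" OR ")
    sqlContext.sql(s"SELECT * FROM t WHERE $predicates").count()
    // may throw java.lang.StackOverflowError under the default -Xss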

We should increase the default stack size for Spark (at least for the driver) to 2M or 4M.
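Until the default changes, a user can work around it by raising the stack size by hand through the JVM's standard -Xss option, e.g.:

    # raise the driver's thread stack size to 4M when launching spark-shell
    ./bin/spark-shell --driver-java-options "-Xss4m"

    # equivalently, via the configuration property
    ./bin/spark-shell --conf spark.driver.extraJavaOptions=-Xss4m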

The downside of this is that we will increase the memory usage of each thread. Right now, a fresh spark-shell (with HiveContext) takes 814M on my mac; it increases to 886M if I change the stack size to 4M (with about 20+ threads).
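As a rough sanity check on those numbers: 886M - 814M = 72M, which is consistent with roughly 24 threads each reserving about 3M of additional stack when -Xss goes from 1M to 4M (actual consumption depends on how much stack each thread really commits).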

cc [~rxin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org