Posted to issues@spark.apache.org by "panbingkun (Jira)" <ji...@apache.org> on 2022/04/20 01:46:00 UTC
[jira] [Updated] (SPARK-38960) Spark should fail fast if initial memory is too large (set by "spark.executor.extraJavaOptions") for the executor to start
[ https://issues.apache.org/jira/browse/SPARK-38960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
panbingkun updated SPARK-38960:
-------------------------------
Description:
If the initial heap size (set via "spark.executor.extraJavaOptions=-Xms\{XXX}G") is larger than the maximum heap size (set via "spark.executor.memory"), e.g.:
*spark.executor.memory=1G*
*spark.executor.extraJavaOptions=-Xms2G*
then from the driver process you only see executor failures with no explanation, since the more meaningful errors are buried in the executor logs.
E.g., on YARN, you see:
{noformat}
Error occurred during initialization of VM
Initial heap size set to a larger value than the maximum heap size{noformat}
Instead, Spark should fail fast with a clear error message in the driver logs.
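The proposed fail-fast check could look roughly like the following. This is a minimal sketch in Java, not Spark's actual code: the class and method names are hypothetical, and it only parses the -Xms forms with an optional k/m/g suffix.
{noformat}
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class XmsCheck {

    // Returns the -Xms value in bytes, or -1 if no -Xms flag is present.
    static long parseXmsBytes(String javaOpts) {
        Matcher m = Pattern.compile("-Xms(\\d+)([kKmMgG]?)").matcher(javaOpts);
        if (!m.find()) {
            return -1;
        }
        long value = Long.parseLong(m.group(1));
        switch (m.group(2).toLowerCase(Locale.ROOT)) {
            case "k": return value << 10;
            case "m": return value << 20;
            case "g": return value << 30;
            default:  return value; // no suffix: plain bytes
        }
    }

    // True when the configured initial heap exceeds the executor heap cap.
    static boolean initialHeapTooLarge(String extraJavaOptions, long executorMemoryBytes) {
        return parseXmsBytes(extraJavaOptions) > executorMemoryBytes;
    }

    public static void main(String[] args) {
        long executorMemory = 1L << 30; // spark.executor.memory=1G
        if (initialHeapTooLarge("-Xms2G", executorMemory)) {
            // This is where the driver could fail fast with a clear message,
            // instead of letting every executor JVM die at startup.
            System.out.println("Initial heap (-Xms) in spark.executor.extraJavaOptions "
                + "exceeds spark.executor.memory");
        }
    }
}{noformat}
Running the check on the driver side, before any executors are requested, would surface the misconfiguration once in the driver log rather than as repeated silent executor failures.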
> Spark should fail fast if initial memory is too large (set by "spark.executor.extraJavaOptions") for the executor to start
> ------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-38960
> URL: https://issues.apache.org/jira/browse/SPARK-38960
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core, Spark Submit, YARN
> Affects Versions: 3.4.0
> Reporter: panbingkun
> Priority: Minor
> Fix For: 3.4.0
>
>
--
This message was sent by Atlassian Jira
(v8.20.7#820007)