Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:20:12 UTC
[jira] [Updated] (SPARK-13216) Spark streaming application not honoring --num-executors in restarting of an application from a checkpoint
[ https://issues.apache.org/jira/browse/SPARK-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-13216:
---------------------------------
Labels: Streaming bulk-closed (was: Streaming)
> Spark streaming application not honoring --num-executors in restarting of an application from a checkpoint
> ----------------------------------------------------------------------------------------------------------
>
> Key: SPARK-13216
> URL: https://issues.apache.org/jira/browse/SPARK-13216
> Project: Spark
> Issue Type: Bug
> Components: DStreams, Spark Submit
> Affects Versions: 1.5.0
> Reporter: Neelesh Srinivas Salian
> Priority: Minor
> Labels: Streaming, bulk-closed
>
> Scenario to help understand:
> 1) A Spark streaming job with 12 executors was started with checkpointing enabled.
> 2) In version 1.3, the user could increase the number of executors to 20 on restart by passing --num-executors, but this no longer works in version 1.5.
> In 1.5, the application still runs with 13 containers (1 for the driver and 12 executors).
> The application needs to resume from the checkpoint rather than being restarted fresh, to avoid losing state.
> 3) Checked the code in 1.3 and 1.5, which shows the "--num-executors" option has been deprecated.
> Any thoughts on this? Not sure if anyone has hit this one specifically before.
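The behavior described above is consistent with how checkpoint recovery works: when a streaming application is restarted from a checkpoint, the context is rebuilt from the configuration saved in the checkpoint, so resource flags passed to spark-submit on restart can be ignored. A minimal sketch of that "saved conf wins" logic, in plain Python with no real Spark (function and key names here are illustrative, not Spark's actual API):

```python
# Illustrative model (not Spark's actual API) of a checkpoint-based restart:
# a configuration saved in the checkpoint takes precedence over new CLI flags.

def get_or_create(checkpoint, new_conf):
    """Return the effective configuration for a (re)started application.

    checkpoint: dict holding the conf saved at checkpoint time, or None.
    new_conf:   conf derived from the current spark-submit flags.
    """
    if checkpoint is not None:
        # Restore path: the checkpointed conf is reused wholesale,
        # so changed flags such as --num-executors have no effect.
        return dict(checkpoint["conf"])
    # Fresh start: the CLI-derived conf applies.
    return dict(new_conf)

# First run: 12 executors requested on the command line.
first = get_or_create(None, {"spark.executor.instances": "12"})
ckpt = {"conf": first}

# Restart from the checkpoint with --num-executors 20: still 12.
restarted = get_or_create(ckpt, {"spark.executor.instances": "20"})
```

Under this model, the only ways to change the executor count are to start without the checkpoint (losing state) or to have the restore path merge selected settings from the new conf, which is essentially what the reporter is asking for.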
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org