Posted to issues@spark.apache.org by "Saisai Shao (JIRA)" <ji...@apache.org> on 2018/01/12 05:38:00 UTC

[jira] [Commented] (SPARK-22958) Spark is stuck when the only one executor fails to register with driver

    [ https://issues.apache.org/jira/browse/SPARK-22958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323569#comment-16323569 ] 

Saisai Shao commented on SPARK-22958:
-------------------------------------

If an executor fails to register itself with the driver, it will exit on its own after a timeout. In your case, Spark on YARN, the exit of the container will be detected by the NM and reported back to the RM and AM; the AM will then readjust the running executor count and launch a new executor. So I suspect the issue you hit may not be exactly what you described above.
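The behavior described above can be sketched as follows. This is a minimal illustration, not Spark's actual code; the names (`registerOrGiveUp`, `tryRegister`, `maxAttempts`) are hypothetical:

```java
import java.util.function.BooleanSupplier;

// Minimal sketch (not Spark's actual code) of the executor-side behavior
// described above: an executor that cannot register with the driver within
// a bounded number of attempts gives up, so the container exit becomes
// visible to the NodeManager, which reports it to the RM and AM.
public class RegistrationSketch {
    // Returns true if registration succeeded within maxAttempts;
    // a real executor would exit the JVM on false.
    public static boolean registerOrGiveUp(BooleanSupplier tryRegister,
                                           int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (tryRegister.getAsBoolean()) {
                return true; // driver acknowledged the registration (step 5)
            }
        }
        return false; // give up; the NM then reports the dead container
    }
}
```

If this exit path is taken, the AM sees the container failure and requests a replacement, which is why the reported permanent hang is surprising.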

> Spark is stuck when the only one executor fails to register with driver
> -----------------------------------------------------------------------
>
>                 Key: SPARK-22958
>                 URL: https://issues.apache.org/jira/browse/SPARK-22958
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.1.0
>            Reporter: Shaoquan Zhang
>         Attachments: How new executor is registered.png
>
>
> We have encountered the following scenario. We ran a very simple job in yarn cluster mode. This job needs only one executor to complete. During the run, the job was stuck forever.
> After checking the job log, we found an issue in Spark. When an executor fails to register with the driver, YarnAllocator has no way of knowing it. As a result, the variable (numExecutorsRunning) maintained by YarnAllocator does not reflect the truth. When this variable is used to allocate resources to the running job, a misunderstanding arises. For our job, this misunderstanding resulted in the job being stuck forever.
> The details are as follows. The following figure shows how an executor is allocated when the job starts to run. Suppose only one executor is needed. In the figure, steps 1, 2, and 3 show how the executor is allocated. After the executor is allocated, it registers with the driver (step 4) and the driver responds to it (step 5). Only after all 5 steps can the executor be used to run tasks.
> !How new executor is registered.png!
> In YarnAllocator, when step 3 finishes, it increases the variable "numExecutorsRunning" by one, as shown in the following code.
> {code:java}
> def updateInternalState(): Unit = synchronized {
>         // increase the numExecutorsRunning 
>         numExecutorsRunning += 1
>         executorIdToContainer(executorId) = container
>         containerIdToExecutorId(container.getId) = executorId
>         val containerSet = allocatedHostToContainersMap.getOrElseUpdate(executorHostname,
>           new HashSet[ContainerId])
>         containerSet += containerId
>         allocatedContainerToHostMap.put(containerId, executorHostname)
>       }
>       if (numExecutorsRunning < targetNumExecutors) {
>         if (launchContainers) {
>           launcherPool.execute(new Runnable {
>             override def run(): Unit = {
>               try {
>                 new ExecutorRunnable(
>                   Some(container),
>                   conf,
>                   sparkConf,
>                   driverUrl,
>                   executorId,
>                   executorHostname,
>                   executorMemory,
>                   executorCores,
>                   appAttemptId.getApplicationId.toString,
>                   securityMgr,
>                   localResources
>                 ).run()
>                 // step 3 is finished
>                 updateInternalState()
>               } catch {
>                 case NonFatal(e) =>
>                   logError(s"Failed to launch executor $executorId on container $containerId", e)
>                   // Assigned container should be released immediately to avoid unnecessary resource
>                   // occupation.
>                   amClient.releaseAssignedContainer(containerId)
>               }
>             }
>           })
>         } else {
>           // For test only
>           updateInternalState()
>         }
>       } else {
>         logInfo(("Skip launching executorRunnable as runnning Excecutors count: %d " +
>           "reached target Executors count: %d.").format(numExecutorsRunning, targetNumExecutors))
>       }
> {code}
>    
> Imagine that step 3 succeeds but step 4 fails for some reason (for example, a network fluctuation). The variable "numExecutorsRunning" is then equal to 1, but in fact no executor is running. So the variable "numExecutorsRunning" does not reflect the real number of running executors. Because the variable equals 1, YarnAllocator does not allocate any new executor even though no executor is actually running. If a job needs only one executor to complete, it will be stuck forever since no executor runs its tasks.
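The accounting gap in the quoted report can be sketched as a small simulation. This is not Spark's actual code; the class and method names here are hypothetical and only model the counter behavior described above:

```java
// Minimal simulation (not Spark's actual code) of the reported gap: the
// allocator counts an executor as running once its container launches
// (step 3), so a registration failure at step 4 leaves the count at 1
// while zero executors are serving tasks, and no replacement is requested.
public class AllocatorSketch {
    int numExecutorsRunning = 0;
    final int targetNumExecutors = 1;

    // Step 3: container launched; counted as running immediately,
    // before the executor has registered with the driver.
    void onContainerLaunched() {
        numExecutorsRunning++;
    }

    // Nothing in this model ever decrements the counter for a
    // launched-but-unregistered executor, so once the target appears
    // to be met, no new executor is allocated.
    boolean shouldRequestNewExecutor() {
        return numExecutorsRunning < targetNumExecutors;
    }
}
```

With one launched container whose registration fails, `shouldRequestNewExecutor()` stays false, which models the single-executor job hanging forever.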



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
