Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2019/03/20 18:49:00 UTC
[jira] [Resolved] (SPARK-27094) Thread interrupt being swallowed while launching executors in YarnAllocator
[ https://issues.apache.org/jira/browse/SPARK-27094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcelo Vanzin resolved SPARK-27094.
------------------------------------
Resolution: Fixed
Fix Version/s: 3.0.0
Issue resolved by pull request 24017
[https://github.com/apache/spark/pull/24017]
> Thread interrupt being swallowed while launching executors in YarnAllocator
> ---------------------------------------------------------------------------
>
> Key: SPARK-27094
> URL: https://issues.apache.org/jira/browse/SPARK-27094
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 2.4.0
> Reporter: Marcelo Vanzin
> Assignee: Marcelo Vanzin
> Priority: Minor
> Fix For: 3.0.0
>
>
> When shutting down a SparkContext, the YarnAllocator thread is interrupted. If the interrupt happens just at the wrong time, you'll see something like this:
> {noformat}
> 19/03/05 07:04:20 WARN ScriptBasedMapping: Exception running blah
> java.io.IOException: java.lang.InterruptedException
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
> at org.apache.hadoop.util.Shell.run(Shell.java:478)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
> at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
> at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
> at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
> at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
> at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
> at org.apache.spark.deploy.yarn.SparkRackResolver.resolve(SparkRackResolver.scala:37)
> at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:431)
> at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:430)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at org.apache.spark.deploy.yarn.YarnAllocator.handleAllocatedContainers(YarnAllocator.scala:430)
> at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:281)
> at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:556)
> {noformat}
> That means the YARN code being called ({{RackResolver}}) is swallowing the interrupt, so the Spark allocator thread never exits. In this particular app, the allocator was in the middle of allocating a very large number of executors, so the application appeared to be hung, and many executors kept coming up even though the context was being shut down.
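The failure mode above can be reproduced in miniature. The following is a hedged sketch, not Spark's actual patch from the linked pull request: {{resolveLikeHadoop}} and {{resolvePreservingInterrupt}} are hypothetical names standing in for Hadoop's {{Shell.runCommand}} behavior (wrapping {{InterruptedException}} in an {{IOException}}, which clears the thread's interrupt flag) and for a caller that restores the flag so its loop can observe the shutdown request.

```java
import java.io.IOException;

public class InterruptDemo {

    // Hypothetical stand-in for RackResolver.resolve(): like Shell.runCommand
    // in the stack trace above, it wraps the interrupt in an IOException.
    // Thread.interrupted() returns the flag AND clears it, so after this
    // throws, the calling thread no longer looks interrupted.
    static void resolveLikeHadoop() throws IOException {
        if (Thread.interrupted()) {
            throw new IOException(new InterruptedException());
        }
    }

    // Caller-side fix: detect the wrapped InterruptedException and re-set
    // the interrupt flag so the allocator-style loop can exit promptly.
    static boolean resolvePreservingInterrupt() throws IOException {
        try {
            resolveLikeHadoop();
            return false; // no interrupt was pending
        } catch (IOException e) {
            if (e.getCause() instanceof InterruptedException) {
                Thread.currentThread().interrupt(); // restore the flag
                return true;
            }
            throw e; // unrelated I/O failure, rethrow
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate the SparkContext shutdown interrupting the thread.
        Thread.currentThread().interrupt();
        boolean sawInterrupt = resolvePreservingInterrupt();
        System.out.println("saw interrupt: " + sawInterrupt
                + ", flag restored: " + Thread.currentThread().isInterrupted());
        // prints: saw interrupt: true, flag restored: true
    }
}
```

Without the {{Thread.currentThread().interrupt()}} call, the interrupt is lost after the first wrapped exception and a loop checking the interrupt status would keep running, which matches the hung-allocator symptom described in the report.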
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org