Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2020/07/09 02:53:00 UTC

[jira] [Commented] (SPARK-32236) Local cluster should shutdown gracefully

    [ https://issues.apache.org/jira/browse/SPARK-32236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154148#comment-17154148 ] 

Apache Spark commented on SPARK-32236:
--------------------------------------

User 'sarutak' has created a pull request for this issue:
https://github.com/apache/spark/pull/29049

> Local cluster should shutdown gracefully
> ----------------------------------------
>
>                 Key: SPARK-32236
>                 URL: https://issues.apache.org/jira/browse/SPARK-32236
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: Kousuke Saruta
>            Assignee: Kousuke Saruta
>            Priority: Minor
>
> Almost every time we call sc.stop in local-cluster mode, exceptions like the following are thrown.
> {code:java}
> 20/07/09 08:36:45 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
> org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
>         at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:167)
>         at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
>         at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
>         at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:253)
>         at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
>         at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
>         at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
>         at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>         at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
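>
> This can be reproduced with a minimal sketch like the following (a hypothetical example; the local-cluster[2,1,1024] master URL means 2 workers with 1 core and 1024 MB each, and the exact settings should not matter):
> {code:scala}
> import org.apache.spark.{SparkConf, SparkContext}
>
> object LocalClusterShutdownRepro {
>   def main(args: Array[String]): Unit = {
>     val conf = new SparkConf()
>       .setMaster("local-cluster[2,1,1024]")
>       .setAppName("shutdown-repro")
>     val sc = new SparkContext(conf)
>     // Run a trivial job so executors are actually launched.
>     sc.parallelize(1 to 100).count()
>     // Tearing down the embedded Master and Workers is where the
>     // RpcEnvStoppedException above typically gets logged.
>     sc.stop()
>   }
> }
> {code}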
> The reason is that the KillExecutor RPC message, sent asynchronously by the Master, can still be processed after the Worker's message loop has already stopped.
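>
> The race can be illustrated with a toy sketch (hypothetical classes, not Spark's actual Dispatcher or Worker):
> {code:scala}
> import java.util.concurrent.{Executors, TimeUnit}
> import java.util.concurrent.atomic.AtomicBoolean
>
> object ShutdownRaceSketch {
>   // Stands in for the Worker-side RpcEnv's stopped flag.
>   private val stopped = new AtomicBoolean(false)
>
>   // Stands in for Dispatcher.postMessage: rejects messages once stopped.
>   def post(msg: String): Unit =
>     if (stopped.get()) throw new IllegalStateException("RpcEnv already stopped.")
>     else println(s"processing $msg")
>
>   def main(args: Array[String]): Unit = {
>     val pool = Executors.newSingleThreadExecutor()
>     // The "Master" sends KillExecutor asynchronously...
>     pool.execute(new Runnable {
>       def run(): Unit = {
>         Thread.sleep(10) // message still in flight
>         try post("KillExecutor")
>         catch { case e: IllegalStateException =>
>           println(s"ERROR one-way message failed: ${e.getMessage}")
>         }
>       }
>     })
>     // ...while the "Worker" stops its message loop first.
>     stopped.set(true)
>     pool.shutdown()
>     pool.awaitTermination(1, TimeUnit.SECONDS)
>   }
> }
> {code}
> A graceful shutdown would keep the Worker's message loop alive until in-flight messages from the Master have been handled, instead of failing them with RpcEnvStoppedException.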


