Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:34:07 UTC

[jira] [Resolved] (SPARK-18605) Spark Streaming ERROR TransportResponseHandler: Still have 1 requests outstanding when connection

     [ https://issues.apache.org/jira/browse/SPARK-18605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-18605.
----------------------------------
    Resolution: Incomplete

> Spark Streaming ERROR TransportResponseHandler: Still have 1 requests outstanding when connection 
> --------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18605
>                 URL: https://issues.apache.org/jira/browse/SPARK-18605
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.6.2
>         Environment: spark-submit \
> --driver-java-options "-XX:PermSize=1024M -XX:MaxPermSize=3072M" \
> --driver-memory 3G  \
> --class cn.com.jldata.ETLDiver \
> --master yarn \
> --deploy-mode cluster \
> --proxy-user hdfs \
> --executor-memory 5G \
> --executor-cores 3 \
> --num-executors 6 \
> --conf spark.dynamicAllocation.enabled=true \
> --conf spark.dynamicAllocation.initialExecutors=10 \
> --conf spark.dynamicAllocation.maxExecutors=20 \
> --conf spark.dynamicAllocation.minExecutors=6 \
> --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
> --conf spark.network.timeout=300 \
> --conf spark.yarn.executor.memoryOverhead=4096 \
> --conf spark.yarn.driver.memoryOverhead=2048 \
> --conf spark.driver.cores=3 \
> --conf spark.shuffle.memoryFraction=0.5 \
> --conf spark.storage.memoryFraction=0.3 \
> --conf spark.core.connection.ack.wait.timeout=300  \
> --conf spark.shuffle.service.enabled=true \
> --conf spark.shuffle.service.port=7337 \
> --queue spark \
>            Reporter: jiafeng.zhang
>            Priority: Major
>              Labels: bulk-closed
>
> 16/11/26 11:01:02 WARN TransportChannelHandler: Exception in connection from dpnode12/192.168.9.26:7337
> java.io.IOException: Connection timed out
> 	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> 	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> 	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
> 	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
> 	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
> 	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> 	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> 	at java.lang.Thread.run(Thread.java:745)
> 16/11/26 11:01:02 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from dpnode12/192.168.9.26:7337 is closed
> 16/11/26 11:01:02 ERROR OneForOneBlockFetcher: Failed while starting block fetches
> java.io.IOException: Connection timed out
> 	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> 	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> 	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> 	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
> 	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
> 	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
> 	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> 	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> 	at java.lang.Thread.run(Thread.java:745)
> 16/11/26 11:01:02 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
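The log above shows shuffle-block fetches to the external shuffle service (dpnode12:7337) timing out and falling back to RetryingBlockFetcher. On Spark 1.6.x, a commonly tried mitigation is to raise the network/ack timeouts beyond the 300s already configured and to widen the fetch retry budget. The sketch below is illustrative only; the values are assumptions, not settings confirmed by this report:

```shell
# Hedged sketch: additional spark-submit flags often tried for shuffle-fetch
# timeouts. Values are illustrative assumptions, not from this issue.
spark-submit \
  --conf spark.network.timeout=600 \
  --conf spark.core.connection.ack.wait.timeout=600 \
  --conf spark.shuffle.io.maxRetries=10 \
  --conf spark.shuffle.io.retryWait=30s \
  ...
```

Whether this helps depends on why the shuffle service on dpnode12 stopped responding (GC pauses, NodeManager load, or network issues); the timeouts only buy the fetcher more time before it gives up.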



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
