Posted to issues@spark.apache.org by "Lianhui Wang (JIRA)" <ji...@apache.org> on 2014/11/02 14:33:33 UTC

[jira] [Created] (SPARK-4195) Retry fetching blocks when a FetchFailed is caused by a connection timeout

Lianhui Wang created SPARK-4195:
-----------------------------------

             Summary: Retry fetching blocks when a FetchFailed is caused by a connection timeout
                 Key: SPARK-4195
                 URL: https://issues.apache.org/jira/browse/SPARK-4195
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
            Reporter: Lianhui Wang


When there are many executors in an application (for example, 1000), connection timeouts occur often. The exception is:
WARN nio.SendingConnection: Error finishing connection 
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.spark.network.nio.SendingConnection.finishConnect(Connection.scala:342)
        at org.apache.spark.network.nio.ConnectionManager$$anon$11.run(ConnectionManager.scala:273)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
This makes the driver treat these executors as lost, even though they are in fact still alive. Adding a retry mechanism for block fetches would reduce the probability of this problem.
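The proposed retry could look roughly like the sketch below: wrap the block fetch in a bounded retry loop, and only report the fetch as failed (executor lost) after the retries are exhausted. This is an illustrative sketch, not Spark's actual fetch code; the names fetchWithRetry, maxRetries, and waitMillis are hypothetical.

```scala
object FetchRetry {
  // Hypothetical sketch: retry a block-fetch operation a bounded number of
  // times when it fails with a connection timeout, instead of immediately
  // treating the remote executor as lost.
  def fetchWithRetry[T](maxRetries: Int, waitMillis: Long = 100)(fetch: => T): T = {
    var attempts = 0
    while (true) {
      try {
        // By-name parameter: the fetch is re-evaluated on every attempt.
        return fetch
      } catch {
        case _: java.net.ConnectException if attempts < maxRetries =>
          // Transient connection timeout: back off briefly, then retry.
          attempts += 1
          Thread.sleep(waitMillis)
        // Any other failure, or exhausted retries, propagates to the caller
        // and is handled as a normal FetchFailed.
      }
    }
    throw new IllegalStateException("unreachable")
  }
}
```

For reference, Spark's later Netty-based shuffle transport exposes similar behavior through the spark.shuffle.io.maxRetries and spark.shuffle.io.retryWait configuration properties.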




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org