Posted to issues@spark.apache.org by "Sun Rui (JIRA)" <ji...@apache.org> on 2016/01/04 04:04:39 UTC

[jira] [Commented] (SPARK-12609) Make R to JVM timeout configurable

    [ https://issues.apache.org/jira/browse/SPARK-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080647#comment-15080647 ] 

Sun Rui commented on SPARK-12609:
---------------------------------

[~shivaram], did you encounter a real failure case, or did you find this by reviewing the code? Should we also make the timeout configurable for the socket connections of R workers (it is 3600 seconds in daemon.R, while worker.R uses the default timeout)?
I'm also not sure whether a timeout value of 0 means infinity.
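
For illustration, a minimal sketch (the environment variable name SPARKR_BACKEND_CONNECTION_TIMEOUT is hypothetical, not an existing Spark property) of how the hardcoded value in client.R could fall back to a configurable one:

```r
# Sketch only: read the R-to-JVM connection timeout from an environment
# variable (hypothetical name), falling back to the current hardcoded
# 6000 seconds when it is unset.
connectTimeout <- as.integer(Sys.getenv("SPARKR_BACKEND_CONNECTION_TIMEOUT", "6000"))

connectBackend <- function(hostname, port, timeout = connectTimeout) {
  socketConnection(host = hostname, port = port, server = FALSE,
                   blocking = TRUE, open = "wb", timeout = timeout)
}
```

The same pattern could apply to the 3600-second timeout in daemon.R; whether the value should instead flow through SparkConf (and how 0/infinite timeouts are interpreted by socketConnection) is the open question here.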


> Make R to JVM timeout configurable 
> -----------------------------------
>
>                 Key: SPARK-12609
>                 URL: https://issues.apache.org/jira/browse/SPARK-12609
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>            Reporter: Shivaram Venkataraman
>
> The timeout from R to the JVM is hardcoded at 6000 seconds in https://github.com/apache/spark/blob/6c5bbd628aaedb6efb44c15f816fea8fb600decc/R/pkg/R/client.R#L22
> As a result, Spark jobs that take longer than 100 minutes (6000 seconds) always fail. We should make this timeout configurable through SparkConf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org