Posted to issues@spark.apache.org by "Imran Rashid (JIRA)" <ji...@apache.org> on 2015/05/05 22:21:59 UTC
[jira] [Updated] (SPARK-6980) Akka timeout exceptions indicate which conf controls them
[ https://issues.apache.org/jira/browse/SPARK-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Imran Rashid updated SPARK-6980:
--------------------------------
Description:
If you hit one of the akka timeouts, you just get an exception like
{code}
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
{code}
The exception doesn't indicate how to change the timeout, though there is usually (always?) a corresponding setting in {{SparkConf}}. It would be nice if the exception included the relevant setting.
I think this should be pretty easy to do -- we just need to create something like a {{NamedTimeout}}. It would have its own {{await}} method, which catches the akka timeout and throws its own exception. We should change {{RpcUtils.askTimeout}} and {{RpcUtils.lookupTimeout}} to always return a {{NamedTimeout}}, so we can be sure that any time we hit a timeout, we get a better exception.
Given the latest refactoring to the rpc layer, this needs to be done in both {{AkkaUtils}} and {{AkkaRpcEndpoint}}.
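A minimal sketch of what such a wrapper could look like, assuming hypothetical names ({{NamedTimeout}}, {{awaitResult}}, and the conf key used below are illustrative, not actual Spark code):
{code}
import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Sketch only: a timeout that remembers which SparkConf key configured it,
// so the TimeoutException can tell the user what to tune.
case class NamedTimeout(duration: FiniteDuration, confKey: String) {
  // Like Await.result, but rethrows with a message naming the controlling setting.
  def awaitResult[T](future: Future[T]): T =
    try {
      Await.result(future, duration)
    } catch {
      case te: TimeoutException =>
        throw new TimeoutException(
          s"${te.getMessage}. This timeout is controlled by $confKey")
    }
}

// Example: awaiting a future that never completes now names the setting.
val timeout = NamedTimeout(10.millis, "spark.rpc.askTimeout")
val never = Promise[Int]().future
try timeout.awaitResult(never)
catch { case e: TimeoutException => println(e.getMessage) }
{code}
{{RpcUtils.askTimeout}} and {{RpcUtils.lookupTimeout}} would then construct the {{NamedTimeout}} from the conf key they already read, so every await site gets the improved message for free.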
was:
If you hit one of the akka timeouts, you just get an exception like
{code}
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
{code}
The exception doesn't indicate how to change the timeout, though there is usually (always?) a corresponding setting in {{SparkConf}}. It would be nice if the exception included the relevant setting.
I think this should be pretty easy to do -- we just need to create something like a {{NamedTimeout}}. It would have its own {{await}} method, which catches the akka timeout and throws its own exception.
> Akka timeout exceptions indicate which conf controls them
> ---------------------------------------------------------
>
> Key: SPARK-6980
> URL: https://issues.apache.org/jira/browse/SPARK-6980
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: Imran Rashid
> Assignee: Harsh Gupta
> Priority: Minor
> Labels: starter
> Attachments: Spark-6980-Test.scala
>
>
> If you hit one of the akka timeouts, you just get an exception like
> {code}
> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
> {code}
> The exception doesn't indicate how to change the timeout, though there is usually (always?) a corresponding setting in {{SparkConf}}. It would be nice if the exception included the relevant setting.
> I think this should be pretty easy to do -- we just need to create something like a {{NamedTimeout}}. It would have its own {{await}} method, which catches the akka timeout and throws its own exception. We should change {{RpcUtils.askTimeout}} and {{RpcUtils.lookupTimeout}} to always return a {{NamedTimeout}}, so we can be sure that any time we hit a timeout, we get a better exception.
> Given the latest refactoring to the rpc layer, this needs to be done in both {{AkkaUtils}} and {{AkkaRpcEndpoint}}.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org