Posted to dev@hive.apache.org by "Chengxiang Li (JIRA)" <ji...@apache.org> on 2014/11/25 04:03:13 UTC

[jira] [Created] (HIVE-8956) Hive hangs while some error/exception happens beyond job execution[Spark Branch]

Chengxiang Li created HIVE-8956:
-----------------------------------

             Summary: Hive hangs while some error/exception happens beyond job execution[Spark Branch]
                 Key: HIVE-8956
                 URL: https://issues.apache.org/jira/browse/HIVE-8956
             Project: Hive
          Issue Type: Sub-task
          Components: Spark
            Reporter: Chengxiang Li


The remote Spark client communicates with the remote Spark context asynchronously. If an error/exception is thrown during job execution in the remote Spark context, it is wrapped and sent back to the remote Spark client. However, if an error/exception is thrown outside of job execution, such as a job serialization failure, the remote Spark client never learns what happened in the remote Spark context and hangs there.
Setting a timeout on the remote Spark client side may not be a good idea, since we do not know how long a query will run on the Spark cluster. We need to find a way to check whether a job has failed (over its whole life cycle) in the remote Spark context. A rough sketch of the idea follows.
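As a minimal sketch (not the actual hive spark-client API; JobRequest, ResultChannel, and RemoteJobRunner are hypothetical names for illustration), the remote side could wrap the entire job life cycle in a catch-all, so failures that happen before or outside execution, e.g. during serialization, are still reported back to the client instead of being dropped:

    // Hypothetical sketch -- class and method names are illustrative,
    // not the real spark-client classes.
    import java.io.Serializable;

    class JobRequest implements Serializable {
      final String jobId;
      final Runnable work;   // the actual Spark job to run
      JobRequest(String jobId, Runnable work) {
        this.jobId = jobId;
        this.work = work;
      }
    }

    interface ResultChannel {
      // Sends a success (error == null) or failure message back to the
      // remote Spark client, so it never waits forever.
      void sendJobResult(String jobId, Throwable error);
    }

    class RemoteJobRunner {
      private final ResultChannel channel;
      RemoteJobRunner(ResultChannel channel) { this.channel = channel; }

      void handle(JobRequest request) {
        try {
          // The whole job life cycle runs inside this try block:
          // deserialization, setup, submission, and execution.
          request.work.run();
          channel.sendJobResult(request.jobId, null);   // success
        } catch (Throwable t) {
          // Errors outside job execution (e.g. serialization problems)
          // would otherwise be swallowed, leaving the client hanging.
          channel.sendJobResult(request.jobId, t);      // failure
        }
      }
    }

With something along these lines, the client can treat any missing result as a bug on the remote side rather than an expected long-running query, without relying on a query-level timeout.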


