Posted to dev@hive.apache.org by "Rui Li (JIRA)" <ji...@apache.org> on 2018/05/07 13:30:00 UTC
[jira] [Created] (HIVE-19439) MapWork shouldn't be reused when Spark task fails during initialization
Rui Li created HIVE-19439:
-----------------------------
Summary: MapWork shouldn't be reused when Spark task fails during initialization
Key: HIVE-19439
URL: https://issues.apache.org/jira/browse/HIVE-19439
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Rui Li
Issue identified in HIVE-19388. When a Spark task fails while initializing the map operator, the retried task retrieves the same MapWork from the cache. This is problematic because the MapWork may be partially initialized, e.g. some of its operators may already be in INIT state.
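The failure mode can be sketched as follows. This is a minimal illustration, not Hive's actual code: the class and method names (SketchMapWork, WorkCache-style map, OpState) are hypothetical stand-ins for the cached MapWork and its operator initialization state, and the eviction at the end is one possible remedy consistent with the summary, not necessarily the fix chosen in the patch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the reuse bug: a per-executor cache hands the
// retried task the same, partially initialized MapWork object.
public class MapWorkReuseSketch {

    enum OpState { UNINIT, INIT }

    // Stand-in for a MapWork whose operators carry initialization state.
    static class SketchMapWork {
        OpState operatorState = OpState.UNINIT;

        void initialize() {
            if (operatorState == OpState.INIT) {
                // Re-initializing an operator that is already in INIT state
                // is the problem: the retry sees stale state left behind by
                // the failed attempt.
                throw new IllegalStateException("operator already initialized");
            }
            operatorState = OpState.INIT;
        }
    }

    // Stand-in for the cache keyed by the plan path.
    static final Map<String, SketchMapWork> cache = new HashMap<>();

    static SketchMapWork getWork(String path) {
        return cache.computeIfAbsent(path, p -> new SketchMapWork());
    }

    public static void main(String[] args) {
        // Attempt 1: initialization runs, then the task fails for some
        // unrelated reason, leaving the cached MapWork with an operator
        // already in INIT state.
        SketchMapWork w1 = getWork("plan-path");
        w1.initialize();

        // Attempt 2 (retry): the cache returns the SAME object, so the
        // retry trips over the leftover state.
        SketchMapWork w2 = getWork("plan-path");
        boolean retryFailed = false;
        try {
            w2.initialize();
        } catch (IllegalStateException e) {
            retryFailed = true;
        }
        System.out.println("same object reused: " + (w1 == w2));
        System.out.println("retry failed: " + retryFailed);

        // One possible remedy implied by the summary: evict the entry when
        // a task fails during initialization, so the retry builds a fresh
        // MapWork instead of reusing the stale one.
        cache.remove("plan-path");
        SketchMapWork w3 = getWork("plan-path");
        w3.initialize();
        System.out.println("fresh retry ok: " + (w3.operatorState == OpState.INIT));
    }
}
```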
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)