Posted to issues@hive.apache.org by "YulongZ (Jira)" <ji...@apache.org> on 2022/06/06 05:19:00 UTC

[jira] [Commented] (HIVE-19439) MapWork shouldn't be reused when Spark task fails during initialization

    [ https://issues.apache.org/jira/browse/HIVE-19439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550297#comment-17550297 ] 

YulongZ commented on HIVE-19439:
--------------------------------

 
{code:java}
// Operator::initialize(): returns early when the operator is already in
// INIT state, even if a previous (failed) attempt left it only partially set up.
if (state == State.INIT) {
  return;
}
{code}
Perhaps, for HoS, we should force initialization to proceed even when the state is already State.INIT, so that a retried task does not skip re-initializing the operator.
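
A minimal sketch of that idea, using a hypothetical forceInit flag and a simplified signature (not the actual Operator API):
{code:java}
// Hypothetical sketch only: forceInit is not an existing Hive field and the
// signature is simplified. The idea is to let HoS request a full
// re-initialization instead of returning early when a previous (failed)
// task attempt already moved the operator into INIT state.
protected void initialize(Configuration hconf) throws HiveException {
  if (state == State.INIT && !forceInit) {
    return; // current behavior: assume the operator is fully initialized
  }
  // ... proceed with (re)initialization of this operator and its children ...
}
{code}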

> MapWork shouldn't be reused when Spark task fails during initialization
> -----------------------------------------------------------------------
>
>                 Key: HIVE-19439
>                 URL: https://issues.apache.org/jira/browse/HIVE-19439
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Priority: Major
>
> Issue identified in HIVE-19388. When a Spark task fails while initializing the map operator, the task is retried with the same MapWork retrieved from the cache. This can be problematic because the MapWork may be partially initialized, e.g. some operators may already be in INIT state.
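
To make the failure mode concrete, here is a small self-contained sketch (class and field names are illustrative, not the actual Hive classes): a cached work object is shared across task attempts, so the INIT guard makes the retried attempt skip re-initialization and reuse a half-initialized object.
{code:java}
// Illustrative demo only -- not Hive code. Shows how an early return on
// State.INIT lets a retried attempt reuse a partially initialized object.
import java.util.HashMap;
import java.util.Map;

public class StaleMapWorkDemo {
  enum State { UNINIT, INIT }

  static class FakeOperator {
    State state = State.UNINIT;
    boolean childrenReady = false;

    void initialize(boolean failHalfway) {
      if (state == State.INIT) {
        return;                          // guard: skip if "already initialized"
      }
      state = State.INIT;                // state flips before init completes
      if (failHalfway) {
        throw new RuntimeException("task failed during initialization");
      }
      childrenReady = true;              // never reached on the failed attempt
    }
  }

  static final Map<String, FakeOperator> cache = new HashMap<>();

  public static void main(String[] args) {
    cache.put("mapwork-1", new FakeOperator());

    try {
      cache.get("mapwork-1").initialize(true);   // attempt 1: fails halfway
    } catch (RuntimeException e) {
      // Spark retries the task ...
    }

    FakeOperator retried = cache.get("mapwork-1"); // same cached object
    retried.initialize(false);                     // attempt 2: early return
    System.out.println("childrenReady = " + retried.childrenReady); // false
  }
}
{code}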


