Posted to dev@hive.apache.org by "Zheng Shao (JIRA)" <ji...@apache.org> on 2009/06/15 22:58:08 UTC

[jira] Reopened: (HIVE-557) Exception in FileSinkOperator's close should NOT be ignored

     [ https://issues.apache.org/jira/browse/HIVE-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Shao reopened HIVE-557:
-----------------------------


The "close()" is called in the "finally" block in "MapRunner.run()". If there is an out-of-memory error, Hive won't be able to catch it (because we only catch "Exception" now), so abort is NOT set to false, which is wrong.

We need to catch "Throwable" instead of "Exception".

> Exception in FileSinkOperator's close should NOT be ignored
> -----------------------------------------------------------
>
>                 Key: HIVE-557
>                 URL: https://issues.apache.org/jira/browse/HIVE-557
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.3.0, 0.3.1
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>             Fix For: 0.4.0
>
>         Attachments: HIVE-557.1.patch, HIVE-557.2.patch
>
>
> FileSinkOperator currently ignores all IOExceptions from close() and commit(). We should not ignore them, or the output file can be incomplete or missing.
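
For the original issue quoted above, the fix has the same flavor on the file-sink side: failures from close()/commit() must propagate so the task is marked failed instead of leaving a truncated file. A hedged sketch, not the actual FileSinkOperator code (the field names and the rename-based commit are assumptions for illustration):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class FileSinkCloseSketch {
  private FileSystem fs;
  private Path tmpPath;    // task-local output being written
  private Path finalPath;  // committed location
  private SequenceFile.Writer writer;

  public void closeOp(boolean abort) {
    if (!abort) {
      try {
        writer.close();                          // flush buffered rows
        if (!fs.rename(tmpPath, finalPath)) {    // "commit" the file
          throw new IOException("rename failed: " + tmpPath + " -> " + finalPath);
        }
      } catch (IOException e) {
        // Previously swallowed; rethrow so MapReduce marks the task failed
        // instead of producing an incomplete or missing output file.
        throw new RuntimeException("Error closing/committing FileSink output", e);
      }
    } else {
      try {
        fs.delete(tmpPath, true);                // best-effort cleanup on abort
      } catch (IOException ignore) {
        // cleanup failure is non-fatal once the task has already aborted
      }
    }
  }
}
{code}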

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.