Posted to dev@hive.apache.org by "Namit Jain (JIRA)" <ji...@apache.org> on 2010/08/16 21:46:18 UTC
[jira] Commented: (HIVE-1492) FileSinkOperator should remove duplicated files from the same task based on file sizes
[ https://issues.apache.org/jira/browse/HIVE-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12899048#action_12899048 ]
Namit Jain commented on HIVE-1492:
----------------------------------
A better fix would be to catch exceptions thrown by next() in HiveRecordReader/CombineHiveRecordReader etc. and set the abort flag in ExecMapper when one occurs.
In that case there will be exactly one successful mapper per task.
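A minimal sketch of that idea against Hadoop's old mapred RecordReader interface (the AbortingRecordReader wrapper and its isAborted() accessor are illustrative stand-ins, not the actual Hive classes or the real ExecMapper abort flag):

    import java.io.IOException;
    import org.apache.hadoop.mapred.RecordReader;

    /**
     * Illustrative wrapper, not actual Hive code: any exception thrown
     * by the underlying reader's next() marks this attempt as aborted,
     * so the mapper knows its output must not be trusted.
     */
    public class AbortingRecordReader<K, V> implements RecordReader<K, V> {
      private final RecordReader<K, V> delegate;
      private volatile boolean abort = false;

      public AbortingRecordReader(RecordReader<K, V> delegate) {
        this.delegate = delegate;
      }

      public boolean next(K key, V value) throws IOException {
        try {
          return delegate.next(key, value);
        } catch (IOException e) {
          abort = true;   // signal: this attempt failed mid-read
          throw e;        // rethrow so the task attempt still fails
        }
      }

      public boolean isAborted() { return abort; }

      // Remaining RecordReader methods simply delegate.
      public K createKey() { return delegate.createKey(); }
      public V createValue() { return delegate.createValue(); }
      public long getPos() throws IOException { return delegate.getPos(); }
      public void close() throws IOException { delegate.close(); }
      public float getProgress() throws IOException { return delegate.getProgress(); }
    }

With the flag set on the failing attempt rather than cleaned up after the fact, only the one attempt that read all its input to completion commits its output.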
> FileSinkOperator should remove duplicated files from the same task based on file sizes
> --------------------------------------------------------------------------------------
>
> Key: HIVE-1492
> URL: https://issues.apache.org/jira/browse/HIVE-1492
> Project: Hadoop Hive
> Issue Type: Bug
> Affects Versions: 0.7.0
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Fix For: 0.6.0, 0.7.0
>
> Attachments: HIVE-1492.patch, HIVE-1492_branch-0.6.patch
>
>
> FileSinkOperator.jobClose() calls Utilities.removeTempOrDuplicateFiles() to retain only one file for each task. A task could produce multiple files due to failed attempts or speculative runs. For each task, the largest file should be retained rather than the first one encountered.
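A toy sketch of that size-based policy on local files (the <taskId>_<suffix> naming scheme and the KeepLargestPerTask class are assumptions for illustration; the real Utilities.removeTempOrDuplicateFiles() operates on HDFS paths with its own name parsing):

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    // Keep only the largest file per task id; delete the rest.
    public class KeepLargestPerTask {
      public static void main(String[] args) {
        File[] files = new File(args[0]).listFiles();
        Map<String, File> largest = new HashMap<String, File>();
        // First pass: remember the biggest file seen for each task id,
        // taking the assumed <taskId>_<suffix> prefix as the key.
        for (File f : files) {
          String taskId = f.getName().split("_")[0];
          File best = largest.get(taskId);
          if (best == null || f.length() > best.length()) {
            largest.put(taskId, f);
          }
        }
        // Second pass: delete every duplicate that lost the size comparison.
        for (File f : files) {
          if (f != largest.get(f.getName().split("_")[0])) {
            f.delete();
          }
        }
      }
    }

Picking the largest file is a heuristic: a failed or killed attempt typically stops writing partway through, so its output is a truncated prefix of what the successful attempt produced.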
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.