Posted to mapreduce-issues@hadoop.apache.org by "Ramkumar Vadali (JIRA)" <ji...@apache.org> on 2011/02/17 00:31:24 UTC
[jira] Created: (MAPREDUCE-2333) RAID jobs should delete temporary files in the event of filesystem failures
RAID jobs should delete temporary files in the event of filesystem failures
---------------------------------------------------------------------------
Key: MAPREDUCE-2333
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2333
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/raid
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
If the creation of a parity file or parity-file HAR fails due to a filesystem-level error, RAID should delete the temporary files. Specifically, a datanode death during parity file creation can cause FSDataOutputStream.close() to throw an IOException, leaving a partially written file behind. The RAID code should delete such a file.
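The cleanup described above amounts to wrapping the write-and-close sequence so that a failure in close() triggers deletion of the partial output. A minimal self-contained sketch of that pattern, using java.nio on the local filesystem (in the actual RAID code this would be a FileSystem.delete() on the parity temp path; the class and method names here are illustrative, not from the patch):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileCleanup {
    // Writes data to a temporary file. If the write or the close fails
    // (the local analogue of a datanode death making
    // FSDataOutputStream.close() throw IOException), the partial
    // temporary file is deleted before the exception propagates.
    public static void writeOrCleanUp(Path tmp, byte[] data) throws IOException {
        OutputStream out = null;
        try {
            out = Files.newOutputStream(tmp);
            out.write(data);
            out.close();      // may throw IOException on flush/sync failure
            out = null;       // success: disable the cleanup path below
        } finally {
            if (out != null) {
                // write() or close() threw: close quietly, then remove
                // the partially written temporary file.
                try { out.close(); } catch (IOException ignored) { }
                Files.deleteIfExists(tmp);
            }
        }
    }
}
```

On success the file is left in place for the subsequent rename/commit step; on failure the caller sees the original IOException but no stray temporary file remains.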
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira