Posted to common-dev@hadoop.apache.org by "Sharad Agarwal (JIRA)" <ji...@apache.org> on 2008/07/03 12:54:45 UTC

[jira] Commented: (HADOOP-3666) SequenceFile RecordReader should skip bad records

    [ https://issues.apache.org/jira/browse/HADOOP-3666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610193#action_12610193 ] 

Sharad Agarwal commented on HADOOP-3666:
----------------------------------------

The RecordReader should somehow inform the framework whenever it skips a record. I don't think it would be acceptable for a user job to skip a record without being notified about it.
Doing this may require the RecordReader interface to change. One way of doing it without changing the interface would be to throw a specific exception from next(). The framework can catch it and, based on the exception type, decide whether to call next() again or just fail the job. It would also allow the framework to keep something like a skipped-records counter.
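
A minimal sketch of that idea, assuming a hypothetical CorruptRecordException and an illustrative framework-side read loop (neither exists in the current API; names and the counter group are made up):

    import java.io.IOException;

    // Hypothetical exception a RecordReader could throw from next() to signal
    // one unreadable record without changing the RecordReader interface.
    public class CorruptRecordException extends IOException {
      public CorruptRecordException(String msg, Throwable cause) {
        super(msg);
        initCause(cause);
      }
    }

    // Illustrative framework-side read loop (not the actual MapTask code):
    // on CorruptRecordException, count the skip and keep reading; any other
    // IOException still fails the task as it does today.
    <K, V> void runMapLoop(RecordReader<K, V> in,
                           Mapper<K, V, K, V> mapper,
                           OutputCollector<K, V> out,
                           Reporter reporter) throws IOException {
      K key = in.createKey();
      V value = in.createValue();
      while (true) {
        try {
          if (!in.next(key, value)) {
            return;                     // end of split
          }
          mapper.map(key, value, out, reporter);
        } catch (CorruptRecordException e) {
          // Surface the skip to the user instead of hiding it.
          reporter.incrCounter("Task", "SKIPPED_RECORDS", 1);
        }
      }
    }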

> SequenceFile RecordReader should skip bad records
> -------------------------------------------------
>
>                 Key: HADOOP-3666
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3666
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Joydeep Sen Sarma
>
> Currently a bad record in a SequenceFile causes the entire job to fail. The best workaround is to skip the errant file manually (by looking at which map task failed). This is a sucky option because it's manual and because one should be able to skip a single SequenceFile block (instead of the entire file).
> While we don't see this often (and I don't know why this corruption happened), here's an example stack:
> Status : FAILED java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:96)
> 	at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:75)
> 	at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:130)
> 	at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1640)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1712)
> 	at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:79)
> 	at org.apache.hadoop.mapred.MapTask$1.next(MapTask.java:176)
> Ideally the RecordReader should just skip the entire chunk if it gets an unrecoverable error while reading.
> This was the consensus in HADOOP-153 as well (that data corruption should be handled by RecordReaders), and HADOOP-3144 did something similar for TextInputFormat.
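
For the SequenceFile case specifically, a minimal sketch of that skip-the-chunk idea (illustrative only, not an actual patch; it assumes the reader's usual fields, a SequenceFile.Reader named in and the split's end offset):

    // Illustrative next() that resynchronizes on a corrupt record instead of
    // failing the task. SequenceFile.Reader.sync(pos) seeks to the first sync
    // mark after pos, which effectively drops the rest of the bad chunk.
    public synchronized boolean next(Writable key, Writable value)
        throws IOException {
      while (in.getPosition() < end) {
        try {
          return in.next(key, value);   // normal path
        } catch (NegativeArraySizeException e) {
          // Corrupt record length, as in the stack above: skip forward.
          in.sync(in.getPosition());
        }
      }
      return false;
    }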
