Posted to mapreduce-issues@hadoop.apache.org by "Devin Bayer (JIRA)" <ji...@apache.org> on 2012/08/10 12:21:29 UTC
[jira] [Commented] (MAPREDUCE-1487) io.DataInputBuffer.getLength() semantic wrong/confused
[ https://issues.apache.org/jira/browse/MAPREDUCE-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432685#comment-13432685 ]
Devin Bayer commented on MAPREDUCE-1487:
----------------------------------------
It's very embarrassing that this issue isn't fixed. Do the developers realise Hadoop cannot even copy data from mapper to reducer without corruption?
> io.DataInputBuffer.getLength() semantic wrong/confused
> ------------------------------------------------------
>
> Key: MAPREDUCE-1487
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1487
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 0.20.1, 0.20.2, 0.21.0
> Environment: linux
> Reporter: Yang Yang
>
> I was trying Google Protocol Buffers as a value type on Hadoop,
> and when I used it in a reducer, the parser always failed,
> while it worked fine with a plain input-stream reader or in a mapper.
> The reason is that the reducer interface in Task.java gave the parser a buffer larger than the actual encoded record, and the parser does not stop
> until it reaches the buffer end, so it parsed some junk bytes.
> The root cause is in hadoop.io.DataInputBuffer; in 0.20.1, DataInputBuffer.java line 47:
>
>   public void reset(byte[] input, int start, int length) {
>     this.buf = input;
>     this.count = start + length;
>     this.mark = start;
>     this.pos = start;
>   }
>
>   public byte[] getData() { return buf; }
>   public int getPosition() { return pos; }
>   public int getLength() { return count; }
> We see that the above logic assumes "getLength()" returns the *end offset* of the content (start + length), not the actual content length, yet later code
> assumes the semantic that "length" is the actual content length, i.e. end - start:
>   /** Resets the data that the buffer reads. */
>   public void reset(byte[] input, int start, int length) {
>     buffer.reset(input, start, length);
>   }
> I.e., if you call reset(getData(), getPosition(), getLength()) on the same buffer again and again, the "length" grows without bound.
> This confusion in semantics is reflected in many places, at least in IFile.java and Task.java, where it caused the original issue.
> Around line 980 of Task.java, we see
>
>   valueIn.reset(nextValueBytes.getData(), nextValueBytes.getPosition(), nextValueBytes.getLength());
>
> If the position above is not zero, this sets up a buffer that is too long, causing the reported issue.
> Changing Task.java, as a hack, to
>
>   valueIn.reset(nextValueBytes.getData(), nextValueBytes.getPosition(), nextValueBytes.getLength() - nextValueBytes.getPosition());
>
> fixed the issue, but the semantics of DataInputBuffer should be fixed and streamlined.
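The length inflation described in the report can be sketched with a minimal stand-alone mimic of the quoted 0.20.x reset()/getLength() semantics. This is not the real org.apache.hadoop.io.DataInputBuffer class, just a hypothetical reproduction of the field logic shown above, followed by the reporter's subtraction hack:

```java
// Minimal mimic of the 0.20.x DataInputBuffer semantics quoted above
// (an illustration, not the actual Hadoop class).
public class BuggyBufferDemo {
    private byte[] buf;
    private int count; // set to start + length in reset()
    private int pos;

    public void reset(byte[] input, int start, int length) {
        this.buf = input;
        this.count = start + length; // end offset, not content length
        this.pos = start;
    }

    public byte[] getData() { return buf; }
    public int getPosition() { return pos; }
    public int getLength() { return count; } // returns the end offset!

    public static void main(String[] args) {
        byte[] data = new byte[200];
        BuggyBufferDemo b = new BuggyBufferDemo();

        b.reset(data, 10, 20);             // content occupies bytes [10, 30)
        System.out.println(b.getLength()); // prints 30, not 20

        // Feeding getLength() back in as the "length" compounds the error:
        b.reset(data, b.getPosition(), b.getLength());
        System.out.println(b.getLength()); // prints 40

        // Start over and apply the hack from the report: subtract the
        // position so the re-fed "length" is the true content length.
        b.reset(data, 10, 20);
        b.reset(data, b.getPosition(), b.getLength() - b.getPosition());
        System.out.println(b.getLength()); // prints 30, stable this time
    }
}
```

Under these semantics the naive round-trip grows the end offset by the start position on every call, which is exactly why the reducer handed the protobuf parser trailing junk bytes.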
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira