Posted to common-issues@hadoop.apache.org by "Yi Liu (JIRA)" <ji...@apache.org> on 2014/09/01 03:12:21 UTC
[jira] [Created] (HADOOP-11039) ByteBufferReadable API doc is inconsistent with the implementations.
Yi Liu created HADOOP-11039:
-------------------------------
Summary: ByteBufferReadable API doc is inconsistent with the implementations.
Key: HADOOP-11039
URL: https://issues.apache.org/jira/browse/HADOOP-11039
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
In {{ByteBufferReadable}}, API doc of {{int read(ByteBuffer buf)}} says:
{quote}
After a successful call, buf.position() and buf.limit() should be unchanged, and therefore any data can be immediately read from buf. buf.mark() may be cleared or updated.
{quote}
{quote}
@param buf
the ByteBuffer to receive the results of the read operation. Up to
buf.limit() - buf.position() bytes may be read.
{quote}
But the actual behavior of the implementations (e.g. {{DFSInputStream}}, {{RemoteBlockReader2}}) is:
*Upon return, buf.position() will be advanced by the number of bytes read.*
The implementation in {{RemoteBlockReader2}} is as follows:
{code}
@Override
public int read(ByteBuffer buf) throws IOException {
  if (curDataSlice == null ||
      (curDataSlice.remaining() == 0 && bytesNeededToFinish > 0)) {
    readNextPacket();
  }
  if (curDataSlice.remaining() == 0) {
    // we're at EOF now
    return -1;
  }

  int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
  ByteBuffer writeSlice = curDataSlice.duplicate();
  writeSlice.limit(writeSlice.position() + nRead);
  buf.put(writeSlice);
  curDataSlice.position(writeSlice.position());

  return nRead;
}
{code}
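The position-advancing behavior follows directly from {{ByteBuffer.put(ByteBuffer)}}, which the implementation uses to copy data into the caller's buffer. A minimal JDK-only sketch (not Hadoop code; the class name is made up for illustration) showing that the destination buffer's position advances by the number of bytes copied:

```java
import java.nio.ByteBuffer;

// Illustrative sketch: ByteBuffer.put(ByteBuffer) advances the destination
// buffer's position, which is why read(ByteBuffer buf) implementations that
// use it (like RemoteBlockReader2) return with buf.position() advanced.
public class BufferPositionDemo {
    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap("hello".getBytes());
        ByteBuffer dst = ByteBuffer.allocate(16);

        int before = dst.position();  // 0: nothing written yet
        dst.put(src);                 // copies 5 bytes from src into dst
        int after = dst.position();   // advanced by the number of bytes copied

        System.out.println(after - before);  // prints 5
    }
}
```

So a caller following the current javadoc and reading "immediately" from {{buf}} without first calling {{buf.flip()}} would see no data.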
This description is important: it tells users how to consume the buffer after a read, and all implementations should exhibit the same behavior. We should fix the javadoc to match the implementations.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)