Posted to hdfs-dev@hadoop.apache.org by lei liu <li...@gmail.com> on 2013/12/31 10:01:17 UTC

ByteBuffer-based read API for pread

 CDH4.3.1 provides a ByteBuffer read API for sequential reads, for
example the public synchronized int read(final ByteBuffer buf) throws
IOException method. But there is no ByteBuffer read API for pread.

Why doesn't CDH4.3.1 support a ByteBuffer read API for pread?
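
To make the distinction concrete, here is a sketch using only the JDK's FileChannel, which already offers both shapes: a sequential read(ByteBuffer) that advances the stream position, and a positional read(ByteBuffer, long) that does not. This is not HDFS code, just an illustration of the API shape being asked about; the class name and temp-file setup are my own.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReadApiShapes {

    // Reads a temp file containing "0123456789" two ways and returns
    // {sequential result, positional result}.
    static String[] demo() throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, "0123456789".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            // Sequential read: fills the buffer and advances the channel position.
            ByteBuffer seq = ByteBuffer.allocate(4);
            ch.read(seq);
            seq.flip();

            // Positional read (pread): explicit offset, channel position untouched.
            // This is the ByteBuffer-based shape the question asks HDFS to offer.
            ByteBuffer pos = ByteBuffer.allocate(4);
            ch.read(pos, 6);
            pos.flip();

            return new String[] {
                StandardCharsets.US_ASCII.decode(seq).toString(),
                StandardCharsets.US_ASCII.decode(pos).toString()
            };
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        String[] r = demo();
        System.out.println("sequential: " + r[0]);  // sequential: 0123
        System.out.println("pread:      " + r[1]);  // pread:      6789
    }
}
```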

Thanks,

LiuLei

Re: ByteBuffer-based read API for pread

Posted by Colin McCabe <cm...@alumni.cmu.edu>.
It's true that HDFS (and Hadoop generally) doesn't currently have a
ByteBuffer-based pread API.  There is a JIRA open for this issue,
HDFS-3246.

I do not know whether implementing a ByteBuffer API for pread would be
as big a performance gain as implementing it for regular read was.  One
issue is that every pread destroys the old BlockReader object and
creates a new one.  That per-call overhead may make the cost of a
single extra buffer copy less significant in terms of total cost.  I
suppose it partly depends on how big the buffer being copied is... a
really large pread would certainly benefit from avoiding the copy into
a byte array.
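
The copy in question can be sketched like this. With a byte[]-only pread, a caller holding a ByteBuffer (possibly a direct one) has to stage the data in a byte[] and copy it over. The sketch below simulates that pattern with RandomAccessFile; the class and method names are illustrative, not HDFS APIs, and unlike a true pread the seek here does move the stream position.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PreadCopyCost {

    // What a caller must do today without a ByteBuffer pread: stage the
    // bytes in a byte[], then copy into the destination buffer.  For a
    // large pread, the staging allocation and the copy are pure overhead.
    static void preadIntoBuffer(RandomAccessFile in, long position,
                                int length, ByteBuffer dst) throws IOException {
        byte[] staging = new byte[length];   // extra allocation
        in.seek(position);                   // a real pread would not move the position
        in.readFully(staging);
        dst.put(staging);                    // extra copy a ByteBuffer pread would avoid
    }

    // Preads 10 bytes at offset 7 of "hello, positional world" into a
    // direct ByteBuffer and returns them as a String.
    static String demo() throws IOException {
        Path tmp = Files.createTempFile("pread", ".bin");
        Files.write(tmp, "hello, positional world".getBytes(StandardCharsets.US_ASCII));
        try (RandomAccessFile in = new RandomAccessFile(tmp.toFile(), "r")) {
            ByteBuffer direct = ByteBuffer.allocateDirect(10);
            preadIntoBuffer(in, 7, 10, direct);
            direct.flip();
            byte[] out = new byte[direct.remaining()];
            direct.get(out);
            return new String(out, StandardCharsets.US_ASCII);
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());  // positional
    }
}
```

A ByteBuffer-based pread would let the bytes land directly in the caller's buffer, skipping both the staging array and the put().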

cheers,
Colin

On Tue, Dec 31, 2013 at 1:01 AM, lei liu <li...@gmail.com> wrote:
>  CDH4.3.1 provides a ByteBuffer read API for sequential reads, for
> example the public synchronized int read(final ByteBuffer buf) throws
> IOException method. But there is no ByteBuffer read API for pread.
>
> Why doesn't CDH4.3.1 support a ByteBuffer read API for pread?
>
> Thanks,
>
> LiuLei