Posted to common-issues@hadoop.apache.org by "Tim Broberg (JIRA)" <ji...@apache.org> on 2012/07/03 03:30:58 UTC
[jira] [Updated] (HADOOP-8148) Zero-copy ByteBuffer-based compressor / decompressor API
[ https://issues.apache.org/jira/browse/HADOOP-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tim Broberg updated HADOOP-8148:
--------------------------------
Attachment: (was: zerocopyifc.tgz)
> Zero-copy ByteBuffer-based compressor / decompressor API
> --------------------------------------------------------
>
> Key: HADOOP-8148
> URL: https://issues.apache.org/jira/browse/HADOOP-8148
> Project: Hadoop Common
> Issue Type: New Feature
> Components: io, performance
> Reporter: Tim Broberg
> Assignee: Tim Broberg
> Attachments: hadoop-8148.patch, hadoop8148.patch
>
>
> Per Todd Lipcon's comment in HDFS-2834, "
> Whenever a native decompression codec is being used, ... we generally have the following copies:
> 1) Socket -> DirectByteBuffer (in SocketChannel implementation)
> 2) DirectByteBuffer -> byte[] (in SocketInputStream)
> 3) byte[] -> Native buffer (set up for decompression)
> 4*) decompression to a different native buffer (not really a copy - decompression necessarily rewrites)
> 5) native buffer -> byte[]
> with the proposed improvement we can hopefully eliminate #2,#3 for all applications, and #2,#3,and #5 for libhdfs.
> "
> The interfaces in the attached patch attempt to address:
> A - Compression and decompression based on ByteBuffers (HDFS-2834)
> B - Zero-copy compression and decompression (HDFS-3051)
> C - Provide the caller a way to determine the maximum space required to hold the compressed output.
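To make points A-C concrete, here is a minimal sketch of what a ByteBuffer-based codec API could look like. The interface and class names are illustrative only, not taken from the attached patch: a decompressor that operates on ByteBuffer pairs (so native code can work on direct buffers in place, eliminating the byte[] hops at steps 2, 3, and 5) plus a maxCompressedLength bound so callers can pre-size the output buffer (point C). A trivial pass-through codec shows the call pattern:

```java
import java.nio.ByteBuffer;

/**
 * Hypothetical ByteBuffer-based codec interfaces (names are illustrative,
 * not the actual interfaces in the attached patch).
 */
interface ByteBufferDecompressor {
    /** Decompress src into dst; both may be direct (native) buffers. */
    void decompress(ByteBuffer src, ByteBuffer dst);
}

interface ByteBufferCompressor {
    /** Upper bound on compressed size, so the caller can pre-size dst (point C). */
    int maxCompressedLength(int uncompressedLength);

    void compress(ByteBuffer src, ByteBuffer dst);
}

/** Trivial pass-through codec, just to demonstrate the call pattern. */
public class PassThroughCodec implements ByteBufferCompressor, ByteBufferDecompressor {
    public int maxCompressedLength(int n) { return n; }           // identity: no expansion
    public void compress(ByteBuffer src, ByteBuffer dst)   { dst.put(src); }
    public void decompress(ByteBuffer src, ByteBuffer dst) { dst.put(src); }

    public static void main(String[] args) {
        byte[] data = "hello, zero-copy".getBytes();

        // With direct buffers, a real native codec could read and write
        // these regions without copying through a Java byte[].
        ByteBuffer src = ByteBuffer.allocateDirect(data.length);
        src.put(data).flip();

        PassThroughCodec codec = new PassThroughCodec();
        ByteBuffer dst = ByteBuffer.allocateDirect(codec.maxCompressedLength(data.length));
        codec.decompress(src, dst);
        dst.flip();

        byte[] out = new byte[dst.remaining()];
        dst.get(out);
        System.out.println(new String(out)); // prints: hello, zero-copy
    }
}
```

A SocketChannel can read straight into the same kind of direct buffer, which is what would let copies #2 and #3 above disappear: the data lands in a native buffer once and the codec consumes it there.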
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira