Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2011/06/17 21:27:49 UTC
[Hadoop Wiki] Update of "UsingLzoCompression" by DougMeil
Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The "UsingLzoCompression" page has been changed by DougMeil:
http://wiki.apache.org/hadoop/UsingLzoCompression?action=diff&rev1=24&rev2=25
Comment:
Per stack, changing the repo to Todd's version of LZO
This distro doesn't contain all bug fixes (such as the fix for when LZO header or block-header data falls on a read boundary).
- Please get latest distro with all fixes from http://github.com/kevinweil/hadoop-lzo
+ Please get latest distro with all fixes from https://github.com/toddlipcon/hadoop-lzo
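Fetching and building the recommended distro can be sketched as follows (a sketch, not official instructions: it assumes `git`, `ant`, a JDK, and the liblzo2 development headers are already installed, and that the repo's ant build targets have not changed between versions):

{{{
# Clone Todd Lipcon's hadoop-lzo fork (the repo referenced above)
git clone https://github.com/toddlipcon/hadoop-lzo.git
cd hadoop-lzo

# Build the Java jar plus the native LZO bindings.
# "compile-native tar" is the target pair the hadoop-lzo build has
# historically used; check the repo's README for your checkout.
ant compile-native tar

# The resulting jar and native libs (under build/) then need to be
# placed on the Hadoop/HBase classpath and java.library.path.
}}}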
== Why compression? ==
When compression is enabled, the store file (HFile) applies a compression algorithm to blocks as they are written (during flushes and compactions); those blocks must then be decompressed when they are read.
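For example, compression is set per column family. A minimal HBase shell sketch (the table and column-family names `t1` and `cf1` are hypothetical, and this assumes the LZO codec from the distro above is already installed and on HBase's classpath):

{{{
create 't1', {NAME => 'cf1', COMPRESSION => 'LZO'}
}}}

An existing table can be switched the same way with `alter`; the new setting takes effect as store files are rewritten by subsequent flushes and compactions.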