Posted to common-dev@hadoop.apache.org by "Xabriel J Collazo Mojica (JIRA)" <ji...@apache.org> on 2015/02/28 01:12:04 UTC
[jira] [Created] (HADOOP-11644) Contribute CMX compression
Xabriel J Collazo Mojica created HADOOP-11644:
-------------------------------------------------
Summary: Contribute CMX compression
Key: HADOOP-11644
URL: https://issues.apache.org/jira/browse/HADOOP-11644
Project: Hadoop Common
Issue Type: Improvement
Components: io
Reporter: Xabriel J Collazo Mojica
Assignee: Xabriel J Collazo Mojica
Hadoop natively supports four main compression codecs: bzip2, LZ4, Snappy, and zlib.
Each of these codecs fills a gap:
bzip2 : very high compression ratio, splittable
LZ4 : very fast, not splittable
Snappy : very fast, not splittable
zlib : good balance of compression ratio and speed
We think there is a gap for a codec that can compress and decompress quickly while also being splittable. This can help significantly on jobs where the input files are >= 1 GB.
For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, splittable, concatenable compression algorithm developed specifically for Hadoop workloads. Many of our customers use CMX, and we would love to be able to contribute it to hadoop-common.
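For context, Hadoop codecs are typically made discoverable by listing the codec class in the io.compression.codecs property in core-site.xml; a contributed CMX codec would presumably be registered the same way. The class name below is a placeholder, not the actual IBM class:

```xml
<property>
  <name>io.compression.codecs</name>
  <!-- com.example.cmx.CmxCodec is a hypothetical class name -->
  <value>org.apache.hadoop.io.compress.DefaultCodec,com.example.cmx.CmxCodec</value>
</property>
```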
CMX is block-oriented : we typically use 64k blocks. Each block is independently decompressible.
CMX is splittable : we implement the SplittableCompressionCodec interface. Every CMX file is a multiple of 64k, so splits can be aligned to block boundaries without any external index.
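Because every file is a whole number of 64k blocks, a reader can align an arbitrary requested split range to block boundaries with simple arithmetic. A minimal sketch (the constant and method names are illustrative, not taken from the actual CMX implementation):

```java
/**
 * Sketch of split alignment for a codec whose files are always a
 * multiple of a fixed compressed-block size. Illustrative only; the
 * real CMX code is not shown in this issue.
 */
public class CmxSplitAlign {
    static final long BLOCK_SIZE = 64 * 1024; // 64k compressed blocks

    /** Round a requested split start up to the next block boundary. */
    static long alignSplitStart(long offset) {
        return ((offset + BLOCK_SIZE - 1) / BLOCK_SIZE) * BLOCK_SIZE;
    }

    /** Round a requested split end down to the previous block boundary. */
    static long alignSplitEnd(long offset) {
        return (offset / BLOCK_SIZE) * BLOCK_SIZE;
    }
}
```

Each task then decompresses only the whole blocks inside its aligned range, which is why no side index file is needed.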
CMX is concatenable : two independent CMX files can be concatenated into a single valid CMX file. We have seen that projects like Apache Flume require this feature.
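Since each block decompresses independently and files carry no trailing index, joining two such files reduces to raw byte concatenation. A hedged sketch in plain Java (the helper name is ours; the real codec may add header or length validation):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Illustrative concatenation of two block-oriented compressed files.
 * Because every file is a whole number of independently decompressible
 * blocks, no re-compression or header merging is required.
 */
public class CmxConcat {
    static void concat(Path out, Path first, Path second) throws IOException {
        try (OutputStream os = Files.newOutputStream(out)) {
            Files.copy(first, os);  // copy all blocks of the first file
            Files.copy(second, os); // append all blocks of the second
        }
    }
}
```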
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)