Posted to common-issues@hadoop.apache.org by "Arnaud Linz (JIRA)" <ji...@apache.org> on 2015/12/11 17:37:10 UTC

[jira] [Commented] (HADOOP-12007) GzipCodec native CodecPool leaks memory

    [ https://issues.apache.org/jira/browse/HADOOP-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052987#comment-15052987 ] 

Arnaud Linz commented on HADOOP-12007:
--------------------------------------

I have the same problem. YARN kills my container because my streaming app uses GzipCodec and creates a new off-heap buffer each time a new HDFS file is created.
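
A possible workaround until the pool return is fixed: borrow the Compressor from CodecPool explicitly and hand it back in a finally block, instead of letting createOutputStream(OutputStream) fetch one that never comes back. A minimal sketch, assuming the standard CodecPool / GzipCodec APIs (the helper class and method names are mine):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    // Hypothetical helper class; only the CodecPool/GzipCodec calls are real APIs.
    public class GzipWriteWorkaround {

        public static void writeGzipFile(FileSystem fs, Path path, byte[] data)
                throws IOException {
            Configuration conf = fs.getConf();
            GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

            // Borrow the compressor from the pool explicitly instead of letting
            // createOutputStream(OutputStream) fetch one that is never returned.
            Compressor compressor = CodecPool.getCompressor(codec);
            try {
                CompressionOutputStream out =
                        codec.createOutputStream(fs.create(path), compressor);
                try {
                    out.write(data);
                    out.finish();
                } finally {
                    out.close();
                }
            } finally {
                // Hand the native compressor (and its off-heap buffer) back so
                // the next file reuses it instead of allocating a new one.
                CodecPool.returnCompressor(compressor);
            }
        }
    }

With this pattern each new HDFS file reuses a pooled native compressor, so the off-heap footprint stays flat instead of growing per file.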


> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
>                 Key: HADOOP-12007
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12007
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.0
>            Reporter: Yejun Yang
>
> org/apache/hadoop/io/compress/GzipCodec.java calls CompressionCodec.Util.createOutputStreamWithCodecPool to use the CodecPool. But the compressor objects are never actually returned to the pool, which causes a memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor object to the pool. But CompressionCodec.Util.createOutputStreamWithCodecPool actually returns a CompressorStream, which overrides close().
> This causes CodecPool.returnCompressor to never be called. In my log file I can see lots of "Got brand-new compressor [.gz]" but no "Got recycled compressor".
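
To make the close() override issue concrete, here is a simplified, self-contained model of the inheritance problem described above. The class names mirror Hadoop's, but the bodies and the trackedCompressor field are illustrative stubs, not the actual Hadoop source:

    import java.io.IOException;

    // Simplified model of the bug: names mirror Hadoop's classes, bodies are stubs.
    public class CloseOverrideDemo {

        static class Compressor {}

        static class CodecPool {
            static void returnCompressor(Compressor c) {
                System.out.println("compressor returned to pool");
            }
        }

        // Stands in for CompressionOutputStream after HADOOP-10591: the base
        // close() returns the borrowed compressor to the pool.
        // (The trackedCompressor field name is illustrative.)
        static class CompressionOutputStream {
            Compressor trackedCompressor = new Compressor();

            public void close() throws IOException {
                CodecPool.returnCompressor(trackedCompressor);
            }
        }

        // Stands in for CompressorStream: its close() never calls super.close(),
        // so the pool return in the base class is bypassed.
        static class CompressorStream extends CompressionOutputStream {
            @Override
            public void close() throws IOException {
                System.out.println("stream closed, compressor leaked");
            }
        }

        public static void main(String[] args) throws IOException {
            new CompressorStream().close();
            // Prints only "stream closed, compressor leaked": returnCompressor
            // is never invoked, matching the missing "Got recycled compressor"
            // lines in the logs.
        }
    }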



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)