Posted to notifications@accumulo.apache.org by "Mike Drob (JIRA)" <ji...@apache.org> on 2013/07/09 02:27:48 UTC

[jira] [Commented] (ACCUMULO-1534) Tablet Server using large number of decompressors during a scan

    [ https://issues.apache.org/jira/browse/ACCUMULO-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702682#comment-13702682 ] 

Mike Drob commented on ACCUMULO-1534:
-------------------------------------

Another option is to lower the Hadoop io.file.buffer.size setting to something smaller than the current value, since the decompressor buffers are sized from it. In the CDH3 line it looks like the default is 64k; I'm not sure about other distributions.
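
For illustration, a minimal sketch of overriding that property programmatically; the 16k value and class name are hypothetical choices, and in practice the property would normally be set in core-site.xml instead:

    import org.apache.hadoop.conf.Configuration;

    public class BufferSizeSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Hypothetical value: 16k instead of the 64k reportedly seen in CDH3.
            // A smaller buffer means each pooled decompressor pins less memory.
            conf.setInt("io.file.buffer.size", 16 * 1024);
            System.out.println(conf.getInt("io.file.buffer.size", -1));
        }
    }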
                
> Tablet Server using large number of decompressors during a scan
> ---------------------------------------------------------------
>
>                 Key: ACCUMULO-1534
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1534
>             Project: Accumulo
>          Issue Type: Bug
>    Affects Versions: 1.4.3
>            Reporter: Mike Drob
>             Fix For: 1.4.4, 1.5.1, 1.6.0
>
>
> I believe this issue is similar to ACCUMULO-665. We've run into a situation where a complex iterator tree creates a large number of decompressors from the underlying CodecPool while serving scans. Each decompressor holds on to a large buffer, and the total buffer memory ends up killing the tserver.
> We have verified that turning off compression makes this problem go away.
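
For context, a minimal sketch (not Accumulo's actual code) of the lease/return pattern around Hadoop's CodecPool that the description refers to; the class name and the use of GzipCodec are illustrative assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class DecompressorLeaseSketch {
        public static void main(String[] args) {
            CompressionCodec codec =
                ReflectionUtils.newInstance(GzipCodec.class, new Configuration());
            // Leases a pooled Decompressor (or creates one if the pool is empty).
            Decompressor d = CodecPool.getDecompressor(codec);
            try {
                // A reader would pass it to codec.createInputStream(in, d).
                // Each outstanding Decompressor pins its internal buffer, so a
                // deep iterator tree that leases many at once holds many buffers.
            } finally {
                CodecPool.returnDecompressor(d); // releases the buffer back to the pool
            }
        }
    }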
