Posted to dev@accumulo.apache.org by "Keith Turner (JIRA)" <ji...@apache.org> on 2012/06/21 18:27:44 UTC
[jira] [Commented] (ACCUMULO-624) iterators may open lots of compressors
[ https://issues.apache.org/jira/browse/ACCUMULO-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398538#comment-13398538 ]
Keith Turner commented on ACCUMULO-624:
---------------------------------------
A workaround for this issue is to enable the block cache for the table. This will cause RFile blocks to be read into memory and closed immediately, releasing the decompressor. A decompressor will not be kept for each deep copy in this case.
I did some tests with the intersecting iterator to verify this. Without the cache, querying 5 terms allocated 5 decompressors. With the cache, only one decompressor was allocated.
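For reference, the block cache described above can be enabled per table from the Accumulo shell. The table name "mytable" here is a placeholder; the property names are Accumulo's standard table cache settings, but verify them against your Accumulo version:

```shell
# Enable the data block cache for a table (table name "mytable" is hypothetical):
config -t mytable -s table.cache.block.enable=true
# Optionally enable the index block cache as well:
config -t mytable -s table.cache.index.enable=true
```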
> iterators may open lots of compressors
> --------------------------------------
>
> Key: ACCUMULO-624
> URL: https://issues.apache.org/jira/browse/ACCUMULO-624
> Project: Accumulo
> Issue Type: Bug
> Components: tserver
> Reporter: Eric Newton
> Assignee: Keith Turner
>
> A large iterator tree may create many instances of Compressors. These instances are pulled from a pool that never decreases in size. So, if 50 simultaneous queries are run over dozens of files, each with a complex iterator stack, there will be thousands of compressors created. Each of these holds a large buffer. This can cause the server to run out of memory.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
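The unbounded-pool behavior described in the issue can be illustrated with a minimal sketch. This is not Hadoop's actual CodecPool code, just a standalone model of the pattern: a pool that caches every returned instance and never shrinks, so peak concurrent demand sets permanent memory use.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UnboundedPoolSketch {
    static class Decompressor {
        final byte[] buffer = new byte[64 * 1024]; // each instance holds a large buffer
    }

    // Pool of idle instances; entries are cached forever once returned.
    static final Deque<Decompressor> pool = new ArrayDeque<>();

    static Decompressor borrow() {
        Decompressor d = pool.poll();
        return d != null ? d : new Decompressor(); // allocate only when the pool is empty
    }

    static void release(Decompressor d) {
        pool.push(d); // the pool never discards instances, so it never shrinks
    }

    public static void main(String[] args) {
        // Simulate 50 simultaneous queries, each holding a decompressor at once.
        Decompressor[] inUse = new Decompressor[50];
        for (int i = 0; i < inUse.length; i++) inUse[i] = borrow();
        for (Decompressor d : inUse) release(d);
        // All 50 instances (and their buffers) now sit in the pool permanently.
        System.out.println("pooled instances: " + pool.size());
    }
}
```

Each deep copy in an iterator tree borrowing its own decompressor plays the role of a simultaneous query here, which is why the block-cache workaround (releasing decompressors immediately) keeps the pool small.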
Re: [jira] [Commented] (ACCUMULO-624) iterators may open lots of compressors
Posted by Keith Turner <ke...@deenlo.com>.
I added a comment on the ticket
On Thu, Jun 21, 2012 at 1:38 PM, David Medinets
<da...@gmail.com> wrote:
> Is the testing code simple enough to attach to this ticket? I ask
> because others might want to replicate your results.
Re: [jira] [Commented] (ACCUMULO-624) iterators may open lots of compressors
Posted by David Medinets <da...@gmail.com>.
Is the testing code simple enough to attach to this ticket? I ask
because others might want to replicate your results.