Posted to commits@cassandra.apache.org by "nicolas ginder (JIRA)" <ji...@apache.org> on 2016/11/07 11:30:58 UTC

[jira] [Updated] (CASSANDRA-12707) JVM out of memory when querying an extra-large partition with lots of tombstones

     [ https://issues.apache.org/jira/browse/CASSANDRA-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nicolas ginder updated CASSANDRA-12707:
---------------------------------------
    Reproduced In: 2.1.x, 2.2.x  (was: 2.1.x)

> JVM out of memory when querying an extra-large partition with lots of tombstones
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12707
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12707
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: nicolas ginder
>             Fix For: 2.1.x, 2.2.x
>
>
> We have an extra-large partition of 40 million cells in which most of the cells have been deleted. When querying this partition with a slice query, Cassandra runs out of memory as the tombstones fill up the JVM heap. After debugging against one of the large SSTables, we found that the following code loads all of the tombstones into memory.
> In org.apache.cassandra.db.filter.QueryFilter:
> ...
> public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF, final Iterator<? extends OnDiskAtom> iter)
> {
>     ...
>     while (iter.hasNext())
>     {
>         OnDiskAtom atom = iter.next();
>         if (atom instanceof Cell)
>         {
>             next = (Cell)atom;
>             break;
>         }
>         else
>         {
>             // every non-Cell atom (i.e. tombstone) is retained in returnCF
>             returnCF.addAtom(atom);
>         }
>     }
> ...
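The pattern above can be sketched in isolation. The following is a minimal, self-contained illustration (using hypothetical simplified Atom/Cell/Tombstone stand-ins, not Cassandra's actual OnDiskAtom/ColumnFamily classes) of why heap usage here scales with the tombstone count rather than with the number of live cells returned: the iterator yields cells lazily, but eagerly stores every tombstone it skips over.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class TombstoneGatherSketch {
    // Hypothetical simplified stand-ins for OnDiskAtom and its subtypes.
    interface Atom {}
    static final class Cell implements Atom {}
    static final class Tombstone implements Atom {}

    // Mirrors the gatherTombstones pattern: yields Cells lazily, but
    // every non-Cell atom encountered is retained in 'gathered'
    // (standing in for returnCF.addAtom), so tombstones accumulate
    // on-heap regardless of how few live cells the query returns.
    static Iterator<Cell> gatherTombstones(List<Atom> gathered, Iterator<Atom> iter) {
        return new Iterator<Cell>() {
            private Cell next;
            private boolean done;

            private void advance() {
                while (iter.hasNext()) {
                    Atom atom = iter.next();
                    if (atom instanceof Cell) {
                        next = (Cell) atom;
                        return;
                    }
                    gathered.add(atom); // tombstones pile up here
                }
                done = true;
            }

            public boolean hasNext() {
                if (next == null && !done) advance();
                return next != null;
            }

            public Cell next() {
                if (!hasNext()) throw new NoSuchElementException();
                Cell c = next;
                next = null;
                return c;
            }
        };
    }

    public static void main(String[] args) {
        // 1,000 atoms: 10 live cells, 990 tombstones.
        List<Atom> atoms = new ArrayList<>();
        for (int i = 0; i < 1000; i++)
            atoms.add(i % 100 == 0 ? new Cell() : new Tombstone());

        List<Atom> gathered = new ArrayList<>();
        Iterator<Cell> cells = gatherTombstones(gathered, atoms.iterator());
        int live = 0;
        while (cells.hasNext()) {
            cells.next();
            live++;
        }

        // Heap cost tracks the tombstone count, not the live-cell count.
        System.out.println(live + " live cells, " + gathered.size() + " tombstones retained");
    }
}
```

With 40 million mostly-deleted cells, the same pattern retains tens of millions of tombstone objects on the heap for a single slice query, which matches the OutOfMemoryError described above.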



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)