Posted to commits@cassandra.apache.org by "nicolas ginder (JIRA)" <ji...@apache.org> on 2016/09/26 10:52:20 UTC
[jira] [Created] (CASSANDRA-12707) JVM out of memory when querying an extra-large partition with lots of tombstones
nicolas ginder created CASSANDRA-12707:
------------------------------------------
Summary: JVM out of memory when querying an extra-large partition with lots of tombstones
Key: CASSANDRA-12707
URL: https://issues.apache.org/jira/browse/CASSANDRA-12707
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: nicolas ginder
We have an extra-large partition of 40 million cells in which most of the cells were deleted. When querying this partition, Cassandra runs out of memory as tombstones fill up the JVM heap. After debugging one of the large SSTables, we found that this part of the code loads all of the tombstones.
In org.apache.cassandra.db.filter.QueryFilter:

    ...
    public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF, final Iterator<? extends OnDiskAtom> iter)
    {
        ...
        while (iter.hasNext())
        {
            OnDiskAtom atom = iter.next();
            if (atom instanceof Cell)
            {
                next = (Cell) atom;
                break;
            }
            else
            {
                if (returnCF.deletionInfo().rangeCount() > DatabaseDescriptor.getTombstoneFailureThreshold())
                {
                    throw new TombstoneOverwhelmingException();
                }
                returnCF.addAtom(atom);
            }
        }
        ...
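The accumulation pattern can be illustrated with a minimal, self-contained sketch (hypothetical class and method names, not Cassandra's actual API): every tombstone atom encountered before the next live cell is buffered on the heap, so a partition that is mostly deletions buffers nearly everything unless the failure threshold trips first.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-ins for OnDiskAtom/Cell, for illustration only.
interface Atom {}
final class LiveCell implements Atom {}
final class Tombstone implements Atom {}

class TombstoneOverwhelmingException extends RuntimeException {}

public class GatherSketch {
    // Mirrors the shape of the loop above: tombstones are buffered
    // until the next live cell is reached or the threshold is exceeded.
    static LiveCell nextCell(Iterator<? extends Atom> iter,
                             List<Atom> buffered, int failureThreshold) {
        while (iter.hasNext()) {
            Atom atom = iter.next();
            if (atom instanceof LiveCell)
                return (LiveCell) atom;      // stop at the first live cell
            if (buffered.size() > failureThreshold)
                throw new TombstoneOverwhelmingException();
            buffered.add(atom);              // every tombstone stays on-heap
        }
        return null;
    }

    public static void main(String[] args) {
        // 10 tombstones ahead of the first live cell, threshold of 5:
        List<Atom> atoms = new ArrayList<>();
        for (int i = 0; i < 10; i++) atoms.add(new Tombstone());
        atoms.add(new LiveCell());

        List<Atom> buffered = new ArrayList<>();
        try {
            nextCell(atoms.iterator(), buffered, 5);
            System.out.println("returned a cell");
        } catch (TombstoneOverwhelmingException e) {
            System.out.println("overwhelmed after " + buffered.size() + " tombstones");
        }
    }
}
```

With a working threshold the query fails fast instead of exhausting the heap; the report above is about the case where the buffered tombstones grow far past what the check catches.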
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)