Posted to user@cassandra.apache.org by Jason Tang <ar...@gmail.com> on 2014/07/01 02:32:07 UTC

Re: Any better solution to avoid TombstoneOverwhelmingException?

The traffic is continuous, which means that while new records are being
inserted, old records are being executed (and deleted) at the same time.

The execution is based on a time condition, so some stored records are
executed (deleted) in the current round, and some in the next round.

For a given TTL it is the same as a delete: it will also generate tombstones.
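
For reference, a rough CQL sketch of the one-partition-per-round idea
suggested below (table and column names are only hypothetical). Note that
rows written with a TTL still turn into tombstones when they expire, which
is exactly the concern above:

  -- Hypothetical table: one partition per execution round
  CREATE TABLE tasks_by_round (
      round_id   timestamp,   -- partition key: which execution round
      task_id    timeuuid,    -- clustering column: the individual task
      payload    text,
      PRIMARY KEY (round_id, task_id)
  );

  -- Write into the current round's partition; the TTL lets the data expire
  -- on its own instead of being deleted explicitly (the expired cells still
  -- become tombstones)
  INSERT INTO tasks_by_round (round_id, task_id, payload)
  VALUES ('2014-07-01 00:00:00', now(), 'task data')
  USING TTL 86400;

  -- The next round reads only its own partition, so the query never scans
  -- the expired (tombstoned) rows of earlier rounds
  SELECT * FROM tasks_by_round WHERE round_id = '2014-07-01 01:00:00';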


2014-06-30 15:58 GMT+08:00 DuyHai Doan <do...@gmail.com>:

> Why don't you store all current data in one partition and, for the next
> round of execution, switch to a new partition? This way you don't even
> need to remove data (if you insert with a given TTL).
>
>
> On Mon, Jun 30, 2014 at 8:43 AM, Jason Tang <ar...@gmail.com> wrote:
>
>> Our application uses Cassandra to persist asynchronous tasks, so within
>> one time period a lot of records are created in Cassandra (more than
>> 10M). Later they are executed.
>>
>> Due to disk space limitations, the executed records are deleted. After
>> gc_grace_seconds, they are expected to be removed from disk automatically.
>>
>> So in the next round of execution, the deleted records should not be
>> returned by queries.
>>
>> This traffic pattern generates a lot of tombstones.
>>
>> To avoid TombstoneOverwhelmingException, one option is to raise
>> tombstone_failure_threshold (see the cassandra.yaml excerpt at the end of
>> this thread), but is there any impact on the system's performance for my
>> traffic model, or is there a better solution for this traffic?
>>
>>
>> BRs
>> //Tang
>>
>
>
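
For reference, the thresholds mentioned above are set in cassandra.yaml. A
minimal excerpt, assuming a Cassandra 2.0-era node; the values shown are the
shipped defaults and are only illustrative, so check your own configuration:

  # cassandra.yaml -- tombstone guardrails (defaults shown, illustrative)
  # A read that touches more tombstones than the warn threshold logs a
  # warning; crossing the failure threshold aborts the query with
  # TombstoneOverwhelmingException.
  tombstone_warn_threshold: 1000
  tombstone_failure_threshold: 100000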