Posted to user@cassandra.apache.org by eugene miretsky <eu...@gmail.com> on 2017/03/27 22:26:55 UTC

Issues while using TWCS compaction and Bulkloader

Hi,

We have a Cassandra 3.0.8 cluster, and we use the Bulkloader
<http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated>
to upload time series data nightly. The data has a 3-day TTL, and the
compaction window is 1 hour.
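
For reference, the table is defined roughly along these lines (keyspace,
table and column names are made up, but the compaction and TTL settings
are the ones we use):

    CREATE TABLE metrics.events_by_hour (
        source_id  text,
        event_time timestamp,
        value      double,
        PRIMARY KEY (source_id, event_time)
    ) WITH compaction = {
          'class': 'TimeWindowCompactionStrategy',
          'compaction_window_unit': 'HOURS',
          'compaction_window_size': '1'
      }
      AND default_time_to_live = 259200;  -- 3 days, in seconds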

Generally the data fits into memory, all reads are served from OS page
cache, and the cluster works fine. However, we had a few unexplained
incidents:

   1. High page fault ratio: This happened once, lasted for 3-4 days, and
   was resolved after we restarted the cluster. We have not been able to
   reproduce it since.
   2. High rate of bloom filter false positives: same as above.

Several questions:

   1. What could have caused the page faults and/or the bloom filter false
   positives?
   2. What's the right strategy for running repairs?
      1. Are repairs even required? We don't generate any tombstones.
      2. The following article suggests that incremental repairs should not
      be used with date-tiered compaction; does this also apply to TWCS?
      https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesManualRepair.html

Cheers,
Eugene

Re: Issues while using TWCS compaction and Bulkloader

Posted by Alain RODRIGUEZ <ar...@gmail.com>.
Hi Eugene.


>    1. What could have caused the page fault, and/or bloom filter false
>    positives?
>    2. What's the right strategy for running repairs?
>
>
I am not sure how these two questions are related. Given your mix of
questions, I am guessing you tried to repair, probably using incremental
repairs. I imagine that could have led to the creation of a lot of SSTables
because of anti-compactions. This large number of SSTables would indeed
reduce bloom filter and page caching efficiency and create high latency.
It's a common issue on the first run of incremental repair...
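
If you want to check that theory, something along these lines should show
it (keyspace and table names below are placeholders):

    # Per-table "SSTable count" and "Bloom filter false ratio"
    nodetool tablestats my_keyspace.my_table

    # Anticompactions (from incremental repair) show up here while running
    nodetool compactionstats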

You probably don't need to repair, and if the data fits into memory, you
probably don't want to use incremental repairs anyway. There are some
downsides and bugs around this feature.

Given your other email, which I already answered, I guess you don't need to
repair, as the data is temporary and expires only through TTL. I would
ensure strong consistency by using CL = LOCAL_QUORUM on both reads and
writes, and not worry about entropy in your case, as the written data will
'soon' be deleted (but I could be missing important context).
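
For example, in cqlsh (your client driver has an equivalent setting):

    CONSISTENCY LOCAL_QUORUM;
    -- A LOCAL_QUORUM write and a LOCAL_QUORUM read always overlap on at
    -- least one local replica, so reads see the latest successful write
    -- without relying on repair.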

Sorry we did not answer your questions earlier; I hope this is still useful.

C*heers,
-----------------------
Alain Rodriguez - @arodream - alain@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
