Posted to user@cassandra.apache.org by Saladi Naidu <na...@yahoo.com> on 2015/09/15 04:37:14 UTC

LTCS Strategy Resulting in multiple SSTables

We are using the Leveled Compaction Strategy (LCS) on a column family. Below are cfstats from two nodes in the same cluster: one node has 808 SSTables in L0, whereas the other has just 1 SSTable in L0. On the node with many SSTables, all of them are small and share the same creation timestamp. We ran a compaction, but it did not change much; the node was left with a huge number of SSTables. This large number of SSTables is hurting read performance.

In the same cluster, under the same keyspace, we are seeing this discrepancy in other column families as well. What is going wrong, and what is the solution?
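(For reference, the per-node stats below were captured with nodetool cfstats; "mykeyspace" here is a stand-in for our actual keyspace name:)

    nodetool cfstats mykeyspace.category_ranking_dedup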
---NODE1---

    Table: category_ranking_dedup
        SSTable count: 1
        SSTables in each level: [1, 0, 0, 0, 0, 0, 0, 0, 0]
        Space used (live): 2012037
        Space used (total): 2012037
        Space used by snapshots (total): 0
        SSTable Compression Ratio: 0.07677216119569073
        Memtable cell count: 990
        Memtable data size: 32082
        Memtable switch count: 11
        Local read count: 2842
        Local read latency: 3.215 ms
        Local write count: 18309
        Local write latency: 5.008 ms
        Pending flushes: 0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 816
        Compacted partition minimum bytes: 87
        Compacted partition maximum bytes: 25109160
        Compacted partition mean bytes: 22844
        Average live cells per slice (last five minutes): 338.84588318085855
        Maximum live cells per slice (last five minutes): 10002.0
        Average tombstones per slice (last five minutes): 36.53307529908515
        Maximum tombstones per slice (last five minutes): 36895.0

---NODE2---

    Table: category_ranking_dedup
        SSTable count: 808
        SSTables in each level: [808/4, 0, 0, 0, 0, 0, 0, 0, 0]
        Space used (live): 291641980
        Space used (total): 291641980
        Space used by snapshots (total): 0
        SSTable Compression Ratio: 0.1431106696818256
        Memtable cell count: 4365293
        Memtable data size: 3742375
        Memtable switch count: 44
        Local read count: 2061
        Local read latency: 31.983 ms
        Local write count: 30096
        Local write latency: 27.449 ms
        Pending flushes: 0
        Bloom filter false positives: 0
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 54544
        Compacted partition minimum bytes: 87
        Compacted partition maximum bytes: 25109160
        Compacted partition mean bytes: 634491
        Average live cells per slice (last five minutes): 416.1780688985929
        Maximum live cells per slice (last five minutes): 10002.0
        Average tombstones per slice (last five minutes): 45.11547792333818
        Maximum tombstones per slice (last five minutes): 36895.0




 Naidu Saladi 

Re: LTCS Strategy Resulting in multiple SSTables

Posted by Nate McCall <na...@thelastpickle.com>.
You could try altering the table to use STCS, then force a major compaction
via 'nodetool compact', then alter the table back to LCS when it completes.
You may very well hit the same issues in the process, however, until you
upgrade.
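In rough CQL/nodetool terms, the sequence would look something like the
following sketch, where 'mykeyspace' stands in for your actual keyspace name:

    ALTER TABLE mykeyspace.category_ranking_dedup
      WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

then, on each affected node:

    nodetool compact mykeyspace category_ranking_dedup

and once that completes:

    ALTER TABLE mykeyspace.category_ranking_dedup
      WITH compaction = {'class': 'LeveledCompactionStrategy'};

Note that the ALTERs are cluster-wide schema changes and only need to run
once, while 'nodetool compact' has to be run on each node.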


On Wed, Sep 16, 2015 at 1:25 PM, Saladi Naidu <na...@yahoo.com> wrote:

> Nate,
> Yes, we are in the process of upgrading to 2.1.9. Meanwhile, I am looking at
> correcting the problem. Do you know of any recovery options to reduce the
> number of SSTables? As the SSTables keep increasing, read performance is
> deteriorating.
>
> Naidu Saladi
>
> ------------------------------
> *From:* Nate McCall <na...@thelastpickle.com>
> *To:* Cassandra Users <us...@cassandra.apache.org>; Saladi Naidu <
> naidusp2002@yahoo.com>
> *Sent:* Tuesday, September 15, 2015 4:53 PM
>
> *Subject:* Re: LTCS Strategy Resulting in multiple SSTables
>
> That's an early, known-buggy 2.1 version. Several issues that could cause
> this behavior have been fixed since then. Most likely
> https://issues.apache.org/jira/browse/CASSANDRA-9592 ?
>
> Upgrade to 2.1.9 and see if the problem persists.
>
>
>
> On Tue, Sep 15, 2015 at 8:31 AM, Saladi Naidu <na...@yahoo.com>
> wrote:
>
> We are on 2.1.2 and planning to upgrade to 2.1.9
>
> Naidu Saladi
>
> ------------------------------
> *From:* Marcus Eriksson <kr...@gmail.com>
> *To:* user@cassandra.apache.org; Saladi Naidu <na...@yahoo.com>
> *Sent:* Tuesday, September 15, 2015 1:53 AM
> *Subject:* Re: LTCS Strategy Resulting in multiple SSTables
>
> if you are on Cassandra 2.2, it is probably this:
> https://issues.apache.org/jira/browse/CASSANDRA-10270
>
>
>
> On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu <na...@yahoo.com>
> wrote:
>
> [quoted original post and cfstats snipped; see the first message above]


-- 
-----------------
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

Re: LTCS Strategy Resulting in multiple SSTables

Posted by Saladi Naidu <na...@yahoo.com>.
Nate,

Yes, we are in the process of upgrading to 2.1.9. Meanwhile, I am looking at
correcting the problem. Do you know of any recovery options to reduce the
number of SSTables? As the SSTables keep increasing, read performance is
deteriorating.

Naidu Saladi

      From: Nate McCall <na...@thelastpickle.com>
 To: Cassandra Users <us...@cassandra.apache.org>; Saladi Naidu <na...@yahoo.com> 
 Sent: Tuesday, September 15, 2015 4:53 PM
 Subject: Re: LTCS Strategy Resulting in multiple SSTables
   
That's an early, known-buggy 2.1 version. Several issues that could cause this behavior have been fixed since then. Most likely https://issues.apache.org/jira/browse/CASSANDRA-9592 ?

Upgrade to 2.1.9 and see if the problem persists.


On Tue, Sep 15, 2015 at 8:31 AM, Saladi Naidu <na...@yahoo.com> wrote:

We are on 2.1.2 and planning to upgrade to 2.1.9.

Naidu Saladi

      From: Marcus Eriksson <kr...@gmail.com>
 To: user@cassandra.apache.org; Saladi Naidu <na...@yahoo.com> 
 Sent: Tuesday, September 15, 2015 1:53 AM
 Subject: Re: LTCS Strategy Resulting in multiple SSTables
   
if you are on Cassandra 2.2, it is probably this: https://issues.apache.org/jira/browse/CASSANDRA-10270


On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu <na...@yahoo.com> wrote:

[quoted original post, cfstats, and quoted signature snipped; see the first message above]

Re: LTCS Strategy Resulting in multiple SSTables

Posted by Nate McCall <na...@thelastpickle.com>.
That's an early, known-buggy 2.1 version. Several issues that could cause
this behavior have been fixed since then. Most likely
https://issues.apache.org/jira/browse/CASSANDRA-9592 ?

Upgrade to 2.1.9 and see if the problem persists.
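You can double-check what each node is actually running with 'nodetool
version', or from cqlsh:

    SELECT release_version FROM system.local;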

On Tue, Sep 15, 2015 at 8:31 AM, Saladi Naidu <na...@yahoo.com> wrote:

> We are on 2.1.2 and planning to upgrade to 2.1.9
>
> Naidu Saladi
>
> ------------------------------
> *From:* Marcus Eriksson <kr...@gmail.com>
> *To:* user@cassandra.apache.org; Saladi Naidu <na...@yahoo.com>
> *Sent:* Tuesday, September 15, 2015 1:53 AM
> *Subject:* Re: LTCS Strategy Resulting in multiple SSTables
>
> if you are on Cassandra 2.2, it is probably this:
> https://issues.apache.org/jira/browse/CASSANDRA-10270
>
>
>
> On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu <na...@yahoo.com>
> wrote:
>
> [quoted original post and cfstats snipped; see the first message above]


-- 
-----------------
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

Re: LTCS Strategy Resulting in multiple SSTables

Posted by Saladi Naidu <na...@yahoo.com>.
We are on 2.1.2 and planning to upgrade to 2.1.9.

Naidu Saladi

      From: Marcus Eriksson <kr...@gmail.com>
 To: user@cassandra.apache.org; Saladi Naidu <na...@yahoo.com> 
 Sent: Tuesday, September 15, 2015 1:53 AM
 Subject: Re: LTCS Strategy Resulting in multiple SSTables
   
if you are on Cassandra 2.2, it is probably this: https://issues.apache.org/jira/browse/CASSANDRA-10270


On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu <na...@yahoo.com> wrote:

[quoted original post and cfstats snipped; see the first message above]

Re: LTCS Strategy Resulting in multiple SSTables

Posted by Marcus Eriksson <kr...@gmail.com>.
if you are on Cassandra 2.2, it is probably this:
https://issues.apache.org/jira/browse/CASSANDRA-10270
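If you want to confirm on disk that the sstables really are stuck in L0, the
sstablemetadata tool shipped in tools/bin reports the level recorded in each
sstable's metadata (the file path below is illustrative, not an actual name
from your data directory):

    tools/bin/sstablemetadata /path/to/category_ranking_dedup-ka-1-Data.db | grep 'SSTable Level'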

On Tue, Sep 15, 2015 at 4:37 AM, Saladi Naidu <na...@yahoo.com> wrote:

> [quoted original post and cfstats snipped; see the first message above]