Posted to user@cassandra.apache.org by shalom sagges <sh...@gmail.com> on 2018/09/02 06:54:54 UTC

Re: Large sstables

If there are a lot of droppable tombstones, you could also run User Defined
Compaction on that (and on other) SSTable(s).

This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
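
In short: check the droppable tombstone ratio with sstablemetadata, then call the
CompactionManager MBean. A rough sketch with jmxterm (the jar name, data path and
SSTable filename below are placeholders, not from this thread):

    # check how many droppable tombstones the SSTable holds
    sstablemetadata /var/lib/cassandra/data/ks/tbl-*/mc-1234-big-Data.db | grep -i droppable

    # trigger a user defined compaction on that single SSTable over JMX
    echo "run -b org.apache.cassandra.db:type=CompactionManager \
    forceUserDefinedCompaction mc-1234-big-Data.db" | \
    java -jar jmxterm-1.0.2-uber.jar -l localhost:7199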

On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <
mohamadrezarostami2@gmail.com> wrote:

> Hi, Dear Vitali
> The best option for you is to migrate the data to a new table and change
> the partition key pattern for a better distribution of the data, so your
> sstables become smaller. But if your data already has a good distribution
> and is simply very large, you must add new servers to your datacenter.
> Changing the compaction strategy carries some risk.
>
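
To illustrate that re-keying idea: a common pattern is to add a bucket to the
partition key so one hot key is spread over many smaller partitions (the table
and column names below are invented for the sketch):

    cqlsh -e "
    CREATE TABLE ks.events_v2 (
        id      text,
        bucket  int,        -- e.g. hash(id) % 16, computed by the writer
        ts      timestamp,
        payload blob,
        PRIMARY KEY ((id, bucket), ts)
    );"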
> > On Shahrivar 8, 1397 AP, at 19:54, Jeff Jirsa <jj...@gmail.com> wrote:
> >
> > Either of those are options, but there’s also sstablesplit to break it
> up a bit
> >
> > Switching to LCS can be a problem depending on how many sstables/overlaps
> > you have
> >
> > --
> > Jeff Jirsa
> >
> >
> >> On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk <vd...@gmail.com> wrote:
> >>
> >> Hi,
> >> Some of the sstables got too big (100 GB and more), so they are no
> >> longer compacting, and some of the disks are running out of space. I'm
> >> running C* 3.0.17, RF3, with 10 disks/JBOD and STCS.
> >> What are my options? Completely delete all data on this node and rejoin
> >> it to the cluster? Or change the compaction strategy to LCS and then
> >> run repair?
> >> Vitali.
> >>
> >
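
On the "change to LCS" option: the switch itself is a single schema change
(the keyspace/table names and the 160 MB target below are placeholders), but,
as Jeff notes, it can kick off a lot of recompaction if many sstables overlap:

    cqlsh -e "ALTER TABLE ks.tbl WITH compaction =
        {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};"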

RE: Large sstables

Posted by Versátil <ve...@versatilengenharia.com.br>.
Remove my email please


Re: Large sstables

Posted by Vitali Dyachuk <vd...@gmail.com>.
What I have done is:
1) Added more disks, so the compaction can carry on.
2) When I switched from STCS to LCS, the STCS compaction tasks for the big
sstables remained, so I stopped them with nodetool stop -id <compaction_id>
(see the sketch below), and the LCS compaction then started to process the
sstables. I'm using C* 3.0.17 with RF3.
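
A minimal sketch of that stop step (the UUID below is a placeholder; nodetool
compactionstats prints the real ids):

    # list running compactions together with their ids
    nodetool compactionstats

    # stop one stuck compaction by the id copied from the output above
    nodetool stop -id 8f6a5e10-b1d4-11e8-96f8-529269fb1459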

However, the question remains: if I use sstablesplit on the 200 GB sstables
to split them into 200 MB files, will it help the LCS compaction?
Or will LCS just take some data from that big sstable and merge it with
other sstables on L0 and the other levels, so that I just have to wait until
the LCS compaction finishes?
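
If it helps, a sketch of the split itself (the data path and service name are
assumptions for illustration; sstablesplit operates on the files directly, so
Cassandra must be stopped on the node first):

    # stop Cassandra on this node before touching its sstables
    sudo service cassandra stop

    # split into ~200 MB pieces; --no-snapshot skips the safety snapshot
    sstablesplit --no-snapshot -s 200 \
        /var/lib/cassandra/data/ks/tbl-*/mc-*-big-Data.db

    sudo service cassandra start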

