Posted to user@cassandra.apache.org by Yan Chunlu <sp...@gmail.com> on 2011/09/19 03:51:07 UTC
cassandra crashed while repairing, leave node size X3
While running repair on node3, the "Load" kept increasing until Cassandra hit an
OOM; the "Load" stopped at about 140 GB. After Cassandra came back up, I tried
nodetool cleanup, but it doesn't seem to be working...
Does nodetool repair generate many temporary SSTables? How do I get rid of them?
thanks!
Address  Status  State    Load       Owns     Token
                                              113427455640312821154458202477256070484
node1    Up      Normal   43 GB      33.33%   0
node2    Up      Normal   59.52 GB   33.33%   56713727820156410577229101238628035242
node3    Down    Normal   142.57 GB  33.33%   113427455640312821154458202477256070484
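[Editor's note: in 0.7-era Cassandra, SSTables still being written by a compaction or stream carry "tmp" in the file name (e.g. Users-tmp-f-2-Data.db), so a quick disk check shows whether leftover temporary files account for the extra load. The sketch below uses a scratch directory and a made-up column family name ("Users") for illustration; on a real node you would point it at your data directory, typically /var/lib/cassandra/data/<keyspace> in a default install.]

```shell
# Demonstration with a scratch directory; on a real node, DATA_DIR would be
# your keyspace's data directory (path is an assumption for default installs).
DATA_DIR=$(mktemp -d)
touch "$DATA_DIR/Users-f-1-Data.db" "$DATA_DIR/Users-tmp-f-2-Data.db"

# Temporary SSTables from an interrupted repair/compaction carry "tmp" in the
# file name; with Cassandra stopped, those are the ones safe to remove.
ls "$DATA_DIR" | grep -- '-tmp-'
```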
Re: cassandra crashed while repairing, leave node size X3
Posted by Yan Chunlu <sp...@gmail.com>.
So does major compaction actually "clean it" or just "merge it"? I'm afraid it
will leave me with a single large file...
On Mon, Sep 19, 2011 at 10:26 AM, Anand Somani <me...@gmail.com> wrote:
> In my tests I have seen repair sometimes take a lot of space (2-3 times),
> cleanup did not clean it, the only way I could clean that was using major
> compaction.
Re: cassandra crashed while repairing, leave node size X3
Posted by Yan Chunlu <sp...@gmail.com>.
I am using 0.7.4 too, and I'm waiting for a stable 0.8.6 release because of
CASSANDRA-3166.
Are you already using 0.8.6 in production?
Re: cassandra crashed while repairing, leave node size X3
Posted by Jonas Borgström <jo...@trioptima.com>.
On 09/19/2011 04:26 AM, Anand Somani wrote:
> In my tests I have seen repair sometimes take a lot of space (2-3
> times), cleanup did not clean it, the only way I could clean that was
> using major compaction.
Do you remember which version you saw these problems with?
I've had the same problems with 0.7.4, but so far my repair tests with 0.8.6
seem to behave a lot better.
/ Jonas
Re: cassandra crashed while repairing, leave node size X3
Posted by Yan Chunlu <sp...@gmail.com>.
got it, thanks!
Re: cassandra crashed while repairing, leave node size X3
Posted by Peter Schuller <pe...@infidyne.com>.
> In my tests I have seen repair sometimes take a lot of space (2-3 times),
> cleanup did not clean it, the only way I could clean that was using major
> compaction.
https://issues.apache.org/jira/browse/CASSANDRA-2816 (follow links to
other jiras)
https://issues.apache.org/jira/browse/CASSANDRA-2699
And yes, to the one who asked: 'cleanup' only removes data that is not
supposed to be on the node; repair transfers data that *should* be on the
node, so only a compaction will cut down the size after a repair-induced
spike in load (data size).
--
/ Peter Schuller (@scode on twitter)
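[Editor's note: summarizing the thread as commands. The host and keyspace names below are assumptions, and the syntax is the 0.7-era nodetool; this is a sketch to run against your own cluster, not something executable in isolation.]

```shell
# 'cleanup' drops data the node no longer owns -- it will NOT reclaim the
# duplicate rows streamed in by repair:
nodetool -h node3 cleanup MyKeyspace

# A major compaction merges every SSTable in the keyspace, which is what
# actually reclaims the repair-induced bloat (at the cost of producing one
# large SSTable per column family):
nodetool -h node3 compact MyKeyspace
```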
Re: cassandra crashed while repairing, leave node size X3
Posted by Anand Somani <me...@gmail.com>.
In my tests I have seen repair sometimes take a lot of space (2-3 times).
Cleanup did not reclaim it; the only way I could clean it up was with a major
compaction.