Posted to user@cassandra.apache.org by Héctor Izquierdo Seliva <iz...@strands.com> on 2011/06/10 13:16:26 UTC

insufficient space to compact even the two smallest files, aborting

Hi, I'm running a test node with 0.8, and every time I try to do a major
compaction on one of the column families this message pops up. I have
plenty of space on disk for it and the sum of all the sstables is
smaller than the free capacity. Is there any way to force the
compaction?
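
(For what it's worth, I am triggering the major compaction with `nodetool
compact <keyspace> <cf>`; the keyspace and column family names are
placeholders here.)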


Re: insufficient space to compact even the two smallest files, aborting

Posted by Terje Marthinussen <tm...@gmail.com>.
12 sounds perfectly consistent with that, in fact:
4 buckets, 3 sstables in each; the default minimum threshold per bucket is 4.
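
To illustrate, here is a rough sketch of the bucketing idea (simplified,
not the actual 0.8 code; the 0.5x-1.5x "similar size" window is an
assumption for illustration):

import java.util.*;

// Simplified sketch: group sstables by similar size, then check each
// bucket against the minimum compaction threshold (default 4).
public class BucketSketch {
    static List<List<Long>> buckets(List<Long> sizes) {
        List<List<Long>> buckets = new ArrayList<List<Long>>();
        Collections.sort(sizes);
        for (long size : sizes) {
            boolean placed = false;
            for (List<Long> bucket : buckets) {
                long avg = average(bucket);
                if (size >= avg * 0.5 && size <= avg * 1.5) { // "similar size"
                    bucket.add(size);
                    placed = true;
                    break;
                }
            }
            if (!placed) {
                List<Long> fresh = new ArrayList<Long>();
                fresh.add(size);
                buckets.add(fresh);
            }
        }
        return buckets;
    }

    static long average(List<Long> bucket) {
        long sum = 0;
        for (long s : bucket) sum += s;
        return sum / bucket.size();
    }

    public static void main(String[] args) {
        // 12 sstables in 4 size tiers, 3 per tier: no bucket reaches the
        // threshold of 4, so nothing is eligible and the message is logged.
        List<Long> sizes = new ArrayList<Long>(Arrays.asList(
                10L, 11L, 12L, 50L, 55L, 60L, 200L, 210L, 220L, 800L, 850L, 900L));
        for (List<Long> bucket : buckets(sizes))
            System.out.println(bucket + (bucket.size() >= 4 ? " -> compact" : " -> skipped"));
    }
}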

Terje

2011/6/10 Héctor Izquierdo Seliva <iz...@strands.com>

>
>
> On Fri, 10-06-2011 at 20:21 +0900, Terje Marthinussen wrote:
> > This is a bug in the 0.8.0 release version.
> >
> >
> > Cassandra splits the sstables depending on size and tries to find (by
> > default) at least 4 files of similar size.
> >
> >
> > If it cannot find 4 files of similar size, it logs that message in
> > 0.8.0.
> >
> >
> > You can try to reduce the minimum required files for compaction and
> > it will work.
> >
> >
> > Terje
>
>
> Hi Terje,
>
> There are 12 SSTables, so I don't think that's the problem. I will try
> anyway and see what happens.
>
>
>

Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.

On Fri, 10-06-2011 at 20:21 +0900, Terje Marthinussen wrote:
> This is a bug in the 0.8.0 release version.
> 
> 
> Cassandra splits the sstables depending on size and tries to find (by
> default) at least 4 files of similar size.
> 
> 
> If it cannot find 4 files of similar size, it logs that message in
> 0.8.0.
> 
> 
> You can try to reduce the minimum required files for compaction and
> it will work.
> 
> 
> Terje


Hi Terje,

There are 12 SSTables, so I don't think that's the problem. I will try
anyway and see what happens.




Re: insufficient space to compact even the two smallest files, aborting

Posted by Jonathan Ellis <jb...@gmail.com>.
You may also have been running into
https://issues.apache.org/jira/browse/CASSANDRA-2765. We'll have a fix
for this in 0.8.1.

2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
> I was already way over the minimum. There were 12 sstables. Also, is
> there any reason why scrub got stuck? I did not see anything in the
> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
> sstables' sizes, and it stuck there for a couple of hours.
>
> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>> That most likely happened just because after scrub you had new files
>> and got over the "4" file minimum limit.
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>
>> Is the bug report.
>>
>
>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: insufficient space to compact even the two smallest files, aborting

Posted by Sylvain Lebresne <sy...@datastax.com>.
On Thu, Jun 23, 2011 at 10:23 AM, Jonathan Colby
<jo...@gmail.com> wrote:
> A compaction will be triggered when a "min" number of same-sized SSTable files are found. So what's actually the purpose of the "max" part of the threshold?

It says: if there are more than "max" same-sized SSTable
files, only compact "max" of those at the same time. This is really
just supposed to protect against degenerate cases, and there is hardly
ever a good reason to change it (you should hopefully never need
that protection anyway). However, one actual use of it is to
deactivate compaction (by setting the max to 0) if for some reason you
want that.
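
To make that concrete, a toy sketch (not the actual Cassandra code) of
how min and max gate a bucket of similar-sized sstables:

import java.util.Collections;
import java.util.List;

// Toy sketch: below "min" the bucket is left alone; above "max" only
// "max" files are compacted at a time; max = 0 disables compaction.
class ThresholdGate {
    static List<String> pickForCompaction(List<String> bucket, int min, int max) {
        if (max <= 0 || bucket.size() < min)
            return Collections.<String>emptyList();
        return bucket.subList(0, Math.min(bucket.size(), max));
    }
}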

--
Sylvain

>
>
> On Jun 23, 2011, at 12:55 AM, aaron morton wrote:
>
>> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
>>
>> Let's try the following:
>>
>> - restore the compaction settings to the default 4 and 32
>> - run `ls -lah` in the data dir and grab the output
>> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
>> - check the logs for messages from 'CompactionManager'
>> - when done, grab the output from `ls -lah` again.
>>
>> Hope that helps.
>>
>>
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
>>
>>> Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
>>> to run compact, but it's not doing anything. There are over 69 sstables
>>> now, read performance is horrible, and it's taking an insane amount of
>>> space. Maybe I don't quite get how the new per bucket stuff works, but I
>>> think this is not normal behaviour.
>>>
>>> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>>>> As Terje already said in this thread, the threshold is per bucket
>>>> (group of similarly sized sstables) not per CF.
>>>>
>>>> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
>>>>> I was already way over the minimum. There were 12 sstables. Also, is
>>>>> there any reason why scrub got stuck? I did not see anything in the
>>>>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
>>>>> sstables' sizes, and it stuck there for a couple of hours.
>>>>>
>>>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>>>> That most likely happened just because after scrub you had new files
>>>>>> and got over the "4" file minimum limit.
>>>>>>
>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>>>>
>>>>>> Is the bug report.
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>
>

Re: insufficient space to compact even the two smallest files, aborting

Posted by Jonathan Colby <jo...@gmail.com>.
A compaction will be triggered when a "min" number of same-sized SSTable files are found. So what's actually the purpose of the "max" part of the threshold?


On Jun 23, 2011, at 12:55 AM, aaron morton wrote:

> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
> 
> Let's try the following:
> 
> - restore the compaction settings to the default 4 and 32
> - run `ls -lah` in the data dir and grab the output
> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
> - check the logs for messages from 'CompactionManager'
> - when done, grab the output from `ls -lah` again.
> 
> Hope that helps. 
> 
> 
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
> 
>> Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
>> to run compact, but it's not doing anything. There are over 69 sstables
>> now, read performance is horrible, and it's taking an insane amount of
>> space. Maybe I don't quite get how the new per bucket stuff works, but I
>> think this is not normal behaviour.
>> 
>> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>>> As Terje already said in this thread, the threshold is per bucket
>>> (group of similarly sized sstables) not per CF.
>>> 
>>> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
>>>> I was already way over the minimum. There were 12 sstables. Also, is
>>>> there any reason why scrub got stuck? I did not see anything in the
>>>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
>>>> sstables' sizes, and it stuck there for a couple of hours.
>>>>
>>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>>> That most likely happened just because after scrub you had new files
>>>>> and got over the "4" file minimum limit.
>>>>> 
>>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>>> 
>>>>> Is the bug report.
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 


Re: insufficient space to compact even the two smallest files, aborting

Posted by aaron morton <aa...@thelastpickle.com>.
Missed that in the history, cheers. 
A
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 23 Jun 2011, at 20:26, Sylvain Lebresne wrote:

> As Jonathan said earlier, you are hitting
> https://issues.apache.org/jira/browse/CASSANDRA-2765
> 
> This will be fixed in 0.8.1, which is currently under a vote and should be
> released soon (let's say beginning of next week, maybe sooner).
> 
> --
> Sylvain
> 
> 2011/6/23 Héctor Izquierdo Seliva <iz...@strands.com>:
>> Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
>> any minor compaction. Ran compact by hand, and it picked only two
>> sstables.
>> 
>> Here's the ls before:
>> 
>> http://pastebin.com/xDtvVZvA
>> 
>> And this is the ls after:
>> 
>> http://pastebin.com/DcpbGvK6
>> 
>> Any suggestions?
>> 
>> 
>> 
>> On Thu, 23-06-2011 at 10:55 +1200, aaron morton wrote:
>>> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
>>> 
>>> Let's try the following:
>>> 
>>> - restore the compaction settings to the default 4 and 32
>>> - run `ls -lah` in the data dir and grab the output
>>> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
>>> - check the logs for messages from 'CompactionManager'
>>> - when done, grab the output from `ls -lah` again.
>>> 
>>> Hope that helps.
>>> 
>>> 
>>> -----------------
>>> Aaron Morton
>>> Freelance Cassandra Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>> 
>>> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
>>> 
>>>> Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
>>>> to run compact, but it's not doing anything. There are over 69 sstables
>>>> now, read performance is horrible, and it's taking an insane amount of
>>>> space. Maybe I don't quite get how the new per bucket stuff works, but I
>>>> think this is not normal behaviour.
>>>> 
>>>> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>>>>> As Terje already said in this thread, the threshold is per bucket
>>>>> (group of similarly sized sstables) not per CF.
>>>>> 
>>>>> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
>>>>>> I was already way over the minimum. There were 12 sstables. Also, is
>>>>>> there any reason why scrub got stuck? I did not see anything in the
>>>>>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
>>>>>> sstables' sizes, and it stuck there for a couple of hours.
>>>>>>
>>>>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>>>>> That most likely happened just because after scrub you had new files
>>>>>>> and got over the "4" file minimum limit.
>>>>>>> 
>>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>>>>> 
>>>>>>> Is the bug report.
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 
>> 
>> 


Re: insufficient space to compact even the two smallest files, aborting

Posted by Sylvain Lebresne <sy...@datastax.com>.
As Jonathan said earlier, you are hitting
https://issues.apache.org/jira/browse/CASSANDRA-2765

This will be fixed in 0.8.1, which is currently under a vote and should be
released soon (let's say beginning of next week, maybe sooner).

--
Sylvain

2011/6/23 Héctor Izquierdo Seliva <iz...@strands.com>:
> Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
> any minor compaction. Ran compact by hand, and it picked only two
> sstables.
>
> Here's the ls before:
>
> http://pastebin.com/xDtvVZvA
>
> And this is the ls after:
>
> http://pastebin.com/DcpbGvK6
>
> Any suggestions?
>
>
>
> On Thu, 23-06-2011 at 10:55 +1200, aaron morton wrote:
>> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
>>
>> Let's try the following:
>>
>> - restore the compaction settings to the default 4 and 32
>> - run `ls -lah` in the data dir and grab the output
>> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
>> - check the logs for messages from 'CompactionManager'
>> - when done, grab the output from `ls -lah` again.
>>
>> Hope that helps.
>>
>>
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
>>
>> > Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
>> > to run compact, but it's not doing anything. There are over 69 sstables
>> > now, read performance is horrible, and it's taking an insane amount of
>> > space. Maybe I don't quite get how the new per bucket stuff works, but I
>> > think this is not normal behaviour.
>> >
>> > On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>> >> As Terje already said in this thread, the threshold is per bucket
>> >> (group of similarly sized sstables) not per CF.
>> >>
>> >> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
>> >>> I was already way over the minimum. There were 12 sstables. Also, is
>> >>> there any reason why scrub got stuck? I did not see anything in the
>> >>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
>> >>> sstables' sizes, and it stuck there for a couple of hours.
>> >>>
>> >>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>> >>>> That most likely happened just because after scrub you had new files
>> >>>> and got over the "4" file minimum limit.
>> >>>>
>> >>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>> >>>>
>> >>>> Is the bug report.
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >
>> >
>>
>
>
>

Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
Btw, if I restart the node, then it happily proceeds with compaction.

On Thu, 23-06-2011 at 10:02 +0200, Héctor Izquierdo Seliva wrote:
> Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
> any minor compaction. Ran compact by hand, and it picked only two
> sstables.
> 
> Here's the ls before:
> 
> http://pastebin.com/xDtvVZvA
> 
> And this is the ls after:
> 
> http://pastebin.com/DcpbGvK6
> 
> Any suggestions?
> 
> 
> 
> On Thu, 23-06-2011 at 10:55 +1200, aaron morton wrote:
> > Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
> > 
> > Let's try the following:
> > 
> > - restore the compaction settings to the default 4 and 32
> > - run `ls -lah` in the data dir and grab the output
> > - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
> > - check the logs for messages from 'CompactionManager'
> > - when done, grab the output from `ls -lah` again.
> > 
> > Hope that helps. 
> > 
> >  
> > -----------------
> > Aaron Morton
> > Freelance Cassandra Developer
> > @aaronmorton
> > http://www.thelastpickle.com
> > 
> > On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
> > 
> > > Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
> > > to run compact, but it's not doing anything. There are over 69 sstables
> > > now, read performance is horrible, and it's taking an insane amount of
> > > space. Maybe I don't quite get how the new per bucket stuff works, but I
> > > think this is not normal behaviour.
> > > 
> > > On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
> > >> As Terje already said in this thread, the threshold is per bucket
> > >> (group of similarly sized sstables) not per CF.
> > >> 
> > >> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
> > >>> I was already way over the minimum. There were 12 sstables. Also, is
> > >>> there any reason why scrub got stuck? I did not see anything in the
> > >>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
> > >>> sstables' sizes, and it stuck there for a couple of hours.
> > >>>
> > >>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
> > >>>> That most likely happened just because after scrub you had new files
> > >>>> and got over the "4" file minimum limit.
> > >>>> 
> > >>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
> > >>>> 
> > >>>> Is the bug report.
> > >>>> 
> > >>> 
> > >>> 
> > >>> 
> > >>> 
> > >> 
> > >> 
> > >> 
> > > 
> > > 
> > 
> 
> 



Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
any minor compaction. Ran compact by hand, and it picked only two
sstables.

Here's the ls before:

http://pastebin.com/xDtvVZvA

And this is the ls after:

http://pastebin.com/DcpbGvK6

Any suggestions?



On Thu, 23-06-2011 at 10:55 +1200, aaron morton wrote:
> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
> 
> Let's try the following:
> 
> - restore the compaction settings to the default 4 and 32
> - run `ls -lah` in the data dir and grab the output
> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
> - check the logs for messages from 'CompactionManager'
> - when done, grab the output from `ls -lah` again.
> 
> Hope that helps. 
> 
>  
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
> 
> > Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
> > to run compact, but it's not doing anything. There are over 69 sstables
> > now, read performance is horrible, and it's taking an insane amount of
> > space. Maybe I don't quite get how the new per bucket stuff works, but I
> > think this is not normal behaviour.
> > 
> > On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
> >> As Terje already said in this thread, the threshold is per bucket
> >> (group of similarly sized sstables) not per CF.
> >> 
> >> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
> >>> I was already way over the minimum. There were 12 sstables. Also, is
> >>> there any reason why scrub got stuck? I did not see anything in the
> >>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
> >>> sstables' sizes, and it stuck there for a couple of hours.
> >>>
> >>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
> >>>> That most likely happened just because after scrub you had new files
> >>>> and got over the "4" file minimum limit.
> >>>> 
> >>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
> >>>> 
> >>>> Is the bug report.
> >>>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >> 
> >> 
> >> 
> > 
> > 
> 



Re: insufficient space to compact even the two smallest files, aborting

Posted by aaron morton <aa...@thelastpickle.com>.
Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.

Let's try the following:

- restore the compaction settings to the default 4 and 32
- run `ls -lah` in the data dir and grab the output
- run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
- check the logs for messages from 'CompactionManager'
- when done, grab the output from `ls -lah` again.

Hope that helps. 
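
If you would rather poll JMX than tail the logs, a minimal client along
these lines should do it (the MBean name is from the 0.8 tree; the port
and the attribute name are assumptions, so check your setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Reads the pending compaction count from the CompactionManager MBean.
// Assumes the default 0.8 JMX port (7199) on localhost.
public class PendingCompactions {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName cm = new ObjectName(
                    "org.apache.cassandra.db:type=CompactionManager");
            System.out.println("Pending compactions: "
                    + mbs.getAttribute(cm, "PendingTasks"));
        } finally {
            jmxc.close();
        }
    }
}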

 
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:

> Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
> to run compact, but it's not doing anything. There are over 69 sstables
> now, read performance is horrible, and it's taking an insane amount of
> space. Maybe I don't quite get how the new per bucket stuff works, but I
> think this is not normal behaviour.
> 
> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>> As Terje already said in this thread, the threshold is per bucket
>> (group of similarly sized sstables) not per CF.
>> 
>> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
>>> I was already way over the minimum. There were 12 sstables. Also, is
>>> there any reason why scrub got stuck? I did not see anything in the
>>> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
>>> sstables' sizes, and it stuck there for a couple of hours.
>>>
>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>> That most likely happened just because after scrub you had new files
>>>> and got over the "4" file minimum limit.
>>>> 
>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>> 
>>>> Is the bug report.
>>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
> 
> 


Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
to run compact, but it's not doing anything. There are over 69 sstables
now, read performance is horrible, and it's taking an insane amount of
space. Maybe I don't quite get how the new per bucket stuff works, but I
think this is not normal behaviour.

On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
> As Terje already said in this thread, the threshold is per bucket
> (group of similarly sized sstables) not per CF.
> 
> 2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
> > I was already way over the minimum. There were 12 sstables. Also, is
> > there any reason why scrub got stuck? I did not see anything in the
> > logs. Via jmx I saw that the scrubbed bytes were equal to one of the
> > sstables' sizes, and it stuck there for a couple of hours.
> >
> > On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
> >> That most likely happened just because after scrub you had new files
> >> and got over the "4" file minimum limit.
> >>
> >> https://issues.apache.org/jira/browse/CASSANDRA-2697
> >>
> >> Is the bug report.
> >>
> >
> >
> >
> >
> 
> 
> 



Re: insufficient space to compact even the two smallest files, aborting

Posted by Jonathan Ellis <jb...@gmail.com>.
As Terje already said in this thread, the threshold is per bucket
(group of similarly sized sstables) not per CF.

2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>:
> I was already way over the minimum. There were 12 sstables. Also, is
> there any reason why scrub got stuck? I did not see anything in the
> logs. Via jmx I saw that the scrubbed bytes were equal to one of the
> sstables' sizes, and it stuck there for a couple of hours.
>
> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>> That most likely happened just because after scrub you had new files
>> and got over the "4" file minimum limit.
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>
>> Is the bug report.
>>
>
>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
I was already way over the minimum. There were 12 sstables. Also, is
there any reason why scrub got stuck? I did not see anything in the
logs. Via jmx I saw that the scrubbed bytes were equal to one of the
sstables' sizes, and it stuck there for a couple of hours.

On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
> That most likely happened just because after scrub you had new files
> and got over the "4" file minimum limit.
> 
> https://issues.apache.org/jira/browse/CASSANDRA-2697
> 
> Is the bug report.
> 




Re: insufficient space to compact even the two smallest files, aborting

Posted by Terje Marthinussen <tm...@gmail.com>.
That most likely happened just because after scrub you had new files and got
over the "4" file minimum limit.

https://issues.apache.org/jira/browse/CASSANDRA-2697

Is the bug report.

2011/6/13 Héctor Izquierdo Seliva <iz...@strands.com>

> Hi All. I found a way to be able to compact. I have to call scrub on
> the column family. Then scrub gets stuck forever. I restart the node,
> and voila! I can compact again without any message about not having
> enough space. This looks like a bug to me. What info would be needed to
> file a report? This is on 0.8, upgraded from 0.7.5.
>
>
>

Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
Hi All. I found a way to be able to compact. I have to call scrub on
the column family. Then scrub gets stuck forever. I restart the node,
and voila! I can compact again without any message about not having
enough space. This looks like a bug to me. What info would be needed to
file a report? This is on 0.8, upgraded from 0.7.5.



Re: insufficient space to compact even the two smallest files, aborting

Posted by Héctor Izquierdo Seliva <iz...@strands.com>.
On Fri, 10-06-2011 at 23:40 +0900, Terje Marthinussen wrote:
> Yes, which is perfectly fine for a short time if all you want is to
> compact to one file for some reason.
> 
> 
> I run min_compaction_threshold = 2 on one system here with SSD. No
> problems with the more aggressive disk utilization on the SSDs from
> the extra compactions; reducing disk space is much more important.
> 
> 
> Note that this is a threshold per bucket of similar-sized sstables, not
> the total number of sstables, so a threshold of 2 will not give you one
> big file.
> 
> 
> Terje

Cassandra refuses to do a major compaction no matter what I do. There
are 110GB free, and all the sstables I want to compact amount to 15GB,
and the same message keeps popping up.



Re: insufficient space to compact even the two smallest files, aborting

Posted by Terje Marthinussen <tm...@gmail.com>.
Yes, which is perfectly fine for a short time if all you want is to compact
to one file for some reason.

I run min_compaction_threshold = 2 on one system here with SSD. No problems
with the more aggressive disk utilization on the SSDs from the extra
compactions; reducing disk space is much more important.

Note that this is a threshold per bucket of similar-sized sstables, not the
total number of sstables, so a threshold of 2 will not give you one big file.
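
A quick toy simulation shows why (the sizes and the 1.5x "similar size"
window are made up for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy model of repeated pairwise (threshold = 2) size-tiered merging:
// similar-sized files merge and move up a tier, but distinct tiers remain.
public class TierSim {
    public static void main(String[] args) {
        List<Long> sstables = new ArrayList<Long>(
                Arrays.asList(10L, 10L, 10L, 40L, 40L, 160L));
        boolean merged = true;
        while (merged) {
            merged = false;
            Collections.sort(sstables);
            for (int i = 0; i + 1 < sstables.size(); i++) {
                long a = sstables.get(i), b = sstables.get(i + 1);
                if (b <= a * 1.5) {          // same bucket: similar size
                    sstables.set(i, a + b);  // merge the pair
                    sstables.remove(i + 1);
                    merged = true;
                    break;
                }
            }
        }
        // Prints [10, 20, 80, 160]: several files remain, not one big one.
        System.out.println(sstables);
    }
}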

Terje

On Fri, Jun 10, 2011 at 8:56 PM, Maki Watanabe <wa...@gmail.com> wrote:

> But decreasing min_compaction_threshold will affect minor
> compaction frequency, won't it?
>
> maki
>
>
> 2011/6/10 Terje Marthinussen <tm...@gmail.com>:
> > This is a bug in the 0.8.0 release version.
> > Cassandra splits the sstables depending on size and tries to find (by
> > default) at least 4 files of similar size.
> > If it cannot find 4 files of similar size, it logs that message in 0.8.0.
> > You can try to reduce the minimum required files for compaction and it
> > will work.
> > Terje
> > 2011/6/10 Héctor Izquierdo Seliva <iz...@strands.com>
> >>
> >> Hi, I'm running a test node with 0.8, and every time I try to do a major
> >> compaction on one of the column families this message pops up. I have
> >> plenty of space on disk for it and the sum of all the sstables is
> >> smaller than the free capacity. Is there any way to force the
> >> compaction?
> >>
> >
> >
>
>
>
> --
> w3m
>

Re: insufficient space to compact even the two smallest files, aborting

Posted by Maki Watanabe <wa...@gmail.com>.
But decreasing min_compaction_threshold will affect minor
compaction frequency, won't it?

maki


2011/6/10 Terje Marthinussen <tm...@gmail.com>:
> This is a bug in the 0.8.0 release version.
> Cassandra splits the sstables depending on size and tries to find (by
> default) at least 4 files of similar size.
> If it cannot find 4 files of similar size, it logs that message in 0.8.0.
> You can try to reduce the minimum required files for compaction and it will
> work.
> Terje
> 2011/6/10 Héctor Izquierdo Seliva <iz...@strands.com>
>>
>> Hi, I'm running a test node with 0.8, and every time I try to do a major
>> compaction on one of the column families this message pops up. I have
>> plenty of space on disk for it and the sum of all the sstables is
>> smaller than the free capacity. Is there any way to force the
>> compaction?
>>
>
>



-- 
w3m

Re: insufficient space to compact even the two smallest files, aborting

Posted by Terje Marthinussen <tm...@gmail.com>.
This is a bug in the 0.8.0 release version.

Cassandra splits the sstables depending on size and tries to find (by
default) at least 4 files of similar size.

If it cannot find 4 files of similar size, it logs that message in 0.8.0.

You can try to reduce the minimum required files for compaction and it will
work.
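
(If I remember correctly, `nodetool setcompactionthreshold <keyspace> <cf>
<min> <max>` is the runtime way to change it in 0.8; check `nodetool help`
on your build.)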

Terje

2011/6/10 Héctor Izquierdo Seliva <iz...@strands.com>

> Hi, I'm running a test node with 0.8, and every time I try to do a major
> compaction on one of the column families this message pops up. I have
> plenty of space on disk for it and the sum of all the sstables is
> smaller than the free capacity. Is there any way to force the
> compaction?
>
>