Posted to user@cassandra.apache.org by Radim Kolar <hs...@sendmail.cz> on 2011/11/09 11:08:17 UTC

max_compaction_threshold

I found in stress tests that the default setting of 32 for this is way too 
high. The Hadoop folks use a value of 10 during merge sorts so as not to 
stress IO that much. I also discovered that filesystems like ZFS use a 
default IO queue size of 10 per drive.
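
If you want to experiment with this yourself, the threshold can be changed 
per column family without a restart. A minimal sketch, assuming the stress 
tool's default Keyspace1/Standard1 schema (adjust names for your cluster):

    # live change via nodetool: <keyspace> <cf> <min> <max>
    nodetool -h localhost setcompactionthreshold Keyspace1 Standard1 4 10

    # or persist it in the schema with cassandra-cli
    use Keyspace1;
    update column family Standard1 with max_compaction_threshold = 10;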

I ran tests with 10, 15 and 32; there is very little improvement going 
from 10 to 15, but 32 is a performance drop. The test simply creates a 
lot of 5m sstables by disabling minor compactions, then sets the 
threshold back and measures the elapsed time.
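
Roughly, the test steps look like this (a sketch, again assuming the stress 
tool's default Keyspace1/Standard1 schema; exact stress flags differ between 
versions):

    # 1. disable minor compactions (setting min/max to 0 turns them off)
    nodetool -h localhost setcompactionthreshold Keyspace1 Standard1 0 0

    # 2. write enough data to flush many small sstables
    stress -o insert -n 10000000

    # 3. set the threshold back to the value under test, e.g. max 10
    nodetool -h localhost setcompactionthreshold Keyspace1 Standard1 4 10

    # 4. a flush kicks off background compaction; time how long it takes
    #    for the backlog shown by compactionstats to drain
    nodetool -h localhost flush Keyspace1 Standard1
    nodetool -h localhost compactionstats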

Re: max_compaction_threshold

Posted by Radim Kolar <hs...@sendmail.cz>.
I consulted a Hadoop expert and he told me that he uses a value of 100 
for merging segments. I will rerun the tests with 100 to check.