Posted to user@cassandra.apache.org by Shuo Chen <ch...@gmail.com> on 2016/01/08 03:30:14 UTC
Too many compactions, maybe keyspace system?
Hi,
I am using Cassandra 2.0.16 with 4 nodes and found too many compactions for
this cluster. This caused too much full gc and choked the system. I have
discussed the high gc in previous mails but did not get satisfactory
answers.
To clarify the source of the compactions, I shut down all the clients, so
there are no read or write requests from outside. Besides,
hinted_handoff_enabled: false is set. The cluster was restarted yesterday
and has run for 1.5 days.
The heap is 8G and all the compaction settings are defaults. I printed
part of the GC histogram as follows:
num #instances #bytes class name
----------------------------------------------
1: 67758530 2168272960 java.util.concurrent.FutureTask
2: 67759745 1626233880
java.util.concurrent.Executors$RunnableAdapter
3: 67758576 1626205824
java.util.concurrent.LinkedBlockingQueue$Node
4: 67758529 1626204696
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask
5: 16935 72995576 [B
6: 240534 11545632 java.nio.HeapByteBuffer
7: 37416 5969800 [C
8: 41447 5624856 <constMethodKlass>
9: 41447 5315504 <methodKlass>
10: 104850 5032800
edu.stanford.ppl.concurrent.SnapTreeMap$Node
11: 4110 4564144 <constantPoolKlass>
12: 104781 3352992 org.apache.cassandra.db.Column
13: 4110 2824016 <instanceKlassKlass>
14: 4850 2689568 [J
15: 3564 2634240 <constantPoolCacheKlass>
16: 3016 1252544 <methodDataKlass>
17: 37192 892608 java.lang.String
18: 17641 564512
java.util.concurrent.ConcurrentHashMap$HashEntry
As this list shows, compaction task objects remain in the heap.
The nodetool compactionstats output is as follows:
pending tasks: 64637410
Active compaction remaining time : n/a
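A rough sketch of the arithmetic behind the histogram and the pending count above (my arithmetic, not stated in the mail): the top four histogram entries, one object of each class per queued BackgroundCompactionTask, by themselves account for roughly 7 GB of the 8 GB heap, about 104 bytes per pending task:

```shell
# Sum the byte counts of the top four histogram entries and divide by the
# number of queued BackgroundCompactionTask instances (67,758,529).
awk 'BEGIN {
  total = 2168272960 + 1626233880 + 1626205824 + 1626204696
  tasks = 67758529
  printf "%.2f GB total, ~%.0f bytes per pending task\n", total / 1e9, total / tasks
}'
# prints: 7.05 GB total, ~104 bytes per pending task
```

So the pending-task queue alone is enough to fill the heap, which matches the full-gc symptoms.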
Parts of nodetool compactionhistory:
Compaction History:
id keyspace_name
columnfamily_name compacted_at bytes_in
bytes_out rows_merged
8e4f8830-b04f-11e5-a211-45b7aa88107c system
sstable_activity 1451629144115 3342 915
{4:23}
96a6fcb0-b04b-11e5-a211-45b7aa88107c system hints
1451627440123 18970740 18970740 {1:1}
aefc3f10-b1b2-11e5-a211-45b7aa88107c system
sstable_activity 1451781670273 3201 906
{4:23}
1e76f1b0-b180-11e5-a211-45b7aa88107c system
sstable_activity 1451759952971 3303 700
{4:23}
e4d7d220-b518-11e5-ac1a-45b7aa88107c system hints
1452155422786 7837400 7746243 {1:1}
f305ddd0-b52a-11e5-bce5-45b7aa88107c system hints
1452163177517 5509472 0 {3:1}
5c5b15b0-b581-11e5-bce5-45b7aa88107c system
sstable_activity 1452200290955 3387 866
{4:23}
9cc8af70-b24a-11e5-a211-45b7aa88107c system
sstable_activity 1451846923239 3134 773
{4:23}
cf9e9400-b439-11e5-ac1a-45b7aa88107c system hints
1452059609408 10524457 10490132 {3:1}
ea1b5140-b2e2-11e5-a211-45b7aa88107c system
sstable_activity 1451912336468 3243 804
{4:23}
a4b41910-b2b1-11e5-a211-45b7aa88107c system hints
1451891174689 15426047 15236142 {1:3}
f1bb37a0-b204-11e5-a211-45b7aa88107c system hints
1451817000986 17713746 17566349 {1:1}
fcb5bd60-b4d9-11e5-ac1a-45b7aa88107c system
sstable_activity 1452128404534 3146 775
{4:22}
dcd0c720-b0b4-11e5-a211-45b7aa88107c system
sstable_activity 1451672654993 3189 797
{4:23}
ba0f2d60-b50c-11e5-ac1a-45b7aa88107c system
sstable_activity 1452150197046 3055 739
{4:22}
a0681d00-b3ad-11e5-a211-45b7aa88107c system
sstable_activity 1451999400656 3082 770
{4:23}
ce6b4d90-b35c-11e5-a211-45b7aa88107c system hints
1451964688617 13546056 13363986 {1:2}
a17d8f80-b24b-11e5-a211-45b7aa88107c system hints
1451847360632 16746317 16600978 {1:1}
1549d4a0-b51c-11e5-bce5-45b7aa88107c
webtrn_study_log_formallySCORM_STU_COURSE 1452156792554
46077372 41485236 {1:9, 2:4, 3:2, 4:5}
68e064a0-b439-11e5-ac1a-45b7aa88107c system local
1452059437034 1255 680 {4:1}
38591c10-b22b-11e5-a211-45b7aa88107c system hints
1451833440337 17351684 17002773 {1:1}
aba057f0-b1b4-11e5-a211-45b7aa88107c system hints
1451782523631 18743329 18577220 {1:1}
1fe29f40-b568-11e5-bce5-45b7aa88107c system
sstable_activity 1452189452084 3475 823
{4:23}
f7e02540-b1bf-11e5-a211-45b7aa88107c system hints
1451787376020 18577220 18403648 {1:1}
b3526be0-b3f9-11e5-a211-45b7aa88107c system
sstable_activity 1452032074142 3344 795
{1:2, 4:25}
2ffde460-b45b-11e5-ac1a-45b7aa88107c system
sstable_activity 1452073944486 3318 835
{4:22}
508ed070-b3fe-11e5-a211-45b7aa88107c system hints
1452034055927 11448413 11309063 {1:2}
fdbf8750-b44c-11e5-ac1a-45b7aa88107c system hints
1452067847237 10434789 10400837 {1:1}
9f5e38a0-b361-11e5-a211-45b7aa88107c system
sstable_activity 1451966757162 3134 730
{4:23}
621f05e0-b0ce-11e5-a211-45b7aa88107c system
sstable_activity 1451683616062 3060 719
{4:23}
5dde9ef0-b4b1-11e5-ac1a-45b7aa88107c system hints
1452110958175 9239287 9010439 {1:1}
8f94db70-b51b-11e5-bce5-45b7aa88107c system local
1452156568231 1201 688 {4:1}
4c414950-b32f-11e5-a211-45b7aa88107c system hints
1451945142885 14105741 13937247 {1:2}
bd0eb510-b388-11e5-a211-45b7aa88107c system hints
1451983557345 13062551 12823581 {1:2}
92707090-b09b-11e5-a211-45b7aa88107c system
sstable_activity 1451661792793 3149 689
{4:23}
aafc21e0-b497-11e5-ac1a-45b7aa88107c system hints
1452099920638 9542212 9382531 {1:1}
55e055e0-b263-11e5-a211-45b7aa88107c system hints
1451857541694 16600978 16364535 {1:1}
948bb3a0-b305-11e5-a211-45b7aa88107c system hints
1451927225306 14648744 14472833 {1:2}
7946c2a0-b162-11e5-a211-45b7aa88107c system hints
1451747220426 18970740 18970740 {1:1}
e5651290-b129-11e5-a211-45b7aa88107c system hints
1451722920505 18970740 18970740 {1:1}
b2226820-b05b-11e5-a211-45b7aa88107c system hints
1451634358178 18970740 18970740 {1:1}
11768640-b33f-11e5-a211-45b7aa88107c system hints
1451951916196 13937247 13717550 {1:2}
c05d07c0-b1fe-11e5-a211-45b7aa88107c system
sstable_activity 1451814341180 3329 852
{4:23}
029d1530-b376-11e5-a211-45b7aa88107c system hints
1451975513603 13204621 13062551 {1:2}
d1d10ac0-b441-11e5-ac1a-45b7aa88107c system
sstable_activity 1452063049068 3395 816
{1:3, 4:23}
0ec20100-b465-11e5-ac1a-45b7aa88107c system hints
1452078183696 10217777 10090370 {1:1}
b25689d0-b474-11e5-ac1a-45b7aa88107c system
sstable_activity 1452084900589 3221 776
{4:22}
50e84480-b321-11e5-a211-45b7aa88107c system hints
1451939137736 14276596 14105741 {1:2}
75ace380-b1e5-11e5-a211-45b7aa88107c system
sstable_activity 1451803478456 3171 820
{4:23}
d250c210-b441-11e5-ac1a-45b7aa88107c system
schema_columnfamilies 1452063049905 185693 46547
{1:1, 4:10}
As we can see, most compaction tasks come from the system keyspace and the
hints columnfamily. However, hinted_handoff_enabled: false is set, so there
should be no new hints.
I checked the compaction settings of hints and sstable_activity.
hints:
compaction = {'enabled': 'false', 'class':
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
sstable_activity:
compaction = {'class':
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
Background compaction is disabled for the hints cf.
Most of the other user keyspaces were created by cassandra-cli. In a
previous mail discussion, somebody said these cfs have memory and
performance issues. But as we can see, most compactions come from the
system keyspace.
So what is the source of the compactions? Is the high gc really related to
the high number of compactions?
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
The attachment is the result of grep -i GC /usr/local/cassandra/system.log >
gc.log
On Tue, Jan 12, 2016 at 1:12 PM, Shuo Chen <ch...@gmail.com> wrote:
> I have an assumption that lots of pending compaction tasks jam the memory
> and raise full gc. The full gc chokes the process and slows down compaction.
> And this causes more pending compaction tasks and more pressure on memory.
>
> Is there a method to list the concrete details of pending compaction tasks?
>
> On Tue, Jan 12, 2016 at 11:41 AM, Robert Coli <rc...@eventbrite.com>
> wrote:
>
>> The lines you are looking for look like this :
>>
>> INFO [CompactionExecutor:48] 2016-01-12 09:07:59,995 CompactionTask.java
>> (line 120) Compacting
>> [SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4959-Data.db'),
>> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4960-Data.db'),
>> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4961-Data.db'),
>> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4962-Data.db')]
>>
>> and
>>
>> INFO [CompactionExecutor:47] 2016-01-12 04:06:43,946 CompactionTask.java
>> (line 299) Compacted 4 sstables to
>> [/usr/local/cassandra/data/system/compaction_history/system-compaction_history-jb-3532,].
>> 14,413 bytes to 13,334 (~92% of original) in 52,135ms = 0.000244MB/s. 156
>> total partitions merged to 149. Partition merge counts were {1:156, }
>>
>> There are only 4 of the compaction completed messages in the log you
>> attached, tiny ones.
>>
>> The log doesn't look like much compaction is occurring?
>>
>> =Rob
>>
>>
>
>
> --
> *陈硕* *Shuo Chen*
> chenatu2006@gmail.com
> chenshuo@whaty.com
>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
I documented this on JIRA. Please see CASSANDRA-11025
<https://issues.apache.org/jira/browse/CASSANDRA-11025>
On Sun, Jan 17, 2016 at 11:48 PM, Sebastian Estevez <
sebastian.estevez@datastax.com> wrote:
> I agree that this may be worth a jira.
>
> Can you clarify this statement?
>
> >>5 keyspaces and about 100 cfs months
>
> How many total empty tables did you create? Creating hundreds of tables is
> a bad practice in Cassandra but I was not aware of a compaction impact like
> what you're describing.
>
> all the best,
>
> Sebastián
> On Jan 16, 2016 4:43 AM, "DuyHai Doan" <do...@gmail.com> wrote:
>
>> Interesting, maybe it's worth filing a JIRA. Empty tables should not slow
>> down compaction of other tables
>>
>> On Sat, Jan 16, 2016 at 10:33 AM, Shuo Chen <ch...@gmail.com>
>> wrote:
>>
>>> Hi, Robert,
>>>
>>> I think I found the cause of the too many compactions. I used jmap to
>>> dump the heap and Eclipse Memory Analyzer to inspect the dump.
>>>
>>> My previous reply shows that there are too many pending jobs in the
>>> blocking queue. I checked the cf of the compaction task objects; many of
>>> them concern some empty cfs I created before.
>>>
>>> I created 5 keyspaces and about 100 cfs by cassandra-cli months ago and
>>> did not put any data in yet. In fact, only 1 keyspace I created contains
>>> data and the other 5 keyspaces are empty.
>>>
>>> When I dropped these 5 keyspaces and restarted the high-compaction node,
>>> it ran normally with a normal amount of compactions.
>>>
>>> So maybe there is a bug in compaction for empty columnfamilies?
>>>
>>> On Wed, Jan 13, 2016 at 2:39 AM, Robert Coli <rc...@eventbrite.com>
>>> wrote:
>>>
>>>> On Mon, Jan 11, 2016 at 9:12 PM, Shuo Chen <ch...@gmail.com>
>>>> wrote:
>>>>
>>>>> I have an assumption that lots of pending compaction tasks jam the
>>>>> memory and raise full gc. The full gc chokes the process and slows down
>>>>> compaction. And this causes more pending compaction tasks and more
>>>>> pressure on memory.
>>>>>
>>>>
>>>> The question is why there are so many pending compactions, because your
>>>> log doesn't show that much compaction is happening. What keyspaces /
>>>> columnfamilies do you expect to be compacting, and how many SSTables do
>>>> they contain?
>>>>
>>>>
>>>>> Is there a method to list the concrete details of pending compaction
>>>>> tasks?
>>>>>
>>>>
>>>> Nope.
>>>>
>>>> For the record, this type of extended operational debugging is often
>>>> best carried out interactively on #cassandra on freenode IRC.. :)
>>>>
>>>> =Rob
>>>>
>>>
>>>
>>>
>>> --
>>> *陈硕* *Shuo Chen*
>>> chenatu2006@gmail.com
>>> chenshuo@whaty.com
>>>
>>
>>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Sebastian Estevez <se...@datastax.com>.
I agree that this may be worth a jira.
Can you clarify this statement?
>>5 keyspaces and about 100 cfs months
How many total empty tables did you create? Creating hundreds of tables is
a bad practice in Cassandra but I was not aware of a compaction impact like
what you're describing.
all the best,
Sebastián
On Jan 16, 2016 4:43 AM, "DuyHai Doan" <do...@gmail.com> wrote:
> Interesting, maybe it's worth filing a JIRA. Empty tables should not slow
> down compaction of other tables
>
> On Sat, Jan 16, 2016 at 10:33 AM, Shuo Chen <ch...@gmail.com> wrote:
>
>> Hi, Robert,
>>
>> I think I found the cause of the too many compactions. I used jmap to
>> dump the heap and Eclipse Memory Analyzer to inspect the dump.
>>
>> My previous reply shows that there are too many pending jobs in the
>> blocking queue. I checked the cf of the compaction task objects; many of
>> them concern some empty cfs I created before.
>>
>> I created 5 keyspaces and about 100 cfs by cassandra-cli months ago and
>> did not put any data in yet. In fact, only 1 keyspace I created contains
>> data and the other 5 keyspaces are empty.
>>
>> When I dropped these 5 keyspaces and restarted the high-compaction node,
>> it ran normally with a normal amount of compactions.
>>
>> So maybe there is a bug in compaction for empty columnfamilies?
>>
>> On Wed, Jan 13, 2016 at 2:39 AM, Robert Coli <rc...@eventbrite.com>
>> wrote:
>>
>>> On Mon, Jan 11, 2016 at 9:12 PM, Shuo Chen <ch...@gmail.com>
>>> wrote:
>>>
>>>> I have an assumption that lots of pending compaction tasks jam the
>>>> memory and raise full gc. The full gc chokes the process and slows down
>>>> compaction. And this causes more pending compaction tasks and more
>>>> pressure on memory.
>>>>
>>>
>>> The question is why there are so many pending compactions, because your
>>> log doesn't show that much compaction is happening. What keyspaces /
>>> columnfamilies do you expect to be compacting, and how many SSTables do
>>> they contain?
>>>
>>>
>>>> Is there a method to list the concrete details of pending compaction
>>>> tasks?
>>>>
>>>
>>> Nope.
>>>
>>> For the record, this type of extended operational debugging is often
>>> best carried out interactively on #cassandra on freenode IRC.. :)
>>>
>>> =Rob
>>>
>>
>>
>>
>> --
>> *陈硕* *Shuo Chen*
>> chenatu2006@gmail.com
>> chenshuo@whaty.com
>>
>
>
Re: Too many compactions, maybe keyspace system?
Posted by DuyHai Doan <do...@gmail.com>.
Interesting, maybe it's worth filing a JIRA. Empty tables should not slow
down compaction of other tables
On Sat, Jan 16, 2016 at 10:33 AM, Shuo Chen <ch...@gmail.com> wrote:
> Hi, Robert,
>
> I think I found the cause of the too many compactions. I used jmap to dump
> the heap and Eclipse Memory Analyzer to inspect the dump.
>
> My previous reply shows that there are too many pending jobs in the
> blocking queue. I checked the cf of the compaction task objects; many of
> them concern some empty cfs I created before.
>
> I created 5 keyspaces and about 100 cfs by cassandra-cli months ago and
> did not put any data in yet. In fact, only 1 keyspace I created contains
> data and the other 5 keyspaces are empty.
>
> When I dropped these 5 keyspaces and restarted the high-compaction node,
> it ran normally with a normal amount of compactions.
>
> So maybe there is a bug in compaction for empty columnfamilies?
>
> On Wed, Jan 13, 2016 at 2:39 AM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Mon, Jan 11, 2016 at 9:12 PM, Shuo Chen <ch...@gmail.com> wrote:
>>
>>> I have an assumption that lots of pending compaction tasks jam the
>>> memory and raise full gc. The full gc chokes the process and slows down
>>> compaction. And this causes more pending compaction tasks and more
>>> pressure on memory.
>>>
>>
>> The question is why there are so many pending compactions, because your
>> log doesn't show that much compaction is happening. What keyspaces /
>> columnfamilies do you expect to be compacting, and how many SSTables do
>> they contain?
>>
>>
>>> Is there a method to list the concrete details of pending compaction
>>> tasks?
>>>
>>
>> Nope.
>>
>> For the record, this type of extended operational debugging is often best
>> carried out interactively on #cassandra on freenode IRC.. :)
>>
>> =Rob
>>
>
>
>
> --
> *陈硕* *Shuo Chen*
> chenatu2006@gmail.com
> chenshuo@whaty.com
>
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
Hi, Robert,
I think I found the cause of the too many compactions. I used jmap to dump
the heap and Eclipse Memory Analyzer to inspect the dump.
My previous reply shows that there are too many pending jobs in the
blocking queue. I checked the cf of the compaction task objects; many of
them concern some empty cfs I created before.
I created 5 keyspaces and about 100 cfs by cassandra-cli months ago and did
not put any data in yet. In fact, only 1 keyspace I created contains data
and the other 5 keyspaces are empty.
When I dropped these 5 keyspaces and restarted the high-compaction node, it
ran normally with a normal amount of compactions.
So maybe there is a bug in compaction for empty columnfamilies?
On Wed, Jan 13, 2016 at 2:39 AM, Robert Coli <rc...@eventbrite.com> wrote:
> On Mon, Jan 11, 2016 at 9:12 PM, Shuo Chen <ch...@gmail.com> wrote:
>
>> I have an assumption that lots of pending compaction tasks jam the memory
>> and raise full gc. The full gc chokes the process and slows down compaction.
>> And this causes more pending compaction tasks and more pressure on memory.
>>
>
> The question is why there are so many pending compactions, because your
> log doesn't show that much compaction is happening. What keyspaces /
> columnfamilies do you expect to be compacting, and how many SSTables do
> they contain?
>
>
>> Is there a method to list the concrete details of pending compaction
>> tasks?
>>
>
> Nope.
>
> For the record, this type of extended operational debugging is often best
> carried out interactively on #cassandra on freenode IRC.. :)
>
> =Rob
>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Robert Coli <rc...@eventbrite.com>.
On Mon, Jan 11, 2016 at 9:12 PM, Shuo Chen <ch...@gmail.com> wrote:
> I have an assumption that lots of pending compaction tasks jam the memory
> and raise full gc. The full gc chokes the process and slows down compaction.
> And this causes more pending compaction tasks and more pressure on memory.
>
The question is why there are so many pending compactions, because your log
doesn't show that much compaction is happening. What keyspaces /
columnfamilies do you expect to be compacting, and how many SSTables do
they contain?
> Is there a method to list the concrete details of pending compaction tasks?
>
Nope.
For the record, this type of extended operational debugging is often best
carried out interactively on #cassandra on freenode IRC.. :)
=Rob
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
I have an assumption that lots of pending compaction tasks jam the memory
and raise full gc. The full gc chokes the process and slows down compaction.
And this causes more pending compaction tasks and more pressure on memory.
Is there a method to list the concrete details of pending compaction tasks?
On Tue, Jan 12, 2016 at 11:41 AM, Robert Coli <rc...@eventbrite.com> wrote:
> The lines you are looking for look like this :
>
> INFO [CompactionExecutor:48] 2016-01-12 09:07:59,995 CompactionTask.java
> (line 120) Compacting
> [SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4959-Data.db'),
> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4960-Data.db'),
> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4961-Data.db'),
> SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4962-Data.db')]
>
> and
>
> INFO [CompactionExecutor:47] 2016-01-12 04:06:43,946 CompactionTask.java
> (line 299) Compacted 4 sstables to
> [/usr/local/cassandra/data/system/compaction_history/system-compaction_history-jb-3532,].
> 14,413 bytes to 13,334 (~92% of original) in 52,135ms = 0.000244MB/s. 156
> total partitions merged to 149. Partition merge counts were {1:156, }
>
> There are only 4 of the compaction completed messages in the log you
> attached, tiny ones.
>
> The log doesn't look like much compaction is occurring?
>
> =Rob
>
>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Robert Coli <rc...@eventbrite.com>.
The lines you are looking for look like this:
INFO [CompactionExecutor:48] 2016-01-12 09:07:59,995 CompactionTask.java
(line 120) Compacting
[SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4959-Data.db'),
SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4960-Data.db'),
SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4961-Data.db'),
SSTableReader(path='/usr/local/cassandra/data/system/sstable_activity/system-sstable_activity-jb-4962-Data.db')]
and
INFO [CompactionExecutor:47] 2016-01-12 04:06:43,946 CompactionTask.java
(line 299) Compacted 4 sstables to
[/usr/local/cassandra/data/system/compaction_history/system-compaction_history-jb-3532,].
14,413 bytes to 13,334 (~92% of original) in 52,135ms = 0.000244MB/s. 156
total partitions merged to 149. Partition merge counts were {1:156, }
There are only 4 of the compaction completed messages in the log you
attached, tiny ones.
The log doesn't look like much compaction is occurring?
=Rob
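Those "Compacted N sstables" completion messages can be counted with a simple grep; a minimal sketch, run here against the sample line quoted above (against a real node, point it at /usr/local/cassandra/system.log instead):

```shell
# Count compaction-completion messages; a count near zero alongside millions
# of pending tasks means compactions are queued but not actually running.
grep -c 'Compacted [0-9]* sstables' <<'EOF'
INFO [CompactionExecutor:47] 2016-01-12 04:06:43,946 CompactionTask.java (line 299) Compacted 4 sstables to [/usr/local/cassandra/data/system/compaction_history/system-compaction_history-jb-3532,]. 14,413 bytes to 13,334 (~92% of original) in 52,135ms
EOF
# prints: 1
```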
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
Thanks Rob!
I restarted the cluster at Jan 11 15:00 and checked the status at Jan 12
9:30. Only one of the nodes has the large amount of pending tasks issue. I
increased the heap size to 12G before restarting and ran nodetool stop
COMPACTION, VALIDATION, CLEANUP, SCRUB, INDEX_BUILD.
Here is the compactionstats:
pending tasks: 112583326
Active compaction remaining time : n/a
I ran grep -i compact /path/to/system.log but the result is too long; it is
in the attachment.
Besides, I checked iotop; the cassandra process is not consuming too much io.
top result:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
129276 cassandr 20 0 16.9g 14g 12m S 100.2 23.7 1743:43 java
<---- cassandra process
216014 root 20 0 181m 14m 3500 S 7.3 0.0 0:05.00 iotop
185693 mysql 20 0 56.5g 44g 6952 S 5.3 70.1 4372:08 mysqld
29968 mysql 20 0 103m 1504 1024 S 1.3 0.0 9:02.57 awk
348 root 39 19 0 0 0 S 1.0 0.0 833:34.10 kipmi0
jstat -gcutil result for the cassandra process over 18 hours:
S0 S1 E O P YGC YGCT FGC FGCT GCT
100.00 0.00 100.00 100.00 59.57 256 370.527 1574 52600.562 52971.088
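A rough reading of those figures (my arithmetic): with FGC as the full-GC count and FGCT the cumulative full-GC seconds, the average full GC pause works out to about 33 seconds, which on its own would choke the node:

```shell
# FGCT (total full-GC seconds) divided by FGC (full-GC count), using the
# jstat -gcutil figures quoted above.
awk 'BEGIN { printf "avg full GC pause: %.1f s\n", 52600.562 / 1574 }'
# prints: avg full GC pause: 33.4 s
```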
nodetool info at a glance:
[cassandra@whaty181 apache-cassandra-2.0.16]$ bin/nodetool info
Token : 0
ID : 5d90300f-2fb4-4065-9819-10ece285223d
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 10.39 GB
Generation No : 1452506508
Uptime (seconds) : 58684
Heap Memory (MB) : 11499.87 / 12083.25
Off Heap Memory (MB) : 10.98
Data Center : DC1
Rack : RAC1
Exceptions : 0
Key Cache : size 33264 (bytes), capacity 104857600 (bytes),
859 hits, 975 requests, 0.881 recent hit rate, 14400 save period in seconds
Row Cache : size 0 (bytes), capacity 0 (bytes), 0 hits, 0
requests, NaN recent hit rate, 0 save period in seconds
Thanks again!
On Tue, Jan 12, 2016 at 3:54 AM, Robert Coli <rc...@eventbrite.com> wrote:
> On Sat, Jan 9, 2016 at 8:23 AM, Shuo Chen <ch...@gmail.com> wrote:
>
>> I don't know exactly what compaction logs look like in system.log, but I
>> see logs like this in system.log; I think maybe this is the compaction log
>>
>
> grep -i compact /path/to/system.log
>
> =Rob
>
>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Robert Coli <rc...@eventbrite.com>.
On Sat, Jan 9, 2016 at 8:23 AM, Shuo Chen <ch...@gmail.com> wrote:
> I don't know exactly what compaction logs look like in system.log, but I
> see logs like this in system.log; I think maybe this is the compaction log
>
grep -i compact /path/to/system.log
=Rob
Re: Too many compactions, maybe keyspace system?
Posted by Shuo Chen <ch...@gmail.com>.
I don't know exactly what compaction logs look like in system.log, but I
see logs like this in system.log; I think maybe this is the compaction log
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,664 StatusLogger.java (line
118) ColumnFamily Memtable ops,data
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.schema_triggers 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.local 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.peers 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.batchlog 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.NodeIdInfo 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.compactions_in_progress 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.schema_keyspaces 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.compaction_history 9,2650
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.paxos 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,670 StatusLogger.java (line
121) system.schema_columns 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.schema_columnfamilies 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.IndexInfo 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.range_xfers 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.peer_events 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.hints 0,0
INFO [ScheduledTasks:1] 2016-01-09 22:51:02,671 StatusLogger.java (line
121) system.sstable_activity 69,20930
...
To clarify the source of compaction, I restarted the cluster and truncated
system.hints on Jan 8. However, compactionhistory still shows hints entries
like this. I checked the timestamps: many history items were compacted on
Jan 6.
-b313-11e5-a211-45b7aa88107c system hints
1451933296208 14472833 14276596 {1:2}
7a7e5ad0-b42c-11e5-a211-45b7aa88107c system
sstable_activity 1452053883133 3215 852
{4:23}
16e97320-b264-11e5-a211-45b7aa88107c system
sstable_activity 1451857865554 3114 726
{4:23}
f8ab7d40-b62d-11e5-aa30-45b7aa88107c system
sstable_activity 1452274426644 3047 751
{4:23}
ea955a00-b173-11e5-a211-45b7aa88107c system hints
1451754711968 18970740 18970740 {1:1}
cf98c3c0-b54e-11e5-bce5-45b7aa88107c system
sstable_activity 1452178579964 3503 964
{4:23}
9e74bf80-b4c0-11e5-ac1a-45b7aa88107c system
sstable_activity 1452117508984 3135 800
{4:22}
4b9ffb20-b27d-11e5-a211-45b7aa88107c system
sstable_activity 1451868691410 3037 791
{4:23}
4f059400-b3e0-11e5-a211-45b7aa88107c system
sstable_activity 1452021168448 2985 802
{1:2, 4:23}
9de3b670-b439-11e5-ac1a-45b7aa88107c system hints
1452059525975 10532201 10550449 {2:2}
f1c2f710-b439-11e5-ac1a-45b7aa88107c system hints
1452059666689 10479415 10479415 {1:1}
9b4f7700-b439-11e5-ac1a-45b7aa88107c system hints
1452059521648 10594437 10532000 {1:1,
2:1}
402f42d0-b2fc-11e5-a211-45b7aa88107c system
sstable_activity 1451923218301 3184 776
{4:23}
47589f00-b231-11e5-a211-45b7aa88107c system
sstable_activity 1451836042480 3212 785
{4:23}
848ee140-b439-11e5-ac1a-45b7aa88107c system hints
1452059483476 10648367 10601270 {1:1,
2:1}
8736f390-b4f9-11e5-ac1a-45b7aa88107c system hints
1452141951305 8261575 8112577 {1:1}
ff591b80-b37a-11e5-a211-45b7aa88107c system
sstable_activity 1451977655608 3025 765
{4:23}
Besides, there are also lots of pending compaction tasks, over 70000000.
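The compacted_at column in compactionhistory is an epoch-millisecond timestamp; a minimal sketch (assuming GNU date) that converts one of the hints entries above, confirming the Jan 6 compaction times:

```shell
# compacted_at is epoch milliseconds; strip the last three digits to get
# epoch seconds for date(1). Sample value taken from a hints entry above.
ts=1452059525975
date -u -d "@${ts%???}" '+%Y-%m-%d %H:%M:%S'
# prints: 2016-01-06 05:52:05
```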
On Sat, Jan 9, 2016 at 2:42 AM, Robert Coli <rc...@eventbrite.com> wrote:
>
>
> On Thu, Jan 7, 2016 at 6:30 PM, Shuo Chen <ch...@gmail.com> wrote:
>>
>> I am using Cassandra 2.0.16 with 4 nodes and found too many compactions
>> for this cluster. This caused too much full gc and choked the system. I
>> have discussed the high gc in previous mails but did not get satisfactory
>> answers.
>>
>> To clarify the source of the compactions, I shut down all the clients, so
>> there are no read or write requests from outside. Besides,
>> hinted_handoff_enabled: false is set. The cluster was restarted yesterday
>> and has run for 1.5 days.
>>
>
> Do you see compactions of the hints CF in the system.log?
>
> =Rob
>
>
--
*陈硕* *Shuo Chen*
chenatu2006@gmail.com
chenshuo@whaty.com
Re: Too many compactions, maybe keyspace system?
Posted by Robert Coli <rc...@eventbrite.com>.
On Thu, Jan 7, 2016 at 6:30 PM, Shuo Chen <ch...@gmail.com> wrote:
>
> I am using Cassandra 2.0.16 with 4 nodes and found too many compactions
> for this cluster. This caused too much full gc and choked the system. I
> have discussed the high gc in previous mails but did not get satisfactory
> answers.
>
> To clarify the source of the compactions, I shut down all the clients, so
> there are no read or write requests from outside. Besides,
> hinted_handoff_enabled: false is set. The cluster was restarted yesterday
> and has run for 1.5 days.
>
Do you see compactions of the hints CF in the system.log?
=Rob