Posted to user@cassandra.apache.org by Skvazh Roman <r...@skvazh.com> on 2016/02/12 10:06:26 UTC

Cassandra eats all cpu cores, high load average

Hello!
We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB each), each with an attached 1.5 TB EBS volume provisioned at 4000 IOPS.
Sometimes the user CPU on one or two nodes spikes to 100% and the load average climbs to 20-30, and read requests drop off.
Only restarting the Cassandra service on those nodes helps.
Please advise.

One big table with wide rows, about 600 GB per node.
LZ4Compressor
LeveledCompaction

concurrent compactors: 4
compaction throughput: tried values from 16 to 128 (MB/s)
concurrent readers: tried from 16 to 32
concurrent writers: 128
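(For reference, a minimal sketch of where these knobs live and how the throughput limit can be changed at runtime; the cassandra.yaml path is an assumption for this install:)

$ grep -E 'concurrent_compactors|compaction_throughput|concurrent_reads|concurrent_writes' /etc/cassandra/cassandra.yaml
$ nodetool getcompactionthroughput      # current limit, in MB/s
$ nodetool setcompactionthroughput 64   # change the limit without a restart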


https://gist.github.com/rskvazh/de916327779b98a437a6


 JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg 19.35
 http://code.google.com/p/jvmtop

 Profiling PID 9256: org.apache.cassandra.service.CassandraDa

  95.73% (     4.31s) ....google.common.collect.AbstractIterator.tryToComputeN()
   1.39% (     0.06s) com.google.common.base.Objects.hashCode()
   1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
   0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
   0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
   0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
   0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()


ttop:

2016-02-12T08:20:25.605+0000 Process summary
  process cpu=1565.15%
  application cpu=1314.48% (user=1354.48% sys=-40.00%)
  other: cpu=250.67%
  heap allocation rate 146mb/s
[000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
[000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
[000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
[000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
[000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
[000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
[000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
[000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
[000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
[000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
[000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
[000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
[000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
[000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
[000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
[000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
[004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
[004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
[010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
[000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
[012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP Connection(2673)-127.0.0.1
[000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
[000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
[000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
[000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
[000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
[000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
[000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
[000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
[000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17

Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
> 1. check disk IOPs and latency on the EBS volume. (dstat)

There is no IO pressure when the node freezes. dstat:


> 2. turn GC log on in cassandra-env.sh  and use jstat to see what is happening to your HEAP.

2. OK, I'll wait for the problem to occur again.
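
(A minimal sketch of what that could look like: the GC log options below are the standard HotSpot 8 ones that ship commented out in cassandra-env.sh, and the PID and interval for jstat are placeholders.)

# in cassandra-env.sh (takes effect after a restart)
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime"

# while a node is misbehaving, watch heap occupancy and accumulated GC time:
$ jstat -gcutil <cassandra-pid> 1000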

> if you have internode_compression: all, try disabling it.
We already tried this; it did not help.
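
(For reference: internode_compression lives in cassandra.yaml, takes the values all, dc or none, and needs a restart to pick up a change; the path below is an assumption.)

$ grep internode_compression /etc/cassandra/cassandra.yaml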

> Also I would move to STCS if you have a lot of tombstones.
We will try this on one node.
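
(Worth noting: the compaction strategy is part of the table schema, so an ALTER like the sketch below, with placeholder keyspace/table names, changes it cluster-wide; limiting the experiment to a single node would need a per-node JMX override instead, which is not shown here.)

$ cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'SizeTieredCompactionStrategy'};"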

> For G1 your heap is too small.
We will try this too. But with G1 the node was more stable during load spikes than with CMS.
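
(Either way, the usual place to pin the heap sizes is cassandra-env.sh; a sketch with the values suggested above:)

# in cassandra-env.sh: override the auto-calculated sizes
MAX_HEAP_SIZE="16G"     # if staying on G1 (do not set HEAP_NEWSIZE with G1)
# or, for the CMS fallback suggested above:
# MAX_HEAP_SIZE="12G"
# HEAP_NEWSIZE="3G"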


> On 12 Feb 2016, at 22:10, Carlos Rolo <ro...@pythian.com> wrote:
> 
> if you have internode_compression: all, try disabling it.
> Also I would move to STCS if you have a lot of tombstones.
> If they get pilled in higher levels you have to wait until those higher levels get compacted before you get them out.
> 
> For G1 your heap is too small. Bump that to 16GB (or at least 12GB). Or revert back to CMS with 12GB Heap / 3GB NewSize.
> 
> This are a bit wild guesses since I don't have eyes on the cluster, but from what I've seen, this should bring some improvements.
> 
> 
> 
> Regards,
> 
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>  
> Pythian - Love your data
> 
> rolo@pythian | Twitter: @cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo <http://linkedin.com/in/carlosjuzarterolo>
> Mobile: +351 91 891 81 00 | Tel: +1 613 565 8696 x1649
> www.pythian.com <http://www.pythian.com/>
> On Fri, Feb 12, 2016 at 5:24 PM, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
> If you positive this is not compaction related I would:
> 
>    1. check disk IOPs and latency on the EBS volume. (dstat)
>    2. turn GC log on in cassandra-env.sh  and use jstat to see what is happening to your HEAP.
> 
> I have been asking about compactions initially because if you having one (1) big table written by all nodes and fully replicated to all nodes in the cluster would definitely trigger constant compactions on every nodes depending on write throughput.
> 
>    J. 
> 
>> On Feb 12, 2016, at 11:03 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>> 
>>> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?
>> 
>> Nope.
>> 
>>> All seems legit here. Using G1 GC?
>> Yes
>> 
>> Problems also occurred on nodes without pending compactions.
>> 
>> 
>> 
>>> On 12 Feb 2016, at 18:44, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>> 
>>>> 
>>>> On Feb 12, 2016, at 9:24 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>> 
>>>> I have disabled autocompaction and stop it on highload node.
>>> 
>>> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?
>>> 
>>>> Heap is 8Gb. gc_grace is 86400
>>>> All sstables is about 200-300 Mb.
>>> 
>>> All seems legit here. Using G1 GC?
>>> 
>>>> $ nodetool compactionstats
>>>> pending tasks: 14
>>> 
>>> Try to increase the compactors from 4 to 6-8 on a node, disable gossip and let it finish compacting and put it back in the ring by enabling gossip. See what happens.
>>> 
>>> The tombstones count growing is because the auto-aucompactions are disabled on these nodes. Probably not your issue.
>>> 
>>>    J.
>>> 
>>> 
>>>> 
>>>> $ dstat -lvnr 10
>>>> ---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total- --io/total-
>>>> 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| recv  send| read  writ
>>>> 29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
>>>> 29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
>>>> 29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
>>>> 29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C
>>>> 
>>>> 
>>>> JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg 28.09
>>>> http://code.google.com/p/jvmtop <http://code.google.com/p/jvmtop>
>>>> 
>>>> PID 32505: org.apache.cassandra.service.CassandraDaemon
>>>> ARGS:
>>>> VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+CMSCl[...]
>>>> VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
>>>> UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
>>>> GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
>>>> CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a
>>>> 
>>>>  TID   NAME                                    STATE    CPU  TOTALCPU BLOCKEDBY
>>>>    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
>>>>    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
>>>>    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
>>>>    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
>>>>    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
>>>>    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
>>>>    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
>>>>    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
>>>>    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
>>>>    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
>>>> Note: Only top 10 threads (according cpu load) are shown!
>>>> 
>>>> 
>>>>> On 12 Feb 2016, at 18:14, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>>>> 
>>>>> At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?
>>>>> 
>>>>> Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?
>>>>> 
>>>>> If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
>>>>> 
>>>>>  J.
>>>>> 
>>>>>> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>>>> 
>>>>>> There is 1-4 compactions at that moment.
>>>>>> We have many tombstones, which does not removed.
>>>>>> DroppableTombstoneRatio is 5-6 (greater than 1)
>>>>>> 
>>>>>>> On 12 Feb 2016, at 15:53, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>>>>>> 
>>>>>>> Hey, 
>>>>>>> 
>>>>>>> What about compactions count when that is happening?
>>>>>>> 
>>>>>>> J.
>>>>>>> 
>>>>>>> 
>>>>>>>> On Feb 12, 2016, at 3:06 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>>>>>> 
>>>>>>>> Hello!
>>>>>>>> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5 TB 4000 PIOPS EBS drive.
>>>>>>>> Sometimes one or two nodes user cpu spikes to 100%, load average to 20-30 - read requests drops of.
>>>>>>>> Only restart of this cassandra services helps.
>>>>>>>> Please advice.
>>>>>>>> 
>>>>>>>> One big table with wide rows. 600 Gb per node.
>>>>>>>> LZ4Compressor
>>>>>>>> LeveledCompaction
>>>>>>>> 
>>>>>>>> concurrent compactors: 4
>>>>>>>> compactor throughput: tried from 16 to 128
>>>>>>>> Concurrent_readers: from 16 to 32
>>>>>>>> Concurrent_writers: 128
>>>>>>>> 
>>>>>>>> 
>>>>>>>> https://gist.github.com/rskvazh/de916327779b98a437a6 <https://gist.github.com/rskvazh/de916327779b98a437a6>
>>>>>>>> 
>>>>>>>> 
>>>>>>>> JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg 19.35
>>>>>>>> http://code.google.com/p/jvmtop <http://code.google.com/p/jvmtop>
>>>>>>>> 
>>>>>>>> Profiling PID 9256: org.apache.cassandra.service.CassandraDa
>>>>>>>> 
>>>>>>>> 95.73% (     4.31s) ....google.common.collect.AbstractIterator.tryToComputeN()
>>>>>>>> 1.39% (     0.06s) com.google.common.base.Objects.hashCode()
>>>>>>>> 1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
>>>>>>>> 0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
>>>>>>>> 0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
>>>>>>>> 0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
>>>>>>>> 0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()
>>>>>>>> 
>>>>>>>> 
>>>>>>>> ttop:
>>>>>>>> 
>>>>>>>> 2016-02-12T08:20:25.605+0000 Process summary
>>>>>>>> process cpu=1565.15%
>>>>>>>> application cpu=1314.48% (user=1354.48% sys=-40.00%)
>>>>>>>> other: cpu=250.67%
>>>>>>>> heap allocation rate 146mb/s
>>>>>>>> [000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
>>>>>>>> [000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
>>>>>>>> [000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
>>>>>>>> [000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
>>>>>>>> [000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
>>>>>>>> [000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
>>>>>>>> [000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
>>>>>>>> [000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
>>>>>>>> [000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
>>>>>>>> [000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
>>>>>>>> [000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
>>>>>>>> [000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
>>>>>>>> [000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
>>>>>>>> [000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
>>>>>>>> [000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
>>>>>>>> [000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
>>>>>>>> [004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
>>>>>>>> [004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
>>>>>>>> [010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
>>>>>>>> [000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
>>>>>>>> [012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP Connection(2673)-127.0.0.1
>>>>>>>> [000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
>>>>>>>> [000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
>>>>>>>> [000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
>>>>>>>> [000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
>>>>>>>> [000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
>>>>>>>> [000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
>>>>>>>> [000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
>>>>>>>> [000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
>>>>>>>> [000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17
>>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>>> --
>>> Julien Anguenot (@anguenot)
>>> USA +1.832.408.0344 <tel:+1.832.408.0344>  
>>> FR +33.7.86.85.70.44 <tel:%2B33.7.86.85.70.44>
> 
> 
> 
> 
> --
> 
> 
> 
> 
> 


Re: Cassandra eats all cpu cores, high load average

Posted by Carlos Rolo <ro...@pythian.com>.
If you have internode_compression: all, try disabling it.
Also, I would move to STCS if you have a lot of tombstones.
If they get piled up in the higher levels, you have to wait until those higher
levels get compacted before you get them out.
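
(One way to gauge how much droppable-tombstone data is sitting in the larger, higher-level SSTables is sstablemetadata; the data path and table name below are assumptions.)

$ ls -lhS /var/lib/cassandra/data/my_keyspace/my_table-*/ | head            # biggest SSTables first
$ sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db | grep -i droppable   # "Estimated droppable tombstones" ratio per SSTable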

For G1 your heap is too small. Bump that to 16GB (or at least 12GB). Or
revert back to CMS with 12GB Heap / 3GB NewSize.

These are somewhat wild guesses since I don't have eyes on the cluster, but
from what I've seen, this should bring some improvements.



Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
<http://linkedin.com/in/carlosjuzarterolo>
Mobile: +351 91 891 81 00 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Fri, Feb 12, 2016 at 5:24 PM, Julien Anguenot <ju...@anguenot.org>
wrote:

> If you positive this is not compaction related I would:
>
>    1. check disk IOPs and latency on the EBS volume. (dstat)
>    2. turn GC log on in cassandra-env.sh  and use jstat to see what is
> happening to your HEAP.
>
> I have been asking about compactions initially because if you having one
> (1) big table written by all nodes and fully replicated to all nodes in the
> cluster would definitely trigger constant compactions on every nodes
> depending on write throughput.
>
>    J.
>
> On Feb 12, 2016, at 11:03 AM, Skvazh Roman <r...@skvazh.com> wrote:
>
> Does the load decrease and the node answers requests “normally” when you
> do disable auto-compaction? You actually see pending compactions on nodes
> having high load correct?
>
> Nope.
>
> All seems legit here. Using G1 GC?
>
> Yes
>
> Problems also occurred on nodes without pending compactions.
>
>
>
> On 12 Feb 2016, at 18:44, Julien Anguenot <ju...@anguenot.org> wrote:
>
>
> On Feb 12, 2016, at 9:24 AM, Skvazh Roman <r...@skvazh.com> wrote:
>
> I have disabled autocompaction and stop it on highload node.
>
>
> Does the load decrease and the node answers requests “normally” when you
> do disable auto-compaction? You actually see pending compactions on nodes
> having high load correct?
>
> Heap is 8Gb. gc_grace is 86400
>
> All sstables is about 200-300 Mb.
>
>
> All seems legit here. Using G1 GC?
>
> $ nodetool compactionstats
> pending tasks: 14
>
>
> Try to increase the compactors from 4 to 6-8 on a node, disable gossip and
> let it finish compacting and put it back in the ring by enabling gossip.
> See what happens.
>
> The tombstones count growing is because the auto-aucompactions are
> disabled on these nodes. Probably not your issue.
>
>    J.
>
>
>
> $ dstat -lvnr 10
> ---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total-
> ---system-- ----total-cpu-usage---- -net/total- --io/total-
> 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ|
> int   csw |usr sys idl wai hiq siq| recv  send| read  writ
> 29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k
> 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
> 29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0
>   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
> 29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0
>    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
> 29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0
>    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C
>
>
> JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg
> 28.09
> http://code.google.com/p/jvmtop
>
> PID 32505: org.apache.cassandra.service.CassandraDaemon
> ARGS:
> VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar
> -XX:+CMSCl[...]
> VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
> UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
> GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
> CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a
>
>  TID   NAME                                    STATE    CPU  TOTALCPU
> BLOCKEDBY
>    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
>    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
>    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
>    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
>    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
>    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
>    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
>    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
>    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
>    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
> Note: Only top 10 threads (according cpu load) are shown!
>
>
> On 12 Feb 2016, at 18:14, Julien Anguenot <ju...@anguenot.org> wrote:
>
> At the time when the load is high and you have to restart, do you see any
> pending compactions when using `nodetool compactionstats`?
>
> Possible to see a `nodetool compactionstats` taken *when* the load is too
> high?  Have you checked the size of your SSTables for that big table? Any
> large ones in there?  What about the Java HEAP configuration on these nodes?
>
> If you have too many tombstones I would try to decrease gc_grace_seconds
> so they get cleared out earlier during compactions.
>
>  J.
>
> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r...@skvazh.com> wrote:
>
> There is 1-4 compactions at that moment.
> We have many tombstones, which does not removed.
> DroppableTombstoneRatio is 5-6 (greater than 1)
>
> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
>
> Hey,
>
> What about compactions count when that is happening?
>
> J.
>
>
> On Feb 12, 2016, at 3:06 AM, Skvazh Roman <r...@skvazh.com> wrote:
>
> Hello!
> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached
> 1.5 TB 4000 PIOPS EBS drive.
> Sometimes one or two nodes user cpu spikes to 100%, load average to 20-30
> - read requests drops of.
> Only restart of this cassandra services helps.
> Please advice.
>
> One big table with wide rows. 600 Gb per node.
> LZ4Compressor
> LeveledCompaction
>
> concurrent compactors: 4
> compactor throughput: tried from 16 to 128
> Concurrent_readers: from 16 to 32
> Concurrent_writers: 128
>
>
> https://gist.github.com/rskvazh/de916327779b98a437a6
>
>
> JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg
> 19.35
> http://code.google.com/p/jvmtop
>
> Profiling PID 9256: org.apache.cassandra.service.CassandraDa
>
> 95.73% (     4.31s)
> ....google.common.collect.AbstractIterator.tryToComputeN()
> 1.39% (     0.06s) com.google.common.base.Objects.hashCode()
> 1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
> 0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
> 0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
> 0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
> 0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()
>
>
> ttop:
>
> 2016-02-12T08:20:25.605+0000 Process summary
> process cpu=1565.15%
> application cpu=1314.48% (user=1354.48% sys=-40.00%)
> other: cpu=250.67%
> heap allocation rate 146mb/s
> [000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
> [000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
> [000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
> [000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
> [000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
> [000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
> [000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
> [000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
> [000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
> [000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
> [000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
> [000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
> [000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
> [000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
> [000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
> [000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
> [004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
> [004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
> [010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
> [000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
> [012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP
> Connection(2673)-127.0.0.1
> [000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
> [000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
> [000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
> [000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
> [000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
> [000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
> [000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
> [000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
> [000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17
>
>
>
>
>
>
>
> --
> Julien Anguenot (@anguenot)
> USA +1.832.408.0344
> FR +33.7.86.85.70.44
>
>
>
>


Re: Cassandra eats all cpu cores, high load average

Posted by Julien Anguenot <ju...@anguenot.org>.
If you are positive this is not compaction related, I would:

   1. check disk IOPS and latency on the EBS volume (dstat).
   2. turn GC logging on in cassandra-env.sh and use jstat to see what is happening to your HEAP.
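
(For point 1, a sketch of the kind of commands meant; the EBS device name is an assumption:)

$ dstat -lvnr 10            # load, memory, paging, disk throughput and IO request rate, every 10s
$ iostat -xm 5 /dev/xvdf    # per-device latency (await) and utilisation for the EBS volume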

I was asking about compactions initially because having one (1) big table that is written by all nodes and fully replicated to all nodes in the cluster would definitely trigger constant compactions on every node, depending on write throughput.

   J. 

> On Feb 12, 2016, at 11:03 AM, Skvazh Roman <r...@skvazh.com> wrote:
> 
>> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?
> 
> Nope.
> 
>> All seems legit here. Using G1 GC?
> Yes
> 
> Problems also occurred on nodes without pending compactions.
> 
> 
> 
>> On 12 Feb 2016, at 18:44, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>> 
>>> 
>>> On Feb 12, 2016, at 9:24 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>> 
>>> I have disabled autocompaction and stop it on highload node.
>> 
>> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?
>> 
>>> Heap is 8Gb. gc_grace is 86400
>>> All sstables is about 200-300 Mb.
>> 
>> All seems legit here. Using G1 GC?
>> 
>>> $ nodetool compactionstats
>>> pending tasks: 14
>> 
>> Try to increase the compactors from 4 to 6-8 on a node, disable gossip and let it finish compacting and put it back in the ring by enabling gossip. See what happens.
>> 
>> The tombstones count growing is because the auto-aucompactions are disabled on these nodes. Probably not your issue.
>> 
>>    J.
>> 
>> 
>>> 
>>> $ dstat -lvnr 10
>>> ---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total- --io/total-
>>> 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| recv  send| read  writ
>>> 29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
>>> 29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
>>> 29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
>>> 29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C
>>> 
>>> 
>>> JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg 28.09
>>> http://code.google.com/p/jvmtop <http://code.google.com/p/jvmtop>
>>> 
>>> PID 32505: org.apache.cassandra.service.CassandraDaemon
>>> ARGS:
>>> VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+CMSCl[...]
>>> VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
>>> UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
>>> GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
>>> CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a
>>> 
>>>  TID   NAME                                    STATE    CPU  TOTALCPU BLOCKEDBY
>>>    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
>>>    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
>>>    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
>>>    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
>>>    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
>>>    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
>>>    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
>>>    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
>>>    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
>>>    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
>>> Note: Only top 10 threads (according cpu load) are shown!
>>> 
>>> 
>>>> On 12 Feb 2016, at 18:14, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>>> 
>>>> At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?
>>>> 
>>>> Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?
>>>> 
>>>> If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
>>>> 
>>>>  J.
>>>> 
>>>>> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>>> 
>>>>> There is 1-4 compactions at that moment.
>>>>> We have many tombstones, which does not removed.
>>>>> DroppableTombstoneRatio is 5-6 (greater than 1)
>>>>> 
>>>>>> On 12 Feb 2016, at 15:53, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>>>>> 
>>>>>> Hey, 
>>>>>> 
>>>>>> What about compactions count when that is happening?
>>>>>> 
>>>>>> J.
>>>>>> 
>>>>>> 
>>>>>>> On Feb 12, 2016, at 3:06 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>>>>> 
>>>>>>> Hello!
>>>>>>> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5 TB 4000 PIOPS EBS drive.
>>>>>>> Sometimes one or two nodes user cpu spikes to 100%, load average to 20-30 - read requests drops of.
>>>>>>> Only restart of this cassandra services helps.
>>>>>>> Please advice.
>>>>>>> 
>>>>>>> One big table with wide rows. 600 Gb per node.
>>>>>>> LZ4Compressor
>>>>>>> LeveledCompaction
>>>>>>> 
>>>>>>> concurrent compactors: 4
>>>>>>> compactor throughput: tried from 16 to 128
>>>>>>> Concurrent_readers: from 16 to 32
>>>>>>> Concurrent_writers: 128
>>>>>>> 
>>>>>>> 
>>>>>>> https://gist.github.com/rskvazh/de916327779b98a437a6 <https://gist.github.com/rskvazh/de916327779b98a437a6>
>>>>>>> 
>>>>>>> 
>>>>>>> JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg 19.35
>>>>>>> http://code.google.com/p/jvmtop <http://code.google.com/p/jvmtop>
>>>>>>> 
>>>>>>> Profiling PID 9256: org.apache.cassandra.service.CassandraDa
>>>>>>> 
>>>>>>> 95.73% (     4.31s) ....google.common.collect.AbstractIterator.tryToComputeN()
>>>>>>> 1.39% (     0.06s) com.google.common.base.Objects.hashCode()
>>>>>>> 1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
>>>>>>> 0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
>>>>>>> 0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
>>>>>>> 0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
>>>>>>> 0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()
>>>>>>> 
>>>>>>> 
>>>>>>> ttop:
>>>>>>> 
>>>>>>> 2016-02-12T08:20:25.605+0000 Process summary
>>>>>>> process cpu=1565.15%
>>>>>>> application cpu=1314.48% (user=1354.48% sys=-40.00%)
>>>>>>> other: cpu=250.67%
>>>>>>> heap allocation rate 146mb/s
>>>>>>> [000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
>>>>>>> [000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
>>>>>>> [000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
>>>>>>> [000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
>>>>>>> [000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
>>>>>>> [000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
>>>>>>> [000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
>>>>>>> [000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
>>>>>>> [000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
>>>>>>> [000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
>>>>>>> [000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
>>>>>>> [000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
>>>>>>> [000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
>>>>>>> [000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
>>>>>>> [000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
>>>>>>> [000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
>>>>>>> [004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
>>>>>>> [004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
>>>>>>> [010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
>>>>>>> [000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
>>>>>>> [012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP Connection(2673)-127.0.0.1
>>>>>>> [000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
>>>>>>> [000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
>>>>>>> [000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
>>>>>>> [000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
>>>>>>> [000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
>>>>>>> [000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
>>>>>>> [000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
>>>>>>> [000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
>>>>>>> [000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17
>>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>> 
>> --
>> Julien Anguenot (@anguenot)
>> USA +1.832.408.0344 <tel:+1.832.408.0344>  
>> FR +33.7.86.85.70.44




Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?

Nope.

> All seems legit here. Using G1 GC?
Yes

Problems also occurred on nodes without pending compactions.



> On 12 Feb 2016, at 18:44, Julien Anguenot <ju...@anguenot.org> wrote:
> 
>> 
>> On Feb 12, 2016, at 9:24 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>> 
>> I have disabled autocompaction and stop it on highload node.
> 
> Does the load decrease and the node answers requests “normally” when you do disable auto-compaction? You actually see pending compactions on nodes having high load correct?
> 
>> Heap is 8Gb. gc_grace is 86400
>> All sstables is about 200-300 Mb.
> 
> All seems legit here. Using G1 GC?
> 
>> $ nodetool compactionstats
>> pending tasks: 14
> 
> Try to increase the compactors from 4 to 6-8 on a node, disable gossip and let it finish compacting and put it back in the ring by enabling gossip. See what happens.
> 
> The tombstones count growing is because the auto-aucompactions are disabled on these nodes. Probably not your issue.
> 
>    J.
> 
> 
>> 
>> $ dstat -lvnr 10
>> ---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total- --io/total-
>> 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| recv  send| read  writ
>> 29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
>> 29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
>> 29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
>> 29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C
>> 
>> 
>> JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg 28.09
>> http://code.google.com/p/jvmtop <http://code.google.com/p/jvmtop>
>> 
>> PID 32505: org.apache.cassandra.service.CassandraDaemon
>> ARGS:
>> VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+CMSCl[...]
>> VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
>> UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
>> GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
>> CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a
>> 
>>  TID   NAME                                    STATE    CPU  TOTALCPU BLOCKEDBY
>>    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
>>    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
>>    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
>>    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
>>    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
>>    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
>>    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
>>    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
>>    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
>>    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
>> Note: Only top 10 threads (according cpu load) are shown!
>> 
>> 
>>> On 12 Feb 2016, at 18:14, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>> 
>>> At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?
>>> 
>>> Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?
>>> 
>>> If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
>>> 
>>>  J.
>>> 
>>>> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>> 
>>>> There is 1-4 compactions at that moment.
>>>> We have many tombstones, which does not removed.
>>>> DroppableTombstoneRatio is 5-6 (greater than 1)
>>>> 
>>>>> On 12 Feb 2016, at 15:53, Julien Anguenot <julien@anguenot.org <ma...@anguenot.org>> wrote:
>>>>> 
>>>>> Hey, 
>>>>> 
>>>>> What about compactions count when that is happening?
>>>>> 
>>>>> J.
>>>>> 
>>>>> 
>>>>>> On Feb 12, 2016, at 3:06 AM, Skvazh Roman <r@skvazh.com <ma...@skvazh.com>> wrote:
>>>>>> 
>>>>>> Hello!
>>>>>> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5 TB 4000 PIOPS EBS drive.
>>>>>> Sometimes one or two nodes user cpu spikes to 100%, load average to 20-30 - read requests drops of.
>>>>>> Only restart of this cassandra services helps.
>>>>>> Please advice.
>>>>>> 
>>>>>> One big table with wide rows. 600 Gb per node.
>>>>>> LZ4Compressor
>>>>>> LeveledCompaction
>>>>>> 
>>>>>> concurrent compactors: 4
>>>>>> compactor throughput: tried from 16 to 128
>>>>>> Concurrent_readers: from 16 to 32
>>>>>> Concurrent_writers: 128
>>>>>> 
>>>>>> 
>>>>>> https://gist.github.com/rskvazh/de916327779b98a437a6 <https://gist.github.com/rskvazh/de916327779b98a437a6>
>>>>>> 
>>>>>> 
>>>>>> JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg 19.35
>>>>>> http://code.google.com/p/jvmtop
>>>>>> 
>>>>>> Profiling PID 9256: org.apache.cassandra.service.CassandraDa
>>>>>> 
>>>>>> 95.73% (     4.31s) ....google.common.collect.AbstractIterator.tryToComputeN()
>>>>>> 1.39% (     0.06s) com.google.common.base.Objects.hashCode()
>>>>>> 1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
>>>>>> 0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
>>>>>> 0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
>>>>>> 0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
>>>>>> 0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()
>>>>>> 
>>>>>> 
>>>>>> ttop:
>>>>>> 
>>>>>> 2016-02-12T08:20:25.605+0000 Process summary
>>>>>> process cpu=1565.15%
>>>>>> application cpu=1314.48% (user=1354.48% sys=-40.00%)
>>>>>> other: cpu=250.67%
>>>>>> heap allocation rate 146mb/s
>>>>>> [000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
>>>>>> [000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
>>>>>> [000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
>>>>>> [000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
>>>>>> [000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
>>>>>> [000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
>>>>>> [000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
>>>>>> [000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
>>>>>> [000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
>>>>>> [000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
>>>>>> [000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
>>>>>> [000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
>>>>>> [000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
>>>>>> [000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
>>>>>> [000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
>>>>>> [000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
>>>>>> [004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
>>>>>> [004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
>>>>>> [010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
>>>>>> [000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
>>>>>> [012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP Connection(2673)-127.0.0.1
>>>>>> [000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
>>>>>> [000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
>>>>>> [000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
>>>>>> [000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
>>>>>> [000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
>>>>>> [000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
>>>>>> [000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
>>>>>> [000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
>>>>>> [000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17
>>>>> 
>>> 
>>> 
>>> 
>> 
> 
> --
> Julien Anguenot (@anguenot)
> USA +1.832.408.0344 <tel:+1.832.408.0344>  
> FR +33.7.86.85.70.44


Re: Cassandra eats all cpu cores, high load average

Posted by Julien Anguenot <ju...@anguenot.org>.
> On Feb 12, 2016, at 9:24 AM, Skvazh Roman <r...@skvazh.com> wrote:
> 
> I have disabled autocompaction and stop it on highload node.

Does the load decrease and does the node answer requests “normally” when you disable auto-compaction? You do actually see pending compactions on the nodes having high load, correct?

> Heap is 8Gb. gc_grace is 86400
> All sstables is about 200-300 Mb.

All seems legit here. Using G1 GC?

> $ nodetool compactionstats
> pending tasks: 14

Try increasing the compactors from 4 to 6-8 on one node, disable gossip, let it finish compacting, and then put it back in the ring by re-enabling gossip. See what happens.
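
(A sketch of that drain sequence with nodetool; whether to also stop the client-facing native transport while draining is a judgement call, and the concurrent_compactors bump itself is assumed to be a cassandra.yaml edit plus restart on this version:)

$ nodetool disablegossip       # peers mark the node down and stop routing requests to it
$ nodetool disablebinary       # optionally stop serving native-protocol clients as well
$ nodetool compactionstats     # repeat until pending tasks reaches 0
$ nodetool enablebinary
$ nodetool enablegossip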

The tombstone count is growing because auto-compaction is disabled on these nodes. That is probably not your issue.

   J.


> 
> $ dstat -lvnr 10
> ---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total- --io/total-
> 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| recv  send| read  writ
> 29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
> 29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
> 29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
> 29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C
> 
> 
> JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg 28.09
> http://code.google.com/p/jvmtop
> 
> PID 32505: org.apache.cassandra.service.CassandraDaemon
> ARGS:
> VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+CMSCl[...]
> VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
> UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
> GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
> CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a
> 
>  TID   NAME                                    STATE    CPU  TOTALCPU BLOCKEDBY
>    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
>    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
>    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
>    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
>    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
>    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
>    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
>    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
>    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
>    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
> Note: Only top 10 threads (according cpu load) are shown!
> 
> 
>> On 12 Feb 2016, at 18:14, Julien Anguenot <ju...@anguenot.org> wrote:
>> 
>> At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?
>> 
>> Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?
>> 
>> If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
>> 
>>  J.
>> 
>>> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r...@skvazh.com> wrote:
>>> 
>>> There is 1-4 compactions at that moment.
>>> We have many tombstones, which does not removed.
>>> DroppableTombstoneRatio is 5-6 (greater than 1)
>>> 
>>>> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
>>>> 
>>>> Hey, 
>>>> 
>>>> What about compactions count when that is happening?
>>>> 
>>>> J.
>>>> 
>>>> 
>>>>> On Feb 12, 2016, at 3:06 AM, Skvazh Roman <r...@skvazh.com> wrote:
>>>>> 
>>>>> Hello!
>>>>> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5 TB 4000 PIOPS EBS drive.
>>>>> Sometimes one or two nodes user cpu spikes to 100%, load average to 20-30 - read requests drops of.
>>>>> Only restart of this cassandra services helps.
>>>>> Please advice.
>>>>> 
>>>>> One big table with wide rows. 600 Gb per node.
>>>>> LZ4Compressor
>>>>> LeveledCompaction
>>>>> 
>>>>> concurrent compactors: 4
>>>>> compactor throughput: tried from 16 to 128
>>>>> Concurrent_readers: from 16 to 32
>>>>> Concurrent_writers: 128
>>>>> 
>>>>> 
>>>>> https://gist.github.com/rskvazh/de916327779b98a437a6
>>>>> 
>>>>> 
>>>>> JvmTop 0.8.0 alpha - 06:51:10,  amd64, 16 cpus, Linux 3.14.44-3, load avg 19.35
>>>>> http://code.google.com/p/jvmtop
>>>>> 
>>>>> Profiling PID 9256: org.apache.cassandra.service.CassandraDa
>>>>> 
>>>>> 95.73% (     4.31s) ....google.common.collect.AbstractIterator.tryToComputeN()
>>>>> 1.39% (     0.06s) com.google.common.base.Objects.hashCode()
>>>>> 1.26% (     0.06s) io.netty.channel.epoll.Native.epollWait()
>>>>> 0.85% (     0.04s) net.jpountz.lz4.LZ4JNI.LZ4_compress_limitedOutput()
>>>>> 0.46% (     0.02s) net.jpountz.lz4.LZ4JNI.LZ4_decompress_fast()
>>>>> 0.26% (     0.01s) com.google.common.collect.Iterators$7.computeNext()
>>>>> 0.06% (     0.00s) io.netty.channel.epoll.Native.eventFdWrite()
>>>>> 
>>>>> 
>>>>> ttop:
>>>>> 
>>>>> 2016-02-12T08:20:25.605+0000 Process summary
>>>>> process cpu=1565.15%
>>>>> application cpu=1314.48% (user=1354.48% sys=-40.00%)
>>>>> other: cpu=250.67%
>>>>> heap allocation rate 146mb/s
>>>>> [000405] user=76.25% sys=-0.54% alloc=     0b/s - SharedPool-Worker-9
>>>>> [000457] user=75.54% sys=-1.26% alloc=     0b/s - SharedPool-Worker-14
>>>>> [000451] user=73.52% sys= 0.29% alloc=     0b/s - SharedPool-Worker-16
>>>>> [000311] user=76.45% sys=-2.99% alloc=     0b/s - SharedPool-Worker-4
>>>>> [000389] user=70.69% sys= 2.62% alloc=     0b/s - SharedPool-Worker-6
>>>>> [000388] user=86.95% sys=-14.28% alloc=     0b/s - SharedPool-Worker-5
>>>>> [000404] user=70.69% sys= 0.10% alloc=     0b/s - SharedPool-Worker-8
>>>>> [000390] user=72.61% sys=-1.82% alloc=     0b/s - SharedPool-Worker-7
>>>>> [000255] user=87.86% sys=-17.87% alloc=     0b/s - SharedPool-Worker-1
>>>>> [000444] user=72.21% sys=-2.30% alloc=     0b/s - SharedPool-Worker-12
>>>>> [000310] user=71.50% sys=-2.31% alloc=     0b/s - SharedPool-Worker-3
>>>>> [000445] user=69.68% sys=-0.83% alloc=     0b/s - SharedPool-Worker-13
>>>>> [000406] user=72.61% sys=-4.40% alloc=     0b/s - SharedPool-Worker-10
>>>>> [000446] user=69.78% sys=-1.65% alloc=     0b/s - SharedPool-Worker-11
>>>>> [000452] user=66.86% sys= 0.22% alloc=     0b/s - SharedPool-Worker-15
>>>>> [000256] user=69.08% sys=-2.42% alloc=     0b/s - SharedPool-Worker-2
>>>>> [004496] user=29.99% sys= 0.59% alloc=   30mb/s - CompactionExecutor:15
>>>>> [004906] user=29.49% sys= 0.74% alloc=   39mb/s - CompactionExecutor:16
>>>>> [010143] user=28.58% sys= 0.25% alloc=   26mb/s - CompactionExecutor:17
>>>>> [000785] user=27.87% sys= 0.70% alloc=   38mb/s - CompactionExecutor:12
>>>>> [012723] user= 9.09% sys= 2.46% alloc= 2977kb/s - RMI TCP Connection(2673)-127.0.0.1
>>>>> [000555] user= 5.35% sys=-0.08% alloc=  474kb/s - SharedPool-Worker-24
>>>>> [000560] user= 3.94% sys= 0.07% alloc=  434kb/s - SharedPool-Worker-22
>>>>> [000557] user= 3.94% sys=-0.17% alloc=  339kb/s - SharedPool-Worker-25
>>>>> [000447] user= 2.73% sys= 0.60% alloc=  436kb/s - SharedPool-Worker-19
>>>>> [000563] user= 3.33% sys=-0.04% alloc=  460kb/s - SharedPool-Worker-20
>>>>> [000448] user= 2.73% sys= 0.27% alloc=  414kb/s - SharedPool-Worker-21
>>>>> [000554] user= 1.72% sys= 0.70% alloc=  232kb/s - SharedPool-Worker-26
>>>>> [000558] user= 1.41% sys= 0.39% alloc=  213kb/s - SharedPool-Worker-23
>>>>> [000450] user= 1.41% sys=-0.03% alloc=  158kb/s - SharedPool-Worker-17
>>>> 
>> 
>> 
>> 
> 

--
Julien Anguenot (@anguenot)
USA +1.832.408.0344 <tel:+1.832.408.0344>  
FR +33.7.86.85.70.44


Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
I have disabled autocompaction and stopped the running compactions on the high-load node.
The freezes move through all nodes sequentially, with 2-6 nodes affected at the same time.
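
(For reference, the nodetool equivalents of those two steps would be roughly the following; keyspace/table names are placeholders:)

$ nodetool disableautocompaction my_keyspace my_table
$ nodetool stop COMPACTION      # abort any compaction already in flight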

Heap is 8 GB. gc_grace_seconds is 86400.
All SSTables are about 200-300 MB.

$ nodetool compactionstats
pending tasks: 14


$ dstat -lvnr 10
---load-avg--- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total- --io/total-
 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| recv  send| read  writ
29.4 28.6 23.5|0.0   0 1.2|11.3G  190M 17.6G  407M|   0     0 |7507k 7330k|  13k   40k| 11   1  88   0   0   0|   0     0 |96.5  64.6
29.3 28.6 23.5| 29   0 0.9|11.3G  190M 17.6G  408M|   0     0 |   0   189k|9822  2319 | 99   0   0   0   0   0| 138k  120k|   0  4.30
29.4 28.6 23.6| 30   0 2.0|11.3G  190M 17.6G  408M|   0     0 |   0    26k|8689  2189 |100   0   0   0   0   0| 139k  120k|   0  2.70
29.4 28.7 23.6| 29   0 3.0|11.3G  190M 17.6G  408M|   0     0 |   0    20k|8722  1846 | 99   0   0   0   0   0| 136k  120k|   0  1.50 ^C


JvmTop 0.8.0 alpha - 15:20:37,  amd64, 16 cpus, Linux 3.14.44-3, load avg 28.09
 http://code.google.com/p/jvmtop

 PID 32505: org.apache.cassandra.service.CassandraDaemon
 ARGS:
 VMARGS: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+CMSCl[...]
 VM: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_65
 UP:  8:31m  #THR: 334  #THRPEAK: 437  #THRCREATED: 4694 USER: cassandra
 GC-Time:  0: 8m   #GC-Runs: 6378      #TotalLoadedClasses: 5926
 CPU: 97.96% GC:  0.00% HEAP:6049m /7540m NONHEAP:  82m /  n/a

  TID   NAME                                    STATE    CPU  TOTALCPU BLOCKEDBY
    447 SharedPool-Worker-45                 RUNNABLE 60.47%     1.03%
    343 SharedPool-Worker-2                  RUNNABLE 56.46%     3.07%
    349 SharedPool-Worker-8                  RUNNABLE 56.43%     1.61%
    456 SharedPool-Worker-25                 RUNNABLE 55.25%     1.06%
    483 SharedPool-Worker-40                 RUNNABLE 53.06%     1.04%
    475 SharedPool-Worker-53                 RUNNABLE 52.31%     1.03%
    464 SharedPool-Worker-20                 RUNNABLE 52.00%     1.11%
    577 SharedPool-Worker-71                 RUNNABLE 51.73%     1.02%
    404 SharedPool-Worker-10                 RUNNABLE 51.10%     1.29%
    486 SharedPool-Worker-34                 RUNNABLE 51.06%     1.03%
 Note: Only top 10 threads (according cpu load) are shown!


> On 12 Feb 2016, at 18:14, Julien Anguenot <ju...@anguenot.org> wrote:
> 
> At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?
> 
> Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?
> 
> If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
> 
>   J.
> 
>> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r...@skvazh.com> wrote:
>> 
>> There is 1-4 compactions at that moment.
>> We have many tombstones, which does not removed.
>> DroppableTombstoneRatio is 5-6 (greater than 1)
>> 
>>> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
>>> 
>>> Hey, 
>>> 
>>> What about compactions count when that is happening?
>>> 
>>> J.


Re: Cassandra eats all cpu cores, high load average

Posted by Julien Anguenot <ju...@anguenot.org>.
At the time when the load is high and you have to restart, do you see any pending compactions when using `nodetool compactionstats`?

Possible to see a `nodetool compactionstats` taken *when* the load is too high?  Have you checked the size of your SSTables for that big table? Any large ones in there?  What about the Java HEAP configuration on these nodes?

If you have too many tombstones I would try to decrease gc_grace_seconds so they get cleared out earlier during compactions.
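
For example, a minimal sketch of that change, assuming the home_timeline table whose schema appears later in this thread; the value is illustrative, and repairs still have to complete within the new window or deleted data can reappear:

-- Hedged sketch: lower gc_grace_seconds so droppable tombstones become
-- purgeable sooner during compaction. The table name is taken from the
-- schema posted later in this thread; 10800 s (3 h) is illustrative only.
ALTER TABLE home_timeline WITH gc_grace_seconds = 10800;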

   J.

> On Feb 12, 2016, at 8:45 AM, Skvazh Roman <r...@skvazh.com> wrote:
> 
> There are 1-4 compactions running at that moment.
> We have many tombstones which are not being removed.
> DroppableTombstoneRatio is 5-6 (greater than 1).
> 
>> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
>> 
>> Hey, 
>> 
>> What about compactions count when that is happening?
>> 
>>  J.




Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
After disabling binary, gossip, and thrift, the node still blocks on 16 read stages, with a large pending read queue (the isolation commands are sketched after the tpstats output below):

[iadmin@ip-10-0-25-46 ~]$ nodetool tpstats
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     0         0       19587002         0                 0
ReadStage                        16    122722         825762         0                 0
RequestResponseStage              0         0       14281567         0                 0
ReadRepairStage                   0         0          37390         0                 0
CounterMutationStage              0         0              0         0                 0
MiscStage                         0         0              0         0                 0
HintedHandoff                     0         0            114         0                 0
GossipStage                       0         0          93775         0                 0
CacheCleanupExecutor              0         0              0         0                 0
InternalResponseStage             0         0              0         0                 0
CommitLogArchiver                 0         0              0         0                 0
CompactionExecutor                0         0          18523         0                 0
ValidationExecutor                0         0             18         0                 0
MigrationStage                    0         0              6         0                 0
AntiEntropyStage                  0         0             60         0                 0
PendingRangeCalculator            0         0             89         0                 0
Sampler                           0         0              0         0                 0
MemtableFlushWriter               0         0           2489         0                 0
MemtablePostFlush                 0         0           2562         0                 0
MemtableReclaimMemory             1        28           2461         0                 0

Message type           Dropped
READ                         0
RANGE_SLICE                  0
_TRACE                       0
MUTATION                     0
COUNTER_MUTATION             0
BINARY                       0
REQUEST_RESPONSE             0
PAGED_RANGE                  0
READ_REPAIR                  0
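
For reference, a minimal sketch of the isolation steps described above, assuming the standard nodetool subcommands shipped with 2.1 were used (rather than JMX directly):

# Take the node out of client and cluster traffic, then snapshot thread pools.
nodetool disablebinary    # stop accepting native-protocol (CQL) clients
nodetool disablethrift    # stop accepting Thrift clients
nodetool disablegossip    # stop gossiping with the rest of the cluster
nodetool tpstats          # thread pool snapshot shown above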

> On 12 Feb 2016, at 17:45, Skvazh Roman <r...@skvazh.com> wrote:
> 
> There are 1-4 compactions running at that moment.
> We have many tombstones which are not being removed.
> DroppableTombstoneRatio is 5-6 (greater than 1).
> 
>> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
>> 
>> Hey, 
>> 
>> What about compactions count when that is happening?
>> 
>>  J.


Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
There are 1-4 compactions running at that moment.
We have many tombstones which are not being removed.
DroppableTombstoneRatio is 5-6 (greater than 1).
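
One way to spot-check that ratio per SSTable, assuming the sstablemetadata tool that ships with 2.1 (the path and glob below are illustrative):

# sstablemetadata prints an "Estimated droppable tombstones" figure per SSTable.
sstablemetadata /var/lib/cassandra/data/<keyspace>/<table>-*/*-Data.db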

> On 12 Feb 2016, at 15:53, Julien Anguenot <ju...@anguenot.org> wrote:
> 
> Hey, 
> 
> What about compactions count when that is happening?
> 
>   J.


Re: Cassandra eats all cpu cores, high load average

Posted by Julien Anguenot <ju...@anguenot.org>.
Hey, 

What about compactions count when that is happening?

   J.




Re: Cassandra eats all cpu cores, high load average

Posted by Skvazh Roman <r...@skvazh.com>.
> Wide rows? How wide? How many rows per partition, typically and at the extreme? how many clustering columns?


Yes, wide rows with deletions of old data. 
Number of keys (estimate): 909428
How can I calculate rows per partition via nodetool/JMX?
Roughly from 100 to 5,000,000.
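
A minimal sketch, assuming the standard 2.1 tooling and the home_timeline table from the schema below (keyspace placeholder used because it is not given in the thread):

# cfhistograms reports percentiles for partition size and cell count, which is
# the closest per-partition "row count" estimate nodetool provides in 2.1.
nodetool cfhistograms <keyspace> home_timeline
# cfstats shows per-table totals, including compacted partition size estimates.
nodetool cfstats <keyspace>.home_timeline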

I know it is an anti-pattern: a cyclic buffer / queue.
But the problems begin when we start adding new nodes to the cluster: both the old and the new nodes run compactions.
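
If compaction pressure during bootstrap is the trigger, the live knob is the same throughput cap mentioned in the original post; a hedged sketch with illustrative values:

nodetool getcompactionthroughput      # current cap in MB/s
nodetool setcompactionthroughput 16   # throttle harder while new nodes join (0 = unthrottled)
nodetool compactionstats              # watch pending/active compactions react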

DroppableTombstoneRatio: 5 - 6.5
TombstonesPerSlice: 0

CREATE TABLE home_timeline (
    uid blob,
    tuuid timeuuid,
    cid text,
    cuid blob,
    PRIMARY KEY (uid, tuuid)
) WITH CLUSTERING ORDER BY (tuuid ASC)
    AND bloom_filter_fp_chance = 0.1
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND compaction = {'sstable_size_in_mb': '160', 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 86400
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.1
    AND speculative_retry = '99.0PERCENTILE';

But we query this table with ORDER BY tuuid DESC. Could this be a problem?
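
For context, a sketch of the assumed query shape (column names from the schema above; the bind marker and LIMIT are illustrative). Reading a partition against its clustering order is supported, but on 2.1 reversed reads can be noticeably more expensive for very wide partitions:

-- Hedged sketch of the assumed read pattern.
SELECT tuuid, cid, cuid
FROM home_timeline
WHERE uid = ?
ORDER BY tuuid DESC
LIMIT 100;

If reversed reads turn out to matter, a table created with CLUSTERING ORDER BY (tuuid DESC) would make this the natural read direction; clustering order cannot be changed on an existing table.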

> When you restart the node does it revert to completely normal response?
Yes, it does.

> Which release of Cassandra?
Cassandra 2.1.13

> Does every node eventually hit this problem?
Yes, almost all.

> After a restart, how long before the problem recurs for that node?
Sometimes it recurred 5 minutes after a restart, sometimes only after 4 hours.


> On 12 Feb 2016, at 23:20, Jack Krupansky <ja...@gmail.com> wrote:
> 
> Wide rows? How wide? How many rows per partition, typically and at the extreme? how many clustering columns?
> 
> When you restart the node does it revert to completely normal response?
> 
> Which release of Cassandra?
> 
> Does every node eventually hit this problem?
> 
> After a restart, how long before the problem recurs for that node?
> 
> -- Jack Krupansky


Re: Cassandra eats all cpu cores, high load average

Posted by Jack Krupansky <ja...@gmail.com>.
Wide rows? How wide? How many rows per partition, typically and at the
extreme? How many clustering columns?

When you restart the node does it revert to completely normal response?

Which release of Cassandra?

Does every node eventually hit this problem?

After a restart, how long before the problem recurs for that node?

-- Jack Krupansky
