Posted to dev@storm.apache.org by "Stan Miroshnikov (JIRA)" <ji...@apache.org> on 2015/03/17 23:26:38 UTC

[jira] [Commented] (STORM-503) Short disruptor queue wait time leads to high CPU usage when idle

    [ https://issues.apache.org/jira/browse/STORM-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366206#comment-14366206 ] 

Stan Miroshnikov commented on STORM-503:
----------------------------------------

Seeing the same with Storm 0.9.3:

88% sun.misc.Unsafe.park(boolean, long) :native
71% java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) :215
70% java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(long, java.util.concurrent.TimeUnit) :2163,2152
70% com.lmax.disruptor.BlockingWaitStrategy.waitFor(long, com.lmax.disruptor.Sequence, com.lmax.disruptor.Sequence[], com.lmax.disruptor.SequenceBarrier, long, java.util.concurrent.TimeUnit) :87
70% com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(long, long, java.util.concurrent.TimeUnit) :54
70% backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(com.lmax.disruptor.EventHandler) :97,99
70% backtype.storm.disruptor$consume_batch_when_available.invoke(java.lang.Object, java.lang.Object) :80


> Short disruptor queue wait time leads to high CPU usage when idle
> -----------------------------------------------------------------
>
>                 Key: STORM-503
>                 URL: https://issues.apache.org/jira/browse/STORM-503
>             Project: Apache Storm
>          Issue Type: Bug
>    Affects Versions: 0.9.2-incubating, 0.9.1-incubating, 0.9.3
>            Reporter: Milad Fatenejad
>            Priority: Minor
>
> I am fairly new to storm, but I observed some behavior which I believe may be unintended and wanted to report it...
> I was experimenting with using storm on a topology which had large numbers of threads (30) and was running on a single node for test purposes and noticed that even when no tuples were being processed, there was over 100% CPU utilization.
> I became concerned and investigated by attempting to reproduce it with a very simple topology. I took the WordCountTopology from storm-starter and ran it in an Ubuntu VM. I increased the sleep time in the RandomSentenceSpout that feeds the topology to 10 seconds so that there was effectively no work to do. I then modified the topology so that there were 30 threads for each bolt and only one instance of the spout. When I ran the topology I again saw 100% CPU usage when idle, even on this very simple topology. After extensive experimentation (netty vs. zeromq, 0.9.3, 0.9.2, 0.9.1, multiple JVM versions) I used YourKit and found that the high utilization was coming from DisruptorQueue.consumeBatchWhenAvailable, where there is this code:
> final long availableSequence = _barrier.waitFor(nextSequence, 10, TimeUnit.MILLISECONDS);
> I increased it to 100 ms and was able to reduce the CPU utilization when idle. I am new to storm, so I am not sure what effect modifying this number has. Is this expected behavior from storm? I would like to propose modifying the code so that this wait is configurable if possible...
> Thank You
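To illustrate why a short timed wait costs CPU on an idle topology, here is a minimal, JDK-only sketch of the timed-park pattern that shows up in the stack trace above (parkNanos under AbstractQueuedSynchronizer$ConditionObject.await). The class name IdleWaitDemo and the method countIdleWakeups are hypothetical; this is not Storm's or the Disruptor's actual code, just the same await-with-timeout loop under the stated assumption that no work ever arrives:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch (not Storm's actual code) of the idle-wait pattern
// behind DisruptorQueue.consumeBatchWhenAvailable: the consumer parks on a
// Condition with a timeout, so when the queue is idle it wakes, finds
// nothing, and re-parks roughly 1000/timeoutMs times per second per thread.
// At a 10 ms timeout, 30 executor threads means ~3000 wakeups/sec doing no
// useful work; at 100 ms it drops to ~300.
public class IdleWaitDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition notEmpty = lock.newCondition();

    // Count how many times an idle consumer wakes up during windowMs when
    // each timed wait uses timeoutMs. Nothing ever signals the Condition,
    // so every await() simply times out, mirroring the idle case.
    static int countIdleWakeups(long windowMs, long timeoutMs) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(windowMs);
        int wakeups = 0;
        while (System.nanoTime() < deadline) {
            lock.lock();
            try {
                // Analogous to _barrier.waitFor(nextSequence, timeout, unit):
                // always times out because no work ever arrives.
                notEmpty.await(timeoutMs, TimeUnit.MILLISECONDS);
            } finally {
                lock.unlock();
            }
            wakeups++;
        }
        return wakeups;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("10 ms timeout:  " + countIdleWakeups(300, 10) + " idle wakeups in 300 ms");
        System.out.println("100 ms timeout: " + countIdleWakeups(300, 100) + " idle wakeups in 300 ms");
    }
}
```

Each wakeup involves a lock acquisition, a sequence check, and a re-park, which is why the profiler attributes the time to Unsafe.park and BlockingWaitStrategy.waitFor even though no tuples are flowing. Making the timeout configurable, as proposed, lets an idle-heavy deployment trade a little wakeup latency for much lower idle CPU.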



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)