Posted to dev@storm.apache.org by "Jungtaek Lim (JIRA)" <ji...@apache.org> on 2015/07/17 01:43:04 UTC

[jira] [Resolved] (STORM-503) Short disruptor queue wait time leads to high CPU usage when idle

     [ https://issues.apache.org/jira/browse/STORM-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim resolved STORM-503.
--------------------------------
    Resolution: Fixed
      Assignee: Xingyu Su

Thanks [~xingyu] for the great work.
I merged it into the master, 0.10.x, and 0.9.x branches.

And thanks to all users/contributors for reporting!

> Short disruptor queue wait time leads to high CPU usage when idle
> -----------------------------------------------------------------
>
>                 Key: STORM-503
>                 URL: https://issues.apache.org/jira/browse/STORM-503
>             Project: Apache Storm
>          Issue Type: Bug
>    Affects Versions: 0.9.2-incubating, 0.9.1-incubating, 0.9.3
>            Reporter: Milad Fatenejad
>            Assignee: Xingyu Su
>            Priority: Minor
>
> I am fairly new to Storm, but I observed some behavior which I believe may be unintended and wanted to report it...
> I was experimenting with Storm on a topology which had a large number of threads (30) and was running on a single node for test purposes, and I noticed that even when no tuples were being processed, there was over 100% CPU utilization.
> I became concerned and investigated by attempting to reproduce the problem with a very simple topology. I took the WordCountTopology from storm-starter and ran it in an Ubuntu VM. I increased the sleep time in the RandomSentenceSpout that feeds the topology to 10 seconds so that there was effectively no work to do. I then modified the topology so that there were 30 threads for each bolt and only one instance of the spout. When I ran the topology, I noticed that there was again 100% CPU usage when idle, even on this very simple topology. After extensive experimentation (Netty vs. ZeroMQ, 0.9.3, 0.9.2, 0.9.1, multiple JVM versions) I used YourKit and found that the high utilization was coming from DisruptorQueue.consumeBatchWhenAvailable, where there is this code:
> final long availableSequence = _barrier.waitFor(nextSequence, 10, TimeUnit.MILLISECONDS);
> I increased it to 100 ms and was able to reduce the CPU utilization when idle. I am new to Storm, so I am not sure what effect modifying this number has. Is this expected behavior from Storm? I would like to propose modifying the code so that this wait is configurable if possible...
> Thank You
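
Below is a minimal, self-contained sketch of the idea proposed above: read the idle wait timeout from the topology configuration instead of hard-coding 10 ms, so an idle consumer thread wakes up far less often. The config key, class, and method names here are illustrative assumptions for the sake of the example, not the exact code merged for this issue, and the sketch uses a plain BlockingQueue rather than the Disruptor barrier.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Sketch: a consumer whose idle wait time is read from the configuration.
    public class ConfigurableWaitConsumer {

        // Illustrative key name; the real Storm setting may be named differently.
        public static final String WAIT_TIMEOUT_MILLIS = "topology.disruptor.wait.timeout.millis";

        private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();
        private final long waitTimeoutMillis;

        public ConfigurableWaitConsumer(Map<String, Object> conf) {
            // Fall back to a conservative default when the key is absent.
            Object v = conf.get(WAIT_TIMEOUT_MILLIS);
            this.waitTimeoutMillis = (v instanceof Number) ? ((Number) v).longValue() : 1000L;
        }

        public void publish(Object tuple) {
            queue.offer(tuple);
        }

        // Analogous to consumeBatchWhenAvailable: block for up to the configured
        // timeout, then loop again if nothing arrived. When idle, the thread wakes
        // once per waitTimeoutMillis instead of 100 times per second with a 10 ms wait.
        public void consumeLoop() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                Object tuple = queue.poll(waitTimeoutMillis, TimeUnit.MILLISECONDS);
                if (tuple != null) {
                    handle(tuple);
                }
            }
        }

        private void handle(Object tuple) {
            System.out.println("processed: " + tuple);
        }

        public static void main(String[] args) throws InterruptedException {
            // Usage: raise the wait to 100 ms, as the reporter did by hand.
            Map<String, Object> conf = Map.of(WAIT_TIMEOUT_MILLIS, 100L);
            ConfigurableWaitConsumer consumer = new ConfigurableWaitConsumer(conf);
            consumer.publish("hello");
            Thread worker = new Thread(() -> {
                try {
                    consumer.consumeLoop();
                } catch (InterruptedException ignored) {
                    // exit quietly when the example shuts the worker down
                }
            });
            worker.start();
            Thread.sleep(300);
            worker.interrupt();
            worker.join();
        }
    }

Exposing the value through configuration rather than a constant lets a deployment trade idle wake-up frequency against how promptly a consumer re-checks the queue after a timeout, which is the balance the report asks to make tunable.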



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)