Posted to users@qpid.apache.org by Jimmy Jones <ji...@gmx.co.uk> on 2013/08/21 14:39:06 UTC

System stalling

Hi,

I've got a simple processing system using the 0.22 C++ broker, all on one box. An external system posts messages to the default headers exchange, and an ingest process receives them via a ring queue, transforms them, and outputs to a different headers exchange. Various other processes pick messages of interest off that exchange using ring queues. Recently, however, the system has been stalling: I'm still receiving lots of data from the other system, but the ingest process suddenly drops to <5% CPU, its queue fills up and messages start getting discarded from the ring, the follow-on processes drop to practically 0% CPU, and qpidd hovers around 95-120% CPU (normally it's ~75%). The rest of the system goes pretty much idle (no swapping, and there is free memory).
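
In case it's useful, the consumer side of these processes boils down to something like the following (a minimal sketch using the qpid::messaging C++ API; the queue name, size and matched header are made up for illustration rather than taken from our actual setup):

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Message.h>
#include <qpid/messaging/Duration.h>

using namespace qpid::messaging;

int main() {
    Connection connection("localhost:5672");
    connection.open();
    Session session = connection.createSession();

    // Ring queue bound to the second headers exchange (amq.match is the
    // broker's pre-declared headers exchange). Queue name, size and the
    // matched header are placeholders.
    Receiver receiver = session.createReceiver(
        "interest-q; {create: always,"
        " node: {x-declare: {arguments:"
        "   {'qpid.policy_type': ring, 'qpid.max_size': 104857600}},"
        "  x-bindings: [{exchange: 'amq.match', key: 'interest',"
        "   arguments: {'x-match': 'all', 'data-type': 'foo'}}]}}");

    Message msg;
    while (receiver.fetch(msg, Duration::FOREVER)) {
        // ... transform and republish the message here ...
        session.acknowledge();
    }
    connection.close();
    return 0;
}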

I attached to the ingest process with gdb and it was stuck in send (waitForCapacity/waitForCompletionImpl), which I notice can block. However, given that the rest of the system is idle when this problem occurs, I can't understand why this would happen. I added a SIGALRM handler around send with a timeout of 30s and the process did sometimes get killed. Looking at qpid-tool it does still seem to be processing messages, just extremely slowly. My other observation, from netstat, is that the Send-Q of qpidd's socket to the ingest process is 16363, while the Recv-Q and Send-Q of the ingest process are both 0.
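
For what it's worth, the sending side is essentially the following (again only a sketch against the qpid::messaging C++ API; the diagnostic logging is something I could add, not what we run today):

#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Message.h>
#include <iostream>

using namespace qpid::messaging;

// Illustrative only: log the sender's capacity bookkeeping before each
// send, to see whether the block corresponds to the replay buffer being
// full (i.e. the broker not confirming earlier sends).
void sendWithDiagnostics(Sender& sender, const Message& msg) {
    std::cout << "capacity=" << sender.getCapacity()
              << " unsettled=" << sender.getUnsettled()
              << " available=" << sender.getAvailable() << std::endl;
    // send() blocks (waitForCapacity) once 'unsettled' reaches the
    // sender's capacity and earlier sends have not yet been completed
    // by the broker.
    sender.send(msg);
}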

Any ideas on what might be happening are very welcome!

Cheers,

Jimmy



Re: System stalling

Posted by Gordon Sim <gs...@redhat.com>.
On 08/21/2013 01:39 PM, Jimmy Jones wrote:
> I've got a simple processing system using the 0.22 C++ broker, all
> on one box. An external system posts messages to the default headers
> exchange, and an ingest process receives them via a ring queue,
> transforms them, and outputs to a different headers exchange.
> Various other processes pick messages of interest off that exchange
> using ring queues. Recently, however, the system has been stalling:
> I'm still receiving lots of data from the other system, but the
> ingest process suddenly drops to <5% CPU, its queue fills up and
> messages start getting discarded from the ring, the follow-on
> processes drop to practically 0% CPU, and qpidd hovers around
> 95-120% CPU (normally it's ~75%). The rest of the system goes
> pretty much idle (no swapping, and there is free memory).
>
> I attached to the ingest process with gdb and it was stuck in send
> (waitForCapacity/waitForCompletionImpl), which I notice can block.

Is there any queue bound to the second headers exchange, i.e. the one 
this ingest process is sending to, that is not a ring queue? (If you 
run qpid-config queues -r, you get a quick listing of the queues and 
their bindings.)

If there was a queue to which messages were enqueued that started to 
apply producer flow control, then that would block your ingest process 
(and since the messages are still coming in, the broker would spend 
all its time just removing old ones to make space).
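
If that does turn out to be the cause, one option would be to give that queue a ring limit policy as well, or to disable flow control on it explicitly. A rough sketch, assuming the queue is declared through a qpid::messaging address (the queue names and size below are placeholders):

#include <qpid/messaging/Session.h>
#include <qpid/messaging/Receiver.h>

using namespace qpid::messaging;

// Option 1: give the slow consumer's queue a ring limit policy, so old
// messages are discarded instead of flow control being applied to the
// producer.
Receiver declareRingQueue(Session& session) {
    return session.createReceiver(
        "slow-consumer-q; {create: always, node: {x-declare: {arguments:"
        " {'qpid.policy_type': ring, 'qpid.max_size': 104857600}}}}");
}

// Option 2: keep the existing limit policy but switch producer flow
// control off for this queue by zeroing the stop thresholds.
Receiver declareQueueWithoutFlowControl(Session& session) {
    return session.createReceiver(
        "slow-consumer-q2; {create: always, node: {x-declare: {arguments:"
        " {'qpid.flow_stop_count': 0, 'qpid.flow_stop_size': 0}}}}");
}

The same settings can be applied when creating the queue with qpid-config instead (e.g. --limit-policy ring).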

> However, given that the rest of the system is idle when this problem
> occurs, I can't understand why this would happen. I added a SIGALRM
> handler around send with a timeout of 30s and the process did
> sometimes get killed. Looking at qpid-tool it does still seem to be
> processing messages, just extremely slowly. My other observation,
> from netstat, is that the Send-Q of qpidd's socket to the ingest
> process is 16363, while the Recv-Q and Send-Q of the ingest process
> are both 0.
>
> Any ideas on what might be happening are very welcome!
