Posted to user@cassandra.apache.org by Girish Kumar <gi...@gmail.com> on 2013/08/26 11:39:21 UTC

Large # of Pending Tasks?

Hi,

I'm running a single-node Cassandra on a 24-core/48 GB system.  I have set my
writers and readers (concurrent_reads and concurrent_writes in cassandra.yaml)
to around 192 each.  In this mode, when stressed with 192 client threads doing
inserts and reads, I observed via JConsole ('client request') that the active
read and write task counts never go beyond 24, yet there are as many as 192
pending tasks.  I was expecting the Cassandra server to utilize the threads
configured in the yaml and perform that many parallel inserts and reads.

Why doesn't Cassandra's active task count go beyond 24 when the configured
writers and readers are around 192?

Any thoughts?
/BK
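The stage pools are ThreadPoolExecutor subclasses, and the ActiveCount
attribute maps to getActiveCount(), which counts tasks running at the instant
of sampling rather than the number of configured threads.  A minimal
standalone sketch (plain JDK, no Cassandra classes; the pool size and task
count are only illustrative) of why a large pool can still report a small
active count when individual tasks finish quickly:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ActiveCountDemo {
    public static void main(String[] args) throws Exception {
        // Pool sized like concurrent_reads/concurrent_writes = 192
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                192, 192, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        // Submit many short tasks: getActiveCount() reports only the tasks
        // executing right now, not the pool's configured capacity.
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> { /* fast task */ });
        }
        System.out.println("active <= core size: "
                + (pool.getActiveCount() <= pool.getCorePoolSize()));

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

If the per-task work is short relative to the sampling interval, ActiveCount
hovers near the number of tasks actually in flight, not near the pool size.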

Re: Large # of Pending Tasks?

Posted by Girish Kumar <gi...@gmail.com>.
I'm running "ReleaseVersion: 1.2.5-SNAPSHOT".

-------------------------------------------------------------------------------------------------

Well, I see that REQUEST_RESPONSE is capped at the processor count (code
below).  The foreground writers and readers are still set to what
cassandra.yaml says.

Maybe the pending tasks pile up because the response handlers are unable to
keep up with the foreground writer/reader tasks?  Comments?

Thanks,
/BK




stages.put(Stage.MUTATION,
        multiThreadedConfigurableStage(Stage.MUTATION, getConcurrentWriters()));
stages.put(Stage.READ,
        multiThreadedConfigurableStage(Stage.READ, getConcurrentReaders()));
stages.put(Stage.REQUEST_RESPONSE,
        multiThreadedStage(Stage.REQUEST_RESPONSE, FBUtilities.getAvailableProcessors()));
stages.put(Stage.INTERNAL_RESPONSE,
        multiThreadedStage(Stage.INTERNAL_RESPONSE, FBUtilities.getAvailableProcessors()));
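If FBUtilities.getAvailableProcessors() reduces to the JVM's reported
processor count (which is what the snippet above suggests), then on this
24-core box the response stages get 24 threads while the read/write stages
get the yaml's 192.  A hedged standalone sketch of that sizing (plain JDK,
no Cassandra classes; the yaml values are the ones described earlier in the
thread, and the class name is only illustrative):

```java
public class StageSizing {
    public static void main(String[] args) {
        // Assumed values from cassandra.yaml, per the thread
        int concurrentReaders = 192;   // concurrent_reads
        int concurrentWriters = 192;   // concurrent_writes

        // What FBUtilities.getAvailableProcessors() boils down to here
        int responseThreads = Runtime.getRuntime().availableProcessors();

        System.out.println("READ stage threads:             " + concurrentReaders);
        System.out.println("MUTATION stage threads:         " + concurrentWriters);
        System.out.println("REQUEST_RESPONSE stage threads: " + responseThreads);
        // On a 24-core machine the last line would print 24, matching the cap observed.
    }
}
```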


On Mon, Aug 26, 2013 at 8:36 PM, Nate McCall <na...@thelastpickle.com> wrote:

> Interesting - my understanding is that the only places where stages are
> capped at "available processors" have to do with clustering operations (see
> the static initializer in o.a.c.concurrent.StageManager).
>
> What version are you using, by the way? Most likely I'm missing something
> though.

Re: Large # of Pending Tasks?

Posted by Nate McCall <na...@thelastpickle.com>.
Interesting - my understanding is that the only places where stages are
capped at "available processors" have to do with clustering operations (see
the static initializer in o.a.c.concurrent.StageManager).

What version are you using, by the way? Most likely I'm missing something
though.


On Mon, Aug 26, 2013 at 9:36 AM, Girish Kumar <gi...@gmail.com>wrote:

> This is the 'Active Count' as observed on jconsole under
> 'org.apache.cassandra.request' ReadStage->Attributes.  Below details...
>
>
> ActiveCount -> 12
> ..
> Core Pool Size -> 192
> Core Threads -> 192
> CurrentlyBlockedTasks ->  0
> MaximumThreads -> 192
> PendingTasks -> 691
> TotalBlockedTasks -> 0
>
> I expected the active count to go up to 192, since that many threads are
> available to handle requests. Am I misreading something here?
>
> I observe that Active Count never goes beyond 24 while Pending Tasks
> keeps building up.
>
> So the question is: what is the relation between the active count and the
> maximum threads available in the pool for execution?
>
> Thanks,
> /BK

Re: Large # of Pending Tasks?

Posted by Girish Kumar <gi...@gmail.com>.
This is the 'Active Count' as observed on jconsole under
'org.apache.cassandra.request' ReadStage->Attributes.  Below details...


ActiveCount -> 12
..
Core Pool Size -> 192
Core Threads -> 192
CurrentlyBlockedTasks ->  0
MaximumThreads -> 192
PendingTasks -> 691
TotalBlockedTasks -> 0

I expected the active count to go up to 192, since that many threads are
available to handle requests.  Am I misreading something here?

I observe that Active Count never goes beyond 24 while Pending Tasks keeps
building up.

So the question is: what is the relation between the active count and the
maximum threads available in the pool for execution?

Thanks,
/BK
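These attributes can also be pulled programmatically rather than through the
JConsole UI.  A hedged sketch using the standard JMX remote API (the host and
port assume Cassandra's default JMX endpoint on localhost:7199; adjust for
your node, and note the attribute names are the ones listed above):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadStageProbe {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX endpoint; change host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // Same MBean JConsole shows under org.apache.cassandra.request
            ObjectName readStage = new ObjectName(
                    "org.apache.cassandra.request:type=ReadStage");
            System.out.println("ActiveCount  = "
                    + mbsc.getAttribute(readStage, "ActiveCount"));
            System.out.println("PendingTasks = "
                    + mbsc.getAttribute(readStage, "PendingTasks"));
        }
    }
}
```

Sampling these two attributes in a loop during the stress run makes it easy
to see whether ActiveCount ever rises while PendingTasks grows.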





On Mon, Aug 26, 2013 at 7:28 PM, Nate McCall <na...@thelastpickle.com> wrote:

> I'm not quite sure what you mean by 'active task count' - what is the
> output of 'nodetool tpstats' when you run this test?

Re: Large # of Pending Tasks?

Posted by Nate McCall <na...@thelastpickle.com>.
I'm not quite sure what you mean by 'active task count' - what is the
output of 'nodetool tpstats' when you run this test?


On Mon, Aug 26, 2013 at 4:39 AM, Girish Kumar <gi...@gmail.com>wrote:

> Hi,
>
> I'm running a single-node Cassandra on a 24-core/48 GB system.  I have set
> my writers and readers (concurrent_reads and concurrent_writes in
> cassandra.yaml) to around 192 each.  In this mode, when stressed with 192
> client threads doing inserts and reads, I observed via JConsole ('client
> request') that the active read and write task counts never go beyond 24,
> yet there are as many as 192 pending tasks.  I was expecting the Cassandra
> server to utilize the threads configured in the yaml and perform that many
> parallel inserts and reads.
>
> Why doesn't Cassandra's active task count go beyond 24 when the configured
> writers and readers are around 192?
>
> Any thoughts?
> /BK
>