Posted to commits@cassandra.apache.org by "Arya Goudarzi (JIRA)" <ji...@apache.org> on 2010/06/16 23:01:26 UTC

[jira] Commented: (CASSANDRA-1199) multiget_slice() calls using TBinaryProtocolAccelerated always take up to the TSocket->recvTimeout before returning results

    [ https://issues.apache.org/jira/browse/CASSANDRA-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12879492#action_12879492 ] 

Arya Goudarzi commented on CASSANDRA-1199:
------------------------------------------

This is a timing example for the above scenario with TSocket's default timeouts (a sketch of the measurement loop follows the figures):

100 Sequential Writes took: 0.4047749042511 seconds;
100 Sequential Reads took: 0.16357207298279 seconds;
100 Batch Read took: 0.77017998695374 seconds;
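
For reference, the two read patterns were timed roughly as in the sketch below. This is a minimal sketch, not the attached multiget_slice.php: the column family, row key and super column names are made up, the binding class names used here (ColumnParent, SlicePredicate, ConsistencyLevel) may differ from the generated code in your environment, and an already-open $client built on TBinaryProtocolAccelerated with set_keyspace() already called is taken as given.

    <?php
    // Sketch only -- assumes $client is a connected Cassandra Thrift client
    // using TBinaryProtocolAccelerated and that set_keyspace() was already called.
    $parent = new ColumnParent();
    $parent->column_family = 'Super1';   // hypothetical super column family
    $key = 'row1';                       // hypothetical row key

    // Pattern 1: 100 sequential multiget_slice() calls, one super column name each
    $start = microtime(true);
    for ($i = 0; $i < 100; $i++) {
        $pred = new SlicePredicate();
        $pred->column_names = array("sc$i");
        $client->multiget_slice(array($key), $parent, $pred, ConsistencyLevel::ONE);
    }
    printf("100 Sequential Reads took: %.5f seconds\n", microtime(true) - $start);

    // Pattern 2: one multiget_slice() call asking for all 100 super column names
    $names = array();
    for ($i = 0; $i < 100; $i++) {
        $names[] = "sc$i";
    }
    $pred = new SlicePredicate();
    $pred->column_names = $names;

    $start = microtime(true);
    $client->multiget_slice(array($key), $parent, $pred, ConsistencyLevel::ONE);
    printf("100 Batch Read took: %.5f seconds\n", microtime(true) - $start);

With the accelerated protocol the second figure tracks the socket's recvTimeout rather than the actual response time, which is the behaviour described in the issue below.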

> multiget_slice() calls using TBinaryProtocolAccelerated always take up to the TSocket->recvTimeout before returning results
> ---------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-1199
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1199
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 0.7
>         Environment: CentOS 5.2
> Cassandra Nightly Build June 11th
> Thrift trunk
> Using TBinaryProtocolAccelerated only!
>            Reporter: Arya Goudarzi
>         Attachments: multiget_slice.php
>
>
> I am comparing the following 
>    - Reading of 100 SuperColumns in 1 SCF row using multiget_slice() with 1 key and 1 column name in 100 loop iterations
>    - Reading of 100 SuperColumns in 1 SCF row using multiget_slice() with 1 key and 100 column names in a single call
> I always get a consistent result: the single call takes more time than the 100 sequential calls. After some investigation, it seemed that the time it takes to execute multiget_slice() with 100 column names is always close to TSocket->recvTimeout; increasing recvTimeout makes the call take that much longer before returning. After digging into TSocket->read() (TSocket.php line 261) and looking at some of the metadata from fread(), it seems that none of the buffer chunks gets the eof flag set to 1, so the stream waits until the timeout is reached.
> This only happens if TBinaryProtocolAccelerated (thrift_protocol.so) is used. 
> I have attached my code to reproduce this issue. You can adjust the timeouts to see how they affect the read call in multiget_slice().
> Please investigate, and move this to Thrift if it is not a Cassandra interface issue.
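
For anyone trying to reproduce this without the attachment, the connection setup under test looks roughly like the sketch below. The Thrift PHP classes (TSocket, TBufferedTransport, TBinaryProtocolAccelerated) are the library's own names; the host/port, keyspace and the generated CassandraClient binding name are assumptions, and the usual require_once lines for the Thrift library and generated code are omitted. Per the report above, raising setRecvTimeout() stretches the batch multiget_slice() call to roughly that value.

    <?php
    // Sketch only -- library includes omitted; generated binding names assumed.
    $socket = new TSocket('127.0.0.1', 9160);      // default Cassandra Thrift port
    $socket->setSendTimeout(1000);                 // milliseconds
    $socket->setRecvTimeout(5000);                 // ms -- the batch read reportedly blocks about this long

    $transport = new TBufferedTransport($socket, 1024, 1024);
    $protocol  = new TBinaryProtocolAccelerated($transport);
    $client    = new CassandraClient($protocol);   // generated binding name assumed

    $transport->open();
    $client->set_keyspace('Keyspace1');            // hypothetical keyspace
    // ... issue the sequential and batch reads as in the timing sketch above ...
    $transport->close();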

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.