Posted to user@thrift.apache.org by Oscar <ro...@gmail.com> on 2009/05/22 16:05:48 UTC

need client socket pool?

Hi all,

In our project we need to call RPCs from different threads.

// pseudocode

void init()
{
    // initialize the transport and protocol
}

void threadA()
{
    // call RPC a using the initialized client
}

void threadB()
{
    // call RPC b using the initialized client
}


The above code has a race condition, since both threads share the single initialized client.

My question is: should we lock the client directly, or build a client socket pool?

What's your opinion?

Re: need client socket pool?

Posted by Ben Taitelbaum <bt...@cs.oberlin.edu>.
I tried for a while (in Ruby) to make some thread-safe client
wrappers, but found that maintaining the synchronization logic (and
other logic as well; for example, at the time there were some issues
with clearing out buffers after a failed call) was more trouble than
it was worth. I ended up just opting to create a new connection on
every call, and for my purposes the slight performance hit was well
worth the reliability gain.
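The connection-per-call approach could be sketched like this in Python. The echo server below is just a stand-in for a real Thrift service, and the names are illustrative, not Thrift's actual API; the point is only that each call builds its own connection, so threads never share socket state.

```python
import socket
import threading

def run_echo_server(server_sock):
    # Stand-in for a real RPC server: accept a connection,
    # echo the first chunk of data back, then close it.
    while True:
        conn, _ = server_sock.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # pick a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

def call_rpc(payload: bytes) -> bytes:
    # A fresh connection per call: no connection state is ever
    # shared between threads, so no locking is needed.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(payload)
        return sock.recv(1024)

print(call_rpc(b"ping"))  # -> b'ping'
```

The cost is one TCP handshake (and, for Thrift, one transport open) per call, which is the "slight performance hit" traded for reliability.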

If you use just a single client, then threadB will have to wait for
threadA to finish, whereas using two clients will allow them to run in
parallel, which should be a performance boost. If you want to get
around this with just a single client, you can try the Twisted
support for the Python client (which uses the reactor pattern), or make
the calls to send_xxx and recv_xxx yourself (synchronizing the calls,
of course).
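The single-locked-client option could look something like this. LockedClient and FakeClient are hypothetical names; a real generated Thrift client would sit where FakeClient does.

```python
import threading

class FakeClient:
    """Stand-in for a generated Thrift client over one connection."""
    def add(self, a, b):
        return a + b

class LockedClient:
    """Serializes all calls on one shared underlying client."""
    def __init__(self, client):
        self._client = client
        self._lock = threading.Lock()

    def call(self, method, *args):
        # Only one thread may touch the underlying connection at a
        # time, so threadB waits while threadA's call is in flight.
        with self._lock:
            return getattr(self._client, method)(*args)

shared = LockedClient(FakeClient())
print(shared.call("add", 2, 3))  # -> 5
```

This is safe but fully serial, which is exactly the waiting Ben describes; two clients (or a pool) let the calls overlap.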

-Ben

On May 22, 2009, at 10:05 AM, Oscar wrote:

> [...]


Re: need client socket pool?

Posted by Bryan Duxbury <br...@rapleaf.com>.
In general, we tend to synchronize the client access. If your
application is sensitive to waiting, though, the client pool sounds
like a reasonable idea. I think that would be something cool to have
in the Thrift language libraries, if we could figure out a general
enough way to do it.
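A minimal sketch of such a pool, assuming a hypothetical FakeClient in place of a real generated Thrift client: a fixed set of clients is checked out and returned through a thread-safe queue, so callers only block when every client is busy.

```python
import queue

class FakeClient:
    """Stand-in for a generated Thrift client over one connection."""
    def echo(self, s):
        return s

class ClientPool:
    def __init__(self, size, factory):
        # Pre-build a fixed number of clients; queue.Queue is
        # thread-safe, so no extra locking is needed here.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def call(self, method, *args):
        client = self._pool.get()        # blocks if all clients are busy
        try:
            return getattr(client, method)(*args)
        finally:
            self._pool.put(client)       # always return it to the pool

pool = ClientPool(2, FakeClient)
print(pool.call("echo", "hi"))  # -> hi
```

With a pool of size N, up to N calls run in parallel and the (N+1)th waits, which splits the difference between one locked client and a new connection per call.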

-Bryan

On May 22, 2009, at 7:05 AM, Oscar wrote:

> [...]