Posted to user@cassandra.apache.org by Christopher Wirt <ch...@struq.com> on 2013/07/11 11:53:17 UTC

listen_address and rpc_address on different interfaces

Hello,


I was wondering if anyone has measured the performance improvement from
binding the listen address and the client (rpc) address to different interfaces?
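
For clarity, the split I have in mind is just the two standard cassandra.yaml
settings pointed at different NICs, along these lines (the addresses here are
only placeholders, not our real ones):

  # cassandra.yaml - illustrative addresses
  listen_address: 10.0.0.5     # internal NIC: gossip and inter-node streaming
  rpc_address: 192.168.1.5     # client-facing NIC: Thrift / native protocol traffic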


We have a 2Gbit connection serving both at the moment and it doesn't come
close to being saturated. But since we're very keen on fast reads at the 99th
percentile, we're interested in even the smallest improvements.


Next question - has anyone ever moved an existing node so that its listen
address and client access address are bound to different addresses?


Our Problem  

Currently our only address is a DNS entry, which we would like to keep bound
to the client access side.

If we were to take down a node, change the listen address, and re-join the
ring, the other nodes would mark the node as dead when we took it down and
assume we had a new node when we brought it back on a different address.

Lots of wasted rebalancing and compaction would start.

We use Cassandra 1.2.4 w/vnodes.

Not sure there will be any way around this.

So back to question one, am I wasting my time?


Thanks,

Chris


Re: listen_address and rpc_address on different interfaces

Posted by Robert Coli <rc...@eventbrite.com>.
On Thu, Jul 11, 2013 at 2:53 AM, Christopher Wirt <ch...@struq.com> wrote:

> If we were to take down a node, change the listen address, and re-join the
> ring, the other nodes would mark the node as dead when we took it down and
> assume we had a new node when we brought it back on a different address.
>
> Lots of wasted rebalancing and compaction would start.
>
> We use Cassandra 1.2.4 w/vnodes.
>

In theory you can:

1) stop cassandra
2) change ip/config/etc.
3) restart cassandra with auto_bootstrap=false in cassandra.yaml
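
Concretely, I'd expect the relevant cassandra.yaml entries on the moved node
to end up roughly like this (placeholder addresses; auto_bootstrap is the
setting from step 3):

  # cassandra.yaml on the node being moved - placeholder addresses
  listen_address: 10.0.0.5      # new, internal-only address for gossip/streaming
  rpc_address: 192.168.1.5      # unchanged client-facing address (the DNS entry)
  auto_bootstrap: false         # don't re-bootstrap; reuse the locally stored tokens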

I believe this should "just work": the node knows which tokens it is claiming
from the system keyspace, so it simply announces to the cluster that it is now
responsible for each of those ranges. The other nodes should just say "ok".
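
If you want to sanity-check it after the restart (assuming nodetool and cqlsh
are available on that box), something like:

  nodetool status    # the moved node should come back Up/Normal at its new address
  nodetool ring      # its existing token ranges should still be attributed to it

  # and in cqlsh, the tokens persisted in the system keyspace:
  cqlsh> SELECT tokens FROM system.local;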

If you do this, please let us know the results! Obviously you should try it
first on a non-production cluster...

> So back to question one, am I wasting my time?

My hunch is "probably" but it is just a hunch.

=Rob