Posted to user@cassandra.apache.org by Ramzi Rabah <rr...@playdom.com> on 2009/12/07 18:59:05 UTC

Re: Connecting to the cluster with failover (was: data modeling question)

                TSocket socket = new TSocket(hostName, port);
                TBinaryProtocol binaryProtocol = new TBinaryProtocol(socket, false, false);
                Cassandra.Client client = new Cassandra.Client(binaryProtocol);
                socket.open();
                Map<String,String> tokenToHostMap = (Map<String,String>) new JSONTokener(
                        client.get_string_property(CassandraServer.TOKEN_MAP)).nextValue();

This will return the full list of servers in the cluster (both up and down).

You will obviously need to connect to a live node in the cluster to be
able to run this.
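
For what it's worth, here is a rough sketch of wrapping that in a simple
failover loop: try each candidate host in turn until one accepts the
connection, then walk the returned JSON into a map. This is only
illustrative: the class and method names and the host list are made up; it
assumes the pre-0.7 Thrift API (get_string_property) and org.json on the
classpath; and it builds the map by iterating the JSONObject rather than
relying on the cast.

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    import org.apache.cassandra.service.Cassandra;       // package may differ by Cassandra version
    import org.apache.cassandra.service.CassandraServer;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.json.JSONObject;

    public class TokenMapFetcher {
        // Try each candidate host until one connection succeeds and returns the token map.
        public static Map<String, String> fetchTokenMap(String[] hosts, int port) {
            for (String host : hosts) {
                TSocket socket = new TSocket(host, port);
                try {
                    socket.open();
                    Cassandra.Client client = new Cassandra.Client(
                            new TBinaryProtocol(socket, false, false));
                    String json = client.get_string_property(CassandraServer.TOKEN_MAP);
                    // Walk the JSON object (token -> host) instead of casting it to a Map.
                    JSONObject obj = new JSONObject(json);
                    Map<String, String> tokenToHost = new HashMap<String, String>();
                    for (Iterator<?> keys = obj.keys(); keys.hasNext(); ) {
                        String token = (String) keys.next();
                        tokenToHost.put(token, obj.getString(token));
                    }
                    return tokenToHost;
                } catch (Exception e) {
                    // This node is down or unreachable; fall through and try the next one.
                } finally {
                    if (socket.isOpen()) {
                        socket.close();
                    }
                }
            }
            throw new RuntimeException("no live Cassandra node found");
        }
    }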


On Mon, Dec 7, 2009 at 6:31 AM, Mark Robson <ma...@gmail.com> wrote:
>
>
> 2009/12/3 Coe, Robin <ro...@bluecoat.com>
>>
>> So, considering that I currently have to take down a node to make a CF
>> change, I'm wondering how to perform automatic failover from my application?
>>  Is there a mechanism by which I can request from Cassandra all the
>> destination IP:ports for the nodes in a cluster, so I can adapt dynamically?
>>  For example, if I ramp up/down Cassandra instances based on server load, I
>> would like my application to automatically know what servers are available,
>> to execute automatic reconnection when the node I'm connected to goes down.
>
> This is a bigger question and one which would merit some general
> discussion.
>
> Nodes will need to be taken down, not just for CF changes but any other
> operational reason, during which time you won't want to have an outage.
>
> Applications querying data will still need to be able to do so, and those
> inserting will also need to be able to continue to insert (or handle a
> backlog, if that is acceptable to the end-users).
>
> My suggestions are:
>
> 1. Use an IP-layer load balancer like LVS and have the servers add/remove
> themselves from the pool as they come up or go down
> 2. Have all your app servers also be Cassandra nodes, and always connect
> locally. If the local Cassandra instance is unhealthy, remove the whole app
> server from the LVS pool.
>
> Of course you don't need to use LVS; any other IP-based load balancer would
> do. And of course, Cassandra itself needs a fixed, unchanging address per
> node, so it would need to make sure it doesn't use the load balancer's
> virtual address for that.
>
> In the event of a normal (i.e. administrative) shutdown, the admin could
> manually set the node down before doing the maintenance.
>
> I did some work on an experimental load balancer I call "Fluffy Cluster"
> here:
>
> http://code.google.com/p/fluffy-linux-cluster/
>
> This is not production-ready yet but could be useful.
>
> Mark
>
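
For Mark's suggestion (2) above, the health check on each app server could
be as small as opening a Thrift connection to the local node and making one
cheap call; if that fails, the app server takes itself out of the LVS pool.
A rough sketch only (it assumes the pre-0.7 get_string_property API, and how
you actually leave the pool is up to your load balancer setup):

    import org.apache.cassandra.service.Cassandra;       // package may differ by Cassandra version
    import org.apache.cassandra.service.CassandraServer;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;

    public class LocalNodeHealthCheck {
        // Returns true if the Cassandra instance on this machine answers a trivial request.
        public static boolean localNodeIsHealthy(int port) {
            TSocket socket = new TSocket("127.0.0.1", port);
            try {
                socket.open();
                Cassandra.Client client = new Cassandra.Client(
                        new TBinaryProtocol(socket, false, false));
                client.get_string_property(CassandraServer.TOKEN_MAP);   // any cheap call will do
                return true;
            } catch (Exception e) {
                return false;   // down or not answering: remove this app server from the pool
            } finally {
                if (socket.isOpen()) {
                    socket.close();
                }
            }
        }
    }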

Re: Connecting to the cluster with failover (was: data modeling question)

Posted by Jonathan Ellis <jb...@gmail.com>.
No pointers, just the example code in contrib I linked.

On Mon, Dec 7, 2009 at 1:46 PM, Mark Robson <ma...@gmail.com> wrote:
> 2009/12/7 Jonathan Ellis <jb...@gmail.com>
>>
>> Gary Dusbabek already did this, only better:
>> https://issues.apache.org/jira/browse/CASSANDRA-535,
>> http://issues.apache.org/jira/browse/CASSANDRA-596
>>
>
> So is there now support in trunk for a "remote clients api" version of
> Cassandra? If so, are there any pointers on how to use it?
>
> Cheers
>
> Mark
>

Re: Connecting to the cluster with failover (was: data modeling question)

Posted by Mark Robson <ma...@gmail.com>.
2009/12/7 Jonathan Ellis <jb...@gmail.com>

> Gary Dusbabek already did this, only better:
> https://issues.apache.org/jira/browse/CASSANDRA-535,
> http://issues.apache.org/jira/browse/CASSANDRA-596
>
>
So is there now support in trunk for a "remote clients api" version of
Cassandra? If so, are there any pointers on how to use it?

Cheers

Mark

Re: Connecting to the cluster with failover (was: data modeling question)

Posted by Jonathan Ellis <jb...@gmail.com>.
Gary Dusbabek already did this, only better:
https://issues.apache.org/jira/browse/CASSANDRA-535,
http://issues.apache.org/jira/browse/CASSANDRA-596

On Mon, Dec 7, 2009 at 1:40 PM, Mark Robson <ma...@gmail.com> wrote:
> 2009/12/7 Ramzi Rabah <rr...@playdom.com>
>>
>>                TSocket socket = new TSocket(hostName, port);
>>                TBinaryProtocol binaryProtocol = new TBinaryProtocol(socket, false, false);
>>                Cassandra.Client client = new Cassandra.Client(binaryProtocol);
>>                socket.open();
>>                Map<String,String> tokenToHostMap = (Map<String,String>) new JSONTokener(
>>                        client.get_string_property(CassandraServer.TOKEN_MAP)).nextValue();
>>
>> This will return the full list of servers in the cluster (both up and down).
>>
>> You will obviously need to connect to a live node in the cluster to be
>> able to run this.
>
> Right, and an application could connect to a known live node (seed node,
> etc) periodically and store the result (retaining the previous values if it
> was unable to connect).
>
> Polling like that still wouldn't solve the problem of being able to connect
> to a node which is available *right now*. To do that reliably and with
> minimal latency you'd need something Cassandra doesn't easily give you: a
> load-balancer / high-availability setup.
>
> Personally I'd like to see Cassandra implement a "front-end-only" node,
> which could run as a Thrift protocol server but not join the ring itself,
> and hence not require persistent storage. This would mean that app servers
> could run a local front-end-only server and just talk to that.
>
> Mark
>

Re: Connecting to the cluster with failover (was: data modeling question)

Posted by Mark Robson <ma...@gmail.com>.
2009/12/7 Ramzi Rabah <rr...@playdom.com>

>                TSocket socket = new TSocket(hostName, port);
>                TBinaryProtocol binaryProtocol = new TBinaryProtocol(socket, false, false);
>                Cassandra.Client client = new Cassandra.Client(binaryProtocol);
>                socket.open();
>                Map<String,String> tokenToHostMap = (Map<String,String>) new JSONTokener(
>                        client.get_string_property(CassandraServer.TOKEN_MAP)).nextValue();
>
> This will return the full list of servers in the cluster (both up and down).
>
> You will obviously need to connect to a live node in the cluster to be
> able to run this.
>

Right, and an application could connect to a known live node (seed node,
etc) periodically and store the result (retaining the previous values if it
was unable to connect).
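
Roughly the kind of thing I mean, as a sketch only: refresh the host list on
a timer and keep whatever was last fetched if a refresh fails. The fetcher
callback stands in for however you actually pull the token map from a live
node (e.g. Ramzi's snippet), and the 60-second interval is arbitrary.

    import java.util.Collections;
    import java.util.Map;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    public class HostListCache {
        // Last token -> host map we managed to fetch; kept as-is if a refresh fails.
        private final AtomicReference<Map<String, String>> tokenToHost =
                new AtomicReference<Map<String, String>>(Collections.<String, String>emptyMap());
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final Callable<Map<String, String>> fetcher;

        // 'fetcher' is however you pull the token map from a live node.
        public HostListCache(Callable<Map<String, String>> fetcher) {
            this.fetcher = fetcher;
        }

        public void start() {
            scheduler.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    try {
                        tokenToHost.set(fetcher.call());
                    } catch (Exception e) {
                        // Couldn't reach any node right now: retain the previous values.
                    }
                }
            }, 0, 60, TimeUnit.SECONDS);
        }

        // Whatever we have at the moment; possibly stale, never null.
        public Map<String, String> current() {
            return tokenToHost.get();
        }
    }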

Polling like that still wouldn't solve the problem of being able to connect
to a node which is available *right now*. To do that reliably and with
minimal latency you'd need something Cassandra doesn't easily give you: a
load-balancer / high-availability setup.

Personally I'd like to see Cassandra implement a "front-end-only" node,
which could run as a Thrift protocol server but not join the ring itself,
and hence not require persistent storage. This would mean that app servers
could run a local front-end-only server and just talk to that.

Mark