Posted to commits@cassandra.apache.org by "Tyler Hobbs (JIRA)" <ji...@apache.org> on 2015/09/01 17:20:47 UTC

[jira] [Comment Edited] (CASSANDRA-10052) Bringing one node down, makes the whole cluster go down for a second

    [ https://issues.apache.org/jira/browse/CASSANDRA-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725534#comment-14725534 ] 

Tyler Hobbs edited comment on CASSANDRA-10052 at 9/1/15 3:20 PM:
-----------------------------------------------------------------

bq. I'm not sure why we should skip notifications altogether rather than still sending them using the Gossip endpoint address, in a similar way to what CASSANDRA-5899 does when rpc_address is set to 0.0.0.0? Tyler Hobbs any thoughts?

The python driver uniquely identifies nodes by their {{rpc_address}} (or {{broadcast_rpc_address}}).  I'm looking into what the other drivers do, but I'm guessing it's the same (EDIT: they are the same).  If we send a notification with the {{listen}}/{{broadcast_address}}, the drivers are likely to ignore it because the address won't be recognized.
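
To illustrate (a rough sketch only, not actual driver code; the {{Host}} type, map, and handler name here are hypothetical):

{code}
// Hypothetical driver-side handling, for illustration: hosts are keyed by
// their rpc_address, so an event that arrives carrying an unrecognized
// address (e.g. a broadcast_address) matches nothing and is silently dropped.
private final Map<InetAddress, Host> hostsByRpcAddress = new ConcurrentHashMap<>();

void onStatusChangeEvent(InetAddress eventAddress, boolean isUp)
{
    Host host = hostsByRpcAddress.get(eventAddress);
    if (host == null)
        return; // unknown address: the notification is effectively lost

    if (isUp)
        host.markUp();
    else
        host.markDown();
}
{code}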

Ultimately, I think the drivers may need to move to uniquely representing hosts by their {{broadcast_address}} to handle this kind of setup better.  That's a little tricky, because the initial contact points and load balancing policies all currently work on rpc addresses.  To help support this on the C* side, we should consider sending both the {{rpc_address}} and {{broadcast_address}} in push notifications.
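
For example (a sketch of the proposal only; the three-argument {{nodeDown}} overload below does not exist today):

{code}
// Sketch of the proposal, not existing code: a nodeDown event carrying both
// addresses, so a driver can match on whichever one it keys hosts by.
public void onDown(InetAddress endpoint)
{
    server.connectionTracker.send(
        Event.StatusChange.nodeDown(getRpcAddress(endpoint), // what drivers key on today
                                    endpoint,                // gossip/broadcast address
                                    server.socket.getPort()));
}
{code}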

bq. I see. Sounds like we should just special-case it and not send anything from onDown if a peer listening on localhost goes down.

I think this would work okay if we change the condition a bit.  Instead, don't send anything from {{onDown}} if a peer with the same ({{broadcast_}}){{rpc_address}} that we have goes down.  That _should_ only happen when the cluster is set up like this, but will still allow setups like ccm's to work normally.
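
Roughly (a sketch of the adjusted guard; the {{getBroadcastRpcAddress()}} helper is assumed here for illustration):

{code}
// Rough sketch of the adjusted condition in Server's event notifier.
public void onDown(InetAddress endpoint)
{
    InetAddress peerRpcAddress = getRpcAddress(endpoint);
    // If the downed peer advertises the same (broadcast_)rpc_address as this
    // node, a notification would point clients at ourselves, so skip it.
    if (peerRpcAddress.equals(FBUtilities.getBroadcastRpcAddress()))
        return;

    server.connectionTracker.send(Event.StatusChange.nodeDown(peerRpcAddress, server.socket.getPort()));
}
{code}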



> Bringing one node down, makes the whole cluster go down for a second
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-10052
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10052
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sharvanath Pathak
>            Assignee: Stefania
>              Labels: client-impacting
>             Fix For: 2.1.x, 2.2.x
>
>
> When a node goes down, the other nodes learn about it through gossip,
> and I do see the log from Gossiper.java:
> {code}
> private void markDead(InetAddress addr, EndpointState localState)
> {
>     if (logger.isTraceEnabled())
>         logger.trace("marking as down {}", addr);
>     localState.markDead();
>     liveEndpoints.remove(addr);
>     unreachableEndpoints.put(addr, System.nanoTime());
>     logger.info("InetAddress {} is now DOWN", addr);
>     for (IEndpointStateChangeSubscriber subscriber : subscribers)
>         subscriber.onDead(addr, localState);
>     if (logger.isTraceEnabled())
>         logger.trace("Notified " + subscribers);
> }
> {code}
> It logs "InetAddress 192.168.101.1 is now DOWN" in Cassandra's system log.
> Now, on all the other nodes, the client side (Java driver) says "Cannot connect to any host, scheduling retry in 1000 milliseconds". The clients eventually do reconnect, but some queries fail during this interval.
> To me it seems that when the server pushes the nodeDown event, it calls getRpcAddress(endpoint) and thus sends localhost as the address in the nodeDown event,
> as in org.apache.cassandra.transport.Server:
> {code}
> public void onDown(InetAddress endpoint)
> {
>     server.connectionTracker.send(Event.StatusChange.nodeDown(getRpcAddress(endpoint), server.socket.getPort()));
> }
> {code}
> getRpcAddress() returns localhost for any endpoint when cassandra.yaml configures rpc_address as localhost (which, by the way, is the default).
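>
> A paraphrased sketch of that behavior (not the exact source; the lookup shown is assumed):
> {code}
> // Paraphrase for illustration: the rpc_address each peer gossips is returned
> // as-is, so when every node sets rpc_address: localhost, every endpoint maps
> // to 127.0.0.1. Only a 0.0.0.0 rpc_address falls back to the gossip endpoint
> // (CASSANDRA-5899).
> private InetAddress getRpcAddress(InetAddress endpoint) throws UnknownHostException
> {
>     InetAddress rpcAddress = InetAddress.getByName(StorageService.instance.getRpcaddress(endpoint));
>     return rpcAddress.isAnyLocalAddress() ? endpoint : rpcAddress;
> }
> {code}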


