Posted to commits@cassandra.apache.org by "Stefania (JIRA)" <ji...@apache.org> on 2015/09/01 11:43:46 UTC

[jira] [Commented] (CASSANDRA-10052) Bringing one node down, makes the whole cluster go down for a second

    [ https://issues.apache.org/jira/browse/CASSANDRA-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725082#comment-14725082 ] 

Stefania commented on CASSANDRA-10052:
--------------------------------------

Our strategy for testing push notifications relies mostly on distributed tests (dtests), where we start multiple nodes on the same box by overriding {{rpc_address}} to 127.0.0.1, 127.0.0.2 and so forth, which is why this problem went undetected. I managed to create a [dtest|https://github.com/stef1927/cassandra-dtest/commits/10052] that reproduces it nonetheless, albeit in a slightly contrived way: I had to create 3 nodes, make sure the node on 127.0.0.1 is stopped, change the address of 127.0.0.3 to localhost (which clashes with 127.0.0.1), and then check the notifications sent by 127.0.0.2 about 127.0.0.3.

I'm not sure why we should skip the notifications altogether rather than send them with the Gossip endpoint address, similarly to what CASSANDRA-5899 does when {{rpc_address}} is set to 0.0.0.0. [~thobbs], any thoughts?

Here is a tentative [2.1 patch|https://github.com/stef1927/cassandra/commits/10052-2.1]. It changes the address to the Gossip endpoint address, but we could just as easily suppress the notifications.
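To make the idea concrete, here is a rough standalone sketch of the fallback logic, not the actual patch: the class and method names are made up for illustration, and only the loopback check mirrors what the patch is meant to do.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch (illustrative names, not the real patch): when the
// configured rpc_address resolves to a loopback address that several nodes
// would share, fall back to the Gossip endpoint address so that clients
// receive an unambiguous address in the nodeDown event.
public class NotificationAddress
{
    public static InetAddress addressToSend(InetAddress endpoint, InetAddress rpcAddress)
    {
        // A loopback rpc_address (e.g. "localhost" -> 127.0.0.1) is the same
        // on every node, so prefer the Gossip endpoint address instead.
        if (rpcAddress.isLoopbackAddress())
            return endpoint;
        return rpcAddress;
    }

    public static void main(String[] args) throws UnknownHostException
    {
        InetAddress endpoint = InetAddress.getByName("192.168.101.1");
        InetAddress rpc = InetAddress.getByName("127.0.0.1");
        // With rpc_address left at the default "localhost", the Gossip
        // endpoint address is sent instead of 127.0.0.1.
        System.out.println(addressToSend(endpoint, rpc).getHostAddress());
    }
}
```

The same check would equally support suppressing the notification instead of rewriting the address, which is the alternative discussed above.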

[~jbellis] is this required for 2.0?

> Bringing one node down, makes the whole cluster go down for a second
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-10052
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10052
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sharvanath Pathak
>            Assignee: Stefania
>            Priority: Critical
>
> When a node goes down, the other nodes learn that through the gossip.
> And I do see the log from (Gossiper.java):
> {code}
> private void markDead(InetAddress addr, EndpointState localState)
>    {
>        if (logger.isTraceEnabled())
>            logger.trace("marking as down {}", addr);
>        localState.markDead();
>        liveEndpoints.remove(addr);
>        unreachableEndpoints.put(addr, System.nanoTime());
>        logger.info("InetAddress {} is now DOWN", addr);
>        for (IEndpointStateChangeSubscriber subscriber : subscribers)
>            subscriber.onDead(addr, localState);
>        if (logger.isTraceEnabled())
>            logger.trace("Notified " + subscribers);
>    }
> {code}
> Saying "InetAddress 192.168.101.1 is now DOWN" in Cassandra's system log.
> Now on all the other nodes the client side (java driver) says "Cannot connect to any host, scheduling retry in 1000 milliseconds". They eventually do reconnect, but some queries fail during this intermediate period.
> To me it seems like when the server pushes the nodeDown event, it calls getRpcAddress(endpoint), and thus sends localhost as the address in the nodeDown event.
> As in org.apache.cassandra.transport.Server.java:
> {code}
> public void onDown(InetAddress endpoint)
> {
>     server.connectionTracker.send(Event.StatusChange.nodeDown(getRpcAddress(endpoint), server.socket.getPort()));
> }
> {code}
> getRpcAddress returns localhost for any endpoint if cassandra.yaml sets rpc_address to localhost (which, by the way, is the default).
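The lookup described in the quoted report can be illustrated with a small standalone sketch (not Cassandra code; the helper below merely stands in for the per-endpoint rpc_address resolution): because the configured value "localhost" resolves to a loopback address regardless of which remote endpoint went down, every client is told that the loopback address is down.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch: with rpc_address: localhost, the resolved address is
// the local loopback no matter which endpoint the notification is about.
public class RpcAddressDemo
{
    // Stand-in for the per-endpoint rpc_address lookup: when no broadcast
    // rpc address is known for the endpoint, the locally configured value
    // is resolved and returned.
    static String getRpcAddress(InetAddress endpoint, String configuredRpcAddress)
            throws UnknownHostException
    {
        return InetAddress.getByName(configuredRpcAddress).getHostAddress();
    }

    public static void main(String[] args) throws UnknownHostException
    {
        InetAddress down = InetAddress.getByName("192.168.101.1");
        // Prints a loopback address, not 192.168.101.1: the nodeDown event
        // carries the wrong address for the node that actually went down.
        System.out.println(getRpcAddress(down, "localhost"));
    }
}
```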



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)