Posted to dev@sling.apache.org by "Timothee Maret (JIRA)" <ji...@apache.org> on 2015/05/06 18:40:03 UTC

[jira] [Commented] (SLING-4627) TOPOLOGY_CHANGED in an eventually consistent repository

    [ https://issues.apache.org/jira/browse/SLING-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530846#comment-14530846 ] 

Timothee Maret commented on SLING-4627:
---------------------------------------

IIUC, this would mean that whenever the topology changes at a point in time X, each remaining instance in the topology would write a syncToken to the repository. Each instance would then deem the data replicated up to point X once it sees most (or maybe all) of the remaining instances' syncTokens.
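To make sure I picture this correctly, here is a rough sketch of how I read the write/wait part (the class name and API usage are purely illustrative and not taken from the attached patch; only the token path follows the layout suggested below):

{code:java}
// Illustrative sketch only - each survivor writes its own syncToken for the
// new view and then polls until the tokens of all other survivors are visible.
import java.util.Set;
import org.apache.sling.api.resource.PersistenceException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceUtil;

public class SyncTokenCheck {

    private static final String BASE = "/var/discovery/oak/clusterInstances";

    /** Called by each surviving instance when the topology changes at point X. */
    void writeSyncToken(ResourceResolver resolver, String slingId, String newViewId)
            throws PersistenceException {
        String path = BASE + "/" + slingId + "/syncTokens/" + newViewId;
        ResourceUtil.getOrCreateResource(resolver, path,
                "sling:Folder", "sling:Folder", false);
        resolver.commit();
    }

    /** Data is deemed replicated up to point X once the syncTokens of all
     *  surviving instances are visible to this instance. */
    boolean allSurvivorsSynced(ResourceResolver resolver, Set<String> survivorSlingIds,
            String newViewId) {
        resolver.refresh(); // pick up changes propagated through the repository
        for (String slingId : survivorSlingIds) {
            Resource token = resolver.getResource(
                    BASE + "/" + slingId + "/syncTokens/" + newViewId);
            if (token == null) {
                return false; // that survivor's token (and possibly its data) not seen yet
            }
        }
        return true;
    }
}
{code}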
If this understanding is correct, I'd have a few questions:

1. How can this mechanism guarantee that the data written by the instance that crashed at point X is replicated, given that the crashed instance can't write its own syncToken? If that instance was the leader, this may be particularly problematic.

2. How would the syncToken be reused in consecutive consistency checks? Couldn't some instances see an old syncToken and consider it still valid?


I don't know the details of Oak very well, but maybe there is a queue of data to be replicated somewhere. Getting a handle on this queue may offer the guarantee that data has been replicated up to the point in time X. Assuming such a queue exists, each instance could write a piece of data at time X and wait until it sees it leave the queue (or appear in the Journal). This would allow each instance to care only about itself, as sketched below.
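Something along these lines is what I have in mind; note that the {{BacklogJournal}} interface below is entirely hypothetical, since I don't know what Oak actually exposes here:

{code:java}
// Hypothetical sketch: write a marker at point X and wait until the repository
// reports it as replicated. Everything written before the marker is then
// assumed to be replicated as well, so each instance only depends on itself.
import java.util.concurrent.TimeUnit;

interface BacklogJournal {
    /** true once the given marker has left the replication queue / hit the journal */
    boolean isReplicated(String markerId);
}

public class SelfSyncCheck {

    boolean waitForOwnMarker(BacklogJournal journal, String markerId, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (journal.isReplicated(markerId)) {
                return true; // data up to point X has propagated
            }
            TimeUnit.MILLISECONDS.sleep(100);
        }
        return false; // give up; replication took longer than expected
    }
}
{code}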

> TOPOLOGY_CHANGED in an eventually consistent repository
> -------------------------------------------------------
>
>                 Key: SLING-4627
>                 URL: https://issues.apache.org/jira/browse/SLING-4627
>             Project: Sling
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Stefan Egli
>            Assignee: Stefan Egli
>            Priority: Critical
>         Attachments: SLING-4627.patch, SLING-4627.patch
>
>
> This is a parent ticket describing the +coordination effort needed to properly send TOPOLOGY_CHANGED when running on top of an eventually consistent repository+. These findings are independent of the implementation details used inside the discovery implementation, so they apply to discovery.impl, discovery.etcd/.zookeeper/.oak etc. Tickets to implement this for specific implementations are best created separately (e.g. as sub-tasks or related issues). Also note that this assumes immediately sending TOPOLOGY_CHANGING as described [in SLING-3432|https://issues.apache.org/jira/browse/SLING-3432?focusedCommentId=14492494&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14492494].
> h5. The spectrum of possible TOPOLOGY_CHANGED events includes the following scenarios:
> || scenario || classification || action ||
> | A. change is completely outside of local cluster | (/) uncritical | changes outside the cluster are considered uncritical for this exercise. |
> | B. a new instance joins the local cluster; by contract this new instance is not the leader (leader must be stable \[0\]) | (/) uncritical | a join of an instance is uncritical because it merely joins the cluster and thus has no 'backlog' of changes that might still be propagating through the (eventually consistent) repository. |
> | C. a non-leader *leaves* the local cluster | (x) *critical* | changes that were written by the leaving instance might not yet be *seen* by all surviving instances (i.e. discovery can be faster than the repository), and this must be ensured before sending out TOPOLOGY_CHANGED. This is because the leaving instance could have written changes that are *topology dependent*, so those changes must first settle in the repository before continuing with a *new topology*. |
> | D. the leader *leaves* the local cluster (and thus a new leader is elected) | (x)(x) *very critical* | same as C, except more critical because it is the leader that left |
> | E. -the leader of the local cluster changes (without leaving)- this is not supported by contract (leader must be stable \[0\]) | (/) -irrelevant- | |
> So both C and D are about an instance leaving, and as mentioned above the survivors must ensure they have read all changes of the leaver. There are two parts to this:
> * the leaver could have pending writes that are not yet in MongoDB: I don't think this is the case. The only thing that could remain is an uncommitted branch, and that would be rolled back afaik.
> ** The exception to this is a partition, where the leaver didn't actually crash but is still hooked up to the repository. *I'm not sure yet how this can be solved*.
> * the survivors could, however, not yet have read all changes (pending in the background read). One way to make sure they did is to have each surviving instance write a (pseudo-) sync token to the repository. Once all survivors have seen this sync token from all other survivors, the assumption is that all pending changes have been "flushed" through the eventually consistent repository and that it is safe to send out a TOPOLOGY_CHANGED event.
> * this sync token must be *conflict free* and could be e.g. {{/var/discovery/oak/clusterInstances/<slingId>/syncTokens/<newViewId>}} - where {{newViewId}} is defined by whatever discovery mechanism is used
> * a special case is when only one instance remains. It cannot then wait for any other survivor to send a sync token, so sync tokens alone would not work. All it could possibly do is wait for a certain time (which should be larger than any expected background-read duration)
> [~mreutegg], [~chetanm] can you pls confirm/comment on the above "flush/sync token" approach? Thx!
> /cc [~marett]
> \[0\] - see [getLeader() in ClusterView|https://github.com/apache/sling/blob/trunk/bundles/extensions/discovery/api/src/main/java/org/apache/sling/discovery/ClusterView.java]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)