Posted to solr-user@lucene.apache.org by GIROLAMI Philippe <ph...@cegedim.fr> on 2012/11/21 15:11:50 UTC

[SolrCloud] is softcommit cluster-wide for the collection ?

Hello,
We're working on integrating SolrCloud and we're wondering whether issuing a softCommit via SolrJ forces the soft commit:

a) only on the receiving core or
b) to the whole cluster, i.e. the receiving core forwards the soft commit to all replicas.

If the answer is a), what is the best practice to ensure data is indeed committed cluster-wide?
If the answer is b), what would happen on a 1-replica setup if one commit succeeded and the replica commit failed?
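For reference, a sketch of what the soft commit we send boils down to at the HTTP level (the host and collection name are placeholders, not our real setup; via SolrJ we go through UpdateRequest.setAction(ACTION.COMMIT, waitFlush, waitSearcher, softCommit) rather than building the URL by hand):

```java
// Sketch only: the update URL a SolrJ soft commit is equivalent to.
// Host and collection name below are placeholders.
public class SoftCommitUrl {
    static String softCommitUrl(String baseUrl, String collection) {
        // softCommit=true asks Solr to open a new searcher so recent
        // documents become visible, without flushing segments to disk.
        return baseUrl + "/" + collection + "/update?softCommit=true&waitSearcher=true";
    }

    public static void main(String[] args) {
        System.out.println(softCommitUrl("http://localhost:8983/solr", "collection1"));
    }
}
```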

Thanks
Philippe Girolami

Re: [SolrCloud] is softcommit cluster-wide for the collection ?

Posted by Mark Miller <ma...@gmail.com>.
On Nov 21, 2012, at 11:00 AM, GIROLAMI Philippe <ph...@cegedim.fr> wrote:

> Hi Mark,
> Thanks for the details
>>> If the answer is b), what would happen on a 1-replica setup if one commit succeeded and the replica commit failed  ?
>> What's the reason the commit failed? It would have to be a really bad problem: that node will need to be restarted, and it will either not answer requests or
>> be asked by the leader to recover when an update sent to it fails.
> Something dumb like a full disk, for example. So I understand that the leader for the shard writes to the transaction log, which means that, in the worst case, if it crashes and does not lose disk data, it will replay it. And if "slaves" crash, they will get the transaction log from the leader.
> Is this right?

All of the nodes have their own transaction log. When a node comes back up, it replays its local transaction log. Then it contacts the leader and compares versions: if they match, it's all good; if not, it recovers from the leader. If the node is itself the leader, it just replays its own local transaction log.
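In pseudocode terms, that startup decision looks something like this (a conceptual sketch only, not Solr's actual code - the method name and the single version number per node are simplifications):

```java
// Conceptual sketch of the startup recovery decision described above.
// NOT Solr's real implementation; versions are simplified to one long.
public class RecoverySketch {
    static String onStartup(boolean isLeader, long myVersion, long leaderVersion) {
        // Every node first replays its own local transaction log.
        StringBuilder steps = new StringBuilder("replay local tlog");
        if (isLeader) {
            // The leader trusts its own log and is done.
            return steps.toString();
        }
        // A replica then compares versions with the leader.
        if (myVersion == leaderVersion) {
            steps.append("; versions match");
        } else {
            // Mismatch: recover (replicate the index) from the leader.
            steps.append("; recover from leader");
        }
        return steps.toString();
    }

    public static void main(String[] args) {
        // A replica that came back up behind the leader:
        System.out.println(onStartup(false, 41L, 42L));
    }
}
```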

- Mark

RE: [SolrCloud] is softcommit cluster-wide for the collection ?

Posted by GIROLAMI Philippe <ph...@cegedim.fr>.
Hi Mark,
Thanks for the details
>> If the answer is b), what would happen on a 1-replica setup if one commit succeeded and the replica commit failed  ?
>What's the reason the commit failed? It would have to be a really bad problem: that node will need to be restarted, and it will either not answer requests or
>be asked by the leader to recover when an update sent to it fails.
Something dumb like a full disk, for example. So I understand that the leader for the shard writes to the transaction log, which means that, in the worst case, if it crashes and does not lose disk data, it will replay it. And if "slaves" crash, they will get the transaction log from the leader.
Is this right?

>Because commits are not required for durability, it's probably not the issue that you think.
Sure looks like it!

Thanks

Re: [SolrCloud] is softcommit cluster-wide for the collection ?

Posted by Mark Miller <ma...@gmail.com>.
On Nov 21, 2012, at 9:11 AM, GIROLAMI Philippe <ph...@cegedim.fr> wrote:

> Hello,
> We're working on integrating SolrCloud and we're wondering whether issuing a softCommit via SolrJ forces the soft commit:
> 
> a) only on the receiving core or
> b) to the whole cluster, i.e. the receiving core forwards the soft commit to all replicas.

The answer is b.

> 
> If the answer is a), what is the best practice to ensure data is indeed committed cluster-wide?

Commit is no longer what ensures durability in SolrCloud. Because of the transaction log, once a request is ack'd, it's in. Hard commits then become about relieving the memory pressure of the transaction log, and soft commits are about visibility. Neither is required for durability.
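In practice that means both commit types are usually driven from solrconfig.xml rather than explicit client calls - something like the following sketch (the intervals here are just illustrative, not recommendations):

```xml
<!-- sketch: automatic commits in solrconfig.xml; intervals illustrative -->
<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit every 60s: flushes segments, rolls the tlog -->
  <openSearcher>false</openSearcher> <!-- housekeeping only, no visibility change -->
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime>            <!-- soft commit every 5s: makes new docs searchable -->
</autoSoftCommit>
```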

> If the answer is b), what would happen on a 1-replica setup if one commit succeeded and the replica commit failed  ?

What's the reason the commit failed? It would have to be a really bad problem: that node will need to be restarted, and it will either not answer requests or be asked by the leader to recover when an update sent to it fails.

Because commits are not required for durability, it's probably not the issue that you think.

- Mark