Posted to issues@lucene.apache.org by "Andrzej Bialecki (Jira)" <ji...@apache.org> on 2020/03/23 16:52:00 UTC

[jira] [Comment Edited] (SOLR-14347) Autoscaling placement wrong when concurrent replica placements are calculated

    [ https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17064940#comment-17064940 ] 

Andrzej Bialecki edited comment on SOLR-14347 at 3/23/20, 4:51 PM:
-------------------------------------------------------------------

It turns out that the bug was caused by per-collection policies being applied to the cached {{Session}} instance during calculations, causing side-effects that later affect the calculations for other collections.

Setting the lock level to {{LockLevel.CLUSTER}} fixed this because all computations became sequential, but at the relatively high cost of blocking all other CLUSTER-level operations. It appears that re-creating the {{Policy.Session}} in {{PolicyHelper.getReplicaLocations(...)}} fixes the behavior too, because the new Session doesn't carry over the side-effects from previous per-collection policies. This approach has a slight performance impact, because re-creating a Session is costly for large clusters, but it's less intrusive than locking out all other CLUSTER-level ops.
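
To illustrate the failure mode, here is a minimal, self-contained toy sketch. The class and variable names are hypothetical and this is not the actual Solr code; only {{Policy.Session}} and {{PolicyHelper.getReplicaLocations(...)}} mentioned in the comments refer to real classes. The point is that per-collection constraints recorded in a shared, cached session leak into the placement computed for the next collection, while a fresh session per computation keeps them isolated.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy sketch of the bug (hypothetical model, not Solr's actual classes):
// per-collection policy constraints applied to a shared, cached "session"
// leak into placement calculations for other collections.
public class SessionSideEffectSketch {

  // Stand-in for a cached Policy.Session: live nodes plus accumulated constraints.
  static class Session {
    final List<String> nodes;
    final Set<String> excluded = new HashSet<>(); // side-effect state
    Session(List<String> nodes) { this.nodes = new ArrayList<>(nodes); }
  }

  // Applies a per-collection policy ("place only on these nodes") by excluding
  // all other nodes in the session, then picks placement candidates round-robin.
  static List<String> place(Session s, Set<String> allowedNodes, int replicas) {
    for (String n : s.nodes) {
      if (!allowedNodes.contains(n)) s.excluded.add(n); // mutates the shared session
    }
    List<String> candidates = new ArrayList<>(s.nodes);
    candidates.removeAll(s.excluded);
    List<String> placements = new ArrayList<>();
    for (int i = 0; i < replicas; i++) {
      placements.add(candidates.isEmpty() ? "NO NODE LEFT" : candidates.get(i % candidates.size()));
    }
    return placements;
  }

  public static void main(String[] args) {
    List<String> nodes = List.of("node1", "node2", "node3", "node4");

    // Buggy pattern: one cached session reused for both collections, so collB is
    // computed against collA's leftover exclusions and ends up with no valid nodes.
    Session cached = new Session(nodes);
    System.out.println("collA (cached session): " + place(cached, Set.of("node1", "node2"), 2));
    System.out.println("collB (cached session): " + place(cached, Set.of("node3", "node4"), 2));

    // Fixed pattern, analogous to re-creating the Policy.Session in
    // PolicyHelper.getReplicaLocations(...): a fresh session per computation,
    // so per-collection side-effects cannot carry over.
    System.out.println("collA (fresh session):  " + place(new Session(nodes), Set.of("node1", "node2"), 2));
    System.out.println("collB (fresh session):  " + place(new Session(nodes), Set.of("node3", "node4"), 2));
  }
}
{code}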

We may revisit this issue at some point to reduce this cost, but I think this fix at least protects us from the current behavior, which is completely wrong.


was (Author: ab):
It turns out that the bug was caused by the fact that per-collection policies are applied during calculations and cause side-effects that later affect calculations for other collections.

Setting the LockLevel.CLUSTER fixed this because all computations became sequential, but at a relatively high cost of blocking all other CLUSTER level operations. It appears that re-creating a {{Policy.Session}} in {{PolicyHelper.getReplicaLocations(...)}} fixes this behavior too, because the new Session doesn't carry over the side-effects from previous per-collection policies. There is a slight performance impact of this approach, because re-creating a Session is costly for large clusters, but it's less intrusive than locking out all other CLUSTER level ops.

We may re-visit this issue at some point to reduce this cost, but I think this fix at least protects us from the current completely wrong behavior.

> Autoscaling placement wrong when concurrent replica placements are calculated
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-14347
>                 URL: https://issues.apache.org/jira/browse/SOLR-14347
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: AutoScaling
>    Affects Versions: 8.5
>            Reporter: Andrzej Bialecki
>            Assignee: Andrzej Bialecki
>            Priority: Major
>         Attachments: SOLR-14347.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * the resulting replica placement will be seriously wrong, causing many policy violations
> Running the same scenario but creating the collections sequentially instead results in no violations.
> I suspect this is caused by an incorrect locking level for all collection operations (as defined in {{CollectionParams.CollectionAction}}) that create new replica placements - i.e., CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations use the policy engine to create new replica placements, and as a result they change the cluster state. However, these operations are currently locked (in {{OverseerCollectionMessageHandler.lockTask}}) using {{LockLevel.COLLECTION}}. In practice this means that the lock is held only for the particular collection being modified.
> A straightforward fix for this issue is to change the locking level to CLUSTER (and I confirm this fixes the scenario described above). However, this effectively serializes all of the collection operations listed above, which will result in a general slow-down of collection operations.
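
For completeness, here is a rough sketch of the reproduce steps quoted above, assuming a local SolrCloud cluster at http://localhost:8983 and the 8.x autoscaling / Collections APIs. The collection names, policy names, shard/replica counts and thread count are illustrative only, and the per-collection policies themselves are assumed to have been defined beforehand via the autoscaling {{set-policy}} command.

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Rough sketch of the reproduce scenario: create several collections
// concurrently, each referencing a different per-collection policy.
// Assumes a local cluster and pre-defined policies "policy1".."policy3".
public class ConcurrentCreateRepro {

  static final HttpClient HTTP = HttpClient.newHttpClient();
  static final String SOLR = "http://localhost:8983/solr";

  // Fires a Collections API CREATE that references a collection-level policy.
  static void createCollection(String name, String policy) {
    try {
      String url = SOLR + "/admin/collections?action=CREATE"
          + "&name=" + name
          + "&numShards=2&replicationFactor=2"
          + "&policy=" + policy;                 // per-collection policy by name
      HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
      HttpResponse<String> rsp = HTTP.send(req, HttpResponse.BodyHandlers.ofString());
      System.out.println(name + " -> HTTP " + rsp.statusCode());
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) throws Exception {
    List<String> collections = List.of("collA", "collB", "collC");

    // Creating the collections concurrently triggers the misplacements;
    // creating them one by one does not.
    ExecutorService pool = Executors.newFixedThreadPool(collections.size());
    for (int i = 0; i < collections.size(); i++) {
      String name = collections.get(i);
      String policy = "policy" + (i + 1);
      pool.submit(() -> createCollection(name, policy));
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.MINUTES);

    // Placements and policy violations can then be inspected, e.g. via the
    // autoscaling diagnostics endpoint: /solr/admin/autoscaling/diagnostics
  }
}
{code}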



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
