Posted to dev@lucene.apache.org by "Timothy Potter (JIRA)" <ji...@apache.org> on 2014/03/14 18:04:46 UTC

[jira] [Updated] (SOLR-5860) Logging around core wait for state during startup / recovery is confusing

     [ https://issues.apache.org/jira/browse/SOLR-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Timothy Potter updated SOLR-5860:
---------------------------------

    Attachment: SOLR-5860.patch

Here's a patch that does a couple of things:

1) uses the leaderConflictResolveWait configuration property (from ZkController) as the max wait for this loop, since the loop is essentially waiting for the cluster to get the leader resolved for this core
 
2) forces a refresh of state from ZooKeeper every 15 seconds for the duration of the wait loop

3) logs its activity every 15 seconds as well (so we know it is still waiting, especially when using a 3-minute timeout)

4) tries to include some information from ClusterState about the leader when generating the exception message. This may not always be useful, but I wanted to give more context in the error message so that messages like "still waiting on state recovering, i see state recovering" make more sense

I didn't find a place in the existing tests where I could test this behavior explicitly. I did run ChaosMonkeyNothingIsSafeTest, which exercises this code and passes.
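Roughly, the revised loop looks like the following self-contained sketch. This is illustrative only: the constant values, method names, and the simulated cluster state are assumptions for the example, not the actual Solr API.

```java
import java.util.concurrent.TimeUnit;

public class WaitForStateSketch {

    // Illustrative stand-in for ZkController's leaderConflictResolveWait (ms).
    static final long LEADER_CONFLICT_RESOLVE_WAIT_MS = TimeUnit.SECONDS.toMillis(180);
    static final long REFRESH_INTERVAL_MS = TimeUnit.SECONDS.toMillis(15);

    // Simulated cluster: the leader becomes active after 40 "seconds".
    static long clock = 0;
    static String observedState() { return clock >= 40_000 ? "active" : "recovering"; }

    public static void main(String[] args) {
        // Point 1: max wait derived from leaderConflictResolveWait.
        long maxWaitMs = LEADER_CONFLICT_RESOLVE_WAIT_MS + TimeUnit.SECONDS.toMillis(5);
        long waited = 0;
        while (waited < maxWaitMs) {
            if (observedState().equals("active")) {
                System.out.println("leader active after " + waited + " ms");
                return;
            }
            if (waited > 0 && waited % REFRESH_INTERVAL_MS == 0) {
                // Points 2 and 3: force a state refresh from ZooKeeper and log
                // progress every 15s so long waits are visible in the logs.
                System.out.println("still waiting (" + waited
                        + " ms); refreshed state: " + observedState());
            }
            clock += 1_000;   // simulate a 1-second poll interval
            waited += 1_000;
        }
        // Point 4: include the last observed state in the failure message.
        throw new IllegalStateException("No active leader after " + maxWaitMs
                + " ms; last observed state: " + observedState());
    }
}
```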

> Logging around core wait for state during startup / recovery is confusing
> -------------------------------------------------------------------------
>
>                 Key: SOLR-5860
>                 URL: https://issues.apache.org/jira/browse/SOLR-5860
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Timothy Potter
>            Assignee: Shalin Shekhar Mangar
>            Priority: Minor
>         Attachments: SOLR-5860.patch
>
>
> I'm seeing some log messages like this:
> I was asked to wait on state recovering for HOST:8984_solr but I still do not see the requested state. I see state: recovering live:true
> This is very confusing because, from the log, it seems like the core is waiting to see the very state it is already in. After digging through the code, it appears that it is really waiting for a leader to become active so that it has a leader to recover from.
> I'd like to improve the logging around this critical wait loop to give better context to what is happening. 
> Also, I would like to change the following so that we force state updates every 15 seconds for the entire wait period.
> -          if (retry == 15 || retry == 60) {
> +          if (retry % 15 == 0) {
> As-is, it waits up to 120 seconds but only forces the state to update twice, once after 15 seconds and again after 60, so it would be good to force updates throughout the full wait period.
> Lastly, I think it would be good to use the leaderConflictResolveWait setting (from ZkController) here as well, since 120 seconds may not be enough for a leader to become active in a busy cluster, especially after losing the node the Overseer is running on. Maybe leaderConflictResolveWait + 5 seconds?
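To make the effect of the retry-condition change concrete, here is a small sketch counting forced ZooKeeper refreshes over the current 120-second wait (assuming one retry per second, as the loop above suggests):

```java
public class RefreshCount {
    // Forced state refreshes under the old condition: only retries 15 and 60.
    static int countOld() {
        int n = 0;
        for (int retry = 1; retry <= 120; retry++) {
            if (retry == 15 || retry == 60) n++;
        }
        return n;
    }

    // Forced state refreshes under the proposed condition: every 15th retry.
    static int countNew() {
        int n = 0;
        for (int retry = 1; retry <= 120; retry++) {
            if (retry % 15 == 0) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("old: " + countOld() + " refreshes, new: " + countNew());
    }
}
```

With the proposed condition the loop refreshes 8 times (retries 15, 30, ..., 120) instead of twice over the same window.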



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org