Posted to dev@lucene.apache.org by "Shalin Shekhar Mangar (JIRA)" <ji...@apache.org> on 2013/05/30 20:13:22 UTC
[jira] [Commented] (SOLR-4693) Create a collections API to delete/cleanup a Slice
[ https://issues.apache.org/jira/browse/SOLR-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13670552#comment-13670552 ]
Shalin Shekhar Mangar commented on SOLR-4693:
---------------------------------------------
Thanks Anshum.
A few comments:
# Can we use "collection" instead of "name", just as we do in splitshard?
# The following code will throw an exception for a shard with no range (the custom hashing use-case). It also allows deletion of slices in the construction state, which contradicts the error message.
{code}
// For now, only allow for deletions of Inactive slices or custom hashes (range==null).
// TODO: Add check for range gaps on Slice deletion
if (!slice.getState().equals(Slice.INACTIVE) && slice.getRange() != null) {
  throw new SolrException(ErrorCode.BAD_REQUEST,
      "The slice: " + slice.getName() + " is not currently "
          + slice.getState() + ". Only inactive (or custom-hashed) slices can be deleted.");
}
{code}
# The "deletecore" call to the overseer is redundant because the CoreAdmin UNLOAD action also makes it.
# Can we re-use code between "deletecollection" and "deleteshard"? The collectionCmd code also checks for the "live" state.
# In DeleteSliceTest, after setSliceAsInactive(), we should poll the slice state until it becomes inactive or a timeout expires, instead of sleeping for a fixed 5000ms.
# DeleteSliceTest.waitAndConfirmSliceDeletion is wrong: it never actually uses its counter variable. Also, cloudClient.getZkStateReader().getClusterState() does not force a refresh of the cluster state.
# We should fail with an appropriate error message if some nodes could not be unloaded. Perhaps a separate "deletecore" call is appropriate here?
# Do we know what would happen if such a "zombie" node comes back up? We need to make sure it cleans up properly.
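To make the second point above concrete, here is a minimal, self-contained sketch of the deletion policy as the error message describes it. This is illustrative only, not the patch's code: plain strings stand in for the Slice state constants, and isSliceDeletable is a hypothetical helper, not a Solr API.

```java
public class SliceDeletionPolicy {
  // Stand-in state values, mirroring the Slice state constants.
  static final String ACTIVE = "active";
  static final String INACTIVE = "inactive";
  static final String CONSTRUCTION = "construction";

  // One reading of the intended rule: a slice may be deleted only if it is
  // inactive, or if it is custom-hashed (i.e. has no hash range). A slice
  // still under construction is never deletable, regardless of its range.
  static boolean isSliceDeletable(String state, boolean hasRange) {
    if (CONSTRUCTION.equals(state)) {
      return false;
    }
    return INACTIVE.equals(state) || !hasRange;
  }
}
```

Encoding the rule as a single predicate like this also makes it easy to unit-test the construction-state case independently of the error message.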
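For the two test-related points above, the fixed 5000ms sleep could be replaced by a generic poll-until-true helper along these lines. This is a sketch independent of the Solr test APIs; PollUtil and pollUntil are hypothetical names.

```java
import java.util.function.BooleanSupplier;

public class PollUtil {
  // Polls the condition every intervalMs until it returns true or timeoutMs
  // elapses. Returns true iff the condition became true within the timeout.
  public static boolean pollUntil(BooleanSupplier condition, long timeoutMs, long intervalMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    // One final check at the deadline so a slow last iteration is not missed.
    return condition.getAsBoolean();
  }
}
```

In DeleteSliceTest the supplied condition would re-read the cluster state on each iteration and check whether the slice has become inactive (or has been deleted), so a state that never converges shows up as a timeout failure rather than a silent pass after an arbitrary sleep.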
> Create a collections API to delete/cleanup a Slice
> --------------------------------------------------
>
> Key: SOLR-4693
> URL: https://issues.apache.org/jira/browse/SOLR-4693
> Project: Solr
> Issue Type: Improvement
> Components: SolrCloud
> Reporter: Anshum Gupta
> Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-4693.patch, SOLR-4693.patch
>
>
> Have a collections API that cleans up a given shard.
> Among other places, this would be useful after the shard split call, to manage the parent/original slice.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org