Posted to solr-user@lucene.apache.org by vrindavda <vr...@gmail.com> on 2017/05/08 07:32:25 UTC

SPLITSHARD Working

Hi,

I need to SPLITSHARD such that one split remains on the same machine as
the original and the other uses new machines for its leader and replicas.
Is this possible? Please let me know what properties I need to specify in
the Collection API to achieve this.

Thank you,
Vrinda Davda




Re: SPLITSHARD Working

Posted by Amrit Sarkar <sa...@gmail.com>.
Vrinda,

The expected behavior is: if the parent shard 'shardA' resides on node'1',
node'2' ... node'n' and you do a SPLITSHARD on it, the child shards
shardA_0 and shardA_1 will also reside on node'1', node'2' ... node'n'.

shardA ------- node'1' (leader) & node'2' (replica)

after splitshard;

shardA ------- node'1' (leader) & node'2' (replica) (INACTIVE)
shardA_0 ------ node'1' & node'2' (ACTIVE)
shardA_1 ------ node'1' & node'2' (ACTIVE)

For each child shard, any one of those nodes can become the leader and the
others replicas.
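
If it helps, here is a rough SolrJ sketch of issuing such a split (the
collection name, ZooKeeper address and SolrJ 6.x API usage are assumptions
on my side, not taken from your setup); the same request can also be sent
over HTTP with the Collections API action=SPLITSHARD:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class SplitShardExample {
        public static void main(String[] args) throws Exception {
            // Placeholder ZooKeeper address and collection name -- adjust
            // for your own cluster.
            try (CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("localhost:2181").build()) {
                // Split shardA of "collection1"; the two sub-shards are
                // created on the same nodes that host the parent shard,
                // as described above.
                CollectionAdminRequest.splitShard("collection1")
                    .setShardName("shardA")
                    .process(client);
            }
        }
    }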

Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2

On Mon, May 8, 2017 at 4:32 PM, vrindavda <vr...@gmail.com> wrote:

> Thanks, I got it.
>
> But I see that the distribution of shards and replicas is not equal.
>
> For example, in my case:
> I had shard1 and shard2 on Node 1 and their replicas replica_1 and
> replica_2 on Node 2.
> I did SPLITSHARD on shard1 to get shard1_0 and shard1_1, such that
> shard1_0_replica0 was created on Node 1, and shard1_0_replica1,
> shard1_1_replica1 and shard1_1_replica0 on Node 2.
>
> Is this expected behavior?
>
> Thank you,
> Vrinda Davda
>
>
>
>

Re: SPLITSHARD Working

Posted by vrindavda <vr...@gmail.com>.
Thanks, I got it.

But I see that the distribution of shards and replicas is not equal.

For example, in my case:
I had shard1 and shard2 on Node 1 and their replicas replica_1 and
replica_2 on Node 2.
I did SPLITSHARD on shard1 to get shard1_0 and shard1_1, such that
shard1_0_replica0 was created on Node 1, and shard1_0_replica1,
shard1_1_replica1 and shard1_1_replica0 on Node 2.

Is this expected behavior?

Thank you,
Vrinda Davda




Re: SPLITSHARD Working

Posted by Shalin Shekhar Mangar <sh...@gmail.com>.
No, the split always happens on the original node. But you can move the
sub-shard leader to a new node once the split is complete by using the
AddReplica/DeleteReplica Collection APIs.
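
For example, a minimal SolrJ sketch of moving one sub-shard replica off the
original node after the split completes (the collection, node and replica
names below are placeholders, not from your cluster; the equivalent HTTP
calls are action=ADDREPLICA and action=DELETEREPLICA):

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class MoveSubShardExample {
        public static void main(String[] args) throws Exception {
            try (CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("localhost:2181").build()) {
                // 1. Add a replica of sub-shard shard1_0 on the new node
                //    ("node3:8983_solr" is a placeholder node name).
                CollectionAdminRequest.addReplicaToShard("collection1", "shard1_0")
                    .setNode("node3:8983_solr")
                    .process(client);

                // 2. After the new replica becomes active, delete the replica
                //    left on the original node ("core_node5" is a placeholder
                //    replica name; check cluster state for the real one).
                CollectionAdminRequest.deleteReplica("collection1", "shard1_0", "core_node5")
                    .process(client);
            }
        }
    }

Deleting the replica that currently holds the leadership triggers a leader
election among the remaining replicas of that sub-shard, which is how the
leader ends up on the new node.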

On Mon, May 8, 2017 at 1:02 PM, vrindavda <vr...@gmail.com> wrote:
> Hi,
>
> I need to SPLITSHARD such that one split remains on the same machine as
> the original and the other uses new machines for its leader and replicas.
> Is this possible? Please let me know what properties I need to specify in
> the Collection API to achieve this.
>
> Thank you,
> Vrinda Davda
>
>
>



-- 
Regards,
Shalin Shekhar Mangar.