Posted to user@helix.apache.org by kishore g <g....@gmail.com> on 2017/02/01 04:55:42 UTC

Re: How to Dynamically increase the number of locks in Distributed Lock Manager

We don't have an explicit API in the lock manager recipe to change the
number of shards, but increasing the number of shards is as simple as
updating the number of partitions in the IdealState.

IdealState is = helixAdmin.getResourceIdealState(cluster, resource);
is.setNumPartitions(X);
helixAdmin.setResourceIdealState(cluster, resource, is);

Let us know if that works.


Thanks,
Kishore G


On Tue, Jan 31, 2017 at 3:29 PM, Tejeswar Das <te...@gmail.com> wrote:

> Hi,
>
> I am using Helix’s Distributed Lock Manager recipe in my project, and each
> Lock represents a shard or partition. It has been working pretty well. I am
> able to run my service as a cluster of multiple instances, and I see the
> shards getting evenly distributed when a new service instance joins or
> leaves the cluster.
>
> I have a use-case whereby I want to increase the number of shards in the
> cluster.
>
> That means I would like to be able to dynamically increase the number of
> locks that the Lock Manager is managing. Does the Lock Manager provide such
> a capability?
>
> Please let me know.
>
> Thanks and regards
> Tej

Re: How to Dynamically increase the number of locks in Distributed Lock Manager

Posted by kishore g <g....@gmail.com>.
I see why it did not work.

admin.rebalance simply adds another entry for the new partition in the
IdealState.

For now, keep the admin.rebalance call. We should be able to fix this in the
next release.
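To make "another entry" concrete: with an AUTO_REBALANCE/FULL_AUTO resource, each partition appears as a key in the resource's IdealState ZNode, so going from three to four partitions adds one more key. A trimmed sketch of what that ZNode fragment can look like — the resource name "lock-group", the partition index, and the instance names here are illustrative placeholders, not values from this thread:

```json
{
  "id": "lock-group",
  "listFields": {
    "lock-group_0": ["localhost_12001"],
    "lock-group_1": ["localhost_12002"],
    "lock-group_2": ["localhost_12001"],
    "lock-group_3": ["localhost_12002"]
  }
}
```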

thanks,
Kishore G

On Wed, Feb 1, 2017 at 10:39 AM, Tejeswar Das <te...@gmail.com> wrote:

> I also thought that we do not need to call rebalance().
>
> But, when I remove the call to rebalance(), it does not work, even if the
> REBALANCE_MODE is set to FULL_AUTO.
>
> Is it possible that the current Shard Controller (master node) is not able
> to automatically detect IdealState config change?
>
> Here is the config (in ZK):
>
>
> "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
> "REBALANCE_MODE" : "FULL_AUTO"

Re: How to Dynamically increase the number of locks in Distributed Lock Manager

Posted by Tejeswar Das <te...@gmail.com>.
I also thought that we would not need to call rebalance().

But when I remove the call to rebalance(), it does not work, even if REBALANCE_MODE is set to FULL_AUTO.

Is it possible that the current Shard Controller (master node) is not able to automatically detect the IdealState config change?

Here is the config (in ZK):


"IDEAL_STATE_MODE" : "AUTO_REBALANCE",
"REBALANCE_MODE" : "FULL_AUTO"
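
For reference, those fields live in the resource's IdealState ZNode (under /<cluster>/IDEALSTATES/<resource> in ZooKeeper). A trimmed example of such a ZNode is below; the resource name "lock-group" and the counts are illustrative, not taken from this thread:

```json
{
  "id": "lock-group",
  "simpleFields": {
    "IDEAL_STATE_MODE": "AUTO_REBALANCE",
    "REBALANCE_MODE": "FULL_AUTO",
    "NUM_PARTITIONS": "12",
    "REPLICAS": "1",
    "STATE_MODEL_DEF_REF": "OnlineOffline"
  },
  "listFields": {},
  "mapFields": {}
}
```

NUM_PARTITIONS is the field that setNumPartitions() updates.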

> On Jan 31, 2017, at 10:20 PM, kishore g <g....@gmail.com> wrote:
> 
> Glad that worked. Calling rebalance is not needed if it's running in AUTO mode.


Re: How to Dynamically increase the number of locks in Distributed Lock Manager

Posted by kishore g <g....@gmail.com>.
Glad that worked. Calling rebalance is not needed if it's running in AUTO
mode.


Re: How to Dynamically increase the number of locks in Distributed Lock Manager

Posted by Tejeswar Das <te...@gmail.com>.
Hi Kishore,

Thanks for your response!

Yep that worked! 

So basically I enhanced our service config (CLI) tool to use HelixAdmin to increase the number of shards and rebalance the cluster, so that the newly added shards are picked up by the currently running instances. It works as expected.


        final HelixAdmin admin = new ZKHelixAdmin(config.getZookeeperConnectString());

        final IdealState is = admin.getResourceIdealState(clusterName, shardGroupName);
        is.setNumPartitions(updatedPartitionCount);

        admin.setResourceIdealState(clusterName, shardGroupName, is);

        admin.rebalance(clusterName, shardGroupName, 1);

Thanks a lot for your help!

Regards
Tej
