Posted to dev@stratos.apache.org by Reka Thirunavukkarasu <re...@wso2.com> on 2014/10/02 14:58:14 UTC

[Discuss][Grouping] Handling Group level scaling in Composite Application

Hi

This is to discuss $subject. In the case of scaling a composite
application, we can divide the problem into three parts, as Martin has
explained in previous mails. I summarise them below from Martin's mail,
together with the proposed possible solutions in Stratos to support
group-level scaling:

o   scaling by statistics,

o   scaling by group member and

o   scaling by group.



Based on this, the general algorithm would be like this (in order):

1.      Scale VMs until the cluster maximum is reached (in at least one of
the clusters within the group - scale by statistics)

We can address this in Stratos with the usual autoscaling based on
statistics. A single cluster monitor is capable of taking this decision to
scale its own members.
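
For instance (a sketch only; the exact Stratos autoscaling policy schema
and the threshold values shown here are illustrative), the statistics
driving this decision (request count, memory consumption and load average)
could be configured along these lines:

{
  "id": "autoscaling-policy-1",
  "loadThresholds": {
    "requestsInFlight": { "average": 40 },
    "memoryConsumption": { "average": 70 },
    "loadAverage": { "average": 80 }
  }
}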

2.      Scale up a new cluster of the same type as the one which has
reached the maximum of VMs, until the max member number is reached (scale
by group member).

If a cluster in a group reaches the max defined in the deployment policy,
and there is room in the partition to spin up more instances, we can
simply update the deployment policy of that cluster to increase the max
instances. If more than one cluster of the group resides in a partition,
we can divide the max instances of the partition among all of those
clusters by keeping a ratio among them, such as 3C1:4C2. When the
partition is maxed out, if we extend the partition with more hardware, we
can again update the deployment policy with new max values for those
clusters, so that the relevant cluster monitors will execute with the
updated values.
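
For example (a sketch with illustrative names and values, not the exact
deployment policy schema): if partition P1 has room for 70 instances and
hosts C1 and C2 with the ratio 3C1:4C2, the updated policy would carry
per-cluster max values of 30 and 40, and extending P1 to 140 instances
would simply double both:

{
  "partition": "P1",
  "partitionMax": 70,
  "clusterMaxInstances": {
    "C1": 30,
    "C2": 40
  }
}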

3.      Scale up a new group instance of the same group type (or
definition), including all the respective dependencies  (scale by group)

We can achieve this by using a combination of the round-robin and
one-after-another algorithms in the deployment policy. For example:

You can deploy a group (G1) which contains the clusters C1 and C2 across
partitions P1 and P2 using the round-robin algorithm, so that C1 and C2
get high availability. You can have another, idle partition called P3.
When you decide to scale by group, then, using the one-after-another
algorithm in the deployment policy, we can choose P3 to bring up G1 with
the relevant minimum instances.

In that way, we can improve our deployment policy to support a combination
of these algorithms within a network partition. When we have P1, P2 and P3
in a network partition, we will be able to use round-robin between P1 and
P2, and one-after-another between (P1, P2) and P3.
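
A rough sketch of such a policy (the attribute names are illustrative
only): round-robin applies inside the (P1, P2) group, while
one-after-another applies between that group and P3:

{
  "networkPartition": "network-partition-1",
  "partitionGroupAlgo": "one-after-another",
  "partitionGroups": [
    { "id": "p1-p2-group", "partitionAlgo": "round-robin",
      "partitions": ["P1", "P2"] },
    { "id": "p3-group", "partitionAlgo": "round-robin",
      "partitions": ["P3"] }
  ]
}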

Please share your thoughts on the above approach, and please add any
points I have missed.


Thanks,

Reka

-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

RE: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by "Martin Eppel (meppel)" <me...@cisco.com>.
Hi Reka,

See inline (“Martin:”)

Thanks

Martin

From: Reka Thirunavukkarasu [mailto:reka@wso2.com]
Sent: Thursday, October 02, 2014 5:58 AM
To: dev
Cc: Lakmal Warusawithana; Shaheedur Haque (shahhaqu); Martin Eppel (meppel); Isuru Haththotuwa; Udara Liyanage
Subject: [Discuss][Grouping] Handling Group level scaling in Composite Application

Hi

This is to discuss $subject. In the case of scaling a composite application, we can divide the problem into three parts, as Martin has explained in previous mails. I summarise them below from Martin's mail, together with the proposed possible solutions in Stratos to support group-level scaling:


o   scaling by statistics,

o   scaling by group member and

o   scaling by group.

Based on this, the general algorithm would be like this (in order):

1.      Scale VMs until the cluster maximum is reached (in at least one of the clusters within the group - scale by statistics)

We can address this in Stratos with the usual autoscaling based on statistics. A single cluster monitor is capable of taking this decision to scale its own members.

2.      Scale up a new cluster of the same type as the one which has reached the maximum of VMs, until the max member number is reached (scale by group member).

If a cluster in a group reaches the max defined in the deployment policy, and there is room in the partition to spin up more instances, we can simply update the deployment policy of that cluster to increase the max instances. If more than one cluster of the group resides in a partition, we can divide the max instances of the partition among all of those clusters by keeping a ratio among them, such as 3C1:4C2. When the partition is maxed out, if we extend the partition with more hardware, we can again update the deployment policy with new max values for those clusters, so that the relevant cluster monitors will execute with the updated values.

“Martin:” is the max instance adjusted automatically by the autoscaler or does it have to be done per “user” request?

3.      Scale up a new group instance of the same group type (or definition), including all the respective dependencies  (scale by group)

We can achieve this by using a combination of the round-robin and one-after-another algorithms in the deployment policy. For example:

You can deploy a group (G1) which contains the clusters C1 and C2 across partitions P1 and P2 using the round-robin algorithm, so that C1 and C2 get high availability. You can have another, idle partition called P3. When you decide to scale by group, then, using the one-after-another algorithm in the deployment policy, we can choose P3 to bring up G1 with the relevant minimum instances.

In that way, we can improve our deployment policy to support a combination of these algorithms within a network partition. When we have P1, P2 and P3 in a network partition, we will be able to use round-robin between P1 and P2, and one-after-another between (P1, P2) and P3.

“Martin:” I am not entirely sure I completely understand how this works using the round-robin algorithm and the partitioning. I think it would be helpful to demonstrate the algorithm using a more complex group with multiple cartridges and nested sub-groups, including the group id / subscription alias generation, and event generation.
A few more questions:
- when we spin up new instances of a group using the above-mentioned algorithm, will it also subsequently scale all the respective dependencies? One of the main advantages of grouping (IMHO) is that it scales up / down not only a specific instance of a group but also subsequent dependencies (cartridges and nested groups).
- For the new group instances, how will the group Id differ from the group which is scaled, and how would we generate group Ids?
- How will scale-down work (or termination of a group and its dependencies)?
- What about group events: which events will be generated (e.g. group active, group down, etc.), and with what parameters (e.g. group name, group id, app id)?



Please share your thoughts on the above approach, and please add any points I have missed.



Thanks,

Reka

--
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007


Re: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by Isuru Haththotuwa <is...@apache.org>.
Hi Martin,

Please find some explanations inline:

On Tue, Nov 18, 2014 at 2:21 AM, Martin Eppel (meppel) <me...@cisco.com>
wrote:

>  Hi Reka,
>
>
>
> ·        Scale by group
> Looks like the main difference with scale by group, compared to the
> original proposal, is that when a group scales, instead of “creating”
> multiple instances, the group is extended across multiple partitions
> (using one of the mentioned algorithms). A side effect is that no new
> group (instance) Id is required as the group scales up.
>
Isuru: AFAIU, it would still be creating new instances (duplicating the
existing Group), but in a new partition.

>  ·        Scale by member:
> As long as there is room in the partition, a group can scale by adjusting
> the max instance number of the clusters within the group. From your
> comment below, it looks like this requires manual intervention by the user?
>
Isuru: For scaling by member, you do not need manual intervention. Scaling
by member maps to the autoscaling feature that Stratos already has, which
will scale individual members up/down using the request count, memory
consumption and load average.

>
>
> Is this view correct?
>
>
>
> I’ll forward the implementation proposal to our team for some feedback,
>
>
>
> Thanks
>
>
>
> Martin
--
Thanks and Regards,

Isuru H.
+94 716 358 048
http://wso2.com/

RE: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by "Martin Eppel (meppel)" <me...@cisco.com>.
Hi Reka,


·        Scale by group
Looks like the main difference with scale by group, compared to the original proposal, is that when a group scales, instead of “creating” multiple instances, the group is extended across multiple partitions (using one of the mentioned algorithms). A side effect is that no new group (instance) Id is required as the group scales up.

·        Scale by member:
As long as there is room in the partition, a group can scale by adjusting the max instance number of the clusters within the group. From your comment below, it looks like this requires manual intervention by the user?

Is this view correct?

I’ll forward the implementation proposal to our team for some feedback,

Thanks

Martin



Re: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi Martin,

Please find my comments inline, related to this group scaling
implementation.


> *Subject:* [Discuss][Grouping] Handling Group level scaling in Composite
> Application
>
>
>
> Hi
>
>
>
> This is to discuss $subject. In the case of scaling a composite
> application, we can divide the problem into three parts, as Martin has
> explained in previous mails. I summarise them below from Martin's mail,
> together with the proposed possible solutions in Stratos to support
> group-level scaling:
>
>
>
> o   scaling by statistics,
>
> o   scaling by group member and
>
> o   scaling by group.
>
>
>
> Based on this, the general algorithm would be like this (in order):
>
> 1.      Scale VMs until the cluster maximum is reached (in at least one
> of the clusters within the group - scale by statistics)
>
> We can address this in Stratos with the usual autoscaling based on
> statistics. A single cluster monitor is capable of taking this decision to
> scale its own members.
>
> 2.      Scale up a new cluster of the same type as the one which has
> reached the maximum of VMs, until the max member number is reached (scale
> by group member).
>
> If a cluster in a group reaches the max defined in the deployment policy,
> and there is room in the partition to spin up more instances, we can
> simply update the deployment policy of that cluster to increase the max
> instances. If more than one cluster of the group resides in a partition,
> we can divide the max instances of the partition among all of those
> clusters by keeping a ratio among them, such as 3C1:4C2. When the
> partition is maxed out, if we extend the partition with more hardware, we
> can again update the deployment policy with new max values for those
> clusters, so that the relevant cluster monitors will execute with the
> updated values.
>
> “Martin:” is the max instance adjusted automatically by the autoscaler or
> does it have to be done per “user” request?
>
Yes. In this case, the max can be adjusted manually using the manual
scaling support in Stratos now. I'm not sure whether the autoscaler can
adjust this max automatically.

>  3.      Scale up a new group instance of the same group type (or
> definition), including all the respective dependencies  (scale by group)
>
> We can achieve this by using a combination of the round-robin and
> one-after-another algorithms in the deployment policy. For example:
>
> You can deploy a group (G1) which contains the clusters C1 and C2 across
> partitions P1 and P2 using the round-robin algorithm, so that C1 and C2
> get high availability. You can have another, idle partition called P3.
> When you decide to scale by group, then, using the one-after-another
> algorithm in the deployment policy, we can choose P3 to bring up G1 with
> the relevant minimum instances.
>
> In that way, we can improve our deployment policy to support a combination
> of these algorithms within a network partition. When we have P1, P2 and P3
> in a network partition, we will be able to use round-robin between P1 and
> P2, and one-after-another between (P1, P2) and P3.
>
> “Martin:” I am not entirely sure I completely understand how this works
> using the round-robin algorithm and the partitioning. I think it would be
> helpful to demonstrate the algorithm using a more complex group with
> multiple cartridges and nested sub-groups, including the group id /
> subscription alias generation, and event generation.
>
I have attached a sample JSON which explains these new partition groups
and the algorithm. The deployment policy can have the following structure:

DeploymentPolicy
      + NetworkPartitions
             + id
             + partitionGroupAlgo (applies between partition groups)
             + partitionGroups
                      + id
                      + partitionAlgo (applies between partitions)
                      + partitions

As per the attached policy, the autoscaler will choose the p1-p2-group
partition group for the initial cluster monitor to start instances. When
p1-p2-group gets maxed out, the monitor can notify the parent and choose
p3-p4-group for spinning up further instances. When the group gets the
notification, it can notify the other dependent children to switch to
another partitionGroup. So the dependent clusters will choose another
defined partitionGroup using the one-after-another algorithm. The
requirement here is that all dependent clusters have to have the same
number of partitionGroups available.
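
To make that concrete, the attached JSON is roughly of the following shape
(re-sketched here since the attachment is not inline; the partition ids
and max values are illustrative):

{
  "id": "deployment-policy-1",
  "networkPartitions": [
    {
      "id": "network-partition-1",
      "partitionGroupAlgo": "one-after-another",
      "partitionGroups": [
        {
          "id": "p1-p2-group",
          "partitionAlgo": "round-robin",
          "partitions": [
            { "id": "P1", "partitionMax": 5 },
            { "id": "P2", "partitionMax": 5 }
          ]
        },
        {
          "id": "p3-p4-group",
          "partitionAlgo": "round-robin",
          "partitions": [
            { "id": "P3", "partitionMax": 5 },
            { "id": "P4", "partitionMax": 5 }
          ]
        }
      ]
    }
  ]
}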

> A few more questions:
> - when we spin up new instances of a group using the above-mentioned
> algorithm, will it also subsequently scale all the respective dependencies?
> One of the main advantages of grouping (IMHO) is that it scales up / down
> not only a specific instance of a group but also subsequent dependencies
> (cartridges and nested groups).
>
Yes. As I explained earlier, the notification to the parent group will
handle this.

> - For the new group instances, how will the group Id differ from the group
> which is scaled, and how would we generate group Ids?
>
Since we use this algorithm, we no longer need to generate a new group id
to handle this.

> - How will scale-down work (or termination of a group and its
> dependencies)?
>
Scale-down will also be handled by this algorithm. According to the
attached definition, if p1-p2-group got maxed out and we chose p3-p4-group
using one-after-another, then until the chosen p3-p4-group is wiped out,
we won't scale down the instances in p1-p2-group. In other words,
scale-down drains the most recently chosen partition group first.

Please let me know if you need further clarification on this.


Thanks,
Reka

> - What about group events: which events will be generated (e.g. group
> active, group down, etc.), and with what parameters (e.g. group name,
> group id, app id)?
>
>  Please share your thoughts on the above approach, and please add any
> points I have missed.
>
>
>
> Thanks,
>
> Reka
>
>
>
> --
>
> Reka Thirunavukkarasu
> Senior Software Engineer,
> WSO2, Inc.:http://wso2.com,
>
> Mobile: +94776442007
>
>
>



-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

Re: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi Martin,

Sorry for the delay... Thanks for re-sending it. I will go through it and
update soon.

Thanks,
Reka


-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

RE: [Discuss][Grouping] Handling Group level scaling in Composite Application

Posted by "Martin Eppel (meppel)" <me...@cisco.com>.
Resending it in case you missed it,

Thanks

Martin

From: Martin Eppel (meppel)
Sent: Thursday, October 02, 2014 6:29 PM
To: 'Reka Thirunavukkarasu'; dev
Cc: Lakmal Warusawithana; Shaheedur Haque (shahhaqu); Isuru Haththotuwa; Udara Liyanage
Subject: RE: [Discuss][Grouping] Handling Group level scaling in Composite Application
