Posted to dev@stratos.apache.org by Reka Thirunavukkarasu <re...@wso2.com> on 2014/10/28 08:41:20 UTC

[Grouping] Testing update of Developer Preview-3

Hi all,

This is an update on testing Developer Preview-3 for the end-to-end
workflow. Since we have introduced the termination behaviour, we are
executing the following steps to verify the flow.

* Deploy a composite application with nested groups
* Autoscaler will bring them up using the defined startup order
* Application will become Active

Case 1:

* Terminate one cluster's VM from the IaaS (where this cluster is
*independent* from all other siblings)
* Nothing will happen to the parents
* Cluster eventually becomes active again.

This is working fine.

Case 2:

 * Terminate one cluster's VM from the IaaS (where this cluster is
*dependent* on some siblings)
* It will notify the parent about the inActive state
* The parent will behave according to its specified termination behaviour
and notify its own parent
* When this notification reaches a parent with *kill-none*, or the
application level, that parent will push all the children of that
sub-section to be terminated (see the sketch after this case)
* Once all the children of the sub-section are terminated, that parent
will bring them back up in parallel

We are finalising this case by identifying the remaining issues.
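
To make the intended flow more concrete, here is a rough Java sketch of how
the inActive notification travels up the monitor hierarchy. The class and
method names are placeholders for illustration only, not the actual
Autoscaler monitor code:

    import java.util.List;

    // Illustrative only: these names are assumptions, not the real Autoscaler classes.
    public class InactiveNotificationSketch {

        enum TerminationBehaviour { KILL_NONE, KILL_DEPENDENTS, KILL_ALL }

        static class Monitor {
            final String id;
            final TerminationBehaviour behaviour;
            final Monitor parent;          // null for the application root
            final List<String> children;   // ids of the child clusters/groups

            Monitor(String id, TerminationBehaviour behaviour, Monitor parent,
                    List<String> children) {
                this.id = id;
                this.behaviour = behaviour;
                this.parent = parent;
                this.children = children;
            }
        }

        // A child reported inActive; walk up until a monitor with kill-none
        // (or the application root) stops the notification and owns the recovery.
        static void onChildInactive(Monitor parent, String inactiveChild) {
            switch (parent.behaviour) {
                case KILL_DEPENDENTS:
                case KILL_ALL:
                    System.out.println(parent.id + ": " + inactiveChild
                            + " is inactive, terminating per " + parent.behaviour
                            + " and notifying my parent");
                    if (parent.parent != null) {
                        onChildInactive(parent.parent, parent.id);
                        return;
                    }
                    // no parent: we are at the application level, fall through
                case KILL_NONE:
                    // Notification stops here: this monitor terminates the affected
                    // sub-section and, once all children report terminated, brings
                    // them back up in parallel.
                    System.out.println(parent.id + ": owning recovery of sub-section "
                            + parent.children);
            }
        }

        public static void main(String[] args) {
            Monitor app   = new Monitor("application", TerminationBehaviour.KILL_NONE,
                    null, List.of("group1"));
            Monitor group = new Monitor("group1", TerminationBehaviour.KILL_DEPENDENTS,
                    app, List.of("clusterA", "clusterB"));
            onChildInactive(group, "clusterB");
        }
    }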

Case 3:

* Unsubscribing from the application
   - all the clusters will be marked for termination and will gradually be
terminated
   - once all the clusters are terminated, the parent will be terminated
   - eventually the application will be terminated and the application
terminated event will be sent
   - all other components act upon the application terminated event and
remove the application-related information from their side (a rough sketch
of this cascade follows below)
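
As a rough illustration of how the terminated state bubbles up during
unsubscription (the class and method names are placeholders, not the actual
monitor code):

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch of the bottom-up termination cascade on undeployment.
    public class UndeployCascadeSketch {

        static class Node {
            final String id;
            final Node parent;                       // null for the application
            final Set<String> pendingChildren = new HashSet<>();

            Node(String id, Node parent) {
                this.id = id;
                this.parent = parent;
            }

            // Called when one child reports the terminated state.
            void onChildTerminated(String childId) {
                pendingChildren.remove(childId);
                if (pendingChildren.isEmpty()) {
                    System.out.println(id + " terminated");
                    if (parent != null) {
                        parent.onChildTerminated(id);
                    } else {
                        // Application level: publish the application terminated event
                        // so other components can remove their application state.
                        System.out.println("sending application terminated event for " + id);
                    }
                }
            }
        }

        public static void main(String[] args) {
            Node app = new Node("application", null);
            Node group = new Node("group1", app);
            app.pendingChildren.add("group1");
            group.pendingChildren.add("clusterA");
            group.pendingChildren.add("clusterB");

            group.onChildTerminated("clusterA");
            group.onChildTerminated("clusterB");   // group1 -> application -> event
        }
    }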

The above is working fine now.

   - The metadata service will also remove the app details (we are testing this)

FYI:
As of now, all the siblings identified for termination will be terminated
in parallel. We are not maintaining any order when terminating, as I
explained in the earlier mail.
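
Purely for illustration (not the actual Autoscaler code), the unordered,
parallel termination can be pictured as:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative sketch only: termination requests for the identified
    // children are fired in parallel, with no ordering between them.
    public class ParallelTerminationSketch {

        static void terminateInParallel(List<String> clusterIds) {
            ExecutorService pool =
                    Executors.newFixedThreadPool(Math.max(1, clusterIds.size()));
            for (String clusterId : clusterIds) {
                pool.submit(() -> System.out.println("sending terminate for " + clusterId));
            }
            pool.shutdown();
        }

        public static void main(String[] args) {
            terminateInParallel(List.of("clusterA", "clusterC"));
        }
    }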

Isuru/Udara, can you also add anything in case I have missed any testing steps?

Thanks,
Reka

-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

Re: [Grouping] Testing update of Developer Preview-3

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi Isuru,

On Tue, Oct 28, 2014 at 3:46 PM, Isuru Haththotuwa <is...@apache.org>
wrote:

> Hi,
>
> On Tue, Oct 28, 2014 at 2:43 PM, Reka Thirunavukkarasu <re...@wso2.com>
> wrote:
>
>> Hi
>>
>> On Tue, Oct 28, 2014 at 2:22 PM, Isuru Haththotuwa <is...@apache.org>
>> wrote:
>>
>>>
>>>
>>> On Tue, Oct 28, 2014 at 1:48 PM, Reka Thirunavukkarasu <re...@wso2.com>
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> On Tue, Oct 28, 2014 at 1:16 PM, Isuru Haththotuwa <is...@apache.org>
>>>> wrote:
>>>>
>>>>> Thanks Reka for starting this Thread.
>>>>>
>>>>> Found two issues related to undeploying an Application:
>>>>>    1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
>>>>>
>>>>>     2. Undeploying an Application doesn't remove it properly until the
>>>>> Member is activated. Looking in to this now.
>>>>>
>>>>
>>>>
>>>> We will need this fix for the member fault as well. If cluster monitor
>>>> starts a member upon member fault before the whole cluster termination,
>>>> then that cluster monitor is becoming active. Hence not going to terminated
>>>> state. Looking into that now..
>>>>
>>> What is the State Transition in this case? Is it Terminating to Active?
>>> If so we might be able to generically handle this, since its a invalid
>>> state Transfer and mark the cluster as Invalid, and then terminate. For
>>> this, we need to introduce a new error state to cluster statuses. WDYT?
>>>
>>
>> +1 to introduce error states. So that those which are in error state can
>> be terminated by relevant monitors.
>>
>> But in this case the cluster should go through active --> inActive -->
>> terminating --> terminated. But due to network delay in receiving inActive
>> when member fault receives, cluster monitor tries to satisfy the min rule
>> by bringing one new member instead of the one got terminated. Then when
>> cluster monitor receives inActive, it tries to notify parent and etc. But
>> the newly spawned member got activated. then cluster monitor becomes
>> activated. After that, parent monitors send terminating notification. But
>> somehow this active monitor skips the terminating event.
>>
>
> Not sure if this is a silly suggestion since I might not have understood
> the scenario fully here.  As I understand, the problem is that Cluster
> Monitor's mincheck getting triggered before the Cluster Monitor is marked
> as inactive. Since member fault is a case where we need to give the control
> to the parent (if dependency flag is set), can we pause the Cluster Monitor
> till the decision is taken from the parent? The  Cluster Monitor can be
> resumed after parent gives back the control to the child.
>

+1, this is a good point. I will set a flag in the cluster monitor to mark
it as dependent when the member fault event is received, so that the
ClusterMonitor will be paused from that point onwards.
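
A minimal sketch of the idea, assuming placeholder field and method names
rather than the actual ClusterMonitor code:

    // Hypothetical sketch: names are assumptions for illustration only.
    public class ClusterMonitorPauseSketch {

        // Set when a member fault arrives and this cluster has dependents,
        // i.e. the decision now belongs to the parent monitor.
        private volatile boolean pausedForParentDecision = false;

        void onMemberFault(boolean hasDependents) {
            if (hasDependents) {
                pausedForParentDecision = true;   // hand control to the parent
            }
        }

        // Periodic monitor tick: skip the min-check while paused so no new
        // member is spawned before the parent decides to terminate or resume.
        void monitorTick() {
            if (pausedForParentDecision) {
                return;
            }
            runMinCheckRule();
        }

        // Parent gives control back: the cluster either continues as usual
        // and satisfies the min rule again, or it has already been terminated.
        void resumeFromParent() {
            pausedForParentDecision = false;
        }

        private void runMinCheckRule() {
            System.out.println("evaluating min rule / spawning members if needed");
        }

        public static void main(String[] args) {
            ClusterMonitorPauseSketch monitor = new ClusterMonitorPauseSketch();
            monitor.onMemberFault(true);   // dependent cluster: pause
            monitor.monitorTick();         // min-check skipped
            monitor.resumeFromParent();
            monitor.monitorTick();         // min-check runs again
        }
    }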

Thanks,
Reka



-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

Re: [Grouping] Testing update of Developer Preview-3

Posted by Isuru Haththotuwa <is...@apache.org>.
Hi,

On Tue, Oct 28, 2014 at 2:43 PM, Reka Thirunavukkarasu <re...@wso2.com>
wrote:

> Hi
>
> On Tue, Oct 28, 2014 at 2:22 PM, Isuru Haththotuwa <is...@apache.org>
> wrote:
>
>>
>>
>> On Tue, Oct 28, 2014 at 1:48 PM, Reka Thirunavukkarasu <re...@wso2.com>
>> wrote:
>>
>>> Hi
>>>
>>> On Tue, Oct 28, 2014 at 1:16 PM, Isuru Haththotuwa <is...@apache.org>
>>> wrote:
>>>
>>>> Thanks Reka for starting this Thread.
>>>>
>>>> Found two issues related to undeploying an Application:
>>>>    1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
>>>>
>>>>     2. Undeploying an Application doesn't remove it properly until the
>>>> Member is activated. Looking in to this now.
>>>>
>>>
>>>
>>> We will need this fix for the member fault as well. If cluster monitor
>>> starts a member upon member fault before the whole cluster termination,
>>> then that cluster monitor is becoming active. Hence not going to terminated
>>> state. Looking into that now..
>>>
>> What is the State Transition in this case? Is it Terminating to Active?
>> If so we might be able to generically handle this, since its a invalid
>> state Transfer and mark the cluster as Invalid, and then terminate. For
>> this, we need to introduce a new error state to cluster statuses. WDYT?
>>
>
> +1 to introduce error states. So that those which are in error state can
> be terminated by relevant monitors.
>
> But in this case the cluster should go through active --> inActive -->
> terminating --> terminated. But due to network delay in receiving inActive
> when member fault receives, cluster monitor tries to satisfy the min rule
> by bringing one new member instead of the one got terminated. Then when
> cluster monitor receives inActive, it tries to notify parent and etc. But
> the newly spawned member got activated. then cluster monitor becomes
> activated. After that, parent monitors send terminating notification. But
> somehow this active monitor skips the terminating event.
>

Not sure if this is a silly suggestion, since I might not have understood
the scenario fully here. As I understand it, the problem is that the Cluster
Monitor's min-check gets triggered before the Cluster Monitor is marked as
inactive. Since a member fault is a case where we need to give control to
the parent (if the dependency flag is set), can we pause the Cluster Monitor
until the decision is taken by the parent? The Cluster Monitor can be
resumed after the parent gives control back to the child.


Re: [Grouping] Testing update of Developer Preview-3

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi

On Tue, Oct 28, 2014 at 2:22 PM, Isuru Haththotuwa <is...@apache.org>
wrote:

>
>
> On Tue, Oct 28, 2014 at 1:48 PM, Reka Thirunavukkarasu <re...@wso2.com>
> wrote:
>
>> Hi
>>
>> On Tue, Oct 28, 2014 at 1:16 PM, Isuru Haththotuwa <is...@apache.org>
>> wrote:
>>
>>> Thanks Reka for starting this Thread.
>>>
>>> Found two issues related to undeploying an Application:
>>>    1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
>>>
>>>     2. Undeploying an Application doesn't remove it properly until the
>>> Member is activated. Looking in to this now.
>>>
>>
>>
>> We will need this fix for the member fault as well. If cluster monitor
>> starts a member upon member fault before the whole cluster termination,
>> then that cluster monitor is becoming active. Hence not going to terminated
>> state. Looking into that now..
>>
> What is the State Transition in this case? Is it Terminating to Active? If
> so we might be able to generically handle this, since its a invalid state
> Transfer and mark the cluster as Invalid, and then terminate. For this, we
> need to introduce a new error state to cluster statuses. WDYT?
>

+1 to introducing error states, so that those which are in an error state
can be terminated by the relevant monitors.

But in this case the cluster should go through active --> inActive -->
terminating --> terminated. Due to the network delay in receiving the
inActive event when the member fault is received, the cluster monitor tries
to satisfy the min rule by bringing up one new member in place of the one
that got terminated. Then, when the cluster monitor receives the inActive
event, it tries to notify the parent, etc. But the newly spawned member gets
activated and the cluster monitor becomes active again. After that, the
parent monitors send the terminating notification, but somehow this active
monitor skips the terminating event.



-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

Re: [Grouping] Testing update of Developer Preview-3

Posted by Isuru Haththotuwa <is...@apache.org>.
On Tue, Oct 28, 2014 at 1:48 PM, Reka Thirunavukkarasu <re...@wso2.com>
wrote:

> Hi
>
> On Tue, Oct 28, 2014 at 1:16 PM, Isuru Haththotuwa <is...@apache.org>
> wrote:
>
>> Thanks Reka for starting this Thread.
>>
>> Found two issues related to undeploying an Application:
>>    1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
>>
>>     2. Undeploying an Application doesn't remove it properly until the
>> Member is activated. Looking in to this now.
>>
>
>
> We will need this fix for the member fault as well. If cluster monitor
> starts a member upon member fault before the whole cluster termination,
> then that cluster monitor is becoming active. Hence not going to terminated
> state. Looking into that now..
>
What is the state transition in this case? Is it Terminating to Active? If
so, we might be able to handle this generically, since it is an invalid
state transition: mark the cluster as Invalid, and then terminate it. For
this, we need to introduce a new error state to the cluster statuses. WDYT?
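
Something along these lines, as a rough sketch only (the statuses and
allowed transitions here are assumptions for illustration, not the current
ClusterStatus implementation):

    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch: an extra error state plus a transition check.
    public class ClusterStatusSketch {

        enum ClusterStatus { CREATED, ACTIVE, INACTIVE, TERMINATING, TERMINATED, ERROR }

        static final Map<ClusterStatus, Set<ClusterStatus>> ALLOWED = Map.of(
                ClusterStatus.CREATED,     EnumSet.of(ClusterStatus.ACTIVE),
                ClusterStatus.ACTIVE,      EnumSet.of(ClusterStatus.INACTIVE, ClusterStatus.TERMINATING),
                ClusterStatus.INACTIVE,    EnumSet.of(ClusterStatus.ACTIVE, ClusterStatus.TERMINATING),
                ClusterStatus.TERMINATING, EnumSet.of(ClusterStatus.TERMINATED),
                ClusterStatus.TERMINATED,  EnumSet.noneOf(ClusterStatus.class),
                ClusterStatus.ERROR,       EnumSet.of(ClusterStatus.TERMINATING));

        // An invalid transition (e.g. Terminating -> Active) marks the cluster
        // as ERROR so a monitor can pick it up and terminate it.
        static ClusterStatus transition(ClusterStatus current, ClusterStatus requested) {
            return ALLOWED.getOrDefault(current, Set.of()).contains(requested)
                    ? requested
                    : ClusterStatus.ERROR;
        }

        public static void main(String[] args) {
            System.out.println(transition(ClusterStatus.TERMINATING, ClusterStatus.ACTIVE)); // ERROR
            System.out.println(transition(ClusterStatus.ACTIVE, ClusterStatus.INACTIVE));    // INACTIVE
        }
    }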


Re: [Grouping] Testing update of Developer Preview-3

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi

On Tue, Oct 28, 2014 at 1:16 PM, Isuru Haththotuwa <is...@apache.org>
wrote:

> Thanks Reka for starting this Thread.
>
> Found two issues related to undeploying an Application:
>    1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
>
>     2. Undeploying an Application doesn't remove it properly until the
> Member is activated. Looking in to this now.
>


We will need this fix for the member fault case as well. If the cluster
monitor starts a member upon a member fault before the whole cluster is
terminated, then that cluster monitor becomes active again and hence never
goes to the terminated state. Looking into that now.

Thanks,
Reka





-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

Re: [Grouping] Testing update of Developer Preview-3

Posted by Isuru Haththotuwa <is...@apache.org>.
Thanks Reka for starting this Thread.

Found two issues related to undeploying an Application:
   1. https://issues.apache.org/jira/browse/STRATOS-918 - Fixed now.
   2. Undeploying an Application doesn't remove it properly until the
Member is activated. Looking into this now.


RE: [Grouping] Testing update of Developer Preview-3

Posted by "Martin Eppel (meppel)" <me...@cisco.com>.
Hi Reka,

I am not sure about all the use cases, but I followed up with our team regarding the termination order when we discussed it in a previous email thread (see the attached response), and it seems necessary to consider the startup order during termination. I’ll copy him on the email for some help with the use cases.

Hi Shaheed,
can you help us with some use cases for Reka’s question below (maintaining reverse startup order when terminating VMs)? Quoting Reka’s question from the email thread below:

“Do you have any real use case where it requires terminating the VMs using the startup order? Since one of the VMs of B becomes faulty and it is not in the correct startup order (it is started after A), do we still need to consider the startup order for the rest of the dependent VMs? Sorry if I have misunderstood.”

Thanks

Martin



Re: [Grouping] Testing update of Developer Preview-3

Posted by Reka Thirunavukkarasu <re...@wso2.com>.
Hi Martin,

Thanks for bringing this up.

On Wed, Oct 29, 2014 at 2:13 AM, Martin Eppel (meppel) <me...@cisco.com>
wrote:

> Hi Reka,
>
> Eventually termination has to follow (in reverse order) the startup
> sequence:
>
> If A depends on B, B on C (startup sequence first C, second B, third A)
>
> And B is getting faulty (leaves active state),
>
> we have to
>
> with termination flag == kill_dependents: terminate A
>
> with termination flag == kill_all: terminate first A, and second C (to
> maintain startup order)
>
> with termination flag == kill_none: terminate none
>

Yes, this will definitely be an improvement to the termination behaviour.
As I discussed in the mail "[Grouping][Part-2] Handling termination
Behaviour in Composite Application Monitor Hierarchy", as of now we support
terminating the dependents, or all of the children, in parallel. We will
consider terminating the clusters using the startup order as time permits.

Do you have any real use case that requires terminating the VMs using the
startup order? Since one of the VMs of B becomes faulty and it is not in
the correct startup order (it is started after A), do we still need to
consider the startup order for the rest of the dependent VMs? Sorry if I
have misunderstood.


> Ad case 2.)
>
> The scenario should work as follows:
>
> We have a cluster A and a cluster B (all in the same group).
>
> Cluster A depends on cluster B to be active (dependency flag is set to
> kill_dependents).
>
> To test, we terminate a VM from cluster B, which should set cluster B
> inactive and the group and the parent (group or application) will be
> notified.
>
> Since cluster B is inactive (and I assume all other VMs in the cluster as
> well?) cluster A will be terminated as well.
>
> Correct?
>

Yes, we have done a similar implementation to terminate the dependent
cluster.


Thanks,
Reka



-- 
Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.:http://wso2.com,
Mobile: +94776442007

RE: [Grouping] Testing update of Developer Preview-3

Posted by "Martin Eppel (meppel)" <me...@cisco.com>.
Hi Reka,

Eventually, termination has to follow the startup sequence in reverse order:

If A depends on B, and B on C (startup sequence: first C, second B, third A),

and B becomes faulty (leaves the active state),

we have to:

with termination flag == kill_dependents: terminate A

with termination flag == kill_all: terminate first A, and second C (to maintain the startup order)

with termination flag == kill_none: terminate none
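
To make that mapping concrete, here is a small illustrative Java sketch (the names and structure are my own placeholders, not Stratos code) that, given the startup sequence C, B, A and a faulty B, selects what to terminate under each flag:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Illustrative sketch of the reverse-startup-order rule described above.
    public class ReverseOrderTerminationSketch {

        enum Flag { KILL_NONE, KILL_DEPENDENTS, KILL_ALL }

        // startupOrder is the order the members were started in, e.g. [C, B, A],
        // meaning B depends on C and A depends on B.
        static List<String> toTerminate(List<String> startupOrder, String faulty, Flag flag) {
            List<String> result = new ArrayList<>();
            int faultyIndex = startupOrder.indexOf(faulty);
            switch (flag) {
                case KILL_NONE:
                    break;                                   // terminate nothing
                case KILL_DEPENDENTS:
                    // everything started after the faulty member, newest first
                    result.addAll(startupOrder.subList(faultyIndex + 1, startupOrder.size()));
                    Collections.reverse(result);
                    break;
                case KILL_ALL:
                    // all remaining members, in reverse startup order
                    result.addAll(startupOrder);
                    result.remove(faulty);
                    Collections.reverse(result);
                    break;
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> order = List.of("C", "B", "A");
            System.out.println(toTerminate(order, "B", Flag.KILL_DEPENDENTS)); // [A]
            System.out.println(toTerminate(order, "B", Flag.KILL_ALL));        // [A, C]
        }
    }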


Ad case 2.)

The scenario should work as follows:

We have a cluster A and a cluster B (all in the same group).

Cluster A depends on cluster B to be active (dependency flag is set to kill_dependents).

To test, we terminate a VM from cluster B, which should set cluster B inactive, and the group and the parent (group or application) will be notified.

Since cluster B is inactive (and I assume all other VMs in the cluster as well?), cluster A will be terminated as well.

Correct?

Btw,

I attached the initial document for grouping, which describes the start / stop sequences (see the section “Core Behaviour: Starting and Stopping”).

Thanks

Martin
