Posted to dev@airavata.apache.org by Sachith Withana <sw...@gmail.com> on 2014/01/16 16:58:51 UTC

Orchestrator Overview Meeting Summary

Hi All,

This is the summary of the meeting we had on Wednesday (01/16/14) on the
Orchestrator.

Orchestrator Overview
I introduced the Orchestrator and have attached the presentation.

Adding Job Cloning capability to the Orchestrator API
Saminda suggested that we should have a way to clone an existing job and
run it with different inputs, on a different host, or both. Here's the
Jira for that [1].
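A minimal sketch of what the proposed cloning semantics might look like (all names here are illustrative, not the actual Orchestrator API): any field not explicitly overridden keeps the original job's value.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of job cloning (AIRAVATA-989): names are illustrative,
// not the real Orchestrator API. Anything not overridden stays as in the
// original job.
public class JobCloneSketch {
    public static Map<String, String> cloneJobConfig(Map<String, String> original,
                                                     Map<String, String> overrides) {
        Map<String, String> clone = new HashMap<>(original); // start from the original job
        clone.putAll(overrides);                             // change inputs, host, or both
        return clone;
    }
}
```

For example, passing only a new "host" entry would rerun the same job on a different host with every other setting unchanged.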

Gfac embedded vs Gfac as a service
We have implemented the embedded Gfac and decided to use it for now.
Gfac as a service is a long-term goal. Until the Orchestrator is
complete, we will use the embedded Gfac.

Job statuses for the Orchestrator and the Gfac
We need to come up with multi-level job statuses: user-level,
Orchestrator-level, and Gfac-level. The mapping between them is also
open for discussion; we didn't reach a conclusion and will revisit the
topic in an upcoming meeting.
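One way the multi-level statuses could be sketched (a hypothetical mapping; the actual levels and their names are exactly what is still under discussion) is a coarse user-level view derived from finer Gfac-level states:

```java
// Hypothetical sketch of multi-level job statuses. The enum values and the
// mapping are illustrative only; the real levels are still being decided.
public class JobStatusSketch {
    public enum GfacStatus { STAGING_INPUTS, SUBMITTED, EXECUTING, STAGING_OUTPUTS, COMPLETED, FAILED }
    public enum UserStatus { PREPARING, RUNNING, DONE, ERROR }

    // Map each fine-grained Gfac state onto the status a user would see.
    public static UserStatus toUserStatus(GfacStatus s) {
        switch (s) {
            case STAGING_INPUTS:
                return UserStatus.PREPARING;
            case SUBMITTED:
            case EXECUTING:
            case STAGING_OUTPUTS:
                return UserStatus.RUNNING;
            case COMPLETED:
                return UserStatus.DONE;
            default:
                return UserStatus.ERROR;
        }
    }
}
```

An Orchestrator-level enum would sit between these two in the same fashion.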


[1] https://issues.apache.org/jira/browse/AIRAVATA-989

-- 
Thanks,
Sachith Withana

Re: Orchestrator Overview Meeting Summary

Posted by Suresh Marru <sm...@apache.org>.
Thanks Sachith for this overview talk. Nice summary.

Suresh



Re: Orchestrator Overview Meeting Summary

Posted by Saminda Wijeratne <sa...@gmail.com>.
On Sun, Jan 19, 2014 at 4:03 PM, Lahiru Gunathilake <gl...@gmail.com>wrote:

> Hi Saminda,
>
> I am writing this to clarify the CIPRES scenario; please correct me if I
> am wrong.
>
> CIPRES users create experiments with all the parameters.
>
> The easy case is that they simply give the input values and run jobs (because
> job-related configuration is stored in the application descriptor, so they
> don't have to send job configuration data).
>
> Second scenario is when they want to change the job configuration data.
>
For CIPRES, the way they manage the second scenario when needed is by defining
two tools for different deployments of the same application (the name of the
tool somewhat reflects the deployment location).

>
> To handle this case we are trying to think of a template approach ?
>
We are just considering templates as another way of solving it.

>
> If my understanding above is correct, we need to save the job
> configuration data each experiment has used if it differs from the
> original. Or we need to create a separate App descriptor each time a
> user changes some parameter in the AD (this is not a good approach).
>
> How about we create a base Application descriptor and associate it with the
> runtime job data used for each experiment invocation? In that case we have
> to save the finally used job configuration, and users can view this
> information to analyse the experiment results. In this case users can send
> this data along with the request (this works fine with the Orchestrator now
> if the user sends the Application Descriptor along with the request).
>
+1

>
> WDYT ?
>
> Lahiru
>
>
> On Mon, Jan 20, 2014 at 12:40 AM, Suresh Marru <sm...@apache.org> wrote:
>
>>
>> On Jan 19, 2014, at 12:38 PM, Saminda Wijeratne <sa...@gmail.com>
>> wrote:
>>
>> > My initial idea is to have an experiment template saved, and later users
>> would launch an experiment template as many times as they want, each time
>> creating an experiment only at launch. If users want to make small
>> changes, they could take the template, change it, and save it again either
>> to a new template or to the same one. But I was wondering how intuitive
>> such an approach would be for the user to follow.
>>
>> I like the template approach as one implementation option, but I wonder
>> whether it is applicable to the current discussion of cloning. Let me
>> explain my thoughts more clearly.
>>
>> For eScience use cases, the workflow (or application in this case) is the
>> recipe and the experiment is an instance of executing the recipe. So
>> naturally workflow and application descriptions are templates, instantiated
>> for each execution. But here I see the use case as cloning the experiment
>> (an instance of the end result) and not the application/workflow template
>> (which is what Amila alluded to earlier on this thread). By the exploratory
>> nature of science, experiments are trial and error, so it may not be
>> possible a priori to determine re-usable experiments and template them.
>> Rather, users roll the dice, and when they start seeing expected results
>> they would like to clone the experiments and fine-tune them, or repeat them
>> over finer data, and so forth. So in summary, I think applications/workflows
>> are good candidates for the template approach and experiments are good for
>> after-the-fact cloning.
>>
I think what you mentioned about the idea of a template for a gateway makes
a lot of sense.

>
>> I say cloning experiments can be implemented as templates, because if a
>> user has a huge list of executed experiments then it will be tough to
>> navigate through the workspace to find the ones they want to clone. So an
>> option can be provided to mark the ones they think are worth cloning in
>> the future and make the shorter list available. This arguably mimics
>> templates.
>>
> This sounds like a new use case for Airavata?

Just a thought,

Experiment = Experiment ID + Experiment Metadata (e.g. name, user,
date/time, status...) + Experiment Configuration (e.g. inputs, descriptors to
use...)

IMO cloning an experiment is just duplicating the "Experiment Configuration"
and creating a new "Experiment ID" + "Experiment Metadata".
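That split could be sketched as a toy model (these are not actual Airavata classes, and the field names are made up for illustration): cloning copies only the Configuration and mints a fresh ID and fresh Metadata.

```java
import java.util.Map;
import java.util.UUID;

// Toy model of Experiment = ID + Metadata + Configuration.
// Not actual Airavata classes; names are illustrative only.
public class ExperimentSketch {
    public record Metadata(String name, String user, long createdAt) {}
    public record Configuration(Map<String, String> inputs, String descriptorId) {}
    public record Experiment(String id, Metadata meta, Configuration config) {}

    // Cloning duplicates only the Configuration; ID and Metadata are new.
    public static Experiment cloneExperiment(Experiment original, String user) {
        Metadata freshMeta = new Metadata(original.meta().name() + " (clone)",
                                          user, System.currentTimeMillis());
        return new Experiment(UUID.randomUUID().toString(), freshMeta, original.config());
    }
}
```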


>> Suresh
>>
>> >
>> >
>> > On Sun, Jan 19, 2014 at 7:58 AM, Suresh Marru <sm...@apache.org>
>> wrote:
>> > I see Amila’s point, and it can be argued that the Airavata Client can
>> fetch an experiment, modify what is needed, and re-submit it as a new
>> experiment.
>> >
>> > But I agree with Saminda: if an experiment has dozens of inputs and if,
>> say, only a parameter or scheduling info needs to change, cloning makes it
>> useful. The challenge though is how to communicate what needs to be
>> changed. Should we assume anything not explicitly passed remains as in the
>> original experiment and the ones passed are overridden?
>> >
>> > I think the word clone seems fine and also aligns with the Java Clone
>> interpretation [1].
>> >
>> > This brings up another question: should there be only create, launch,
>> clone and terminate experiments, or should we also have a configure
>> experiment? The purpose of configure is to let the client slowly load up
>> the object as it gets the information and only launch it when it is ready.
>> That way portals need not have an intermediate persistence for these
>> objects, and it facilitates users building an experiment over long
>> sessions. Thoughts?
>> >
>> > Suresh
>> > [1] -
>> http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()
>> >
>> > On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <sa...@gmail.com>
>> wrote:
>> >
>> > > an experiment will not define new descriptors but rather point to
>> existing descriptor(s). IMO (correct me if I'm wrong),
>> > >
>> > > Experiment = Application + Input value(s) for application +
>> Configuration data for managing job
>> > >
>> > > Application = Service Descriptor + Host Descriptor + Application
>> Descriptor
>> > >
>> > > Thus an experiment involves quite a lot of data that needs to be
>> specified, so it is easier to make a copy of it rather than asking the
>> user to specify all of the data again when there are only very few
>> changes compared to the original experiment. Perhaps the confusion here is
>> the word "clone"?
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <
>> thejaka.amila@gmail.com> wrote:
>> > > This seems like adding a new experiment definition (i.e. new
>> descriptors).
>> > > As far as I understood, this should be handled at the UI layer (?). For
>> the backend it will just be new descriptor definitions (?).
>> > > Maybe I am missing something.
>> > >
>> > > - AJ
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <
>> samindaw@gmail.com> wrote:
>> > > This was in accordance with the CIPRES use-case scenario, where users
>> would want to rerun their tasks but with a subset of slightly different
>> parameters/inputs. This is particularly useful for them because their tasks
>> can include more than 20-30 parameters most of the time.
>> > >
>> > >
>> > > On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com>
>> wrote:
>> > > Hi Amila,
>> > >
>> > > The use of the word "cloning" is misleading.
>> > >
>> > > Saminda suggested that we would need to run the application on a
>> different host (based on the user's intuition of host availability/
>> efficiency), keeping all the other variables constant (input changes are
>> also allowed). As an example: if a job keeps failing on one host, the user
>> should be allowed to submit the job to another host.
>> > >
>> > > We should come up with a different name for the scenario.
>> > >
>> > >
>> > > On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <
>> thejaka.amila@gmail.com> wrote:
>> > >
>> > >
>> > >
>> > > On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <
>> swsachith@gmail.com> wrote:
>> > > Hi All,
>> > >
>> > > This is the summary of the meeting we had Wednesday( 01/16/14) on the
>> Orchestrator.
>> > >
>> > > Orchestrator Overview
>> > > I introduced the Orchestrator and have attached the presentation.
>> > >
>> > > Adding Job Cloning capability to the Orchestrator API
>> > > Saminda suggested that we should have a way to clone an existing job
>> and run it with different inputs or on a different host or both. Here's the
>> Jira for that.[1]
>> > >
>> > > I didn't quite understand what cloning does. Once descriptors are set
>> up, we can run an experiment with different inputs as many times as we
>> want. So what is the actual need for cloning?
>> > >
>> > > Thanks
>> > > Thejaka Amila
>> > >
>> > >
>> > > Gfac embedded vs Gfac as a service
>> > > We have implemented the embedded Gfac and decided to use it for now.
>> Gfac as a service is a long-term goal. Until the Orchestrator is
>> complete, we will use the embedded Gfac.
>> > >
>> > > Job statuses for the Orchestrator and the Gfac
>> > > We need to come up with multi-level job statuses: user-level,
>> Orchestrator-level, and Gfac-level. The mapping between them is also open
>> for discussion; we didn't reach a conclusion and will revisit the topic in
>> an upcoming meeting.
>> > >
>> > >
>> > > [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>> > >
>> > > --
>> > > Thanks,
>> > > Sachith Withana
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > --
>> > > Thanks,
>> > > Sachith Withana
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>>
>>
>
>
> --
> System Analyst Programmer
> PTI Lab
> Indiana University
>

Re: Orchestrator Overview Meeting Summary

Posted by Lahiru Gunathilake <gl...@gmail.com>.
Hi Saminda,

I am writing this to clarify the CIPRES scenario; please correct me if I am
wrong.

CIPRES users create experiments with all the parameters.

The easy case is that they simply give the input values and run jobs (because
job-related configuration is stored in the application descriptor, so they
don't have to send job configuration data).

Second scenario is when they want to change the job configuration data.

To handle this case we are trying to think of a template approach ?

If my understanding above is correct, we need to save the job configuration
data each experiment has used if it differs from the original. Or we need to
create a separate App descriptor each time a user changes some parameter in
the AD (this is not a good approach).

How about we create a base Application descriptor and associate it with the
runtime job data used for each experiment invocation? In that case we have to
save the finally used job configuration, and users can view this information
to analyse the experiment results. In this case users can send this data
along with the request (this works fine with the Orchestrator now if the user
sends the Application Descriptor along with the request).
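This proposal could be sketched roughly as follows (hypothetical names, not actual Airavata code): runtime job data is merged over the base Application Descriptor, and the finally used configuration is saved per experiment for later analysis.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a base Application Descriptor whose settings are
// overridden by runtime job data sent with each invocation. The finally used
// configuration is recorded per experiment so users can analyse results later.
public class InvocationSketch {
    private final Map<String, String> baseDescriptor;
    private final Map<String, Map<String, String>> usedConfigs = new HashMap<>();

    public InvocationSketch(Map<String, String> baseDescriptor) {
        this.baseDescriptor = baseDescriptor;
    }

    // Merge runtime data over the base descriptor and record what was used.
    public Map<String, String> invoke(String experimentId, Map<String, String> runtimeJobData) {
        Map<String, String> used = new HashMap<>(baseDescriptor);
        used.putAll(runtimeJobData);         // runtime values win over the base
        usedConfigs.put(experimentId, used); // saved for post-hoc analysis
        return used;
    }

    public Map<String, String> usedConfigFor(String experimentId) {
        return usedConfigs.get(experimentId);
    }
}
```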

WDYT ?

Lahiru




-- 
System Analyst Programmer
PTI Lab
Indiana University

Re: Orchestrator Overview Meeting Summary

Posted by Suresh Marru <sm...@apache.org>.
On Jan 19, 2014, at 12:38 PM, Saminda Wijeratne <sa...@gmail.com> wrote:

> My initial idea is to have an experiment template saved, and later users would launch an experiment template as many times as they want, each time creating an experiment only at launch. If users want to make small changes, they could take the template, change it, and save it again either to a new template or to the same one. But I was wondering how intuitive such an approach would be for the user to follow.

I like the template approach as one implementation option, but I wonder whether it is applicable to the current discussion of cloning. Let me explain my thoughts more clearly.

For eScience use cases, the workflow (or application in this case) is the recipe and the experiment is an instance of executing the recipe. So naturally workflow and application descriptions are templates, instantiated for each execution. But here I see the use case as cloning the experiment (an instance of the end result) and not the application/workflow template (which is what Amila alluded to earlier on this thread). By the exploratory nature of science, experiments are trial and error, so it may not be possible a priori to determine re-usable experiments and template them. Rather, users roll the dice, and when they start seeing expected results they would like to clone the experiments and fine-tune them, or repeat them over finer data, and so forth. So in summary, I think applications/workflows are good candidates for the template approach and experiments are good for after-the-fact cloning.

I say cloning experiments can be implemented as templates, because if a user has a huge list of executed experiments then it will be tough to navigate through the workspace to find the ones they want to clone. So an option can be provided to mark the ones they think are worth cloning in the future and make the shorter list available. This arguably mimics templates.

Suresh



Re: Orchestrator Overview Meeting Summary

Posted by Saminda Wijeratne <sa...@gmail.com>.
My initial idea is to have an experiment template saved, and later users
would launch an experiment template as many times as they want, each time
creating an experiment only at launch. If users want to make small
changes, they could take the template, change it, and save it again either
to a new template or to the same one. But I was wondering how intuitive
such an approach would be for the user to follow.



Re: Orchestrator Overview Meeting Summary

Posted by Suresh Marru <sm...@apache.org>.
I am yet to finish the Airavata API thrift files related to the Orchestrator, but I have just committed partial files to - https://svn.apache.org/repos/asf/airavata/trunk/modules/thrift-interfaces/

I will finish them and ask for broader feedback on all the API methods, but related to this discussion: can we have an explicit configure call? Use cases that prefer to persist locally and only call Airavata once they have the full experiment object loaded up could then use a ConfigureAndLaunchExperiment.

I will add detailed comments and intended use to these IDLs. I am trying to model them along the lines of the widely used thrift definitions of Evernote, Cassandra, and Facebook.
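As a rough sketch of the API shape under discussion (illustrative method names and an in-memory stub, not the committed thrift definitions), the incremental configure-then-launch flow next to a one-shot convenience call could look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: an explicit configure step lets a client load an
// experiment up slowly, while configureAndLaunchExperiment serves clients
// that already hold the full experiment object. Names are illustrative.
interface OrchestratorClient {
    String createExperiment(String name);
    void configureExperiment(String experimentId, String key, String value);
    void launchExperiment(String experimentId);
    String configureAndLaunchExperiment(String name, Map<String, String> config);
}

class InMemoryOrchestratorClient implements OrchestratorClient {
    private final Map<String, Map<String, String>> experiments = new HashMap<>();
    private final Map<String, String> states = new HashMap<>();

    public String createExperiment(String name) {
        String id = name + "-" + UUID.randomUUID();
        experiments.put(id, new HashMap<>());
        states.put(id, "CREATED");
        return id;
    }

    public void configureExperiment(String id, String key, String value) {
        // builds the experiment object up incrementally, one call at a time
        experiments.get(id).put(key, value);
        states.put(id, "CONFIGURED");
    }

    public void launchExperiment(String id) {
        states.put(id, "LAUNCHED");
    }

    public String configureAndLaunchExperiment(String name, Map<String, String> config) {
        // one-shot path for clients that persisted the experiment locally
        String id = createExperiment(name);
        experiments.get(id).putAll(config);
        launchExperiment(id);
        return id;
    }

    String getState(String id) { return states.get(id); }
}
```

Either path ends in the same launched state; the difference is only whether Airavata or the portal holds the partially built experiment.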

Suresh

On Jan 20, 2014, at 10:14 AM, Marlon Pierce <ma...@iu.edu> wrote:

> I have two minds on the "configure experiment" method. On the one hand,
> most of the gateways we are taking use cases from already have a local
> persistence mechanism for this, so we don't have a driver. And I'm sure
> there will be implementation subtleties. On the other hand, it would be
> a good feature to provide for new gateways. Telling them to go implement
> a DB for this by themselves would be bad practice, especially when we
> should have the experience to do it correctly.
> 
> The AMBER portal could be a good use case. I think this is currently in
> the "nice to have" list.
> 
> 
> Marlon
> 
> On 1/19/14 10:58 AM, Suresh Marru wrote:
>> I see Amila’s point and can be argued that, Airavata Client can fetch experiment, modify what is needed and re-submit as a new experiment.
>> 
>> But I agree with Saminda, if an experiment has dozens of inputs and if say only parameter or scheduling info needs to be changes, cloning makes it useful. The challenge though is how to communicate what all needs to be changed? Should we assume anything explicitly not passed remains as original experiment and the ones passed are overridden? 
>> 
>> I think the word clone seems fine and also aligns with the Java Clone interpretation [1].
>> 
>> This brings up another question, should there be only create, launch, clone and terminate experiments or should we also have a configure experiment? The purpose of configure is to let the client slowly load up the object as it has the information and only launch it when it is ready. That way portals need not have an intermediate persistence for these objects and facilitate users to build an experiment in long sessions. Thought?
>> 
>> Suresh
>> [1] - http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()
>> 
>> On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <sa...@gmail.com> wrote:
>> 
>>> an experiment will not define new descriptors but rather point to an existing descriptor(s). IMO (correct me if I'm wrong),
>>> 
>>> Experiment = Application + Input value(s) for application + Configuration data for managing job
>>> 
>>> Application = Service Descriptor + Host Descriptor + Application Descriptor
>>> 
>>> Thus for an experiment it involves quite the amount of data of which needs to be specified. Thus it is easier to make a copy of it rather than asking the user to specify all of the data again when only there are very few changes compared to original experiment. Perhaps the confusion here is the word "clone"?
>>> 
>>> 
>>> On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <th...@gmail.com> wrote:
>>> This seems like adding new experiment definition. (i.e. new descriptors).
>>> As far as I understood this should be handled at UI layer (?). For the backend it will just be new descriptor definitions (?).
>>> Maybe I am missing something.
>>> 
>>> - AJ
>>> 
>>> 
>>> On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <sa...@gmail.com> wrote:
>>> This was in accordance with the CIPRES usecase scenario where users would want to rerun their tasks but with subset of slightly different parameters/input. This is particularly useful for them because their tasks can include more than 20-30 parameters most of the time.
>>> 
>>> 
>>> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com> wrote:
>>> Hi Amila,
>>> 
>>> The use of the word "cloning" is misleading.
>>> 
>>> Saminda suggested that, we would need to run the application in a different host ( based on the users intuition of the host availability/ efficiency) keeping all the other variables constant( inputs changes are also allowed). As an example: if a job keeps failing on one host, the user should be allowed to submit the job to another host. 
>>> 
>>> We should come up with a different name for the scenario.. 
>>> 
>>> 
>>> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <th...@gmail.com> wrote:
>>> 
>>> 
>>> 
>>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com> wrote:
>>> Hi All,
>>> 
>>> This is the summary of the meeting we had Wednesday( 01/16/14) on the Orchestrator.
>>> 
>>> Orchestrator Overview
>>> I Introduced the Orchestrator and I have attached the presentation herewith.
>>> 
>>> Adding Job Cloning capability to the Orchestrator API
>>> Saminda suggested that we should have a way to clone an existing job and run it with different inputs or on a different host or both. Here's the Jira for that.[1]
>>> 
>>> I didnt quite understand what cloning does. Once descriptors are setup we can run experiment with different inputs, many times we want. So what is the actual need to have cloning ?
>>> 
>>> Thanks
>>> Thejaka Amila
>>> 
>>> 
>>> Gfac embedded vs Gfac as a service
>>> We have implemented the embedded Gfac and decided to use it for now. 
>>> Gfac as a service is a long term goal to have. Until we get the Orchestrator complete we will use the embedded Gfac. 
>>> 
>>> Job statuses for the Orchestrator and the Gfac
>>> We need to come up with multi-level job statuses. User-level, Orchestartor-level and the Gfac-level statuses. Also the mapping between them is open for discussion. We didn't come to a conclusion on the matter. We will discuss this topic in an upcoming meeting. 
>>> 
>>> 
>>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>> 
>>> -- 
>>> Thanks,
>>> Sachith Withana
>>> 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Thanks,
>>> Sachith Withana
>>> 
>>> 
>>> 
>>> 
> 


Re: Orchestrator Overview Meeting Summary

Posted by Marlon Pierce <ma...@iu.edu>.
I am of two minds about the "configure experiment" method. On the one
hand, most of the gateways we are taking use cases from already have a
local persistence mechanism for this, so we don't have a driving use
case. And I'm sure there will be implementation subtleties. On the other
hand, it would be a good feature to provide for new gateways. Telling
them to implement a DB for this by themselves would be bad practice,
especially when we should have the experience to do it correctly.

The AMBER portal could be a good use case. I think this is currently on
the "nice to have" list.


Marlon

On 1/19/14 10:58 AM, Suresh Marru wrote:
> I see Amila’s point and can be argued that, Airavata Client can fetch experiment, modify what is needed and re-submit as a new experiment.
>
> But I agree with Saminda, if an experiment has dozens of inputs and if say only parameter or scheduling info needs to be changes, cloning makes it useful. The challenge though is how to communicate what all needs to be changed? Should we assume anything explicitly not passed remains as original experiment and the ones passed are overridden? 
>
> I think the word clone seems fine and also aligns with the Java Clone interpretation [1].
>
> This brings up another question, should there be only create, launch, clone and terminate experiments or should we also have a configure experiment? The purpose of configure is to let the client slowly load up the object as it has the information and only launch it when it is ready. That way portals need not have an intermediate persistence for these objects and facilitate users to build an experiment in long sessions. Thought?
>
> Suresh
> [1] - http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()
>
> On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <sa...@gmail.com> wrote:
>
>> an experiment will not define new descriptors but rather point to an existing descriptor(s). IMO (correct me if I'm wrong),
>>
>> Experiment = Application + Input value(s) for application + Configuration data for managing job
>>
>> Application = Service Descriptor + Host Descriptor + Application Descriptor
>>
>> Thus for an experiment it involves quite the amount of data of which needs to be specified. Thus it is easier to make a copy of it rather than asking the user to specify all of the data again when only there are very few changes compared to original experiment. Perhaps the confusion here is the word "clone"?
>>
>>
>> On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <th...@gmail.com> wrote:
>> This seems like adding new experiment definition. (i.e. new descriptors).
>> As far as I understood this should be handled at UI layer (?). For the backend it will just be new descriptor definitions (?).
>> Maybe I am missing something.
>>
>> - AJ
>>
>>
>> On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <sa...@gmail.com> wrote:
>> This was in accordance with the CIPRES usecase scenario where users would want to rerun their tasks but with subset of slightly different parameters/input. This is particularly useful for them because their tasks can include more than 20-30 parameters most of the time.
>>
>>
>> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com> wrote:
>> Hi Amila,
>>
>> The use of the word "cloning" is misleading.
>>
>> Saminda suggested that, we would need to run the application in a different host ( based on the users intuition of the host availability/ efficiency) keeping all the other variables constant( inputs changes are also allowed). As an example: if a job keeps failing on one host, the user should be allowed to submit the job to another host. 
>>
>> We should come up with a different name for the scenario.. 
>>
>>
>> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <th...@gmail.com> wrote:
>>
>>
>>
>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com> wrote:
>> Hi All,
>>
>> This is the summary of the meeting we had Wednesday( 01/16/14) on the Orchestrator.
>>
>> Orchestrator Overview
>> I Introduced the Orchestrator and I have attached the presentation herewith.
>>
>> Adding Job Cloning capability to the Orchestrator API
>> Saminda suggested that we should have a way to clone an existing job and run it with different inputs or on a different host or both. Here's the Jira for that.[1]
>>
>> I didnt quite understand what cloning does. Once descriptors are setup we can run experiment with different inputs, many times we want. So what is the actual need to have cloning ?
>>
>> Thanks
>> Thejaka Amila
>>  
>>
>> Gfac embedded vs Gfac as a service
>> We have implemented the embedded Gfac and decided to use it for now. 
>> Gfac as a service is a long term goal to have. Until we get the Orchestrator complete we will use the embedded Gfac. 
>>
>> Job statuses for the Orchestrator and the Gfac
>> We need to come up with multi-level job statuses. User-level, Orchestartor-level and the Gfac-level statuses. Also the mapping between them is open for discussion. We didn't come to a conclusion on the matter. We will discuss this topic in an upcoming meeting. 
>>
>>
>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>
>> -- 
>> Thanks,
>> Sachith Withana
>>
>>
>>
>>
>>
>> -- 
>> Thanks,
>> Sachith Withana
>>
>>
>>
>>


Re: Orchestrator Overview Meeting Summary

Posted by Suresh Marru <sm...@apache.org>.
I see Amila's point, and it can be argued that the Airavata client can fetch an experiment, modify what is needed, and re-submit it as a new experiment.

But I agree with Saminda: if an experiment has dozens of inputs and, say, only a parameter or the scheduling info needs to be changed, cloning makes it useful. The challenge, though, is how to communicate what needs to be changed. Should we assume anything not explicitly passed remains as in the original experiment, and anything passed is overridden?

I think the word clone seems fine and also aligns with the Java Clone interpretation [1].

This brings up another question: should there be only create, launch, clone, and terminate experiment calls, or should we also have a configure experiment? The purpose of configure is to let the client slowly load up the object as it gets the information and only launch when it is ready. That way, portals need not keep an intermediate persistence store for these objects, and users can build up an experiment over long sessions. Thoughts?
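The override semantics raised above (anything not explicitly passed stays as in the original experiment, anything passed wins) could be sketched like this; the field names are hypothetical, not the actual Airavata model:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of clone-with-overrides: start from a copy of the
// original experiment's settings, then apply only the fields the caller
// explicitly passed. Everything not mentioned is inherited unchanged.
class ExperimentCloner {
    static Map<String, String> cloneWithOverrides(Map<String, String> original,
                                                  Map<String, String> overrides) {
        Map<String, String> copy = new HashMap<>(original); // inherit everything
        copy.putAll(overrides);                             // explicit values win
        return copy;
    }
}
```

A user re-running a 30-parameter experiment on a different host would then pass only the new host, leaving the other 29 parameters untouched.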

Suresh
[1] - http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()

On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <sa...@gmail.com> wrote:

> an experiment will not define new descriptors but rather point to an existing descriptor(s). IMO (correct me if I'm wrong),
> 
> Experiment = Application + Input value(s) for application + Configuration data for managing job
> 
> Application = Service Descriptor + Host Descriptor + Application Descriptor
> 
> Thus for an experiment it involves quite the amount of data of which needs to be specified. Thus it is easier to make a copy of it rather than asking the user to specify all of the data again when only there are very few changes compared to original experiment. Perhaps the confusion here is the word "clone"?
> 
> 
> On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <th...@gmail.com> wrote:
> This seems like adding new experiment definition. (i.e. new descriptors).
> As far as I understood this should be handled at UI layer (?). For the backend it will just be new descriptor definitions (?).
> Maybe I am missing something.
> 
> - AJ
> 
> 
> On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <sa...@gmail.com> wrote:
> This was in accordance with the CIPRES usecase scenario where users would want to rerun their tasks but with subset of slightly different parameters/input. This is particularly useful for them because their tasks can include more than 20-30 parameters most of the time.
> 
> 
> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com> wrote:
> Hi Amila,
> 
> The use of the word "cloning" is misleading.
> 
> Saminda suggested that, we would need to run the application in a different host ( based on the users intuition of the host availability/ efficiency) keeping all the other variables constant( inputs changes are also allowed). As an example: if a job keeps failing on one host, the user should be allowed to submit the job to another host. 
> 
> We should come up with a different name for the scenario.. 
> 
> 
> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <th...@gmail.com> wrote:
> 
> 
> 
> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com> wrote:
> Hi All,
> 
> This is the summary of the meeting we had Wednesday( 01/16/14) on the Orchestrator.
> 
> Orchestrator Overview
> I Introduced the Orchestrator and I have attached the presentation herewith.
> 
> Adding Job Cloning capability to the Orchestrator API
> Saminda suggested that we should have a way to clone an existing job and run it with different inputs or on a different host or both. Here's the Jira for that.[1]
> 
> I didnt quite understand what cloning does. Once descriptors are setup we can run experiment with different inputs, many times we want. So what is the actual need to have cloning ?
> 
> Thanks
> Thejaka Amila
>  
> 
> Gfac embedded vs Gfac as a service
> We have implemented the embedded Gfac and decided to use it for now. 
> Gfac as a service is a long term goal to have. Until we get the Orchestrator complete we will use the embedded Gfac. 
> 
> Job statuses for the Orchestrator and the Gfac
> We need to come up with multi-level job statuses. User-level, Orchestartor-level and the Gfac-level statuses. Also the mapping between them is open for discussion. We didn't come to a conclusion on the matter. We will discuss this topic in an upcoming meeting. 
> 
> 
> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
> 
> -- 
> Thanks,
> Sachith Withana
> 
> 
> 
> 
> 
> -- 
> Thanks,
> Sachith Withana
> 
> 
> 
> 


Re: Orchestrator Overview Meeting Summary

Posted by Saminda Wijeratne <sa...@gmail.com>.
An experiment will not define new descriptors but rather point to
existing descriptor(s). IMO (correct me if I'm wrong),

Experiment = Application + Input value(s) for application + Configuration
data for managing job

Application = Service Descriptor + Host Descriptor + Application Descriptor

Thus an experiment involves quite a lot of data that needs to be
specified, so it is easier to make a copy of it than to ask the user to
specify all of the data again when there are only a few changes compared
to the original experiment. Perhaps the confusion here is the word
"clone"?
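The composition above can be sketched as follows (illustrative Java, not the actual Airavata registry classes):

```java
import java.util.Map;

// Application = Service Descriptor + Host Descriptor + Application Descriptor.
// Field values here stand in for references to descriptors that already
// exist in the registry; an experiment never defines new ones.
final class Application {
    final String serviceDescriptor;
    final String hostDescriptor;
    final String applicationDescriptor;

    Application(String service, String host, String app) {
        this.serviceDescriptor = service;
        this.hostDescriptor = host;
        this.applicationDescriptor = app;
    }
}

// Experiment = Application + input values + configuration for managing the job.
final class Experiment {
    final Application application;       // points to existing descriptors
    final Map<String, String> inputs;    // input values for the application
    final Map<String, String> jobConfig; // job-management configuration

    Experiment(Application application, Map<String, String> inputs,
               Map<String, String> jobConfig) {
        this.application = application;
        this.inputs = inputs;
        this.jobConfig = jobConfig;
    }
}
```

Two experiments can share the same Application (i.e. the same descriptors) and differ only in inputs or job configuration, which is exactly the case copying is meant to make cheap.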


On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara
<th...@gmail.com>wrote:

> This seems like adding new experiment definition. (i.e. new descriptors).
> As far as I understood this should be handled at UI layer (?). For the
> backend it will just be new descriptor definitions (?).
> Maybe I am missing something.
>
> - AJ
>
>
> On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <sa...@gmail.com>wrote:
>
>> This was in accordance with the CIPRES usecase scenario where users would
>> want to rerun their tasks but with subset of slightly different
>> parameters/input. This is particularly useful for them because their tasks
>> can include more than 20-30 parameters most of the time.
>>
>>
>> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com>wrote:
>>
>>> Hi Amila,
>>>
>>> The use of the word "cloning" is misleading.
>>>
>>> Saminda suggested that, we would need to run the application in a
>>> different host ( based on the users intuition of the host availability/
>>> efficiency) keeping all the other variables constant( inputs changes are
>>> also allowed). As an example: if a job keeps failing on one host, the user
>>> should be allowed to submit the job to another host.
>>>
>>> We should come up with a different name for the scenario..
>>>
>>>
>>> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <
>>> thejaka.amila@gmail.com> wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com>wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> This is the summary of the meeting we had Wednesday( 01/16/14) on the
>>>>> Orchestrator.
>>>>>
>>>>> Orchestrator Overview
>>>>> I Introduced the Orchestrator and I have attached the presentation
>>>>> herewith.
>>>>>
>>>>> Adding Job Cloning capability to the Orchestrator API
>>>>> Saminda suggested that we should have a way to clone an existing job
>>>>> and run it with different inputs or on a different host or both. Here's the
>>>>> Jira for that.[1]
>>>>>
>>>>
>>>> I didnt quite understand what cloning does. Once descriptors are setup
>>>> we can run experiment with different inputs, many times we want. So what is
>>>> the actual need to have cloning ?
>>>>
>>>> Thanks
>>>> Thejaka Amila
>>>>
>>>>
>>>>>
>>>>> Gfac embedded vs Gfac as a service
>>>>> We have implemented the embedded Gfac and decided to use it for now.
>>>>> Gfac as a service is a long term goal to have. Until we get the
>>>>> Orchestrator complete we will use the embedded Gfac.
>>>>>
>>>>> Job statuses for the Orchestrator and the Gfac
>>>>> We need to come up with multi-level job statuses. User-level,
>>>>> Orchestartor-level and the Gfac-level statuses. Also the mapping between
>>>>> them is open for discussion. We didn't come to a conclusion on the matter.
>>>>> We will discuss this topic in an upcoming meeting.
>>>>>
>>>>>
>>>>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>>>>
>>>>> --
>>>>> Thanks,
>>>>>  Sachith Withana
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Sachith Withana
>>>
>>>
>>
>

Re: Orchestrator Overview Meeting Summary

Posted by Amila Jayasekara <th...@gmail.com>.
This seems like adding a new experiment definition (i.e. new descriptors).
As far as I understood, this should be handled at the UI layer (?). For the
backend it would just be new descriptor definitions (?).
Maybe I am missing something.

- AJ


On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <sa...@gmail.com>wrote:

> This was in accordance with the CIPRES usecase scenario where users would
> want to rerun their tasks but with subset of slightly different
> parameters/input. This is particularly useful for them because their tasks
> can include more than 20-30 parameters most of the time.
>
>
> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com>wrote:
>
>> Hi Amila,
>>
>> The use of the word "cloning" is misleading.
>>
>> Saminda suggested that, we would need to run the application in a
>> different host ( based on the users intuition of the host availability/
>> efficiency) keeping all the other variables constant( inputs changes are
>> also allowed). As an example: if a job keeps failing on one host, the user
>> should be allowed to submit the job to another host.
>>
>> We should come up with a different name for the scenario..
>>
>>
>> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <
>> thejaka.amila@gmail.com> wrote:
>>
>>>
>>>
>>>
>>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com>wrote:
>>>
>>>> Hi All,
>>>>
>>>> This is the summary of the meeting we had Wednesday( 01/16/14) on the
>>>> Orchestrator.
>>>>
>>>> Orchestrator Overview
>>>> I Introduced the Orchestrator and I have attached the presentation
>>>> herewith.
>>>>
>>>> Adding Job Cloning capability to the Orchestrator API
>>>> Saminda suggested that we should have a way to clone an existing job
>>>> and run it with different inputs or on a different host or both. Here's the
>>>> Jira for that.[1]
>>>>
>>>
>>> I didnt quite understand what cloning does. Once descriptors are setup
>>> we can run experiment with different inputs, many times we want. So what is
>>> the actual need to have cloning ?
>>>
>>> Thanks
>>> Thejaka Amila
>>>
>>>
>>>>
>>>> Gfac embedded vs Gfac as a service
>>>> We have implemented the embedded Gfac and decided to use it for now.
>>>> Gfac as a service is a long term goal to have. Until we get the
>>>> Orchestrator complete we will use the embedded Gfac.
>>>>
>>>> Job statuses for the Orchestrator and the Gfac
>>>> We need to come up with multi-level job statuses. User-level,
>>>> Orchestartor-level and the Gfac-level statuses. Also the mapping between
>>>> them is open for discussion. We didn't come to a conclusion on the matter.
>>>> We will discuss this topic in an upcoming meeting.
>>>>
>>>>
>>>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>>>
>>>> --
>>>> Thanks,
>>>>  Sachith Withana
>>>>
>>>>
>>>
>>
>>
>> --
>> Thanks,
>> Sachith Withana
>>
>>
>

Re: Orchestrator Overview Meeting Summary

Posted by Saminda Wijeratne <sa...@gmail.com>.
This was in accordance with the CIPRES use-case scenario, where users want
to rerun their tasks with a subset of slightly different parameters/inputs.
This is particularly useful for them because their tasks can include more
than 20-30 parameters most of the time.


On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <sw...@gmail.com>wrote:

> Hi Amila,
>
> The use of the word "cloning" is misleading.
>
> Saminda suggested that, we would need to run the application in a
> different host ( based on the users intuition of the host availability/
> efficiency) keeping all the other variables constant( inputs changes are
> also allowed). As an example: if a job keeps failing on one host, the user
> should be allowed to submit the job to another host.
>
> We should come up with a different name for the scenario..
>
>
> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <
> thejaka.amila@gmail.com> wrote:
>
>>
>>
>>
>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com>wrote:
>>
>>> Hi All,
>>>
>>> This is the summary of the meeting we had Wednesday( 01/16/14) on the
>>> Orchestrator.
>>>
>>> Orchestrator Overview
>>> I Introduced the Orchestrator and I have attached the presentation
>>> herewith.
>>>
>>> Adding Job Cloning capability to the Orchestrator API
>>> Saminda suggested that we should have a way to clone an existing job and
>>> run it with different inputs or on a different host or both. Here's the
>>> Jira for that.[1]
>>>
>>
>> I didnt quite understand what cloning does. Once descriptors are setup we
>> can run experiment with different inputs, many times we want. So what is
>> the actual need to have cloning ?
>>
>> Thanks
>> Thejaka Amila
>>
>>
>>>
>>> Gfac embedded vs Gfac as a service
>>> We have implemented the embedded Gfac and decided to use it for now.
>>> Gfac as a service is a long term goal to have. Until we get the
>>> Orchestrator complete we will use the embedded Gfac.
>>>
>>> Job statuses for the Orchestrator and the Gfac
>>> We need to come up with multi-level job statuses. User-level,
>>> Orchestartor-level and the Gfac-level statuses. Also the mapping between
>>> them is open for discussion. We didn't come to a conclusion on the matter.
>>> We will discuss this topic in an upcoming meeting.
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>>
>>> --
>>> Thanks,
>>>  Sachith Withana
>>>
>>>
>>
>
>
> --
> Thanks,
> Sachith Withana
>
>

Re: Orchestrator Overview Meeting Summary

Posted by Sachith Withana <sw...@gmail.com>.
Hi Amila,

The use of the word "cloning" is misleading.

Saminda suggested that we would need to run the application on a different
host (based on the user's intuition about host availability/efficiency)
while keeping all the other variables constant (input changes are also
allowed). As an example: if a job keeps failing on one host, the user
should be allowed to submit the job to another host.

We should come up with a different name for the scenario.


On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara
<th...@gmail.com>wrote:

>
>
>
> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com>wrote:
>
>> Hi All,
>>
>> This is the summary of the meeting we had Wednesday( 01/16/14) on the
>> Orchestrator.
>>
>> Orchestrator Overview
>> I Introduced the Orchestrator and I have attached the presentation
>> herewith.
>>
>> Adding Job Cloning capability to the Orchestrator API
>> Saminda suggested that we should have a way to clone an existing job and
>> run it with different inputs or on a different host or both. Here's the
>> Jira for that.[1]
>>
>
> I didnt quite understand what cloning does. Once descriptors are setup we
> can run experiment with different inputs, many times we want. So what is
> the actual need to have cloning ?
>
> Thanks
> Thejaka Amila
>
>
>>
>> Gfac embedded vs Gfac as a service
>> We have implemented the embedded Gfac and decided to use it for now.
>> Gfac as a service is a long term goal to have. Until we get the
>> Orchestrator complete we will use the embedded Gfac.
>>
>> Job statuses for the Orchestrator and the Gfac
>> We need to come up with multi-level job statuses. User-level,
>> Orchestartor-level and the Gfac-level statuses. Also the mapping between
>> them is open for discussion. We didn't come to a conclusion on the matter.
>> We will discuss this topic in an upcoming meeting.
>>
>>
>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>
>> --
>> Thanks,
>>  Sachith Withana
>>
>>
>


-- 
Thanks,
Sachith Withana

Re: Orchestrator Overview Meeting Summary

Posted by Amila Jayasekara <th...@gmail.com>.
On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <sw...@gmail.com>wrote:

> Hi All,
>
> This is the summary of the meeting we had Wednesday( 01/16/14) on the
> Orchestrator.
>
> Orchestrator Overview
> I Introduced the Orchestrator and I have attached the presentation
> herewith.
>
> Adding Job Cloning capability to the Orchestrator API
> Saminda suggested that we should have a way to clone an existing job and
> run it with different inputs or on a different host or both. Here's the
> Jira for that.[1]
>

I didn't quite understand what cloning does. Once descriptors are set up,
we can run an experiment with different inputs as many times as we want.
So what is the actual need for cloning?

Thanks
Thejaka Amila


>
> Gfac embedded vs Gfac as a service
> We have implemented the embedded Gfac and decided to use it for now.
> Gfac as a service is a long term goal to have. Until we get the
> Orchestrator complete we will use the embedded Gfac.
>
> Job statuses for the Orchestrator and the Gfac
> We need to come up with multi-level job statuses. User-level,
> Orchestartor-level and the Gfac-level statuses. Also the mapping between
> them is open for discussion. We didn't come to a conclusion on the matter.
> We will discuss this topic in an upcoming meeting.
>
>
> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>
> --
> Thanks,
>  Sachith Withana
>
>