Posted to dev@tuscany.apache.org by Simon Laws <si...@googlemail.com> on 2008/01/29 16:14:03 UTC

Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

On Jan 28, 2008 5:38 PM, Simon Laws <si...@googlemail.com> wrote:

> snip...
>
> > I'm not too keen on scanning a disk directory as it doesn't apply to a
> > distributed environment, I'd prefer to:
> > - define a model representing a contribution repository
> > - persist it in some XML form
> >
>
>
> I've started on some model code in my sandbox [1]. Feel free to use and
> abuse.
>
> Regards
>
> Simon
>
> [1]
> http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
>

Looking at svn I find there is already a ContributionRepository
implementation [1]. There may be a little too much function in there at
the moment but it's useful to see it nonetheless. So, to work out what it
does: the first question concerns the "store()" method.

public URL store(String contribution, URL sourceURL, InputStream
contributionStream).

Can someone explain what the sourceURL is for?
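
For the record, here's roughly how I'd expect a caller to use it. This is
only a sketch: the path is made up, and the comment about sourceURL is just
my guess, which is really the question above.

import java.io.InputStream;
import java.net.URL;

import org.apache.tuscany.sca.contribution.service.ContributionRepository;

public class StoreSketch {

    // Guess: sourceURL is where the contribution originally lives, and the
    // stream carries its bytes into the repository's own storage area.
    static URL storeContribution(ContributionRepository repository) throws Exception {
        URL sourceURL = new URL("file:/home/simon/contributions/store.jar");
        InputStream contributionStream = sourceURL.openStream();
        try {
            return repository.store("store", sourceURL, contributionStream);
        } finally {
            contributionStream.close();
        }
    }
}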

The model in my sandbox [2], which is very similar to the XML that the
current contribution repository uses, now holds node and contribution name
information [3]. These could be two separate models to decouple the
management of contributions from the process of associating them together.
I'd keep the info in one place but I expect others' views will vary.

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
[2] http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
[3]
http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Tue, Feb 5, 2008 at 8:34 AM, Jean-Sebastien Delfino <js...@apache.org>
wrote:

> Venkata Krishnan wrote:
> > It would also be good to have some sort of 'ping' function that could be
> > used to check if a service is receptive to requests. In fact I wonder if
> the
> > Workspace Admin should also be able to test this sort of a ping per
> > binding.  Is this something that can go into the section (B) .. or is
> this
> > out of place ?
> >
>
> Good idea, I'd put it in section (D). A node runtime needs to provide a way
> to monitor its status.
>
> --
> Jean-Sebastien
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>
Hi Sebastien

I see you have started to check in code related to steps A and B. I have
time this week to start helping on this and thought I would start looking at
the back end of B and moving into C, but I don't want to tread on your toes.

I made some code to experiment with before I went on holiday so it's not
integrated with your code (it just uses the Workspace interface). What I was
starting to look at was resolving a domain level composite which includes
unresolved composites. I.e. I built a composite which includes the
deployable composites for a series of contributions and am learning about
resolution and re-resolution.
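
In rough terms the shape of what I'm playing with looks like the sketch
below. The names and namespace are placeholders and this isn't the actual
sandbox code, just the idea.

import javax.xml.namespace.QName;

import org.apache.tuscany.sca.assembly.AssemblyFactory;
import org.apache.tuscany.sca.assembly.Composite;
import org.apache.tuscany.sca.assembly.DefaultAssemblyFactory;
import org.apache.tuscany.sca.contribution.Contribution;

public class DomainCompositeSketch {

    // Build a domain-level composite that includes the deployable composites
    // from a set of contributions, so the whole lot can be resolved and
    // built in one pass.
    static Composite buildDomainComposite(Iterable<Contribution> contributions) {
        AssemblyFactory factory = new DefaultAssemblyFactory();
        Composite domainComposite = factory.createComposite();
        domainComposite.setName(new QName("http://sample", "domain"));

        for (Contribution contribution : contributions) {
            for (Composite deployable : contribution.getDeployables()) {
                domainComposite.getIncludes().add(deployable);
            }
        }
        return domainComposite;
    }
}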

I'm not doing anything about composite selection for deployment just yet.
That will come from the node model/GUI/command line. I just want to work out
how we get the domain resolution going in this context.

If you are not already doing this I'll carry on experimenting in my sandbox
for a little while longer and spawn off a separate thread to discuss.

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Venkata Krishnan wrote:
> It would also be good to have some sort of 'ping' function that could be
> used to check if a service is receptive to requests. In fact I wonder if the
> Workspace Admin should also be able to test this sort of a ping per
> binding.  Is this something that can go into the section (B) .. or is this
> out of place ?
> 

Good idea, I'd put it in section (D). A node runtime needs to provide a way 
to monitor its status.

-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Venkata Krishnan <fo...@gmail.com>.
It would also be good to have some sort of 'ping' function that could be
used to check if a service is receptive to requests. In fact I wonder if the
Workspace Admin should also be able to test this sort of ping per
binding. Is this something that can go into section (B), or is this out of
place?
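
Something along these lines is what I have in mind. The interface below is
purely illustrative and doesn't exist anywhere in the codebase today.

// Illustrative only - the idea is a lightweight liveness check,
// potentially one per binding.
public interface Pingable {

    /**
     * @param bindingName the name of the binding to check, or null for any
     * @return true if the service is currently receptive to requests
     */
    boolean ping(String bindingName);
}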

- Venkat

On Feb 3, 2008 12:26 PM, Jean-Sebastien Delfino <js...@apache.org>
wrote:

> Simon Laws wrote:
> [snip]
> > From what you are saying a short term shopping list of functions seems
> to be
> > emerging.
> >
> > Contribution uploader/manager(via browser)
> > Contribution addition/management from command line (adding as Luciano
> has
> > started this and useful for testing)
> > Workspace to register added contributions
> > Parser to turn workspace contributions into a model that can be
> inspected
> > (doesn't need the machinery of a runtime)
> > Validator for validating contributions in a workspace
> > Domain/Node model reader/writer (implementation.node)
> > Function for assigning composites to nodes
> > Function for processing assigned composites in the context of the domain
> > (reference resolution, autowire) (again can be more lightweight than a
> > runtime but does need access to binding-specific processing)
> > Deployer for writing out contributions for nodes
> >
> > What else is there?
> >
> > Simon
> >
>
> Looks good to me, building on your initial list I added a few more items
> and tried to organize them in three categories:
>
> A) Contribution workspace (containing installed contributions):
> - Contribution model representing a contribution
> - Reader for the contribution model
> - Workspace model representing a collection of contributions
> - Reader/writer for the workspace model
> - HTTP based service for accessing the workspace
> - Web browser client for the workspace service
> - Command line client for the workspace service
> - Validator for contributions in a workspace
>
> B) Domain composite (containing deployed composites):
> - We can just reuse the existing composite model
> - HTTP based service for accessing the domain composite
> - Web browser client for the domain composite service
> - Command line client for the domain composite service
> - Validator for composites deployed in the domain composite
> - Function for processing wiring in the domain
>
> C) Node configuration
> - Implementation.node model
> - Reader/writer for the implementation.node model
> - Function for configuring composites assigned to nodes
> - Function for pushing contributions and composites to nodes
>
> D) Node runtime
> - Runtime that loads a set of contributions and a composite
> - HTTP based service for starting/stopping a node
>
> --
> Jean-Sebastien
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>
>

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Simon Laws wrote:
> On Fri, Mar 7, 2008 at 4:18 PM, Simon Laws <si...@googlemail.com>
> wrote:
> 
>>
>> On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino <
>> jsdelfino@apache.org> wrote:
>>
>>> Jean-Sebastien Delfino wrote:
>>>> Simon Laws wrote:
>>>>> I've been running the workspace code today with a view to integrating
>>> the
>>>>> new code in assembly which calculates service endpoints i.e. point4
>>>>> above.
>>>>>
>>>>> I think we need to amend point 4 to make this work properly..
>>>>>
>>>>> 4. Point my Web browser to the various ATOM collections to get:
>>>>> - lists of contributions, composites and nodes
>>>>> - list of contributions that are required by a given contribution
>>>>> - the source of a particular composite
>>>>> - the output of a composite after the domain composite has been built
>>> by
>>>>> CompositeBuilder
>>>>>
>>>>> Looking at the code in DeployableCompositeCollectionImpl I see that
>>> on
>>>>> doGet() it builds the request composite. What the last point  needs
>>> to
>>>>> do is
>>>>>
>>>>> - read the whole domain
>>>>> - set up all of the service URIs for each of the included composites
>>>>> taking
>>>>> into account the node to which each composite is assigned
>>>>> - build the whole domain using CompositeBuilder
>>>>> - extract the required composite from the domain and serialize it
>>> out.
>>>> Yes, exactly!
>>>>
>>>>> Are you changing this code or can I put this in?
>>>> Just go ahead, I'll update and merge if I have any other changes in
>>> the
>>>> same classes.
>>>>
>>> Simon, a quick update: I've done an initial bring-up of node2-impl. It's
>>> still a little rough but you can give it a try if you want.
>>>
>>> The steps to run the store app for example with node2 are as follows:
>>>
>>> 1) use workspace-admin to add the store and assets contributions to the
>>> domain;
>>>
>>> 2) add the store composite to the domain composite using the admin as
>>> well;
>>>
>>> 3) start the StoreLauncher2 class that I just added to the store module;
>>>
>>> 4) that will start an instance of node2 with all the node config served
>>> from the admin app.
>>>
>>> So the next step is to integrate your node allocation code with
>>> workspace-admin and that will complete the story. Then we'll be able to
>>> remove all the currently hardcoded endpoint URIs from the composites.
>>>
>>> I'll send a more detailed description and steps to run more scenarios
>>> later on Friday.
>>>
>>> --
>>> Jean-Sebastien
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
>>> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>>>
>>> Ok, sounds good. I've done the uri integration although there are some
>> issues we need to discuss. First I'll update with your code, commit my
>> changes and then post here about the issues.
>>
>> Regards
>>
>> Simon
>>
> I've now checked in my changes (last commit was 634762) to integrate the URI
> calculation code with the workspace. I've run the new store launcher
> following Sebastien's instructions from a previous post to this thread. I
> don't seem to have broken it too much although I'm not seeing any prices for
> the catalog items.

I was seeing that issue too before, it's a minor bug in the property 
writing code, which is not writing property values correctly.

> Issues with the URI generation code....
> 
> I have to turn model resolution back on by uncommenting a line in
> ContributionContentProcessor.resolve. Otherwise the JavaImplementation types
> are not read and
> compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
> composite, null); can't generate default services. I then had to turn it back
> off to make the store sample work. I need some help on this one.

I'm investigating now.

> 
> If you hand craft services it seems to be OK although I have noticed,
> looking at the generated SCDL, that it seems to be assuming that all
> generated service names will be based on the implementation classname
> regardless of whether the interface is marked as @Remotable or not. Feels
> like a bug somewhere so am going to look at that next.

OK

> 
> To get Java implementation resolution to work I needed to hack in the Java
> factories setup in the DeployableCompositeCollectionImpl.initialize()
> method.  This is not very good and raises the bigger question about the set
> up in here. It's creating a set of extension points in parallel to those
> created by the runtime running this component. Can we either use the
> registry created by the underlying runtime or do similar generic setup.

Yes, I'd like to keep the infrastructure used by the admin decoupled 
from the infrastructure of the runtime hosting the admin, but I'll try 
to simplify the setup by creating an instance of runtime for the admin 
and getting the necessary objects out of it, instead of assembling it 
from scratch as it is now.

> The code doesn't currently distinguish between those services that are
> @Remotable and those that aren't
> 
> Simon
> 


-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Jean-Sebastien Delfino wrote:
> Simon Laws wrote:
>> On Fri, Mar 7, 2008 at 4:18 PM, Simon Laws <si...@googlemail.com>
>> wrote:
>>
>>>
>>> On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino <
>>> jsdelfino@apache.org> wrote:
>>>
>>>> Jean-Sebastien Delfino wrote:
>>>>> Simon Laws wrote:
>>>>>> I've been running the workspace code today with a view to integrating
>>>> the
>>>>>> new code in assembly which calculates service endpoints i.e. point4
>>>>>> above.
>>>>>>
>>>>>> I think we need to amend point 4 to make this work properly..
>>>>>>
>>>>>> 4. Point my Web browser to the various ATOM collections to get:
>>>>>> - lists of contributions, composites and nodes
>>>>>> - list of contributions that are required by a given contribution
>>>>>> - the source of a particular composite
>>>>>> - the output of a composite after the domain composite has been built
>>>> by
>>>>>> CompositeBuilder
>>>>>>
>>>>>> Looking at the code in DeployableCompositeCollectionImpl I see that
>>>> on
>>>>>> doGet() it builds the request composite. What the last point  needs
>>>> to
>>>>>> do is
>>>>>>
>>>>>> - read the whole domain
>>>>>> - set up all of the service URIs for each of the included composites
>>>>>> taking
>>>>>> into account the node to which each composite is assigned
>>>>>> - build the whole domain using CompositeBuilder
>>>>>> - extract the required composite from the domain and serialize it
>>>> out.
>>>>> Yes, exactly!
>>>>>
>>>>>> Are you changing this code or can I put this in?
>>>>> Just go ahead, I'll update and merge if I have any other changes in
>>>> the
>>>>> same classes.
>>>>>
>>>> Simon, a quick update: I've done an initial bring-up of node2-impl. 
>>>> It's
>>>> still a little rough but you can give it a try if you want.
>>>>
>>>> The steps to run the store app for example with node2 are as follows:
>>>>
>>>> 1) use workspace-admin to add the store and assets contributions to the
>>>> domain;
>>>>
>>>> 2) add the store composite to the domain composite using the admin as
>>>> well;
>>>>
>>>> 3) start the StoreLauncher2 class that I just added to the store 
>>>> module;
>>>>
>>>> 4) that will start an instance of node2 with all the node config served
>>>> from the admin app.
>>>>
>>>> So the next step is to integrate your node allocation code with
>>>> workspace-admin and that will complete the story. Then we'll be able to
>>>> remove all the currently hardcoded endpoint URIs from the composites.
>>>>
>>>> I'll send a more detailed description and steps to run more scenarios
>>>> later on Friday.
>>>>
>>>> -- 
>>>> Jean-Sebastien
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
>>>> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>>>>
>>>> Ok, sounds good. I've done the uri integration although there are some
>>> issues we need to discuss. First I'll update with your code, commit my
>>> changes and then post here about the issues.
>>>
>>> Regards
>>>
>>> Simon
>>>
>> I've now checked in my changes (last commit was 634762) to integrate 
>> the URI
>> calculation code with the workspace. I've run the new store launcher
>> following Sebastien's instructions from a previous post to this thread. I
>> don't seem to have broken it too much although I'm not seeing any 
>> prices for
>> the catalog items.
> 
> I was seeing that issue too before, it's a minor bug in the property 
> writing code, which is not writing property values correctly.
> 
>> Issues with the URI generation code....
>>
>> I have to turn model resolution back on by uncommenting a line in
>> ContributionContentProcessor.resolve. Otherwise the JavaImplementation 
>> types
>> are not read and
>> compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
>> composite, null); can't generate default services. I then had to turn 
>> it back
>> off to make the store sample work. I need some help on this one.
> 
> I'm investigating now.
> 
>>
>> If you hand craft services it seems to be OK although I have noticed,
>> looking at the generated SCDL, that it seems to be assuming that all
>> generated service names will be based on the implementation classname
>> regardless of whether the interface is marked as @Remotable or not. Feels
>> like a bug somewhere so am going to look at that next.
> 
> OK
> 
>>
>> To get Java implementation resolution to work I needed to hack in the 
>> Java
>> factories setup in the DeployableCompositeCollectionImpl.initialize()
>> method.  This is not very good and raises the bigger question about 
>> the set
>> up in here. It's creating a set of extension points in parallel to those
>> created by the runtime running this component. Can we either use the
>> registry created by the underlying runtime or do similar generic setup.
> 
> Yes, I'd like to keep the infrastructure used by the admin decoupled 
> from the infrastructure of the runtime hosting the admin, but I'll try 
> to simplify the setup by creating an instance of runtime for the admin 
> and getting the necessary objects out of it, instead of assembling it 
> from scratch as it is now.
> 
>> The code doesn't currently distinguish between those services that are
>> @Remotable and those that aren't
>>
>> Simon
>>
> 
> 

Simon,

After a few more changes, the domain / node allocation, default URI 
calculation and resolution of references across nodes now work OK.

I was able to remove all the hardcoded URIs in the tutorial composites 
as they now get determined from the configuration of the nodes that the 
composites are deployed to.

You can use the latest tutorial modules to see the end to end 
integration, with the following steps:

1. Start tutorial/domain/.../LaunchTutorialAdmin.

2. Open http://localhost:9990/ui/composite in your Web browser. You 
should see all the tutorial contributions and deployables that I've 
added to that domain.

3. Click the feeds in the "composite install image" to see the resolved 
composites.

4. Start all the launch programs in tutorial/nodes, you can start them 
in any order you want.

5. Open tutorial/assets/tutorial.html in your Web browser, follow the 
links to the various store implementations.

Hope this helps.
-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Fri, Mar 7, 2008 at 4:18 PM, Simon Laws <si...@googlemail.com>
wrote:

>
>
> On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino <
> jsdelfino@apache.org> wrote:
>
> > Jean-Sebastien Delfino wrote:
> > > Simon Laws wrote:
> > >>
> > >> I've been running the workspace code today with a view to integrating
> > the
> > >> new code in assembly which calculates service endpoints i.e. point4
> > >> above.
> > >>
> > >> I think we need to amend point 4 to make this work properly..
> > >>
> > >> 4. Point my Web browser to the various ATOM collections to get:
> > >> - lists of contributions, composites and nodes
> > >> - list of contributions that are required by a given contribution
> > >> - the source of a particular composite
> > >> - the output of a composite after the domain composite has been built
> > by
> > >> CompositeBuilder
> > >>
> > >> Looking at the code in DeployableCompositeCollectionImpl I see that
> > on
> > >> doGet() it builds the request composite. What the last point  needs
> > to
> > >> do is
> > >>
> > >> - read the whole domain
> > >> - set up all of the service URIs for each of the included composites
> > >> taking
> > >> into account the node to which each composite is assigned
> > >> - build the whole domain using CompositeBuilder
> > >> - extract the required composite from the domain and serialize it
> > out.
> > >
> > > Yes, exactly!
> > >
> > >>
> > >> Are you changing this code or can I put this in?
> > >
> > > Just go ahead, I'll update and merge if I have any other changes in
> > the
> > > same classes.
> > >
> >
> > Simon, a quick update: I've done an initial bring-up of node2-impl. It's
> > still a little rough but you can give it a try if you want.
> >
> > The steps to run the store app for example with node2 are as follows:
> >
> > 1) use workspace-admin to add the store and assets contributions to the
> > domain;
> >
> > 2) add the store composite to the domain composite using the admin as
> > well;
> >
> > 3) start the StoreLauncher2 class that I just added to the store module;
> >
> > 4) that will start an instance of node2 with all the node config served
> > from the admin app.
> >
> > So the next step is to integrate your node allocation code with
> > workspace-admin and that will complete the story. Then we'll be able to
> > remove all the currently hardcoded endpoint URIs from the composites.
> >
> > I'll send a more detailed description and steps to run more scenarios
> > later on Friday.
> >
> > --
> > Jean-Sebastien
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> > For additional commands, e-mail: tuscany-dev-help@ws.apache.org
> >
> > Ok, sounds good. I've done the uri integration although there are some
> issues we need to discuss. First I'll update with your code, commit my
> changes and then post here about the issues.
>
> Regards
>
> Simon
>
I've now checked in my changes (last commit was 634762) to integrate the URI
calculation code with the workspace. I've run the new store launcher
following Sebastien's instructions from a previous post to this thread. I
don't seem to have broken it too much although I'm not seeing any prices for
the catalog items.

Issues with the URI generation code....

I have to turn model resolution back on by uncommenting a line in
ContributionContentProcessor.resolve. Otherwise the JavaImplementation types
are not read and
compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
composite, null); can't generate default services. I then had to turn it back
off to make the store sample work. I need some help on this one.

If you hand-craft services it seems to be OK, although I have noticed,
looking at the generated SCDL, that all generated service names seem to be
based on the implementation classname regardless of whether the interface is
marked as @Remotable or not. Feels like a bug somewhere so I am going to
look at that next.
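
To make the symptom concrete, with something like the following (the names
are made up) I'd expect the generated service to be named after the
remotable interface, i.e. "Catalog", rather than after the implementation
class.

import org.osoa.sca.annotations.Remotable;

// Expectation: the introspected service is exposed as "Catalog" (from the
// remotable interface), not "CatalogImpl" (the implementation class name).
@Remotable
interface Catalog {
    String[] get();
}

class CatalogImpl implements Catalog {
    public String[] get() {
        return new String[] { "Apple", "Orange" };
    }
}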

To get Java implementation resolution to work I needed to hack in the Java
factories setup in the DeployableCompositeCollectionImpl.initialize()
method. This is not very good and raises the bigger question about the setup
in here. It's creating a set of extension points in parallel to those
created by the runtime running this component. Can we either use the
registry created by the underlying runtime, or do similar generic setup?

The code doesn't currently distinguish between those services that are
@Remotable and those that aren't.

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino <
jsdelfino@apache.org> wrote:

> Jean-Sebastien Delfino wrote:
> > Simon Laws wrote:
> >>
> >> I've been running the workspace code today with a view to integrating
> the
> >> new code in assembly which calculates service endpoints i.e. point4
> >> above.
> >>
> >> I think we need to amend point 4 to make this work properly..
> >>
> >> 4. Point my Web browser to the various ATOM collections to get:
> >> - lists of contributions, composites and nodes
> >> - list of contributions that are required by a given contribution
> >> - the source of a particular composite
> >> - the output of a composite after the domain composite has been built
> by
> >> CompositeBuilder
> >>
> >> Looking at the code in DeployableCompositeCollectionImpl I see that on
> >> doGet() it builds the request composite. What the last point  needs to
> >> do is
> >>
> >> - read the whole domain
> >> - set up all of the service URIs for each of the included composites
> >> taking
> >> into account the node to which each composite is assigned
> >> - build the whole domain using CompositeBuilder
> >> - extract the required composite from the domain and serialize it out.
> >
> > Yes, exactly!
> >
> >>
> >> Are you changing this code or can I put this in?
> >
> > Just go ahead, I'll update and merge if I have any other changes in the
> > same classes.
> >
>
> Simon, a quick update: I've done an initial bring-up of node2-impl. It's
> still a little rough but you can give it a try if you want.
>
> The steps to run the store app for example with node2 are as follows:
>
> 1) use workspace-admin to add the store and assets contributions to the
> domain;
>
> 2) add the store composite to the domain composite using the admin as
> well;
>
> 3) start the StoreLauncher2 class that I just added to the store module;
>
> 4) that will start an instance of node2 with all the node config served
> from the admin app.
>
> So the next step is to integrate your node allocation code with
> workspace-admin and that will complete the story. Then we'll be able to
> remove all the currently hardcoded endpoint URIs from the composites.
>
> I'll send a more detailed description and steps to run more scenarios
> later on Friday.
>
> --
> Jean-Sebastien
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>
Ok, sounds good. I've done the URI integration although there are some
issues we need to discuss. First I'll update with your code, commit my
changes and then post here about the issues.

Regards

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Jean-Sebastien Delfino wrote:
> Simon Laws wrote:
>>
>> I've been running the workspace code today with a view to integrating the
>> new code in assembly which calculates service endpoints i.e. point4 
>> above.
>>
>> I think we need to amend point 4 to make this work properly..
>>
>> 4. Point my Web browser to the various ATOM collections to get:
>> - lists of contributions, composites and nodes
>> - list of contributions that are required by a given contribution
>> - the source of a particular composite
>> - the output of a composite after the domain composite has been built by
>> CompositeBuilder
>>
>> Looking at the code in DeployableCompositeCollectionImpl I see that on
>> doGet() it builds the request composite. What the last point  needs to 
>> do is
>>
>> - read the whole domain
>> - set up all of the service URIs for each of the included composites 
>> taking
>> into account the node to which each composite is assigned
>> - build the whole domain using CompositeBuilder
>> - extract the required composite from the domain and serialize it out.
> 
> Yes, exactly!
> 
>>
>> Are you changing this code or can I put this in?
> 
> Just go ahead, I'll update and merge if I have any other changes in the 
> same classes.
> 

Simon, a quick update: I've done an initial bring-up of node2-impl. It's 
still a little rough but you can give it a try if you want.

The steps to run the store app for example with node2 are as follows:

1) use workspace-admin to add the store and assets contributions to the 
domain;

2) add the store composite to the domain composite using the admin as well;

3) start the StoreLauncher2 class that I just added to the store module;

4) that will start an instance of node2 with all the node config served 
from the admin app.

So the next step is to integrate your node allocation code with 
workspace-admin and that will complete the story. Then we'll be able to 
remove all the currently hardcoded endpoint URIs from the composites.

I'll send a more detailed description and steps to run more scenarios 
later on Friday.

-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Simon Laws wrote:
> 
> I've been running the workspace code today with a view to integrating the
> new code in assembly which calculates service endpoints i.e. point4 above.
> 
> I think we need to amend point 4 to make this work properly..
> 
> 4. Point my Web browser to the various ATOM collections to get:
> - lists of contributions, composites and nodes
> - list of contributions that are required by a given contribution
> - the source of a particular composite
> - the output of a composite after the domain composite has been built by
> CompositeBuilder
> 
> Looking at the code in DeployableCompositeCollectionImpl I see that on
> doGet() it builds the request composite. What the last point  needs to do is
> 
> - read the whole domain
> - set up all of the service URIs for each of the included composites taking
> into account the node to which each composite is assigned
> - build the whole domain using CompositeBuilder
> - extract the required composite from the domain and serialize it out.

Yes, exactly!

> 
> Are you changing this code or can I put this in?

Just go ahead, I'll update and merge if I have any other changes in the 
same classes.

-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Fri, Feb 29, 2008 at 5:37 PM, Jean-Sebastien Delfino <
jsdelfino@apache.org> wrote:

> Comments inline.
>
> >>>>> A) Contribution workspace (containing installed contributions):
> >>>>> - Contribution model representing a contribution
> >>>>> - Reader for the contribution model
> >>>>> - Workspace model representing a collection of contributions
> >>>>> - Reader/writer for the workspace model
> >>>>> - HTTP based service for accessing the workspace
> >>>>> - Web browser client for the workspace service
> >>>>> - Command line client for the workspace service
> >>>>> - Validator for contributions in a workspace
> >
> > I started looking at step D). Having a rest from URLs :-) In the context
> of
> > this thread the node can lose its connection to the domain and hence
> the
> > factory and the node interface slims down. So "Runtime that loads a set
> of
> > contributions and a composite" becomes;
> >
> > create a node
> > add some contributions (addContribution) and mark a composite for
> > starting(currently called addToDomainLevelComposite).
> > start the node
> > stop the node
> >
> > You could then recycle (destroy) the node and repeat if required.
> >
> > This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
> > have started to check in an alternative implementation of the node
> > (node2-impl). I haven't changed any interfaces yet so I don't break any
> > existing tests (and the code doesn't run yet!).
> >
> > Anyhow. I've been looking at the workspace code for parts A and B that
> has
> > recently been committed. It would seem to be fairly representative of
> the
> > motivating scenario [1]. I don't have detailed questions yet but
> > interestingly it looks like contributions, composites etc are exposed as
> > HTTP resources. Sebastien, it would be useful to have a summary of your
> > thoughts on how it is intended to hang together and how these will be
> used.
>
> I've basically created three services:
>
> workspace - Provides access to a collection of links to contributions,
> their URI and location. Also provides functions to get the list of
> contribution dependencies and validate a contribution.
>
> composites - Provides access to a collection of links to the composites
> present in the domain composite. Also provides a function returning a
> particular composite once it has been 'built' (by CompositeBuilder),
> i.e. its references, properties etc have been resolved.
>
> nodes - Provides access to a collection of links to composites
> describing the <implementation.node> components which represent SCA nodes.
>
> There's another "file upload" service that I'm using to upload
> contribution files and other files to some storage area but it's just
> temporary.
>
> I'm using <binding.atom> to expose the above collections as editable
> ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).
>
> Here's how I'm using these services as an SCA domain administrator:
>
> 1. Add one or more links to contributions to the workspace. They can be
> anywhere accessible on the network through a URL, or local on disk. The
> workspace just keeps track of the list.
>
> 2. Add one or more composites to the composites collection. They become
> part of the domain composite.
>
> 3. Add one or more composites declaring SCA nodes to the nodes
> collection. The nodes are described as SCA components of type
> <implementation.node>. A node component names the application composite
> that is assigned to run on it (see implementation-node-xml for an
> example).
>
> 4. Point my Web browser to the various ATOM collections to get:
> - lists of contributions, composites and nodes
> - list of contributions that are required by a given contribution
> - the source of a particular composite
> - the output of a composite built by CompositeBuilder
>
> Here, I'm hoping that the work you've started to "assign endpoint info
> to domain model" [2] will help CompositeBuilder produce the correct
> fully resolved composite.
>
> 5. Pick a node, point my Web browser to its composite description and
> write down:
> - $node = URL of the composite describing the node
> - $composite = URL of the application composite that's assigned to it
> - $contrib = URL of the list of contribution dependencies.
>
> 6. When you have node2-impl ready :) from the command line do:
> sca-node $node $composite $contrib
> this should start the SCA node, which can get its description, composite
> and contributions from these URLs.
>
> or for (6) start the node directly from my Web browser as described in
> [1], but one step at a time... that can come later when we have the
> basic building blocks working OK :)
>
>
> >
> > I guess these HTTP resources bring a deployment dimension.
> >
> > Local - Give the node contribution URLs that point to the local file
> system
> > from where the node reads the contribution (this is how it has worked to
> > date)
> > Remote - Give it contribution URLs that point out to HTTP resource so
> the
> > node can read the contributions from where they are stored in the
> network
> >
> > Was that the intention?
>
> Yes. I don't always want to have to upload contributions to some server
> or even have to copy them around. The collection of contributions should
> be able to point to contributions directly in my IDE workspace for
> example (and it supports that today).
>
> > [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html
> [2] http://marc.info/?l=tuscany-dev&m=120422784528176
>
> --
> Jean-Sebastien
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>
Great summary, Sebastien. Thank you.

I've been running the workspace code today with a view to integrating the
new code in assembly which calculates service endpoints, i.e. point 4 above.

I think we need to amend point 4 to make this work properly:

4. Point my Web browser to the various ATOM collections to get:
- lists of contributions, composites and nodes
- list of contributions that are required by a given contribution
- the source of a particular composite
- the output of a composite after the domain composite has been built by
CompositeBuilder

Looking at the code in DeployableCompositeCollectionImpl I see that on
doGet() it builds the request composite. What the last point  needs to do is

- read the whole domain
- set up all of the service URIs for each of the included composites taking
into account the node to which each composite is assigned
- build the whole domain using CompositeBuilder
- extract the required composite from the domain and serialize it out.
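
In rough pseudo-code that would be something like the sketch below.
CompositeBuilder.build() is the real call; readDomainComposite, findNodeFor
and setServiceURIsFromNode are placeholders for code that doesn't exist yet.

import javax.xml.namespace.QName;

import org.apache.tuscany.sca.assembly.Component;
import org.apache.tuscany.sca.assembly.Composite;
import org.apache.tuscany.sca.assembly.builder.CompositeBuilder;

abstract class DomainBuildSketch {

    CompositeBuilder compositeBuilder;   // assumed to be wired up elsewhere

    // Placeholders for code that doesn't exist yet
    abstract Composite readDomainComposite();
    abstract Component findNodeFor(Composite composite);
    abstract void setServiceURIsFromNode(Composite composite, Component node);

    Composite getBuiltComposite(QName requested) throws Exception {
        // 1. read the whole domain (every composite included in the domain
        //    composite)
        Composite domain = readDomainComposite();

        // 2. set up the service URIs of each included composite from the
        //    node it is assigned to
        for (Composite included : domain.getIncludes()) {
            setServiceURIsFromNode(included, findNodeFor(included));
        }

        // 3. build the whole domain using CompositeBuilder
        compositeBuilder.build(domain);

        // 4. extract the requested composite and hand it back to be
        //    serialized out
        for (Composite included : domain.getIncludes()) {
            if (requested.equals(included.getName())) {
                return included;
            }
        }
        return null;
    }
}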

Are you changing this code or can I put this in?

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Comments inline.

>>>>> A) Contribution workspace (containing installed contributions):
>>>>> - Contribution model representing a contribution
>>>>> - Reader for the contribution model
>>>>> - Workspace model representing a collection of contributions
>>>>> - Reader/writer for the workspace model
>>>>> - HTTP based service for accessing the workspace
>>>>> - Web browser client for the workspace service
>>>>> - Command line client for the workspace service
>>>>> - Validator for contributions in a workspace
> 
> I started looking at step D). Having a rest from URLs :-) In the context of
> this thread the node can lose its connection to the domain and hence the
> factory and the node interface slims down. So "Runtime that loads a set of
> contributions and a composite" becomes;
> 
> create a node
> add some contributions (addContribution) and mark a composite for
> starting(currently called addToDomainLevelComposite).
> start the node
> stop the node
> 
> You could then recycle (destroy) the node and repeat if required.
> 
> This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
> have started to check in an alternative implementation of the node
> (node2-impl). I haven't changed any interfaces yet so I don't break any
> existing tests (and the code doesn't run yet!).
> 
> Anyhow. I've been looking at the workspace code for parts A and B that has
> recently been committed. It would seem to be fairly representative of the
> motivating scenario [1]. I don't have detailed questions yet but
> interestingly it looks like contributions, composites etc are exposed as
> HTTP resources. Sebastien, it would be useful to have a summary of your
> thoughts on how it is intended to hang together and how these will be used.

I've basically created three services:

workspace - Provides access to a collection of links to contributions, 
their URI and location. Also provides functions to get the list of 
contribution dependencies and validate a contribution.

composites - Provides access to a collection of links to the composites 
present in the domain composite. Also provides a function returning a 
particular composite once it has been 'built' (by CompositeBuilder), 
i.e. its references, properties etc have been resolved.

nodes - Provides access to a collection of links to composites 
describing the <implementation.node> components which represent SCA nodes.

There's another "file upload" service that I'm using to upload 
contribution files and other files to some storage area but it's just 
temporary.

I'm using <binding.atom> to expose the above collections as editable 
ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).
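
From a client's point of view each collection is then just an HTTP GET of an
ATOM feed, something like the sketch below; the URL is only an example of
what the admin app might serve.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FeedReader {
    public static void main(String[] args) throws Exception {
        // Example URL only - substitute whatever the admin app serves
        URL workspaceFeed = new URL("http://localhost:9990/workspace");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(workspaceFeed.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);   // raw ATOM feed XML
        }
        in.close();
    }
}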

Here's how I'm using these services as an SCA domain administrator:

1. Add one or more links to contributions to the workspace. They can be 
anywhere accessible on the network through a URL, or local on disk. The 
workspace just keeps track of the list.

2. Add one or more composites to the composites collection. They become 
part of the domain composite.

3. Add one or more composites declaring SCA nodes to the nodes 
collection. The nodes are described as SCA components of type 
<implementation.node>. A node component names the application composite 
that is assigned to run on it (see implementation-node-xml for an example).

4. Point my Web browser to the various ATOM collections to get:
- lists of contributions, composites and nodes
- list of contributions that are required by a given contribution
- the source of a particular composite
- the output of a composite built by CompositeBuilder

Here, I'm hoping that the work you've started to "assign endpoint info 
to domain model" [2] will help CompositeBuilder produce the correct 
fully resolved composite.

5. Pick a node, point my Web browser to its composite description and 
write down:
- $node = URL of the composite describing the node
- $composite = URL of the application composite that's assigned to it
- $contrib = URL of the list of contribution dependencies.

6. When you have node2-impl ready :) from the command line do:
sca-node $node $composite $contrib
this should start the SCA node, which can get its description, composite 
and contributions from these URLs.

or for (6) start the node directly from my Web browser as described in 
[1], but one step at a time... that can come later when we have the 
basic building blocks working OK :)


> 
> I guess these HTTP resources bring a deployment dimension.
> 
> Local - Give the node contribution URLs that point to the local file system
> from where the node reads the contribution (this is how it has worked to
> date)
> Remote - Give it contribution URLs that point out to HTTP resource so the
> node can read the contributions from where they are stored in the network
> 
> Was that the intention?

Yes. I don't always want to have to upload contributions to some server 
or even have to copy them around. The collection of contributions should 
be able to point to contributions directly in my IDE workspace for 
example (and it supports that today).
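
Either way the node only ever sees a URL, so file: and http: contributions
can be read the same way; a trivial sketch (the locations are just examples):

import java.io.InputStream;
import java.net.URL;

public class ContributionLocation {
    public static void main(String[] args) throws Exception {
        URL local = new URL("file:/home/dev/contributions/store.jar");
        URL remote = new URL("http://localhost:9990/files/store.jar");

        for (URL url : new URL[] { local, remote }) {
            InputStream in = url.openStream();   // same code for both schemes
            System.out.println("Read contribution from " + url);
            in.close();
        }
    }
}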

> [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html
[2] http://marc.info/?l=tuscany-dev&m=120422784528176

-- 
Jean-Sebastien

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Tue, Feb 26, 2008 at 5:57 PM, Simon Laws <si...@googlemail.com>
wrote:

>
>
> On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino <
> jsdelfino@apache.org> wrote:
>
> >  >> Jean-Sebastien Delfino wrote:
> > >> Looks good to me, building on your initial list I added a few more
> > items
> > >> and tried to organize them in three categories:
> > >>
> > >> A) Contribution workspace (containing installed contributions):
> > >> - Contribution model representing a contribution
> > >> - Reader for the contribution model
> > >> - Workspace model representing a collection of contributions
> > >> - Reader/writer for the workspace model
> > >> - HTTP based service for accessing the workspace
> > >> - Web browser client for the workspace service
> > >> - Command line client for the workspace service
> > >> - Validator for contributions in a workspace
> > >>
> > >>
> > > ant elder wrote:
> > > Do you have your heart set on calling this a workspace or are you open
> > to
> > > calling it something else like a repository?
> > >
> >
> > I think that they are two different concepts, here are two analogies:
> >
> > - We in Tuscany assemble our distro out of artifacts from multiple Maven
> > repositories.
> >
> > - An application developer (for example using Eclipse) can connect
> > Eclipse workspace to multiple SVN repositories.
> >
> > What I'm after here is similar to the above 'distro' or 'Eclipse
> > workspace', basically an assembly of contributions, artifacts of various
> > kinds, that I can load in a 'workspace', resolve, validate and run,
> > different from the repository or repositories that I get the artifacts
> > from.
> > --
> > Jean-Sebastien
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> > For additional commands, e-mail: tuscany-dev-help@ws.apache.org
> >
>
> To me repository (in my mind somewhere to store things) describes a much
> less active entity compared to the workspace which has to do a lot of work
> to load and assimilate information from multiple contributions. I'm not sure
> about workspace either but to me it's better than repository and it's not
> domain which has caused us all kinds of problems.
>
> My 2c
>
> Simon
>

I started looking at step D). Having a rest from URLs :-) In the context of
this thread the node can lose its connection to the domain and hence the
factory and the node interface slim down. So "Runtime that loads a set of
contributions and a composite" becomes:

create a node
add some contributions (addContribution) and mark a composite for
starting (currently called addToDomainLevelComposite).
start the node
stop the node

You could then recycle (destroy) the node and repeat if required.
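
To make that concrete, here's the shape I have in mind as a sketch. The
interface below is made up; only the method names addContribution and
addToDomainLevelComposite come from the existing code, and nothing here is
the real node2-impl API.

import java.net.URL;

// Made-up node interface, purely to illustrate the lifecycle above
interface NodeSketch {
    void addContribution(String contributionURI, URL contributionURL);
    void addToDomainLevelComposite(String compositeName); // mark for starting
    void start();
    void stop();
    void destroy();
}

public class NodeLifecycle {
    static void run(NodeSketch node) throws Exception {
        // the node itself is created by whatever factory node2-impl provides;
        // add some contributions and mark a composite for starting
        node.addContribution("store", new URL("file:./target/classes/"));
        node.addToDomainLevelComposite("store.composite");

        node.start();
        // ... service requests ...
        node.stop();

        node.destroy();   // recycle the node and repeat if required
    }
}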

This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
have started to check in an alternative implementation of the node
(node2-impl). I haven't changed any interfaces yet so I don't break any
existing tests (and the code doesn't run yet!).

Anyhow. I've been looking at the workspace code for parts A and B that has
recently been committed. It would seem to be fairly representative of the
motivating scenario [1]. I don't have detailed questions yet but
interestingly it looks like contributions, composites etc. are exposed as
HTTP resources. Sebastien, it would be useful to have a summary of your
thoughts on how it is intended to hang together and how these will be used.

I guess these HTTP resources bring a deployment dimension.

Local - Give the node contribution URLs that point to the local file system
from where the node reads the contribution (this is how it has worked to
date)
Remote - Give it contribution URLs that point out to HTTP resources so the
node can read the contributions from where they are stored in the network

Was that the intention?

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino <
jsdelfino@apache.org> wrote:

>  >> Jean-Sebastien Delfino wrote:
> >> Looks good to me, building on your initial list I added a few more
> items
> >> and tried to organize them in three categories:
> >>
> >> A) Contribution workspace (containing installed contributions):
> >> - Contribution model representing a contribution
> >> - Reader for the contribution model
> >> - Workspace model representing a collection of contributions
> >> - Reader/writer for the workspace model
> >> - HTTP based service for accessing the workspace
> >> - Web browser client for the workspace service
> >> - Command line client for the workspace service
> >> - Validator for contributions in a workspace
> >>
> >>
> > ant elder wrote:
> > Do you have your heart set on calling this a workspace or are you open to
> > calling it something else like a repository?
> >
>
> I think that they are two different concepts, here are two analogies:
>
> - We in Tuscany assemble our distro out of artifacts from multiple Maven
> repositories.
>
> - An application developer (for example using Eclipse) can connect
> Eclipse workspace to multiple SVN repositories.
>
> What I'm after here is similar to the above 'distro' or 'Eclipse
> workspace', basically an assembly of contributions, artifacts of various
> kinds, that I can load in a 'workspace', resolve, validate and run,
> different from the repository or repositories that I get the artifacts
> from.
> --
> Jean-Sebastien
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
> For additional commands, e-mail: tuscany-dev-help@ws.apache.org
>

To me, repository (in my mind somewhere to store things) describes a much
less active entity than the workspace, which has to do a lot of work
to load and assimilate information from multiple contributions. I'm not sure
about workspace either, but to me it's better than repository, and it's not
domain, which has caused us all kinds of problems.

My 2c

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
 >> Jean-Sebastien Delfino wrote:
>> Looks good to me, building on your initial list I added a few more items
>> and tried to organize them in three categories:
>>
>> A) Contribution workspace (containing installed contributions):
>> - Contribution model representing a contribution
>> - Reader for the contribution model
>> - Workspace model representing a collection of contributions
>> - Reader/writer for the workspace model
>> - HTTP based service for accessing the workspace
>> - Web browser client for the workspace service
>> - Command line client for the workspace service
>> - Validator for contributions in a workspace
>>
>>
> ant elder wrote:
> Do you have your heart set on calling this a workspace or are you open to
> calling it something else like a repository?
> 

I think that they are two different concepts, here are two analogies:

- We in Tuscany assemble our distro out of artifacts from multiple Maven 
repositories.

- An application developer (for example using Eclipse) can connect an
Eclipse workspace to multiple SVN repositories.

What I'm after here is similar to the above 'distro' or 'Eclipse 
workspace', basically an assembly of contributions, artifacts of various 
kinds, that I can load in a 'workspace', resolve, validate and run, 
different from the repository or repositories that I get the artifacts from.
-- 
Jean-Sebastien



Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by ant elder <an...@gmail.com>.
On Sun, Feb 3, 2008 at 6:56 AM, Jean-Sebastien Delfino <js...@apache.org>
wrote:

> Simon Laws wrote:
> [snip]
> > From what you are saying a short term shopping list of functions seems
> to be
> > emerging.
> >
> > Contribution uploader/manager (via browser)
> > Contribution addition/management from command line (adding as Luciano
> has
> > started this and useful for testing)
> > Workspace to register added contributions
> > Parser to turn workspace contributions into a model that can be
> inspected
> > (doesn't need the machinery of a runtime)
> > Validator for validating contributions in a workspace
> > Domain/Node model reader/writer (implementation.node)
> > Function for assigning composites to nodes
> > Function for processing assigned composites in the context of the domain
> > (reference resolution, autowire) (again can be more lightweight than a
> > runtime but does need access to binding specific processing)
> > Deployer for writing out contributions for nodes
> >
> > What else is there?
> >
> > Simon
> >
>
> Looks good to me, building on your initial list I added a few more items
> and tried to organize them in three categories:
>
> A) Contribution workspace (containing installed contributions):
> - Contribution model representing a contribution
> - Reader for the contribution model
> - Workspace model representing a collection of contributions
> - Reader/writer for the workspace model
> - HTTP based service for accessing the workspace
> - Web browser client for the workspace service
> - Command line client for the workspace service
> - Validator for contributions in a workspace
>
>
Do you have your heart set on calling this a workspace or are you open to
calling it something else like a repository?

   ...ant

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Simon Laws wrote:
[snip]
> From what you are saying a short term shopping list of functions seems to be
> emerging.
> 
> Contribution uploader/manager (via browser)
> Contribution addition/management from command line (adding as Luciano has
> started this and useful for testing)
> Workspace to register added contributions
> Parser to turn workspace contributions into a model that can be inspected
> (doesn't need the machinery of a runtime)
> Validator for validating contributions in a workspace
> Domain/Node model reader/writer (implementation.node)
> Function for assigning composites to nodes
> Function for processing assigned composites in the context of the domain
> (reference resolution, autowire) (again can be more lightweight than a
> runtime but does need access to binding specific processing)
> Deployer for writing out contributions for nodes
> 
> What else is there?
> 
> Simon
> 

Looks good to me, building on your initial list I added a few more items 
and tried to organize them in four categories:

A) Contribution workspace (containing installed contributions):
- Contribution model representing a contribution
- Reader for the contribution model
- Workspace model representing a collection of contributions
- Reader/writer for the workspace model (rough sketch below)
- HTTP based service for accessing the workspace
- Web browser client for the workspace service
- Command line client for the workspace service
- Validator for contributions in a workspace

B) Domain composite (containing deployed composites):
- We can just reuse the existing composite model
- HTTP based service for accessing the domain composite
- Web browser client for the domain composite service
- Command line client for the domain composite service
- Validator for composites deployed in the domain composite
- Function for processing wiring in the domain

C) Node configuration
- Implementation.node model
- Reader/writer for the implementation.node model
- Function for configuring composites assigned to nodes
- Function for pushing contributions and composites to nodes

D) Node runtime
- Runtime that loads a set of contributions and a composite
- HTTP based service for starting/stopping a node
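
To make the "Reader/writer for the workspace model" item in (A) a bit more
concrete, here is a rough sketch of the reading side using plain StAX. It is
purely illustrative: the <workspace>/<contribution> XML shape and the class
name are made up for the example, and the real thing would go through our
usual artifact processors rather than a standalone class.

import java.io.Reader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

/**
 * Illustrative only. Reads a workspace document of the (made up) form:
 *
 *   <workspace>
 *     <contribution uri="store" location="file:/contributions/store.jar"/>
 *     <contribution uri="catalog" location="file:/contributions/catalog.jar"/>
 *   </workspace>
 *
 * into a contribution URI -> location map.
 */
public class WorkspaceReaderSketch {

    public Map<String, String> read(Reader input) throws XMLStreamException {
        Map<String, String> contributions = new LinkedHashMap<String, String>();
        XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(input);
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                && "contribution".equals(reader.getLocalName())) {
                // Each <contribution> element is just a URI -> location pair
                contributions.put(reader.getAttributeValue(null, "uri"),
                                  reader.getAttributeValue(null, "location"));
            }
        }
        return contributions;
    }
}

A matching writer would just stream that map back out the same way; the
interesting part is agreeing on the XML shape.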

-- 
Jean-Sebastien



Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Jan 30, 2008 12:24 AM, Jean-Sebastien Delfino <js...@apache.org>
wrote:

> Simon Laws wrote:
> [snip]
> > The model in my sandbox [2], which is very similar to the XML that the
> > current contribution repository uses, now holds node and contribution
> name
> > information [3]. These could be two separate models to decouple the
> > management of contributions from the process of associating them
> together.
>
> I like the decoupling part:
>
> - A workspace containing contributions (basically just a contribution
> URI -> URL association). I've started to add that Workspace interface to
> the contribution package.
>
> - A description of the network containing nodes, we don't need a new
> model for that, as we already have implementation-node and can use
> something like:
>
> <composite name="bobsNetWork">
>
>   <component name="bobsNode1">
>     <implementation.node ...>
>   </component>
>
>   <component name="bobsNode2">
>     <implementation.node ...>
>   </component>
>
> </composite>
>
> --
> Jean-Sebastien
>
>

From what you are saying a short term shopping list of functions seems to be
emerging.

Contribution uploader/manager (via browser)
Contribution addition/management from command line (adding as Luciano has
started this and useful for testing)
Workspace to register added contributions
Parser to turn workspace contributions into a model that can be inspected
(doesn't need the machinery of a runtime)
Validator for validating contributions in a workspace
Domain/Node model reader/writer (implementation.node)
Function for assigning composites to nodes (rough sketch below)
Function for processing assigned composites in the context of the domain
(reference resolution, autowire) (again can be more lightweight than a
runtime but does need access to binding specific processing)
Deployer for writing out contributions for nodes
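
For the "assigning composites to nodes" item I'm thinking of something no more
complicated than the sketch below. It is purely illustrative, the class and
method names are made up, and it ignores persistence completely:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.namespace.QName;

// Illustrative only: records which deployable composites have been
// assigned to which node in the domain.
public class NodeAssignmentsSketch {

    private final Map<String, List<QName>> assignments = new HashMap<String, List<QName>>();

    public void assign(String nodeName, QName deployableComposite) {
        List<QName> composites = assignments.get(nodeName);
        if (composites == null) {
            composites = new ArrayList<QName>();
            assignments.put(nodeName, composites);
        }
        composites.add(deployableComposite);
    }

    public List<QName> getComposites(String nodeName) {
        List<QName> composites = assignments.get(nodeName);
        return composites != null ? composites : Collections.<QName>emptyList();
    }
}

The real work is in the next item, processing those assignments in the context
of the domain.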

What else is there?

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Jean-Sebastien Delfino <js...@apache.org>.
Simon Laws wrote:
[snip]
> The model in my sandbox [2], which is very similar to the XML that the
> current contribution repository uses, now holds node and contribution name
> information [3]. These could be two separate models to decouple the
> management of contributions from the process of associating them together.

I like the decoupling part:

- A workspace containing contributions (basically just a contribution 
URI -> URL association). I've started to add that Workspace interface to 
the contribution package (rough sketch below).

- A description of the network containing nodes, we don't need a new 
model for that, as we already have implementation-node and can use 
something like:

<composite name="bobsNetWork">

   <component name="bobsNode1">
     <implementation.node ...>
   </component>

   <component name="bobsNode2">
     <implementation.node ...>
   </component>

</composite>
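
To illustrate how thin the workspace part can be, here is a sketch of the
first point above, just a contribution URI -> URL association. The class and
method names are made up for the example and this is not the actual Workspace
interface I've started adding to the contribution package:

import java.util.ArrayList;
import java.util.List;

// Illustrative only: a workspace is little more than a named list of
// contribution URI -> location associations.
class ContributionEntry {
    final String uri;       // logical contribution URI, e.g. "store"
    final String location;  // where it lives, e.g. "file:/contributions/store.jar"

    ContributionEntry(String uri, String location) {
        this.uri = uri;
        this.location = location;
    }
}

class WorkspaceSketch {
    private final List<ContributionEntry> contributions = new ArrayList<ContributionEntry>();

    void addContribution(String uri, String location) {
        contributions.add(new ContributionEntry(uri, location));
    }

    List<ContributionEntry> getContributions() {
        return contributions;
    }
}

Everything else (resolving, validating, wiring) works off that list rather
than living in it.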

-- 
Jean-Sebastien



Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Simon Laws <si...@googlemail.com>.
On Jan 29, 2008 4:22 PM, Luciano Resende <lu...@gmail.com> wrote:

> Comments inline. Note that I also have a prototype of an install
> program in my sandbox.
>
> On Jan 29, 2008 7:14 AM, Simon Laws <si...@googlemail.com> wrote:
> > On Jan 28, 2008 5:38 PM, Simon Laws <si...@googlemail.com> wrote:
> >
> > > snip...
> > >
> > > > I'm not too keen on scanning a disk directory as it doesn't apply to
> a
> > > > distributed environment, I'd prefer to:
> > > > - define a model representing a contribution repository
> > > > - persist it in some XML form
> > > >
> > >
> > >
> > > I've started on some model code in my sandbox [1]. Feel free to use
> and
> > > abuse.
> > >
> > > Regards
> > >
> > > Simon
> > >
> > > [1]
> > >
> http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
> > >
> >
> > Looking at svn I find there is already a ContributionRepository
> > implementation [1]. There may be a little bit too much function in there
> at
> > the moment but it's useful to see it none the less. So, to work out what
> it
> > does. First question concerns the "store()" method.
> >
> > public URL store(String contribution, URL sourceURL, InputStream
> > contributionStream).
> >
> > Can someone explain what the sourceURL is for?
>
> contribution is the URI for the contribution being stored
>
> SourceURL is the URL pointing to the contribution you want to store in
> the repository.
>
> InputStream is the content of the contribution (optional)
>
> >
> > The model in my sandbox [2], which is very similar to the XML that the
> > current contribution repository uses, now holds node and contribution
> name
> > information [3]. These could be two separate models to decouple the
> > management of contributions from the process of associating them
> together.
> > I'd keep the info in one place but I expect others' views will vary.
> >
> > [1]
> >
> http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
> > [2]
> http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
> > [3]
> >
> http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain
> >
>
>
>
> --
> Luciano Resende
> Apache Tuscany Committer
> http://people.apache.org/~lresende
> http://lresende.blogspot.com/
>
>
Luciano

Thanks for the heads-up on the installer stuff. It actually makes the intention
much clearer when you see the code being used. I'll add some more thoughts
to this thread shortly.

Thanks

Simon

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

Posted by Luciano Resende <lu...@gmail.com>.
Comments inline. Note that I also have a prototype of an install
program in my sandbox.

On Jan 29, 2008 7:14 AM, Simon Laws <si...@googlemail.com> wrote:
> On Jan 28, 2008 5:38 PM, Simon Laws <si...@googlemail.com> wrote:
>
> > snip...
> >
> > > I'm not too keen on scanning a disk directory as it doesn't apply to a
> > > distributed environment, I'd prefer to:
> > > - define a model representing a contribution repository
> > > - persist it in some XML form
> > >
> >
> >
> > I've started on some model code in my sandbox [1]. Feel free to use and
> > abuse.
> >
> > Regards
> >
> > Simon
> >
> > [1]
> > http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
> >
>
> Looking at svn I find there is already a ContributionRepository
> implementation [1]. There may be a little bit too much function in there at
> the moment but it's useful to see it none the less. So, to work out what it
> does. First question concerns the "store()" method.
>
> public URL store(String contribution, URL sourceURL, InputStream
> contributionStream).
>
> Can someone explain what the sourceURL is for?

contribution is the URI for the contribution being stored

SourceURL is the URL pointing to the contribution you want to store in
the repository.

InputStream is the content of the contribution (optional)
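
So a caller would do something along these lines. This is just a sketch: the
URI and file path are made up, how you get hold of the ContributionRepository
instance depends on the runtime wiring, and I'm assuming the interface lives
in the org.apache.tuscany.sca.contribution.service package next to the impl:

import java.io.InputStream;
import java.net.URL;
import org.apache.tuscany.sca.contribution.service.ContributionRepository;

public class StoreExample {

    // Stores a local contribution jar in the repository under the
    // (made up) URI "sample-contribution".
    public static URL storeContribution(ContributionRepository repository) throws Exception {
        URL sourceURL = new URL("file:/tmp/sample-contribution.jar");
        InputStream contributionStream = sourceURL.openStream();
        try {
            return repository.store("sample-contribution", sourceURL, contributionStream);
        } finally {
            contributionStream.close();
        }
    }
}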

>
> The model in my sandbox [2], which is very similar to the XML that the
> current contribution repository uses, now holds node and contribution name
> information [3]. These could be two separate models to decouple the
> management of contributions from the process of associating them together.
> I'd keep the info in one place but I expect others' views will vary.
>
> [1]
> http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
> [2] http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
> [3]
> http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain
>



-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/
