Posted to dev@avalon.apache.org by Stephen McConnell <mc...@apache.org> on 2003/09/24 13:16:35 UTC

[RT] structural evolution

Avalon classic:

  |-------------|        |-----------|
  |             |        |           |
  |  container  |<------>| component |
  |             |        |           |
  |-------------|        |-----------|

Which has since evolved (meta + composition + Maven project):

  |-------------|        |-----------|       |-----------|
  |             |  http  |           |       |           |
  | repository  |<------>| container |<----->| component |
  |             |        |           |       |           |
  |-------------|        |-----------|       |-----------|

Leading to the next logical step:

  |-------------|        |-----------|
  |             |        |           |
  | repository  |<------>| agency    |
  |             |        |           |       |------------|       |-----------|
  |-------------|        |           | http  |            |       |           |
                         |           |<----->| container  |<----->| component |
  |-------------|        |           |  rmi  |            |       |           |
  |             | ldap ? |           | iiop  |------------|       |-----------|
  | registry    |<------>|           |
  |             |        |           |
  |-------------|        |-----------|


Scenario - forget about "locate, install, customize, deploy" - instead 
think about register once, and execute.  For example, if I have a 
composite component that requires a product install, instead of dragging 
in a default configuration, I want to drag in a customized configuration 
matching my profile and environment and I want it to work with zero (or 
at least near zero) intervention.  That logic resides in the "agency".  
It uses information about me, my domain, resources, etc. (stored in a 
registry) to dynamically construct a solution based on deployment 
information and artifacts available across a set of repositories.
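To make the scenario a little more tangible, here is a minimal sketch of the "register once, execute" flow in code. All of the names (Registry, Repository, Agency, Profile, DeploymentPlan) are invented for illustration - they are not Avalon or Merlin APIs:

```java
import java.util.List;
import java.util.Map;

// Stored information about a user and their environment (the "registry" data).
record Profile(String user, Map<String, String> environment) {}

// A logical resource reference, e.g. james:james:server.
record Artifact(String group, String name, String version) {}

// The customized solution handed back to the client.
record DeploymentPlan(List<Artifact> artifacts, Map<String, String> configuration) {}

interface Registry {
    Profile lookup(String userId);          // user/domain preferences
}

interface Repository {
    List<Artifact> resolve(String product); // artifacts for a product
}

class Agency {
    private final Registry registry;
    private final Repository repository;

    Agency(Registry registry, Repository repository) {
        this.registry = registry;
        this.repository = repository;
    }

    // Dynamically construct a customized deployment: artifacts come from the
    // repository, configuration is derived from the registered profile -
    // so the user intervenes zero (or near zero) times.
    DeploymentPlan prepare(String userId, String product) {
        Profile profile = registry.lookup(userId);
        List<Artifact> artifacts = repository.resolve(product);
        return new DeploymentPlan(artifacts, profile.environment());
    }
}
```

The point of the sketch is only the shape of the interaction: the agency is the one place where profile data and repository artifacts meet.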

Any thoughts on how we could go about building such an animal?

Stephen.

-- 

Stephen J. McConnell
mailto:mcconnell@apache.org




---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@avalon.apache.org
For additional commands, e-mail: dev-help@avalon.apache.org


Re: [RT] structural evolution

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Thursday 25 September 2003 00:50, Stephen McConnell wrote:
> Niclas Hedhman wrote:
> >I just wanted to highlight that my notion of component is a much tighter
> >entity, something like a black box with a known set of interfaces, to
> > which it has to adhere. NOW, those interfaces are pretty loose, and I
> > would like to see a stricter contract.
>
> How close are you to something you can propose, talk about, kick around?

It was all done for Phoenix, and is now slowly (very!) being transferred over to 
Merlin...

The "DevKit" recognizes three stages: Block Specification, Block Implementation 
and Projects (which you would call deployments). 

DevKit Installation
==============
The DevKit asks a dozen questions about the user, company, package names, where 
development is located, and the central repository (typically provided by me).

Block Specification
==============
1. An Ant script asks for a name and a Service interface, creates a full directory 
structure, and creates the Service interface and the skeleton for a web-based UI 
and documentation (xdocs).

2. After the first step, the spec is compilable and "operational".

3. User adds the java code and documentation.

4. The user creates an expected XML representation of the objects that are part 
of the specification, and uses them as templates for creating the UI parts: 
transformations for SVG, HTML and WML.

5. The user runs Ant, and a Block Spec tarball is created, complete with the jar 
files, the documentation (both HTML and xdocs) and the UI parts.

6. (Not ready yet.) The tarball is optionally published on the central repository.


Block Implementation
================
1. A similar Ant script asks for a name, the Service interface to implement, 
and the name of the Service implementation class, and creates the 
implementation class directory structures.

2. User copies the Block Spec tarball to the "spec/" directory.

3. The user does the implementation and documentation, and creates the default 
configurations.

4. User can add additional UI parts, which "extend" beyond the specification 
requirement.

5. The Ant script merges the spec and the impl (xdocs, source code and UI) 
into a single tarball.

6. (not done) The implementation can be published on the central repository.


Project / Deployment
================

1. The Ant script creates the directory structure, and skeleton environment.xml 
and assembly.xml files.

2. The user copies all the block implementation tarballs to be used into the 
"impl/" directory.

3. The first run of Ant extracts and merges the configurations, the xdocs sitemap, 
and the UI stuff. The user creates the assembly.xml (not done for Merlin yet).

4. Ant creates the SAR and WAR files respectively. The WAR file is a Cocoon 
application serving both the xdocs and the "real-time view" of the SAR 
application.

5. (not done) The project can be published.
(I hadn't gotten far in the "project" area before starting to work on the 
Merlin transition.)

Each of the above stages also includes unit testing, UI testing and testing in 
container.
The idea of generating the HTML docs and making them part of each tarball is 
that they can be read directly without Cocoon or Forrest. The xdocs are 
distributed so they can be merged into a single site.


The formats of the tarballs are only optimized for the "merging" activities in 
subsequent steps, and are far from "nice" in layout. Since I don't use any 
custom Ant tasks or other scripts - everything is done with copy - there are a 
lot of things to be put in the correct places.

The Ant script for each stage refers to a global Ant script, and people are 
not allowed to mess around with the properties, or I won't guarantee that it 
works (= restrictions).
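The per-stage layout described above - a small stage script delegating to one shared global script - might look roughly like this. This is a purely illustrative sketch; the file names, property names and targets are invented, not the actual DevKit scripts:

```xml
<!-- Hypothetical block-spec build file: it only names the block and
     delegates everything else to the shared global script, which users
     are not supposed to touch. -->
<project name="block-spec" default="dist">

  <property name="block.name" value="AlarmService"/>

  <!-- the one global Ant script all stage scripts refer to -->
  <import file="../devkit/global-build.xml"/>

  <target name="dist" depends="compile, xdocs">
    <!-- bundle jars, HTML docs, xdocs and UI parts into one tarball -->
    <tar destfile="dist/${block.name}-spec.tar.gz" compression="gzip"
         basedir="build/stage"/>
  </target>

</project>
```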


Does this give an overview?


Niclas



Re: [RT] structural evolution

Posted by Stephen McConnell <mc...@apache.org>.

Niclas Hedhman wrote:

>On Wednesday 24 September 2003 21:07, Stephen McConnell wrote:
>  
>
>>Instead of looking at this magic occurring in the agency relative to a
>>static set of preferences, it's easier to think of this as an interaction
>>between a container (on the client machine) (that holds a *lot* of
>>context), the user (if necessary), a product profile referencing
>>deployment criteria, directives, configurations, etc. (on the server),
>>and a persistent store holding product install information together with
>>user preferences.
>>
>>Solutions could be assembled on the server (agency) in much the same way
>>that we compose component deployment solutions today (in the client
>>container) - by using a very small amount of user supplied information,
>>applying/leveraging product descriptors, and matching/resolving these
>>relative to candidates established by the container.
>>    
>>
>
>Ok, still very abstract (I'm like Leo - concrete code says a lot) and I may not 
>fully grasp the implications of what you are outlining above.
>

Here is something a little more concrete (not code, but closer).  Today 
we can request the inclusion of a component into a composite by 
referencing a logical resource (e.g. james:james:server).  This 
capability presumes that we have a separate configuration target ready 
to apply to qualify the deployment behaviour. Inside Merlin these two 
things are used as the principal parameters in the creation of a 
DeploymentModel. The model is in effect a complete ready-to-roll 
deployment scenario.

Now let's slip into imagination.  Instead of composition via an 
implementation reference - I want to compose via a service dependency 
and runtime context. I can do this by declaring a url - pointing to an 
agency and parameterized with values that qualify the service.  Inside 
Merlin we can supplement the url connection with additional parameters 
(runtime context).  What we get back would be dependent on local runtime 
policies but (to keep things simple for the moment) let's assume that 
what I get is a serialized DeploymentModel.  What this implies is that 
the entire creation of the deployment scenario has been undertaken by 
the server.  All we need to do is add this scenario to our local 
containment model and we are ready to commence the local assembly and 
deployment phases.  Keep in mind that while the deployment model was 
created by the remote agency, the model contains the complete 
information about the required physical resources (jar files, etc.), 
which we resolve relative to our local repository. 

Putting it another way:

1. we post a request for a service
2. we get back a deployment scenario
3. we validate the scenario against our local environment
4. we execute the scenario
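Those four steps could be sketched roughly as follows, assuming the agency hands back a Java-serialized scenario. Scenario and ScenarioClient here are illustrative stand-ins, not Merlin's actual DeploymentModel API:

```java
import java.io.*;
import java.util.List;

// A serializable deployment scenario - an illustrative stand-in for
// Merlin's DeploymentModel, not the real class.
record Scenario(String service, List<String> jars) implements Serializable {}

class ScenarioClient {

    // Steps 1-2: the agency would return these bytes over http; here we
    // just round-trip through Java serialization to show the contract.
    static byte[] serialize(Scenario s) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(s);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Scenario deserialize(byte[] data) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Scenario) in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    // Step 3: validate against the local environment - here, simply check
    // that every required jar is present in the local repository listing.
    static boolean validate(Scenario s, List<String> localRepository) {
        return localRepository.containsAll(s.jars());
    }
}
```

Step 4 (execution) is then the normal local assembly and deployment phase, with the jars resolved against the local repository.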


>>>1. Whatever "user preferences" you can dream up, they are probably not
>>>needed by the component.
>>>      
>>>
>>My experience is the contrary.  I have several products running here on
>>my machine.  The actual configuration info used across these products is
>>remarkably similar.  Keep in mind that I'm benefiting from all of the
>>configuration management available in Merlin - typically the actual
>>configuration data that needs to be changed is really small. Furthermore
>>- it's remarkably similar in terms of root information - e.g. host names,
>>authentication criteria, etc.  
>>    
>>
>
>Could this be related to the fact that you are involved with a particular 
>"type" of application? 
>

<snip type cast examples/>

>I think you get the point... 
>  
>

I do.  The sorts of patterns I'm seeing are common because they occur across 
similar types of products (business applications). However, the same 
problem occurs - for example, I may have a business process that 
requires relatively orthogonal information - unrelated to a particular 
notion of well-known context. But we have weapons ...

>Now, instead of rejecting your ideas, let's move forward...
>
>If the components could expose what they expect, and that information could be 
>collected by tools on behalf of the Agency, I think both our needs would be 
>satisfied. I.e. once the assembly is completed, the "Agency" would know what 
>configuration information is required, and could "ask" the "user" for the 
>details, and fill in "last time values" as defaults.
>  
>
This is a scenario that demonstrates an "interaction-enabled" client 
policy - a case in which we can leverage the JNLP API to trigger web 
based interaction with the client - i.e. bringing Merlin, the end-user, and 
the agency together in a product assembly process.  The agency captures the 
application specific data (forms or whatever) and feeds this into the 
parameterization of a deployment model that is subsequently returned to 
the client's container.

>In your case, you would have your "network stuff" (which I believe is the 
>commonality you have seen), which may not be too extensive, whereas an 
>assembled project in our case may request 300 parameters or more.
>We could then divide the "defaults" over a bunch of "users" for common, similar 
>projects. 
>

This is potentially a fun area - because we can get into a scenario 
where a server requests that the client deploy a component that will 
act as a helper to the server, supplying domain specific 
information.  I have not thought about this in depth at all - but the 
possibility of establishing and deploying a domain helper on the client side of 
the process, supporting server side solution assembly, seems well within 
scope.

>
>Hmmm? Maybe...
>  
>

:-P

>  
>
>>The parallels between component composition and solutions assembly are
>>food for thought.
>>    
>>
>
>Yes, indeed...
>
>  
>
>>I'm not saying this is a good thing, but if you know in detail the
>>component model (including the meta info and meta data models) you can
>>establish and maintain very strong component contracts.  I agree that
>>there are parts of our specification that are sticky and others that are
>>just plain wobbly (e.g. selector semantics).  Personally I don't find
>>this limiting - mainly because I stay away from sticky and wobbly areas.
>>    
>>
>
>I just wanted to highlight that my notion of component is a much tighter 
>entity, something like a black box with a known set of interfaces, to which 
>it has to adhere. NOW, those interfaces are pretty loose, and I would like to 
>see a stricter contract.
>

How close are you to something you can propose, talk about, kick around?

Steve.

-- 

Stephen J. McConnell
mailto:mcconnell@apache.org






Re: [RT] structural evolution

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Wednesday 24 September 2003 21:07, Stephen McConnell wrote:
> Instead of looking at this magic occurring in the agency relative to a
> static set of preferences, it's easier to think of this as an interaction
> between a container (on the client machine) (that holds a *lot* of
> context), the user (if necessary), a product profile referencing
> deployment criteria, directives, configurations, etc. (on the server),
> and a persistent store holding product install information together with
> user preferences.
>
> Solutions could be assembled on the server (agency) in much the same way
> that we compose component deployment solutions today (in the client
> container) - by using a very small amount of user supplied information,
> applying/leveraging product descriptors, and matching/resolving these
> relative to candidates established by the container.

Ok, still very abstract (I'm like Leo - concrete code says a lot) and I may not 
fully grasp the implications of what you are outlining above.


> >1. Whatever "user preferences" you can dream up, they are probably not
> > needed by the component.
>
> My experience is the contrary.  I have several products running here on
> my machine.  The actual configuration info used across these products is
> remarkably similar.  Keep in mind that I'm benefiting from all of the
> configuration management available in Merlin - typically the actual
> configuration data that needs to be changed is really small. Furthermore
> - it's remarkably similar in terms of root information - e.g. host names,
> authentication criteria, etc.  

Could this be related to the fact that you are involved with a particular 
"type" of application? 
For instance, our components under development have a slightly larger "span of 
scope":

AlarmService - configuration deals with common attributes (meta-data if you 
like) of alarm points, the priority of threads running the service, and not 
much more.
TimingFactory - configuration deals with global schedules (holidays, working 
hours) and not much else.
SerialDevice - configuration deals with which external devices are connected, 
what each of these devices' capabilities are, the priority of polling them, 
and a lot along these lines.
Mailer - outgoing mail related stuff - what you are more accustomed to.

I think you get the point... 

Now, instead of rejecting your ideas, let's move forward...

If the components could expose what they expect, and that information could be 
collected by tools on behalf of the Agency, I think both our needs would be 
satisfied. I.e. once the assembly is completed, the "Agency" would know what 
configuration information is required, and could "ask" the "user" for the 
details, and fill in "last time values" as defaults.
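That "expose what they expect" contract could be sketched like this. Everything here (Configurable, ConfigPoint, AgencyPrompter) is an invented illustration of the idea, not an Avalon interface:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// One configuration point a component declares it expects.
record ConfigPoint(String key, String description) {}

// Hypothetical "exposes what it expects" contract for a component.
interface Configurable {
    List<ConfigPoint> expectedConfiguration();
}

class AgencyPrompter {
    // Collect every point the assembled components require, filling in
    // "last time" values as defaults where the registry has them; empty
    // values are the ones the user must still supply.
    static Map<String, String> gather(List<Configurable> components,
                                      Map<String, String> lastTimeValues) {
        Map<String, String> answers = new LinkedHashMap<>();
        for (Configurable c : components) {
            for (ConfigPoint p : c.expectedConfiguration()) {
                answers.put(p.key(), lastTimeValues.getOrDefault(p.key(), ""));
            }
        }
        return answers;
    }
}
```

Whether the collection is done by a build tool or at assembly time on the agency is an open design question; the sketch only shows the data flow.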

In your case, you would have your "network stuff" (which I believe is the 
commonality you have seen), which may not be too extensive, whereas an 
assembled project in our case may request 300 parameters or more.
We could then divide the "defaults" over a bunch of "users" for common, similar 
projects. 

Hmmm? Maybe...

> The parallels between component composition and solutions assembly are
> food for thought.

Yes, indeed...

> I'm not saying this is a good thing, but if you know in detail the
> component model (including the meta info and meta data models) you can
> establish and maintain very strong component contracts.  I agree that
> there are parts of our specification that are sticky and others that are
> just plain wobbly (e.g. selector semantics).  Personally I don't find
> this limiting - mainly because I stay away from sticky and wobbly areas.

I just wanted to highlight that my notion of component is a much tighter 
entity, something like a black box with a known set of interfaces, to which 
it has to adhere. NOW, those interfaces are pretty loose, and I would like to 
see a stricter contract.
When I think "component" I very much draw a parallel to the more physical 
engineering disciplines, let's say plumbing. A steel pipe is not useful if 
there is no datasheet, or if it can't be measured, bent, welded 
and so on.

> So - yes - there is room for improvement - and - no - what we have is
> more than sufficient.

Maybe.

> In fact I think that the notions I'm talking about and the things you are
> describing above (and in some of your prev. posts) share a common
> requirement for a product meta model. At the same time our respective
> aims/ideas/functional-objectives are very different. 

Probably. I have productivity and ensured quality in mind, and don't mind 
having certain limitations imposed if that is required to gain them. Others 
may think differently.

> Even so, one should not necessarily negate the other ;-).

Not at all...


Niclas



Re: [RT] structural evolution

Posted by Stephen McConnell <mc...@apache.org>.

Niclas Hedhman wrote:

>On Wednesday 24 September 2003 19:16, Stephen McConnell wrote:
>
><snip what="ascii art" />
>
>  
>
>>Scenario - forget about "locate, install, customize, deploy" - instead
>>think about register once, and execute.  For example, if I have a
>>composite component that requires a product install, instead of dragging
>>in a default configuration, I want to drag in a customized configuration
>>matching my profile and environment and I want it to work with zero (or
>>at least near zero) intervention.  That logic resides in the "agency".
>>It uses information about me, my domain, resources, etc. (stored in a
>>registry) to dynamically construct a solution based on deployment
>>information and artifacts available across a set of repositories.
>>    
>>
>
>I don't fully follow your logic.
>
>Component maker NH creates component NSC, which he places in a repository 
>somewhere. NSC is composed of all kinds of resources, Avalon components and 
>other "stuff", which reside in their own repositories.
>
>NH puts on the "Agency Logic Creator" hat and tries to figure out what kind of 
>particular configuration a particular user of NSC wants, by matching the 
>"user preferences" (whatever those are) with known configuration points in NSC 
>and the components/resources it uses.
>
>Am I with you that far?
>

Ummm, sort of - but I would present it differently. 

Instead of looking at this magic occurring in the agency relative to a 
static set of preferences, it's easier to think of this as an interaction 
between a container (on the client machine) (that holds a *lot* of 
context), the user (if necessary), a product profile referencing 
deployment criteria, directives, configurations, etc. (on the server), 
and a persistent store holding product install information together with 
user preferences. 

Solutions could be assembled on the server (agency) in much the same way 
that we compose component deployment solutions today (in the client 
container) - by using a very small amount of user supplied information, 
applying/leveraging product descriptors, and matching/resolving these 
relative to candidates established by the container.

>
>If so,
>1. Whatever "user preferences" you can dream up, they are probably not needed 
>by the component.
>

My experience is the contrary.  I have several products running here on 
my machine.  The actual configuration info used across these products is 
remarkably similar.  Keep in mind that I'm benefiting from all of the 
configuration management available in Merlin - typically the actual 
configuration data that needs to be changed is really small. Furthermore 
- it's remarkably similar in terms of root information - e.g. host names, 
authentication criteria, etc.  But also - as is to be expected - there are 
variations in format, semantics and so on.  This is addressable if you 
have appropriate criteria and directives to play with.  Compare this 
with Merlin's ability to construct complex context solutions 
based on a small set of standard context entries, criteria expressed 
in meta-info, and component specific directives that enable resolution 
across an arbitrary array of components.

The parallels between component composition and solutions assembly are 
food for thought.

>2. Since every component/resource evolves independently, it will be near 
>impossible to track the needed changes from version to version. Especially 
>if the Agency program allows a block to depend on a particular older version.
>

This has to come from the *product/publisher* and the *consumer* - just 
as the information for component deployment is resolved relative to 
information from the component *type/developer* and *assembler*.  
Parallel concepts - different abstractions.

>I only see problems, and my gut feeling says "wrong direction". Concentrate on 
>creating better component standards. As I have said in the past, I don't 
>feel we have components in Avalon yet. They are far too loose and not 
>complete imho. Also, for the Agency concept to work, I think you will need 
>stronger component contracts.
>

:-)

I think it's a case of knowing the terrain. 

I'm not saying this is a good thing, but if you know in detail the 
component model (including the meta info and meta data models) you can 
establish and maintain very strong component contracts.  I agree that 
there are parts of our specification that are sticky and others that are 
just plain wobbly (e.g. selector semantics).  Personally I don't find 
this limiting - mainly because I stay away from sticky and wobbly areas. 

So - yes - there is room for improvement - and - no - what we have is 
more than sufficient.

>
>a. Components should encompass not only JAR material, but docs, binaries, GUI 
>parts, Admin parts, whatever. This is where I am concentrating my efforts at 
>the moment.
>
>b. Components today are "runtime only". A component in my world has 
>"non-runtime" interface(s) as well, possibly in code, so that tools and 
>containers can "talk" to it in a passive mode. This is today handled by XML 
>files, but that is probably too limited. Why shouldn't a component have 
>behaviour outside the container?
>

No reason at all. 

In fact I think that the notions I'm talking about and the things you are 
describing above (and in some of your prev. posts) share a common 
requirement for a product meta model. At the same time our respective 
aims/ideas/functional-objectives are very different. Even so, one should 
not necessarily negate the other ;-).

Cheers, Stephen.

>My 0.02 ringgit worth...
>

-- 

Stephen J. McConnell
mailto:mcconnell@apache.org






RE: [RT] structural evolution

Posted by Leo Sutic <le...@inspireinfrastructure.com>.

> From: Niclas Hedhman [mailto:niclas@hedhman.org] 
> 
> On Wednesday 24 September 2003 21:27, Leo Sutic wrote:
> > Not more than the problems you get from compile-time 
> > dependencies when 
> > running maven. (Which works fine for me.)
> 
> Unfortunately, I have not been so lucky, and have heard about
> others having problems
> with the dependency structures of Maven.

I think the issue is that it requires a good solid infrastructure
behind it - many projects end up doing releases that are
dependent on SNAPSHOT versions of other packages.

This, of course, messes things up a little. But not more than
the degree to which it is already messed up - I remember Cocoon
releasing with -dev versions of Xalan, for example (which is
a no-no).

But the general idea - declaring dependencies and having the container
auto-assemble the app looks fine to me.

/LS




Re: [RT] structural evolution

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Wednesday 24 September 2003 21:27, Leo Sutic wrote:
> Not more than the problems you get from compile-time dependencies
> when running maven. (Which works fine for me.)

Unfortunately, I have not been so lucky, and have heard about others having problems 
with the dependency structures of Maven.
Also, I think the fact that it is difficult to replicate someone else's setup 
can potentially make it harder to track down bugs.
I guess this is not really the topic at hand.

Niclas



RE: [RT] structural evolution

Posted by Leo Sutic <le...@inspireinfrastructure.com>.

> From: Niclas Hedhman [mailto:niclas@hedhman.org] 
>
> I only see problems, and my gut feeling says "wrong direction". 

Not more than the problems you get from compile-time dependencies
when running maven. (Which works fine for me.)

/LS




Re: [RT] structural evolution

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Wednesday 24 September 2003 19:16, Stephen McConnell wrote:

<snip what="ascii art" />

> Scenario - forget about "locate, install, customize, deploy" - instead
> think about register once, and execute.  For example, if I have a
> composite component that requires a product install, instead of dragging
> in a default configuration, I want to drag in a customized configuration
> matching my profile and environment and I want it to work with zero (or
> at least near zero) intervention.  That logic resides in the "agency".
> It uses information about me, my domain, resources, etc. (stored in a
> registry) to dynamically construct a solution based on deployment
> information and artifacts available across a set of repositories.

I don't fully follow your logic.

Component maker NH creates component NSC, which he places in a repository 
somewhere. NSC is composed of all kinds of resources, Avalon components and 
other "stuff", which reside in their own repositories.

NH puts on the "Agency Logic Creator" hat and tries to figure out what kind of 
particular configuration a particular user of NSC wants, by matching the 
"user preferences" (whatever those are) with known configuration points in NSC 
and the components/resources it uses.

Am I with you that far?

If so,
1. Whatever "user preferences" you can dream up, they are probably not needed 
by the component.
2. Since every component/resource evolves independently, it will be near 
impossible to track the needed changes from version to version. Especially 
if the Agency program allows a block to depend on a particular older version.

I only see problems, and my gut feeling says "wrong direction". Concentrate on 
creating better component standards. As I have said in the past, I don't 
feel we have components in Avalon yet. They are far too loose and not 
complete imho. Also, for the Agency concept to work, I think you will need 
stronger component contracts.

a. Components should encompass not only JAR material, but docs, binaries, GUI 
parts, Admin parts, whatever. This is where I am concentrating my efforts at 
the moment.

b. Components today are "runtime only". A component in my world has 
"non-runtime" interface(s) as well, possibly in code, so that tools and 
containers can "talk" to it in a passive mode. This is today handled by XML 
files, but that is probably too limited. Why shouldn't a component have 
behaviour outside the container?

My 0.02 ringgit worth...

Niclas



Re: [RT] structural evolution

Posted by Stephen McConnell <mc...@apache.org>.

Berin Loritsch wrote:

> Stephen McConnell wrote:
>
> <snip type="ascii-art"/>
>
>>
>> Scenario - forget about "locate, install, customize, deploy" - 
>> instead think about register once, and execute.  For example, if I 
>> have a composite component that requires a product install, instead 
>> of dragging in a default configuration, I want to drag in a 
>> customized configuration matching my profile and environment and I 
>> want it to work with zero (or at least near zero) intervention.  That 
>> logic resides in the "agency".  It uses information about me, my 
>> domain, resources, etc. (stored in a registry) to dynamically 
>> construct a solution based on deployment information and artifacts 
>> available across a set of repositories.
>>
>> Any thought about how we could go about building such an animal?
>
>
> Before I jump into the nitty gritty, I'd like to point out some questions
> that I am a little fuzzy on with the concept you've put together here.
> Bear with me, because I am back a few steps.
>
> Remembering that Avalon can be used quite well on both server systems as
> well as client applications.  In fact, I have a couple client application
> using Avalon on the same machine.  I am currently using Fortress which is
> the still the classic deployment model you outlined above.
>
> As such I need to have all my required JARs for the application in one
> location.  This can be good or bad depending on your view.  It is certainly
> easy to grasp.  Here are the strengths and weaknesses I perceive from this
> approach:
>
> * You know exactly what JARs are being used with your application.
> * You know how to find the JARs used with your application.
> * You have to duplicate the JARs all over the place (bad)
> * The application will work the first time without an internet connection
>   in place (unless it is intentionally a network bound application).
> * You have to be omniscient in the sense of identifying the proper jars
>   and configuring its components (bad)
>
> So if I understand the extra layer that you added to Merlin with the
> repository, it solves the duplication of JARs and the ability to resolve
> JARs somewhat dynamically.  What is not clear to me is what happens when
> we have separate applications that all require the Merlin kernel.  For
> example, let's say I have a client that can graphically assist the 
> application
> assembler in generating the proper kernel and component configuration 
> files.
> In addition to that, I have a client that performs remote monitoring of
> my server.  Both of these applications have to run on the same machine at
> the same time without stepping on each others toes.  With the GUIApp 
> framework
> I am working (Fortress based), I can take care of this with an App naming
> scheme and the java.util.prefs package (as well as a relative 
> directory for
> the app in the user temp directory).  I am not sure how or what Merlin is
> doing for that.  So, assuming I want to take advantage of the repository
> (which I do), what more do I need to do? 


Quick notes on repository features/benefits:

1. the local repository is a cache just like the maven model *without* 
SNAPSHOT semantics
2. it is the place for policy handling concerning resource importing
3. it eliminates duplication
4. it eliminates the need to know about versions etc.
5. the merlin variant comes with the ability to populate the repository 
from a bar file (block archive)
6. bar files mean we can handle offline scenarios
7. bar files also let us resolve licensing issues

Repository usage is really easy - just create a repository using new 
DefaultFileRepository and start playing with the Repository API.  All 
the code is isolated under the avalon-repository-xxx-1.0.jar files.  
I'll leave the details to Pete Courcoux because he's about to publish a 
book on the subject.

<pete> ;-)   </pete>
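To make the cache behaviour in point 1 concrete, here is a minimal 
sketch of the idea.  Note that the interface and naming below are 
illustrative simplifications I'm using for discussion - not the actual 
avalon-repository API (see the avalon-repository-xxx-1.0.jar files for 
the real thing):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the repository-as-cache idea: resolve an artifact from a
// local cache, fall back to a remote host on a miss, and cache the
// result so the next lookup is local.  Names are illustrative.
interface Repository {
    String getResource(String group, String artifact, String version);
}

class DefaultFileRepository implements Repository {
    private final Map cache = new HashMap();   // stands in for the local cache dir
    private final String remoteHost;           // e.g. an http mirror

    DefaultFileRepository(String remoteHost) {
        this.remoteHost = remoteHost;
    }

    public String getResource(String group, String artifact, String version) {
        String key = group + "/" + artifact + "-" + version + ".jar";
        String path = (String) cache.get(key);
        if (path == null) {
            // cache miss: a real implementation would download here
            path = remoteHost + "/" + key;
            cache.put(key, path);
            System.out.println("miss " + key);
        } else {
            System.out.println("hit  " + key);
        }
        return path;
    }
}

public class RepositoryDemo {
    public static void main(String[] args) {
        Repository repo = new DefaultFileRepository("http://www.ibiblio.org/maven");
        repo.getResource("avalon-framework", "avalon-framework", "4.1.5");
        repo.getResource("avalon-framework", "avalon-framework", "4.1.5");
    }
}
```

The second lookup never touches the network - which is exactly why 
points 3 and 4 above fall out of the design for free.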

>
> Lastly, the "Agency" Server/Component seems to be a single interface for
> the repository and adding a new concept called the "registry".  The 
> question
> is whether you want that registry to be local or remote.  


I consider the "Agency" as an interface that would be remotely 
accessible.  Specifically, I see it running as a process shared across 
either multiple kernels, or, in a separate domain, shared across multiple 
users (possibly two different implementations).  Things behind the 
agency are implementation concerns - for example, when preparing a 
DeploymentModel solution the agency needs access to information about 
products and services, which it gets from a repository.  The registry 
serves as a store for information about deployment solutions handed to the 
client (and info about the client).  For example, if I asked for an FTP 
server and got back a preconfigured ready-to-roll FTP 
DeploymentModel, and I made changes to the model (like further tweaking 
of parameters), I would want to register the model back into the registry.  
The next time I need the FTP solution it's sitting there in the 
registry waiting for me.
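The register-once-and-execute round trip is simple enough to sketch in a 
few lines.  Again, these class names are stand-ins for discussion, not 
the real Merlin types:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of register-once-and-execute: the first request for a service
// yields a default deployment model; once the client tweaks and
// registers it, later requests get the customized model back.
class DeploymentModel {
    final String service;
    String parameters;
    DeploymentModel(String service, String parameters) {
        this.service = service;
        this.parameters = parameters;
    }
}

class Registry {
    private final Map models = new HashMap();

    DeploymentModel resolve(String service) {
        DeploymentModel model = (DeploymentModel) models.get(service);
        if (model == null) {
            // no registered solution yet - hand back a default
            model = new DeploymentModel(service, "defaults");
        }
        return model;
    }

    void register(DeploymentModel model) {
        models.put(model.service, model);
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        Registry registry = new Registry();
        DeploymentModel ftp = registry.resolve("ftp");
        System.out.println("first resolve: " + ftp.parameters);
        ftp.parameters = "passive-mode, port 2121";  // client tweaks the model
        registry.register(ftp);                      // register it back
        System.out.println("next resolve:  " + registry.resolve("ftp").parameters);
    }
}
```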

> The simple solution
> is to have a local registry and use the java.util.prefs package.  For
> remote solutions you are talking about a remote filesystem, database,
> LDAP server, etc.  The actual implementation can be different, but look
> at what has been done in this space already. 


With this in mind - an important functional criterion is the ability to 
chain agencies behind agencies.  On one hand it's a good way of 
differentiating domain solutions - e.g. agencies handling enterprise 
services (FTP, Email, Web, B2B, etc.) compared to an agency specialized 
in something like industrial process control.  This has some 
really interesting implications in the area of referral management and 
some nasty surprises in terms of circular referrals - but that's another 
story.
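Just to pin down the circular-referral problem, here's the shape of it 
(illustrative names again - the guard is the standard visited-set trick, 
not a claim about how the real implementation will work):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of referral handling between chained agencies: each agency
// either answers for a service or refers to the next agency, and a
// visited set guards against circular referrals.
class Agency {
    final String name;
    final Set services = new HashSet();  // services handled locally
    Agency referral;                     // next agency to try

    Agency(String name) { this.name = name; }

    String locate(String service) {
        return locate(service, new HashSet());
    }

    private String locate(String service, Set visited) {
        if (!visited.add(name)) {
            return "circular referral detected at " + name;
        }
        if (services.contains(service)) {
            return service + " handled by " + name;
        }
        if (referral != null) {
            return referral.locate(service, visited);
        }
        return service + " not found";
    }
}

public class ReferralDemo {
    public static void main(String[] args) {
        Agency enterprise = new Agency("enterprise");
        Agency industrial = new Agency("industrial");
        enterprise.services.add("ftp");
        enterprise.referral = industrial;
        industrial.referral = enterprise;  // deliberately circular
        System.out.println(enterprise.locate("ftp"));
        System.out.println(enterprise.locate("process-control"));
    }
}
```

The nasty part in practice is that the "visited" state has to travel 
with the referral across process boundaries, which is where the other 
story begins.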

> JNDI seems to be the best bet for a single interface regardless of 
> remote or
> local access to the registry.  Whatever you are doing for the 
> repository seems
> to be working for you, so I would probably leverage that. 


Yep -

>
>
> The next question is how exactly are you planning on setting up the 
> agency?
> Is this like an ORB Name Server in concept where it would be a separate 
> service
> or server running?  


Yes - for example, as an NT service on my local machine (the personal 
agency model), or as a remotely accessible server handling n clients.

> Considering that you are planning on communicating over
> one of several transports that's the way it looks.  I would recommend 
> securable
> transports (unlike RMI), but that is only my opinion. 


Actually, secure communications are critical - at least in the remote 
model, because we need to transfer client identity, runtime context, 
policy, etc.  A full implementation of the remote server needs to be 
bulletproof in this area.

>
>
> Keep in mind that in critical network environments like a DMZ, the 
> separate
> agency server would probably be vetoed if everything can be done locally.
> The less traffic over a network and the fewer open ports on the 
> servers, the
> better.  It looks like you are hoping that the agency server will be 
> something
> to assist clustering at a later date.  Is that true?  (if so, I will 
> have to
> wipe the drool from the corner of my mouth).


ROTFL at the thought of Berin drooling!!!

:-D



> If so, we need to have a plan
> to lock that thing down so that it is just as secure as it is usable.


Agreed.

Steve.

-- 

Stephen J. McConnell
mailto:mcconnell@apache.org




---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@avalon.apache.org
For additional commands, e-mail: dev-help@avalon.apache.org


Re: [RT] structural evolution

Posted by Berin Loritsch <bl...@apache.org>.
Stephen McConnell wrote:

<snip type="ascii-art"/>

> 
> Scenario - forget about "locate, install, customize, deploy" - instead 
> think about register once, and execute.  For example, if I have a 
> composite component that requires a product install, instead of dragging 
> in a default configuration, I want to drag in a customized configuration 
> matching my profile and environment and I want it to work with zero (or 
> at least near zero) intervention.  That logic resides in the "agency".  
> It uses information about me, my domain, resources, etc. (stored in a 
> registry) to dynamically construct a solution based on deployment 
> information and artifacts available across a set of repositories.
> 
> Any thought about how we could go about building such an animal?

Before I jump into the nitty gritty, I'd like to point out some questions
that I am a little fuzzy on with the concept you've put together here.
Bear with me, because I am back a few steps.

Remember that Avalon can be used quite well on both server systems as
well as client applications.  In fact, I have a couple of client applications
using Avalon on the same machine.  I am currently using Fortress, which is
still the classic deployment model you outlined above.

As such I need to have all my required JARs for the application in one
location.  This can be good or bad depending on your view.  It is certainly
easy to grasp.  Here are the strengths and weaknesses I perceive from this
approach:

* You know exactly what JARs are being used with your application.
* You know how to find the JARs used with your application.
* You have to duplicate the JARs all over the place (bad)
* The application will work the first time without an internet connection
   in place (unless it is intentionally a network bound application).
* You have to be omniscient in the sense of identifying the proper JARs
   and configuring their components (bad)

So if I understand the extra layer that you added to Merlin with the
repository, it solves the duplication of JARs and the ability to resolve
JARs somewhat dynamically.  What is not clear to me is what happens when
we have separate applications that all require the Merlin kernel.  For
example, let's say I have a client that can graphically assist the application
assembler in generating the proper kernel and component configuration files.
In addition to that, I have a client that performs remote monitoring of
my server.  Both of these applications have to run on the same machine at
the same time without stepping on each other's toes.  With the GUIApp framework
I am working on (Fortress based), I can take care of this with an App naming
scheme and the java.util.prefs package (as well as a relative directory for
the app in the user temp directory).  I am not sure how or what Merlin is
doing for that.  So, assuming I want to take advantage of the repository
(which I do), what more do I need to do?


Lastly, the "Agency" Server/Component seems to be a single interface for
the repository and adding a new concept called the "registry".  The question
is whether you want that registry to be local or remote.  The simple solution
is to have a local registry and use the java.util.prefs package.  For remote
solutions you are talking about a remote filesystem, database, LDAP server, etc.
The actual implementation can be different, but look at what has been done
in this space already.
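To show how little the local option costs, a java.util.prefs registry is 
only a few lines (the node path and key below are just examples, not a 
proposed naming scheme):

```java
import java.util.prefs.Preferences;

// Minimal local-registry sketch using java.util.prefs: store a value
// under a per-user node, read it back, then remove the demo node.
public class PrefsRegistry {
    public static void main(String[] args) throws Exception {
        Preferences node = Preferences.userRoot().node("org/apache/avalon/demo");
        node.put("merlin.home", "/opt/merlin");
        System.out.println("merlin.home = " + node.get("merlin.home", "unset"));
        node.removeNode();  // clean up so the demo leaves no trace
    }
}
```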

JNDI seems to be the best bet for a single interface regardless of remote or
local access to the registry.  Whatever you are doing for the repository seems
to be working for you, so I would probably leverage that.

The next question is how exactly are you planning on setting up the agency?
Is this like an ORB Name Server in concept where it would be a separate service
or server running?  Considering that you are planning on communicating over
one of several transports that's the way it looks.  I would recommend securable
transports (unlike RMI), but that is only my opinion.

Keep in mind that in critical network environments like a DMZ, the separate
agency server would probably be vetoed if everything can be done locally.
The less traffic over a network and the fewer open ports on the servers, the
better.  It looks like you are hoping that the agency server will be something
to assist clustering at a later date.  Is that true?  (if so, I will have to
wipe the drool from the corner of my mouth).  If so, we need to have a plan
to lock that thing down so that it is just as secure as it is usable.

-- 

"They that give up essential liberty to obtain a little temporary safety
  deserve neither liberty nor safety."
                 - Benjamin Franklin

