Posted to dev@depot.apache.org by Nicola Ken Barozzi <ni...@apache.org> on 2004/06/17 12:21:06 UTC

Moving forward or letting go

We've had Depot for some months now, and things have stalled. It happens 
in open source, and even big and successful projects have months that 
seem empty.

But...

Depot is not pushing forward, and it's not getting used. We have to give it 
another push. I'll try to give a hand, but I'm currently very active on 
Forrest, and Adam is joyfully bound to Gump, so we'll need all the help we 
can get :-)

Simple plan:

1 - get the site in a readable state
2 - use depot in Cocoon for showing what it can be used for
     (added in parallel to the current system)
3 - do a nightly release
4 - publicize it in the usual places (TheServerSide, blogs, java.net,
     javalobby, freshmeat, etc)

The important thing for me now is point 2; point 1 will be a byproduct as 
I learn to use it.

Stay ready for my questions :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Sunday 20 June 2004 16:51, Markus M. May wrote:

> > Yes, you will need to define or use server-side meta-data. The immediate
> > effect after solving that is the Classloader (not classloading)
> > establishment, since the 'type' of classloader the dependency belongs to,
> > MUST be managed by the repository system, or you have missed a major
> > goal.

> I agree on the meta-data stored on the server-side, but I did not
> understand the 'type' of classloader and the major goal. Can you please
> enlighten me?

Yes, most people don't get it immediately, and possibly not until they are 
faced with the problem head-on (as it was for me; Stephen had been on about 
it for a long time.)

Exactly how much support is required from the Repository system is perhaps 
debatable, but the basic concept is needed:

Project A depends on Jar B.
Jar B has a chained, perhaps indirect, dependency on Jar C.
The Repository system is responsible for establishing that Jar C must reside 
in a different classloader from Jar B.

Example: any pluggable functionality cannot reside in the same classloader as 
the management code, since it must be possible to purge the classes and reload.

Example: in Avalon Merlin, the components don't have access to the system 
(Merlin) implementation classes at all. They have access to the API only, so 
the top-level component classloader has the API classloader as its parent, 
just like the Merlin implementation classloader does.
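
Just to illustrate the idea (this is not Merlin or Depot code, and the jar 
names are invented), a minimal sketch of that separation with plain 
URLClassLoaders might look like:

import java.net.URL;
import java.net.URLClassLoader;

public class SeparationSketch {
    public static void main(String[] args) throws Exception {
        // Shared API types live in a parent loader.
        URLClassLoader api = new URLClassLoader(
            new URL[] { new URL("file:lib/project-api.jar") });

        // Management code and each plugin get sibling child loaders, so they
        // share the API types but cannot see each other's implementation
        // classes.  Purge-and-reload means dropping a child loader.
        URLClassLoader management = new URLClassLoader(
            new URL[] { new URL("file:lib/management-impl.jar") }, api);
        URLClassLoader plugin = new URLClassLoader(
            new URL[] { new URL("file:lib/plugin-impl.jar") }, api);

        System.out.println(management.getParent() == plugin.getParent()); // true
    }
}

A repository system that understands chained dependencies has to decide, per 
dependency, which of these loaders a jar belongs in.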

For Project TomDickHarry this may be 'fluff' and above their heads, but for 
seriously managed application frameworks this is important.
Maven lacks this completely, and because of that it is very difficult for us 
to do proper unit testing. (The compile-time dependency is required, is 
thrown in by Maven, and ends up in the wrong classloader...)

> > If Depot is only a tool for build systems itself, then you are also
> > missing the goal, i.e. providing a functionality to a handful of build
> > systems, each having their own solutions to this concern, is not a recipe
> > for wide-spread acceptance, and a chicken-egg problem.
>
> You mean, basically, that build systems like Maven need some plugins to
> load from a central repository. This is done through the antlets, and
> loading these was one of the goals for a next release of Depot, if I
> remember correctly?

The chicken-and-egg problem I was referring to was basically this: 
Depot needs "real projects/products" to be using it, in order to gather enough 
use cases and get something useful together to graduate from the Incubator.
No one is keen on being dependent on projects in the Incubator.

And if the consensus in the Depot community is that Depot should provide some 
rudimentary support for Maven, Ant extensions and other build systems out 
there, you are narrowing the choice of willing projects to provide the 
'springboard' out of the Incubator, as they already have solutions for the 
problem at hand.

The community here must have some balls to step up to the task beyond Maven. 
Doing artifact downloads from Maven repositories is too little, too late, as 
Leo Simons can do that in <100 lines of javascript in Ant. Without, at a 
minimum, chained dependencies (which lead to the classloader concern) there 
is no incentive at all to use Depot today.

There is a real need from advanced applications like Avalon Merlin (which has 
a solution) and Geronimo (?). Without the balls and the vision, I doubt that 
Depot will flourish, as 'in-house solutions' are just as good, and we maintain 
control. Not until you get the snowball effect will projects/products pick it 
up and use it.


Cheers
Niclas

P.S. I am very opinionated, based on 'fragmented excerpts' from the mailing 
list. I have not studied what Depot really consists of today at a detailed 
level, so a lot of the above could be utterly wrong. If so, Depot needs to 
step up and get the message across louder to the other ASF projects that can 
benefit.

-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
...
> I still believe that Gump descriptors should be an 'output artifact' and not 
> an 'input artifact' in Repository Heaven.
> 
> Once we have created a solid model that fits the needs we find, it shouldn't 
> be that hard to generate Gump descriptors as a side effect.

I think we are talking different languages ;-)

I'm not saying that the Gump descriptor is a blessing. I just see that 
there is *a lot* of metadata *already* there, and it's kept *updated*.

Now, we can make our own version, or build on something that is already 
there. I think that building on it is possible and better, _but_ it's too 
early to tell, as we haven't decided what we need yet ;-)

So, let's forget for a moment this thing and start building our layers.

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Tuesday 22 June 2004 03:41, Nicola Ken Barozzi wrote:
> Stephen McConnell wrote:
> > Nicola Ken Barozzi wrote:
> >> Gump metadata != Gump being setup.
> > Gump meta-data is insufficient.
> It sure is. But it can be enhanced without having Gump barf on extra tags.

I still believe that Gump descriptors should be an 'output artifact' and not 
an 'input artifact' in Repository Heaven.

Once we have created a solid model that fits the needs we find, it shouldn't 
be that hard to generate Gump descriptors as a side effect.

Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: classloader was: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Tuesday 22 June 2004 05:49, Nicola Ken Barozzi wrote:
> I get it.
>
> In essence, a single Avalon system needs to load different artifacts in
> separate classloaders during the same run.

Yes, it is not a build-time concern, but builds include the running of tests, 
and some of these tests are fairly extensive.

> Ok, let's now tackle the other bit, that layering...

Agree.


Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
I get it.

In essence, a single Avalon system needs to load different artifacts in 
separate classloaders during the same run.

This is whacky! ;-P

From a Gump perspective, this should not be needed, since to build one 
only needs a single classloader with a list of jars. But you are not talking 
about build time...

Ok, let's now tackle the other bit, that layering...

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:

> 
> Stephen McConnell wrote:
> ...
> 
>> Going the direction of multiple gump files means invoking a build 
>> multiple times.  This is massive inefficiency - do a build to generate 
>> the classes directory and a jar file, do another build to run the 
>> testcase, 
> 
> 
> You call it inefficiency, I call it *safe* separation of builds. I don't 
> see the problem here. Note that I'm talking about *Gump*, not about a 
> build system that also uses Gump metadata.
> 
>> but then when you need the above information for generation of a build 
>> artifact - well - you sunk.  You cannot do it with gump as it is today.
> 
> 
> I don't understand this last sentence.

Sorry ... s/you/your

Basically the issue is that a gump descriptor is designed around the 
notion of a single path (path as in the ant path concept used for classpath 
construction).  When dealing with the construction of information for a 
plugin scenario you need to run a test case using a different classpath 
from the build cycle.  The test scenario will use information generated 
from the API, SPI and implementation classpaths - but hang 
on - gump is only providing us with a single classpath.  All of a sudden 
you're faced with the problem of building artifacts bit by bit across 
successive gump runs.

> 
>> The solution is to do to gump what Sam did to Ant community  .. he 
>> basically said - "hey .. there is an application that knows more about 
>> the classpath information than you do" and from that intervention ant 
>> added the ability to override the classloader definition that ant uses.
>>
>> Apply this same logic to gump - there is a build system that knows 
>> more about the class loading requirements than gump does - and gump 
>> needs to delegate responsibility to that system - just as ant 
>> delegates responsibility to gump.
> 
> 
> It doesn't make sense. You mean that one should delegate
> 
>  buildsystem -> CI system -> buildsystem

I'm saying that products like magic and maven know more about the 
classloader criteria than gump does.  Just as ant delegates the 
responsibility of classpath definition to gump, so should gump delegate 
responsibility to applications that know more about the context than 
gump does.

E.g.

|------------|      |---------------|     |-------------|
| gump       | ---> | magic         | --> | project     |
|            | <--- |               |     |-------------|
|------------|      |---------------|
|            |
|            |      |---------------|     |-------------|
|            | ---> | ant           | --> | project     |
|            | <--- |---------------|     |-------------|
|------------|

... and the only difference here between ant and magic is that magic 
knows about multi-staged classloaders (see below) and multi-mode 
classpath policies (where multi-mode means different classloaders for 
build, test and runtime).

> ?
> 
> Gump took away the responsibility from the build system, why should it 
> give it back?

Because just as gump knows more about the context than ant, magic (or 
maven) knows more about the context than gump.


>>>> I.e. gump is very focused on the pure compile scenarios and does not 
>>>> deal with the realities of test and runtime environments that load 
>>>> plugins dynamically.
>>>
>>>
>>> You cannot create fixed metadata for dynamically loaded plugins 
>>> (components), unless you decide to declare them, and the above is 
>>> sufficient.
>>
>>
>> Consider the problem of generating the meta data for a multi-staged 
>> classloader 
> 
> 
> What's a 'multi-staged classloader'?

|-----------------------|
| bootstrap-classloader |
|-----------------------|
         ^
         |
|-----------------------|
| api-classloader       |
|-----------------------|
         ^
         |
|-----------------------|
| spi-classloader       |
|-----------------------|
         ^
         |
|-----------------------|
| impl-classloader      |
|-----------------------|

The api classloader is constructed by a container and is typically 
supplied as a parent classloader for a container.  The spi classloader 
is constructed as a child of the api loader and is typically used to 
load privileged facilities that interact with a container SPI (Service 
Provider Interface).  The impl classloader is private to the application 
managing a set of pluggable components.
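
As a rough sketch (the jar names are invented and this is not the actual 
Merlin wiring), such a chain could be assembled from plain URLClassLoaders:

import java.net.URL;
import java.net.URLClassLoader;

public class StagedLoaderSketch {
    public static void main(String[] args) throws Exception {
        ClassLoader bootstrap = ClassLoader.getSystemClassLoader();

        // Each stage only adds jars on top of its parent, so API classes are
        // visible to the SPI and impl stages, but never the other way around.
        URLClassLoader api = new URLClassLoader(
            new URL[] { new URL("file:repository/framework-api.jar") }, bootstrap);
        URLClassLoader spi = new URLClassLoader(
            new URL[] { new URL("file:repository/container-spi.jar") }, api);
        URLClassLoader impl = new URLClassLoader(
            new URL[] { new URL("file:repository/container-impl.jar") }, spi);

        // A pluggable component would be loaded against impl (or a child of
        // it); purging the component means dropping that loader.
        System.out.println(impl.getParent() == spi);   // true
        System.out.println(spi.getParent() == api);    // true
    }
}

Deciding which stage a given dependency lands in is exactly the meta-data the 
repository system has to carry.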


>> containing API, SPI and IMPL separation based on one or multiple gump 
>> definitions .. 
> 
> 
> A classloader containing 'separation'?

Sure - think of it in terms of:

    * respectable
    * exposed
    * naked

The API is respectable, an SPI is exposed, the impl - that's getting naked.

>> you could write a special task to handle phased buildup of data,
> 
> 
> 'Phased buildup'?

Using gump as it is today on a project-by-project basis would require 
successive gump runs to build up "staged" classpath information - 
because of the basics of gump - a project is a classpath definition.  A 
staged classloader is potentially three classloader definitions (in gump 
terms).  In magic terms it's just one.  Mapping gump to magic requires 
three gump projects to generate one of the multiple artifacts created in a 
magic build.  I.e. gump does not mesh nicely with the building and 
testing of plugin-based systems.

Plugin-based systems absolutely need a good repository system.


>> and another task to consolidate this and progressively - over three 
>> gump build cycles you could produce the meta-data.  Or, you could just 
>> say to magic - <artifact/> and if gump is opened up a bit .. the 
>> generated artifact will be totally linked in to gump generated resources
> 
> 
> 'Linked in to gump generated resources'?

Gump generates stuff .. to build the meta-data to run tests I need to 
know the addresses of gump-generated content.  I.e. I need to link to 
gump-generated resources.

> 
>> - which means that subsequent builds that are using the plugin are 
>> running against the gump content.
> 
> 
> You totally lost me here.

Imagine you have a project that has the following dependencies:

    * log4j (runtime dependency)
    * avalon-framework-api
    * avalon-framework-impl (test-time dependency)
    * avalon-meta-tools (plugin)

Imagine also that this project generates a staged classloader descriptor 
used within the testcase for the project.  To do a real gump 
assessment, the avalon-meta-tools meta-data descriptor needs to be 
generated to reference gump-generated jar paths.  The avalon-meta-tools 
jar itself is not a compile, build or runtime dependency ... it's just a 
tool used to generate some meta-information as part of the build 
process.  The avalon-framework-impl dependency is not a runtime 
dependency because this is provided by the container that will run the 
artifact produced by this build - but it is needed to compile and 
execute unit tests.  When the test system launches - it loads meta-data 
created by the avalon-meta-tools plugin, and loads the subject of this 
build as a plugin.  All in all there are something like six different 
classpath definitions flying around here.

I.e. getting lost is a completely reasonable feeling!

;-)

> 
>> The point is that gump build information is not sufficiently rich when 
>> it comes down to really using a repository in a productive manner when 
>> dealing with pluggable artifacts (and this covers both build and 
>> runtime concerns).  How does this affect Depot? Simply that gump 
>> project descriptors should be considered as an application specific 
>> descriptor - not a generic solution.
> 
> 
> Sorry, I don't understand.

The thing is that a repository "to me" is the source of deployment 
solutions.  The definitions of those solutions can be expressed in 
meta-data (and the avalon crew have got this stuff down pat).  The 
source of that meta-data can be through published meta-data descriptors 
or descriptors that are dynamically generated in response to service 
requests.  Either way - the underlying repository is a fundamental unit 
in the deployment equation - and the language between it and the client is 
very much a classloader subject.

Hope that helps.

Cheers, Steve.


-- 

|---------------------------------------|
| Magic by Merlin                       |
| Production by Avalon                  |
|                                       |
| http://avalon.apache.org              |
|---------------------------------------|

Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Stephen McConnell wrote:
...
> Going the direction of multiple gump files means invoking a build 
> multiple times.  This is massive inefficiency - do a build to generate 
> the classes directory and a jar file, do another build to run the 
> testcase, 

You call it inefficiency, I call it *safe* separation of builds. I don't 
see the problem here. Note that I'm talking about *Gump*, not about a 
build system that also uses Gump metadata.

> but then when you need the above information for generation of 
> a build artifact - well - you sunk.  You cannot do it with gump as it is 
> today.

I don't understand this last sentence.

> The solution is to do to gump what Sam did to Ant community  .. he 
> basically said - "hey .. there is an application that knows more about 
> the classpath information than you do" and from that intervention ant 
> added the ability to override the classloader definition that ant uses.
> 
> Apply this same logic to gump - there is a build system that knows more 
> about the class loading requirements than gump does - and gump needs to 
> delegate responsibility to that system - just as ant delegates 
> responsibility to gump.

It doesn't make sense. You mean that one should delegate

  buildsystem -> CI system -> buildsystem

?

Gump took away the responsibility from the build system, why should it 
give it back?

>>> I.e. gump is very focused on the pure compile scenarios and does not 
>>> deal with the realities of test and runtime environments that load 
>>> plugins dynamically.
>>
>> You cannot create fixed metadata for dynamically loaded plugins 
>> (components), unless you decide to declare them, and the above is 
>> sufficient.
> 
> Consider the problem of generating the meta data for a multi-staged 
> classloader 

What's a 'multi-staged classloader'?

> containing API, SPI and IMPL separation based on one or 
> multiple gump definitions .. 

A classloader containing 'separation'?

> you could write a special task to handle 
> phased buildup of data,

'Phased buildup'?

> and another task to consolidate this and 
> progressively - over three gump build cycles you could produce the 
> meta-data.  Or, you could just say to magic - <artifact/> and if gump is 
> opened up a bit .. the generated artifact will be totally linked in to 
> gump generated resources

'Linked in to gump generated resources'?

> - which means that subsequent builds that are 
> using the plugin are running against the gump content.

You totally lost me here.

> The point is that gump build information is not sufficiently rich when 
> it comes down to really using a repository in a productive manner when 
> dealing with pluggable artifacts (and this covers both build and runtime 
> concerns).  How does this affect Depot? Simply that gump project 
> descriptors should be considered as an application specific descriptor - 
> not a generic solution.

Sorry, I don't understand.

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:
> 
> Stephen McConnell wrote:
> 
>> Nicola Ken Barozzi wrote:
> 
> ...
> 
>>> Gump metadata != Gump being setup.
>>
>>
>> Gump meta-data is insufficient.
> 
> 
> It sure is. But it can be enhanced without having Gump barf on extra tags.
> 
>> In order to create a functionally sufficient expression of path 
>> information you would need 6 separate gump project descriptors per 
>> project:
>>
>>    build
>>    test
>>    runtime-api
>>    runtime-spi
>>    runtime-impl
>>    runtime-composite
> 
> 
> Gump uses the word "project" in an improper way, as it's more about a 
> project descriptor.
> 
> You can do the above in Gump by creating avalon, avalon-test, 
> avalon-api, etc... If you look at the descriptors this is for example 
> what Ant and many other projects do.

Going the direction of multiple gump files means invoking a build 
multiple times.  This is massive inefficiency - do a build to generate 
the classes directory and a jar file, do another build to run the 
testcase, but then when you need the above information for generation of 
a build artifact - well - you sunk.  You cannot do it with gump as it is 
today.

The solution is to do to gump what Sam did to the Ant community .. he 
basically said - "hey .. there is an application that knows more about 
the classpath information than you do" and from that intervention ant 
added the ability to override the classloader definition that ant uses.

Apply this same logic to gump - there is a build system that knows more 
about the class loading requirements than gump does - and gump needs to 
delegate responsibility to that system - just as ant delegates 
responsibility to gump.

> 
>> I.e. gump is very focused on the pure compile scenarios and does not 
>> deal with the realities of test and runtime environments that load 
>> plugins dynamically.
> 
> 
> You cannot create fixed metadata for dynamically loaded plugins 
> (components), unless you decide to declare them, and the above is 
> sufficient.

Consider the problem of generating the meta data for a multi-staged 
classloader containing API, SPI and IMPL separation based on one or 
multiple gump definitions .. you could write a special task to handle 
phased buildup of data, and another task to consolidate this and 
progressively - over three gump build cycles you could produce the 
meta-data.  Or, you could just say to magic - <artifact/> and if gump is 
opened up a bit .. the generated artifact will be totally linked in to 
gump generated resources - which means that subsequent builds that are 
using the plugin are running against the gump content.

The point is that gump build information is not sufficiently rich when 
it comes down to really using a repository in a productive manner when 
dealing with pluggable artifacts (and this covers both build and runtime 
concerns).  How does this affect Depot? Simply that gump project 
descriptors should be considered as an application specific descriptor - 
not a generic solution.

Cheers, Steve.

p.s.

Re. gump management - I'm currently playing around with the notion of 
one gump project covering all of avalon - a single project definition 
generated by magic that declares the external dependencies (about 8 
artifacts) and the Avalon-produced artifacts (about 60 or more).  The 
magic build will generate everything including plugins and metadata and 
publish this back to gump.

SJM

-- 

|---------------------------------------|
| Magic by Merlin                       |
| Production by Avalon                  |
|                                       |
| http://avalon.apache.org              |
|---------------------------------------|

Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Stephen McConnell wrote:

> Nicola Ken Barozzi wrote:
...
>> Gump metadata != Gump being setup.
> 
> Gump meta-data is insufficient.

It sure is. But it can be enhanced without having Gump barf on extra tags.

> In order to create a functionally sufficient expression of path 
> information you would need 6 separate gump project descriptors per project:
> 
>    build
>    test
>    runtime-api
>    runtime-spi
>    runtime-impl
>    runtime-composite

Gump uses the word "project" in an improper way, as it's more about a 
project descriptor.

You can do the above in Gump by creating avalon, avalon-test, 
avalon-api, etc... If you look at the descriptors this is for example 
what Ant and many other projects do.

> I.e. gump is very focused on the pure compile scenarios and does not 
> deal with the realities of test and runtime environments that load 
> plugins dynamically.

You cannot create fixed metadata for dynamically loaded plugins 
(components), unless you decide to declare them, and the above is 
sufficient.

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:

> 
> Niclas Hedhman wrote:
> 
>> On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:
> 
> ...
> 
>>>> Gump?? Sorry, how on earth did you manage to get a "Continuous
>>>> Integration System" to be part of a 'Jar Hell' solution?
>>>
>>>
>>> The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
>>> investigating in this too.
>>
>>
>> How are you going to rely on Gump for 3rd party projects, who have no 
>> interest in having their own Gump setup, but for sure want to harness 
>> the power we are all striving for.
> 
> 
> Gump metadata != Gump being setup.

Gump meta-data is insufficient.

In order to create a functionally sufficient expression of path 
information you would need 6 separate gump project descriptors per project:

    build
    test
    runtime-api
    runtime-spi
    runtime-impl
    runtime-composite

I.e. gump is very focused on the pure compile scenarios and does not 
deal with the realities of test and runtime environments that load 
plugins dynamically.

Cheers, Steve.

-- 

|---------------------------------------|
| Magic by Merlin                       |
| Production by Avalon                  |
|                                       |
| http://avalon.apache.org              |
|---------------------------------------|

Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
> On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:
...
>>>Gump?? Sorry, how on earth did you manage to get a "Continuous
>>>Integration System" to be part of a 'Jar Hell' solution?
>>
>>The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
>>investigating in this too.
> 
> How are you going to rely on Gump for 3rd party projects, which have no interest 
> in having their own Gump setup, but certainly want to harness the power we are 
> all striving for?

Gump metadata != Gump being setup.

> ATM, I only know of three build systems (Ant for this discussion is more of a 
> build toolkit, than a complete system, so I leave that out), namely Maven, 
> Gump and 'our pet' Magic.

Keep in mind that you are talking to three developers that have worked 
on Centipede even before Maven had the concept of plugins :-)

We did our "magic" way before, and Depot is in fact a spinoff.

> All of these solve the dependency pattern in their own way. 
> Magic solves all _our_ concerns, i.e. chained dependencies, classloader 
> establishment and the standard stuff.
> Gump solves chained dependencies, but currently doesn't bother about 
> classloader concerns.
> Maven handles neither chained dependencies nor classloader concerns.
> 
> Stephen is currently trying to work out how to teach Gump the classloader 
> tricks, and I haven't followed that very closely.

Gump is not written in Java anymore, so you're out of luck on this one. ;-)

Over at Krysalis we had started Viprom, which was an abstraction over the 
object model. Dunno what to do now though.

>>>We have chained dependencies in place. It works well, but our down side
>>>is that only Avalon tools generate and understand the necessary meta
>>>information required to support this feature.
>>
>>That's why using Gump metadata would bring projects closer.
> 
> Maybe you are looking at this from the wrong end. If Depot could solidly 
> define what complex projects (such as Avalon) require, in form of meta 
> information, then one should teach Gump to use it.

Meta information, that's the point. Maven has its own object model, Gump 
has a merge.xml DOM that we can use as an object model... what should we 
use?

>>The only real issue I see is the catch22 problem you have outlined about
>>Avalon using Incubator code and viceversa.
>>Let me disagree with it though. It's ok that an Apache project does not
>>rely on incubating projects, but if some of the developers are part of
>>this incubating project, does it still make sense?
> 
> Probably not. I could imagine that there is even a few more phases involved;
>  *  Phase I: Avalon Repository is copied across, but Avalon maintain a 
> parallel codebase, and changes are merged from one to the other.
>  *  Phase II: Avalon Repository is removed from the Avalon codebase.
>  *  Phase III: Avalon Repository has its package names and so forth changed to 
> suit the Depot project.
>  *  Phase IV: Bits and pieces are broken out into other parts of Depot, while 
> maintaining enough compatibility with Avalon Merlin.

From our POV it's just simpler, like this:

* Repository is moved under Depot with package names changed.
   (It's kept in parallel in Avalon for as long as Avalon wants; it's
    not a Depot concern)
* Merge of the codebases

>>Would this ease concerns?
> 
> Perhaps. To be totally honest, few people in Avalon care much about what 
> Stephen and I decide about the codebase, as long as compatibility remains. 
> So, I'll discuss it with Stephen and see how we can tackle this.

'k

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:

> I don't agree here, Nick, classloading is part of artifact handling,
> albeit in the JVM.
> It can and IMHO should live as a Depot subproject.

Thanks for the thumbs up... I was getting a bit depressed :o)

> > Gump?? Sorry, how on earth did you manage to get a "Continuous
> > Integration System" to be part of a 'Jar Hell' solution?
>
> The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
> investigating in this too.

How are you going to rely on Gump for 3rd party projects, which have no interest 
in having their own Gump setup, but certainly want to harness the power we are 
all striving for?

ATM, I only know of three build systems (Ant for this discussion is more of a 
build toolkit, than a complete system, so I leave that out), namely Maven, 
Gump and 'our pet' Magic.
All of these solve the dependency pattern in their own way. 
Magic solves all _our_ concerns, i.e. chained dependencies, classloader 
establishment and the standard stuff.
Gump solves chained dependencies, but currently doesn't bother about 
classloader concerns.
Maven handles neither chained dependencies nor classloader concerns.

Stephen is currently trying to work out how to teach Gump the classloader 
tricks, and I haven't followed that very closely.

> > We have chained dependencies in place. It works well, but our down side
> > is that only Avalon tools generate and understand the necessary meta
> > information required to support this feature.
>
> That's why using Gump metadata would bring projects closer.

Maybe you are looking at this from the wrong end. If Depot could solidly 
define what complex projects (such as Avalon) require, in the form of meta 
information, then one should teach Gump to use it.

> The only real issue I see is the catch22 problem you have outlined about
> Avalon using Incubator code and viceversa.
> Let me disagree with it though. It's ok that an Apache project does not
> rely on incubating projects, but if some of the developers are part of
> this incubating project, does it still make sense?

Probably not. I could imagine that there are even a few more phases involved:
 *  Phase I: Avalon Repository is copied across, but Avalon maintains a 
parallel codebase, and changes are merged from one to the other.
 *  Phase II: Avalon Repository is removed from the Avalon codebase.
 *  Phase III: Avalon Repository has its package names and so forth changed to 
suit the Depot project.
 *  Phase IV: Bits and pieces are broken out into other parts of Depot, while 
maintaining enough compatibility with Avalon Merlin.

> Would this ease concerns?

Perhaps. To be totally honest, few people in Avalon care much about what 
Stephen and I decide about the codebase, as long as compatibility remains. 
So, I'll discuss it with Stephen and see how we can tackle this.

Cheers
Niclas

-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: classloader was: Moving forward or letting go

Posted by Nick Chalko <ni...@chalko.com>.
Nicola Ken Barozzi wrote:

>
> Niclas Hedhman wrote:
>
>> On Monday 21 June 2004 13:04, Nick Chalko wrote:
>>
> ...
>
>>> Classloading is a real problem in java, and an important one to tackle
>>> but I prefer to keep the scope of Depot limited.  Other project's like
>>> Avalon can tackle the classloaders.  Perhaps we can take over the
>>> version/download/security  stuff.
>>
>
> I don't agree here, Nick, classloading is part of artifact handling, 
> albeit in the JVM.
>
> It can and IMHO should live as a Depot subproject.

I withdraw my -1.
Classloading as a separate project does make sense for Depot.  I should 
have used a -0 at worst.

R,
Nick


Re: classloader was: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
> On Monday 21 June 2004 13:04, Nick Chalko wrote:
> 
...
>>Classloading is a real problem in java, and an important one to tackle
>>but I prefer to keep the scope of Depot limited.  Other project's like
>>Avalon can tackle the classloaders.  Perhaps we can take over the
>>version/download/security  stuff.

I don't agree here, Nick, classloading is part of artifact handling, 
albeit in the JVM.

It can and IMHO should live as a Depot subproject.

> The problem comes in when you introduce chained dependency. How do you signal 
> that such a thing exist to the 'user' ?
...
>>The issue chained dependencies is important, and I think gump can be of
>>assistance.  However gump only reflects the current state and we need
>>access to the dependencies for other versions as well.
> 
> Gump?? Sorry, how on earth did you manage to get a "Continuous Integration 
> System" to be part of a 'Jar Hell' solution?

The Gump Metadata is a rich source of dependencies. Stephen AFAIK is 
investigating this too.

> We have chained dependencies in place. It works well, but our down side is 
> that only Avalon tools generate and understand the necessary meta information 
> required to support this feature.

That's why using Gump metadata would bring projects closer.

...
>>What do you see as the common ground for us to participate on ?
> 
> ATM, the biggest problem is that we:
>  *  Know too little about each other's concerns and viewpoints.
>  *  Don't understand each other's codebases.
>  *  Disagree on the total scope of Depot.
> 
> What _I_ really would like to do is move Avalon Repository to Depot as a 
> sub-project, but there are some 'community problems' with that, i.e. Depot is 
> in Incubator, and Avalon has said NO to depending on Incubator projects.
> Anyway, once Repository was in Depot, one could take out the bits and pieces 
> that exist elsewhere in the Depot codebase.

I have read the Avalon Repository site and it's very much in line with 
Depot.

The only real issue I see is the catch-22 problem you have outlined about 
Avalon using Incubator code and vice versa.

Let me disagree with it though. It's ok that an Apache project does not 
rely on incubating projects, but if some of the developers are part of 
this incubating project, does it still make sense?

I mean, imagine that we move the Avalon repository code into Depot and start 
merging. I don't see it as a problem for Avalon, as its development is still 
happening with Avalon people, as it was before, and Avalon can still decide 
to fork back in case it wants to.

Would this ease concerns?

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: classloader was: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 13:04, Nick Chalko wrote:

> For me the target use case for depot has always been managing artifacts
> needed to build.  So class loaders beyond setting a <path> resource in
> ant has never been a needed task.

Hmmm.... But it has been solved over and over, and to consolidate everyone's 
effort requires that something 'extra' is brought to the table. No?

> Handling a chain of dependencies is something that we would like to do
> but it has never been a pressing concern.  For the most part I scratch
> what itches,  and jars for an ant build itches me all the time, so that
> is where I scratch.

Ok, that is a fair point. But one can still set out a "vision", gather some 
support around such a vision, and some people may join in and 
help out. Scratching the itch is just what you do today :o)

> I understand some of what avalon is doing with downloading needed jars
> for an application server.

Kind of correct. The benefit is not for Avalon Merlin itself, but for 
our users. When they include a JarABC resource, it is very nice that they 
don't need to worry about the fact that JarDEF, JarGHI and half a dozen others 
also need to be downloaded as a result of depending on JarABC.

>     * Version,
>           o Marking
>           o Comparing
>           o Compatibility,

We are drawing closer to the conclusion that the version should be a unique 
number, and are basically eyeing the SVN revision itself, as that gives us the 
side effect of knowing exactly how to rebuild the artifact in question.

>     * Downloading.
>           o Maintaining a local cache of jars
>           o Updating a local cache of jars
>           o getting the "best" jar available.
>           o mirrors

"best" probably means the 'Version Constraint' that I saw on the web site. I 
still have some mixed feelings about this, not by concept but the 
design/impl.

>     * Security
>           o verify md5 signatures
>           o verify other signatures.

MD5 is not security, only a download checksum.
Proper signature handling, especially now that the ASF is getting a CA box up 
and running, is definitely something good.... but OTOH it doesn't exist yet, and 
we'll probably have that up and running in Avalon Repository too, before 
Depot has something useful in place (am I too negative? Sorry, in that case.)

> Classloading is a real problem in java, and an important one to tackle
> but I prefer to keep the scope of Depot limited.  Other project's like
> Avalon can tackle the classloaders.  Perhaps we can take over the
> version/download/security  stuff.

The problem comes in when you introduce chained dependencies. How do you signal 
that such a thing exists to the 'user'?

> In a perfect world, what would the depot API as used in your class
> loader look like?

Something like:
Artifact artifact = Artifact.locate( "jar:avalon:avalon-framework", version );
ClassLoader cl = artifact.getClassLoader();

'version' above is some form of version descriptor. This part requires some 
serious thinking.
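
Fleshing that pseudo-API out a little -- purely as an illustration of the 
shape, with invented names rather than any existing Depot or Avalon 
interface -- it might look something like:

// Hypothetical sketch only; these types do not exist in Depot or Avalon.
public interface Artifact {

    /** The opaque artifact identifier, e.g. "jar:avalon:avalon-framework". */
    String getId();

    /** The version descriptor this artifact was resolved against. */
    VersionDescriptor getVersion();

    /**
     * A classloader holding this artifact plus its chained dependencies,
     * partitioned (api/spi/impl) according to repository-side meta-data.
     */
    ClassLoader getClassLoader();
}

// Package-private companion so the sketch compiles as a single file.
interface VersionDescriptor {

    /** The version as an opaque string; no ordering semantics implied. */
    String asString();
}

The interesting design question is exactly what goes into the version 
descriptor, which is the "serious thinking" referred to above.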

> The issue of chained dependencies is important, and I think gump can be of
> assistance.  However gump only reflects the current state and we need
> access to the dependencies for other versions as well.

Gump?? Sorry, how on earth did you manage to get a "Continuous Integration 
System" to be part of a 'Jar Hell' solution?
We have chained dependencies in place. It works well, but our down side is 
that only Avalon tools generate and understand the necessary meta information 
required to support this feature.

> So work on the Meta info is a place we can share efforts.  But it is a
> goal for Depot to work, at least in a basic/default way WITHOUT any
> separate meta info.

Ok, that is not a problem.

> What do you see as the common ground for us to participate on ?

ATM, the biggest problem is that we:
 *  Know too little about each other's concerns and viewpoints.
 *  Don't understand each other's codebases.
 *  Disagree on the total scope of Depot.

What _I_ really would like to do is move Avalon Repository to Depot as a 
sub-project, but there are some 'community problems' with that, i.e. Depot is 
in Incubator, and Avalon has said NO to depending on Incubator projects.
Anyway, once Repository was in Depot, one could take out the bits and pieces 
that exist elsewhere in the Depot codebase.


Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: classloader was: Moving forward or letting go

Posted by Nick Chalko <ni...@chalko.com>.
Niclas Hedhman wrote:

>
>>I am -1 for directly handling classloading.
>
>Then please provide an answer to the question:
>
>How do you intend to provide generic meta information to the Depot client, and 
>how is that meta information generated and handed to Depot prior to 
>publishing the artifacts?
>
>And secondly: who should be responsible for defining the classloader concern, 
>expressed in generic meta information?
>
>Since I suspect the answers to the above are "Not Depot's concern" and "Not 
>Depot", I am sorry to say that Depot's future is very bleak, and I doubt 
>it will receive any support from Avalon.
>
>I hope the "not Depot's concern" stems from a lack of understanding of the 
>problem at hand, and that you will gain that insight sooner or later.
>


For me the target use case for depot has always been managing the artifacts 
needed to build.  So classloaders, beyond setting a <path> resource in 
ant, have never been a needed task.
Handling a chain of dependencies is something that we would like to do, 
but it has never been a pressing concern.  For the most part I scratch 
what itches, and jars for an ant build itch me all the time, so that 
is where I scratch.
I understand some of what avalon is doing with downloading needed jars 
for an application server.

Here are the pieces of code I think can be useful to Avalon. 

    * Version,
          o Marking
          o Comparing
          o Compatibility,
    * Downloading.
          o Maintaining a local cache of jars
          o Updating a local cache of jars
          o getting the "best" jar available.
          o mirrors
    * Security
          o verify md5 signatures
          o verify other signatures.


Having outside users of our API would be great.  Our API has gotten 
really FAT and needs cleaning.  If you are interested in investigating 
using our API, I will be happy to help.

For the future of Depot, I think it is important to try to produce the 
smallest useful set of tools possible,  not the biggest. 
Classloading is a real problem in java, and an important one to tackle, 
but I prefer to keep the scope of Depot limited.  Other projects like 
Avalon can tackle the classloaders.  Perhaps we can take over the 
version/download/security stuff.

In a perfect world, what would the depot API as used in your class 
loader look like?

 File theJar = depot.getResource("log4j", "1.2", booleanGetDependedProjects);
?


The issue of chained dependencies is important, and I think gump can be of 
assistance.  However gump only reflects the current state, and we need 
access to the dependencies for other versions as well. 

So work on the Meta info is a place we can share efforts.  But it is a 
goal for Depot to work, at least in a basic/default way WITHOUT any 
separate meta info.

What do you see as the common ground for us to participate on ?

R,
Nick

  


Re: classloader was: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 05:27, Nick Chalko wrote:
> Class loading is not a goal.
> IMHO it is orthogonal, to depot.   Depot  gets/manages a repository of
> artifacts.
> For classloading,  once you have the right jar in a known place on the
> local file system,  then something else can handle classloading.
>
> I am -1 for directly handling classloading.

Then please provide an answer to the question:

How do you intend to provide generic meta information to the Depot client, and 
how is that meta information generated and handed to Depot prior to 
publishing the artifacts?

And secondly: who should be responsible for defining the classloader concern, 
expressed in generic meta information?

Since I suspect the answers to the above are "Not Depot's concern" and "Not 
Depot", I am sorry to say that Depot's future is very bleak, and I doubt 
it will receive any support from Avalon.

I hope the "not Depot's concern" stems from a lack of understanding of the 
problem at hand, and that you will gain that insight sooner or later.

Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: Moving forward or letting go

Posted by "Markus M. May" <mm...@gmx.net>.
Hello,
+1 from me. It was not my intent to start a flame war.

R,

Markus

Nicola Ken Barozzi wrote:
> 
> Mark R. Diggory wrote:
> 
>> Lets not start a flame war, discussion here is how to get groups 
>> working together and find commonality in code and repository 
>> architecture etc, there are individuals who use and work on Maven 
>> present and in the discussion. Ultimately we seek standardization in 
>> repository structure and content so that the "Users" don't have to 
>> suffer because the "Developers" of these separate projects can't seem 
>> to get along with each other and work together. I represent someone 
>> whose fed up with the constant bickering between projects that should 
>> be working together to establish standards and consistency.
>>
>> As a Jakarta Commons developer and user, I do not need one system for 
>> building projects, what I do need is one repository for publishing 
>> content, not three.
> 
> 
> +1 all the way
> 
> I'd like to put this on our homepage as our manifesto. Objections?
> 


Re: Moving forward or letting go

Posted by "Mark R. Diggory" <md...@latte.harvard.edu>.
You might reword a sentence or two out of the first person, but feel 
free to reuse it. -Mark

Nicola Ken Barozzi wrote:

>
> Mark R. Diggory wrote:
>
>> Lets not start a flame war, discussion here is how to get groups 
>> working together and find commonality in code and repository 
>> architecture etc, there are individuals who use and work on Maven 
>> present and in the discussion. Ultimately we seek standardization in 
>> repository structure and content so that the "Users" don't have to 
>> suffer because the "Developers" of these separate projects can't seem 
>> to get along with each other and work together. I represent someone 
>> whose fed up with the constant bickering between projects that should 
>> be working together to establish standards and consistency.
>>
>> As a Jakarta Commons developer and user, I do not need one system for 
>> building projects, what I do need is one repository for publishing 
>> content, not three.
>
>
> +1 all the way
>
> I'd like to put this on our homepage as our manifesto. Objections?
>


Re: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Mark R. Diggory wrote:
> Lets not start a flame war, discussion here is how to get groups working 
> together and find commonality in code and repository architecture etc, 
> there are individuals who use and work on Maven present and in the 
> discussion. Ultimately we seek standardization in repository structure 
> and content so that the "Users" don't have to suffer because the 
> "Developers" of these separate projects can't seem to get along with 
> each other and work together. I represent someone whose fed up with the 
> constant bickering between projects that should be working together to 
> establish standards and consistency.
> 
> As a Jakarta Commons developer and user, I do not need one system for 
> building projects, what I do need is one repository for publishing 
> content, not three.

+1 all the way

I'd like to put this on our homepage as our manifesto. Objections?

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: Moving forward or letting go

Posted by "Mark R. Diggory" <md...@latte.harvard.edu>.
Let's not start a flame war; the discussion here is about how to get groups working 
together and find commonality in code, repository architecture, etc. There 
are individuals who use and work on Maven present and in the 
discussion. Ultimately we seek standardization in repository structure 
and content so that the "Users" don't have to suffer because the 
"Developers" of these separate projects can't seem to get along with 
each other and work together. I represent someone who's fed up with the 
constant bickering between projects that should be working together to 
establish standards and consistency.

As a Jakarta Commons developer and user, I do not need one system for 
building projects, what I do need is one repository for publishing 
content, not three.

-Mark Diggory

Markus M. May wrote:

>> Death to Maven!!!  :o)
>
> Hmm, wouldn't go this far, but I like ANT way more. Thats why I really 
> like the antlets (antworks.sf.net), this is just an easy extension to 
> Ant and it is growing also quite in the same direction of the POM 
> stuff :-)
>
>>
>> Cheers
>> Niclas
>
>
> R,
>
> Markus



Re: Moving forward or letting go

Posted by "Markus M. May" <mm...@gmx.net>.
Hello Niclas,
I did not understand a couple of your comments.

> Ok. Avalon Repository has a SPI model in place, but code is currently 
> required, since we cater to our own needs first :o)
Well, yeah, pretty much the same as we are doing.
> 
> Yes, you will need to define or use server-side meta-data. The immediate 
> effect after solving that is the Classloader (not classloading) 
> establishment, since the 'type' of classloader the dependency belongs to, 
> MUST be managed by the repository system, or you have missed a major goal.
I agree on the meta-data stored on the server-side, but I did not 
understand the 'type' of classloader and the major goal. Can you please 
enlighten me?
> 
> If Depot is only a tool for build systems itself, then you are also missing 
> the goal, i.e. providing a functionality to a handful of build systems, each 
> having their own solutions to this concern, is not a recipe for wide-spread 
> acceptance, and a chicken-egg problem.
> 
You mean, basically, that build systems like Maven need some plugins to 
load from a central repository. This is done through the antlets, and 
loading these was one of the goals for a next release of Depot, if I 
remember correctly?

> 
> And then ALL projects with Avalon has a single POM-like XML file containing 
> the dependencies (incl, classloading concerns), the definitions, versioning 
> and some other smaller stuff.
> 
> Death to Maven!!!  :o)
Hmm, I wouldn't go this far, but I like the Ant way more. That's why I really 
like the antlets (antworks.sf.net); this is just an easy extension to 
Ant and it is also growing quite in the same direction as the POM stuff :-)

> 
> Cheers
> Niclas

R,

Markus

Re: classloader was: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> >Yes, you will need to define or use server-side meta-data. The immediate
> >effect after solving that is the Classloader (not classloading)
> >establishment, since the 'type' of classloader the dependency belongs to,
> >MUST be managed by the repository system, or you have missed a major goal.
> Class loading is not a goal.
> IMHO it is orthogonal, to depot.   Depot  gets/manages a repository of
> artifacts.
> For classloading,  once you have the right jar in a known place on the
> local file system,  then something else can handle classloading.
>
> I am -1 for directly handling classloading.

Depot Update has nothing to do with Class Loading, but is there any reason
we couldn't allow a separate Depot project to do that (using Depot Update or
some core called Depot Download)?

I'm not advocating that Depot try to be all things to all people, but I
think that CL is an extension that folks will want.  (If/when I get version
constraints off the ground, then maybe I could attempt to persuade this
project to work with it, but that is a future conversation.)

regards,

Adam


classloader was: Moving forward or letting go

Posted by Nick Chalko <ni...@chalko.com>.
Niclas Hedhman wrote:

>On Sunday 20 June 2004 07:16, Markus M. May wrote:
>
>>Depot offers a little more. The current design covers Maven repositories
>>as well as flat file repositories (for the local repository, e.g.).
>
>Ok. Avalon Repository has an SPI model in place, but code is currently 
>required, since we cater to our own needs first :o)
>
>>But Depot has nothing to do with the classloading itself. It is, as
>>already stated, only for the build dependencies. The chained
>>dependencies are resolved via the dependencies of the dependencies. The
>>design therefore is not yet clear, because the needed meta-data for this
>>is not saved in the repository.
>
>Yes, you will need to define or use server-side meta-data. The immediate 
>effect after solving that is the Classloader (not classloading) 
>establishment, since the 'type' of classloader the dependency belongs to, 
>MUST be managed by the repository system, or you have missed a major goal.
>
Class loading is not a goal. 
IMHO it is orthogonal to depot.  Depot gets/manages a repository of 
artifacts.
For classloading, once you have the right jar in a known place on the 
local file system, then something else can handle classloading.

I am -1 for directly handling classloading.

Handling dependencies is a goal.  Adam describes this as version 
constraints http://incubator.apache.org/depot/version/constraints.html

R,
Nick


Re: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Sunday 20 June 2004 07:16, Markus M. May wrote:

> Depot offers a little more. The current design covers Maven repositories
> as well as flat file repositories (for the local repository e.g.). 

Ok. Avalon Repository has an SPI model in place, but code is currently 
required, since we cater to our own needs first :o)

> But Depot has nothing to do with the classloading itself. It is, as
> already stated, only for the build dependencies. The chained
> dependencies are resolved via the dependencies of the dependencies. The
> design therefore is not yet clear, because the needed meta-data for this
> is not saved in the repository.

Yes, you will need to define or use server-side meta-data. The immediate 
effect after solving that is the Classloader (not classloading) 
establishment, since the 'type' of classloader the dependency belongs to, 
MUST be managed by the repository system, or you have missed a major goal.

If Depot is only a tool for build systems themselves, then you are also missing 
the goal; i.e. providing functionality to a handful of build systems, each 
having their own solutions to this concern, is not a recipe for widespread 
acceptance, and is a chicken-and-egg problem.

> > We at Avalon also have created our own build system, based entirely on
> > Ant, doing just about the same things that Maven is famous for, but at
> > 10x the speed (3min instead of 30-40min on my system for the entire
> > Avalon build). We call this product "Magic", and it too has 'repository
> > features', but we have not used any of the parts in Avalon Repository,
> > largely because Magic builds Repository, and we really don't want that
> > kind of cyclic dependency.
>
> Nice :-) The Depot build system is based on antlets, which is a pretty
> cool build system basically driven by Nick Chalko. The antlets offer
> some "reuse" components for the build.

We have opted NOT to use building blocks, but to work on the same basis that 
Maven started out from, i.e. "follow a pattern and all is catered for."
A typical build.xml looks like

<project name="avalon-activation-impl" default="dist" basedir="."
    xmlns:x="antlib:org.apache.avalon.tools">
  <property file="build.properties"/>
  <import file="${project.home}/build/standard.xml"/>

  <target name="init" depends="standard.init">
    <x:filter key="avalon-logging-logkit-impl" feature="uri"
       token="AVALON-LOGGING-LOGKIT-SPEC"/>
  </target>

  <target name="package" depends="standard.package">
    <x:artifact/>
  </target>

</project>

And then ALL projects within Avalon have a single POM-like XML file containing 
the dependencies (incl. classloading concerns), the definitions, versioning 
and some other smaller stuff.

Death to Maven!!!  :o)

Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


version was:(Avalon & Depot)

Posted by Nick Chalko <ni...@chalko.com>.
Niclas Hedhman wrote:

>
>Short notice on versioning;
>We have more or less concluded that 'version strings' now in use are 
>malicious. They mix concerns too much and should probably not be used the 
>way they are (indicating build number, compatibility, development stage et 
>cetera). They should be seen as opaque strings, and a separate system should 
>be made to express the orthogonal concerns of such a version.
>
>  
>

That is why we have a separate project just for version concerns.

See
http://incubator.apache.org/depot/version/




Re: (Avalon & Depot) Re: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 01:51, Adam R. B. Jack wrote:

> I'd be interested in collaborating to keep parts of Depot, and integrate
> with Avalon's code. I think that bringing fresh eyes into the code (and
> onto the problem) would force us (Depot) to focus and clean-up the
> docs/designs (on Wiki). I think a joint goal -- with joint use cases --
> could really work this out to something practical. Yes, I'd be very
> interested in that.

Good. I'll try to set aside some time to be part of Depot moving forward...

Short notice on versioning;
We have more or less concluded that 'version strings' now in use are 
malicious. They mix concerns too much and should probably not be used the 
way they are (indicating build number, compatibility, development stage et 
cetera). They should be seen as opaque strings, and a separate system should 
be made to express the orthogonal concerns of such a version.
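
Just to illustrate that separation (the field names below are pure
assumption, not a design anyone has agreed on): the raw string stays opaque,
and the concerns it usually mixes become separate, explicitly named facets:

// Illustration only: keep the version string opaque and express the
// concerns it usually mixes as separate fields.
public final class VersionFacets {
    public final String opaqueVersion;  // whatever the project prints; never parsed
    public final long   buildNumber;    // monotonically increasing build id
    public final String compatibility;  // e.g. an interface/ABI compatibility key
    public final String stage;          // e.g. "alpha", "beta", "final"

    public VersionFacets(String opaqueVersion, long buildNumber,
                         String compatibility, String stage) {
        this.opaqueVersion = opaqueVersion;
        this.buildNumber   = buildNumber;
        this.compatibility = compatibility;
        this.stage         = stage;
    }
}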

More about this later.

> BTW: So Magic can use Ant tasks, is that it? I've read about it (in mails)
> but I hadn't registered that. Interesting.

Magic is really a set of Tasks, and a tiny "boiler-template" for invoking it 
all. And it is all done using the standard mechanisms available in Ant.
That said, there is a Model, which defines all the projects within the 'larger 
project', and standard targets (similar to Maven's goals) which are 
available.
And since the underlying mechanism is 100% Ant, any modifications to the 
build.xml (the boiler-template mentioned above) will be carried out in the 
expected fashion.
Next on _my_ agenda for Magic is to allow Java logic (in script form) to be 
invoked as part of the build process, for the cases where doing it in Ant or 
Ant+JavaScript gets messy.
Another issue I think is on our agenda is that Avalon Repository is not used 
in Magic, since Magic builds Repository; that would be nice to solve as 
well.


Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: Moving forward or letting go

Posted by "Markus M. May" <mm...@gmx.net>.
Hello,


Niclas Hedhman wrote:
> On Friday 18 June 2004 02:49, Adam R. B. Jack wrote:
> 
>>So -- Avalon repository. How do we see about talking to it? What
>>protocols/APIs? Depot tends to use HTTP (to a file system based repository,
>>ala Maven's Ibiblio, ala ASFRepo spec.) What more is wanted?
> 
> 
> Ok, your notes have been registered.
> Let us start the discussion around Avalon Repository, and see if something can 
> be learnt from it (over at Avalon we are pretty pleased with it).
> 
> What is it NOT;
> * Server application.
> * A tool to provide central repository services.
> * A generic toolkit for use in all types of cases (unlike the intent of 
> Depot).
> * Running on top of Merlin.
> * As generic as it possibly could be.
Okay, here we have IMHO some things in common. Depot is not a server 
application and provides only client support for the repository (meaning 
basically fetching the dependencies). Well, we are not running on Merlin 
obviously :-)

> 
> So, It is a client to http-transport repositories, maven styled, but allowing 
> for extra meta info (in a separate file) to be able to handle;
>  * Chained dependencies (i.e. dependencies of dependencies)
>  * Establishing the classloader hierarchy of the downloaded Jar resources.
Depot offers a little more. The current design covers Maven repositories 
as well as flat file repositories (for the local repository e.g.). There 
were a couple of discussions to provide a kind of configuration to also 
support other repository types. This is currently possible through the 
implementation of a couple of abstract classes.
But Depot has nothing to do with the classloading itself. As already 
stated, right now it is only for the build dependencies. The chained 
dependencies are resolved via the dependencies of the dependencies. The 
design for this is not yet clear, because the needed meta-data is not 
saved in the repository.
> 
> Repository drives Merlin, meaning Merlin uses Repository to get hold of 
> resources (including itself!) hosted at repositories. The Avalon build tools, 
> includes Maven/Ant/Magic plugins/task that generate the resource meta info.
> The .meta file for Merlin itself is attached below;
> 
> All-in-all, Avalon Repository is both very capable and complete, but it is not 
> 'toolkit-like'. OTOH, 8 months ago it was completely embedded inside Merlin, 
> without any traces as a standalone package, so the first step of refactoring 
> has been made.
> 
> We at Avalon also have created our own build system, based entirely on Ant, 
> doing just about the same things that Maven is famous for, but at 10x the 
> speed (3min instead of 30-40min on my system for the entire Avalon build).
> We call this product "Magic", and it too has 'repository features', but we 
> have not used any of the parts in Avalon Repository, largely because Magic 
> builds Repository, and we really don't want that kind of cyclic dependency.
Nice :-) The Depot build system is based on antlets, which is a pretty 
cool build system basically driven by Nick Chalko. The antlets offer 
some "reuse" components for the build.
> 
> I hope you can digest this info a bit. The important Avalon crowd, Aaron, 
> Stephen, Alex and myself, have expressed a wish to move Repository 
> functionality into Depot, and get Depot out of Incubator and get proper 
> releases out. Personally, I think Depot's importance is big enough to validate 
> a TLP.
fine :-)
> 
> Cheers
> Niclas
> 
> #
> # Meta classifier.
> #
> meta.domain = avalon
> meta.version = 1.1
> 
> #
> # Artifact descriptor.
> #
> avalon.artifact.group = avalon/merlin
> avalon.artifact.name = avalon-merlin-impl
> avalon.artifact.version = 3.3.0
> avalon.artifact.signature = 20040617.091454
> 
> #
> # Factory classname.
> #
> avalon.artifact.factory = org.apache.avalon.merlin.impl.DefaultFactory
> 
> #
> # API dependencies.
> #
> avalon.artifact.dependency.api.0 = 
> artifact:jar:avalon/framework/avalon-framework-api#4.2.1
> avalon.artifact.dependency.api.1 = 
> artifact:jar:avalon/util/avalon-util-lifecycle#1.1.1
> 
> #
> # SPI dependencies.
> #
> avalon.artifact.dependency.spi.0 = 
> artifact:jar:avalon/util/avalon-util-extension-api#1.2.0
> avalon.artifact.dependency.spi.1 = 
> artifact:jar:avalon/merlin/avalon-merlin-api#3.3.0
> avalon.artifact.dependency.spi.2 = 
> artifact:jar:avalon/composition/avalon-composition-api#2.0.0
> avalon.artifact.dependency.spi.3 = 
> artifact:jar:avalon/repository/avalon-repository-api#2.0.0
> avalon.artifact.dependency.spi.4 = 
> artifact:jar:avalon/logging/avalon-logging-api#1.0.0
> avalon.artifact.dependency.spi.5 = 
> artifact:jar:avalon/meta/avalon-meta-api#1.4.0
> avalon.artifact.dependency.spi.6 = 
> artifact:jar:avalon/meta/avalon-meta-spi#1.4.0
> avalon.artifact.dependency.spi.7 = 
> artifact:jar:avalon/repository/avalon-repository-spi#2.0.0
> avalon.artifact.dependency.spi.8 = 
> artifact:jar:avalon/logging/avalon-logging-spi#1.0.0
> avalon.artifact.dependency.spi.9 = 
> artifact:jar:avalon/composition/avalon-composition-spi#2.0.0
> 
> #
> # Implementation dependencies.
> #
> avalon.artifact.dependency.0 = 
> artifact:jar:avalon/composition/avalon-composition-impl#2.0.1
> avalon.artifact.dependency.1 = 
> artifact:jar:avalon/repository/avalon-repository-main#2.0.0
> avalon.artifact.dependency.2 = 
> artifact:jar:avalon/repository/avalon-repository-util#2.0.0
> avalon.artifact.dependency.3 = 
> artifact:jar:avalon/util/avalon-util-exception#1.0.0
> avalon.artifact.dependency.4 = artifact:jar:avalon/util/avalon-util-env#1.1.1
> avalon.artifact.dependency.5 = artifact:jar:avalon/util/avalon-util-i18n#1.0.0
> avalon.artifact.dependency.6 = 
> artifact:jar:avalon/util/avalon-util-criteria#1.1.0
> avalon.artifact.dependency.7 = 
> artifact:jar:avalon/util/avalon-util-defaults#1.2.1
> avalon.artifact.dependency.8 = artifact:jar:avalon/meta/avalon-meta-impl#1.4.0
> avalon.artifact.dependency.9 = 
> artifact:jar:avalon/util/avalon-util-configuration#1.0.0
> avalon.artifact.dependency.10 = 
> artifact:jar:avalon/framework/avalon-framework-impl#4.2.1
> avalon.artifact.dependency.11 = 
> artifact:jar:avalon/framework/avalon-framework-legacy#4.2.1
> avalon.artifact.dependency.12 = artifact:jar:avalon/logkit/avalon-logkit#2.0.0
> avalon.artifact.dependency.13 = artifact:jar:log4j/log4j#1.2.8
> avalon.artifact.dependency.14 = artifact:jar:servletapi/servletapi#2.3
> avalon.artifact.dependency.15 = artifact:jar:avalon/tools/mailapi#1.3.1
> avalon.artifact.dependency.16 = artifact:jar:avalon/tools/jms#1.1
> avalon.artifact.dependency.17 = 
> artifact:jar:avalon/util/avalon-util-extension-impl#1.2.0
> avalon.artifact.dependency.18 = 
> artifact:jar:avalon/logging/avalon-logging-impl#1.0.0
> 
> #
> # EOF.
> #
> 


(Avalon & Depot) Re: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
Niclas wrote:

> Let us start the discussion around Avalon Repository, and see if something can
> be learnt from it (over at Avalon we are pretty pleased with it).
    [...]
> I hope you can digest this info a bit. The important Avalon crowd, Aaron,
> Stephen, Alex and myself, have expressed a wish to move Repository
> functionality into Depot, and get Depot out of Incubator and get proper
> releases out. Personally, I think Depot's importance is big enough to validate
> a TLP.

I've not been responding 'cos I've been trying to absorb & evaluate. I am
finding this thread compelling. I like a lot of what I read here.

We called Depot -- not Ruper/Greebo (original source code) -- 'cos we wanted
to be open to accept outside influences (primarily Avalon's, also Wagon's,
whatever), and reading this I'm glad we did. We need input/help/drive like
this.

My thoughts are these... Ruper was based upon the concept that we "query a
repository for latest/best fit" and download that. Not download version X
from http://Y (one can do that with a simple Ant <get> (HTTP GET)) but pick
the latest 'fit', and download that. Basically, that is my passion w/ a
download tool -- don't let the developer stagnate on older jars if there is
a compatible better one. For details see:
http://incubator.apache.org/depot/update/
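
As a rough illustration of that 'latest/best fit' idea (assumed, simplified
code, not Depot's actual implementation -- real version handling is much
messier): list what the repository has, keep what falls inside the requested
range, take the newest.

import java.util.List;
import java.util.Optional;

// Illustration only: pick the newest version inside a requested range.
// Handles plain numeric dotted versions (e.g. "1.2.8") and nothing else.
public class BestFitSketch {

    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    // Newest available version v with min <= v <= max, if any.
    static Optional<String> bestFit(List<String> available, String min, String max) {
        return available.stream()
                .filter(v -> compare(v, min) >= 0 && compare(v, max) <= 0)
                .max(BestFitSketch::compare);
    }
}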

That said, I think we've got too much code for a simple problem & I think
that is hindering us. [My first passion being version,
http://incubator.apache.org/depot/version/, it brings a bunch of baggage
that may or may not help Depot Update.] I think we need to maintain the goal
we have, but also support the simple, straight-forward 'download this'. There
is enough (simple but needed) work there with MD5 checks, and maybe
click-through acceptance of licenses.
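
The MD5 part at least is small and well understood; a bare sketch using just
the JDK (illustrative only, not Depot's actual code):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Illustration only: hash a downloaded artifact and compare with the
// hex digest published next to it (the usual *.md5 file).
public class Md5CheckSketch {

    static String md5Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    static boolean verify(Path file, String expectedHex) throws Exception {
        return md5Hex(file).equalsIgnoreCase(expectedHex.trim());
    }
}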

I'd be interested in collaborating to keep parts of Depot, and integrate
with Avalon's code. I think that bringing fresh eyes into the code (and onto
the problem) would force us (Depot) to focus and clean-up the docs/designs
(on Wiki). I think a joint goal -- with joint use cases -- could really work
this out to something practical. Yes, I'd be very interested in that.

BTW: So Magic can use Ant tasks, is that it? I've read about it (in mails)
but I hadn't registered that. Interesting.

regards,

Adam


Re: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Friday 18 June 2004 02:49, Adam R. B. Jack wrote:
> So -- Avalon repository. How do we see about talking to it? What
> protocols/APIs? Depot tends to use HTTP (to a file system based repository,
> ala Maven's Ibiblio, ala ASFRepo spec.) What more is wanted?

Ok, your notes have been registered.
Let us start the discussion around Avalon Repository, and see if something can 
be learnt from it (over at Avalon we are pretty pleased with it).

What is it NOT;
* Server application.
* A tool to provide central repository services.
* A generic toolkit for use in all types of cases (unlike the intent of 
Depot).
* Running on top of Merlin.
* As generic as it possibly could be.

So, It is a client to http-transport repositories, maven styled, but allowing 
for extra meta info (in a separate file) to be able to handle;
 * Chained dependencies (i.e. dependencies of dependencies)
 * Establishing the classloader hierarchy of the downloaded Jar resources.

Repository drives Merlin, meaning Merlin uses Repository to get hold of 
resources (including itself!) hosted at repositories. The Avalon build tools, 
includes Maven/Ant/Magic plugins/task that generate the resource meta info.
The .meta file for Merlin itself is attached below;

All-in-all, Avalon Repository is both very capable and complete, but it is not 
'toolkit-like'. OTOH, 8 months ago it was completely embedded inside Merlin, 
without any traces as a standalone package, so the first step of refactoring 
has been made.

We at Avalon also have created our own build system, based entirely on Ant, 
doing just about the same things that Maven is famous for, but at 10x the 
speed (3min instead of 30-40min on my system for the entire Avalon build).
We call this product "Magic", and it too has 'repository features', but we 
have not used any of the parts in Avalon Repository, largely because Magic 
builds Repository, and we really don't want that kind of cyclic dependency.

I hope you can digest this info a bit. The important Avalon crowd, Aaron, 
Stephen, Alex and myself, have expressed a wish to move Repository 
functionality into Depot, and get Depot out of Incubator and get proper 
releases out. Personally, I think Depot's importance is big enough to validate 
a TLP.

Cheers
Niclas

#
# Meta classifier.
#
meta.domain = avalon
meta.version = 1.1

#
# Artifact descriptor.
#
avalon.artifact.group = avalon/merlin
avalon.artifact.name = avalon-merlin-impl
avalon.artifact.version = 3.3.0
avalon.artifact.signature = 20040617.091454

#
# Factory classname.
#
avalon.artifact.factory = org.apache.avalon.merlin.impl.DefaultFactory

#
# API dependencies.
#
avalon.artifact.dependency.api.0 = 
artifact:jar:avalon/framework/avalon-framework-api#4.2.1
avalon.artifact.dependency.api.1 = 
artifact:jar:avalon/util/avalon-util-lifecycle#1.1.1

#
# SPI dependencies.
#
avalon.artifact.dependency.spi.0 = 
artifact:jar:avalon/util/avalon-util-extension-api#1.2.0
avalon.artifact.dependency.spi.1 = 
artifact:jar:avalon/merlin/avalon-merlin-api#3.3.0
avalon.artifact.dependency.spi.2 = 
artifact:jar:avalon/composition/avalon-composition-api#2.0.0
avalon.artifact.dependency.spi.3 = 
artifact:jar:avalon/repository/avalon-repository-api#2.0.0
avalon.artifact.dependency.spi.4 = 
artifact:jar:avalon/logging/avalon-logging-api#1.0.0
avalon.artifact.dependency.spi.5 = 
artifact:jar:avalon/meta/avalon-meta-api#1.4.0
avalon.artifact.dependency.spi.6 = 
artifact:jar:avalon/meta/avalon-meta-spi#1.4.0
avalon.artifact.dependency.spi.7 = 
artifact:jar:avalon/repository/avalon-repository-spi#2.0.0
avalon.artifact.dependency.spi.8 = 
artifact:jar:avalon/logging/avalon-logging-spi#1.0.0
avalon.artifact.dependency.spi.9 = 
artifact:jar:avalon/composition/avalon-composition-spi#2.0.0

#
# Implementation dependencies.
#
avalon.artifact.dependency.0 = 
artifact:jar:avalon/composition/avalon-composition-impl#2.0.1
avalon.artifact.dependency.1 = 
artifact:jar:avalon/repository/avalon-repository-main#2.0.0
avalon.artifact.dependency.2 = 
artifact:jar:avalon/repository/avalon-repository-util#2.0.0
avalon.artifact.dependency.3 = 
artifact:jar:avalon/util/avalon-util-exception#1.0.0
avalon.artifact.dependency.4 = artifact:jar:avalon/util/avalon-util-env#1.1.1
avalon.artifact.dependency.5 = artifact:jar:avalon/util/avalon-util-i18n#1.0.0
avalon.artifact.dependency.6 = 
artifact:jar:avalon/util/avalon-util-criteria#1.1.0
avalon.artifact.dependency.7 = 
artifact:jar:avalon/util/avalon-util-defaults#1.2.1
avalon.artifact.dependency.8 = artifact:jar:avalon/meta/avalon-meta-impl#1.4.0
avalon.artifact.dependency.9 = 
artifact:jar:avalon/util/avalon-util-configuration#1.0.0
avalon.artifact.dependency.10 = 
artifact:jar:avalon/framework/avalon-framework-impl#4.2.1
avalon.artifact.dependency.11 = 
artifact:jar:avalon/framework/avalon-framework-legacy#4.2.1
avalon.artifact.dependency.12 = artifact:jar:avalon/logkit/avalon-logkit#2.0.0
avalon.artifact.dependency.13 = artifact:jar:log4j/log4j#1.2.8
avalon.artifact.dependency.14 = artifact:jar:servletapi/servletapi#2.3
avalon.artifact.dependency.15 = artifact:jar:avalon/tools/mailapi#1.3.1
avalon.artifact.dependency.16 = artifact:jar:avalon/tools/jms#1.1
avalon.artifact.dependency.17 = 
artifact:jar:avalon/util/avalon-util-extension-impl#1.2.0
avalon.artifact.dependency.18 = 
artifact:jar:avalon/logging/avalon-logging-impl#1.0.0

#
# EOF.
#
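
For what it's worth, the artifact references above decompose mechanically.
A small illustrative parser, assuming only the
"artifact:<type>:<group>/<name>#<version>" shape visible in the file -- this
is not code from Avalon Repository or Depot:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustration only: split "artifact:jar:avalon/merlin/avalon-merlin-impl#3.3.0"
// into type, group, name and version, mirroring the avalon.artifact.* fields.
public class ArtifactRefSketch {

    private static final Pattern FORMAT =
        Pattern.compile("artifact:([^:]+):(.+)/([^/#]+)#(.+)");

    public static void main(String[] args) {
        String uri = "artifact:jar:avalon/merlin/avalon-merlin-impl#3.3.0";
        Matcher m = FORMAT.matcher(uri);
        if (!m.matches()) {
            throw new IllegalArgumentException("not an artifact uri: " + uri);
        }
        System.out.println("type    = " + m.group(1));   // jar
        System.out.println("group   = " + m.group(2));   // avalon/merlin
        System.out.println("name    = " + m.group(3));   // avalon-merlin-impl
        System.out.println("version = " + m.group(4));   // 3.3.0
    }
}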

-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> > Also, I think this problem can only be broken with small/focused/standalone
> > tools -- like a browser separate from an HTTP server -- so folks don't get
> > into an all or nothing situation. Hence, I think Depot ought start as a
> > client, maybe forever.
> Well, it was always my main intent to have a small tool to help solve
> the dependency issues during the build process. Also, as far as I am
> concerned I would like to see Depot evolving into the Ant area.
> Ant is still somewhat of a standard in the build community and it was
> never really a topic to also solve dependencies other than build
> dependencies (meaning we are not resolving runtime dependencies, right?).

Yup, we need to keep it small and simple/focused. Nothing fancy. Nick's done
some good work with Ant, and we have some tasks and types. Maybe we need to
just try to get those out, for now.

> > So -- Avalon repository. How do we see about talking to it? What
> > protocols/APIs? Depot tends to use HTTP (to a file system based repository,
> > ala Maven's Ibiblio, ala ASFRepo spec.) What more is wanted?

> Sure, there are lots of other protocols which could be supported, but
> the one working is HTTP, and to solve lots of the dependency issues in
> Depot I believe we should restrict the usage to HTTP. If Depot evolves,
> we still could add more, but ...

That is my view (hence we depend upon java.net with optional dependencies on
HttpClient and VFS), with VFS being the protocol extension point (not our
bailiwick). I think of one simply (in a non-derogatory way) as just a file
system w/ HTTP access, but maybe Avalon's is different. Here I was trying to
find out more about Avalon's repository.
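
For the simple case that means little more than this (an assumed sketch over
plain java.net; the repository host and layout below are placeholders, not a
real server):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustration only: fetch one artifact from an HTTP-fronted, file-system
// style repository. The host and path layout are placeholders.
public class HttpFetchSketch {
    public static void main(String[] args) throws Exception {
        URL artifact = new URL(
            "http://repository.example.org/avalon/jars/avalon-framework-api-4.2.1.jar");
        Path target = Path.of("avalon-framework-api-4.2.1.jar");
        try (InputStream in = artifact.openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        System.out.println("downloaded " + Files.size(target) + " bytes to " + target);
    }
}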

regards

Adam


Re: Moving forward or letting go

Posted by "Markus M. May" <mm...@apache.org>.
Hello,
I am, like the rest of the team here, right now a little bit busy with 
other projects, mostly work issues. Anyway, here are some comments on Depot.

> Also, I think this problem can only be broken with small/focused/standalone
> tools -- like a browser separate from an HTTP server -- so folks don't get
> into an all or nothing situation. Hence, I think Depot ought start as a
> client, maybe forever.
Well, it was always my main intent to have a small tool to help solve 
the dependency issues during the build process. Also, as far as I am 
concerned I would like to see Depot evolving into the Ant area.
Ant is still somewhat of a standard in the build community and it was 
never really a topic to also solve dependencies other than build 
dependencies (meaning we are not resolving runtime dependencies, right?).
> 
> So -- Avalon repository. How do we see about talking to it? What
> protocols/APIs? Depot tends to use HTTP (to a file system based repository,
> ala Maven's Ibiblio, ala ASFRepo spec.) What more is wanted?
Sure, there are lots of other protocols which could be supported, but 
the one working is HTTP, and to solve lots of the dependency issues in 
Depot I believe we should restrict the usage to HTTP. If Depot evolves, 
we still could add more, but ...

Cheers,

Markus

Re: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> Lurking around here to find out how to cross-pollinate or merge the Depot
> efforts with the apparent parallel of the Avalon Repository, which in essence
> have the same goals (and perhaps some more);

(As you might recall) I once tried to find out more about Avalon's
repository work, but then got too swamped to follow up. Actually, Depot (as
it stands today) would rather be a set of client-side tools than an actual
repository. Basically it has command-line tools, Ant tasks, and APIs to fit
into folks' environments.

It seems (to me) that if we can break down the issues of dependencies (and
version compatibility ranges within those)  we can start to agree on
repositories/tools/naming. I guess this is how I got myself roped into Gump,
and have not focused on Depot.

FWIW: I'm eager to dig into versions, since Jar-Hell is my pet
peeve/motivator.

> I (and Stephen I think) have not dug into the details of what you have, and
> that is perhaps a shame in itself.
> But at the same time, I 'feel' we have a higher level of maturity in Avalon
> Repository (it is running in production systems) and I don't see the Depot
> community jumping up and down waving arms saying "Use this instead" or trying
> to absorb the features that we desperately need.

We tried the 'Repository', but [IMHO] the problem is too large (and perhaps
subjective) to handle at a theory level. I think this problem will only be
broken with enough folks getting hands-on experience (e.g. Wagon). I want to
see Depot succeed, if only as a prototype and experiment in the space. It'd
be nice if it could do more.

Also, I think this problem can only be broken with small/focused/standalone
tools -- like a browser separate from an HTTP server -- so folks don't get
into an all or nothing situation. Hence, I think Depot ought start as a
client, maybe forever.

So -- Avalon repository. How do we see about talking to it? What
protocols/APIs? Depot tends to use HTTP (to a file system based repository,
ala Maven's Ibiblio, ala ASFRepo spec.) What more is wanted?

regards,

Adam


Re: Moving forward or letting go

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Friday 18 June 2004 01:51, Adam R. B. Jack wrote:

> We came here wanting artifact downloads,
> from repositories, and we have that (albeit crude/basic). 

Lurking around here to find out how to cross-pollinate or merge the Depot 
efforts with the apparent parallel of the Avalon Repository, which in essence 
have the same goals (and perhaps some more);

1. Artifact downloads,
2. Downloads of chained dependencies,
3. Establishment of Classloader hierarchies.

I (and Stephen I think) have not dug into the details of what you have, and 
that is perhaps a shame in itself.
But at the same time, I 'feel' we have a higher level of maturity in Avalon 
Repository (it is running in production systems) and I don't see the Depot 
community jumping up and down waving arms saying "Use this instead" or trying 
to absorb the features that we desperately need.

I understand that this is perhaps a massive undertaking, and some people 
around here think that Avalon should die, but it would definitely raise the 
usage levels dramatically.

Cheers
Niclas
-- 
   +------//-------------------+
  / http://www.bali.ac        /
 / http://niclas.hedhman.org / 
+------//-------------------+


Re: Moving forward or letting go

Posted by "Mark R. Diggory" <md...@latte.harvard.edu>.
Nicola Ken Barozzi wrote:

>
> Mark R. Diggory wrote:
> ...
>
>> What about a "home"? Where will depot live as a tool?  For example, 
>> is it appropriate as a Jakarta Commons Component? Growth will occur 
>> if the project is situated in the proper location such that users can 
>> find and will use it. 
>
>
> Hmmm... the initial idea was to go top level, but is it shooting too 
> high? (I'm thinking out loud) Having as a goal to go to Jakarta 
> Commons and thus using the Sandbox may be an idea.
>
Unfortunately, Depot's dependencies may not make it a likely 
candidate for JC; for instance, HttpClient is moving out of JC and up to 
the Jakarta level in the near future. There is an effort to keep 
external dependencies in JC under control (even to the Jakarta level). 
There are a few projects with non-standard externals, but these are 
somewhat grandfathered. There may be parts of Depot that would benefit 
greatly from existing in JC, though I'm not sure which. In that case I 
would suspect that Depot would have to be either somewhere like 
Jakarta or its own TLP. Getting into Jakarta would attract Jakarta 
developers; maybe as a TLP there is a wider audience still. I can see 
this is not an easy decision.

-Mark


Re: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> Mark R. Diggory wrote:
> ...
> > What about a "home"?
>
> Hmmm... the initial idea was to go top level, but is it shooting too
> high? (I'm thinking out loud)

TLP has seemed to be shooting high (hence I never pushed to move out of
incubation, even earlier when we might have had some momentum). That said,
since this is trying to be Apache-wide (at a minimum), I'm tempted to shoot that
high. With how TLPs are going @ Apache these days, it might be the right
call & bring the least baggage. I think this issue/topic is a ways off in
our future, though.

However, this team/community needs to get active -- re-activate around its
goals -- and re-find its purpose. We came here wanting artifact downloads,
from repositories, and we have that (albeit crude/basic). We have a bunch of
pieces (probably more code than we needed, my fault); we just need to
leverage it, use it & get it streamlined.

To my mind, once we get Depot used in a few places (in anger), the rest
[including documentation/community] will follow. At least, that is my hope
(hence I'm putting it into Gump, and using it at work).

regards,

Adam


Re: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Mark R. Diggory wrote:
...
> What about a "home"? Where will depot live as a tool?  For example, is 
> it appropriate as a Jakarta Commons Component? Growth will occur if the 
> project is situated in the proper location such that users can find and 
> will use it. 

Hmmm... the initial idea was to go top level, but is it shooting too 
high? (I'm thinking out loud) Having as a goal to go to Jakarta Commons 
and thus using the Sandbox may be an idea.

Or is just telling people in the right forums enough?

> We are already seeing successful reuse of Jakarta Commons 
> Components throughout Apache.  Depot, as a Jakarta Commons Component, 
> would open doors for much reuse throughout Apache, as many projects are 
> already using Commons components and feel very comfortable with using 
> them. If Depot were a Commons Component, users who already feel familiar 
> with the Commons package layout would feel less apprehensive about 
> picking it up and introducing it as a dependency. 

I have already seen that some projects are interested in using Depot but 
are afraid of the dependency. In particular, from Cocoon I have heard the 
question of whether it's them needing us or vice versa. Probably going into 
Jakarta Commons would ease this.

On the other hand, the Jakarta Sandbox is seen as equally unstable, so I'm not 
sure it will be so different.

> More Dependencies = More Involvement.

I agree. That's why I want to propose it to Cocoon, and publicize it more.

I'd say we try to do this last push and see how it goes. The points I 
highlighted have to be done in any case, so eventual destination changes 
can still be done later on.

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: Moving forward or letting go

Posted by "Mark R. Diggory" <md...@latte.harvard.edu>.
Michael Davey wrote:

> Nicola Ken Barozzi wrote:
>
>>
>> It's some months that we have depot, and things have stalled. It 
>> happens in Opensource, and even big and succesfull projects have 
>> months that seems empty.
>>
>> But...
>>
>> Depot is not pushing, it's not getting used. We have to give it 
>> another push. I'll try to give a hand, but I'm currently very active 
>> on Forrest. and Adam is joyfully bound to Gump, so we'll need all the 
>> help we can get :-)
>
>
> IMHO, the biggest barrier is currently the documentation.  I've been 
> following the discussions on this mailing list and checking the 
> website from time to time, but am not prepared to work out all the 
> details of how depot works by reading the actual source.  A 
> five-minute tutorial on the key features of depot and a description of 
> how to set up a demo instance to tinker with would be very useful.
>
What about a "home"? Where will depot live as a tool?  For example, is 
it appropriate as a Jakarta Commons Component? Growth will occur if the 
project is situated in the proper location such that users can find and 
will use it. We are already seeing successful reuse of Jakarta Commons 
Components throughout Apache.  Depot, as a Jakarta Commons Component, 
would open doors for much reuse throughout Apache, as many projects are 
already using Commons components and feel very comfortable with using 
them. If Depot were a Commons Component, users who already feel familiar 
with the Commons package layout would feel less apprehensive about 
picking it up and introducing it as a dependency. More Dependencies = 
More Involvement.

-Mark


Re: Moving forward or letting go

Posted by Michael Davey <Mi...@coderage.org>.
Nicola Ken Barozzi wrote:

>
> It's some months that we have depot, and things have stalled. It 
> happens in Opensource, and even big and succesfull projects have 
> months that seems empty.
>
> But...
>
> Depot is not pushing, it's not getting used. We have to give it 
> another push. I'll try to give a hand, but I'm currently very active 
> on Forrest. and Adam is joyfully bound to Gump, so we'll need all the 
> help we can get :-)

IMHO, the biggest barrier is currently the documentation.  I've been 
following the discussions on this mailing list and checking the website 
from time to time, but am not prepared to work out all the details of 
how depot works by reading the actual source.  A five-minute tutorial on 
the key features of depot and a description of how to set up a demo 
instance to tinker with would be very useful.

-- 
Michael


Re: Moving forward or letting go

Posted by Nicola Ken Barozzi <ni...@apache.org>.

Nick Chalko wrote:

> Adam R. B. Jack wrote:
> 
>>> Simple plan:
>>>
>>> 1 - get the site in a readable state
>>> 2 - use depot in Cocoon for showing what it can be used for
>>>     (added parallelly to current system)
>>> 3 - do a nightly release
>>> 4 - publicize it in the usual places (the erverside, blogs, java.net,
>>>     javalobby, freshmeat, etc)
...
> I am back now,  I will peek at the cocoon list and help out.

I'm the only one wanting to do it; there is no discussion on the Cocoon 
list.

What I need now is a reply to 'Use case documents'.

TIA :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------

Re: Moving forward or letting go

Posted by Nick Chalko <ni...@chalko.com>.
Adam R. B. Jack wrote:

>
>  
>
>>Simple plan:
>>
>>1 - get the site in a readable state
>>2 - use depot in Cocoon for showing what it can be used for
>>     (added parallelly to current system)
>>3 - do a nightly release
>>4 - publicize it in the usual places (the erverside, blogs, java.net,
>>     javalobby, freshmeat, etc)
>>    
>>
>
>Simpler plan : "usage". You taught me it, and I agree. As such, I agree --- 
>(2) is key. I think the AntWorks team ought to want to help here, and I know
>Nick was interested before he got yanked away. Perhaps try to work with them
>on this.
>
>  
>
I am back now,  I will peek at the cocoon list and help out.

R,
Nick


Re: Moving forward or letting go

Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> It's some months that we have depot, and things have stalled.

I think we stalled coming into Apache (not sure why, for me timing &
events -- I have a multi-page internal blog posting bemoaning my misery with
Eclipse 3.0 [infra moving to https from http for SVN led me here], and
especially Eclipse w/ SVN.) I think I lost momentum w/ Depot internals, and
I lost my way from there. IMHO: We have to find a way to solidify design
decisions/choices, 'cos it can't be kept in one's head (and a wiki is
somewhat free-form).

> Depot is not pushing, it's not getting used. We have to give it another
> push. I'll try to give a hand, but I'm currently very active on Forrest.
> and Adam is joyfully bound to Gump, so we'll need all the help we can
> get :-)

Ah Gump, not so joyful some days. :( I am certainly bound to it --- I can't
seem to pull away, even when I want to. I've wanted to work on Depot for
months, but keep getting sucked back to Gump. I just spent a week getting
Gump moved over to DOM. Something about the old way of doing things just
overwhelmed me, and I couldn't move on.

That said, when I made this change, I felt liberated. I *finally* coded
Depot into Gump (a start) and am at the point of completing that. I'll write
more to the Gump list.

Also, Nick seems to be making good progress w/ AntWorks and Depot. Not as
much as he'd like (I think I need to help him w/ my parts after I'm done w/
Gump Depot), but progress. Folks are using it! [Nick is just busy/away for a
couple of weeks, or he'd tell you details.]

> Simple plan:
>
> 1 - get the site in a readable state
> 2 - use depot in Cocoon for showing what it can be used for
>      (added parallelly to current system)
> 3 - do a nightly release
> 4 - publicize it in the usual places (the erverside, blogs, java.net,
>      javalobby, freshmeat, etc)

Simpler plan : "usage". You taught me it, and I agree. As such, I agree --- 
(2) is key. I think the AntWorks team ought to want to help here, and I know
Nick was interested before he got yanked away. Perhaps try to work with them
on this.

> The important thing for me now is point 2, and 1 will be a byproduct, as
> I learn to use it.
>
> Stay ready for my questions :-)

Awesome.

regards,

Adam