Posted to dev@depot.apache.org by Nick Chalko <ni...@chalko.com> on 2004/06/20 21:27:10 UTC
classloader was: Moving forward or letting go
Niclas Hedhman wrote:
>On Sunday 20 June 2004 07:16, Markus M. May wrote:
>
>
>
>>Depot offers a little more. The current design covers Maven repositories
>>as well as flat file repositories (for the local repository e.g.).
>>
>>
>
>Ok. Avalon Repository has a SPI model in place, but code is currently
>required, since we cater to our own needs first :o)
>
>
>
>>But Depot has nothing to do with the classloading itself. It is, as
>>already stated, right now only for the build dependencies. The chained
>>dependencies are resolved via the dependencies of the dependencies. The
>>design therefore is not yet clear, because the needed meta-data for this
>>is not saved in the repository.
>>
>>
>
>Yes, you will need to define or use server-side meta-data. The immediate
>effect after solving that is the Classloader (not classloading)
>establishment, since the 'type' of classloader the dependency belongs to,
>MUST be managed by the repository system, or you have missed a major goal.
>
>
>
Class loading is not a goal.
IMHO it is orthogonal to Depot. Depot gets/manages a repository of
artifacts.
For classloading, once you have the right jar in a known place on the
local file system, then something else can handle classloading.
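Nick's point can be sketched with plain JDK classloading. This is a minimal, hypothetical example (the jar, its marker resource, and the class name are all made up for illustration): once an artifact sits at a known local path, a stock URLClassLoader handles the rest, with no knowledge of how the jar got there.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class LocalRepoLoading {
    public static void main(String[] args) throws Exception {
        // Stand-in for an artifact that a repository tool (e.g. Depot)
        // has already placed at a known spot on the local file system.
        File jar = File.createTempFile("example-1.0", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("META-INF/marker.txt"));
            out.write("hello from the artifact".getBytes("UTF-8"));
            out.closeEntry();
        }
        // "Something else" handles classloading: plain JDK machinery,
        // completely unaware of repositories or download logic.
        try (URLClassLoader loader =
                new URLClassLoader(new URL[] { jar.toURI().toURL() }, null)) {
            URL res = loader.findResource("META-INF/marker.txt");
            System.out.println(res != null ? "found" : "missing");
        }
        jar.delete();
    }
}
```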
I am -1 for directly handling classloading.
Handling dependencies is a goal. Adam describes this as version
constraints http://incubator.apache.org/depot/version/constraints.html
R,
Nick
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
...
> I still believe that Gump descriptors should be an 'output artifact' and not
> an 'input artifact' in Repository Heaven.
>
> Once we have created a solid model that fits the needs we can find, it shouldn't
> be that hard to generate Gump descriptors as a side-effect.
I think we are talking different languages ;-)
I'm not saying that the Gump descriptor is a blessing. I just see that
there is *a lot* of metadata information *already* there, and it's
*updated*.
Now, we can make our own version, or build on something that is already
there. I think that it's possible and better, _but_ it's too early to
tell, as we haven't decided what we need yet ;-)
So, let's forget for a moment this thing and start building our layers.
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Niclas Hedhman <ni...@hedhman.org>.
On Tuesday 22 June 2004 03:41, Nicola Ken Barozzi wrote:
> Stephen McConnell wrote:
> > Nicola Ken Barozzi wrote:
> >> Gump metadata != Gump being setup.
> > Gump meta-data is insufficient.
> It sure is. But it can be enhanced without having Gump barf on extra tags.
I still believe that Gump descriptors should be an 'output artifact' and not
an 'input artifact' in Repository Heaven.
Once we have created a solid model that fits the needs we can find, it shouldn't
be that hard to generate Gump descriptors as a side-effect.
Cheers
Niclas
--
+------//-------------------+
/ http://www.bali.ac /
/ http://niclas.hedhman.org /
+------//-------------------+
Re: classloader was: Moving forward or letting go
Posted by Niclas Hedhman <ni...@hedhman.org>.
On Tuesday 22 June 2004 05:49, Nicola Ken Barozzi wrote:
> I get it.
>
> In essence, a single Avalon system needs to load different artifacts in
> separate classloaders during the same run.
Yes, it is not a build-time concern, but builds include the running of tests,
and some of these tests are fairly extensive.
> Ok, let's now tackle the other bit, that layering...
Agree.
Cheers
Niclas
--
+------//-------------------+
/ http://www.bali.ac /
/ http://niclas.hedhman.org /
+------//-------------------+
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
I get it.
In essence, a single Avalon system needs to load different artifacts in
separate classloaders during the same run.
This is whacky! ;-P
From a Gump perspective, this should not be needed, as to build one
needs a simple classloader with a list of jars. But you are not talking
about build time...
Ok, let's now tackle the other bit, that layering...
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:
>
> Stephen McConnell wrote:
> ...
>
>> Going the direction of multiple gump files means invoking a build
>> multiple times. This is massive inefficiency - do a build to generate
>> the classes directory and a jar file, do another build to run the
>> testcase,
>
>
> You call it inefficiency, I call it *safe* separation of builds. I don't
> see the problem here. Note that I'm talking about *Gump*, not about a
> build system that also uses Gump metadata.
>
>> but then when you need the above information for generation of a build
>> artifact - well - you're sunk. You cannot do it with gump as it is today.
>
>
> I don't understand this last sentence.
Sorry ... s/you/you're
Basically the issue is that a gump descriptor is designed around the
notion of a single path (path as in ant path concept used for classpath
construction). When dealing with the construction of information for a
plugin scenario you need to run a test case using a different classpath
to the build cycle. The test scenario will use information generated
from information about API, SPI and implementation classpaths - but hang
on - gump is only providing us with a single classpath. All of a sudden
you're faced with the problem of building artifacts bit by bit across
successive gump runs.
>
>> The solution is to do to gump what Sam did to the Ant community .. he
>> basically said - "hey .. there is an application that knows more about
>> the classpath information than you do" and from that intervention ant
>> added the ability to override the classloader definition that ant uses.
>>
>> Apply this same logic to gump - there is a build system that knows
>> more about the class loading requirements than gump does - and gump
>> needs to delegate responsibility to that system - just as ant
>> delegates responsibility to gump.
>
>
> It doesn't make sense. You mean that one should delegate
>
> buildsystem -> CI system -> buildsystem
I'm saying that products like magic and maven know more about the
classloader criteria than gump does. Just as ant delegates the
responsibility of classpath definition to gump, so should gump delegate
responsibility to applications that know more about the context than
gump does.
E.g.
|------------|      |---------------|      |-------------|
|    gump    | ---> |     magic     | -->  |   project   |
|            | <--- |               |      |-------------|
|            |      |---------------|
|            |
|            |      |---------------|      |-------------|
|            | ---> |      ant      | -->  |   project   |
|            | <--- |---------------|      |-------------|
|------------|
... and the only difference here between ant and magic is that magic
knows about multi-staged classloaders (see below) and multi-mode
classpath policies (where multi-mode means different classloaders for
build, test and runtime).
> ?
>
> Gump took away the responsibility from the build system, why should it
> give it back?
Because just as gump knows more about the context than ant, magic (or
maven) knows more about the context than gump.
>>>> I.e. gump is very focused on the pure compile scenarios and does not
>>>> deal with the realities of test and runtime environments that load
>>>> plugins dynamically.
>>>
>>>
>>> You cannot create fixed metadata for dynamically loaded plugins
>>> (components), unless you decide to declare them, and the above is
>>> sufficient.
>>
>>
>> Consider the problem of generating the meta data for a multi-staged
>> classloader
>
>
> What's a 'multi-staged classloader'?
|-----------------------|
| bootstrap-classloader |
|-----------------------|
^
|
|-----------------------|
| api-classloader |
|-----------------------|
^
|
|-----------------------|
| spi-classloader |
|-----------------------|
^
|
|-----------------------|
| impl-classloader |
|-----------------------|
The api classloader is constructed by a container and is typically
supplied as a parent classloader for a container. The spi classloader
is constructed as a child of the api loader and is typically used to
load privileged facilities that interact with a container SPI (Service
Provider Interface). The impl classloader is private to the application
managing a set of pluggable components.
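The hierarchy above can be wired up directly with JDK classloaders. This is only a minimal sketch: the URL lists are left empty purely to show the parent chaining, whereas a real container would populate each stage from repository metadata.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class StagedLoaders {
    public static void main(String[] args) {
        // Bootstrap/system loader sits at the top of the chain.
        ClassLoader bootstrap = StagedLoaders.class.getClassLoader();
        // Each stage is a child of the previous one, so impl code can
        // see api and spi classes, but api code never sees impl classes.
        URLClassLoader api  = new URLClassLoader(new URL[0], bootstrap);
        URLClassLoader spi  = new URLClassLoader(new URL[0], api);
        URLClassLoader impl = new URLClassLoader(new URL[0], spi);
        // Class lookup delegation flows upward through the chain.
        System.out.println(impl.getParent() == spi);
        System.out.println(spi.getParent() == api);
        System.out.println(api.getParent() == bootstrap);
    }
}
```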
>> containing API, SPI and IMPL separation based on one or multiple gump
>> definitions ..
>
>
> A classloader containing 'separation'?
Sure - think of it in terms of:
* respectable
* exposed
* naked
The API is respectable, an SPI is exposed, the impl - that's getting naked.
>> you could write a special task to handle phased buildup of data,
>
>
> 'Phased buildup'?
Using gump as it is today on a project by project basis would require
successive gump runs to build up "staged" classpath information -
because of the basics of gump - a project is a classpath definition. A
staged classloader is potentially three classloader definitions (in gump
terms). In magic terms its just one. Mapping gump to magic requires
three gump projects to generate one of multiple artifacts created in a
magic build. I.e. gump does not mesh nicely with the building and
testing of plug-in based systems.
Plugin based systems absolutely need good repository system.
>> and another task to consolidate this and progressively - over three
>> gump build cycles you could produce the meta-data. Or, you could just
>> say to magic - <artifact/> and if gump is opened up a bit .. the
>> generated artifact will be totally linked in to gump generated resources
>
>
> 'Linked in to gump generated resources'?
Gump generates stuff .. to build the meta-data to run tests I need to
know the addresses of gump generated content. I.e. I need to link to
gump generated resources.
>
>> - which means that subsequent builds that are using the plugin are
>> running against the gump content.
>
>
> You totally lost me here.
Imagine you have a project that has the following dependencies:
* log4j (runtime dependency)
* avalon-framework-api
* avalon-framework-impl (test-time dependency)
* avalon-meta-tools (plugin)
Imagine also that this project generates a staged classloader descriptor
used within the testcase for the project. To do a real gump
assessment, the avalon-meta-tools meta-data descriptor needs to be
generated to reference gump generated jar paths. The avalon-meta-tools
jar itself is not a compile, build or runtime dependency ... it's just a
tool used to generate some meta-information as part of the build
process. The avalon-framework-impl dependency is not a runtime
dependency because this is provided by the container that will run the
artifact produced by this build - but it is needed to compile and
execute unit tests. When the test system launches - it loads meta data
created by the avalon-meta-tools plugin, and loads the subject of this
build as a plugin. All in all there are something like six different
classpath definitions flying around here.
I.e. getting lost is a completely reasonable feeling!
;-)
>
>> The point is that gump build information is not sufficiently rich when
>> it comes down to really using a repository in a productive manner when
>> dealing with pluggable artifacts (and this covers both build and
>> runtime concerns). How does this affect Depot? Simply that gump
>> project descriptors should be considered as an application specific
>> descriptor - not a generic solution.
>
>
> Sorry, I don't understand.
The thing is that a repository "to me" is the source of deployment
solutions. The definitions of those solutions can be expressed in
meta-data (and the avalon crew have got this stuff down pat). The
source of that meta-data can be through published meta-data descriptors
or descriptors that are dynamically generated in response to service
requests. Either way - the underlying repository is a fundamental unit
in the deployment equation - and the language between the client and the
repository is very much a classloader subject.
Hope that helps.
Cheers, Steve.
--
|---------------------------------------|
| Magic by Merlin |
| Production by Avalon |
| |
| http://avalon.apache.org |
|---------------------------------------|
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
Stephen McConnell wrote:
...
> Going the direction of multiple gump files means invoking a build
> multiple times. This is massive inefficiency - do a build to generate
> the classes directory and a jar file, do another build to run the
> testcase,
You call it inefficiency, I call it *safe* separation of builds. I don't
see the problem here. Note that I'm talking about *Gump*, not about a
build system that also uses Gump metadata.
> but then when you need the above information for generation of
> a build artifact - well - you're sunk. You cannot do it with gump as it is
> today.
I don't understand this last sentence.
> The solution is to do to gump what Sam did to the Ant community .. he
> basically said - "hey .. there is an application that knows more about
> the classpath information than you do" and from that intervention ant
> added the ability to override the classloader definition that ant uses.
>
> Apply this same logic to gump - there is a build system that knows more
> about the class loading requirements than gump does - and gump needs to
> delegate responsibility to that system - just as ant delegates
> responsibility to gump.
It doesn't make sense. You mean that one should delegate
buildsystem -> CI system -> buildsystem
?
Gump took away the responsibility from the build system, why should it
give it back?
>>> I.e. gump is very focused on the pure compile scenarios and does not
>>> deal with the realities of test and runtime environments that load
>>> plugins dynamically.
>>
>> You cannot create fixed metadata for dynamically loaded plugins
>> (components), unless you decide to declare them, and the above is
>> sufficient.
>
> Consider the problem of generating the meta data for a multi-staged
> classloader
What's a 'multi-staged classloader'?
> containing API, SPI and IMPL separation based on one or
> multiple gump definitions ..
A classloader containing 'separation'?
> you could write a special task to handle
> phased buildup of data,
'Phased buildup'?
> and another task to consolidate this and
> progressively - over three gump build cycles you could produce the
> meta-data. Or, you could just say to magic - <artifact/> and if gump is
> opened up a bit .. the generated artifact will be totally linked in to
> gump generated resources
'Linked in to gump generated resources'?
> - which means that subsequent builds that are
> using the plugin are running against the gump content.
You totally lost me here.
> The point is that gump build information is not sufficiently rich when
> it comes down to really using a repository in a productive manner when
> dealing with pluggable artifacts (and this covers both build and runtime
> concerns). How does this affect Depot? Simply that gump project
> descriptors should be considered as an application specific descriptor -
> not a generic solution.
Sorry, I don't understand.
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:
>
> Stephen McConnell wrote:
>
>> Nicola Ken Barozzi wrote:
>
> ...
>
>>> Gump metadata != Gump being setup.
>>
>>
>> Gump meta-data is insufficient.
>
>
> It sure is. But it can be enhanced without having Gump barf on extra tags.
>
>> In order to create a functionally sufficient expression of path
>> information you would need 6 separate gump project descriptors per
>> project:
>>
>> build
>> test
>> runtime-api
>> runtime-spi
>> runtime-impl
>> runtime-composite
>
>
> Gump uses the word "project" in an improper way, as it's more about a
> project descriptor.
>
> You can do the above in Gump by creating avalon, avalon-test,
> avalon-api, etc... If you look at the descriptors this is for example
> what Ant and many other projects do.
Going the direction of multiple gump files means invoking a build
multiple times. This is massive inefficiency - do a build to generate
the classes directory and a jar file, do another build to run the
testcase, but then when you need the above information for generation of
a build artifact - well - you're sunk. You cannot do it with gump as it is
today.
The solution is to do to gump what Sam did to the Ant community .. he
basically said - "hey .. there is an application that knows more about
the classpath information than you do" and from that intervention ant
added the ability to override the classloader definition that ant uses.
Apply this same logic to gump - there is a build system that knows more
about the class loading requirements than gump does - and gump needs to
delegate responsibility to that system - just as ant delegates
responsibility to gump.
>
>> I.e. gump is very focused on the pure compile scenarios and does not
>> deal with the realities of test and runtime environments that load
>> plugins dynamically.
>
>
> You cannot create fixed metadata for dynamically loaded plugins
> (components), unless you decide to declare them, and the above is
> sufficient.
Consider the problem of generating the meta data for a multi-staged
classloader containing API, SPI and IMPL separation based on one or
multiple gump definitions .. you could write a special task to handle
phased buildup of data, and another task to consolidate this and
progressively - over three gump build cycles you could produce the
meta-data. Or, you could just say to magic - <artifact/> and if gump is
opened up a bit .. the generated artifact will be totally linked in to
gump generated resources - which means that subsequent builds that are
using the plugin are running against the gump content.
The point is that gump build information is not sufficiently rich when
it comes down to really using a repository in a productive manner when
dealing with pluggable artifacts (and this covers both build and runtime
concerns). How does this affect Depot? Simply that gump project
descriptors should be considered as an application specific descriptor -
not a generic solution.
Cheers, Steve.
p.s.
Re. gump management - I'm currently playing around with the notion of
one gump project covering all of avalon - the single project definition
generated by magic that declares the external dependencies (about 8
artifacts) and the Avalon produced artifacts (about 60 or more). The
magic build will generate everything including plugins and metadata and
publish this back to gump.
SJM
--
|---------------------------------------|
| Magic by Merlin |
| Production by Avalon |
| |
| http://avalon.apache.org |
|---------------------------------------|
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
Stephen McConnell wrote:
> Nicola Ken Barozzi wrote:
...
>> Gump metadata != Gump being setup.
>
> Gump meta-data is insufficient.
It sure is. But it can be enhanced without having Gump barf on extra tags.
> In order to create a functionally sufficient expression of path
> information you would need 6 separate gump project descriptors per project:
>
> build
> test
> runtime-api
> runtime-spi
> runtime-impl
> runtime-composite
Gump uses the word "project" in an improper way, as it's more about a
project descriptor.
You can do the above in Gump by creating avalon, avalon-test,
avalon-api, etc... If you look at the descriptors this is for example
what Ant and many other projects do.
> I.e. gump is very focused on the pure compile scenarios and does not
> deal with the realities of test and runtime environments that load
> plugins dynamically.
You cannot create fixed metadata for dynamically loaded plugins
(components), unless you decide to declare them, and the above is
sufficient.
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Stephen McConnell <mc...@apache.org>.
Nicola Ken Barozzi wrote:
>
> Niclas Hedhman wrote:
>
>> On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:
>
> ...
>
>>>> Gump?? Sorry, how on earth did you manage to get a "Continuous
>>>> Integration System" to be part of a 'Jar Hell' solution?
>>>
>>>
>>> The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
>>> investigating in this too.
>>
>>
>> How are you going to rely on Gump for 3rd party projects, who have no
>> interest in having their own Gump setup, but for sure want to harness
>> the power we are all striving for?
>
>
> Gump metadata != Gump being setup.
Gump meta-data is insufficient.
In order to create a functionally sufficient expression of path
information you would need 6 separate gump project descriptors per project:
build
test
runtime-api
runtime-spi
runtime-impl
runtime-composite
I.e. gump is very focused on the pure compile scenarios and does not
deal with the realities of test and runtime environments that load
plugins dynamically.
Cheers, Steve.
--
|---------------------------------------|
| Magic by Merlin |
| Production by Avalon |
| |
| http://avalon.apache.org |
|---------------------------------------|
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
> On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:
...
>>>Gump?? Sorry, how on earth did you manage to get a "Continuous
>>>Integration System" to be part of a 'Jar Hell' solution?
>>
>>The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
>>investigating in this too.
>
> How are you going to rely on Gump for 3rd party projects, who have no interest
> in having their own Gump setup, but for sure want to harness the power we are
> all striving for?
Gump metadata != Gump being setup.
> ATM, I only know of three build systems (Ant for this discussion is more of a
> build toolkit, than a complete system, so I leave that out), namely Maven,
> Gump and 'our pet' Magic.
Keep in mind that you are talking to three developers that have worked
on Centipede even before Maven had the concept of plugins :-)
We did our "magic" way before, and Depot is in fact a spinoff.
> All of these solve the dependency pattern in their own way.
> Magic solves all _our_ concerns, i.e. chained dependencies, classloader
> establishment and the standard stuff.
> Gump solves chained dependencies, but currently doesn't bother about
> classloader concerns.
> Maven handles neither chained dependencies nor classloader concerns.
>
> Stephen is currently trying to work out how to teach Gump the classloader
> tricks, and I haven't followed that very closely.
Gump is not written in Java anymore, so you're out of luck on this one. ;-)
Over at Krysalis we had started Viprom, which was an abstraction over the
object model. Dunno what to do now though.
>>>We have chained dependencies in place. It works well, but our down side
>>>is that only Avalon tools generate and understand the necessary meta
>>>information required to support this feature.
>>
>>That's why using Gump metadata would bring projects closer.
>
> Maybe you are looking at this from the wrong end. If Depot could solidly
> define what complex projects (such as Avalon) require, in the form of meta
> information, then one should teach Gump to use it.
Meta information, that's the point. Maven has its own object model, Gump
has a merge.xml DOM that we can use as an object model... what should we
use?
>>The only real issue I see is the catch22 problem you have outlined about
>>Avalon using Incubator code and viceversa.
>>Let me disagree with it though. It's ok that an Apache project does not
>>rely on incubating projects, but if some of the developers are part of
>>this incubating project, does it still make sense?
>
> Probably not. I could imagine that there are even a few more phases involved:
> * Phase I: Avalon Repository is copied across, but Avalon maintains a
> parallel codebase, and changes are merged from one to the other.
> * Phase II: Avalon Repository is removed from the Avalon codebase.
> * Phase III: Avalon Repository has its package names and so forth changed to
> suit the Depot project.
> * Phase IV: Bits and pieces are broken out into other parts of Depot, while
> maintaining enough compatibility with Avalon Merlin.
From our POV it's simpler, like this:
* Repository is moved under Depot with package names changed.
(It's kept in parallel in Avalon for as long as Avalon wants; it's
not a Depot concern)
* Merge of the codebases
>>Would this ease concerns?
>
> Perhaps. To be totally honest, few people in Avalon care much about what
> Stephen and I decide about the codebase, as long as compatibility remains.
> So, I'll discuss it with Stephen and see how we can tackle this.
'k
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 19:01, Nicola Ken Barozzi wrote:
> I don't agree here, Nick, classloading is part of artifact handling,
> albeit in the JVM.
> It can and IMHO should live as a Depot subproject.
Thanks for the thumbs up... I was getting a bit depressed :o)
> > Gump?? Sorry, how on earth did you manage to get a "Continuous
> > Integration System" to be part of a 'Jar Hell' solution?
>
> The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
> investigating in this too.
How are you going to rely on Gump for 3rd party projects, who have no interest
in having their own Gump setup, but for sure want to harness the power we are
all striving for?
ATM, I only know of three build systems (Ant for this discussion is more of a
build toolkit, than a complete system, so I leave that out), namely Maven,
Gump and 'our pet' Magic.
All of these solve the dependency pattern in their own way.
Magic solves all _our_ concerns, i.e. chained dependencies, classloader
establishment and the standard stuff.
Gump solves chained dependencies, but currently doesn't bother about
classloader concerns.
Maven handles neither chained dependencies nor classloader concerns.
Stephen is currently trying to work out how to teach Gump the classloader
tricks, and I haven't followed that very closely.
> > We have chained dependencies in place. It works well, but our down side
> > is that only Avalon tools generate and understand the necessary meta
> > information required to support this feature.
>
> That's why using Gump metadata would bring projects closer.
Maybe you are looking at this from the wrong end. If Depot could solidly
define what complex projects (such as Avalon) require, in the form of meta
information, then one should teach Gump to use it.
> The only real issue I see is the catch22 problem you have outlined about
> Avalon using Incubator code and viceversa.
> Let me disagree with it though. It's ok that an Apache project does not
> rely on incubating projects, but if some of the developers are part of
> this incubating project, does it still make sense?
Probably not. I could imagine that there are even a few more phases involved:
* Phase I: Avalon Repository is copied across, but Avalon maintains a
parallel codebase, and changes are merged from one to the other.
* Phase II: Avalon Repository is removed from the Avalon codebase.
* Phase III: Avalon Repository has its package names and so forth changed to
suit the Depot project.
* Phase IV: Bits and pieces are broken out into other parts of Depot, while
maintaining enough compatibility with Avalon Merlin.
> Would this ease concerns?
Perhaps. To be totally honest, few people in Avalon care much about what
Stephen and I decide about the codebase, as long as compatibility remains.
So, I'll discuss it with Stephen and see how we can tackle this.
Cheers
Niclas
--
+------//-------------------+
/ http://www.bali.ac /
/ http://niclas.hedhman.org /
+------//-------------------+
Re: classloader was: Moving forward or letting go
Posted by Nick Chalko <ni...@chalko.com>.
Nicola Ken Barozzi wrote:
>
> Niclas Hedhman wrote:
>
>> On Monday 21 June 2004 13:04, Nick Chalko wrote:
>>
> ...
>
>>> Classloading is a real problem in java, and an important one to tackle
>>> but I prefer to keep the scope of Depot limited. Other projects like
>>> Avalon can tackle the classloaders. Perhaps we can take over the
>>> version/download/security stuff.
>>
>
> I don't agree here, Nick, classloading is part of artifact handling,
> albeit in the JVM.
>
> It can and IMHO should live as a Depot subproject.
I withdraw my -1.
Classloading as a separate project does make sense for Depot. I should
have used a -0 at worst.
R,
Nick
Re: classloader was: Moving forward or letting go
Posted by Nicola Ken Barozzi <ni...@apache.org>.
Niclas Hedhman wrote:
> On Monday 21 June 2004 13:04, Nick Chalko wrote:
>
...
>>Classloading is a real problem in java, and an important one to tackle
>>but I prefer to keep the scope of Depot limited. Other projects like
>>Avalon can tackle the classloaders. Perhaps we can take over the
>>version/download/security stuff.
I don't agree here, Nick, classloading is part of artifact handling,
albeit in the JVM.
It can and IMHO should live as a Depot subproject.
> The problem comes in when you introduce chained dependencies. How do you signal
> that such a thing exists to the 'user'?
...
>>The issue of chained dependencies is important, and I think gump can be of
>>assistance. However gump only reflects the current state and we need
>>access to the dependencies for other versions as well.
>
> Gump?? Sorry, how on earth did you manage to get a "Continuous Integration
> System" to be part of a 'Jar Hell' solution?
The Gump Metadata is a rich source of dependencies. Stephen AFAIK is
investigating in this too.
> We have chained dependencies in place. It works well, but our down side is
> that only Avalon tools generate and understand the necessary meta information
> required to support this feature.
That's why using Gump metadata would bring projects closer.
...
>>What do you see as the common ground for us to participate on ?
>
> ATM, the biggest problem is that we:
> * Know too little about each other's concerns and view points.
> * Don't understand each other's codebases.
> * Disagree on the total scope of Depot.
>
> What _I_ really would like to do is move Avalon Repository to Depot as a
> sub-project, but there are some 'community problems' with that, i.e. Depot is
> in Incubator, and Avalon has said NO to depending on Incubator projects.
> Anyway, once Repository was in Depot, one could take out the bits and pieces
> that exist elsewhere in the Depot codebase.
I have read the Avalon Repository site and it's very much in line with
Depot.
The only real issue I see is the catch22 problem you have outlined about
Avalon using Incubator code and viceversa.
Let me disagree with it though. It's ok that an Apache project does not
rely on incubating projects, but if some of the developers are part of
this incubating project, does it still make sense?
I mean, imagine that we move the Avalon Repository code into Depot and start
merging. I don't see it as a problem for Avalon, as its development would
still happen with Avalon people involved, as before, and Avalon
can still fork back in case it wants to.
Would this ease concerns?
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------
Re: classloader was: Moving forward or letting go
Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 13:04, Nick Chalko wrote:
> For me the target use case for Depot has always been managing the artifacts
> needed to build. So class loaders, beyond setting a <path> resource in
> Ant, have never been a needed task.
Hmmm.... But it has been solved over and over, and consolidating everyone's
efforts requires that something 'extra' is brought to the table. No?
> Handling a chain of dependencies is something that we would like to do,
> but it has never been a pressing concern. For the most part I scratch
> what itches, and jars for an Ant build itch me all the time, so that
> is where I scratch.
Ok, that is a fair point. But one can still be setting out on a "vision", and
gathering some support around such vision, and some people may join in and
help out. Scratching the itch is just what you do today :o)
> I understand some of what Avalon is doing with downloading needed jars
> for an application server.
Kind of correct, though the benefit is not for Avalon Merlin itself but for
our users. When they include a JarABC resource, it is very nice that they
don't need to worry that JarDEF, JarGHI and half a dozen others also
need to be downloaded as a result of depending on JarABC.
> * Version,
> o Marking
> o Comparing
> o Compatibility,
We are drawing closer to the conclusion that the version should be a unique
number, and we are basically eyeing the SVN revision number itself, as that
gives us the side effect of knowing exactly how to rebuild the artifact in
question.
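If the version really is just the SVN revision number, comparison collapses to integer ordering; a minimal sketch of that idea (the class and field names here are made up for illustration):

```java
public class RevisionVersion implements Comparable<RevisionVersion> {
    final long revision; // e.g. the SVN revision the artifact was built from

    RevisionVersion(long revision) {
        this.revision = revision;
    }

    // A single monotonically increasing number gives a total order for free;
    // no dotted-segment parsing or scheme-specific rules needed.
    public int compareTo(RevisionVersion other) {
        return Long.compare(revision, other.revision);
    }

    public static void main(String[] args) {
        System.out.println(
                new RevisionVersion(20412).compareTo(new RevisionVersion(19876)) > 0);
    }
}
```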
> * Downloading.
> o Maintaining a local cache of jars
> o Updating a local cache of jars
> o getting the "best" jar available.
> o mirrors
"best" probably means the 'Version Constraint' that I saw on the web site. I
still have some mixed feelings about this, not by concept but the
design/impl.
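For concreteness, picking the "best" jar under a version constraint might look like the following; the half-open range syntax and the dotted-version comparison are made up for illustration, not Depot's actual design:

```java
import java.util.List;

public class BestMatch {
    // Pick the highest available version within [minIncl, maxExcl) -- a toy
    // stand-in for "getting the best jar available" under a constraint.
    static String best(List<String> available, String minIncl, String maxExcl) {
        String best = null;
        for (String v : available) {
            if (cmp(v, minIncl) >= 0 && cmp(v, maxExcl) < 0
                    && (best == null || cmp(v, best) > 0)) {
                best = v;
            }
        }
        return best; // null when nothing satisfies the constraint
    }

    // Compare dotted versions numerically, segment by segment,
    // treating missing segments as zero (so "1.2" == "1.2.0").
    static int cmp(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(best(List.of("1.1", "1.2", "1.2.8", "2.0"), "1.2", "2.0"));
    }
}
```

Numeric segment comparison matters here: a naive string compare would rank "1.9" above "1.10".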
> * Security
> o verify md5 signatures
> o verify other signatures.
MD5 is not security, only a download checksum.
Proper signature handling, especially now that the ASF is getting a CA box up
and running, is definitely something good... but otoh it doesn't exist yet,
and we'll probably have that up and running in Avalon Repository before
Depot has something useful in place (am I too negative? Sorry in that case.)
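The distinction Niclas draws is worth keeping in mind: an MD5 check only confirms the bytes arrived intact, since anyone who can tamper with the jar can regenerate the .md5 alongside it. A minimal sketch of that checksum step (the file contents here are placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class ChecksumCheck {
    // Compute the MD5 digest of downloaded bytes as lowercase hex.
    // This detects transfer corruption, not malicious modification.
    static String md5Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            return HexFormat.of().formatHex(md.digest(data));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 ships with every JDK
        }
    }

    public static void main(String[] args) {
        byte[] downloaded = "jar-bytes".getBytes(StandardCharsets.UTF_8);
        // In practice the expected digest is read from the published .md5 file.
        String expected = md5Hex(downloaded);
        System.out.println(expected.equals(md5Hex(downloaded))
                ? "checksum ok" : "download corrupted");
    }
}
```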
> Classloading is a real problem in java, and an important one to tackle
> but I prefer to keep the scope of Depot limited. Other project's like
> Avalon can tackle the classloaders. Perhaps we can take over the
> version/download/security stuff.
The problem comes in when you introduce chained dependencies. How do you
signal that such a thing exists to the 'user'?
> In a perfect world, what would the Depot API as used in your class
> loader look like?
Something like;
Artifact artifact = Artifact.locate( "jar:avalon:avalon-framework", version );
ClassLoader cl = artifact.getClassLoader();
'version' above is some form of version descriptor. This part requires some
serious thinking.
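In plain-JVM terms (and not Avalon's actual implementation), the getClassLoader() step could amount to wrapping the locally resolved jar paths in a URLClassLoader once the repository has fetched them:

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

public class ArtifactLoader {
    // Hypothetical: the repository has already resolved the artifact and its
    // chained dependencies into local files; we only wrap them in a loader.
    static ClassLoader classLoaderFor(List<File> resolvedJars) {
        URL[] urls = resolvedJars.stream().map(f -> {
            try {
                return f.toURI().toURL();
            } catch (MalformedURLException e) {
                throw new IllegalArgumentException(e);
            }
        }).toArray(URL[]::new);
        // Parent is the current loader, so core classes still resolve normally.
        return new URLClassLoader(urls, ArtifactLoader.class.getClassLoader());
    }

    public static void main(String[] args) {
        ClassLoader cl = classLoaderFor(List.of(new File("avalon-framework.jar")));
        System.out.println(cl instanceof URLClassLoader);
    }
}
```

The hard part Niclas points at is not this wrapping but deciding which jars belong in which loader, which is where the repository's meta-data comes in.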
> The issue of chained dependencies is important, and I think Gump can be of
> assistance. However, Gump only reflects the current state, and we need
> access to the dependencies for other versions as well.
Gump?? Sorry, how on earth did you manage to get a "Continuous Integration
System" to be part of a 'Jar Hell' solution?
We have chained dependencies in place. It works well, but our downside is
that only Avalon tools generate and understand the necessary meta information
required to support this feature.
> So work on the Meta info is a place we can share efforts. But it is a
> goal for Depot to work, at least in a basic/default way WITHOUT any
> separate meta info.
Ok, that is not a problem.
> What do you see as the common ground for us to participate on ?
ATM, the biggest problem is that we:
* Know too little about each other's concerns and viewpoints.
* Don't understand each other's codebases.
* Disagree on the total scope of Depot.
What _I_ really would like to do is move Avalon Repository to Depot as a
sub-project, but there are some 'community problems' with that, i.e. Depot is
in Incubator, and Avalon has said NO to depending on Incubator projects.
Anyway, once Repository was in Depot, one could take out the bits and pieces
that exist elsewhere in the Depot codebase.
Cheers
Niclas
--
+------//-------------------+
/ http://www.bali.ac /
/ http://niclas.hedhman.org /
+------//-------------------+
Re: classloader was: Moving forward or letting go
Posted by Nick Chalko <ni...@chalko.com>.
Niclas Hedhman wrote:
>
>>I am -1 for directly handling classloading.
>>
>>
>
>Then please provide an answer to the question:
>
>How do you intend to provide generic meta information to the Depot client, and
>how is that meta information generated and handed to Depot prior to
>publishing the artifacts?
>
>
>And secondly: who should be responsible for defining the classloader concern,
>expressed in generic meta information?
>
>
>
>Since I suspect the answers to the above are "Not Depot's concern" and "Not
>Depot", I am sorry to say that Depot's future is very bleak, and I doubt
>it will receive any support from Avalon.
>
>I hope the "not Depot's concern" stems from a lack of understanding of the
>problem at hand, and that you will gain that insight sooner or later.
>
>
For me the target use case for Depot has always been managing the artifacts
needed to build. So class loaders, beyond setting a <path> resource in
Ant, have never been a needed task.
Handling a chain of dependencies is something that we would like to do,
but it has never been a pressing concern. For the most part I scratch
what itches, and jars for an Ant build itch me all the time, so that
is where I scratch.
I understand some of what Avalon is doing with downloading needed jars
for an application server.
Here are the pieces of code I think can be useful to Avalon.
* Version,
o Marking
o Comparing
o Compatibility,
* Downloading.
o Maintaining a local cache of jars
o Updating a local cache of jars
o getting the "best" jar available.
o mirrors
* Security
o verify md5 signatures
o verify other signatures.
Having an outside user of our API would be great. Our API has gotten
really FAT and needs cleaning. If you are interested in investigating
the use of our API, I will be happy to help.
For the future of Depot, I think it is important to try to produce the
smallest useful set of tools possible, not the biggest.
Classloading is a real problem in java, and an important one to tackle
but I prefer to keep the scope of Depot limited. Other project's like
Avalon can tackle the classloaders. Perhaps we can take over the
version/download/security stuff.
In a perfect world, what would the Depot API as used in your class
loader look like?
File theJar = depot.getResource("log4j", "1.2",
getDependentProjects /* boolean */);
?
The issue of chained dependencies is important, and I think Gump can be of
assistance. However, Gump only reflects the current state, and we need
access to the dependencies for other versions as well.
So work on the Meta info is a place we can share efforts. But it is a
goal for Depot to work, at least in a basic/default way WITHOUT any
separate meta info.
What do you see as the common ground for us to participate on ?
R,
Nick
Re: classloader was: Moving forward or letting go
Posted by Niclas Hedhman <ni...@hedhman.org>.
On Monday 21 June 2004 05:27, Nick Chalko wrote:
> Class loading is not a goal.
> IMHO it is orthogonal to Depot. Depot gets/manages a repository of
> artifacts.
> For classloading, once you have the right jar in a known place on the
> local file system, then something else can handle classloading.
>
> I am -1 for directly handling classloading.
Then please provide an answer to the question:
How do you intend to provide generic meta information to the Depot client, and
how is that meta information generated and handed to Depot prior to
publishing the artifacts?
And secondly: who should be responsible for defining the classloader concern,
expressed in generic meta information?
Since I suspect the answers to the above are "Not Depot's concern" and "Not
Depot", I am sorry to say that Depot's future is very bleak, and I doubt
it will receive any support from Avalon.
I hope the "not Depot's concern" stems from a lack of understanding of the
problem at hand, and that you will gain that insight sooner or later.
Cheers
Niclas
--
+------//-------------------+
/ http://www.bali.ac /
/ http://niclas.hedhman.org /
+------//-------------------+
Re: classloader was: Moving forward or letting go
Posted by "Adam R. B. Jack" <aj...@trysybase.com>.
> >Yes, you will need to define or use server-side meta-data. The immediate
> >effect after solving that is the Classloader (not classloading)
> >establishment, since the 'type' of classloader the dependency belongs to,
> >MUST be managed by the repository system, or you have missed a major
> >goal.
> Class loading is not a goal.
> IMHO it is orthogonal, to depot. Depot gets/manages a repository of
> artifacts.
> For classloading, once you have the right jar in a known place on the
> local file system, then something else can handle classloading.
>
> I am -1 for directly handling classloading.
Depot Update has nothing to do with Class Loading, but is there any reason
we couldn't allow a separate Depot project to do that (using Depot Update or
some core called Depot Download)?
I'm not advocating that Depot try to be all things to all people, but I
think that CL is an extension that folks will want. (If/when I get version
constraints off the ground, then maybe I could attempt to persuade this
project to work with it, but that is a future conversation.)
regards,
Adam