Posted to dev@geronimo.apache.org by Jeremy Boynes <jb...@apache.org> on 2004/11/07 06:51:42 UTC

online and offline deployment

As promised Thursday, here are the details of my concerns about mixing
offline and online deployment.

My concerns on this issue stem from how we package GBeans together for
use by the kernel. Rather than handling them one-by-one, Geronimo uses
the notion of a pre-canned Configuration which contains a number of
GBean instances and the classpath information needed to load them.
Configurations can be loaded by the kernel and when started bring all
the GBeans they contain online together.
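As a rough illustration of this packaging model, here is a minimal sketch; the class and field names are illustrative only and do not match the real Geronimo classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- names do not match the real Geronimo API.
// A Configuration bundles GBean instances with the classpath information
// needed to load them; starting it brings every GBean online together.
class GBean {
    final String name;
    boolean online = false;
    GBean(String name) { this.name = name; }
}

class Configuration {
    final String id;              // e.g. "org/apache/geronimo/Server"
    final List<String> classpath; // jars resolved against a Repository
    final List<GBean> gbeans = new ArrayList<>();

    Configuration(String id, List<String> classpath) {
        this.id = id;
        this.classpath = classpath;
    }

    // The kernel loads the Configuration; start() then brings all
    // contained GBeans online as a unit rather than one-by-one.
    void start() {
        for (GBean g : gbeans) g.online = true;
    }
}
```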

A key feature of Configurations is they are portable between different
Geronimo installations - specifically a Configuration can run in any
Geronimo kernel that can resolve its dependencies. This is less critical
for the single-server mode we have now but is very important as Geronimo
scales to clustered or grid configurations - it allows us to efficiently
move applications between the servers on demand.

This also has benefits where change management is important, such as
business critical installations. For example, a Configuration can be
built and signed in a test or integration environment and moved
*provably unchanged* through the test, stage and release to production
process. Alternatively, an OEM can release an application to channel as
a signed Configuration, end-users can have the assurance it has not been
tampered with, and the OEM can reduce costs by reducing problems caused
by variations in the end-user environment.
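The "provably unchanged" property boils down to verifying the same bits at every stage. In practice Geronimo would rely on jar signing; as a hedged stand-in, a plain SHA-256 digest shows the idea (the class name is made up for this sketch):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: a digest taken when the Configuration is built in
// the test environment can be re-checked at stage, release, production,
// or at the end-user site for an OEM-shipped Configuration.
class ConfigFingerprint {
    static String fingerprint(byte[] configurationBytes) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(configurationBytes))
                hex.append(String.format("%02x", b)); // Formatter widens Byte for %x
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }
}
```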

In the kernel, the process of loading and unloading Configurations is
handled by a ConfigurationManager that uses ConfigurationStores to store
them. The store exposes a simple API for installing and uninstalling
Configurations and for retrieving them so they can be loaded. We have a
simple LocalConfigStore implementation that uses the local filesystem to
store them; other implementations are possible using different
persistence approaches such as databases, LDAP or proprietary
configuration management systems.
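A minimal sketch of that store contract, with an in-memory stand-in for LocalConfigStore (the method names are illustrative, not the exact Geronimo interfaces):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the simple install/uninstall/retrieve API described
// above; names are illustrative only.
interface ConfigurationStore {
    void install(String configId, byte[] configuration);
    void uninstall(String configId);
    byte[] retrieve(String configId); // fetched so the kernel can load it
}

// Trivial in-memory stand-in for LocalConfigStore; a real implementation
// could persist to the filesystem, a database, LDAP, or a proprietary
// configuration management system.
class MemoryConfigStore implements ConfigurationStore {
    private final Map<String, byte[]> configs = new HashMap<>();
    public void install(String id, byte[] c) { configs.put(id, c); }
    public void uninstall(String id) { configs.remove(id); }
    public byte[] retrieve(String id) { return configs.get(id); }
}
```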


The deployment system in Geronimo is the interface between user-domain
artifacts such as J2EE modules (EARs, WARs, etc.) or deployment plans
and the configuration management system described above. It essentially
combines modules with plans and generates Configurations.

It comprises three parts:
* External interfaces such as the command line tool, console or JSR-88
   provider that get the modules and plans from the user
* ConfigurationBuilders such as EARConfigBuilder and
   ServiceConfigBuilder that do the combination and produce the target
   Configuration
* Back-end interfaces that store the Configuration either in a
   ConfigurationStore or as an output file

The ConfigurationBuilders are GBeans and run inside a Geronimo kernel.
Apart from ease of implementation, they also have access to the
resources provided by that system - for example, they can use the
Repository to load classes during processing, and they can use the
ConfigurationManager to load other Configurations that the target may be
dependent on.
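The three parts fit together roughly as follows; this is a sketch only, with ConfigurationBuilder standing in for EARConfigBuilder/ServiceConfigBuilder and all names invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Front end hands in module + plan; a builder combines them; the result
// goes to a back end (a ConfigurationStore here, or an output file).
interface ConfigurationBuilder {
    String build(String module, String plan);
}

class SimpleBuilder implements ConfigurationBuilder {
    public String build(String module, String plan) {
        // a real builder resolves classes and dependent Configurations here
        return "configuration(" + module + " + " + plan + ")";
    }
}

class DeploymentSystem {
    private final ConfigurationBuilder builder;
    private final Map<String, String> store = new HashMap<>();

    DeploymentSystem(ConfigurationBuilder builder) { this.builder = builder; }

    String deploy(String module, String plan) {
        String config = builder.build(module, plan);
        store.put(module, config); // back-end step: install the result
        return config;
    }
}
```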


To support online deployment, we run a deployment system inside the same
kernel as the J2EE server - it is actually part of the
org/apache/geronimo/Server Configuration, although work is in progress to
allow it to be run as a separate dependent configuration.

The JSR-88 provider interacts with this deployment system to fulfill the
spec requirements for distribute, start, stop, undeploy etc. For
example, during a distribute operation the module and plan are passed to
the deployment system, it uses an EARConfigBuilder to produce the output
Configuration, which it then installs in the target ConfigurationStore.
A JSR-88 start operation causes the Configuration to be loaded from the
store and then started.
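The distribute/start sequence can be sketched as below; this is a hedged illustration and the names do not mirror the javax.enterprise.deploy API exactly:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the JSR-88 flow described above, with invented names.
class OnlineDeployer {
    private final Map<String, String> configStore = new HashMap<>();
    private final Set<String> started = new HashSet<>();

    // distribute: build a Configuration from module + plan (the
    // EARConfigBuilder step), then install it in the target store
    String distribute(String module, String plan) {
        String configId = module.replace(".war", "");
        configStore.put(configId, "built from " + module + " and " + plan);
        return configId;
    }

    // start: load the Configuration from the store, then start it
    boolean start(String configId) {
        if (!configStore.containsKey(configId)) return false;
        return started.add(configId);
    }
}
```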


However, this leaves us with a chicken-and-egg problem. The online
deployment system above is itself part of a configuration - how do we
build that configuration?

To solve this, and because it seemed generally useful, we built a
standalone offline deployment system. Run from the command line, this
would take module + plan and produce a Configuration. To reuse as much 
of the configuration building infrastructure as possible, it boots an
embedded Geronimo kernel and loads a Configuration containing just the
deployment system. As a running kernel, it also provides access to a
Repository and ConfigurationStore that the ConfigurationBuilders can use
to resolve dependencies (including dependencies on other
Configurations). However, these are *its* Repository and
ConfigurationStores and *not* those from the target server.

To cheat our way around the chicken-and-egg problem we took the simple
but expedient solution of having the standalone deployer and the default
server use the same type and location of store and repository. Then, by
simply telling the standalone deployer to install a configuration into
its own store it would also be available to the default server
configuration. This is a hack, pure, simple and effective.

When we introduce any additional complexity, this hack starts to
break down. For example, if the user adds a
database-based ConfigurationStore to the server (for example, to make
GBean state persistence more reliable) then the standalone deployer
would not be able to install the generated Configuration into that store.


All things considered, I think having options in the standalone deployer
that rely on it sharing the same type and location of Repository and
ConfigurationStore will lead to obscure and surprising behaviour
as soon as we progress beyond the most basic default configuration. That
is why I voted at the start for "2 simple tools rather than one complex
one."  Going further as has been proposed and coupling the standalone
deployer to the internal implementation of the
PersistentConfigurationList seems like pouring gasoline on the fire.

I have been portrayed on this list as being alone in my opinion but I
will point out that in the initial vote Eric LeGoff, Aaron Mulder and
David Jencks also voted for 2 tools (as opposed to Peter Lynch, Davanum
Srinivas, Hiram Chirino and Bruce Snyder who voted for one); Dain
Sundstrom voted for one tool, but wanted another to support the
functionality we have to output Configurations as jars ("that is another
tool for another day" but we need it now to build the server) which
sounds like two tools to me.

After that vote, Aaron proposed and attained consensus for a single
tool. The syntax is simple enough and mirrors the JSR-88 API making it
ideal for online deployment (which is all JSR-88 supports).

However, during implementation Aaron ran into the issues described above
and on the thread from 11/4/04 when trying to support the offline mode
not covered by JSR-88. These are clearly technical issues which we need
to resolve. To facilitate that, Aaron proposed to commit his work so
that all could see and discuss; he and I were promptly and unjustifiably
flamed by some members of the community.

Since Thursday he has committed this code and I think we need to review 
where we are. My belief is that the online side is fully implemented, 
that the standalone deployer works as before (package option), and the 
big remaining issue is the one described above where someone is trying 
to "deploy" applications to an offline server.

In this message

http://nagoya.apache.org/eyebrowse/ReadMsg?listName=dev@geronimo.apache.org&msgNo=9696

I wrote that you could distribute to your heart's content; this was
wrong. The discussion with Aaron highlighted that the problem about
store type and location applies to distribute as well as the other
operations. It looks like the only thing you can reliably do offline is
package a Configuration for later use.

I would suggest, then, that rather than the --add option I proposed for
the server we instead have a --install option which boots the server,
restarts all previously running configs and installs the new one. The
offline usage would then be:

java -jar deployer.jar package foo.war foo-plan.xml foo.car
java -jar server.jar --install foo.car

This also provides a simple mechanism for deploying once and running 
everywhere: the output configuration can be installed in multiple places 
as easily as one.

The issue with this is that it fits the admin's view better than the
developer's. However, I continue to believe the:

   start server
   repeat
      write code
      build (with distribute/start to online server)
      test
   until app works or it's time to go home

cycle is what most developers use and that Aaron's changes (in
conjunction with the existing Maven plugin) have made it easy for them
to work that way. They are not really interested in fancy offline
deployment tricks.

To support carrier-grade configuration management, clustered and grid
environments and OEMs, I believe we need an effective way of generating
pre-packaged configurations. This requires Maven/Ant plugins
that can be used in the release process, tools like Aaron's that an
administrator can use from the command line, and mechanisms for
installing them in servers and for transporting them between servers.

I think we are very close to achieving this and if we can address these
last issues then Geronimo will be acceptable to both the developer
community and to serious IT decision makers.

--
Jeremy



Re: online and offline deployment

Posted by "Geir Magnusson Jr." <ge...@apache.org>.
On Nov 8, 2004, at 3:59 PM, Dain Sundstrom wrote:

> On Nov 8, 2004, at 5:29 PM, Geir Magnusson Jr wrote:
>
>>
>> On Nov 8, 2004, at 1:52 PM, Dain Sundstrom wrote:
>>
>>> Guys,
>>>
>>> After reading these two massive emails, I feel no closer to 
>>> understanding what Jeremy wants to change.  Jeremy, I think it would 
>>> help me if you could summarize what you propose we do differently 
>>> from what was voted on the week before last and clarify whether your 
>>> proposal would be in addition to the solution we voted to do or if 
>>> it would replace that solution.
>>>
>>
>> Maybe we could call you on the phone and explain?
>
> I am not sure how taking this offline is going to clarify Jeremy's 
> technical concerns regarding deployment for the rest of the community? 
>  I think that this discussion would benefit from a more structured 
> proposal whereby the pros and cons could be discussed openly.  It 
> seems that the technical discussion of his alternate proposal is 
> likely to degenerate if there is not a clear list of points that can 
> be referenced.

Dain, come on...  It was a *joke*.

geir

-- 
Geir Magnusson Jr                                  +1-203-665-6437
geirm@apache.org


Re: online and offline deployment

Posted by Dain Sundstrom <ds...@gluecode.com>.
On Nov 8, 2004, at 5:29 PM, Geir Magnusson Jr wrote:

>
> On Nov 8, 2004, at 1:52 PM, Dain Sundstrom wrote:
>
>> Guys,
>>
>> After reading these two massive emails, I feel no closer to 
>> understanding what Jeremy wants to change.  Jeremy, I think it would 
>> help me if you could summarize what you propose we do differently 
>> from what was voted on the week before last and clarify whether your 
>> proposal would be in addition to the solution we voted to do or if it 
>> would replace that solution.
>>
>
> Maybe we could call you on the phone and explain?

I am not sure how taking this offline is going to clarify Jeremy's 
technical concerns regarding deployment for the rest of the community?  
I think that this discussion would benefit from a more structured 
proposal whereby the pros and cons could be discussed openly.  It seems 
that the technical discussion of his alternate proposal is likely to 
degenerate if there is not a clear list of points that can be 
referenced.

-dain


Re: online and offline deployment

Posted by Geir Magnusson Jr <ge...@4quarters.com>.
On Nov 8, 2004, at 1:52 PM, Dain Sundstrom wrote:

> Guys,
>
> After reading these two massive emails, I feel no closer to  
> understanding what Jeremy wants to change.  Jeremy, I think it would  
> help me if you could summarize what you propose we do differently from  
> what was voted on the week before last and clarify whether your  
> proposal would be in addition to the solution we voted to do or if it  
> would replace that solution.
>

Maybe we could call you on the phone and explain?

geir

> Thanks,
>
> -dain
>
> --
> Dain Sundstrom
> Chief Architect
> Gluecode Software
> 310.536.8355, ext. 26
>
> On Nov 7, 2004, at 6:36 AM, Aaron Mulder wrote:
>
>> 	Just to reiterate, I think Jeremy is saying that using the
>> deployer tool for offline install is limited because it doesn't know  
>> what
>> GBeans the server is using for the ConfigStore and  
>> PersistentConfigList
>> and so on.  If we instead actually start the server to do an "offline"
>> deployment/installation, then all the correct GBeans will be running  
>> and
>> that is no longer an issue.
>>
>> 	An alternative would be for the deployer to inspect the server's
>> configuration when it starts, and load every dependency from the
>> immediate parent of the module to be deployed up through the "root",  
>> and
>> that should identify the correct ConfigStore and PersistentConfigList.
>> But this is tricky too, since how would it know what ConfigStore to  
>> load
>> the configurations out of (including the configuration for the
>> ConfigStore, aargh!).  In the end, I suspect this depends on how
>> server.jar was packaged, and if you plan to start your server with
>> start-my-server.jar instead of server.jar then I don't know how the
>> deployer would know that, so I don't know where it would get the  
>> original
>> ConfigStore reference from -- perhaps we'd need to give it an option  
>> to
>> identify your server startup JAR.  But I think this would still fail  
>> if
>> the server was running (since you'd probably clash for ports trying to
>> load some of the services between ConfigStore and application), so  
>> it's an
>> "offline" deploy in name only.
>>
>> 	Another option is that we can provide a tool that works 100% for
>> the default server configuration  
>> (LocalConfigStore+FileConfigurationList).
>> But it would not work in the face of customizations to the 2 core
>> components: if you swap out your LocalConfigStore, then the tool  
>> would not
>> work (it would install into the wrong place), and if you swap out your
>> PersistentConfigurationList then the tool would be unable to mark any
>> module to be started.  If we wanted to, we could make offline deploy  
>> tools
>> available for different combinations of those GBeans, or give you a
>> procedure to build a new deploy tool from an old one.
>>
>> 	The main reason I feel that this is important is that most other
>> products support it.  Generally if you copy a new EAR over an old one
>> while the server is not running, and then start the server, the new
>> version of the EAR will deploy on startup.  (Tomcat 5 is the one I can
>> think of that doesn't do this).  I just hate to tell people that  
>> things
>> that used to work won't work any more if they move to Geronimo.  On  
>> the
>> other hand, I think this behavior was mostly implemented via a hot  
>> deploy
>> directory, so if we provide a GBean for a hot deploy directory, then  
>> maybe
>> we don't need a offline deploy tool at all (beyond for building the
>> server).
>>
>> 	And I guess the last issue is related.  In the long run, it will
>> be nice/necessary to have some kind of packaged-configuration-handling
>> features, in the deploy tool or another tool:
>>  - extract a CAR file from an entry in a server's ConfigStore
>>  - sign a CAR file (either in the server's ConfigStore or as a file)
>>  - transfer a packaged configuration directly from one server to  
>> another
>>  - deploy a CAR file into a server
>>
>> Aaron
>>
>> On Sat, 6 Nov 2004, Jeremy Boynes wrote:
>>> [original message quoted above in full; snipped]
>
>
-- 
Geir Magnusson Jr                                  +1-203-665-6437
geir@gluecode.com


Re: online and offline deployment

Posted by David Jencks <dj...@gluecode.com>.
On Nov 9, 2004, at 9:27 AM, Jeremy Boynes wrote:

> Aaron Mulder wrote:
>> Jeremy,
>> 	What is your feeling on the "package --install" command?  Because
>> right now, that runs in offline mode, and assumes that it should 
>> install
>> into a file-based local configuration store.  Which means it's 
>> subject to
>> all the problems you've raised if the server is not using a file-based
>> local configuration store, etc.
>
> The original purpose of that (in the old offline deployer) was to 
> install the generated configuration in the /deployer's/ config store 
> so that it could be used to resolve dependencies for future 
> deployments. It was the happy co-incidence (ok, a bootstrap hack) that 
> the deployer and server were using the same store that allowed us to 
> pre-load configurations into it.
>
>
>> 	Of course we use this during server construction.  But it might be
>> best to start the server immediately after bootstrapping the 
>> deployer, and
>> do all the rest of the server construction tasks as online 
>> deployments.  (That would also have the pleasant side effect of 
>> making it all run
>> faster, because you wouldn't be starting and stopping a kernel for 
>> every
>> deployment.)
>
> I'm all for making it faster but I have reservations about starting 
> the server during the build process as that will trigger a lot of 
> initialization code (like creating a transaction log, a Derby 
> database, etc.) We could nuke it after building the configs but that 
> seems funky.
>
> One thing that was talked about a while ago (off-list probably :-) ) 
> was the concept of having a deployment server that could be used to 
> perform deployments. It would basically be the deployment config 
> running as a Daemon which a deployer could talk to; the server would 
> do the deployment and return the configuration it built to the 
> deployer. As the configurations were portable they could then be moved 
> around to the servers where they were intended to run.
>
> If we changed the bootstrap code to build such a deployment server, 
> then it could be used during the build process and we would not have to 
> create new JVMs all the time.
>
> It should be possible to boot such a server inside the Maven VM and 
> talk to it directly. In fact, with a couple of minor tweaks that might 
> even be possible with the current standalone deployment config.

I don't think this will work until all the deployment code is in 
different modules from the runtime code.  That's why I've been trying 
so hard to get it separated.

david jencks

>
>> Then the bootstrapping would be the only operation that
>> actually installed in offline mode, and we could remove the unsafe
>> "--install" option from the package command -- or actually make the
>> package command perform the installation in online mode, if we have a
>> deployment module that knows how to deploy a "CAR" file.  Or better 
>> yet,
>> have the package command run in reverse order -- first distribute the
>> configuration into the running server, and then dump a CAR file from 
>> the
>> configuration in the config store.
>
> Aside from building the initial deployer (the hard coded Bootstrap 
> class) I don't think we should do anything special to build the 
> default server configuration. We should be able to do it the same way 
> any other user would configure their own server.
>
> The challenge facing us and any other user is the same - pre-loading a 
> server's configuration store. The command line executable "CAR" can be 
> built by the standalone deployer as can all of its configurations. The 
> problem is how to get them into the new server's store(s).
>
> As I've said before, we currently hack this by making the deployer and 
> server use the same store but we really need to find a general 
> solution.
>
> I have a half-baked idea about a special type of bootstrap deployment 
> where the deployer would interact with a fledgling server to set up its 
> stores and repositories and then install the appropriate jars and 
> configurations. This would be driven by a bootstrap plan (probably in 
> XML) that told it what to do. I need to think a bit more and I'll send 
> a proposal to the list when it's a little more done.
>
> Hey, perhaps we could discuss this off-list at ApacheCon ;-)
>
> --
> Jeremy
>


Re: online and offline deployment

Posted by Jeremy Boynes <jb...@gluecode.com>.
Aaron Mulder wrote:
> Jeremy,
> 	What is your feeling on the "package --install" command?  Because
> right now, that runs in offline mode, and assumes that it should install
> into a file-based local configuration store.  Which means it's subject to
> all the problems you've raised if the server is not using a file-based
> local configuration store, etc.
> 

The original purpose of that (in the old offline deployer) was to 
install the generated configuration in the /deployer's/ config store so 
that it could be used to resolve dependencies for future deployments. 
It was the happy co-incidence (ok, a bootstrap hack) that the deployer 
and server were using the same store that allowed us to pre-load 
configurations into it.


> 	Of course we use this during server construction.  But it might be
> best to start the server immediately after bootstrapping the deployer, and
> do all the rest of the server construction tasks as online deployments.  
> (That would also have the pleasant side effect of making it all run
> faster, because you wouldn't be starting and stopping a kernel for every
> deployment.)  

I'm all for making it faster but I have reservations about starting the 
server during the build process, as that will trigger a lot of 
initialization code (like creating a transaction log, a Derby database, 
etc.). We could nuke it after building the configs but that seems funky.

One thing that was talked about a while ago (off-list probably :-) ) was 
the concept of having a deployment server that could be used to perform 
deployments. It would basically be the deployment config running as a 
Daemon which a deployer could talk to; the server would do the 
deployment and return the configuration it built to the deployer. As the 
configurations were portable they could then be moved around to the 
servers where they were intended to run.

If we changed the bootstrap code to build such a deployment server, then 
it could be used during the build process and we would not have to 
create new JVMs all the time.

It should be possible to boot such a server inside the Maven VM and talk 
to it directly. In fact, with a couple of minor tweaks that might even 
be possible with the current standalone deployment config.
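A rough sketch of that deployment-server contract, with hypothetical names (nothing here is Geronimo's real API): a client hands over module + plan, the daemon builds the Configuration and returns the packaged result.

```java
import java.io.File;

// Hypothetical sketch of the "deployment server" idea: a daemon the
// deployer talks to, which builds a Configuration from a module and
// plan and returns the packaged result. None of these names are real
// Geronimo API.
interface DeploymentService {
    File buildConfiguration(File module, File plan);
}

// Trivial stand-in showing the call flow; a real implementation would
// run the ConfigurationBuilders inside its own kernel.
class StubDeploymentService implements DeploymentService {
    public File buildConfiguration(File module, File plan) {
        // Derive the output CAR name from the module name.
        String name = module.getName().replaceAll("\\.[^.]+$", "") + ".car";
        return new File(module.getParentFile(), name);
    }
}
```

Because the returned Configuration is portable, the client could then install the same CAR on any number of target servers.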

> Then the bootstrapping would be the only operation that
> actually installed in offline mode, and we could remove the unsafe
> "--install" option from the package command -- or actually make the
> package command perform the installation in online mode, if we have a
> deployment module that knows how to deploy a "CAR" file.  Or better yet,
> have the package command run in reverse order -- first distribute the
> configuration into the running server, and then dump a CAR file from the
> configuration in the config store.
> 

Aside from building the initial deployer (the hard-coded Bootstrap 
class) I don't think we should do anything special to build the default 
server configuration. We should be able to do it the same way any other 
user would configure their own server.

The challenge facing us and any other user is the same - pre-loading a 
server's configuration store. The command line executable "CAR" can be 
built by the standalone deployer as can all of its configurations. The 
problem is how to get them into the new server's store(s).

As I've said before, we currently hack this by making the deployer and 
server use the same store but we really need to find a general solution.

I have a half-baked idea about a special type of bootstrap deployment 
where the deployer would interact with a fledgling server to set up its 
stores and repositories and then install the appropriate jars and 
configurations. This would be driven by a bootstrap plan (probably in 
XML) that told it what to do. I need to think a bit more and I'll send 
in a proposal to the list when it's a little more done.
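Purely as illustration, such a bootstrap plan might look something like this; every element name here is invented, since no such schema exists:

```xml
<!-- Hypothetical bootstrap plan: all element names are invented -->
<bootstrap-plan>
  <!-- set up the fledgling server's store and repository -->
  <create-config-store type="LocalConfigStore" path="config-store"/>
  <create-repository path="repository"/>
  <!-- then install the jars and configurations it needs -->
  <install-jar>repository/geronimo/jars/geronimo-kernel.jar</install-jar>
  <install-configuration>org/apache/geronimo/Server</install-configuration>
</bootstrap-plan>
```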

Hey, perhaps we could discuss this off-list at ApacheCon ;-)

--
Jeremy

Re: online and offline deployment

Posted by Aaron Mulder <am...@alumni.princeton.edu>.
Jeremy,
	What is your feeling on the "package --install" command?  Because
right now, that runs in offline mode, and assumes that it should install
into a file-based local configuration store.  Which means it's subject to
all the problems you've raised if the server is not using a file-based
local configuration store, etc.

	Of course we use this during server construction.  But it might be
best to start the server immediately after bootstrapping the deployer, and
do all the rest of the server construction tasks as online deployments.  
(That would also have the pleasant side effect of making it all run
faster, because you wouldn't be starting and stopping a kernel for every
deployment.)  Then the bootstrapping would be the only operation that
actually installed in offline mode, and we could remove the unsafe
"--install" option from the package command -- or actually make the
package command perform the installation in online mode, if we have a
deployment module that knows how to deploy a "CAR" file.  Or better yet,
have the package command run in reverse order -- first distribute the
configuration into the running server, and then dump a CAR file from the
configuration in the config store.

	I think someone raised some of these possibilities before, but I
don't remember who.

Aaron

On Mon, 8 Nov 2004, Jeremy Boynes wrote:
> I argue that JSR-88 features such as start, stop, ... that are intended 
> for online use do not work in offline mode and trying to implement them 
> is resulting in an incomplete solution that will be problematic in all 
> but the degenerate case. The plan to hack start based on a specific 
> implementation of ConfigurationStore is indicative that there is 
> something fundamentally wrong here.
> 
> Aaron tried to implement the feature as planned, ran into a problem, and 
> asked on the list if anyone minded if he hacked around it. I did, for 
> reasons that have been explained.
> 
> Can you take another look at the mails and see if you can come up 
> with a proposal that works? I would hate to waste another day writing 
> another mail that you do not understand.
> 
> And despite the on-list conversations that make it pretty clear I think 
> there are technical problems, to keep you happy here is a formal -1 veto 
> on any implementation intended for general use that couples the 
> standalone deployer to the implementation of the target server's config 
> store or which requires the deployer to boot the target server to do 
> offline deployment. I have already +1'd Aaron's existing changes.
> 
> --
> Jeremy
> 

Re: online and offline deployment

Posted by Jeremy Boynes <jb...@apache.org>.
Dain Sundstrom wrote:
> Guys,
> 
> After reading these two massive emails, I feel no closer to  
> understanding what Jeremy wants to change.  Jeremy, I think it would  
> help me if you could summarize what you propose we do differently from  
> what was voted on the week before last and clarify whether your  
> proposal would be in addition to the solution we voted to do or if it  
> would replace that solution.
> 

I argue that JSR-88 features such as start, stop, ... that are intended 
for online use do not work in offline mode and trying to implement them 
is resulting in an incomplete solution that will be problematic in all 
but the degenerate case. The plan to hack start based on a specific 
implementation of ConfigurationStore is indicative that there is 
something fundamentally wrong here.

Aaron tried to implement the feature as planned, ran into a problem, and 
asked on the list if anyone minded if he hacked around it. I did, for 
reasons that have been explained.

Can you take another look at the mails and see if you can come up 
with a proposal that works? I would hate to waste another day writing 
another mail that you do not understand.

And despite the on-list conversations that make it pretty clear I think 
there are technical problems, to keep you happy here is a formal -1 veto 
on any implementation intended for general use that couples the 
standalone deployer to the implementation of the target server's config 
store or which requires the deployer to boot the target server to do 
offline deployment. I have already +1'd Aaron's existing changes.

--
Jeremy

Re: online and offline deployment

Posted by Dain Sundstrom <ds...@gluecode.com>.
Guys,

After reading these two massive emails, I feel no closer to  
understanding what Jeremy wants to change.  Jeremy, I think it would  
help me if you could summarize what you propose we do differently from  
what was voted on the week before last and clarify whether your  
proposal would be in addition to the solution we voted to do or if it  
would replace that solution.

Thanks,

-dain

--
Dain Sundstrom
Chief Architect
Gluecode Software
310.536.8355, ext. 26

On Nov 7, 2004, at 6:36 AM, Aaron Mulder wrote:

> 	Just to reiterate, I think Jeremy is saying that using the
> deployer tool for offline install is limited because it doesn't know  
> what
> GBeans the server is using for the ConfigStore and PersistentConfigList
> and so on.  If we instead actually start the server to do an "offline"
> deployment/installation, then all the correct GBeans will be running and
> that is no longer an issue.
>
> 	An alternative would be for the deployer to inspect the server's
> configuration when it starts, and load every dependency from the
> immediate parent of the module to be deployed up through the "root",  
> and
> that should identify the correct ConfigStore and PersistentConfigList.
> But this is tricky too, since how would it know what ConfigStore to  
> load
> the configurations out of (including the configuration for the
> ConfigStore, aargh!).  In the end, I suspect this depends on how
> server.jar was packaged, and if you plan to start your server with
> start-my-server.jar instead of server.jar then I don't know how the
> deployer would know that, so I don't know where it would get the  
> original
> ConfigStore reference from -- perhaps we'd need to give it an option to
> identify your server startup JAR.  But I think this would still fail if
> the server was running (since you'd probably clash for ports trying to
> load some of the services between ConfigStore and application), so  
> it's an
> "offline" deploy in name only.
>
> 	Another option is that we can provide a tool that works 100% for
> the default server configuration  
> (LocalConfigStore+FileConfigurationList).
> But it would not work in the face of customizations to the 2 core
> components: if you swap out your LocalConfigStore, then the tool would  
> not
> work (it would install into the wrong place), and if you swap out your
> PersistentConfigurationList then the tool would be unable to mark any
> module to be started.  If we wanted to, we could make offline deploy  
> tools
> available for different combinations of those GBeans, or give you a
> procedure to build a new deploy tool from an old one.
>
> 	The main reason I feel that this is important is that most other
> products support it.  Generally if you copy a new EAR over an old one
> while the server is not running, and then start the server, the new
> version of the EAR will deploy on startup.  (Tomcat 5 is the one I can
> think of that doesn't do this).  I just hate to tell people that things
> that used to work won't work any more if they move to Geronimo.  On the
> other hand, I think this behavior was mostly implemented via a hot  
> deploy
> directory, so if we provide a GBean for a hot deploy directory, then  
> maybe
> we don't need an offline deploy tool at all (beyond for building the
> server).
>
> 	And I guess the last issue is related.  In the long run, it will
> be nice/necessary to have some kind of packaged-configuration-handling
> features, in the deploy tool or another tool:
>  - extract a CAR file from an entry in a server's ConfigStore
>  - sign a CAR file (either in the server's ConfigStore or as a file)
>  - transfer a packaged configuration directly from one server to  
> another
>  - deploy a CAR file into a server
>
> Aaron
>
> On Sat, 6 Nov 2004, Jeremy Boynes wrote:
>> As promised Thursday, here are the details of my concerns about mixing
>> offline and online deployment.
>>
>> My concerns on this issue stem from how we package GBeans together for
>> use by the kernel. Rather than handling them one-by-one, Geronimo uses
>> the notion of a pre-canned Configuration which contains a number of
>> GBean instances and the classpath information needed to load them.
>> Configurations can be loaded by the kernel and when started bring all
>> the GBeans they contain online together.
>>
>> A key feature of Configurations is they are portable between different
>> Geronimo installations - specifically a Configuration can run in any
>> Geronimo kernel that can resolve its dependencies. This is less  
>> critical
>> for the single-server mode we have now but is very important as  
>> Geronimo
>> scales to clustered or grid configurations - it allows us to  
>> efficiently
>> move applications between the servers on demand.
>>
>> This also has benefits where change management is important, such as
>> business critical installations. For example, a Configuration can be
>> built and signed in a test or integration environment and moved
>> *provably unchanged* through the test, stage and release to production
>> process. Alternatively, an OEM can release an application to channel  
>> as
>> a signed Configuration, end-users can have the assurance it has not  
>> been
>> tampered with, and the OEM can reduce costs by reducing problems  
>> caused
>> by variations in the end-user environment.
>>
>> In the kernel, the process of loading and unloading Configuration is
>> handled by a ConfigurationManager that uses ConfigurationStores to  
>> store
>> them. The store exposes a simple API for installing and uninstalling
>> Configurations and for retrieving them so they can be loaded. We have  
>> a
>> simple LocalConfigStore implementation that uses the local filesystem  
>> to
>> store them; other implementations are possible using different
>> persistence approaches such as databases, LDAP or proprietary
>> configuration management systems.
>>
>>
>> The deployment system in Geronimo is the interface between user-domain
>> artifacts such as J2EE modules (EARs, WARs, etc.) or deployment plans
>> and the configuration management system described above. It  
>> essentially
>> combines modules with plans and generates Configurations.
>>
>> It comprises three parts:
>> * External interfaces such as the command line tool, console or JSR-88
>>    provider that get the modules and plans from the user
>> * ConfigurationBuilders such as EARConfigBuilder and
>>    ServiceConfigBuilder that do the combination and produce the target
>>    Configuration
>> * Back-end interfaces that store the Configuration either in a
>>    ConfigurationStore or as an output file
>>
>> The ConfigurationBuilders are GBeans and run inside a Geronimo kernel.
>> Apart from ease of implementation, they also have access to the
>> resources provided by that system - for example, they can use the
>> Repository to load classes during processing, and they can use the
>> ConfigurationManager to load other Configurations that the target may  
>> be
>> dependent on.
>>
>>
>> To support online deployment, we run a deployment system inside the  
>> same
>> kernel as the J2EE server - it is actually part of the
>> org/apache/geronimo/Server Configuration although work is in progress
>> to allow it to be run as a separate dependent configuration.
>>
>> The JSR-88 provider interacts with this deployment system to fulfill  
>> the
>> spec requirements for distribute, start, stop, undeploy etc. For
>> example, during a distribute operation the module and plan are passed  
>> to
>> the deployment system, it uses an EARConfigBuilder to produce the  
>> output
>> Configuration, which it then installs in the target  
>> ConfigurationStore.
>> A JSR-88 start operation causes the Configuration to be loaded from  
>> the
>> store and then started.
>>
>>
>> However, this leaves us with a chicken-and-egg problem. The online
>> deployment system above is itself part of a configuration - how do we
>> build that configuration?
>>
>> To solve this, and because it seemed generally useful, we built a
>> standalone offline deployment system. Run from the command line, this
>> would take module + plan and produce a Configuration. To reuse as much
>> of the configuration building infrastructure as possible, it boots an
>> embedded Geronimo kernel and loads a Configuration containing just the
>> deployment system. As a running kernel, it also provides access to a
>> Repository and ConfigurationStore that the ConfigurationBuilders can  
>> use
>> to resolve dependencies (including dependencies on other
>> Configurations). However, these are *its* Repository and
>> ConfigurationStores and *not* those from the target server.
>>
>> To cheat our way around the chicken-and-egg problem we took the simple
>> but expedient solution of having the standalone deployer and the  
>> default
>> server use the same type and location of store and repository. Then,  
>> by
>> simply telling the standalone deployer to install a configuration into
>> its own store it would also be available to the default server
>> configuration. This is a hack, pure, simple and effective.
>>
>> When we introduce any additional complexity into the situation, then
>> this hack starts to break down. For example, if the user adds a
>> database-based ConfigurationStore to the server (for example, to make
>> GBean state persistence more reliable) then the standalone deployer
>> would not be able to install the generated Configuration into that  
>> store.
>>
>>
>> All things considered, I think having options in the standalone  
>> deployer
>> that rely on it sharing the same type and location of Repository and
>> ConfigurationStore will lead to obscure and strange behaviour
>> as soon as we progress beyond the most basic default configuration.  
>> That
>> is why I voted at the start for "2 simple tools rather than one  
>> complex
>> one."  Going further as has been proposed and coupling the standalone
>> deployer to the internal implementation of the
>> PersistentConfigurationList seems like pouring gasoline on the fire.
>>
>> I have been portrayed on this list as being alone in my opinion but I
>> will point out that in the initial vote Eric LeGoff, Aaron Mulder and
>> David Jencks also voted for 2 tools (as opposed to Peter Lynch,  
>> Davanum
>> Srinivas, Hiram Chirino and Bruce Snyder who voted for one); Dain
>> Sundstrom voted for one tool, but wanted another to support the
>> functionality we have to output Configurations as jars ("that is  
>> another
>> tool for another day" but we need it now to build the server) which
>> sounds like two tools to me.
>>
>> After that vote, Aaron proposed and attained consensus for a single
>> tool. The syntax is simple enough and mirrors the JSR-88 API making it
>> ideal for online deployment (which is all JSR-88 supports).
>>
>> However, during implementation Aaron ran into the issues described  
>> above
>> and on the thread from 11/4/04 when trying to support the offline mode
>> not covered by JSR-88. These are clearly technical issues which we  
>> need
>> to resolve. To facilitate that, Aaron proposed to commit his work so
>> that all could see and discuss; he and I were promptly and  
>> unjustifiably
>>   flamed by some members of the community.
>>
>> Since Thursday he has committed this code and I think we need to  
>> review
>> where we are. My belief is that the online side is fully implemented,
>> that the standalone deployer works as before (package option), and the
>> big remaining issue is the one described above where someone is trying
>> to "deploy" applications to an offline server.
>>
>> In this message
>>
>> http://nagoya.apache.org/eyebrowse/ReadMsg?listName=dev@geronimo.apache.org&msgNo=9696
>>
>> I wrote that you could distribute to your heart's content; this was
>> wrong. The discussion with Aaron highlighted that the problem about
>> store type and location applies to distribute as well as the other
>> operations. It looks like the only thing you can reliably do offline  
>> is
>> package a Configuration for later use.
>>
>> I would suggest then, rather than the --add option I proposed for the
>> server we instead have a --install option which boots the server,
>> restarts all previously running configs and installs the new one. The
>> offline usage would then be:
>>
>> java -jar deployer.jar package foo.war foo-plan.xml foo.car
>> java -jar server.jar --install foo.car
>>
>> This also provides a simple mechanism for deploying once and running
>> everywhere: the output configuration can be installed in multiple  
>> places
>> as easily as one.
>>
>> The issue with this is that it fits the admin's view better than the
>> developer's. However, I continue to believe the:
>>
>>    start server
>>    repeat
>>       write code
>>       build (with distribute/start to online server)
>>       test
>>    until app works or it's time to go home
>>
>> cycle is what most developers use and that Aaron's changes (in
>> conjunction with the existing Maven plugin) have made it easy for them
>> to work that way. They are not really interested in fancy offline
>> deployment tricks.
>>
>> To support carrier-grade configuration management, clustered and grid
>> environments and OEMs, I believe we need an effective way of
>> generating pre-packaged configurations. This requires Maven/Ant
>> plugins that can be used in the release process, tools like Aaron's
>> that an administrator can use from the command line, and mechanisms
>> for installing them in servers and for transporting them between
>> servers.
>>
>> I think we are very close to achieving this and if we can address  
>> these
>> last issues then Geronimo will be acceptable to both the developer
>> community and to serious IT decision makers.
>>
>> --
>> Jeremy
>>
>>
>>


Re: online and offline deployment

Posted by Bruce Snyder <fe...@frii.com>.
Aaron Mulder wrote:

> 	Can you state your current opinion and reasoning?  I think that's
> more valuable than a vote, at this stage.  I know I'd like to have a
> broader discussion to try to agree on the best solution, or at least the
> best specific options to propose for a vote.  For example, I think we've
> kind of gone past the point where "1 tool" or "2 tools" is a useful vote
> -- if two, it's a question of which two and what our goals should be for
> each.  If one, again, what do we attempt to support in that one?

I thought that the dilemma for the vote was regarding the simplification 
of the deployment tool for the code's sake rather than for the user's 
sake (i.e. one tool was difficult to code whereas two tools were 
difficult for the user). I see now that there is a need for two tools, 
each of which 
serves a very specific purpose. I now think that much of the dilemma 
surrounds the term 'deploy'.

I cast my vote in favor of a single tool simply from a user's point of 
view, all the while thinking that the strategy pattern would easily sort 
out the coding dilemma.

Bruce
-- 
perl -e 'print 
unpack("u30","<0G)U8V4\@4VYY9&5R\"F9E<G)E=\$\!F<FEI+F-O;0\`\`");'

The Castor Project
http://www.castor.org/

Apache Geronimo
http://geronimo.apache.org/

Re: online and offline deployment

Posted by Aaron Mulder <am...@alumni.princeton.edu>.
Bruce,
	Can you state your current opinion and reasoning?  I think that's
more valuable than a vote, at this stage.  I know I'd like to have a
broader discussion to try to agree on the best solution, or at least the
best specific options to propose for a vote.  For example, I think we've
kind of gone past the point where "1 tool" or "2 tools" is a useful vote
-- if two, it's a question of which two and what our goals should be for
each.  If one, again, what do we attempt to support in that one?

Thanks,
	Aaron

On Sun, 7 Nov 2004, Bruce Snyder wrote:
> I spent part of last night and part of this morning re-reading all of 
> this info to better understand the dilemma over the deploy tool. After 
> reading these two messages a couple times I feel like I understand the 
> issues at hand far better than when I cast my vote.
> 
> I'm sure that others will find themselves with the same quandary I 
> currently have, whereby upon further education of the issues surrounding 
> the deployment tool, I wish I could recast my vote. If others do, in 
> fact, have the same sentiment, then I propose that the deploy tool vote 
> be recalled and we start a fresh vote on the same topic, or, I just 
> change my vote. Does anyone else feel this way?
> 
> Bruce
> -- 
> perl -e 'print 
> unpack("u30","<0G)U8V4\@4VYY9&5R\"F9E<G)E=\$\!F<FEI+F-O;0\`\`");'
> 
> The Castor Project
> http://www.castor.org/
> 
> Apache Geronimo
> http://geronimo.apache.org/
> 

Re: online and offline deployment

Posted by Bruce Snyder <fe...@frii.com>.
Aaron Mulder wrote:

> 	Just to reiterate, I think Jeremy is saying that using the 
> deployer tool for offline install is limited because it doesn't know what 
> GBeans the server is using for the ConfigStore and PersistentConfigList 
> and so on.  If we instead actually start the server to do an "offline" 
> deployment/installation, then all the correct GBeans will be running and 
> that is no longer an issue.
> 
> 	An alternative would be for the deployer to inspect the server's 
> configuration when it starts, and load every dependency from the 
> immediate parent of the module to be deployed up through the "root", and 
> that should identify the correct ConfigStore and PersistentConfigList.
> But this is tricky too, since how would it know what ConfigStore to load 
> the configurations out of (including the configuration for the 
> ConfigStore, aargh!).  In the end, I suspect this depends on how 
> server.jar was packaged, and if you plan to start your server with 
> start-my-server.jar instead of server.jar then I don't know how the 
> deployer would know that, so I don't know where it would get the original 
> ConfigStore reference from -- perhaps we'd need to give it an option to 
> identify your server startup JAR.  But I think this would still fail if 
> the server was running (since you'd probably clash for ports trying to 
> load some of the services between ConfigStore and application), so it's an 
> "offline" deploy in name only.
> 
> 	Another option is that we can provide a tool that works 100% for
> the default server configuration (LocalConfigStore+FileConfigurationList).  
> But it would not work in the face of customizations to the 2 core
> components: if you swap out your LocalConfigStore, then the tool would not
> work (it would install into the wrong place), and if you swap out your
> PersistentConfigurationList then the tool would be unable to mark any
> module to be started.  If we wanted to, we could make offline deploy tools 
> available for different combinations of those GBeans, or give you a 
> procedure to build a new deploy tool from an old one.
> 
> 	The main reason I feel that this is important is that most other
> products support it.  Generally if you copy a new EAR over an old one
> while the server is not running, and then start the server, the new
> version of the EAR will deploy on startup.  (Tomcat 5 is the one I can
> think of that doesn't do this).  I just hate to tell people that things 
> that used to work won't work any more if they move to Geronimo.  On the 
> other hand, I think this behavior was mostly implemented via a hot deploy 
> directory, so if we provide a GBean for a hot deploy directory, then maybe 
> we don't need an offline deploy tool at all (beyond for building the 
> server).
> 
> 	And I guess the last issue is related.  In the long run, it will
> be nice/necessary to have some kind of packaged-configuration-handling
> features, in the deploy tool or another tool:
>  - extract a CAR file from an entry in a server's ConfigStore
>  - sign a CAR file (either in the server's ConfigStore or as a file)
>  - transfer a packaged configuration directly from one server to another
>  - deploy a CAR file into a server
> 
> Aaron
> 
> On Sat, 6 Nov 2004, Jeremy Boynes wrote:
> 
>>As promised Thursday, here are the details of my concerns about mixing
>>offline and online deployment.
>>
>>My concerns on this issue stem from how we package GBeans together for
>>use by the kernel. Rather than handling them one-by-one, Geronimo uses
>>the notion of a pre-canned Configuration which contains a number of
>>GBean instances and the classpath information needed to load them.
>>Configurations can be loaded by the kernel and when started bring all
>>the GBeans they contain online together.
>>
>>A key feature of Configurations is they are portable between different
>>Geronimo installations - specifically a Configuration can run in any
>>Geronimo kernel that can resolve its dependencies. This is less critical
>>for the single-server mode we have now but is very important as Geronimo
>>scales to clustered or grid configurations - it allows us to efficiently
>>move applications between the servers on demand.
>>
>>This also has benefits where change management is important, such as
>>business critical installations. For example, a Configuration can be
>>built and signed in a test or integration environment and moved
>>*provably unchanged* through the test, stage and release to production
>>process. Alternatively, an OEM can release an application to channel as
>>a signed Configuration, end-users can have the assurance it has not been
>>tampered with, and the OEM can reduce costs by reducing problems caused
>>by variations in the end-user environment.
>>
>>In the kernel, the process of loading and unloading Configuration is
>>handled by a ConfigurationManager that uses ConfigurationStores to store
>>them. The store exposes a simple API for installing and uninstalling
>>Configurations and for retrieving them so they can be loaded. We have a
>>simple LocalConfigStore implementation that uses the local filesystem to
>>store them; other implementations are possible using different
>>persistence approaches such as databases, LDAP or proprietary
>>configuration management systems.
>>
>>
>>The deployment system in Geronimo is the interface between user-domain
>>artifacts such as J2EE modules (EARs, WARs, etc.) or deployment plans
>>and the configuration management system described above. It essentially
>>combines modules with plans and generates Configurations.
>>
>>It comprises three parts:
>>* External interfaces such as the command line tool, console or JSR-88
>>   provider that get the modules and plans from the user
>>* ConfigurationBuilders such as EARConfigBuilder and
>>   ServiceConfigBuilder that do the combination and produce the target
>>   Configuration
>>* Back-end interfaces that store the Configuration either in a
>>   ConfigurationStore or as an output file
>>
>>The ConfigurationBuilders are GBeans and run inside a Geronimo kernel.
>>Apart from ease of implementation, they also have access to the
>>resources provided by that system - for example, they can use the
>>Repository to load classes during processing, and they can use the
>>ConfigurationManager to load other Configurations that the target may be
>>dependent on.
>>
>>
>>To support online deployment, we run a deployment system inside the same
>>kernel as the J2EE server - it is actually part of the
>>org/apache/geronimo/Server Configuration although work is in progress
>>to allow it to be run as a separate dependent configuration.
>>
>>The JSR-88 provider interacts with this deployment system to fulfill the
>>spec requirements for distribute, start, stop, undeploy etc. For
>>example, during a distribute operation the module and plan are passed to
>>the deployment system, it uses an EARConfigBuilder to produce the output
>>Configuration, which it then installs in the target ConfigurationStore.
>>A JSR-88 start operation causes the Configuration to be loaded from the
>>store and then started.
>>
>>
>>However, this leaves us with a chicken-and-egg problem. The online
>>deployment system above is itself part of a configuration - how do we
>>build that configuration?
>>
>>To solve this, and because it seemed generally useful, we built a
>>standalone offline deployment system. Run from the command line, this
>>would take module + plan and produce a Configuration. To reuse as much 
>>of the configuration building infrastructure as possible, it boots an
>>embedded Geronimo kernel and loads a Configuration containing just the
>>deployment system. As a running kernel, it also provides access to a
>>Repository and ConfigurationStore that the ConfigurationBuilders can use
>>to resolve dependencies (including dependencies on other
>>Configurations). However, these are *its* Repository and
>>ConfigurationStores and *not* those from the target server.
>>
>>To cheat our way around the chicken-and-egg problem we took the simple
>>but expedient solution of having the standalone deployer and the default
>>server use the same type and location of store and repository. Then, by
>>simply telling the standalone deployer to install a configuration into
>>its own store it would also be available to the default server
>>configuration. This is a hack, pure, simple and effective.
>>
>>When we introduce any additional complexity into the situation, then
>>this hack starts to break down. For example, if the user adds a
>>database-based ConfigurationStore to the server (for example, to make
>>GBean state persistence more reliable) then the standalone deployer
>>would not be able to install the generated Configuration into that store.
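A toy model of the mismatch, with both stores reduced to in-memory maps (all names here are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the shared-store hack; every name here is invented.
class Store {
    private final Map<String, byte[]> entries = new HashMap<>();
    void install(String id, byte[] car) { entries.put(id, car); }
    boolean contains(String id) { return entries.containsKey(id); }
}

class OfflineDeployer {
    final Store ownStore = new Store();  // the deployer's *own* store
    void deploy(String id) { ownStore.install(id, new byte[0]); }
}
```

The hack works only while the deployer's store and the server's store happen to be the same store; move the server onto a database-backed store and anything the offline deployer installs never reaches it.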
>>
>>
>>All things considered, I think having options in the standalone deployer
>>that rely on it sharing the same type and location of Repository and
>>ConfigurationStore will lead to obscure and strange behaviour
>>as soon as we progress beyond the most basic default configuration. That
>>is why I voted at the start for "2 simple tools rather than one complex
>>one."  Going further as has been proposed and coupling the standalone
>>deployer to the internal implementation of the
>>PersistentConfigurationList seems like pouring gasoline on the fire.
>>
>>I have been portrayed on this list as being alone in my opinion but I
>>will point out that in the initial vote Eric LeGoff, Aaron Mulder and
>>David Jencks also voted for 2 tools (as opposed to Peter Lynch, Davanum
>>Srinivas, Hiram Chirino and Bruce Snyder who voted for one); Dain
>>Sundstrom voted for one tool, but wanted another to support the
>>functionality we have to output Configurations as jars ("that is another
>>tool for another day" but we need it now to build the server) which
>>sounds like two tools to me.
>>
>>After that vote, Aaron proposed and attained consensus for a single
>>tool. The syntax is simple enough and mirrors the JSR-88 API making it
>>ideal for online deployment (which is all JSR-88 supports).
>>
>>However, during implementation Aaron ran into the issues described above
>>and on the thread from 11/4/04 when trying to support the offline mode
>>not covered by JSR-88. These are clearly technical issues which we need
>>to resolve. To facilitate that, Aaron proposed to commit his work so
>>that all could see and discuss; he and I were promptly and unjustifiably
>>flamed by some members of the community.
>>
>>Since Thursday he has committed this code and I think we need to review 
>>where we are. My belief is that the online side is fully implemented, 
>>that the standalone deployer works as before (package option), and the 
>>big remaining issue is the one described above where someone is trying 
>>to "deploy" applications to an offline server.
>>
>>In this message
>>
>>http://nagoya.apache.org/eyebrowse/ReadMsg?listName=dev@geronimo.apache.org&msgNo=9696
>>
>>I wrote that you could distribute to your heart's content; this was
>>wrong. The discussion with Aaron highlighted that the problem about
>>store type and location applies to distribute as well as the other
>>operations. It looks like the only thing you can reliably do offline is
>>package a Configuration for later use.
>>
>>I would suggest then, rather than the --add option I proposed for the
>>server, we instead have a --install option which boots the server,
>>restarts all previously running configs and installs the new one. The
>>offline usage would then be:
>>
>>java -jar deployer.jar package foo.war foo-plan.xml foo.car
>>java -jar server.jar --install foo.car
>>
>>This also provides a simple mechanism for deploying once and running 
>>everywhere: the output configuration can be installed in multiple places 
>>as easily as one.
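What --install might look like, assuming a hypothetical Server class (no such option exists yet): boot the server's own GBeans first, then install through whatever store that server is actually configured with:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed --install option; no such option exists yet.
class Server {
    private final List<String> store = new ArrayList<>(); // whatever store this server is configured with
    private boolean booted;

    void boot() { booted = true; }  // would also restart previously running configs

    void install(String car) {
        if (!booted) throw new IllegalStateException("server must be booted first");
        store.add(car);  // lands in the *server's* store, whatever its type
    }

    List<String> installed() { return store; }
}
```

Because the install goes through the server's own running GBeans, it works no matter which ConfigurationStore implementation the server uses.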
>>
>>The issue with this is that it fits the admin's view better than the
>>developer's. However, I continue to believe the:
>>
>>   start server
>>   repeat
>>      write code
>>      build (with distribute/start to online server)
>>      test
>>   until app works or it's time to go home
>>
>>cycle is what most developers use and that Aaron's changes (in
>>conjunction with the existing Maven plugin) have made it easy for them
>>to work that way. They are not really interested in fancy offline
>>deployment tricks.
>>
>>To support carrier-grade configuration management, clustered and grid
>>environments and OEMs, I believe we need an effective way of generating
>>pre-packaged configurations. This requires Maven/Ant plugins
>>that can be used in the release process, tools like Aaron's that an
>>administrator can use from the command line, and mechanisms for
>>installing them in and for transporting them between servers.
>>
>>I think we are very close to achieving this and if we can address these
>>last issues then Geronimo will be acceptable to both the developer
>>community and to serious IT decision makers.

I spent part of last night and part of this morning re-reading all of 
this info to better understand the dilemma over the deploy tool. After 
reading these two messages a couple times I feel like I understand the 
issues at hand far better than when I cast my vote.

I'm sure that others will find themselves in the same quandary I 
currently have: upon further study of the issues surrounding 
the deployment tool, I wish I could recast my vote. If others do, in 
fact, have the same sentiment, then I propose that the deploy tool vote 
be recalled and we start a fresh vote on the same topic, or that I just 
change my vote. Does anyone else feel this way?

Bruce
-- 
perl -e 'print 
unpack("u30","<0G)U8V4\\@4VYY9&5R\\"F9E<G)E=\\$\\!F<FEI+F-O;0\\`\\`");'

The Castor Project
http://www.castor.org/

Apache Geronimo
http://geronimo.apache.org/

Re: online and offline deployment

Posted by Aaron Mulder <am...@alumni.princeton.edu>.
	Just to reiterate, I think Jeremy is saying that using the 
deployer tool for offline install is limited because it doesn't know what 
GBeans the server is using for the ConfigStore and PersistentConfigList 
and so on.  If we instead actually start the server to do an "offline" 
deployment/installation, then all the correct GBeans will be running and 
that is no longer an issue.

	An alternative would be for the deployer to inspect the server's 
configuration when it starts, and load every dependency from the 
immediate parent of the module to be deployed up through the "root", and 
that should identify the correct ConfigStore and PersistentConfigList.
But this is tricky too, since how would it know what ConfigStore to load 
the configurations out of (including the configuration for the 
ConfigStore, aargh!).  In the end, I suspect this depends on how 
server.jar was packaged, and if you plan to start your server with 
start-my-server.jar instead of server.jar then I don't know how the 
deployer would know that, so I don't know where it would get the original 
ConfigStore reference from -- perhaps we'd need to give it an option to 
identify your server startup JAR.  But I think this would still fail if 
the server was running (since you'd probably clash for ports trying to 
load some of the services between ConfigStore and application), so it's an 
"offline" deploy in name only.
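The parent-chain walk described above can be sketched as follows; the map-of-parents representation of the configuration hierarchy is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Invented map-of-parents model of the configuration hierarchy.
class ConfigGraph {
    private final Map<String, String> parentOf;
    ConfigGraph(Map<String, String> parentOf) { this.parentOf = parentOf; }

    // walk from a configuration through its parents up to the root
    List<String> chainToRoot(String config) {
        List<String> chain = new ArrayList<>();
        for (String c = config; c != null; c = parentOf.get(c)) {
            chain.add(c);
        }
        return chain;  // root is last; the ConfigStore config would sit somewhere on this path
    }
}
```

The walk itself is trivial; the hard part, as noted, is knowing which store to load each link of the chain from in the first place.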

	Another option is that we can provide a tool that works 100% for
the default server configuration (LocalConfigStore+FileConfigurationList).  
But it would not work in the face of customizations to the 2 core
components: if you swap out your LocalConfigStore, then the tool would not
work (it would install into the wrong place), and if you swap out your
PersistentConfigurationList then the tool would be unable to mark any
module to be started.  If we wanted to, we could make offline deploy tools 
available for different combinations of those GBeans, or give you a 
procedure to build a new deploy tool from an old one.

	The main reason I feel that this is important is that most other
products support it.  Generally if you copy a new EAR over an old one
while the server is not running, and then start the server, the new
version of the EAR will deploy on startup.  (Tomcat 5 is the one I can
think of that doesn't do this).  I just hate to tell people that things 
that used to work won't work any more if they move to Geronimo.  On the 
other hand, I think this behavior was mostly implemented via a hot deploy 
directory, so if we provide a GBean for a hot deploy directory, then maybe 
we don't need an offline deploy tool at all (beyond for building the 
server).
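The change-detection at the heart of such a hot-deploy GBean might look like this (names invented; the actual file I/O and the hand-off to the online deployer are omitted):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of hot-deploy change detection (all names invented):
// compare the current directory listing (file name -> last-modified time)
// against the previous pass and report anything new or modified.
class DeployDirScanner {
    private final Map<String, Long> lastSeen = new HashMap<>();

    Set<String> scan(Map<String, Long> listing) {
        Set<String> changed = new HashSet<>();
        for (Map.Entry<String, Long> e : listing.entrySet()) {
            Long prev = lastSeen.put(e.getKey(), e.getValue());
            if (prev == null || !prev.equals(e.getValue())) {
                changed.add(e.getKey());  // new or touched since last pass
            }
        }
        return changed;
    }
}
```

A real GBean would feed each changed archive to the online deployment system, which is what makes this an online rather than offline mechanism.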

	And I guess the last issue is related.  In the long run, it will
be nice/necessary to have some kind of packaged-configuration-handling
features, in the deploy tool or another tool:
 - extract a CAR file from an entry in a server's ConfigStore
 - sign a CAR file (either in the server's ConfigStore or as a file)
 - transfer a packaged configuration directly from one server to another
 - deploy a CAR file into a server
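Since a packaged configuration is just a jar, the extract/deploy items above reduce to ordinary java.util.jar work; a minimal sketch, with invented entry names (signing would be a separate jarsigner step):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

// Entry names here are invented for illustration.
class CarPackager {
    // pack a single store entry into a .car (jar) held in memory
    static byte[] pack(String entryName, byte[] content) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (JarOutputStream jos = new JarOutputStream(bos)) {
                jos.putNextEntry(new JarEntry(entryName));
                jos.write(content);
                jos.closeEntry();
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // read back the first entry name, as a deploy step might
    static String firstEntryName(byte[] car) {
        try (JarInputStream jis = new JarInputStream(new ByteArrayInputStream(car))) {
            JarEntry entry = jis.getNextJarEntry();
            return entry == null ? null : entry.getName();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```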

Aaron


Re: online and offline deployment

Posted by Aaron Mulder <am...@alumni.princeton.edu>.
On Sun, 7 Nov 2004, Jeremy Boynes wrote:
> ...
> If someone implements this (dealing, of course, with all the nasties) 
> then all the better; in fact, IIRC there is some old scanning code of 
> mine lying around in the repo somewhere.
> 
> However, due to the technical issues I would still not advocate scanning 
> as being the recommended way of "deploying" an application into Geronimo.

	Right.  This I agree with 100%.  I am fine with recommending 
external plans and online deployment tools.  I just believe we should be 
willing to support the old style too.

	I believe David took a somewhat firmer position than you:

On Sun, 7 Nov 2004, David Jencks wrote:
> I don't think "drop, wait & wonder" hot deployment should be supported.

	That's what I was objecting to.  Supported, yes; recommended, no.

Aaron

Re: online and offline deployment

Posted by Jeremy Boynes <jb...@gluecode.com>.
I think there is a misconception here between what is supported and what 
is recommended. There have been a lot of bad ideas in the past (EJB1.0 
DD anyone?) and people have tried to improve on them. I don't think we 
need to make bad ideas from the past the recommended way of doing things 
even if we support them.

More specific comments inline.

Aaron Mulder wrote:
> On Sun, 7 Nov 2004, David Jencks wrote:
> 
>>I don't think "drop, wait & wonder" hot deployment should be supported.  
>>  This only supports deployment of applications with embedded plans or  
>>applications that need no plan. 
> 
> 
> 	I don't understand the objections to this.  "embedded plans" is 
> the way every J2EE application I've ever seen has worked, save the days 
> when WebSphere made you save your plan to DB2 instead of XML files.  Every 
> tool in the space today puts your plans in the archive.
> 

The biggest objection to this is the way in which you have to crack open 
multiple levels of archive in order to add the deployment information. 
Not only is this a PITA but it also compromises the provenance of the 
supplied archive - any original signature is lost.

> 	Granted, the current J2EE leadership seems to think that no
> application archive should contain server-specific information, but that
> is not a standard, that is a paradigm shift.  Don't you think it will take
> a long time before the average J2EE developer stops trying to pack their
> server-specific deployment descriptor (or "deployment plan") into their
> archives?  Refusing to support the by-far-most-common method of J2EE
> packaging and deployment is IMHO only going to turn people off to the
> product, even if you argue that it's "more correct".
> 

No one is refusing to support this - in fact, it is already fully 
supported. We just recommend using an external plan.

> 	This is still a different issue than offline deployment, though, 
> since a directory scanner would only work while the server was online.  As 
> well, I'd be fine if the directory scanner declined to deploy anything 
> without a Geronimo plan, or just produced errors along the lines of 
> "unable to resolve reference to foo, please include a Geronimo deployment 
> plan (geronimo-jetty.xml)"...
> 

Again, I don't think anyone is refusing to support this - David and I 
just don't think it's a good idea for technical reasons such as 
incompatibility with JSR-88, reliance on proprietary embedded plans, 
copy problems, non-deterministic outcomes, ...

If someone implements this (dealing, of course, with all the nasties) 
then all the better; in fact, IIRC there is some old scanning code of 
mine lying around in the repo somewhere.

However, due to the technical issues I would still not advocate scanning 
as being the recommended way of "deploying" an application into Geronimo.

--
Jeremy

Re: online and offline deployment

Posted by Aaron Mulder <am...@alumni.princeton.edu>.
On Sun, 7 Nov 2004, David Jencks wrote:
> I don't think "drop, wait & wonder" hot deployment should be supported.  
>   This only supports deployment of applications with embedded plans or  
> applications that need no plan. 

	I don't understand the objections to this.  "embedded plans" is 
the way every J2EE application I've ever seen has worked, save the days 
when WebSphere made you save your plan to DB2 instead of XML files.  Every 
tool in the space today puts your plans in the archive.

	Granted, the current J2EE leadership seems to think that no
application archive should contain server-specific information, but that
is not a standard, that is a paradigm shift.  Don't you think it will take
a long time before the average J2EE developer stops trying to pack their
server-specific deployment descriptor (or "deployment plan") into their
archives?  Refusing to support the by-far-most-common method of J2EE
packaging and deployment is IMHO only going to turn people off to the
product, even if you argue that it's "more correct".

	This is still a different issue than offline deployment, though, 
since a directory scanner would only work while the server was online.  As 
well, I'd be fine if the directory scanner declined to deploy anything 
without a Geronimo plan, or just produced errors along the lines of 
"unable to resolve reference to foo, please include a Geronimo deployment 
plan (geronimo-jetty.xml)"...

Aaron

Re: online and offline deployment

Posted by David Jencks <da...@yahoo.com>.
Thank you Jeremy,  this is a very clear explanation of the  
architecture.  I agree completely with your point of view.  I think we  
should get some version of this explanation on the wiki.

I apologize for not entering this discussion earlier, but I didn't have  
time to think through the issues thoroughly.  I have thought all along  
that offline "deployment" was a bad idea and should not be used outside  
very restricted bootstrap scenarios.  I would prefer our assembly to  
proceed by starting a server as soon as possible and using the maven  
plugin to distribute the modules to that running server.

I don't think "drop, wait & wonder" hot deployment should be supported.
This only supports deployment of applications with embedded plans or
applications that need no plan.  I think there are almost no useful
applications that will work without a plan to at least resolve
resource-refs.  I would prefer we put more effort into tools that help
identify the minimum amount of information needed for a plan and
generate the plan, and then deploy to a running server.

To me, making sure the online deployment/undeployment cycle is really  
reliable even in the face of multiple deployment errors and long chains  
of parent configurations that are not previously started is a much  
better idea than supporting any offline deployment.

thanks
david jencks

On Nov 6, 2004, at 9:51 PM, Jeremy Boynes wrote:

> As promised Thursday, here are the details of my concerns about mixing
> offline and online deployment.
>
> My concerns on this issue stem from how we package GBeans together for
> use by the kernel. Rather than handling them one-by-one, Geronimo uses
> the notion of a pre-canned Configuration which contains a number of
> GBean instances and the classpath information needed to load them.
> Configurations can be loaded by the kernel and when started bring all
> the GBeans they contain online together.
>
> A key feature of Configurations is they are portable between different
> Geronimo installations - specifically a Configuration can run in any
> Geronimo kernel that can resolve its dependencies. This is less  
> critical
> for the single-server mode we have now but is very important as  
> Geronimo
> scales to clustered or grid configurations - it allows us to  
> efficiently
> move applications between the servers on demand.
>
> This also has benefits where change management is important, such as
> business critical installations. For example, a Configuration can be
> built and signed in a test or integration environment and moved
> *provably unchanged* though the test, stage and release to production
> process. Alternatively, an OEM can release an application to channel as
> a signed Configuration, end-users can have the assurance it has not  
> been
> tampered with, and the OEM can reduce costs by reducing problems caused
> by variations in the end-user environment.
>
> In the kernel, the process of loading and unloading Configuration is
> handled by a ConfigurationManager that uses ConfigurationStores to  
> store
> them. The store exposes a simple API for installing and uninstalling
> Configurations and for retrieving them so they can be loaded. We have a
> simple LocalConfigStore implementation that uses the local filesystem  
> to
> store them; other implementations are possible using different
> persistence approaches such as databases, LDAP or proprietary
> configuration management systems.
>
>
> The deployment system in Geronimo is the interface between user-domain
> artifacts such as J2EE modules (EARs, WARs, etc.) or deployment plans
> and the configuration management system described above. It essentially
> combines modules with plans and generates Configurations.
>
> It comprises three parts:
> * External interfaces such as the command line tool, console or JSR-88
>   provider that get the modules and plans from the user
> * ConfigurationBuilders such as EARConfigBuilder and
>   ServiceConfigBuilder that do the combination and produce the target
>   Configuration
> * Back-end interfaces that store the Configuration either in a
>   ConfigurationStore or as an output file
>
> The ConfigurationBuilders are GBeans and run inside a Geronimo kernel.
> Apart from ease of implementation, they also have access to the
> resources provided by that system - for example, they can use the
> Repository to load classes during processing, and they can use the
> ConfigurationManager to load other Configurations that the target may  
> be
> dependent on.
>
>
> To support online deployment, we run a deployment system inside the  
> same
> kernel as the J2EE server - it is actually part of the
> org/apache/geronimo/Server Configuration although work is progress to
> allow it to be run as a separate dependent configuation.
>
> The JSR-88 provider interacts with this deployment system to fulfill  
> the
> spec requirements for distribute, start, stop, undeploy etc. For
> example, during a distribute operation the module and plan are passed  
> to
> the deployment system, it uses an EARConfigBuilder to produce the  
> output
> Configuration, which it then installs in the target ConfigurationStore.
> A JSR-88 start operation causes the Configuration to be loaded from the
> store and then started.
>
>
> However, this leaves us with a chicken-and-egg problem. The online
> deployment system above is itself part of a configuration - how do we
> build that configuation?
>
> To solve this, and because it seemed generally useful, we built a
> standalone offline deployment system. Run from the command line, this
> would take module + plan and produce a Configuration. To reuse as much  
> of the configuration building infrastructure as possible, it boots an
> embedded Geronimo kernel and loads a Configuration containing just the
> deployment system. As a running kernel, it also provides access to a
> Repository and ConfigurationStore that the ConfigurationBuilders can  
> use
> to resolve dependencies (including dependencies on other
> Configurations). However, these are *its* Repository and
> ConfigurationStores and *not* those from the target server.
>
> To cheat our way around the chicken-and-egg problem we took the simple
> but expedient solution of having the standalone deployer and the  
> default
> server use the same type and location of store and repository. Then, by
> simply telling the standalone deployer to install a configuration into
> its own store it would also be available to the default server
> configuration. This is a hack, pure, simple and effective.
>
> When we introduce any additional complexity into the situation, then
> this hack starts to break down. For example, if the user adds a
> database-based ConfigurationStore to the server (for example, to make
> GBean state persistence more reliable) then the standalone deployer
> would not be able to install the generated Configuration into that  
> store.
>
>
> All things considered, I think having options in the standalone  
> deployer
> that rely on it sharing the same type and location of Repository and
> ConfigurationStore will lead to obscure behaviour and strange behaviour
> as soon as we progress beyond the most basic default configuration.  
> That
> is why I voted at the start for "2 simple tools rather than one complex
> one."  Going further as has been proposed and coupling the standalone
> deployer to the internal implementation of the
> PersistentConfigurationList seems like pouring gasoline on the fire.
>
> I have been portrayed on this list as being alone in my opinion but I
> will point out that in the initial vote Eric LeGoff, Aaron Mulder and
> David Jencks also voted for 2 tools (as opposed to Peter Lynch, Davanum
> Srinivas, Hiram Chirino and Bruce Snyder who voted for one); Dain
> Sundstrom voted for one tool, but wanted another to support the
> functionality we have to output Configurations as jars ("that is
> another tool for another day" but we need it now to build the server),
> which sounds like two tools to me.
>
> After that vote, Aaron proposed and attained consensus for a single
> tool. The syntax is simple enough and mirrors the JSR-88 API making it
> ideal for online deployment (which is all JSR-88 supports).
>
> However, during implementation Aaron ran into the issues described
> above and on the thread from 11/4/04 when trying to support the offline
> mode not covered by JSR-88. These are clearly technical issues which we
> need to resolve. To facilitate that, Aaron proposed to commit his work
> so that all could see and discuss; he and I were promptly and
> unjustifiably flamed by some members of the community.
>
> Since Thursday he has committed this code and I think we need to review
> where we are. My belief is that the online side is fully implemented,
> that the standalone deployer works as before (package option), and the
> big remaining issue is the one described above where someone is trying
> to "deploy" applications to an offline server.
>
> In this message
>
> http://nagoya.apache.org/eyebrowse/ReadMsg?listName=dev@geronimo.apache.org&msgNo=9696
>
> I wrote that you could distribute to your heart's content; this was
> wrong. The discussion with Aaron highlighted that the problem about
> store type and location applies to distribute as well as the other
> operations. It looks like the only thing you can reliably do offline is
> package a Configuration for later use.
>
> I would suggest, then, that rather than the --add option I proposed for
> the server we instead have an --install option which boots the server,
> restarts all previously running configs and installs the new one. The
> offline usage would then be:
>
> java -jar deployer.jar package foo.war foo-plan.xml foo.car
> java -jar server.jar --install foo.car
>
> This also provides a simple mechanism for deploying once and running
> everywhere: the output configuration can be installed in multiple
> places as easily as one.
>
> The issue with this is that it fits the admin's view better than the
> developer's. However, I continue to believe the:
>
>   start server
>   repeat
>      write code
>      build (with distribute/start to online server)
>      test
>   until app works or it's time to go home
>
> cycle is what most developers use and that Aaron's changes (in
> conjunction with the existing Maven plugin) have made it easy for them
> to work that way. They are not really interested in fancy offline
> deployment tricks.
>
> To support carrier-grade configuration management, clustered and grid
> environments and OEMs, I believe we need an effective way of generating
> pre-packaged configurations. This requires Maven/Ant plugins that can
> be used in the release process, tools like Aaron's that an
> administrator can use from the command line, and mechanisms for
> installing them in and transporting them between servers.
>
> I think we are very close to achieving this and if we can address these
> last issues then Geronimo will be acceptable to both the developer
> community and to serious IT decision makers.
>
> --
> Jeremy
>
>