Posted to dev@httpd.apache.org by William A Rowe Jr <wr...@rowe-clan.net> on 2018/04/13 17:28:10 UTC

Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Terrific analysis! But on the meta-question...

Instead of changing the behavior of httpd on each and every subversion
bump, is it time to revisit our revisioning discipline and hygiene?

I promise to stay out of such discussion provided that one equally stubborn
and intractable PMC member agrees to do the same, and let the balance of
the PMC make our decision, moving forwards.

On Fri, Apr 13, 2018, 06:11 Joe Orton <jo...@redhat.com> wrote:

> On Thu, Apr 12, 2018 at 09:38:46PM +0200, Ruediger Pluem wrote:
> > On 04/12/2018 09:28 AM, Joe Orton wrote:
> > > But logged is:
> > >
> > > ::1 - - [12/Apr/2018:08:11:12 +0100] "GET /agag HTTP/1.1" 404 12
> HTTPS=on SNI=localhost.localdomain
> > > 127.0.0.1 - - [12/Apr/2018:08:11:15 +0100] "GET /agag HTTP/1.1" 404 12
> HTTPS=- SNI=-
> > >
> > > Now mod_ssl only sees the "off" SSLSrvConfigRec in the second vhost so
> > > the logging is wrong.
> >
> > What does the same test result in with 2.4.29?
>
> Excellent question, I should have checked that.  Long e-mail follows,
> sorry.
>
> In fact it is the same with 2.4.29, because the SSLSrvConfigRec
> associated with the vhost's server_rec is the same as the default/base
> (non-SSL) server_rec, aka base_server passed to post_config hooks aka
> the ap_server_conf global.
>
> So, maybe I understand this a bit better now.
>
> Config with three vhosts / server_rec structs:
> a) base server config :80 non-SSL (<-- ap_server_conf/base_server)
> b) alpha vhost :443, explicit SSLEngine on, SSLCertificateFile etc
> c) beta vhost :443, no SSL*
>
> For 2.4.29 mod_ssl config derived is:
> a) SSLSrvConfigRec for base_server = { whatever config at global scope }
> b) SSLSrvConfigRec for alpha = { sc->enabled = TRUE, ... }
> c) SSLSrvConfigRec pointer for beta == SSLSrvConfigRec for base_server
>    in the lookup vector (pointer is copied prior to ALWAYS_MERGE flag)
>
> For 2.4.33 it is:
> a) and b) exactly as before
> c) separate SSLSrvConfigRec for beta = { merged copy of config at global }
>    time because of the ALWAYS_MERGE flag, i.e. still sc->enabled = UNSET
>
> When running ssl_init_Module(post_config hook), with 2.4.29:
> - SSLSrvConfig(base_server)->enabled = FALSE because UNSET previously
> - SSLSrvConfig(base_server)->vhost_id gets overwritten with vhost_id
>   for beta vhost because it's later in the loop and there's no check
>
> And with 2.4.33:
> - SSLSrvConfig(beta)->enabled is UNSET but gets flipped to ENABLED,
>   then startup fails (the issue in question)
>
> w/my patch for 2.4.33:
> - SSLSrvConfig(beta)->enabled is FALSE and startup works
>
> At run-time a request via SSL which matches the beta vhost via SNI/Host:
>
> For 2.4.29:
> - r->server is the beta vhost and mySrvConfig(r->server) still gives
>   you the ***base_server*** SSLSrvConfigRec i.e. sc->enabled=FALSE
> - thus e.g. ssl_hook_Fixup() does nada
>
> For 2.4.33 plus my patch:
> - r->server is the beta vhost and mySrvConfig(r->server) gives
>   you the SSLSrvConfigRec which is also sc->enabled = FALSE
> - thus e.g. ssl_hook_Fixup() also does nada
>
> I was trying to convince myself whether mySrvConfig(r->server) is going
> to change between 2.4.29 and .33+patch in this case, and I think it
> should be identical, because it is *only* the handling of ->enabled
> which has changed with _ALWAYS_MERGE.
>
> TL;DR:
> 1. my head hurts
> 2. I think my patch is OK
>
> Anyone read this far?
>

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 19 Apr 2018, at 11:57 AM, Joe Orton <jo...@redhat.com> wrote:

> Feel like I should drop 2c in here...
> 
> I'd be VERY happy to see more frequent "major" version bumps, i.e. 
> 2.4->2.6->2.8 or whatever which break backwards compat/ABI.  We have the 
> chance to break compat every ~6 months in Fedora so it's no problem 
> getting new code into the hands of users.

As an end user of the software I would hate that.

I love the fact that I can drop httpd v2.4.latest as published by ASF onto a RHEL machine and it “just works”. No recompiling modules to a new ABI, particularly large modules with large ecosystems, no mess, no fuss. No discovery that to get feature X I need to upgrade through two major versions along with the dependency hell that results to get there.

I get it that this convenience comes at a price - Redhat is doing work that I would otherwise do, but then that’s what I’m paying Redhat for.

That said - yes, we should work to release v2.6.x. Just not every six months.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Joe Orton <jo...@redhat.com>.
Feel like I should drop 2c in here...

I'd be VERY happy to see more frequent "major" version bumps, i.e. 
2.4->2.6->2.8 or whatever which break backwards compat/ABI.  We have the 
chance to break compat every ~6 months in Fedora so it's no problem 
getting new code into the hands of users.

I've spent much of my upstream time this year trying to get all RHEL7 
httpd features&fixes backported to 2.4.x and have only made it about 90% 
of the way (some big chunks like mod_systemd, suexec stuff remain); 
would love to not have to burn more time on backports because that stuff 
is in 2.6.0 already.

At the moment I think we have to accept that 2.4.x is going to be a bit 
unstable if we're trying to backport everything without *either* having 
good test coverage (which we don't) or having new code widely tested by 
users.

Regards, Joe

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 15 Apr 2018, at 3:25 AM, Yehuda Katz <ye...@ymkatz.net> wrote:

> That also assumes the OS distributions pick up the point releases. RedHat certainly doesn't pick up the new features, only bug fixes.

By design - that is what “Redhat Enterprise Linux” is.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/15/2018 03:25 AM, Yehuda Katz wrote:
> On Sat, Apr 14, 2018 at 9:48 AM, Jim Jagielski <jim@jagunet.com <ma...@jagunet.com>> wrote:
> 
>     IMO, the below ignores the impacts on OS distributors who
>     provide httpd. We have seen how long it takes for them
>     to go from 2.2 to 2.4... I can't imagine the impact for our
>     end user community if "new features" cause a minor
>     bump all the time and we "force" distributions for
>     2.4->2.6->2.8->2.10...
> 
>     Just my 2c
> 
> 
> That also assumes the OS distributions pick up the point releases. RedHat certainly doesn't pick up the new features,
> only bug fixes.

But in this case users of those binaries shouldn't be affected by the regressions as they only receive bug fix backports
:-P.
Seriously though, we have also had regressions in pure bug fixes, and feature backporting sometimes makes it harder to
correctly backport pure bug fixes for these distributions, because the feature backports make the code changes in the
stable branch bigger. Of course this is not our direct issue or main concern here.
One idea that comes to my mind is whether we should have an "LTS" version that only receives bug fixes and another
"stable" branch that receives new features (at the same level of API compatibility we currently grant for our stable
branches). E.g. you could go with a major release X.Y.0.Z and, after some minor releases Z which still allow
features and bug fixes to be backported in an API-compatible way (to give the new major some time to stabilize), split
off X.Y.1.Z to allow feature backports in an API-compatible way, while allowing only bug fix backports to X.Y.0.Z.
Questions that pop up with this are:

1. Is there enough manpower and willingness to maintain this?
2. How will the commercial 3rd parties handle this? Will they only support X.Y.0.Z or will they also support X.Y.1.Z?
   From an API point of view it doesn't matter as X.Y.0 and X.Y.1 follow the same API guarantee. X.Y.1 just has a
   bigger regression risk.


Other typical suspects are:

1. Improve testing suite(s).
2. Give the future release broader real-life testing exposure. The question is how we get there. I am not sure
   if RC releases will help here, because they require a sufficient number of people to use and test them. Seeing "RC"
   on a release might make people say: nah, let others do that testing, I will wait until the RC label is gone and take
   it then. So I am not sure RC releases would improve the exposure compared to the current exposure during voting.
   If it is just a matter of time (voting usually takes "only" 72 hours, whereas an RC release would be around
   longer before the next RC or the final release), it could indeed help.

Sorry for the rant.


Regards

Rüdiger


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Yehuda Katz <ye...@ymkatz.net>.
On Sat, Apr 14, 2018 at 9:48 AM, Jim Jagielski <ji...@jagunet.com> wrote:

> IMO, the below ignores the impacts on OS distributors who
> provide httpd. We have seen how long it takes for them
> to go from 2.2 to 2.4... I can't imagine the impact for our
> end user community if "new features" cause a minor
> bump all the time and we "force" distributions for
> 2.4->2.6->2.8->2.10...
>
> Just my 2c
>
>
That also assumes the OS distributions pick up the point releases. RedHat
certainly doesn't pick up the new features, only bug fixes.

- Y



> > On Apr 13, 2018, at 2:28 PM, David Zuelke <dz...@salesforce.com>
> wrote:
> >
> > Remember the thread I started on that quite a while ago? ;)
> >
> > IMO:
> >
> > - x.y.0 for new features
> > - x.y.z for bugfixes only
> > - stop the endless backporting
> > - make x.y.0 releases more often
> > - x.y.0 goes through alpha, beta, RC phases
> > - x.y.z goes through RC phases
> >
> > That's how PHP has been doing it for a few years, and it's amazing how
> > well it works, how few regressions there are, and how predictable the
> > cycle is (they cut an x.y.zRC1 every four weeks like clockwork, with
> > exceptions only around late December because of holiday season).
> >
> > This would also fix all the confusing cases where two or three faulty
> > releases get made, end up in the changelog, but ultimately are never
> > released.
> >
> >
> > On Fri, Apr 13, 2018 at 5:28 PM, William A Rowe Jr <wr...@rowe-clan.net>
> wrote:
> >> Terrific analysis! But on the meta-question...
> >>
> >> Instead of changing the behavior of httpd on each and every subversion
> bump,
> >> is it time to revisit our revisioning discipline and hygiene?
> >>
> >> I promise to stay out of such discussion provided that one equally
> stubborn
> >> and intractable PMC member agrees to do the same, and let the balance
> of the
> >> PMC make our decision, moving forwards.
> >>
> >> On Fri, Apr 13, 2018, 06:11 Joe Orton <jo...@redhat.com> wrote:
> >>>
> >>> On Thu, Apr 12, 2018 at 09:38:46PM +0200, Ruediger Pluem wrote:
> >>>> On 04/12/2018 09:28 AM, Joe Orton wrote:
> >>>>> But logged is:
> >>>>>
> >>>>> ::1 - - [12/Apr/2018:08:11:12 +0100] "GET /agag HTTP/1.1" 404 12
> >>>>> HTTPS=on SNI=localhost.localdomain
> >>>>> 127.0.0.1 - - [12/Apr/2018:08:11:15 +0100] "GET /agag HTTP/1.1" 404
> 12
> >>>>> HTTPS=- SNI=-
> >>>>>
> >>>>> Now mod_ssl only sees the "off" SSLSrvConfigRec in the second vhost
> so
> >>>>> the logging is wrong.
> >>>>
> >>>> What does the same test result in with 2.4.29?
> >>>
> >>> Excellent question, I should have checked that.  Long e-mail follows,
> >>> sorry.
> >>>
> >>> In fact it is the same with 2.4.29, because the SSLSrvConfigRec
> >>> associated with the vhost's server_rec is the same as the default/base
> >>> (non-SSL) server_rec, aka base_server passed to post_config hooks aka
> >>> the ap_server_conf global.
> >>>
> >>> So, maybe I understand this a bit better now.
> >>>
> >>> Config with three vhosts / server_rec structs:
> >>> a) base server config :80 non-SSL (<-- ap_server_conf/base_server)
> >>> b) alpha vhost :443, explicit SSLEngine on, SSLCertificateFile etc
> >>> c) beta vhost :443, no SSL*
> >>>
> >>> For 2.4.29 mod_ssl config derived is:
> >>> a) SSLSrvConfigRec for base_server = { whatever config at global scope
> }
> >>> b) SSLSrvConfigRec for alpha = { sc->enabled = TRUE, ... }
> >>> c) SSLSrvConfigRec pointer for beta == SSLSrvConfigRec for base_server
> >>>   in the lookup vector (pointer is copied prior to ALWAYS_MERGE flag)
> >>>
> >>> For 2.4.33 it is:
> >>> a) and b) exactly as before
> >>> c) separate SSLSrvConfigRec for beta = { merged copy of config at
> global }
> >>>   time because of the ALWAYS_MERGE flag, i.e. still sc->enabled = UNSET
> >>>
> >>> When running ssl_init_Module(post_config hook), with 2.4.29:
> >>> - SSLSrvConfig(base_server)->enabled = FALSE because UNSET previously
> >>> - SSLSrvConfig(base_server)->vhost_id gets overwritten with vhost_id
> >>>  for beta vhost because it's later in the loop and there's no check
> >>>
> >>> And with 2.4.33:
> >>> - SSLSrvConfig(beta)->enabled is UNSET but gets flipped to ENABLED,
> >>>  then startup fails (the issue in question)
> >>>
> >>> w/my patch for 2.4.33:
> >>> - SSLSrvConfig(beta)->enabled is FALSE and startup works
> >>>
> >>> At run-time a request via SSL which matches the beta vhost via
> SNI/Host:
> >>>
> >>> For 2.4.29:
> >>> - r->server is the beta vhost and mySrvConfig(r->server) still gives
> >>>  you the ***base_server*** SSLSrvConfigRec i.e. sc->enabled=FALSE
> >>> - thus e.g. ssl_hook_Fixup() does nada
> >>>
> >>> For 2.4.33 plus my patch:
> >>> - r->server is the beta vhost and mySrvConfig(r->server) gives
> >>>  you the SSLSrvConfigRec which is also sc->enabled = FALSE
> >>> - thus e.g. ssl_hook_Fixup() also does nada
> >>>
> >>> I was trying to convince myself whether mySrvConfig(r->server) is going
> >>> to change between 2.4.29 and .33+patch in this case, and I think it
> >>> should be identical, because it is *only* the handling of ->enabled
> >>> which has changed with _ALWAYS_MERGE.
> >>>
> >>> TL;DR:
> >>> 1. my head hurts
> >>> 2. I think my patch is OK
> >>>
> >>> Anyone read this far?
>
>

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/14/2018 04:34 PM, Nick Kew wrote:
> 
>> On 14 Apr 2018, at 14:48, Jim Jagielski <ji...@jaguNET.com> wrote:
>>
>> IMO, the below ignores the impacts on OS distributors who
>> provide httpd. We have seen how long it takes for them
>> to go from 2.2 to 2.4... I can't imagine the impact for our
>> end user community if "new features" cause a minor
>> bump all the time and we "force" distributions for
>> 2.4->2.6->2.8->2.10…
> 
> Chicken&egg.  httpd version numbers creep in a petty pace from year to year,
> and packagers do likewise.  Contrast a product like, say, Firefox, where major
> versions just whoosh by, and distros increment theirs every few months.
> 
>> Just my 2c
> 
> Indeed, a change needs to be a considered thing, and there are issues.
> 
>>> On Apr 13, 2018, at 2:28 PM, David Zuelke <dz...@salesforce.com> wrote:
>>>
>>> Remember the thread I started on that quite a while ago? ;)
> 
> Nope.
> 
>>> - x.y.0 for new features
>>> - x.y.z for bugfixes only
>>> - stop the endless backporting
>>> - make x.y.0 releases more often
> 
> An issue there is the API/ABI promise.  We are a stable product, and one of our
> virtues is the guarantee that a third-party module written for x.y.z will continue to
> work at both source and binary level for x.y.(z+n).

This is the biggest issue here, and where we need some way to keep this promise for a longer time. Commercial module
suppliers especially are extremely slow there, even when a simple recompile of their source against the new version
would suffice, as it did in many cases during the 2.2 -> 2.4 transition.
I am currently not sure what our user base looks like:

1. How many take it from OS distros?
2. How many take it from sources that deliver the latest 2.4, or compile it themselves?
3. How many use closed source 3rd party modules that need long term API/ABI promises?

Regards

Rüdiger


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
On Sat, Apr 14, 2018 at 4:34 PM, Nick Kew <ni...@apache.org> wrote:
>
>> On 14 Apr 2018, at 14:48, Jim Jagielski <ji...@jaguNET.com> wrote:
>>
>> IMO, the below ignores the impacts on OS distributors who
>> provide httpd. We have seen how long it takes for them
>> to go from 2.2 to 2.4... I can't imagine the impact for our
>> end user community if "new features" cause a minor
>> bump all the time and we "force" distributions for
>> 2.4->2.6->2.8->2.10…
>
> Chicken&egg.  httpd version numbers creep in a petty pace from year to year,
> and packagers do likewise.  Contrast a product like, say, Firefox, where major
> versions just whoosh by, and distros increment theirs every few months.
>

It's not like distros pick up patch releases anyway. They backport
fixes to whatever they "froze" to upon first release, and that's it.

Debian and Ubuntu, for instance, just pick the latest PHP that's
released at the time the freeze for a version happens, and that's it.


>>> On Apr 13, 2018, at 2:28 PM, David Zuelke <dz...@salesforce.com> wrote:
>>>
>>> Remember the thread I started on that quite a while ago? ;)
>
> Nope.

https://lists.apache.org/thread.html/9afe84b5c2e7691f0190210e2377a6d504a6a77ff1481812f44f65d4@%3Cdev.httpd.apache.org%3E

>>> - x.y.0 for new features
>>> - x.y.z for bugfixes only
>>> - stop the endless backporting
>>> - make x.y.0 releases more often
>
> An issue there is the API/ABI promise.  We are a stable product, and one of our
> virtues is the guarantee that a third-party module written for x.y.z will continue to
> work at both source and binary level for x.y.(z+n).
>
> Frequent x.y.0 releases devalue that promise unless we extend it to x.(y+m).*,
> which would in turn push us into new x.0.0 releases, and raise new questions
> over the whole organisation of our repos.
>
> I’m not saying you’re wrong: in fact I think there’s merit in the proposal.
> But it would need a considered roadmap from here to there.

Well, one important thing to keep in mind is that a x.y.0 release
doesn't preclude the x.(y-1) series from receiving fixes. Users don't
have to update immediately if they're using third-party modules, as
there would still be bug fixes for a while, and eventually only
security fixes, before the x.(y-1) series would be fully EOL.

PHP has a very predictable timeline for this:
http://php.net/supported-versions.php

It's also worth noting that with more frequent x.y.0 releases (say,
one per year), it's likely that internal changes will be a lot smaller
and more incremental. PHP is in a similar situation to HTTPD with its
extensions system, and extensions that worked with PHP 7.0 either
compiled fine against 7.1 and later 7.2 out of the box, or required
only very few modifications.

David

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Nick Kew <ni...@apache.org>.
> On 14 Apr 2018, at 14:48, Jim Jagielski <ji...@jaguNET.com> wrote:
> 
> IMO, the below ignores the impacts on OS distributors who
> provide httpd. We have seen how long it takes for them
> to go from 2.2 to 2.4... I can't imagine the impact for our
> end user community if "new features" cause a minor
> bump all the time and we "force" distributions for
> 2.4->2.6->2.8->2.10…

Chicken&egg.  httpd version numbers creep in a petty pace from year to year,
and packagers do likewise.  Contrast a product like, say, Firefox, where major
versions just whoosh by, and distros increment theirs every few months.

> Just my 2c

Indeed, a change needs to be a considered thing, and there are issues.

>> On Apr 13, 2018, at 2:28 PM, David Zuelke <dz...@salesforce.com> wrote:
>> 
>> Remember the thread I started on that quite a while ago? ;)

Nope.

>> - x.y.0 for new features
>> - x.y.z for bugfixes only
>> - stop the endless backporting
>> - make x.y.0 releases more often

An issue there is the API/ABI promise.  We are a stable product, and one of our
virtues is the guarantee that a third-party module written for x.y.z will continue to
work at both source and binary level for x.y.(z+n).

Frequent x.y.0 releases devalue that promise unless we extend it to x.(y+m).*,
which would in turn push us into new x.0.0 releases, and raise new questions
over the whole organisation of our repos.

I’m not saying you’re wrong: in fact I think there’s merit in the proposal.
But it would need a considered roadmap from here to there.

— 
Nick Kew

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Alain Toussaint <al...@vocatus.pub>.
Le mercredi 18 avril 2018 à 19:31 -0500, Daniel Ruggeri a écrit :
> On 4/18/2018 1:34 PM, Alain Toussaint wrote:
> > > As an aside - httpd has a --enable-layout option in configure that defines where things should
> > > go.
> > > If you patch the following file how you want it and submit it to us, we can formally support
> > > LFS
> > > out the box and you can remove the need for your patch:
> > > 
> > > https://svn.apache.org/repos/asf/httpd/sandbox/replacelimit/config.layout
> > > 
> > > Regards,
> > > Graham
> > > —
> > > 
> > 
> > Great idea which I'll submit to the power that be.
> > 
> > Alain
> 
> Minor correction to the URL for latest and greatest:
> https://svn.apache.org/repos/asf/httpd/httpd/trunk/config.layout
> 
> As we love to say, "patches welcome!"
> 
> Feel free to just submit your diff here (since dev@ IS the power that be)
> 

I've been tasked with the patch modification at BLFS. I'll handle it tomorrow morning and submit it.

Alain

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Daniel Ruggeri <dr...@primary.net>.
On 4/18/2018 1:34 PM, Alain Toussaint wrote:
>> As an aside - httpd has a --enable-layout option in configure that defines where things should go.
>> If you patch the following file how you want it and submit it to us, we can formally support LFS
>> out the box and you can remove the need for your patch:
>>
>> https://svn.apache.org/repos/asf/httpd/sandbox/replacelimit/config.layout
>>
>> Regards,
>> Graham
>> —
>>
> Great idea which I'll submit to the power that be.
>
> Alain

Minor correction to the URL for latest and greatest:
https://svn.apache.org/repos/asf/httpd/httpd/trunk/config.layout

As we love to say, "patches welcome!"

Feel free to just submit your diff here (since dev@ IS the power that be)

-- 
Daniel Ruggeri


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Alain Toussaint <al...@vocatus.pub>.
Le mercredi 18 avril 2018 à 11:41 +0200, Graham Leggett a écrit :
> On 17 Apr 2018, at 7:17 PM, Alain Toussaint <al...@vocatus.pub> wrote:
> 
> > > No
> > > distribution (that I am aware of) ships something called Apache httpd v2.4.29.
> > 
> > At LFS (linux from scratch), we're the exception confirming the rule of shipping v2.4.29 with
> > the
> > single patch of defining a preferred layout (the BLFS layout patch) in LFS/BLFS v8.2.
> > 
> > B/LFS-svn is shipping with v2.4.33 currently.
> > 
> > Alain (bug chaser for B/LFS and ALFS working toward editorship).
> 
> Looking at http://www.linuxfromscratch.org/blfs/view/svn/server/apache.html it doesn’t appear that
> you’re shipping httpd at all, instead you’re directing people to get httpd from the ASF, and are
> supplying a patch to make it work with LFS. Both of these activities are entirely fine.

We do have mirrors for cases where upstream changes deviate from the book's instructions, and the automated ALFS tool
draws primarily from the mirrors before hitting upstream, but people doing things by hand while reading the books do
take the packages from upstream.

> As an aside - httpd has a --enable-layout option in configure that defines where things should go.
> If you patch the following file how you want it and submit it to us, we can formally support LFS
> out the box and you can remove the need for your patch:
> 
> https://svn.apache.org/repos/asf/httpd/sandbox/replacelimit/config.layout
> 
> Regards,
> Graham
> —
> 

Great idea which I'll submit to the power that be.

Alain

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 17 Apr 2018, at 7:17 PM, Alain Toussaint <al...@vocatus.pub> wrote:

>> No
>> distribution (that I am aware of) ships something called Apache httpd v2.4.29.
> 
> At LFS (linux from scratch), we're the exception confirming the rule of shipping v2.4.29 with the
> single patch of defining a preferred layout (the BLFS layout patch) in LFS/BLFS v8.2.
> 
> B/LFS-svn is shipping with v2.4.33 currently.
> 
> Alain (bug chaser for B/LFS and ALFS working toward editorship).

Looking at http://www.linuxfromscratch.org/blfs/view/svn/server/apache.html it doesn’t appear that you’re shipping httpd at all, instead you’re directing people to get httpd from the ASF, and are supplying a patch to make it work with LFS. Both of these activities are entirely fine.

As an aside - httpd has a --enable-layout option in configure that defines where things should go. If you patch the following file how you want it and submit it to us, we can formally support LFS out the box and you can remove the need for your patch:

https://svn.apache.org/repos/asf/httpd/sandbox/replacelimit/config.layout

Regards,
Graham
—
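For readers unfamiliar with the file, a config.layout entry is a named stanza of install paths that configure selects via --enable-layout=NAME. The stanza below is purely illustrative (the name and paths are invented, not a proposed LFS layout):

```
<Layout Example>
    prefix:        /usr
    exec_prefix:   ${prefix}
    bindir:        ${exec_prefix}/bin
    sbindir:       ${exec_prefix}/sbin
    libexecdir:    ${exec_prefix}/lib/httpd/modules
    sysconfdir:    /etc/httpd
    datadir:       /srv/www
    htdocsdir:     ${datadir}/htdocs
    cgidir:        ${datadir}/cgi-bin
    logfiledir:    /var/log/httpd
    runtimedir:    /run/httpd
</Layout>
```

It would then be chosen at build time with ./configure --enable-layout=Example.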


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Alain Toussaint <al...@vocatus.pub>.
> No
> distribution (that I am aware of) ships something called Apache httpd v2.4.29.

At LFS (linux from scratch), we're the exception confirming the rule of shipping v2.4.29 with the
single patch of defining a preferred layout (the BLFS layout patch) in LFS/BLFS v8.2.

B/LFS-svn is shipping with v2.4.33 currently.

Alain (bug chaser for B/LFS and ALFS working toward editorship).


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 17 Apr 2018, at 5:40 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:

>> I’m not following the “all in vain”.
>> 
>> This patch in v2.4.33 was done specifically to fix an issue in Xenial, and Ubuntu is on the case:
>> 
>> https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1750356
> 
> Then Ubuntu is distributing neither httpd 2.4.33 nor 2.4.29, as
> published by the Apache HTTP Project. This is another example of
> cherry picking a miscellany of fixes.

Yes. This is the very definition of the Ubuntu “Long Term Support” releases. It is also the very definition of “Redhat Enterprise Linux”.

> If a distributor shipped a source package of something called Apache
> httpd 2.4.29, which is obviously not .29 but .29+{stuff}, what would
> be our reaction?

No reaction.

There is no source of confusion. The distros all use (for example) v2.4.29 as their baseline version, and then a sub-version number to indicate their patch level on top of ours. No distribution (that I am aware of) ships something called Apache httpd v2.4.29.

The distributions have been doing this for nigh on two decades - the stability of a given software baseline that will not suddenly break at 3am on some arbitrary Sunday in the middle of the holidays is the very product they’re selling. This works because they ship a baseline plus carefully curated fixes, trading off the needs of their communities against stability.

None of this is new.

It turns out that we, the httpd project (and apr), have had the exact same approach to stability that the distros have had for the last two decades. As a result, you can take an ASF supplied httpd RPM and drop it into Redhat Enterprise Linux and this “just works”, because our ABI guarantees align exactly with the ABI guarantees of the stable distros.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Eric Covener <co...@gmail.com>.
> If a distributor shipped a source package of something called Apache
> httpd 2.4.29, which is obviously not .29 but .29+{stuff}, what would
> be our reaction?

The package name/filename/etc or the compiled-in server version?
For the former, it's already differentiated on most distros I've seen.
For the latter, I don't have any real concern as most people
understand if it's complex & packaged, it's patched.

-- 
Eric Covener
covener@gmail.com

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 17, 2018 at 9:47 AM, Graham Leggett <mi...@sharp.fm> wrote:
> On 17 Apr 2018, at 4:41 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>
>> And everything contributed to 2.4.33 release? All in vain. None of
>> that in this OS distribution, because, code freeze.
>
> I’m not following the “all in vain”.
>
> This patch in v2.4.33 was done specifically to fix an issue in Xenial, and Ubuntu is on the case:
>
> https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1750356

Then Ubuntu is distributing neither httpd 2.4.33 nor 2.4.29, as
published by the Apache HTTP Project. This is another example of
cherry picking a miscellany of fixes.

>> We observe the "code freeze" effect (defined by three different
>> distributors) coupled with distributors deep distrust of our releases,
>> so by continuously polluting our version major.minor release with more
>> and more cruft, those users are denied not only the new cruft, but all
>> the bug fixes to the old cruft as well... there's really no other
>> explanation for the users of one of our most common distributions to
>> be locked out of several subversions worth of bugfix corrections.
>
> I’m lost - what problem are you trying to solve?

The problem identified above, distributors falling into the role of
individually, project-by-project, release-by-release managing
versioning of what other modern software projects arbitrage in their
own subversion branches.

The use of the Apache HTTP Server mark itself is predicated on the
software shipped by the Apache HTTP Project. So this forking leads to
interesting questions (probably permissible for combinations of code
released at different points by the project).

If a distributor shipped a source package of something called Apache
httpd 2.4.29, which is obviously not .29 but .29+{stuff}, what would
be our reaction?

Re: 2.4.3x regression w/SSL vhost configs

Posted by "lists@rhsoft.net" <li...@rhsoft.net>.

Am 19.04.2018 um 17:55 schrieb David Zuelke:
> I hate to break this to you, and I do not want to discredit the
> amazing work all the contributors here are doing, but httpd 2.4 is of
> miserable, miserable quality when it comes to breaks and regressions.
> 
> I maintain the PHP/Apache/Nginx infrastructure at Heroku, and I was
> able to use the following httpd releases only in the last ~2.5 years:
> 
> - 2.4.16
> - 2.4.18
> - 2.4.20
> - 2.4.29
> - 2.4.33

2.4.29 was an official release
2.4.33 was an official release

30, 31, 32 never were releases; they were at voting, regressions were
found and fixed - so the gap from 29 to 33 is as expected, because an RC
either gets released 1:1 or not at all

please review your numbers against the list archive of rejected RCs

it's just bike-shedding whether 30, 31, 32 should not have existed at all
and instead been 30RC1, 30RC2, 30RC3 -> 30GA, but you were not supposed to
use 30, 31, 32 for anything other than testing and reporting regressions


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Micha Lenk <mi...@lenk.info>.
On Fri, Apr 20, 2018 at 08:14:16AM -0400, Jim Jagielski wrote:
> On Apr 20, 2018, at 8:04 AM, Micha Lenk <mi...@lenk.info> wrote:
> > [...], I value the ability to distinguish between bugfix-only
> > releases and feature addition releases.
> 
> I understand that, thx. I also understand how a minor bump makes that
> easier. It would make, however, other people's lives and jobs *more*
> difficult, I think, so it's all about balance. I can see how
> distinguishing the difference is also nice, but what "value" do you
> derive from it? I am genuinely curious. Thx!

To be honest, our commercial interest in bugfixes is simply higher than
getting new features. So, I expect integrating a bugfix-only release to
be much less effort (in terms of porting our own modules, patches,
additional testing scrutiny) than a release that re-works internal core
functionality like the request handling for the sake of adding a new
feature like the entirely new support for h2.

But I am equally genuinely curious what would make "other people's lives
and jobs *more* difficult". What exactly do you refer to here?

> This is a "hack", of course, but what if CHANGES specifically called
> out new features like we do w/ SECURITY items?

Not being a native speaker I am not sure I understand your question
correctly. Can you please elaborate that question a bit?

Regards,
Micha

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 20, 2018, at 8:04 AM, Micha Lenk <mi...@lenk.info> wrote:
> 
> In my role as Debian Developer maintaining the Debian packages of other OSS projects, and also in my role of maintaining a commercial reverse proxy product based on Apache httpd during my day job, I value the ability to distinguish between bugfix-only releases and feature addition releases.
> 

I understand that, thx. I also understand how a minor bump makes that easier. It would make, however, other people's lives and jobs *more* difficult, I think, so it's all about balance. I can see how distinguishing the difference is also nice, but what "value" do you derive from it? I am genuinely curious. Thx!

This is a "hack", of course, but what if CHANGES specifically called out new features like we do w/ SECURITY items?


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Fri, Apr 20, 2018, 10:37 Paul Querna <pa...@querna.org> wrote:
>
> I believe having more minor releases and less major backports to patch
> releases is a good thing.
>
> I believe we gave the even/odd, 2.1/2.3 "unstable", thing a long run.
> About 15 years of it.
>
> Since then the wider open source world has gone to a more canonical
> semver.  I think we should generally align with that.
> <https://semver.org/>
>
> As an outcome, that would mean more major & minor releases, and less
> patch releases.  I think that if we break an existing ABI, we need to
> bump the major.  I think other projects are successfully doing this,
> and it has little effect on how distributions are packaging their
> projects.  Distros pick a point in time, whatever the current
> major.minor.patch is.

According to my math and history here, which begins with binary stability
(and not many feature changes in the period) at 1.3.14, and continuing
throughout all of 2.0/2.2/2.4...

Those would have been counted, using any semver scheme, as 2.0+++, 3.0+++,
4.0+++. Over the span of 18 years.

That would put us at approaching 5.0.0, with discussion of refactoring how
our URIs can successfully handle %CH entities correctly and finally close
a group of (now mitigated but still broken) security issues. Mop up some
other crufty bits in the process. Perhaps achieve 99.9% source
compatibility with judicious use of compatibility macros.

2.0+ major revisions (ABI stable-ish) persisted over 6 years each. That
isn't terribly shabby. Future majors would be even less frequent, as the
framework proves durable.

Pretend we are at 4.x, what would be our minor? I count only 21 releases in
2.4.x and 3 of those were immediate reactions to busted releases. That
corresponds to releasing 4.17.1 in the span of 6 years. (I'm not suggesting
renumbering, obviously... only trying to frame our project in semver terms
to test if this would be helpful or harmful.)

About 3 minor releases per year, and even that was perhaps too infrequent.
The subversion bug-fix releases were certainly too infrequent; we shouldn't
have waited so long on the fixes, but new development, and yes, refactoring,
also slow us down a lot.

Thanks for the observations!

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Paul Querna <pa...@querna.org>.
I believe having more minor releases and fewer major backports to patch
releases is a good thing.

I believe we gave the even/odd, 2.1/2.3 "unstable", thing a long run.
About 15 years of it.

Since then the wider open source world has gone to a more canonical
semver.  I think we should generally align with that.
<https://semver.org/>

As an outcome, that would mean more major & minor releases, and fewer
patch releases.  I think that if we break an existing ABI, we need to
bump the major.  I think other projects are successfully doing this,
and it has little effect on how distributions are packaging their
projects.  Distros pick a point in time, whatever the current
major.minor.patch is.
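To make the proposed rule concrete, here is a minimal sketch (a hypothetical helper, not an httpd or semver.org tool) of the bump logic semver prescribes: an ABI/API break bumps the major, a backwards-compatible feature bumps the minor, a pure fix bumps the patch.

```python
# Hypothetical illustration of semver bump rules; the function name
# and change labels are invented for this sketch.

def next_version(version, change):
    """Return the next (major, minor, patch) tuple for a given change kind."""
    major, minor, patch = version
    if change == "abi-break":    # incompatible API/ABI change
        return (major + 1, 0, 0)
    if change == "feature":      # backwards-compatible addition
        return (major, minor + 1, 0)
    if change == "bugfix":       # backwards-compatible fix
        return (major, minor, patch + 1)
    raise ValueError("unknown change kind: %r" % change)

# Under this scheme, landing a new feature such as h2 on 2.4.16 would
# have produced 2.5.0 rather than 2.4.17:
print(next_version((2, 4, 16), "feature"))   # -> (2, 5, 0)
```

Distros would then pin whatever major.minor is current at their cutoff, and patch releases within it would stay fix-only.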


On Fri, Apr 20, 2018 at 5:04 AM, Micha Lenk <mi...@lenk.info> wrote:
> Hi all,
>
> On 04/20/2018 01:34 PM, Jim Jagielski wrote:
>>
>> But why does it matter that h2 was added in 2.4.x instead of
>> a 2.6.0?
>
>
> Because it sets a bad precedent (or even continues to do so)?
>
>> Every new feature must bump the minor? Even if
>> there is no corresponding ABI issue?
>
>
> Why not?
>
> In my role as Debian Developer maintaining the Debian packages of other OSS
> projects, and also in my role of maintaining a commercial reverse proxy
> product based on Apache httpd during my day job, I value the ability to
> distinguish between bugfix-only releases and feature addition releases.
>
> This does not mean that a minor bump needs to happen at almost every
> release. But not bumping the minor for years (which seems to be the current
> pattern of the httpd project) is just worse, because it increases the
> incentive to squeeze features like h2 into releases that are meant (or
> perceived) as bugfix-only releases.
>
> Regards,
> Micha

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Micha Lenk <mi...@lenk.info>.
Hi all,

On 04/20/2018 01:34 PM, Jim Jagielski wrote:
> But why does it matter that h2 was added in 2.4.x instead of
> a 2.6.0?

Because it sets a bad precedent (or even continues to do so)?

> Every new feature must bump the minor? Even if
> there is no corresponding ABI issue?

Why not?

In my role as Debian Developer maintaining the Debian packages of other 
OSS projects, and also in my role of maintaining a commercial reverse 
proxy product based on Apache httpd during my day job, I value the 
ability to distinguish between bugfix-only releases and feature addition 
releases.

This does not mean that a minor bump needs to happen at almost every 
release. But not bumping the minor for years (which seems to be the 
current pattern of the httpd project) is just worse, because it 
increases the incentive to squeeze features like h2 into releases that 
are meant (or perceived) as bugfix-only releases.

Regards,
Micha

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 19, 2018, at 4:35 PM, David Zuelke <dz...@salesforce.com> wrote:
> 
> 
> Of course, but that's exactly my point. It was introduced not in
> 2.4.0, but in 2.4.17. Five "H2…" config directives are available in
> 2.4.18+ only, one in 2.4.19+, and three in 2.4.24+.
> 

But why does it matter that h2 was added in 2.4.x instead of
a 2.6.0? Every new feature must bump the minor? Even if
there is no corresponding ABI issue?

You wrote:
  It makes such little sense to land h2 support in 2.4.something, as
  opposed to having it as an official "brand new, try it out" subproject
  first, and then bundle it with 2.6.

h2 was a '"brand new, try it out" subproject', external
to httpd. It was brought in via a generous donation of
code and because there was a desire and "need" for it to
be an official part of the distro.

In general, even though PHP says that all "New features"
must go into a minor bump, this has never been the
case for httpd. Otherwise we'd be up to version 2.223.x
by now, after having left version 1.6542.x :)

And finally, it's not quite apples/apples comparing
language version numbering to *server software* versioning,
especially when there is a large, external module eco-system
for the server software that relies on the minor number
being "only" ABI related.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
All informative feedback is welcome on this /discussion/ thread.

Jim, again, stop. Bullying list watchers with negative feedback
into silence is a CoC violation.

David, thank you for your detailed feedback. We are reading,
whether the feedback is warmly received or not.



On Thu, Apr 19, 2018 at 1:25 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>
>> On Apr 19, 2018, at 11:55 AM, David Zuelke <dz...@salesforce.com> wrote:
>>
>>
>> I hate to break this to you, and I do not want to discredit the
>> amazing work all the contributors here are doing, but httpd 2.4 is of
>> miserable, miserable quality when it comes to breaks and regressions.
>>
>
> Gee Thanks! That is an amazing compliment to be sure. I have
> NO idea how ANYONE could take that in any way as discrediting
> the work being done.
>
> Sarcasm aside, could we do better? Yes. Can we do better? Yes.
> Should we do better? Yes. Will we do better? Yes.
>
> BTW, you DID see how h2 actually came INTO httpd, didn't you??
>

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 19 Apr 2018, at 10:35 PM, David Zuelke <dz...@salesforce.com> wrote:

> Of course, but that's exactly my point. It was introduced not in
> 2.4.0, but in 2.4.17. Five "H2…" config directives are available in
> 2.4.18+ only, one in 2.4.19+, and three in 2.4.24+.

H2 support was marked as “experimental” in the versions you are listing, and so changes to these directives were entirely expected as per our process.

As per our changelog, H2 support was marked as fully production ready and therefore subject to our normal versioning rules as of v2.4.26:

http://www-eu.apache.org/dist//httpd/CHANGES_2.4

Our process allows the introduction of new features for the reasons already explained previously.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
On Thu, Apr 19, 2018 at 11:07 PM, Mark Blackman <ma...@exonetric.com> wrote:
>
>
>> On 19 Apr 2018, at 21:35, David Zuelke <dz...@salesforce.com> wrote:
>>
>> I'm not saying no directives should ever be added in point releases or
>> anything, but the constant backporting of *features* to 2.4 has
>> contributed to the relatively high number of regressions, and to a
>> lack of progress on 2.6/3.0, because, well, if anything can be put
>> into 2.4.next, why bother?
>>
>> David
>
> What’s the rule for *features*?

That remains to be defined. Generally, I'd say anything that doesn't
correct existing functionality, or anything that changes defaults, or
anything that changes behavior with existing settings, is a
feature/break/change and not a fix, so would belong in 2.next.0.

More or less the Semver approach, essentially.

See e.g. https://wiki.php.net/rfc/releaseprocess

David

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Mark Blackman <ma...@exonetric.com>.

> On 19 Apr 2018, at 21:35, David Zuelke <dz...@salesforce.com> wrote:
> 
> On Thu, Apr 19, 2018 at 8:25 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>> 
>> 
>>> On Apr 19, 2018, at 11:55 AM, David Zuelke <dz...@salesforce.com> wrote:
>>> 
>>> 
>>> I hate to break this to you, and I do not want to discredit the
>>> amazing work all the contributors here are doing, but httpd 2.4 is of
>>> miserable, miserable quality when it comes to breaks and regressions.
>>> 
>> 
>> Gee Thanks! That is an amazing compliment to be sure. I have
>> NO idea how ANYONE could take that in any way as discrediting
>> the work being done.
>> 
>> Sarcasm aside, could we do better? Yes. Can we do better? Yes.
>> Should we do better? Yes. Will we do better? Yes.
>> 
>> BTW, you DID see how h2 actually came INTO httpd, didn't you??
> 
> Of course, but that's exactly my point. It was introduced not in
> 2.4.0, but in 2.4.17. Five "H2…" config directives are available in
> 2.4.18+ only, one in 2.4.19+, and three in 2.4.24+.
> 
> I'm not saying no directives should ever be added in point releases or
> anything, but the constant backporting of *features* to 2.4 has
> contributed to the relatively high number of regressions, and to a
> lack of progress on 2.6/3.0, because, well, if anything can be put
> into 2.4.next, why bother?
> 
> David

What’s the rule for *features*?

- Mark

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
On Thu, Apr 19, 2018 at 8:25 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>
>> On Apr 19, 2018, at 11:55 AM, David Zuelke <dz...@salesforce.com> wrote:
>>
>>
>> I hate to break this to you, and I do not want to discredit the
>> amazing work all the contributors here are doing, but httpd 2.4 is of
>> miserable, miserable quality when it comes to breaks and regressions.
>>
>
> Gee Thanks! That is an amazing compliment to be sure. I have
> NO idea how ANYONE could take that in any way as discrediting
> the work being done.
>
> Sarcasm aside, could we do better? Yes. Can we do better? Yes.
> Should we do better? Yes. Will we do better? Yes.
>
> BTW, you DID see how h2 actually came INTO httpd, didn't you??

Of course, but that's exactly my point. It was introduced not in
2.4.0, but in 2.4.17. Five "H2…" config directives are available in
2.4.18+ only, one in 2.4.19+, and three in 2.4.24+.

I'm not saying no directives should ever be added in point releases or
anything, but the constant backporting of *features* to 2.4 has
contributed to the relatively high number of regressions, and to a
lack of progress on 2.6/3.0, because, well, if anything can be put
into 2.4.next, why bother?

David

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 19, 2018, at 11:55 AM, David Zuelke <dz...@salesforce.com> wrote:
> 
> 
> I hate to break this to you, and I do not want to discredit the
> amazing work all the contributors here are doing, but httpd 2.4 is of
> miserable, miserable quality when it comes to breaks and regressions.
> 

Gee Thanks! That is an amazing compliment to be sure. I have
NO idea how ANYONE could take that in any way as discrediting
the work being done.

Sarcasm aside, could we do better? Yes. Can we do better? Yes.
Should we do better? Yes. Will we do better? Yes.

BTW, you DID see how h2 actually came INTO httpd, didn't you??


AW: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: David Zuelke <dz...@salesforce.com>
> Gesendet: Montag, 23. April 2018 18:09
> An: dev@httpd.apache.org
> Betreff: Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost
> configs)
> 
> On Sat, Apr 21, 2018 at 12:45 PM, Graham Leggett <mi...@sharp.fm>
> wrote:
> > On 19 Apr 2018, at 5:55 PM, David Zuelke <dz...@salesforce.com>
> wrote:
> >
> >> I hate to break this to you, and I do not want to discredit the
> >> amazing work all the contributors here are doing, but httpd 2.4 is of
> >> miserable, miserable quality when it comes to breaks and regressions.
> >>
> >> I maintain the PHP/Apache/Nginx infrastructure at Heroku, and I was
> >> able to use the following httpd releases only in the last ~2.5 years:
> >>
> >> - 2.4.16
> >> - 2.4.18
> >> - 2.4.20
> >> - 2.4.29
> >> - 2.4.33
> >>
> >> Mostly because of regressions around mod_proxy(_fcgi), REDIRECT_URL,
> whatever.
> >
> > Did you bring these regressions to our attention? Regressions get fixed
> very quickly - there was an 18 month period between 2.4.20 and 2.4.29,
> what stopped it being possible to upgrade in that time?
> 
> I double checked. It was actually 2.4.27, not 2.4.29; 15 months, not 18.
> My bad.
> 
> The issue was the PHP-FPM + mod_proxy_fcgi regression introduced in
> 2.4.21; it got reported pretty quickly but took a while to address.
> 
> It finally got fixed in 2.4.26:
> https://bz.apache.org/bugzilla/show_bug.cgi?id=60576
> 
> But that fix broke SCRIPT_NAME:
> https://bz.apache.org/bugzilla/show_bug.cgi?id=61202
> 
> So 2.4.27 was functional again.
> 
> That means between April 11, 2016, and July 11, 2017, httpd with
> mod_proxy_fcgi and PHP-FPM was broken.
> 
> The original change was made against trunk
> (https://bz.apache.org/bugzilla/show_bug.cgi?id=59618) and then
> backported to 2.4.next.

Which was an unfortunate regression that took a long time to be fixed
correctly, but as far as I remember it did not involve any new features or
refactoring. "Just" a bugfix that caused a regression that was hard to fix.

> 
> >
> > (As other people have said, there was no release between 2.4.16 and
> 2.4.18, 2.4.19 was replaced two weeks later, and there were no releases
> for you to have used between v2.4.29 and 2.4.33)
> >
> >> This is not any person's fault. This is the fault of the process. The
> >> process can be repaired: bugfixes only in 2.4.x, do RC cycles for
> >> bugfix releases as well (that alone makes the changelog look a lot
> >> less confusing, which is useful for the project's image, see also the
> >> Nginx marketing/FUD discussion in the other thread), and start
> testing
> >> new features in modules first.
> >
> > Unfortunately this misses a fundamental reality of what the httpd
> project is - we are the foundation under many many other things, and
> when we jump from v2.4.x to v2.6.x, our complete ecosystem above us
> needs to be recompiled.
> 
> Going from 2.4.x to 2.6.0 doesn't mean that 2.4.x would no longer be
> maintained. There could easily be some predictable, defined support
> policy for keeping older versions alive.

Which requires enough manpower to maintain all these branches. Looking at the past
15 years, we never maintained more than two stable branches at the same time.
This is no argument against 2.6.0, as we currently only maintain 2.4.x, but against
having a 2.6+n.0 each time we want to add new features while keeping 2.6+n-x.0 maintained.
I think experience shows that if we released 2.6, the activity to backport new
features to 2.4 would drop over time.

> 
> The other thing is ABI versus API stability; you could say 2.x.
> versions retain API compatibility, but not ABI compatibility, so while
> modules would have to be rebuilt against newer version series, that
> could in virtually all cases happen without having to touch the
> module's code.

This might work with open source modules, but even there you would lose the option,
e.g. on LTS distributions, of compiling your own Apache with later features at the
same ABI level as the OS-delivered Apache and then installing an OS-delivered package
of a module to use with it, instead of needing to compile the module on your own.
It does not work at all with closed source modules, because as soon as these
vendors need to recompile, you will either have to wait for a longer period of
time or need to upgrade their product to a newer version as well.

> 
> >> It makes such little sense to land h2 support in 2.4.something, as
> >> opposed to having it as an official "brand new, try it out"
> subproject
> >> first, and then bundle it with 2.6.
> >
> > Not only does it make sense, but it is vital we do so.
> >
> > We needed to get h2 support into the hands of end users - end users
> who were not going to recompile their entire web stack, who install
> software from distros who are not going to upgrade, who were deploying
> modules from vendors that were not going to recompile.
> 
> But that's what I'm saying. Why was h2 not kept as a module (for the
> few people that are already deploying HTTP/2 stacks), let it mature

It is a module. Nobody was forced to use it. As mentioned, it was experimental
in the beginning. This means it was not compiled by default, and the backport
procedure was much faster and less strict (this changed as soon as the experimental
status was removed). But it matured very quickly that way, as it was already used
by lots of users who wanted to use this experimental module and who understood
that its status was experimental. People who did not want to take the risk just did not use it.

> this way, and then put it into everyone's hands as part of 2.6.0,
> which could be the big shiny new feature, to give everyone an
> incentive to move to that new major version?
> 
> > Our average user will deploy whatever comes by default on their
> operating system, they’re not going to have a dedicated team that
> deploys a custom stack for their application. It is vital we respect the
> needs of these groups of users.
> 
> That is even more of an argument to move to a more predictable cycle
> and have patch releases only fix issues, because it means new features
> see the light of day more quickly, so more people who just use what
> comes with their OS would benefit from them.
> 
> Nobody who uses Apache as part of Debian, Ubuntu, RHEL, whatever, gets
> new 2.4.next features. Those distros freeze Apache at whatever is the
> latest version when their cutoff date is due, and then only backport
> security fixes.

This is true, but they wouldn't get them under a different versioning scheme
either. With the current approach, though, they can just compile Apache
themselves and still use the 3rd party modules from the distribution.

Regards

Rüdiger

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
On Sat, Apr 21, 2018 at 12:45 PM, Graham Leggett <mi...@sharp.fm> wrote:
> On 19 Apr 2018, at 5:55 PM, David Zuelke <dz...@salesforce.com> wrote:
>
>> I hate to break this to you, and I do not want to discredit the
>> amazing work all the contributors here are doing, but httpd 2.4 is of
>> miserable, miserable quality when it comes to breaks and regressions.
>>
>> I maintain the PHP/Apache/Nginx infrastructure at Heroku, and I was
>> able to use the following httpd releases only in the last ~2.5 years:
>>
>> - 2.4.16
>> - 2.4.18
>> - 2.4.20
>> - 2.4.29
>> - 2.4.33
>>
>> Mostly because of regressions around mod_proxy(_fcgi), REDIRECT_URL, whatever.
>
> Did you bring these regressions to our attention? Regressions get fixed very quickly - there was an 18 month period between 2.4.20 and 2.4.29, what stopped it being possible to upgrade in that time?

I double checked. It was actually 2.4.27, not 2.4.29; 15 months, not 18. My bad.

The issue was the PHP-FPM + mod_proxy_fcgi regression introduced in
2.4.21; it got reported pretty quickly but took a while to address.

It finally got fixed in 2.4.26:
https://bz.apache.org/bugzilla/show_bug.cgi?id=60576

But that fix broke SCRIPT_NAME:
https://bz.apache.org/bugzilla/show_bug.cgi?id=61202

So 2.4.27 was functional again.

That means between April 11, 2016, and July 11, 2017, httpd with
mod_proxy_fcgi and PHP-FPM was broken.

The original change was made against trunk
(https://bz.apache.org/bugzilla/show_bug.cgi?id=59618) and then
backported to 2.4.next.

>
> (As other people have said, there was no release between 2.4.16 and 2.4.18, 2.4.19 was replaced two weeks later, and there were no releases for you to have used between v2.4.29 and 2.4.33)
>
>> This is not any person's fault. This is the fault of the process. The
>> process can be repaired: bugfixes only in 2.4.x, do RC cycles for
>> bugfix releases as well (that alone makes the changelog look a lot
>> less confusing, which is useful for the project's image, see also the
>> Nginx marketing/FUD discussion in the other thread), and start testing
>> new features in modules first.
>
> Unfortunately this misses a fundamental reality of what the httpd project is - we are the foundation under many many other things, and when we jump from v2.4.x to v2.6.x, our complete ecosystem above us needs to be recompiled.

Going from 2.4.x to 2.6.0 doesn't mean that 2.4.x would no longer be
maintained. There could easily be some predictable, defined support
policy for keeping older versions alive.

The other thing is ABI versus API stability; you could say 2.x
versions retain API compatibility, but not ABI compatibility, so while
modules would have to be rebuilt against newer version series, that
could in virtually all cases happen without having to touch the
module's code.

>> It makes such little sense to land h2 support in 2.4.something, as
>> opposed to having it as an official "brand new, try it out" subproject
>> first, and then bundle it with 2.6.
>
> Not only does it make sense, but it is vital we do so.
>
> We needed to get h2 support into the hands of end users - end users who were not going to recompile their entire web stack, who install software from distros who are not going to upgrade, who were deploying modules from vendors that were not going to recompile.

But that's what I'm saying. Why was h2 not kept as a module (for the
few people that are already deploying HTTP/2 stacks), let it mature
this way, and then put it into everyone's hands as part of 2.6.0,
which could be the big shiny new feature, to give everyone an
incentive to move to that new major version?

> Our average user will deploy whatever comes by default on their operating system, they’re not going to have a dedicated team that deploys a custom stack for their application. It is vital we respect the needs of these groups of users.

That is even more of an argument to move to a more predictable cycle
and have patch releases only fix issues, because it means new features
see the light of day more quickly, so more people who just use what
comes with their OS would benefit from them.

Nobody who uses Apache as part of Debian, Ubuntu, RHEL, whatever, gets
new 2.4.next features. Those distros freeze Apache at whatever is the
latest version when their cutoff date is due, and then only backport
security fixes.

>> Really, I'd suggest taking a close look at the PHP release cycle, with
>> their schedules, their RFC policies, everything. As I said in that
>> other thread, the PHP project was in exactly the same spot a few years
>> ago and they have pulled themselves out of that mess with amazing
>> results.
>
> Specifically what about the php release cycle are you referring to? I was burned badly a number of years ago by php config file formats being changed in point releases, have they improved their stability?

Yes. Nothing like that has happened since PHP 5.4, when the new
release process started and all of this got fixed. I explained this
several times in this thread, other threads, and the thread a year
plus ago I linked to.

David

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 23, 2018, at 10:22 AM, Graham Leggett <mi...@sharp.fm> wrote:
> 
> My perl knowledge is very rusty, making perl tests is going to be harder for some than others.
> 

Yeah, that IS an issue. It is also not as well documented as desired[1].

Should we look at using something external as well, to complement/supplement it? Or even start adding some specific tests under the ./test subdirectory in the repo. Maybe say that the requirement is some sort of test "bundled" w/ the feature; it doesn't need to be under the perl test framework. Or maybe some way the perl test framework can call other test scripts written in whatever language someone wants; it simply sets things up, lets the script run, and checks the return status.


1. https://perl.apache.org/docs/general/testing/testing.html
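The "run any script, check the return status" idea above can be sketched in a few lines; this is a hypothetical illustration (names invented here), not an existing httpd tool:

```python
# Minimal runner sketch: execute arbitrary test scripts via the shell
# and treat a zero exit status as a pass, so tests can be written in
# any language the contributor prefers.

import subprocess
import sys

def run_tests(scripts):
    """Run each script; return a list of (script, passed) pairs."""
    results = []
    for script in scripts:
        proc = subprocess.run(script, shell=True)
        results.append((script, proc.returncode == 0))
    return results

if __name__ == "__main__":
    outcome = run_tests(sys.argv[1:])
    for script, ok in outcome:
        print(("PASS" if ok else "FAIL"), script)
    sys.exit(0 if all(ok for _, ok in outcome) else 1)
```

A real integration would also set up the server environment (config, ports) before handing control to the script, as the perl framework does today.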

Re: A proposal...

Posted by Graham Leggett <mi...@sharp.fm>.
On 23 Apr 2018, at 4:00 PM, Jim Jagielski <ji...@jaguNET.com> wrote:

> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.

+1.

> We have a test framework. The questions are:
> 
> 1. Are we using it?

Is there a CI set up for building httpd?

Is there a CI available we could use to trigger the test suite on a regular basis?

(I believe the answer is yes for APR).

> 2. Are we using it sufficiently well?
> 3. If not, what can we do to improve that?
> 4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

My perl knowledge is very rusty, making perl tests is going to be harder for some than others.

Regards,
Graham
—


AW: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: Stefan Eissing <st...@greenbytes.de>
> Gesendet: Montag, 23. April 2018 17:08
> An: dev@httpd.apache.org
> Betreff: Re: A proposal...
> 
> Such undocumented and untested behaviour, which nevertheless is
> considered a regression, cannot be avoided, since it cannot be
> anticipated by people currently working on those code parts. This is a
> legacy of the past, it seems, which we can only overcome by breakage and
> resulting, added test cases.
> 
> > In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons why
> it's not required.
> 
> See above, this will not fix the unforeseeable breakage that results
> from use cases unknown and untested.
> 

Agreed. Even if we do perfect testing for all new stuff, it will take time until we see positive results, as the past will hurt us here for a while. So we shouldn't give up too fast if we do not see positive results immediately 😊

Regards

Rüdiger

Re: A proposal...

Posted by Stefan Eissing <st...@greenbytes.de>.

> Am 23.04.2018 um 17:07 schrieb Stefan Eissing <st...@greenbytes.de>:
> 
> I do that for stuff I wrote myself. Not because I care only about that, but because the coverage and documentation of other server parts does give me an idea of what should work and what should not. So, I am the 

*the coverage and documentation of other server parts does *NOT* give me

Re: A proposal...

Posted by Stefan Eissing <st...@greenbytes.de>.

> Am 23.04.2018 um 16:00 schrieb Jim Jagielski <ji...@jaguNET.com>:
> 
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:

Personal view/usage answers:

> 1. Are we using it?

On release candidates only.

> 2. Are we using it sufficiently well?

 * I only added very basic tests for h2, since Perl's capabilities here are rather limited.
 * the whole framework was hard to figure out. It took me a while to get vhost setups working.

> 3. If not, what can we do to improve that?

 * A CI setup would help.
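A hypothetical sketch of such a setup (nothing we run today; the paths, apxs location, and framework checkout are assumptions on my part) could be a Travis-style config that builds httpd and runs the Perl framework on every push:

```yaml
# Hypothetical CI sketch, not an existing config: build httpd and run the
# Perl test framework. Assumes APR/APR-util are installed and the test
# framework is checked out under test/framework.
language: c
install:
  - ./buildconf
  - ./configure --prefix="$HOME/httpd-inst" --enable-ssl
  - make -j2 && make install
script:
  - cd test/framework
  - perl Makefile.PL -apxs "$HOME/httpd-inst/bin/apxs"
  - make && t/TEST
```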

> 4. Can we supplement/replace it w/ other frameworks?

 * For mod_h2 I started with just shell scripts. Those still make up my h2 test suite,
   using the nghttp and curl clients as well as Go (if available).
 * For mod_md I used pytest, which I found an excellent framework. The test suite
   is available in the GitHub repository of mod_md.
 * Based on Robert Swiecki's honggfuzz, there is an h2fuzz project for fuzzing
   our server at https://github.com/icing/h2fuzz. This works very well on a
   Linux-style system.

So, I do run a collection of things. All are documented, but none is really tied into
the httpd testing framework.
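To show the pytest pattern I mean without requiring an httpd build, here is a self-contained sketch; a stand-in Python server replaces httpd, and against a real build the client would instead point at the configured vhost:

```python
# Sketch of the pytest pattern: start a server in a fixture, issue
# requests, assert on status and body. The Handler below is a stand-in
# for a running httpd, so the example is runnable on its own.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import pytest


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep test output quiet
        pass


@pytest.fixture(scope="module")
def server():
    # Port 0 lets the OS pick a free port; the fixture yields (host, port).
    httpd = HTTPServer(("127.0.0.1", 0), Handler)
    thread = threading.Thread(target=httpd.serve_forever, daemon=True)
    thread.start()
    yield httpd.server_address
    httpd.shutdown()


def test_get_returns_200(server):
    host, port = server
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", "/")
    resp = conn.getresponse()
    assert resp.status == 200
    assert resp.read() == b"ok"
```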

> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.

I do that for stuff I wrote myself. Not because I care only about that, but because the coverage and documentation of other server parts does give me an idea of what should work and what should not. So, I am the wrong guy to place assertions into test cases for those code parts.

Example: the current mod_ssl "enabled" quirkiness discovered by Joe would ideally be documented now in a new test case. But neither I nor Yann would have found that before release via testing (the tests worked), nor did we anticipate such breakage.

Such undocumented and untested behaviour, which nevertheless is considered a regression, cannot be avoided, since it cannot be anticipated by people currently working on those code parts. This is a legacy of the past, it seems, which we can only overcome by breakage and resulting, added test cases.

> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

See above, this will not fix the unforeseeable breakage that results from use cases unknown and untested.

-Stefan

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
At the pace of our (currently 'minor', in contrast to 'patch') releases, there
are about 2-4 per year. I agree with the idea of monthly bug-fix patch
releases.

Declaring the first minor of each year as LTS for 2 years, we could get
security fixes into legacy users' hands. It would be a good starting point
for anyone trying to patch some version between LTS and LTS-1.

Those that don't update for years seem to rarely pay much attention to
vulnerabilities anyways, and distributors choose their own path, so this
seems like a good compromise.

Security fixes -> trunk (next minor) -> current minor -> last LTS
major.minor -> previous LTS major.minor.

I agree with Eric that optionally enabling a fix during the current minor
might be useful (think HTTP_PROXY protection), but these would rarely map
to the behavior of the next version minor (optional for patch, but default
to new recommended behavior in next version minor.)




On Tue, Apr 24, 2018, 13:29 Eric Covener <co...@gmail.com> wrote:

> > Should we also need some kind of LTS version? If yes, how to choose them?
> I think it would be required with frequent minor releases.
>

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
> Should we also need some kind of LTS version? If yes, how to choose them?
I think it would be required with frequent minor releases.

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
Le 24/04/2018 à 19:58, Daniel Ruggeri a écrit :
> One thing you mention above is "wait for a new minor release". I can 
> definitely see that being an issue for our current maj.minor layout 
> given that minor bumps are measured in years. In this proposal, unless 
> there's a pressing need to send out a patch release right now, the 
> next version WOULD be that minor bump. Put into practice, I would see 
> major bumps being measured in years, minor bumps in quarters and patch 
> bumps in weeks/months.
I think the same.
But we should be clear on how long we maintain each version and the 
effort needed for that.

How long do we backport bug fixes?
How long do we fix security issues?
Should we also need some kind of LTS version? If yes, how to choose 
them? M.0.0 version? In an unpredictable way as Linux does, "when it's 
time for it"? On a timely basis as Ubuntu does?

2.2 vs 2.4 was already not that active in the last months/years of 2.2, 
as already discussed on the list.
I'm a bit reluctant to backport things to, let's say, 4 minor branches 
because we maintain them for 1 year (4 quarters), plus 1 or maybe even 2 LTS 
branches!
To dare to go this way, either we need much more manpower (and I'm 
pleased to see many names active on the list these days), or we should 
avoid writing bugs, so we don't have to maintain fixes for them :)

CJ

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Wed, Apr 25, 2018 at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> On Tue, Apr 24, 2018 at 3:46 PM, Eric Covener <co...@gmail.com> wrote:
>>>> One thing you mention above is "wait for a new minor release". I can
>>>> definitely see that being an issue for our current maj.minor layout given
>>>> that minor bumps are measured in years. In this proposal, unless there's a
>>>> pressing need to send out a patch release right now, the next version WOULD
>>>> be that minor bump. Put into practice, I would see major bumps being
>>>> measured in years, minor bumps in quarters and patch bumps in weeks/months.
>>
>> I don't see how the minor releases would be serviceable for very long
>> there. If they're not serviceable,
>> then users have to move up anyway, then you're back at the status quo
>> with the dot in a different place.
>
> I don't see where a version minor will be serviced for a particularly long
> time after the next minor is released *as GA*. So, if version 3.5.0 comes
> along and introduces some rather unstable or unproved code, and gets
> the seal of approval as -alpha... 3.5.1 is a bit better but has known bugs,
> it earns a -beta. Finally 3.5.2 is released as GA. In all of that time, I'd
> expect the project continues to fix defects in 3.4.x on a very regular
> basis, not expecting anyone to pick up 3.5 during that time. This is what
> substantially differs from using our least significant revision element
> for both minor and patch scope changes.

Thanks Bill. This aspect does look helpful.

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Thu, Apr 26, 2018 at 10:13 AM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>> On Apr 25, 2018, at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>
>> Because of our conflation of patch and enhancement,
>
> It is hardly just "our"... tons and tons of s/w uses the patch number
> bump for not only patches but also features and enhancements, including
> our "competitors".
>
> I still fail to see anything which leads me to believe that our numbering
> is the core issue that must be fixed. I am extremely open to my mind being
> changed :)

It is not numbering, for sure. There are dozens of approaches to that which
throw all the changes into the blender, and dozens of approaches which keep
bug fixes and maintenance distinct from enhancements and feature creep.
Semantic Versioning is only interesting here because it is a strategy that
was successfully adopted by the APR project, our fellow Subversion project,
and even our highly valued nghttp2 dependency.

What I've seen suggested is to put httpd into a "sleep" mode for the
duration of a longer-running RC cycle of a week or so, easily stretched out
into a month or more for complex additions. Putting the release branch to
sleep for a month impedes new development efforts, and puts us in a pretty
precarious position if a zero-day or very critical security report surfaces.
A feature or change is half ready, so such critical fixes come with new risk.

Any policy and versioning scheme which allows maintenance changes to be
released on a regular basis, and allows new development to proceed at full
steam when any particular committer is able to contribute their volunteer
time,
and lets the community test those enhancements and experiments without
disrupting users relying on a stable platform would be a win. Modifying the
RC proposal to fork from the time of -rc1 and have a 2.4.34 release branch
for a month, while 2.4.35 continues at pace is one alternative solution.

There, the -rc is simply a different wording of -alpha/-beta. This means
2.4.35 may be released with critical bug fixes long before 2.4.34 is ready
to go. Or we renumber the 2.4.34 branch to 2.4.35 and release a very limited
2.4.34 of strictly critical fixes, curated in a rush, when such events
happen or a serious regression occurs. For early adopters at 2.4.34-rc1,
editing all of the associated docs to reflect the renumbering would be a
headache. This is part of why httpd declares that version numbers are cheap.

What seems to be agreed is that the even-odd way of approaching things
was a short-term fix which didn't move us very quickly, bogged down major
new efforts, and sits basically abandoned.

What seems apparent is that conflating enhancements with getting fixes
into users' hands means that users don't get fixes, including for
enhancements recently introduced, for months on end. Reflecting on our
current state and six years of activity, you can look at this from the
lens of using RCs or semver semantics:

 tag     mos (since prior GA tag)
2.4.33 GA  5mos Mar 17 18 minor
2.4.32 rc  5mos Mar 09 18 minor-beta
2.4.31 nr  5mos Mar 03 18 minor-beta
2.4.30 nr  4mos Feb 19 18 minor-beta (security +1 mos GA delay)
2.4.29 GA  1mos Oct 17 17 minor
2.4.28 GA  2mos Sep 25 17 minor (security)
2.4.27 GA  1mos Jul  6 17 patch (security)
2.4.26 GA  6mos Jun 13 17 minor (security)
2.4.25 GA  6mos Dec 16 16 minor (security)
2.4.24 nr  6mos Dec 16 16 minor-beta
2.4.23 GA  3mos Jun 30 16 minor (security)
2.4.22 nr  3mos Jun 20 16 minor-beta
2.4.21 nr  3mos Jun 16 16 minor-beta
2.4.20 GA  4mos Apr  4 16 minor (security)
2.4.19 nr  3mos Mar 21 16 minor-beta
2.4.18 GA  2mos Dec  8 15 minor
2.4.17 GA  3mos Oct  9 15 minor
2.4.16 GA  6mos Jul  9 15 minor (security +5 mos GA delay)
2.4.15 nr  5mos Jun 19 15 minor-beta
2.4.14 nr  5mos Jun 11 15 minor-beta
2.4.13 nr  5mos Jun  4 15 minor-beta
2.4.12 GA  6mos Jan 22 15 minor (security +2 mos GA delay)
2.4.11 nr  6mos Jan 15 15 minor-beta
2.4.10 GA  4mos Jul 15 14 minor (security)
 2.4.9 GA  4mos Mar 13 14 minor (security)
 2.4.8 nr  4mos Mar 11 14 minor-beta
 2.4.7 GA  4mos Nov 19 13 minor (security)
 2.4.6 GA  5mos Jul 15 13 minor (security +2 mos GA delay)
 2.4.5 nr  5mos Jul 11 13 minor-beta
 2.4.4 GA  6mos Feb 18 13 minor (security)
 2.4.3 GA  4mos Aug 17 12 minor (security +2 mos GA delay)
 2.4.2 GA  2mos Apr  5 12 minor (security +1 mos GA delay)
 2.4.1 GA 38mos Feb 13 12 major
 2.4.0 nr 37mos Jan 16 12 major-beta
2.3.16 rc 36mos Dec 15 11 major-beta
 2.3.0 rc start Dec  6 08 major-beta

2.4.27 illustrates that we can turn around a patch quickly when the
other churn is excluded. (2.4.29 illustrates that we can even add
new features and release a minor update in a month, but our track
record proves this is the exception, not the rule.)

Our present versioning schema doesn't allow us to deliver this
software on a consistent, predictable, or stable basis. That's
why I started the conversation wide open to different versioning
schemas and policy suggestions. There are lots of alternatives,
starting with issuing easy-to-review patch releases which are
not overloaded with all the new goodies that slow down putting
our fixes in users' hands promptly.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 25, 2018, at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> Because of our conflation of patch and enhancement,

It is hardly just "our"... tons and tons of s/w uses the patch number bump for not only patches but also features and enhancements, including our "competitors".

I still fail to see anything which leads me to believe that our numbering is the core issue that must be fixed. I am extremely open to my mind being changed :)

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 3:46 PM, Eric Covener <co...@gmail.com> wrote:
>>> One thing you mention above is "wait for a new minor release". I can
>>> definitely see that being an issue for our current maj.minor layout given
>>> that minor bumps are measured in years. In this proposal, unless there's a
>>> pressing need to send out a patch release right now, the next version WOULD
>>> be that minor bump. Put into practice, I would see major bumps being
>>> measured in years, minor bumps in quarters and patch bumps in weeks/months.
>
> I don't see how the minor releases would be serviceable for very long
> there. If they're not serviceable,
> then users have to move up anyway, then you're back at the status quo
> with the dot in a different place.

I don't see where a version minor will be serviced for a particularly long
time after the next minor is released *as GA*. So, if version 3.5.0 comes
along and introduces some rather unstable or unproved code, and gets
the seal of approval as -alpha... 3.5.1 is a bit better but has known bugs,
it earns a -beta. Finally 3.5.2 is released as GA. In all of that time, I'd
expect the project continues to fix defects in 3.4.x on a very regular
basis, not expecting anyone to pick up 3.5 during that time. This is what
substantially differs from using our least significant revision element
for both minor and patch scope changes.

If we adopt this as 3.0.0 to start, the 2.4.x users would continue to need
security fixes for some time. When 4.0.0 is done in another decade,
again 3.x.n users will be the ones needing help for some time.

What the change accomplishes is that new development is never a gating
factor of creating a patch release. Contrariwise, reliable patch delivery is
no longer a gating factor to new development. Each lives on its own track,
and successful new development supersedes the previous version minor.

Because of our conflation of patch and enhancement, the issue you had
brought up, HttpProtocolOptions, occurred "as a release". But I'd suggest
that if 2.2 and 2.4 were each "major versions" (as users and developers
understand that term), I would have submitted such a radical refactoring
as a new version minor of each of those two flavors. Note that some of
those actual changes would likely have occurred some 4 years previous,
when first proposed, had trunk not been removed from the release
continuum for 6 years.

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
>> One thing you mention above is "wait for a new minor release". I can
>> definitely see that being an issue for our current maj.minor layout given
>> that minor bumps are measured in years. In this proposal, unless there's a
>> pressing need to send out a patch release right now, the next version WOULD
>> be that minor bump. Put into practice, I would see major bumps being
>> measured in years, minor bumps in quarters and patch bumps in weeks/months.

I don't see how the minor releases would be serviceable for very long
there. If they're not serviceable, then users have to move up anyway,
and then you're back at the status quo with the dot in a different place.

>>> For me including this would poison almost any proposal it is added to.
>>> In the context above: I want to use directives for opt-in of fixes in
>>> a patch release.
>>
>>
>> FWIW, I propose that a directive addition would be a minor bump because
>> directives are part of a configuration "contract" with users - a set of
>> directives that exist in that major.minor. By adding directives in a patch,
>> we break the contract that would state "Any configuration valid in 3.4.x
>> will always be valid in 3.4.x." We can't do that today, but it would be
>> great if we could. Adding directives only in a minor bump provides a clean
>> point at which a known set of directives are valid.

I don't see the value in a backwards-compatible configuration
contract; why would we tie our hands like that? Does anyone see this
aspect as an issue if it's orthogonal to new function?

Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
Am 24.04.2018 um 19:58 schrieb Daniel Ruggeri:
> On 2018-04-24 09:22, Eric Covener wrote:
>> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr 
>> <wr...@rowe-clan.net> wrote:
>>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>>>> Yes, exactly correct. We have three "contracts" to keep that I 
>>>>> think aligns very well with the following semver "contracts":
>>>>> Major => API/ABI compatibility for modules
>>>>> Minor => Feature and directives
>>>>> Patch => Functional and configuration syntax guarantees
>>>>>
>>>>> Demonstrating by way of a few examples:
>>>>> If we add a directive but do not change exported structure, that 
>>>>> would result in a minor bump since the directive is part of the 
>>>>> feature set that would necessitate a config change to use (not 
>>>>> forward compatible).
>>>>
>>>> I don't agree that adding directives is adding function,  in terms of
>>>> versioning or user expectations.  I don't see why it a new directive
>>>> or parameter should necessarily wait for a new minor release
>>>> especially when there's so much sensitivity to behavior changes. It
>>>> seems backwards.
>>>
>>> As a general rule, adding a directive introduces a new feature, along
>>> with new functions, and structure additions.
>>
>> I won't argue the semantics any further, but I don't agree there is
>> any such equivalence or general rule.
> 
> One thing you mention above is "wait for a new minor release". I can 
> definitely see that being an issue for our current maj.minor layout 
> given that minor bumps are measured in years. In this proposal, unless 
> there's a pressing need to send out a patch release right now, the next 
> version WOULD be that minor bump. Put into practice, I would see major 
> bumps being measured in years, minor bumps in quarters and patch bumps 
> in weeks/months.
> 
>>
>> For me including this would poison almost any proposal it is added to.
>> In the context above: I want to use directives for opt-in of fixes in
>> a patch release.
> 
> FWIW, I propose that a directive addition would be a minor bump because 
> directives are part of a configuration "contract" with users - a set of 
> directives that exist in that major.minor. By adding directives in a 
> patch, we break the contract that would state "Any configuration valid 
> in 3.4.x will always be valid in 3.4.x." We can't do that today, but it 
> would be great if we could. Adding directives only in a minor bump 
> provides a clean point at which a known set of directives are valid.

When directives control new features, I would totally agree. An example 
that might be harder to decide was the security hardening a little while 
ago, where the parsing of request lines was made much stricter. For 
security reasons this became the default, but for interoperability with 
broken clients we allowed the strict parser to be switched off by a new 
directive.

It was a security patch, so it should become part of a patch release, but 
due to the changed behavior, the directive would also be needed for people 
who prefer the old behavior over enhanced security.
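For reference, the opt-out directive from that hardening (shipped in 2.4.24 and 2.2.32, if I remember the versions right) looks like:

```
# Restore the lenient, pre-hardening request parsing for broken legacy
# clients; this trades away the security benefit, hence not the default.
HttpProtocolOptions Unsafe
```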

If we argued that that hardening was a big enough change to include only 
in a minor release, then we must be aware that people could only use this 
security-enhanced version by also getting all of the other new features 
in that version, which is typically not what you want when you update 
just for security reasons.

Regards,

Rainer

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 01:50 PM, Rainer Jung wrote:
> Am 24.04.2018 um 13:19 schrieb Daniel Ruggeri:
>>
>>
>> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>>
>>>
>>>> -----Ursprüngliche Nachricht-----
>>>> Von: Rainer Jung <ra...@kippdata.de>
>>>> Gesendet: Montag, 23. April 2018 16:47
>>>> An: dev@httpd.apache.org
>>>> Betreff: Re: A proposal...
>>>>
>>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>>> It seems that, IMO, if there was not so much concern about
>>>> "regressions" in releases, this whole revisit-versioning debate would
>>>> not have come up. This implies, to me at least, that the root cause
>>> (as
>>>> I've said before) appears to be one related to QA and testing more
>>> than
>>>> anything. Unless we address this, then nothing else really matters.
>>>>>
>>>>> We have a test framework. The questions are:
>>>>>
>>>>>    1. Are we using it?
>>>>>    2. Are we using it sufficiently well?
>>>>>    3. If not, what can we do to improve that?
>>>>>    4. Can we supplement/replace it w/ other frameworks?
>>>>>
>>>>> It does seem to me that each time we patch something, there should
>>> be
>>>> a test added or extended which covers that bug. We have gotten lax in
>>>> that. Same for features. And the more substantial the change (ie, the
>>>> more core code it touches, or the more it refactors something), the
>>> more
>>>> we should envision what tests can be in place which ensure nothing
>>>> breaks.
>>>>>
>>>>> In other words: nothing backported unless it also involves some
>>>> changes to the Perl test framework or some pretty convincing reasons
>>> why
>>>> it's not required.
>>>>
>>>> I agree with the importance of the test framework, but would also
>>> like
>>>> to mention that getting additional test feedback from the community
>>>> seems also important. That's why IMHO the RC style of releasing could
>>> be
>>>> helpful by attracting more test effort before a release.
>>>
>>> I think RC style releasing could help. Another thought that came to my
>>> mind that
>>> I haven't worked out how we could implement this is the following:
>>>
>>> Do "double releases". Taking the current state we would do:
>>>
>>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>>> fixes / security fixes.
>>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>>> far.
>>>
>>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>>> contains bug fixes / security fixes
>>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>>> 2.4.36.
>>> So 2.4.36 would contain the additional features / improvements we had
>>> in 2.4.35 as well, but they
>>> have been in the "wild" for some time and the issues should have been
>>> identified and fixed as part
>>> of 2.4.36.
>>> Users would then have a choice what to take.
>>>
>>> Regards
>>>
>>> Rüdiger
>>
>> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
>> 2.12.4 => 2.12.3 + fixes
>> 2.13.0 => 2.12.4 + features
>> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number
>> alone would be confusing to the user base.
> 
> ... although at least in the Java world that is what happens there since a few years. For example Java 1.8.0_171
> includes security fixes and critical patches, 1.8.0_172 released at the same day includes additional features. Or as
> Oracle phrases it: "Java SE 8u171 includes important bug fixes. Oracle strongly recommends that all Java SE 8 users
> upgrade to this release. Java SE 8u172 is a patch-set update, including all of 8u171 plus additional bug fixes
> (described in the release notes).".

Damn it. You found the source of my idea :-)

> 
> Unfortunately it seems they have given up the idea starting with Java 9. So pointing to the Java 8 situation is not that
> convincing ...

IMHO the whole Java versioning after 8 is not very appealing. But this is just following Oracle's general new versioning
strategy, which I regard as confusing with respect to support lifecycles.

Regards

Rüdiger

Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
Am 24.04.2018 um 13:19 schrieb Daniel Ruggeri:
> 
> 
> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>
>>
>>> -----Ursprüngliche Nachricht-----
>>> Von: Rainer Jung <ra...@kippdata.de>
>>> Gesendet: Montag, 23. April 2018 16:47
>>> An: dev@httpd.apache.org
>>> Betreff: Re: A proposal...
>>>
>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>> It seems that, IMO, if there was not so much concern about
>>> "regressions" in releases, this whole revisit-versioning debate would
>>> not have come up. This implies, to me at least, that the root cause
>> (as
>>> I've said before) appears to be one related to QA and testing more
>> than
>>> anything. Unless we address this, then nothing else really matters.
>>>>
>>>> We have a test framework. The questions are:
>>>>
>>>>    1. Are we using it?
>>>>    2. Are we using it sufficiently well?
>>>>    3. If not, what can we do to improve that?
>>>>    4. Can we supplement/replace it w/ other frameworks?
>>>>
>>>> It does seem to me that each time we patch something, there should
>> be
>>> a test added or extended which covers that bug. We have gotten lax in
>>> that. Same for features. And the more substantial the change (ie, the
>>> more core code it touches, or the more it refactors something), the
>> more
>>> we should envision what tests can be in place which ensure nothing
>>> breaks.
>>>>
>>>> In other words: nothing backported unless it also involves some
>>> changes to the Perl test framework or some pretty convincing reasons
>> why
>>> it's not required.
>>>
>>> I agree with the importance of the test framework, but would also
>> like
>>> to mention that getting additional test feedback from the community
>>> seems also important. That's why IMHO the RC style of releasing could
>> be
>>> helpful by attracting more test effort before a release.
>>
>> I think RC style releasing could help. Another thought that came to my
>> mind that
>> I haven't worked out how we could implement this is the following:
>>
>> Do "double releases". Taking the current state we would do:
>>
>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>> fixes / security fixes.
>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>> far.
>>
>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>> contains bug fixes / security fixes
>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>> 2.4.36.
>> So 2.4.36 would contain the additional features / improvements we had
>> in 2.4.35 as well, but they
>> have been in the "wild" for some time and the issues should have been
>> identified and fixed as part
>> of 2.4.36.
>> Users would then have a choice what to take.
>>
>> Regards
>>
>> Rüdiger
> 
> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
> 2.12.4 => 2.12.3 + fixes
> 2.13.0 => 2.12.4 + features
> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number alone would be confusing to the user base.

... although at least in the Java world that is what has happened for a 
few years now. For example, Java 1.8.0_171 includes security fixes and 
critical patches, while 1.8.0_172, released the same day, includes 
additional features. Or as Oracle phrases it: "Java SE 8u171 includes 
important bug fixes. Oracle strongly recommends that all Java SE 8 users 
upgrade to this release. Java SE 8u172 is a patch-set update, including 
all of 8u171 plus additional bug fixes (described in the release notes).".

Unfortunately it seems they have given up on the idea starting with Java 9, 
so pointing to the Java 8 situation is not that convincing ...

Regards,

Rainer

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 02:52 PM, Daniel Ruggeri wrote:
> 

> In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config. Ideally, we would also point the changelog entry to the SVN patches which implement the change so downstream has an easier time picking and choosing what they want.
> 

Adding the revision of the backport commit to the CHANGES entry seems like a good idea.

Regards

Rüdiger

Re: A proposal...

Posted by Daniel Ruggeri <dr...@primary.net>.
On 2018-04-24 09:22, Eric Covener wrote:
> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr 
> <wr...@rowe-clan.net> wrote:
>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> 
>> wrote:
>>>> Yes, exactly correct. We have three "contracts" to keep that I think 
>>>> aligns very well with the following semver "contracts":
>>>> Major => API/ABI compatibility for modules
>>>> Minor => Feature and directives
>>>> Patch => Functional and configuration syntax guarantees
>>>> 
>>>> Demonstrating by way of a few examples:
>>>> If we add a directive but do not change exported structure, that 
>>>> would result in a minor bump since the directive is part of the 
>>>> feature set that would necessitate a config change to use (not 
>>>> forward compatible).
>>> 
>>> I don't agree that adding directives is adding function,  in terms of
>>> versioning or user expectations.  I don't see why it a new directive
>>> or parameter should necessarily wait for a new minor release
>>> especially when there's so much sensitivity to behavior changes. It
>>> seems backwards.
>> 
>> As a general rule, adding a directive introduces a new feature, along
>> with new functions, and structure additions.
> 
> I won't argue the semantics any further, but I don't agree there is
> any such equivalence or general rule.

One thing you mention above is "wait for a new minor release". I can 
definitely see that being an issue for our current maj.minor layout 
given that minor bumps are measured in years. In this proposal, unless 
there's a pressing need to send out a patch release right now, the next 
version WOULD be that minor bump. Put into practice, I would see major 
bumps being measured in years, minor bumps in quarters and patch bumps 
in weeks/months.

> 
> For me including this would poison almost any proposal it is added to.
> In the context above: I want to use directives for opt-in of fixes in
> a patch release.

FWIW, I propose that a directive addition would be a minor bump because 
directives are part of a configuration "contract" with users - a set of 
directives that exist in that major.minor. By adding directives in a 
patch, we break the contract that would state "Any configuration valid 
in 3.4.x will always be valid in 3.4.x." We can't do that today, but it 
would be great if we could. Adding directives only in a minor bump 
provides a clean point at which a known set of directives are valid.
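As a rule of thumb, the three contracts could be sketched like this (the category names are just illustrative labels of mine, not anything from the tree):

```python
# Sketch of the semver "contracts" as a bump-decision rule:
# ABI break -> major, new directive/feature -> minor, bugfix -> patch.
def required_bump(abi_break=False, new_directive_or_feature=False,
                  bugfix=False):
    if abi_break:
        return "major"
    if new_directive_or_feature:
        return "minor"
    if bugfix:
        return "patch"
    return None


def next_version(version, bump):
    # version is a (major, minor, patch) tuple; lower fields reset on bump.
    major, minor, patch = version
    if bump == "major":
        return (major + 1, 0, 0)
    if bump == "minor":
        return (major, minor + 1, 0)
    if bump == "patch":
        return (major, minor, patch + 1)
    return version
```

So a new directive on top of 3.4.2 would ship as 3.5.0, while a pure fix ships as 3.4.3.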

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, Apr 24, 2018 at 4:22 PM, Eric Covener <co...@gmail.com> wrote:
> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>>> Yes, exactly correct. We have three "contracts" to keep that I think aligns very well with the following semver "contracts":
>>>> Major => API/ABI compatibility for modules
>>>> Minor => Feature and directives
>>>> Patch => Functional and configuration syntax guarantees
>>>>
>>>> Demonstrating by way of a few examples:
>>>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>>>
>>> I don't agree that adding directives is adding function,  in terms of
>>> versioning or user expectations.  I don't see why it a new directive
>>> or parameter should necessarily wait for a new minor release
>>> especially when there's so much sensitivity to behavior changes. It
>>> seems backwards.
>>
>> As a general rule, adding a directive introduces a new feature, along
>> with new functions, and structure additions.
>
> I won't argue the semantics any further, but I don't agree there is
> any such equivalence or general rule.
>
> For me including this would poison almost any proposal it is added to.
> In the context above: I want to use directives for opt-in of fixes in
> a patch release.

I agree with Eric here, new directives are sometimes the way to fix
something for those who need to, without breaking the others that
don't.

By the way, if we bump minor for any non-forward-backportable change,
who is going to maintain all the "current minor minus n" versions
while all of the new/fancy things are in current only (and minor keeps
bumping)?
I'm afraid it won't help users stuck at some minor version (because of
API/ABI) if they don't get bugfixes because their version doesn't get
attention anymore.
IOW, what maintenance would we guarantee/apply for some minor version
if we keep bumping minor numbers to get new stuff out?

Not an objection, just wanting to have a clear picture. Remember that
some (most?) of us have never been an actor in a new httpd minor release,
let alone a major one ;)


Regards,
Yann.

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>> Yes, exactly correct. We have three "contracts" to keep that I think align very well with the following semver "contracts":
>>> Major => API/ABI compatibility for modules
>>> Minor => Feature and directives
>>> Patch => Functional and configuration syntax guarantees
>>>
>>> Demonstrating by way of a few examples:
>>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>>
>> I don't agree that adding directives is adding function,  in terms of
>> versioning or user expectations. I don't see why a new directive
>> or parameter should necessarily wait for a new minor release
>> especially when there's so much sensitivity to behavior changes. It
>> seems backwards.
>
> As a general rule, adding a directive introduces a new feature, along
> with new functions, and structure additions.

I won't argue the semantics any further, but I don't agree there is
any such equivalence or general rule.

For me including this would poison almost any proposal it is added to.
In the context above: I want to use directives for opt-in of fixes in
a patch release.

-- 
Eric Covener
covener@gmail.com

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>> Yes, exactly correct. We have three "contracts" to keep that I think align very well with the following semver "contracts":
>> Major => API/ABI compatibility for modules
>> Minor => Feature and directives
>> Patch => Functional and configuration syntax guarantees
>>
>> Demonstrating by way of a few examples:
>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>
> I don't agree that adding directives is adding function,  in terms of
> versioning or user expectations. I don't see why a new directive
> or parameter should necessarily wait for a new minor release
> especially when there's so much sensitivity to behavior changes. It
> seems backwards.

As a general rule, adding a directive introduces a new feature, along
with new functions, and structure additions.

If someone says "try the WizBang directive", it is much clearer if this
appears in 2.7.0 and stays there without being renamed or dropped
until some future minor release. So we can claim the docs apply to
version major.minor with no confusion about the set of features in
this 2.7 flavor of Apache. 3-6 months later, some version 2.8 might
up and change those, but we can be careful about not making any
gratuitous changes without offering some back-compat support of
older directive names. (E.g. NameVirtualHost could have been a
no-op directive for a considerable time with no harm to the user's
config or intent.)

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
> Yes, exactly correct. We have three "contracts" to keep that I think align very well with the following semver "contracts":
> Major => API/ABI compatibility for modules
> Minor => Feature and directives
> Patch => Functional and configuration syntax guarantees
>
> Demonstrating by way of a few examples:
> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).

I don't agree that adding directives is adding function,  in terms of
versioning or user expectations. I don't see why a new directive
or parameter should necessarily wait for a new minor release
especially when there's so much sensitivity to behavior changes. It
seems backwards.

> If we were to fix a security bug that does not impact running configs, that would be a patch bump since a config that works today must work tomorrow for the same maj.min.
> If we were to change default behavior, we would bump minor. This is because although the change doesn't break existing explicit configs of the directive, it would modify behavior due to implicit defaults => a visible change in functionality.

I think it is more illustrative to turn this around and say certain
changes must wait for a minor or major release.

To me the case worth enumerating is what tolerance for behavior change
we want to allow for a security fix that goes into a patch release.

> In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config.

I'm not understanding the "best default config" part. Is this a way to
illustrate the stuff with bad hard-coded default values that won't be
fixed until the next minor? I think the term you're using is a little
broad/abstract for something like that.

Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.

On April 24, 2018 6:53:52 AM CDT, Ruediger Pluem <rp...@apache.org> wrote:
>
>
>On 04/24/2018 01:19 PM, Daniel Ruggeri wrote:
>> 
>> 
>> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group"
><ru...@vodafone.com> wrote:
>>>
>>>
>>>> -----Ursprüngliche Nachricht-----
>>>> Von: Rainer Jung <ra...@kippdata.de>
>>>> Gesendet: Montag, 23. April 2018 16:47
>>>> An: dev@httpd.apache.org
>>>> Betreff: Re: A proposal...
>>>>
>>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>>> It seems that, IMO, if there was not so much concern about
>>>> "regressions" in releases, this whole revisit-versioning debate
>would
>>>> not have come up. This implies, to me at least, that the root cause
>>> (as
>>>> I've said before) appears to be one related to QA and testing more
>>> than
>>>> anything. Unless we address this, then nothing else really matters.
>>>>>
>>>>> We have a test framework. The questions are:
>>>>>
>>>>>   1. Are we using it?
>>>>>   2. Are we using it sufficiently well?
>>>>>   3. If not, what can we do to improve that?
>>>>>   4. Can we supplement/replace it w/ other frameworks?
>>>>>
>>>>> It does seem to me that each time we patch something, there should
>>> be
>>>> a test added or extended which covers that bug. We have gotten lax
>in
>>>> that. Same for features. And the more substantial the change (ie,
>the
>>>> more core code it touches, or the more it refactors something), the
>>> more
>>>> we should envision what tests can be in place which ensure nothing
>>>> breaks.
>>>>>
>>>>> In other words: nothing backported unless it also involves some
>>>> changes to the Perl test framework or some pretty convincing
>reasons
>>> why
>>>> it's not required.
>>>>
>>>> I agree with the importance of the test framework, but would also
>>> like
>>>> to mention that getting additional test feedback from the community
>>>> seems also important. That's why IMHO the RC style of releasing
>could
>>> be
>>>> helpful by attracting more test effort before a release.
>>>
>>> I think RC style releasing could help. Another thought that came to
>my
>>> mind that
>>> I haven't worked out how we could implement this is the following:
>>>
>>> Do "double releases". Taking the current state we would do:
>>>
>>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>>> fixes / security fixes.
>>> 2.4.35 additional features / improvements on top of 2.4.34 as we do
>so
>>> far.
>>>
>>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>>> contains bug fixes / security fixes
>>> on top of 2.4.35, 2.4.37 additional features / improvements on top
>of
>>> 2.4.36.
>>> So 2.4.36 would contain the additional features / improvements we
>had
>>> in 2.4.35 as well, but they
>>> have been in the "wild" for some time and the issues should have
>been
>>> identified and fixed as part
>>> of 2.4.36.
>>> Users would then have a choice what to take.
>>>
>>> Regards
>>>
>>> Rüdiger
>> 
>> Interesting idea. This idea seems to be converging on semver-like
>principles where the double release would look like:
>> 2.12.4 => 2.12.3 + fixes
>> 2.13.0 => 2.12.4 + features
>> ... which I like as a direction. However, I think distinguishing
>between patch/feature releases in the patch number alone would be
>confusing to the user base.
>> 
>
>And for 2.x we would stay API/ABI stable, just as we do today with
>a stable release? The next API/ABI incompatible
>version would be 3.x in that scheme?
>
>Regards
>
>Rüdiger

Yes, exactly correct. We have three "contracts" to keep that I think align very well with the following semver "contracts":
Major => API/ABI compatibility for modules
Minor => Feature and directives
Patch => Functional and configuration syntax guarantees

Demonstrating by way of a few examples:
If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
If we were to fix a security bug that does not impact running configs, that would be a patch bump since a config that works today must work tomorrow for the same maj.min.
If we were to change default behavior, we would bump minor. This is because although the change doesn't break existing explicit configs of the directive, it would modify behavior due to implicit defaults => a visible change in functionality.
Introducing H2 would have been a minor bump because it adds both new directives and new functionality.
The switch from experimental to GA for H2 would have been a minor bump, not because of functional changes, but because of a change in our "contract" with users about code readiness.
Refactoring exported core structures for better H2 support would be a major bump due to potential ABI breakage.
A bug fix that requires API changes and adds directives would still be a major bump.
Experiments for major changes would be done in a testing branch and merged to trunk as the next major.
A minor bump (feature/functional/etc.) would be cut from current trunk, while a patch bump is made from the maj/minor it fixes (I haven't yet worked out what this proposal would look like in svn).

In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config. Ideally, we would also point the changelog entry to the SVN patches which implement the change so downstream has an easier time picking and choosing what they want.
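The three "contracts" above can be sketched as a small decision helper. This is purely a hypothetical illustration of the proposed rules; `classify_bump()` and the `Change` fields are invented names, not part of httpd or any real release tooling:

```python
# Hypothetical sketch of the versioning "contracts" proposed above.
# classify_bump() and the Change fields are invented for illustration;
# they are not part of httpd or any actual release tooling.
from dataclasses import dataclass

@dataclass
class Change:
    breaks_api_abi: bool = False   # e.g. refactoring exported core structures
    adds_directive: bool = False   # new configuration-"contract" surface
    adds_feature: bool = False     # new functionality or default-behavior change
    bugfix_only: bool = False      # fix that leaves configs and API untouched

def classify_bump(change: Change) -> str:
    """Map a change onto major/minor/patch per the proposed contracts."""
    if change.breaks_api_abi:
        return "major"   # Major => API/ABI compatibility for modules
    if change.adds_directive or change.adds_feature:
        return "minor"   # Minor => features and directives
    return "patch"       # Patch => functional/config-syntax guarantees hold

# Examples mirroring the ones in the message:
print(classify_bump(Change(adds_directive=True)))                       # minor
print(classify_bump(Change(bugfix_only=True)))                          # patch
print(classify_bump(Change(breaks_api_abi=True, adds_directive=True)))  # major
```

Note the ordering in the sketch: an API/ABI break dominates, so "a bug fix that requires API changes and adds directives" still classifies as major, matching the examples above.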

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 01:19 PM, Daniel Ruggeri wrote:
> 
> 
> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>
>>
>>> -----Ursprüngliche Nachricht-----
>>> Von: Rainer Jung <ra...@kippdata.de>
>>> Gesendet: Montag, 23. April 2018 16:47
>>> An: dev@httpd.apache.org
>>> Betreff: Re: A proposal...
>>>
>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>> It seems that, IMO, if there was not so much concern about
>>> "regressions" in releases, this whole revisit-versioning debate would
>>> not have come up. This implies, to me at least, that the root cause
>> (as
>>> I've said before) appears to be one related to QA and testing more
>> than
>>> anything. Unless we address this, then nothing else really matters.
>>>>
>>>> We have a test framework. The questions are:
>>>>
>>>>   1. Are we using it?
>>>>   2. Are we using it sufficiently well?
>>>>   3. If not, what can we do to improve that?
>>>>   4. Can we supplement/replace it w/ other frameworks?
>>>>
>>>> It does seem to me that each time we patch something, there should
>> be
>>> a test added or extended which covers that bug. We have gotten lax in
>>> that. Same for features. And the more substantial the change (ie, the
>>> more core code it touches, or the more it refactors something), the
>> more
>>> we should envision what tests can be in place which ensure nothing
>>> breaks.
>>>>
>>>> In other words: nothing backported unless it also involves some
>>> changes to the Perl test framework or some pretty convincing reasons
>> why
>>> it's not required.
>>>
>>> I agree with the importance of the test framework, but would also
>> like
>>> to mention that getting additional test feedback from the community
>>> seems also important. That's why IMHO the RC style of releasing could
>> be
>>> helpful by attracting more test effort before a release.
>>
>> I think RC style releasing could help. Another thought that came to my
>> mind that
>> I haven't worked out how we could implement this is the following:
>>
>> Do "double releases". Taking the current state we would do:
>>
>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>> fixes / security fixes.
>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>> far.
>>
>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>> contains bug fixes / security fixes
>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>> 2.4.36.
>> So 2.4.36 would contain the additional features / improvements we had
>> in 2.4.35 as well, but they
>> have been in the "wild" for some time and the issues should have been
>> identified and fixed as part
>> of 2.4.36.
>> Users would then have a choice what to take.
>>
>> Regards
>>
>> Rüdiger
> 
> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
> 2.12.4 => 2.12.3 + fixes
> 2.13.0 => 2.12.4 + features
> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number alone would be confusing to the user base.
> 

And for 2.x we would stay API/ABI stable, just as we do today with a stable release? The next API/ABI incompatible
version would be 3.x in that scheme?

Regards

Rüdiger


Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.

On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>
>
>> -----Ursprüngliche Nachricht-----
>> Von: Rainer Jung <ra...@kippdata.de>
>> Gesendet: Montag, 23. April 2018 16:47
>> An: dev@httpd.apache.org
>> Betreff: Re: A proposal...
>> 
>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>> > It seems that, IMO, if there was not so much concern about
>> "regressions" in releases, this whole revisit-versioning debate would
>> not have come up. This implies, to me at least, that the root cause
>(as
>> I've said before) appears to be one related to QA and testing more
>than
>> anything. Unless we address this, then nothing else really matters.
>> >
>> > We have a test framework. The questions are:
>> >
>> >   1. Are we using it?
>> >   2. Are we using it sufficiently well?
>> >   3. If not, what can we do to improve that?
>> >   4. Can we supplement/replace it w/ other frameworks?
>> >
>> > It does seem to me that each time we patch something, there should
>be
>> a test added or extended which covers that bug. We have gotten lax in
>> that. Same for features. And the more substantial the change (ie, the
>> more core code it touches, or the more it refactors something), the
>more
>> we should envision what tests can be in place which ensure nothing
>> breaks.
>> >
>> > In other words: nothing backported unless it also involves some
>> changes to the Perl test framework or some pretty convincing reasons
>why
>> it's not required.
>> 
>> I agree with the importance of the test framework, but would also
>like
>> to mention that getting additional test feedback from the community
>> seems also important. That's why IMHO the RC style of releasing could
>be
>> helpful by attracting more test effort before a release.
>
>I think RC style releasing could help. Another thought that came to my
>mind that
>I haven't worked out how we could implement this is the following:
>
>Do "double releases". Taking the current state we would do:
>
>Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>fixes / security fixes.
>2.4.35 additional features / improvements on top of 2.4.34 as we do so
>far.
>
>The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>contains bug fixes / security fixes
>on top of 2.4.35, 2.4.37 additional features / improvements on top of
>2.4.36.
>So 2.4.36 would contain the additional features / improvements we had
>in 2.4.35 as well, but they
>have been in the "wild" for some time and the issues should have been
>identified and fixed as part
>of 2.4.36.
>Users would then have a choice what to take.
>
>Regards
>
>Rüdiger

Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
2.12.4 => 2.12.3 + fixes
2.13.0 => 2.12.4 + features
... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number alone would be confusing to the user base.
-- 
Daniel Ruggeri

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Mon, Apr 23, 2018 at 1:05 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>> On Apr 23, 2018, at 12:54 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>
>> +1; I see any "patch" releases (semver definition) as adopting well-tested bug
>> fixes. In some cases, complex patches could arrive first on a new minor branch
>> for longer alpha/beta scrutiny, before being accepted as-a-patch. This
>> could have
>> helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.
>
> What really helped was having test cases, which are now part
> of the test framework.

More to the point, this would always have been iterative: fix one, break
another. You aren't going to anticipate every side effect when writing the
initial test.

It would be great to understand how our PR system failed us in engaging
with PHP users to identify *all* the side effects of 'whatever' change we were
making to the location transcription. And tests were added as things were
broken, more tests added and those broke other things.

To suggest tests would have solved this is silly. The tests were necessary,
and derived from user reports of testing out our code. That it took so many
releases over a year was sort of inexplicable, and if we can sort that out,
we will end up with a better process no matter how we change test rules
or release versioning.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 23, 2018, at 12:54 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> 
> +1; I see any "patch" releases (semver definition) as adopting well-tested bug
> fixes. In some cases, complex patches could arrive first on a new minor branch
> for longer alpha/beta scrutiny, before being accepted as-a-patch. This
> could have
> helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.
> 

What really helped was having test cases, which are now part
of the test framework.


Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Mon, Apr 23, 2018 at 9:47 AM, Rainer Jung <ra...@kippdata.de> wrote:
> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>
>> It seems that, IMO, if there was not so much concern about "regressions"
>> in releases, this whole revisit-versioning debate would not have come up.

Additional concerns amplify the regressions: last-minute code dumps
with minimal review upon each point release; a three-day review window
for the success of the combined result; insufficient community review of
new features (w/wo new directives), with no alpha or beta releases in over
half a decade (h2/md excepted).

>> It does seem to me that each time we patch something, there should be a
>> test added or extended which covers that bug. We have gotten lax in that.
>> Same for features. And the more substantial the change (ie, the more core
>> code it touches, or the more it refactors something), the more we should
>> envision what tests can be in place which ensure nothing breaks.

+1!

>> In other words: nothing backported unless it also involves some changes to
>> the Perl test framework or some pretty convincing reasons why it's not
>> required.

Or horse-before-the-cart, put in the test for a spec violation/problem behavior
in the code, and add it to TODO. The suite doesn't fail, but serves as a flag
for a defect to be corrected.

Even better (and we have been good about this)... make corresponding docs
changes a prereq, in addition to test.

> I agree with the importance of the test framework, but would also like to
> mention that getting additional test feedback from the community seems also
> important. That's why IMHO the RC style of releasing could be helpful by
> attracting more test effort before a release.
>
> And for the more complex modules like mod_proxy, mod_ssl and the event MPM,
> some of the hiccups might have been hard to detect with the test framework.
> That's why I think having a more stable branch 2.4 with less feature
> backports and another branch that evolves faster would give downstreams a
> choice.

+1; I see any "patch" releases (semver definition) as adopting well-tested bug
fixes. In some cases, complex patches could arrive first on a new minor branch
for longer alpha/beta scrutiny, before being accepted as-a-patch. This
could have
helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.

I'd hope we would reintroduce alpha/beta review of new features coinciding
with release m.n.0 with a much longer tail for feature review. Maybe it requires
two or three patch releases before GA, maybe it is accepted as GA on the
very first candidate.

A patch release can be reviewed in a week, but needs to be reviewed in days
to move a security defect fix into users' hands after it is revealed
to our svn/git.
On very rare occasions (once a decade or so), we accelerate this to 24 hours.

A feature release/significant behavior change needs a community, and this is
not a review that happens in a week. I'd expect better adoption of new features
by drawing in our users@ and extended communities to help review additions.

AW: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: Rainer Jung <ra...@kippdata.de>
> Gesendet: Montag, 23. April 2018 16:47
> An: dev@httpd.apache.org
> Betreff: Re: A proposal...
> 
> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
> > It seems that, IMO, if there was not so much concern about
> "regressions" in releases, this whole revisit-versioning debate would
> not have come up. This implies, to me at least, that the root cause (as
> I've said before) appears to be one related to QA and testing more than
> anything. Unless we address this, then nothing else really matters.
> >
> > We have a test framework. The questions are:
> >
> >   1. Are we using it?
> >   2. Are we using it sufficiently well?
> >   3. If not, what can we do to improve that?
> >   4. Can we supplement/replace it w/ other frameworks?
> >
> > It does seem to me that each time we patch something, there should be
> a test added or extended which covers that bug. We have gotten lax in
> that. Same for features. And the more substantial the change (ie, the
> more core code it touches, or the more it refactors something), the more
> we should envision what tests can be in place which ensure nothing
> breaks.
> >
> > In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons why
> it's not required.
> 
> I agree with the importance of the test framework, but would also like
> to mention that getting additional test feedback from the community
> seems also important. That's why IMHO the RC style of releasing could be
> helpful by attracting more test effort before a release.

I think RC style releasing could help. Another thought that came to my mind that
I haven't worked out how we could implement this is the following:

Do "double releases". Taking the current state we would do:

Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug fixes / security fixes.
2.4.35 additional features / improvements on top of 2.4.34 as we do so far.

The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only contains bug fixes / security fixes
on top of 2.4.35, 2.4.37 additional features / improvements on top of 2.4.36.
So 2.4.36 would contain the additional features / improvements we had in 2.4.35 as well, but they
have been in the "wild" for some time and the issues should have been identified and fixed as part
of 2.4.36.
Users would then have a choice what to take.
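The version pairing in the double-release idea above can be illustrated with a toy sketch. The helper function here is invented purely for illustration; it is not proposed tooling:

```python
# Toy illustration of the "double release" numbering described above.
# next_double_release() is an invented helper, for illustration only.
def next_double_release(last_patch: int, major: int = 2, minor: int = 4):
    """Return (fixes-only version, fixes-plus-features version) as a pair."""
    fixes_only = f"{major}.{minor}.{last_patch + 1}"
    with_features = f"{major}.{minor}.{last_patch + 2}"
    return fixes_only, with_features

print(next_double_release(33))  # ('2.4.34', '2.4.35')
print(next_double_release(35))  # ('2.4.36', '2.4.37')
```

Each pair ships together: the even member carries only bug/security fixes, and the odd member layers features on top of it.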

Regards

Rüdiger


Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:
> 
>   1. Are we using it?
>   2. Are we using it sufficiently well?
>   3. If not, what can we do to improve that?
>   4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

I agree with the importance of the test framework, but would also like 
to mention that getting additional test feedback from the community 
seems also important. That's why IMHO the RC style of releasing could be 
helpful by attracting more test effort before a release.

And for the more complex modules like mod_proxy, mod_ssl and the event 
MPM, some of the hiccups might have been hard to detect with the test 
framework. That's why I think having a more stable branch 2.4 with less 
feature backports and another branch that evolves faster would give 
downstreams a choice.

Regards,

Rainer


Re: A proposal...

Posted by Daniel Ruggeri <dr...@primary.net>.
On 2018-04-23 09:00, Jim Jagielski wrote:
> It seems that, IMO, if there was not so much concern about
> "regressions" in releases, this whole revisit-versioning debate would
> not have come up. This implies, to me at least, that the root cause
> (as I've said before) appears to be one related to QA and testing more
> than anything. Unless we address this, then nothing else really
> matters.
> 
> We have a test framework. The questions are:
> 
>  1. Are we using it?
>  2. Are we using it sufficiently well?
>  3. If not, what can we do to improve that?
>  4. Can we supplement/replace it w/ other frameworks?

My opinion (I think mentioned here on-list before, too) is that the 
framework is too... mystical. A lot of us do not understand how it works 
and it's a significant cognitive exercise to get started. Getting it 
installed and up and running is also non-trivial.

I am willing to invest time working with anyone who would like to 
generate more documentation to demystify the framework. Pair 
programming, maybe, to go with this newfangled test driven design 
thought??? :-). I do not understand the ins and outs of the framework 
very well, but am willing to learn more to ferret out the things that 
should be better documented. Things like, "How do I add a vhost for a 
specific test?", "Are there any convenient test wrappers for HTTP(s) 
requests?", "How do I write a test case from scratch?" would be a great 
first start.


Also, FWIW, at $dayjob we use serverspec (https://serverspec.org/) as a 
testing framework for infrastructure like httpd. After some initial 
thrashing and avoidance, I've come to like it quite well. If we prefer 
to keep with a scripting language for tests (I do), Ruby is a decent 
choice since it has all the niceties that we'd expect (HTTP(s), 
XML/JSON/YML, threading, native testing framework, crypto) built in. I'm 
happy to provide an example or two if anyone is interested in exploring 
the topic in more depth.


> 
> It does seem to me that each time we patch something, there should be
> a test added or extended which covers that bug. We have gotten lax in
> that. Same for features. And the more substantial the change (ie, the
> more core code it touches, or the more it refactors something), the
> more we should envision what tests can be in place which ensure
> nothing breaks.
> 
> In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons
> why it's not required.

I completely support creating this as a procedure, provided we tackle 
the "how do I test stuff" doco challenges, too.

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Paul Querna <pa...@querna.org>.
On Mon, Apr 23, 2018 at 11:17 AM, Christophe Jaillet
<ch...@wanadoo.fr> wrote:
> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>>
>> It seems that, IMO, if there was not so much concern about "regressions"
>> in releases, this whole revisit-versioning debate would not have come up.
>> This implies, to me at least, that the root cause (as I've said before)
>> appears to be one related to QA and testing more than anything. Unless we
>> address this, then nothing else really matters.
>>
>> We have a test framework. The questions are:
>>
>>   1. Are we using it?
>>   2. Are we using it sufficiently well?
>>   3. If not, what can we do to improve that?
>>   4. Can we supplement/replace it w/ other frameworks?
>>
>> It does seem to me that each time we patch something, there should be a
>> test added or extended which covers that bug. We have gotten lax in that.
>> Same for features. And the more substantial the change (ie, the more core
>> code it touches, or the more it refactors something), the more we should
>> envision what tests can be in place which ensure nothing breaks.
>>
>> In other words: nothing backported unless it also involves some changes to
>> the Perl test framework or some pretty convincing reasons why it's not
>> required.
>>
>
> Hi,
> +1000 on my side for more tests.
>
> But, IMHO, the Perl framework is complex for most of us to understand.
>
> Last week I tried to play with it. I tried to update proxy_balancer.t
> because only lbmethod=byrequests is tested.
> The current test itself is really simple. It just checks if the module
> didn't crashed (i.e we receive 200).
> I tried to extend it for the other lbmethod available. This looked as an
> easy task. But figuring the relation between:
>    <VirtualHost proxy_http_bal1>
> and
>    BalancerMember http://@SERVERNAME@:@PROXY_HTTP_BAL1_PORT@
> still remains a mystery to me.
>
>
> The ./test framework could be useful as well.
> At least it is written in C, so the entry ticket should be cheaper for most
> of us.
> But every thing can't be done with it, I guess.
> Maybe, we should at least have some unit testing for each ap_ function? The
> behavior of these function should not change as it can be used by 3rd party
> modules.

I agree that having a quick way to make function-level tests would be
very helpful. It's something largely missing from httpd (APR has
more).

Even in making mod_log_json, testing it via the test framework is
complicated, as it's not a module that changes the output of an HTTP
request, whereas I could very easily make a few quick C-based tests
that make sure things are being serialized correctly.

> The more tests, the better, but I believe that most regressions come from
> interaction between all what is possible with httpd.
> A test-suite is only a test-suite. Everything can't be tested.
>
>
> IMHO, as a minimum, all CVE should have their dedicated test which
> explicitly fails with version n, and succeeds with version n+1.
> It would help to make sure than known security issues don't come back.
>
>
>
> Another question with the perl framework.
> Is there a way to send "invalid" data/request with it?
> All, I see is some GET(...). I guess that it sends well formed date.
> Checking the behavior when invalid queries are received would be great.
> Some kind of RFC compliancy check.
>
>
> just my 2c,
> CJ

Re: A proposal...

Posted by Alain Toussaint <al...@vocatus.pub>.
> Hi,
> +1000 on my side for more tests.
> 
> But, IMHO, the perl framework is complex to understand for most of us.

From what I saw, the preferred scripting language with access to httpd's internals seems to be Lua
at the present time. I also think that redoing (recoding) the testing framework to use Lua, and
getting it to the point where sufficient testing (for regressions and everything else) is possible,
is a huge endeavor. I am also not well versed in the history of httpd or its mailing list archive,
but I am willing to invest some time to make that happen (I also have 5 days of courses per week, a
part-time job one day a week, plus editorship on BLFS to work on, but BLFS and the httpd test cases
will be done jointly).

Still, I will ask you all: is a reimplementation of the testing framework in Lua possible/feasible?

my 0.02$

Alain

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
As expressed by others, please don't!

IMHO, ONE language/framework is all we need. A set of unrelated
materials will bring nightmares: something hard, not to say
impossible, to maintain and understand.

So we should keep it as-is, or switch to something new. But trying to
please everyone is not the right way to go.
Even if the existing framework looks hard to me, I still think that it
is a good option. Others have been able to extend the base, so it is
possible :)

CJ

Le 24/04/2018 à 14:50, Jim Jagielski a écrit :
> One idea is that we setup, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and the reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process since those scripts themselves would need to be universal enough that all testers can run them.
>
> I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.


Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.
Hi, Jim;
   Further to that point, simply reaping the exit code of zero or non-zero should be enough for a test to communicate success or failure.

   My only concern with this concept is that it could make our testing framework require a *very* unique set of system libraries, binaries and interpreters to be installed to run the full suite of tests. For a strawman example, I don't have nodejs on my Linux testing machine (easily fixable), but it doesn't seem clear if AIX is supported by nodejs (maybe not fixable?). Other languages like golang are in the same boat. Maybe we could have the test framework inquire with the script/binary if the execution environment can run the test before executing the test itself?
   The other thing I wonder about is how difficult it will become to maintain the tests since some concerns with the current framework's language have already been expressed. For its faults and virtues, at least the test framework is in a single language. I suspect most of us can figure out what other languages are doing, so maybe it's not a big deal... WDYT?
-- 
Daniel Ruggeri

On April 24, 2018 7:50:18 AM CDT, Jim Jagielski <ji...@jaguNET.com> wrote:
>One idea is that we setup, using the existing perl test framework, a
>sort of "catch all" test, where the framework simply runs all scripts
>from a subdir via system() (or whatever), and the reports success or
>failure. Those scripts could be written in anything. This would mean
>that people could add tests w/o knowing any Perl at all. It would
>require, however, some sort of process since those scripts themselves
>would need to be universal enough that all testers can run them.
>
>I may give that a whirl... I have some nodejs scripts that test
>websockets and I may see how/if I can "wrap" them within the test
>framework.

Re: AW: A proposal...

Posted by Alain Toussaint <al...@vocatus.pub>.
>  I would say that leaves us with Perl, Python or
> something like that as base language.

The reason I suggested Lua previously is that it's the only scripting language with a module found
in the sources of httpd:

https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/

specifically: https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/lua/

mod_perl, mod_python and other language modules are external to the project. I don't know if the
presence of a module for a particular programming language is actually needed, but from the
documentation I've read, the Lua module has excellent access to the innards of httpd, which would
facilitate white-box testing (I'd assume the current Perl framework does the job for black-box
testing).

As for platforms Lua runs on: AIX, {Free,Net,Open}BSD, Linux, OS X, Windows, Solaris. Probably
some more.

> If we switch the framework we need to consider that with all gaps we have, we already have
> a large amount of tests in the current framework that need to be ported over time.

Sadly, yes.

Alain

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 8:37 AM, Plüm, Rüdiger, Vodafone Group
<ru...@vodafone.com> wrote:
>
> If we switch the framework we need to consider that with all gaps we have, we already have
> a large amount of tests in the current framework that need to be ported over time.

The OpenSSL project overhauled their test schema for 1.1, IIRC?
Wondering if people have thoughts on that one, whether their logic
would help us? I'm working on getting all our dependencies' test
logic going on Windows, which might kick off some ideas.

Splitting much of the core httpd binary, especially server/util*.c into
a libhttpd consumable by third parties can be accompanied by the
same C-language test schema for regression checks that the APR
project adopted; that would move many tests to a language we all
ought to be comfortable with.

AW: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: Eric Covener <co...@gmail.com>
> Gesendet: Dienstag, 24. April 2018 15:31
> An: Apache HTTP Server Development List <de...@httpd.apache.org>
> Betreff: Re: A proposal...
> 
> On Tue, Apr 24, 2018 at 8:50 AM, Jim Jagielski <ji...@jagunet.com> wrote:
> > One idea is that we setup, using the existing perl test framework, a
> sort of "catch all" test, where the framework simply runs all scripts
> from a subdir via system() (or whatever), and the reports success or
> failure. Those scripts could be written in anything. This would mean
> that people could add tests w/o knowing any Perl at all. It would
> require, however, some sort of process since those scripts themselves
> would need to be universal enough that all testers can run them.
> >
> > I may give that a whirl... I have some nodejs scripts that test
> websockets and I may see how/if I can "wrap" them within the test
> framework.
> 
> I fear this would lead to M frameworks and N languages which makes it
> harder for maintainers (prereqs, languages, etc) and fragments
> whatever potential there is for improvements to the harness/tools.

My concern as well. I think this will lead to a less usable framework overall.
It might be more usable for some, but overall it is less usable.
I also have my issues understanding the Perl framework, but I think it should be one
framework that is platform independent. I would say that leaves us with Perl, Python or
something like that as the base language.
If we switch the framework, we need to consider that, with all the gaps we have, there is
already a large number of tests in the current framework that need to be ported over time.

Regards

Rüdiger

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Tue, Apr 24, 2018 at 8:50 AM, Jim Jagielski <ji...@jagunet.com> wrote:
> One idea is that we setup, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and the reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process since those scripts themselves would need to be universal enough that all testers can run them.
>
> I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.

I fear this would lead to M frameworks and N languages which makes it
harder for maintainers (prereqs, languages, etc) and fragments
whatever potential there is for improvements to the harness/tools.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.
One idea is that we set up, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and then reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process since those scripts themselves would need to be universal enough that all testers can run them.

I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
Le 23/04/2018 à 23:09, Mark Blackman a écrit :
> 
> 
>> On 23 Apr 2018, at 19:17, Christophe Jaillet <ch...@wanadoo.fr> wrote:
>>
>> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>>> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
>>> We have a test framework. The questions are:
>>>   1. Are we using it?
>>>   2. Are we using it sufficiently well?
>>>   3. If not, what can we do to improve that?
>>>   4. Can we supplement/replace it w/ other frameworks?
>>> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
>>> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
>>
>> Hi,
>> +1000 on my side for more tests.
>>
>> But, IMHO, the perl framework is complex to understand for most of us.
> 
> Do you believe the Perl element is contributing to the complexity? I’d say Perl is perfect for this case in general, although I would have to look at it first to confirm.

For my personal case, yes: I find the Perl syntax itself complex
and/or tricky. That is certainly because I've never worked much
with it.
I think this can limit the number of people who can increase our
test coverage.

> 
> I certainly believe adequate testing is a bigger and more important problem to solve than versioning policies, although some versioning policies might make it simpler to allow enough time for decent testing to happen. I personally have a stronger incentive to help with testing, than I do with versioning policies.
> 
> - Mark
> 


Re: A proposal...

Posted by Mark Blackman <ma...@exonetric.com>.

> On 23 Apr 2018, at 19:17, Christophe Jaillet <ch...@wanadoo.fr> wrote:
> 
> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
>> We have a test framework. The questions are:
>>  1. Are we using it?
>>  2. Are we using it sufficiently well?
>>  3. If not, what can we do to improve that?
>>  4. Can we supplement/replace it w/ other frameworks?
>> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
>> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
> 
> Hi,
> +1000 on my side for more tests.
> 
> But, IMHO, the perl framework is complex to understand for most of us.

Do you believe the Perl element is contributing to the complexity? I’d say Perl is perfect for this case in general, although I would have to look at it first to confirm.

I certainly believe adequate testing is a bigger and more important problem to solve than versioning policies, although some versioning policies might make it simpler to allow enough time for decent testing to happen. I personally have a stronger incentive to help with testing, than I do with versioning policies.

- Mark

Re: A proposal...

Posted by Christophe Jaillet <ch...@wanadoo.fr>.
Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:
> 
>   1. Are we using it?
>   2. Are we using it sufficiently well?
>   3. If not, what can we do to improve that?
>   4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
> 

Hi,
+1000 on my side for more tests.

But, IMHO, the perl framework is complex to understand for most of us.

Last week I tried to play with it. I tried to update proxy_balancer.t 
because only lbmethod=byrequests is tested.
The current test itself is really simple. It just checks that the module 
didn't crash (i.e. we receive a 200).
I tried to extend it to the other lbmethods available. This looked like an 
easy task. But figuring out the relation between:
    <VirtualHost proxy_http_bal1>
and
    BalancerMember http://@SERVERNAME@:@PROXY_HTTP_BAL1_PORT@
still remains a mystery to me.
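For what it's worth, there is a plausible reading of that pairing, sketched below from memory of how Apache-Test generates its configuration; it should be verified against the Apache-Test docs before relying on it.

```apache
# Sketch, not verified against the framework source: in t/conf/extra.conf.in,
# a <VirtualHost> with a symbolic name asks Apache-Test to allocate a port
# for that vhost when the real httpd.conf is generated.
<VirtualHost proxy_http_bal1>
    # ... the balancer backend used by the test lives here ...
</VirtualHost>

# Elsewhere, @SERVERNAME@ and @PROXY_HTTP_BAL1_PORT@ are substituted at
# config-generation time, so this member resolves to the vhost above.
BalancerMember http://@SERVERNAME@:@PROXY_HTTP_BAL1_PORT@
```

If that reading is right, testing another lbmethod would mean declaring one more named vhost and referencing its generated port variable the same way.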


The ./test framework could be useful as well.
At least it is written in C, so the entry ticket should be cheaper for 
most of us.
But not everything can be done with it, I guess.
Maybe we should at least have some unit testing for each ap_ function? 
The behavior of these functions should not change, as they can be used 
by 3rd-party modules.


The more tests, the better, but I believe that most regressions come 
from interactions between everything that is possible with httpd.
A test suite is only a test suite. Not everything can be tested.


IMHO, as a minimum, every CVE should have a dedicated test which 
explicitly fails with version n and succeeds with version n+1.
It would help to make sure that known security issues don't come back.



Another question about the Perl framework.
Is there a way to send "invalid" data/requests with it?
All I see is some GET(...) calls. I guess they send well-formed data. 
Checking the behavior when invalid queries are received would be great.
Some kind of RFC compliance check.


just my 2c,
CJ

Re: A proposal...

Posted by Micha Lenk <mi...@lenk.info>.
Just a side note: some days ago I realized that the source package 
of the apache2 package in Debian seems to include the test suite, for 
the purpose of running it as part of the continuous-integration test 
'run-test-suite': https://ci.debian.net/packages/a/apache2/

In my recently provided bugfix (#62186) I included a change to the test 
suite, but so far it looks like it isn't integrated yet (do I really 
need to file a separate bugzilla in the other project for that?).

From the experience of doing so, I agree with others that in the long 
run, maintaining a Perl-based test framework will probably make 
contributions pretty unpopular, especially for contributors who haven't 
worked with Perl before.

For the addition of new regression tests (as others suggested) it would 
be pretty cool if they could be added in a somewhat more popular 
(scripting) language (Python and pytest were already mentioned). Yet the 
number of test frameworks to execute should stay at a manageable low number.

That being said, I am all for extending the use of any test framework.

Regards,
Micha

A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.
It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.

We have a test framework. The questions are:

 1. Are we using it?
 2. Are we using it sufficiently well?
 3. If not, what can we do to improve that?
 4. Can we supplement/replace it w/ other frameworks?

It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.

In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 19 Apr 2018, at 5:55 PM, David Zuelke <dz...@salesforce.com> wrote:

> I hate to break this to you, and I do not want to discredit the
> amazing work all the contributors here are doing, but httpd 2.4 is of
> miserable, miserable quality when it comes to breaks and regressions.
> 
> I maintain the PHP/Apache/Nginx infrastructure at Heroku, and I was
> able to use the following httpd releases only in the last ~2.5 years:
> 
> - 2.4.16
> - 2.4.18
> - 2.4.20
> - 2.4.29
> -2.4.33
> 
> Mostly because of regressions around mod_proxy(_fcgi), REDIRECT_URL, whatever.

Did you bring these regressions to our attention? Regressions get fixed very quickly. There was an 18-month period between 2.4.20 and 2.4.29; what stopped it being possible to upgrade in that time?

(As other people have said, there was no release between 2.4.16 and 2.4.18, 2.4.19 was replaced two weeks later, and there were no releases for you to have used between v2.4.29 and 2.4.33)

> This is not any person's fault. This is the fault of the process. The
> process can be repaired: bugfixes only in 2.4.x, do RC cycles for
> bugfix releases as well (that alone makes the changelog look a lot
> less confusing, which is useful for the project's image, see also the
> Nginx marketing/FUD discussion in the other thread), and start testing
> new features in modules first.

Unfortunately this misses a fundamental reality of what the httpd project is - we are the foundation under many many other things, and when we jump from v2.4.x to v2.6.x, our complete ecosystem above us needs to be recompiled.

This cannot be ignored, understated or taken lightly.

> It makes such little sense to land h2 support in 2.4.something, as
> opposed to having it as an official "brand new, try it out" subproject
> first, and then bundle it with 2.6.

Not only does it make sense, but it is vital we do so.

We needed to get h2 support into the hands of end users - end users who were not going to recompile their entire web stack, who install software from distros who are not going to upgrade, who were deploying modules from vendors that were not going to recompile.

Our average user will deploy whatever comes by default on their operating system, they’re not going to have a dedicated team that deploys a custom stack for their application. It is vital we respect the needs of these groups of users.

> Speaking of which, I'd also suggest dropping this odd/even number
> meaning experimental/stable versioning scheme, since it only
> aggravates the problem: never-ending experiments that go stale, maybe
> even get half backported, and meanwhile are subconsciously perceived
> as one more hurdle towards a next bigger release.

I don’t see us having any of the problems you describe above to the extent where it would be a real problem. Part of the process of creating an odd numbered branch is to determine what from trunk gets carried over and what is not, this is all catered for in the process.

> Really, I'd suggest taking a close look at the PHP release cycle, with
> their schedules, their RFC policies, everything. As I said in that
> other thread, the PHP project was in exactly the same spot a few years
> ago and they have pulled themselves out of that mess with amazing
> results.

Specifically, what about the PHP release cycle are you referring to? I was burned badly a number of years ago by PHP config file formats being changed in point releases; have they improved their stability?

> I am also happy to make introductions to release managers and
> maintainers there. Heck I am betting some of them would happily serve
> as tutors for the httpd project ;) I'm certainly willing to help too.
> But IMO you need a clean cut and shake up the entire process, not just
> a little, because otherwise you won't get rid of some of the old
> habits that have been plaguing the project.

There is a fundamental tension between two groups of people - people who want stability, and people who want features.

I believe the httpd project has been very successful at trading off against these two, and other projects have a lot to learn from us.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
On Wed, Apr 18, 2018 at 8:01 AM, Stefan Eissing
<st...@greenbytes.de> wrote:
>
>
>> Am 17.04.2018 um 19:18 schrieb William A Rowe Jr <wr...@rowe-clan.net>:
>>
>>> The architecture of v2.4 has been very stable, the need for breaking changes has been largely non existent, and the focus since 2011 has been to get changes backported to v2.4 instead.
>>>
>>> To distill this down to raw numbers, there have been 1546 discrete backports (my simple grep of CHANGES) since 2011 - which has provided an enormous amount of enhancement for the collective community’s scrutiny.
>>
>> And the corresponding number of regressions and behavior changes. None
>> of these have enjoyed an "RC" or "beta", whatever one calls it, to
>> validate before adoption - other than our claim of "best httpd yet".
>> It has been an entirely new kitchen sink on every subversion release.
>
> All my substantial functional additions had beta releases for months before being voted into the 2.4.x branch. With binary beta packages available for several platforms by several supporters.
>
> William, this painting our world a dark and miserable place is coming back every few months. It is a disservice to the people who contribute changes here.

I hate to break this to you, and I do not want to discredit the
amazing work all the contributors here are doing, but httpd 2.4 is of
miserable, miserable quality when it comes to breaks and regressions.

I maintain the PHP/Apache/Nginx infrastructure at Heroku, and I was
able to use the following httpd releases only in the last ~2.5 years:

- 2.4.16
- 2.4.18
- 2.4.20
- 2.4.29
- 2.4.33

Mostly because of regressions around mod_proxy(_fcgi), REDIRECT_URL, whatever.

This is not any person's fault. This is the fault of the process. The
process can be repaired: bugfixes only in 2.4.x, do RC cycles for
bugfix releases as well (that alone makes the changelog look a lot
less confusing, which is useful for the project's image, see also the
Nginx marketing/FUD discussion in the other thread), and start testing
new features in modules first.

It makes such little sense to land h2 support in 2.4.something, as
opposed to having it as an official "brand new, try it out" subproject
first, and then bundle it with 2.6.

Speaking of which, I'd also suggest dropping this odd/even number
meaning experimental/stable versioning scheme, since it only
aggravates the problem: never-ending experiments that go stale, maybe
even get half backported, and meanwhile are subconsciously perceived
as one more hurdle towards a next bigger release.

Really, I'd suggest taking a close look at the PHP release cycle, with
their schedules, their RFC policies, everything. As I said in that
other thread, the PHP project was in exactly the same spot a few years
ago and they have pulled themselves out of that mess with amazing
results.

I am also happy to make introductions to release managers and
maintainers there. Heck I am betting some of them would happily serve
as tutors for the httpd project ;) I'm certainly willing to help too.
But IMO you need a clean cut and shake up the entire process, not just
a little, because otherwise you won't get rid of some of the old
habits that have been plaguing the project.

David

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Wed, Apr 18, 2018 at 1:01 AM, Stefan Eissing
<st...@greenbytes.de> wrote:
>
>> Am 17.04.2018 um 19:18 schrieb William A Rowe Jr <wr...@rowe-clan.net>:
>>
>>> On Tue, Apr 17, 2018 at 11:17 AM, Graham Leggett <mi...@sharp.fm> wrote:
>>>> On 17 Apr 2018, at 6:08 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>>>
>>>> No enhancement since 2011-12-19 has been presented for the collective
>>>> community's scrutiny.
>>>
>>> Again, I’m not following.
>>>
>>> The architecture of v2.4 has been very stable, the need for breaking changes has been largely non existent, and the focus since 2011 has been to get changes backported to v2.4 instead.
>>>
>>> To distill this down to raw numbers, there have been 1546 discrete backports (my simple grep of CHANGES) since 2011 - which has provided an enormous amount of enhancement for the collective community’s scrutiny.
>>
>> And the corresponding number of regressions and behavior changes. None
>> of these have enjoyed an "RC" or "beta", whatever one calls it, to
>> validate before adoption - other than our claim of "best httpd yet".
>> It has been an entirely new kitchen sink on every subversion release.
>
> All my substantial functional additions had beta releases for months before being voted into the 2.4.x branch. With binary beta packages available for several platforms by several supporters.

Yes; this is an exception. But as you first encountered, the scope of
changes requires extensive rewiring of the hook processing phases
and the architecture of modules themselves.

You are in one instance (h2) spanning two worlds, one of very stable
API's, architecture and predictable release cadences (nghttp2), and "us".
Which makes your life easier, and more enjoyable?

Since we never see a beta of the collective work, we don't pick up the
various build problems (mod_md.h missing from make install on unix?
CMake ignorant of new files and paths?). I'm not worried about your
underlying module sources, built from apxs or similar. The various
patches required to glue it together are what I worry about.

In most versioning schemes, these fundamental changes to the API,
behavior and existing modules would have been set off with another
version minor, or when redefining the module struct and existing hook
behavior, a version major update. There would have been at least one
beta of the collective work, issues would have been uncovered, then
we return to fixing up the smaller bugs that don't require refactoring.

> William, this painting our world a dark and miserable place is coming back every few months. It is a disservice to the people who contribute changes here.

If stating the plain facts of the state of our current release(s) continues
to be dark and miserable, that mirrors some disservice to the people
who are *trying* to consume our software.

I understand why you would say that from a recent PMC business
private post; I plan to share (and paint dark) that picture with the full
dev community, with some real improvements.

>>> You seem to be making a mountain out of a molehill, I just don’t see the problem you’re trying to solve.
>>
>> You are welcome to attribute this concern any way you like, and be
>> satisfied with whatever yardstick you wish to measure it by. If you
>> interpret our users as desiring enhancement and not stability, then
>> those are the interests you should advocate. I'll leave this thread
>> alone for another week again to give them the opportunity to chime in.
>
> There are alternative ways to be creative and innovate than going through this PMC into a semi annual release.

Exactly... that's why I started this thread without any prescription.
Let's hear them all, and agree to some. There are some deeply held
beliefs here about the scarcity of major versions which we first need
to set aside, starting another thread on the raw data from that
exercise.

> Releasing a module (plus some small patches) on github opens ways for collaboration with people who like Apache and new stuff. Distros like Debian unstable and Fedora pick up stuff from there. PPAs for apt are made available. Steffen offers Windows builds.

Yes; this is true of both subversion releases and major version releases.

Take a radically incompatible example; PCRE 8 seems to have some
perpetual life while PCRE 10 has seen very slow adoption, because old
stack optimizations no longer play in a world where stack corruption
exploits are trivial, and its author deliberately made such things hard
to do. Still, the distributors ship PCRE 10 to be consumed alongside
the older, cruftier choice.

> The release cycle is hours, to the benefit of all interested. Be it a blocking bug fixed or a nice feature implemented. These are mostly people who do it for fun. Some even run large server clusters, so a "hobbyist" label does not apply.

Hours, yes, but we've had a willing RM, who has automated even
more of this than Jim or I had, and has a very hard time finding
any target to point to. E.g. "ok, that looks like the right resolution
to the last of the regressions... let's..." ... "...oh there are all these
other shiny objects in STATUS... rock-n-roll!!!" No release in this
in-between state. No pause to major enhancements or refactoring
long enough for our users and code to catch their breath. And
really no suggestion of continuity between 2.4.prev and 2.4.next,
in config syntax or behavior. That doesn't sound fun for any user,
casual or corporate.

> It's simply fun. It's how I think FOSS is supposed to work and has worked best in the past. I plan to continue doing it.

++1. Corollary: don't cause it to be un-fun for others, our user
participants included. We need some wild-west playground for the
next version, right alongside some fenced "working" release
of the "baked" version. Our M.m.s versioning schema isn't
facilitating this at all, which is why I ask for any and all ideas.

> Whether that stuff makes it all back into the Apache svn is not that relevant to me, because it's the least rewarding and fun part.
>
> (I am just talking about my feelings here, YMMV.)

ack, thanks for sharing!

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Stefan Eissing <st...@greenbytes.de>.

> Am 17.04.2018 um 19:18 schrieb William A Rowe Jr <wr...@rowe-clan.net>:
> 
>> On Tue, Apr 17, 2018 at 11:28 AM, Graham Leggett <mi...@sharp.fm> wrote:
>> 
>> The distributions have been doing this nigh on two decades - the stability of a given software baseline which will not suddenly break at 3am some arbitrary Sunday in the middle of the holidays is the very product they’re selling. This works because they ship a baseline, plus carefully curated fixes as required by their communities, trading off the needs of their communities and stability.
> 
> So with respect to *our* communities...
> 
>> On Tue, Apr 17, 2018 at 11:17 AM, Graham Leggett <mi...@sharp.fm> wrote:
>>> On 17 Apr 2018, at 6:08 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>> 
>>> No enhancement since 2011-12-19 has been presented for the collective
>>> community's scrutiny.
>> 
>> Again, I’m not following.
>> 
>> The architecture of v2.4 has been very stable, the need for breaking changes has been largely non existent, and the focus since 2011 has been to get changes backported to v2.4 instead.
>> 
>> To distill this down to raw numbers, there have been 1546 discrete backports (my simple grep of CHANGES) since 2011 - which has provided an enormous amount of enhancement for the collective community’s scrutiny.
> 
> And the corresponding number of regressions and behavior changes. None
> of these have enjoyed an "RC" or "beta", whatever one calls it, to
> validate before adoption - other than our claim of "best httpd yet".
> It has been an entirely new kitchen sink on every subversion release.

All my substantial functional additions had beta releases for months before being voted into the 2.4.x branch, with binary beta packages available for several platforms from several supporters.

William, this painting of our world as a dark and miserable place comes back every few months. It is a disservice to the people who contribute changes here.

>> You seem to be making a mountain out of a molehill; I just don’t see the problem you’re trying to solve.
> 
> You are welcome to attribute this concern any way you like, and be
> satisfied with whatever yardstick you wish to measure it by. If you
> interpret our users as desiring enhancement and not stability, then
> those are the interests you should advocate. I'll leave this thread
> alone for another week again to give them the opportunity to chime in.

There are other ways to be creative and innovate than pushing through this PMC into a semi-annual release.

Releasing a module (plus some small patches) on github opens ways for collaboration with people who like Apache and new stuff. Distros like Debian unstable and Fedora pick up stuff from there. PPAs for apt are made available. Steffen offers Windows builds.

The release cycle is hours, to the benefit of all interested. Be it a blocking bug fixed or a nice feature implemented. These are mostly people who do it for fun. Some even run large server clusters, so a "hobbyist" label does not apply.

It's simply fun. It's how I think FOSS is supposed to work and has worked best in the past. I plan to continue doing it.

Whether that stuff makes it all back into the Apache svn is not that relevant to me, because it's the least rewarding and fun part.

(I am just talking about my feelings here, YMMV.)

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Rainer Jung <ra...@kippdata.de>.
Am 18.04.2018 um 18:07 schrieb William A Rowe Jr:
> On Wed, Apr 18, 2018 at 10:57 AM, Rainer Jung <ra...@kippdata.de> wrote:
>>
>> Since this thread was triggered by the mod_ssl config merging problems: I
>> think that was a case where a new feature was really nice, but the changes
>> needed to implement it were not easy to understand in detail. Combined
>> with the complex behavior of mod_ssl w.r.t. config merging we ended up with
>> unexpected config merging problems. So that specific problem belongs to your
>> class 1 (I think both the one detected by Mark Blackman and the one
>> received via Joe).
>>
>> It is not a total surprise, that regressions - be it due to features, bug
>> fixing or refactoring - most often happen in mod_proxy or mod_ssl. These are
>> IMHO by far the most complex modules (together with the event MPM).
>> Unfortunately the same parts are very attractive for features, so we have
>> some need to touch them.
> 
> Keep in mind Windows has been broken on nearly every release,
> recently in the core and mod_ssl build. So although I forked an SSL
> regression thread, please don't read into this that they are the primary
> "culprits"... even core changes broke mod_security.

You are right, Windows builds are also a fragile area. Not because the 
builds themselves are fragile, but because many of us do not have them 
in focus.

> I am not blaming either proxy+ssl modules, nor their developers.
> 
> I'm raising process issues, not contribution or contributor issues.
> Looking for a scheme to let contributors shine by putting our code
> enhancements and major refactoring through a community review
> process, which has been neglected for most of this decade.
> 
> 
>> All in all I'd prefer an attempt to have a faster moving 2.6 and a stable
>> backport branch 2.4 real soon.
> 
> Question, if 2.6 is moving "fast", and handled as 2.4 was, is there any
> net benefit for better releases? What can we agree to in versioning
> prior to 2.6.x-GA that will ease the process for contributors, external
> module authors, distributors and users?

I would expect people picking up 2.6 to be in a better position to do 
frequent updates and to have a bit higher tolerance for an occasional 
break as the downside of getting new features quickly.

So people could choose: stick to the slowly moving 2.4 branch (and get 
it from the enterprise distro vendor) or switch to the faster moving 2.6 
with the downside of a higher risk in regressions and using a different 
source for the binaries (or of course build themselves).

Regards,

Rainer

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Wed, Apr 18, 2018 at 10:57 AM, Rainer Jung <ra...@kippdata.de> wrote:
>
> Since this thread was triggered by the mod_ssl config merging problems: I
> think that was a case where a new feature was really nice, but the changes
> needed to implement it were not easy to understand in detail. Combined
> with the complex behavior of mod_ssl w.r.t. config merging we ended up with
> unexpected config merging problems. So that specific problem belongs to your
> class 1 (I think both the one detected by Mark Blackman and the one
> received via Joe).
>
> It is not a total surprise, that regressions - be it due to features, bug
> fixing or refactoring - most often happen in mod_proxy or mod_ssl. These are
> IMHO by far the most complex modules (together with the event MPM).
> Unfortunately the same parts are very attractive for features, so we have
> some need to touch them.

Keep in mind Windows has been broken on nearly every release,
recently in the core and mod_ssl build. So although I forked an SSL
regression thread, please don't read into this that they are the primary
"culprits"... even core changes broke mod_security.

I am not blaming either proxy+ssl modules, nor their developers.

I'm raising process issues, not contribution or contributor issues.
Looking for a scheme to let contributors shine by putting our code
enhancements and major refactoring through a community review
process, which has been neglected for most of this decade.


> All in all I'd prefer an attempt to have a faster moving 2.6 and a stable
> backport branch 2.4 real soon.

Question, if 2.6 is moving "fast", and handled as 2.4 was, is there any
net benefit for better releases? What can we agree to in versioning
prior to 2.6.x-GA that will ease the process for contributors, external
module authors, distributors and users?

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Rainer Jung <ra...@kippdata.de>.
Am 18.04.2018 um 15:07 schrieb Jim Jagielski:
> There are, IMO at least, 3 types of "regression" that we should be
> concerned about or that some people are concerned about:
> 
>    1. New features:
>       Undoubtedly, new features will likely have bugs and
>       no by adding new features we could be adding bugs
>       which could be seen as a regression.
> 
>    2. Simple fixes:
>       A simple fix causes a regression.
> 
>    3. Wholesale refactoring:
>       IMO, this is the one which is the most problematic for
>       us lately. We have seen several cases where a simple
>       bug or a desire to "make this part of the code better"
>       has resulted in huge amounts of code churn, major rewrites
>       and major refactoring.
> 
> My PoV is that:
> 
>   o We need to continue to add new features. We must provide better
>     QA and testing.
> 
>   o We need to avoid our natural inclinations to look at "fixing
>     bugs" as an opportunity for major refactors. If people want
>     to major refactor, fine. But that stays in trunk. What is
>     important is that we patch the bug 1st. Premature re-factoring
>     is as bad as premature optimization.

Since this thread was triggered by the mod_ssl config merging problems: 
I think that was a case where a new feature was really nice, but the 
changes needed to implement it were not easy to understand in detail. 
Combined with the complex behavior of mod_ssl w.r.t. config merging we 
ended up with unexpected config merging problems. So that specific 
problem belongs to your class 1 (I think both the one detected 
by Mark Blackman and the one received via Joe).

It is not a total surprise, that regressions - be it due to features, 
bug fixing or refactoring - most often happen in mod_proxy or mod_ssl. 
These are IMHO by far the most complex modules (together with the event 
MPM). Unfortunately the same parts are very attractive for features, so 
we have some need to touch them.

When implementing a feature or fixing a bug it is often hard to decide 
where the border is crossed that makes a change a "major" refactoring, or 
whether a change still mostly helps to understand the code.

We do lack a robust procedure and resources for testing complex 
configurations and also testing on many platforms. I don't have a 
solution for that :(

More release branches would only help if people actually picked 
them up. So we would need to keep features in the higher branches at 
least for a noticeable time. Of course the enterprise distros would not 
pick the additional release branches up, but maybe if they have 
attractive features, they might get picked up by faster moving distros, 
inside container images and by 3rd-parties like Apache Lounge. We could 
even provide builds for some platforms as a voluntary service.

All in all I'd prefer an attempt to have a faster moving 2.6 and a 
stable backport branch 2.4 real soon.

Regards,

Rainer

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Jim Jagielski <ji...@jaguNET.com>.
There are, IMO at least, 3 types of "regression" that we should be
concerned about or that some people are concerned about:

  1. New features:
     Undoubtedly, new features will likely have bugs, and
     so by adding new features we could be adding bugs
     which could be seen as a regression.

  2. Simple fixes:
     A simple fix causes a regression.

  3. Wholesale refactoring:
     IMO, this is the one which is the most problematic for
     us lately. We have seen several cases where a simple
     bug or a desire to "make this part of the code better"
     has resulted in huge amounts of code churn, major rewrites
     and major refactoring.

My PoV is that:

 o We need to continue to add new features. We must provide better
   QA and testing.

 o We need to avoid our natural inclinations to look at "fixing
   bugs" as an opportunity for major refactors. If people want
   to major refactor, fine. But that stays in trunk. What is
   important is that we patch the bug 1st. Premature re-factoring
   is as bad as premature optimization.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 17, 2018 at 11:28 AM, Graham Leggett <mi...@sharp.fm> wrote:
>
> The distributions have been doing this nigh on two decades - the stability of a given software baseline which will not suddenly break at 3am some arbitrary Sunday in the middle of the holidays is the very product they’re selling. This works because they ship a baseline, plus carefully curated fixes as required by their communities, trading off the needs of their communities and stability.

So with respect to *our* communities...

On Tue, Apr 17, 2018 at 11:17 AM, Graham Leggett <mi...@sharp.fm> wrote:
> On 17 Apr 2018, at 6:08 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>
>> No enhancement since 2011-12-19 has been presented for the collective
>> community's scrutiny.
>
> Again, I’m not following.
>
> The architecture of v2.4 has been very stable, the need for breaking changes has been largely non existent, and the focus since 2011 has been to get changes backported to v2.4 instead.
>
> To distill this down to raw numbers, there have been 1546 discrete backports (my simple grep of CHANGES) since 2011 - which has provided an enormous amount of enhancement for the collective community’s scrutiny.

And the corresponding number of regressions and behavior changes. None
of these have enjoyed an "RC" or "beta", whatever one calls it, to
validate before adoption - other than our claim of "best httpd yet".
It has been an entirely new kitchen sink on every subversion release.

> You seem to be making a mountain out of a molehill; I just don’t see the problem you’re trying to solve.

You are welcome to attribute this concern any way you like, and be
satisfied with whatever yardstick you wish to measure it by. If you
interpret our users as desiring enhancement and not stability, then
those are the interests you should advocate. I'll leave this thread
alone for another week again to give them the opportunity to chime in.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 17 Apr 2018, at 6:08 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:

> No enhancement since 2011-12-19 has been presented for the collective
> community's scrutiny.

Again, I’m not following.

The architecture of v2.4 has been very stable, the need for breaking changes has been largely non existent, and the focus since 2011 has been to get changes backported to v2.4 instead.

To distill this down to raw numbers, there have been 1546 discrete backports (my simple grep of CHANGES) since 2011 - which has provided an enormous amount of enhancement for the collective community’s scrutiny.

You seem to be making a mountain out of a molehill; I just don’t see the problem you’re trying to solve.

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 17, 2018 at 10:50 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>
> No enhancement since 2011-12-19 has been subjected to any community
> scrutiny. This was the date 2.3.16-beta for 2.4 was announced.

Sorry, that statement is somewhat unfair...

* Anyone is welcome to "be a developer" and check out trunk, same for
2.4 branch. It simply isn't "published" till it is released.
* Anyone participating at dev@ can join in for three days of voting.
* PR watchers frequently test proposed fixes, some with new features.
* Steffen and others offer up binaries of proposed backports or
modules, e.g. the h2 and mod_md efforts.

The word "any" is way off-base. Trying this instead;

No enhancement since 2011-12-19 has been presented for the collective
community's scrutiny.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 17, 2018 at 9:47 AM, Graham Leggett <mi...@sharp.fm> wrote:
> On 17 Apr 2018, at 4:41 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>
>> We observe the "code freeze" effect (defined by three different
>> distributors) coupled with distributors' deep distrust of our releases,
>> so by continuously polluting our version major.minor release with more
>> and more cruft, those users are denied not only the new cruft, but all
>> the bug fixes to the old cruft as well... there's really no other
>> explanation for the users of one of our most common distributions to
>> be locked out of several subversions worth of bugfix corrections.
>
> I’m lost - what problem are you trying to solve?

There is a second problem implied above, which I overlooked, sorry.

No enhancement since 2011-12-19 has been subjected to any community
scrutiny. This was the date 2.3.16-beta for 2.4 was announced.

Yes, patches go through test frameworks and peer review. But every
enhancement has been foisted on the user community without any
pre-adoption scrutiny.

This is made plain by the frequency of rejected release candidates,
and by the equally frequent post-release regression reports.

No enhancement I'm aware of has been rejected by the dev@ community;
eventually objections are withdrawn, and enough committers will
rubber-stamp whatever is in STATUS.

The project has been responsive to these regressions by releasing
fixes, which themselves are overloaded with new features and behavior
changes.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Graham Leggett <mi...@sharp.fm>.
On 17 Apr 2018, at 4:41 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:

> And everything contributed to 2.4.33 release? All in vain. None of
> that in this OS distribution, because, code freeze.

I’m not following the “all in vain”.

This patch in v2.4.33 was done specifically to fix an issue in Xenial, and Ubuntu is on the case:

https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1750356

> We observe the "code freeze" effect (defined by three different
> distributors) coupled with distributors' deep distrust of our releases,
> so by continuously polluting our version major.minor release with more
> and more cruft, those users are denied not only the new cruft, but all
> the bug fixes to the old cruft as well... there's really no other
> explanation for the users of one of our most common distributions to
> be locked out of several subversions worth of bugfix corrections.

I’m lost - what problem are you trying to solve?

Regards,
Graham
—


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Sat, Apr 14, 2018 at 8:48 AM, Jim Jagielski <ji...@jagunet.com> wrote:
> IMO, the below ignores the impacts on OS distributors who
> provide httpd. We have seen how long it takes for them
> to go from 2.2 to 2.4...

They went to 2.4 once 2.4 was no longer beta. There is this concept
called "code freeze". At that point in the most modern OS distribution
(for the few weeks that lasts), Ubuntu 18.04 ships with...

Apache httpd 2.4.29
Apache apr 1.6.3
Apache apr-util 1.6.1
libcurl 7.58
OpenSSL 1.1.0g
nghttp2 1.30.0
brotli 1.0.3
Expat 2.2.5
jansson 2.11
libxml2 2.9.4
lua 5.3.3
PCRE 8.39 + 10.31
ZLib 1.2.11

How long will it take Ubuntu to pick up 2.4.next + OpenSSL 1.1.1 to
support TLSv1.3? Answer: next release, the 18.04 ship sailed.

And everything contributed to 2.4.33 release? All in vain. None of
that in this OS distribution, because, code freeze.

Nobody installing Ubuntu 18.04 finds TLSv1.3 from OpenSSL and their
consuming programs out of the box. This means 18.10, or 20.04, 2 years
from now - for those "stable" users.

The only thing this imaginary numbering question provokes is fear of
moving the project forwards. In the time its taken this project to
make minor tweaks around the edges in httpd, and jump forward by only
a handful of large leaps over 20 years, how many versions did our
primary open-source consumers release? Ubuntu 18.04 again - the only
thing almost as slow as httpd evolution has been lynx;

chromium 65  firefox 59  konqueror 17 lynx 2.8.9

Thanks David, and Nick, for trying to dispel this paranoia that
version numbers will cause users and distributors to flee the project.

Here's my concern, if .subversion meant bug fix (and bug fix, only)
then httpd distributed across Debian, Ubuntu, Redhat, Fedora,
Free/Open/NetBSD etc etc could correspond to something the httpd
project released. Because they all cherry pick only "necessary" bug
fixes (and each define those differently), not one of them distributes
*Apache Software Foundation Apache Web Server httpd*. By mashing all
the fun new stuff into the same numbers because adoption yadda yadda,
none of them can turn to httpd for the necessary fixes for their
distribution, and they certainly can't simply review and rubber stamp
our subversion release for the "right" set of bug fixes.

The irony in all this is that I was taught "version numbers are cheap"
fairly early, by this very project. No truth-in-advertising, that
"subversion numbers are cheap, major version numbers are a heavy lift
of 20 years of baggage."

You also claimed some delay in 2.4 adoption, when there was none in
fact. In 2014;

January; OpenSUSE 13 ships httpd 2.4.10
March; Ubuntu 14.04 ships httpd 2.4.7
April; RedHat 7 ships httpd 2.4.6

We observe the "code freeze" effect (defined by three different
distributors) coupled with distributors' deep distrust of our releases,
so by continuously polluting our version major.minor release with more
and more cruft, those users are denied not only the new cruft, but all
the bug fixes to the old cruft as well... there's really no other
explanation for the users of one of our most common distributions to
be locked out of several subversions worth of bugfix corrections.

Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by Jim Jagielski <ji...@jaguNET.com>.
IMO, the below ignores the impacts on OS distributors who
provide httpd. We have seen how long it takes for them
to go from 2.2 to 2.4... I can't imagine the impact for our
end user community if "new features" cause a minor
bump all the time and we "force" distributions for
2.4->2.6->2.8->2.10...

Just my 2c

> On Apr 13, 2018, at 2:28 PM, David Zuelke <dz...@salesforce.com> wrote:
> 
> Remember the thread I started on that quite a while ago? ;)
> 
> IMO:
> 
> - x.y.0 for new features
> - x.y.z for bugfixes only
> - stop the endless backporting
> - make x.y.0 releases more often
> - x.y.0 goes through alpha, beta, RC phases
> - x.y.z goes through RC phases
> 
> That's how PHP has been doing it for a few years, and it's amazing how
> well it works, how few regressions there are, and how predictable the
> cycle is (they cut an x.y.zRC1 every four weeks like clockwork, with
> exceptions only around late December because of holiday season).
> 
> This would also fix all the confusing cases where two or three faulty
> releases get made, end up in the changelog, but ultimately are never
> released.
> 
> 
> On Fri, Apr 13, 2018 at 5:28 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>> Terrific analysis! But on the meta-question...
>> 
>> Instead of changing the behavior of httpd on each and every subversion bump,
>> is it time to revisit our revisioning discipline and hygiene?
>> 
>> I promise to stay out of such discussion provided that one equally stubborn
>> and intractable PMC member agrees to do the same, and let the balance of the
>> PMC make our decision, moving forwards.
>> 
>> On Fri, Apr 13, 2018, 06:11 Joe Orton <jo...@redhat.com> wrote:
>>> 
>>> On Thu, Apr 12, 2018 at 09:38:46PM +0200, Ruediger Pluem wrote:
>>>> On 04/12/2018 09:28 AM, Joe Orton wrote:
>>>>> But logged is:
>>>>> 
>>>>> ::1 - - [12/Apr/2018:08:11:12 +0100] "GET /agag HTTP/1.1" 404 12
>>>>> HTTPS=on SNI=localhost.localdomain
>>>>> 127.0.0.1 - - [12/Apr/2018:08:11:15 +0100] "GET /agag HTTP/1.1" 404 12
>>>>> HTTPS=- SNI=-
>>>>> 
>>>>> Now mod_ssl only sees the "off" SSLSrvConfigRec in the second vhost so
>>>>> the logging is wrong.
>>>> 
>>>> What does the same test result in with 2.4.29?
>>> 
>>> Excellent question, I should have checked that.  Long e-mail follows,
>>> sorry.
>>> 
>>> In fact it is the same with 2.4.29, because the SSLSrvConfigRec
>>> associated with the vhost's server_rec is the same as the default/base
>>> (non-SSL) server_rec, aka base_server passed to post_config hooks aka
>>> the ap_server_conf global.
>>> 
>>> So, maybe I understand this a bit better now.
>>> 
>>> Config with three vhosts / server_rec structs:
>>> a) base server config :80 non-SSL (<-- ap_server_conf/base_server)
>>> b) alpha vhost :443, explicit SSLEngine on, SSLCertificateFile etc
>>> c) beta vhost :443, no SSL*
>>> 
>>> For 2.4.29 mod_ssl config derived is:
>>> a) SSLSrvConfigRec for base_server = { whatever config at global scope }
>>> b) SSLSrvConfigRec for alpha = { sc->enabled = TRUE, ... }
>>> c) SSLSrvConfigRec pointer for beta == SSLSrvConfigRec for base_server
>>>   in the lookup vector (pointer is copied prior to ALWAYS_MERGE flag)
>>> 
>>> For 2.4.33 it is:
>>> a) and b) exactly as before
>>> c) separate SSLSrvConfigRec for beta = { merged copy of config at global }
>>>   time because of the ALWAYS_MERGE flag, i.e. still sc->enabled = UNSET
>>> 
>>> When running ssl_init_Module(post_config hook), with 2.4.29:
>>> - SSLSrvConfig(base_server)->enabled = FALSE because UNSET previously
>>> - SSLSrvConfig(base_server)->vhost_id gets overwritten with vhost_id
>>>  for beta vhost because it's later in the loop and there's no check
>>> 
>>> And with 2.4.33:
>>> - SSLSrvConfig(beta)->enabled is UNSET but gets flipped to ENABLED,
>>>  then startup fails (the issue in question)
>>> 
>>> w/my patch for 2.4.33:
>>> - SSLSrvConfig(beta)->enabled is FALSE and startup works
>>> 
>>> At run-time a request via SSL which matches the beta vhost via SNI/Host:
>>> 
>>> For 2.4.29:
>>> - r->server is the beta vhost and mySrvConfig(r->server) still gives
>>>  you the ***base_server*** SSLSrvConfigRec i.e. sc->enabled=FALSE
>>> - thus e.g. ssl_hook_Fixup() does nada
>>> 
>>> For 2.4.33 plus my patch:
>>> - r->server is the beta vhost and mySrvConfig(r->server) gives
>>>  you the SSLSrvConfigRec which is also sc->enabled = FALSE
>>> - thus e.g. ssl_hook_Fixup() also does nada
>>> 
>>> I was trying to convince myself whether mySrvConfig(r->server) is going
>>> to change between 2.4.29 and .33+patch in this case, and I think it
>>> should be identical, because it is *only* the handling of ->enabled
>>> which has changed with _ALWAYS_MERGE.
>>> 
>>> TL;DR:
>>> 1. my head hurts
>>> 2. I think my patch is OK
>>> 
>>> Anyone read this far?
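
The behavior Joe traces in the quoted analysis can be condensed into a
toy model. This is a hedged Python sketch only; the real logic is C
inside mod_ssl's config merge and ssl_init_Module, and the names below
are illustrative rather than taken from the source:

```python
# Toy model of how a vhost with no SSL* directives ends up enabled or
# not, per the analysis above.  Illustrative names, not mod_ssl's.

UNSET, OFF, ON = "unset", "off", "on"

def resolve_enabled(explicit: str, version: str) -> str:
    """Effective SSLEngine state for a vhost after merge + post_config.

    '2.4.29'       : the SSL-less vhost aliases the base server's record,
                     whose UNSET resolved to off at post_config.
    '2.4.33'       : the vhost gets its own always-merged record; its
                     UNSET was flipped to on, so startup failed.
    '2.4.33+patch' : UNSET resolves to off again, matching 2.4.29.
    """
    if explicit is not UNSET:
        return explicit                      # e.g. alpha: SSLEngine on
    return ON if version == "2.4.33" else OFF

assert resolve_enabled(UNSET, "2.4.29") == OFF         # beta, old behavior
assert resolve_enabled(UNSET, "2.4.33") == ON          # the regression
assert resolve_enabled(UNSET, "2.4.33+patch") == OFF   # restored
assert resolve_enabled(ON, "2.4.33") == ON             # alpha unaffected
```

The point of the model: only the handling of an UNSET `enabled` changed
across releases, which is why the patched 2.4.33 and 2.4.29 should log
identically for the beta vhost.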


Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

Posted by David Zuelke <dz...@salesforce.com>.
Remember the thread I started on that quite a while ago? ;)

IMO:

- x.y.0 for new features
- x.y.z for bugfixes only
- stop the endless backporting
- make x.y.0 releases more often
- x.y.0 goes through alpha, beta, RC phases
- x.y.z goes through RC phases

That's how PHP has been doing it for a few years, and it's amazing how
well it works, how few regressions there are, and how predictable the
cycle is (they cut an x.y.zRC1 every four weeks like clockwork, with
exceptions only around late December because of holiday season).

This would also fix all the confusing cases where two or three faulty
releases get made, end up in the changelog, but ultimately are never
released.
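
A minimal sketch of that policy, assuming a plain x.y.z version string;
the function names are illustrative and nothing here comes from PHP's
or httpd's actual release tooling:

```python
# Sketch of the proposed policy: features land only in x.y.0 releases,
# x.y.z releases carry bugfixes only, and the pre-release gates differ.

def allowed_on_patch_release(change: str) -> bool:
    """Under 'x.y.z for bugfixes only', is this change admissible?"""
    return change in ("bug-fix", "security-fix")

def release_phases(version: str) -> tuple:
    """x.y.0 goes through alpha/beta/RC; x.y.z goes through RC only."""
    patch = int(version.split(".")[2])
    return ("alpha", "beta", "RC") if patch == 0 else ("RC",)
```

So a hypothetical 2.6.0 would see alpha, beta and RC phases, while a
2.4.33-style release would only need an RC and could carry no features.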


On Fri, Apr 13, 2018 at 5:28 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> Terrific analysis! But on the meta-question...
>
> Instead of changing the behavior of httpd on each and every subversion bump,
> is it time to revisit our revisioning discipline and hygiene?
>
> I promise to stay out of such discussion provided that one equally stubborn
> and intractable PMC member agrees to do the same, and let the balance of the
> PMC make our decision, moving forwards.
>
> On Fri, Apr 13, 2018, 06:11 Joe Orton <jo...@redhat.com> wrote:
>>
>> On Thu, Apr 12, 2018 at 09:38:46PM +0200, Ruediger Pluem wrote:
>> > On 04/12/2018 09:28 AM, Joe Orton wrote:
>> > > But logged is:
>> > >
>> > > ::1 - - [12/Apr/2018:08:11:12 +0100] "GET /agag HTTP/1.1" 404 12
>> > > HTTPS=on SNI=localhost.localdomain
>> > > 127.0.0.1 - - [12/Apr/2018:08:11:15 +0100] "GET /agag HTTP/1.1" 404 12
>> > > HTTPS=- SNI=-
>> > >
>> > > Now mod_ssl only sees the "off" SSLSrvConfigRec in the second vhost so
>> > > the logging is wrong.
>> >
>> > What does the same test result in with 2.4.29?
>>
>> Excellent question, I should have checked that.  Long e-mail follows,
>> sorry.
>>
>> In fact it is the same with 2.4.29, because the SSLSrvConfigRec
>> associated with the vhost's server_rec is the same as the default/base
>> (non-SSL) server_rec, aka base_server passed to post_config hooks aka
>> the ap_server_conf global.
>>
>> So, maybe I understand this a bit better now.
>>
>> Config with three vhosts / server_rec structs:
>> a) base server config :80 non-SSL (<-- ap_server_conf/base_server)
>> b) alpha vhost :443, explicit SSLEngine on, SSLCertificateFile etc
>> c) beta vhost :443, no SSL*
>>
>> For 2.4.29 mod_ssl config derived is:
>> a) SSLSrvConfigRec for base_server = { whatever config at global scope }
>> b) SSLSrvConfigRec for alpha = { sc->enabled = TRUE, ... }
>> c) SSLSrvConfigRec pointer for beta == SSLSrvConfigRec for base_server
>>    in the lookup vector (pointer is copied prior to ALWAYS_MERGE flag)
>>
>> For 2.4.33 it is:
>> a) and b) exactly as before
>> c) separate SSLSrvConfigRec for beta = { merged copy of config at global }
>>    time because of the ALWAYS_MERGE flag, i.e. still sc->enabled = UNSET
>>
>> When running ssl_init_Module(post_config hook), with 2.4.29:
>> - SSLSrvConfig(base_server)->enabled = FALSE because UNSET previously
>> - SSLSrvConfig(base_server)->vhost_id gets overwritten with vhost_id
>>   for beta vhost because it's later in the loop and there's no check
>>
>> And with 2.4.33:
>> - SSLSrvConfig(beta)->enabled is UNSET but gets flipped to ENABLED,
>>   then startup fails (the issue in question)
>>
>> w/my patch for 2.4.33:
>> - SSLSrvConfig(beta)->enabled is FALSE and startup works
>>
>> At run-time a request via SSL which matches the beta vhost via SNI/Host:
>>
>> For 2.4.29:
>> - r->server is the beta vhost and mySrvConfig(r->server) still gives
>>   you the ***base_server*** SSLSrvConfigRec i.e. sc->enabled=FALSE
>> - thus e.g. ssl_hook_Fixup() does nada
>>
>> For 2.4.33 plus my patch:
>> - r->server is the beta vhost and mySrvConfig(r->server) gives
>>   you the SSLSrvConfigRec which is also sc->enabled = FALSE
>> - thus e.g. ssl_hook_Fixup() also does nada
>>
>> I was trying to convince myself whether mySrvConfig(r->server) is going
>> to change between 2.4.29 and .33+patch in this case, and I think it
>> should be identical, because it is *only* the handling of ->enabled
>> which has changed with _ALWAYS_MERGE.
>>
>> TL;DR:
>> 1. my head hurts
>> 2. I think my patch is OK
>>
>> Anyone read this far?