Posted to dev@httpd.apache.org by Jim Jagielski <ji...@jaguNET.com> on 2018/04/23 14:00:03 UTC

A proposal...

It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.

We have a test framework. The questions are:

 1. Are we using it?
 2. Are we using it sufficiently well?
 3. If not, what can we do to improve that?
 4. Can we supplement/replace it w/ other frameworks?

It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.

In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 23, 2018, at 10:22 AM, Graham Leggett <mi...@sharp.fm> wrote:
> 
> My perl knowledge is very rusty, making perl tests is going to be harder for some than others.
> 

Yeah, that IS an issue. It is also not as well documented as desired[1].

Should we look at using something external as a way to complement/supplement it? Or even start adding some specific tests under the ./test subdirectory in the repo. Maybe say that the requirement is some sort of test "bundled" w/ the feature; it doesn't need to be under the perl test framework. Or maybe some way the perl test framework can call other test scripts written in whatever language someone wants; it simply sets things up, lets the script run and checks the return status.


1. https://perl.apache.org/docs/general/testing/testing.html
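That last idea (the harness only sets things up, runs an external test script, and judges pass/fail by its exit status) could be sketched roughly as below. This is purely illustrative; the function name and behavior are not part of the actual httpd-test framework:

```python
import subprocess
import sys

def run_external_test(cmd, env=None, timeout=60):
    """Run one external test script; exit status 0 means pass."""
    try:
        result = subprocess.run(cmd, env=env, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    # Any executable works: a shell script, a Python script, a compiled binary...
    passed = run_external_test([sys.executable, "-c", "raise SystemExit(0)"])
    print("PASS" if passed else "FAIL")
```

The harness would set up the server and environment first, then treat each script this way regardless of the language it is written in.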

Re: A proposal...

Posted by Graham Leggett <mi...@sharp.fm>.
On 23 Apr 2018, at 4:00 PM, Jim Jagielski <ji...@jaguNET.com> wrote:

> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.

+1.

> We have a test framework. The questions are:
> 
> 1. Are we using it?

Is there a CI set up for building httpd?

Is there a CI available we could use to trigger the test suite on a regular basis?

(I believe the answer is yes for APR).

> 2. Are we using it sufficiently well?
> 3. If not, what can we do to improve that?
> 4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

My perl knowledge is very rusty, making perl tests is going to be harder for some than others.

Regards,
Graham
—


Re: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Original Message-----
> From: Stefan Eissing <st...@greenbytes.de>
> Sent: Monday, 23 April 2018 17:08
> To: dev@httpd.apache.org
> Subject: Re: A proposal...
> 
> Such undocumented and untested behaviour, which nevertheless is
> considered a regression, cannot be avoided, since it cannot be
> anticipated by people currently working on those code parts. This is a
> legacy of the past, it seems, which we can only overcome by breakage and
> resulting, added test cases.
> 
> > In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons why
> it's not required.
> 
> See above, this will not fix the unforeseeable breakage that results
> from use cases unknown and untested.
> 

Agreed. Even if we do perfect testing for all new stuff, it will take time until we see positive results, as the past will hurt us here for a while. So we shouldn't give up too fast if we do not see positive results immediately 😊

Regards

Rüdiger

Re: A proposal...

Posted by Stefan Eissing <st...@greenbytes.de>.

> On 23.04.2018 at 17:07, Stefan Eissing <st...@greenbytes.de> wrote:
> 
> I do that for stuff I wrote myself. Not because I care only about that, but because the coverage and documentation of other server parts does give me an idea of what should work and what should not. So, I am the 

*the coverage and documentation of other server parts does *NOT* give me

Re: A proposal...

Posted by Stefan Eissing <st...@greenbytes.de>.

> On 23.04.2018 at 16:00, Jim Jagielski <ji...@jaguNET.com> wrote:
> 
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:

Personal view/usage answers:

> 1. Are we using it?

On release candidates only.

> 2. Are we using it sufficiently well?

 * I only added very basic tests for h2, since Perl's capabilities here are rather limited.
 * the whole framework was hard to figure out. It took me a while to get vhost setups working.

> 3. If not, what can we do to improve that?

 * A CI setup would help.

> 4. Can we supplement/replace it w/ other frameworks?

 * For mod_h2 I started with just shell scripts. Those still make up my h2 test suite,
   using the nghttp and curl clients as well as go (if available).
 * For mod_md I used pytest, which I found an excellent framework. The test suite
   is available in the github repository of mod_md.
 * Based on Robert Swiecki's honggfuzz, there is an h2fuzz project for fuzzing
   our server at https://github.com/icing/h2fuzz. This works very well on a
   Linux-style system.

So, I do run a collection of things. All are documented, but none is really tied into
the httpd testing framework.
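As a hedged illustration of the pytest style mentioned above (the shape of such a suite, not mod_md's actual code), here is a self-contained sketch using only the Python standard library; it stands up a throwaway HTTP server in place of httpd purely so the example can run anywhere:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Stand-in for httpd: answers every GET with 200 and a small body."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep test output quiet

def start_server():
    # Port 0 lets the OS pick a free port; serve in a background thread.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def test_get_root():
    server = start_server()
    try:
        conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
        conn.request("GET", "/")
        resp = conn.getresponse()
        assert resp.status == 200
        assert resp.read() == b"ok"
    finally:
        server.shutdown()
```

A real suite would point the client at a running httpd instance with a prepared config, but the test shape (fixture-style setup, plain asserts) is the same.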

> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.

I do that for stuff I wrote myself. Not because I care only about that, but because the coverage and documentation of other server parts does give me an idea of what should work and what should not. So, I am the wrong guy to place assertions into test cases for those code parts.

Example: the current mod_ssl enabled quirkyness discovered by Joe would ideally be documented now in a new test case. But neither me nor Yann would have found that before release via testing (the tests worked) nor did we anticipate such breakage.

Such undocumented and untested behaviour, which nevertheless is considered a regression, cannot be avoided, since it cannot be anticipated by people currently working on those code parts. This is a legacy of the past, it seems, which we can only overcome by breakage and resulting, added test cases.

> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

See above, this will not fix the unforeseeable breakage that results from use cases unknown and untested.

-Stefan

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
At the pace of our (currently 'minor', as contrasted with 'patch') releases, there are about 2-4 per year. I agree with the idea of monthly bug fix patch releases.

Declaring the first minor of each year as LTS for 2 years, we could get
security fixes into legacy users hands. It would be a good starting point
for anyone trying to patch some version between LTS and LTS-1.

Those that don't update for years seem to rarely pay much attention to
vulnerabilities anyways, and distributors choose their own path, so this
seems like a good compromise.

Security fixes -> trunk (next minor) -> current minor -> last LTS major.minor -> previous LTS major.minor.

I agree with Eric that optionally enabling a fix during the current minor
might be useful (think HTTP_PROXY protection), but these would rarely map
to the behavior of the next version minor (optional for patch, but default
to new recommended behavior in next version minor.)




On Tue, Apr 24, 2018, 13:29 Eric Covener <co...@gmail.com> wrote:

> > Should we also need some kind of LTS version? If yes, how to choose them?
> I think it would be required with frequent minor releases.
>

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
> Should we also need some kind of LTS version? If yes, how to choose them?
I think it would be required with frequent minor releases.

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
On 24/04/2018 at 19:58, Daniel Ruggeri wrote:
> One thing you mention above is "wait for a new minor release". I can 
> definitely see that being an issue for our current maj.minor layout 
> given that minor bumps are measured in years. In this proposal, unless 
> there's a pressing need to send out a patch release right now, the 
> next version WOULD be that minor bump. Put into practice, I would see 
> major bumps being measured in years, minor bumps in quarters and patch 
> bumps in weeks/months.
I think the same.
But we should be clear on how long we maintain each version and the 
effort needed for that.

How long do we backport bug fixes?
How long do we fix security issues?
Should we also need some kind of LTS version? If yes, how to choose them? The M.0.0 version? In an unpredictable way as Linux does, "when it's time for it"? On a timely basis as Ubuntu does?

2.2 vs 2.4 was already not that active in the last months/years of 2.2, 
as already discussed in the list.
I'm a bit reluctant to backport things in, let's say, 4 minor branches because we maintain them for 1 year (4 quarters) + 1 or maybe even 2 LTS branches!
To dare to go this way, either we need much more manpower (and I'm pleased to see many names active on the list these days), or we should avoid writing bugs, so we don't have to maintain fixes for them :)

CJ

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Wed, Apr 25, 2018 at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> On Tue, Apr 24, 2018 at 3:46 PM, Eric Covener <co...@gmail.com> wrote:
>>>> One thing you mention above is "wait for a new minor release". I can
>>>> definitely see that being an issue for our current maj.minor layout given
>>>> that minor bumps are measured in years. In this proposal, unless there's a
>>>> pressing need to send out a patch release right now, the next version WOULD
>>>> be that minor bump. Put into practice, I would see major bumps being
>>>> measured in years, minor bumps in quarters and patch bumps in weeks/months.
>>
>> I don't see how the minor releases would be serviceable for very long
>> there. If they're not serviceable,
>> then users have to move up anyway, then you're back at the status quo
>> with the dot in a different place.
>
> I don't see where a version minor will be serviced for a particularly long
> time after the next minor is released *as GA*. So, if version 3.5.0 comes
> along and introduces some rather unstable or unproved code, and gets
> the seal of approval as -alpha... 3.5.1 is a bit better but has known bugs,
> it earns a -beta. Finally 3.5.2 is released as GA. In all of that time, I'd
> expect the project continues to fix defects in 3.4.x on a very regular
> basis, not expecting anyone to pick up 3.5 during that time. This is what
> substantially differs from using our least significant revision element
> for both minor and patch scope changes.

Thanks Bill. This aspect does look helpful.

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Thu, Apr 26, 2018 at 10:13 AM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>> On Apr 25, 2018, at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>
>> Because of our conflation of patch and enhancement,
>
> It is hardly just "our"... tons and tons of s/w uses the patch number bump for not only patches but also features and enhancements, including our "competitors".
>
> I still fail to see anything which leads me to believe that our numbering is the core issue that must be fixed. I am extremely open to my mind being changed :)

It is not numbering, for sure. There are dozens of approaches to that which throw all the changes into the blender, and dozens of approaches which keep bug fixes and maintenance distinct from enhancements and feature creep. Semantic Versioning is only interesting here because it is a strategy that was successfully adopted by the APR project, our fellow Subversion project, and even our highly valued nghttp2 dependency.

What I've seen suggested is to put httpd into a "sleep" mode for the duration of a longer-running RC cycle of a week or so, easily stretched out into a month or more for complex additions. Putting the release branch to sleep for a month impedes new development efforts, and puts us in a pretty precarious position if a zero-day or very critical security report surfaces. A feature or change is half ready, so such critical fixes come with new risk.

Any policy and versioning scheme which allows maintenance changes to be released on a regular basis, allows new development to proceed at full steam when any particular committer is able to contribute their volunteer time, and lets the community test those enhancements and experiments without disrupting users relying on a stable platform would be a win. Modifying the RC proposal to fork from the time of -rc1 and have a 2.4.34 release branch for a month, while 2.4.35 continues at pace, is one alternative solution.

There, the -rc is simply a different wording of -alpha/-beta. This means 2.4.35 may be released with critical bug fixes long before 2.4.34 is ready to go. Or we renumber the 2.4.34 branch to 2.4.35 and release a very limited 2.4.34 of strictly critical fixes, curated in a rush, when such events happen or a serious regression occurs. For early adopters at 2.4.34-rc1, editing all of the associated docs changes to reflect the renumbering would be a headache. This is part of why httpd declares that version numbers are cheap.

What seems to be agreed is that the even-odds way of approaching things
was a short term fix which didn't move us very quickly, bogged down major
new efforts, and sits basically abandoned.

What seems apparent is that conflating enhancements with getting fixes into users' hands means that users don't get fixes, including for recently introduced enhancements, for months on end. Reflect on our current state and six years of activity, and you can look at this through the lens of either RCs or semver semantics:

 tag     mos (since prior GA tag)
2.4.33 GA  5mos Mar 17 18 minor
2.4.32 rc  5mos Mar 09 18 minor-beta
2.4.31 nr  5mos Mar 03 18 minor-beta
2.4.30 nr  4mos Feb 19 18 minor-beta (security +1 mos GA delay)
2.4.29 GA  1mos Oct 17 17 minor
2.4.28 GA  2mos Sep 25 17 minor (security)
2.4.27 GA  1mos Jul  6 17 patch (security)
2.4.26 GA  6mos Jun 13 17 minor (security)
2.4.25 GA  6mos Dec 16 16 minor (security)
2.4.24 nr  6mos Dec 16 16 minor-beta
2.4.23 GA  3mos Jun 30 16 minor (security)
2.4.22 nr  3mos Jun 20 16 minor-beta
2.4.21 nr  3mos Jun 16 16 minor-beta
2.4.20 GA  4mos Apr  4 16 minor (security)
2.4.19 nr  3mos Mar 21 16 minor-beta
2.4.18 GA  2mos Dec  8 15 minor
2.4.17 GA  3mos Oct  9 15 minor
2.4.16 GA  6mos Jul  9 15 minor (security +5 mos GA delay)
2.4.15 nr  5mos Jun 19 15 minor-beta
2.4.14 nr  5mos Jun 11 15 minor-beta
2.4.13 nr  5mos Jun  4 15 minor-beta
2.4.12 GA  6mos Jan 22 15 minor (security +2 mos GA delay)
2.4.11 nr  6mos Jan 15 15 minor-beta
2.4.10 GA  4mos Jul 15 14 minor (security)
 2.4.9 GA  4mos Mar 13 14 minor (security)
 2.4.8 nr  4mos Mar 11 14 minor-beta
 2.4.7 GA  4mos Nov 19 13 minor (security)
 2.4.6 GA  5mos Jul 15 13 minor (security +2 mos GA delay)
 2.4.5 nr  5mos Jul 11 13 minor-beta
 2.4.4 GA  6mos Feb 18 13 minor (security)
 2.4.3 GA  4mos Aug 17 12 minor (security +2 mos GA delay)
 2.4.2 GA  2mos Apr  5 12 minor (security +1 mos GA delay)
 2.4.1 GA 38mos Feb 13 12 major
 2.4.0 nr 37mos Jan 16 12 major-beta
2.3.16 rc 36mos Dec 15 11 major-beta
 2.3.0 rc start Dec  6 08 major-beta

2.4.27 illustrates that we can turn around a patch quickly when the
other churn is excluded. (2.4.29 illustrates that we can even add
new features and release a minor update in a month, but our track
record proves this is the exception, not the rule.)

Our present versioning schema doesn't allow us to deliver this software on a consistent, predictable, or stable basis. That's why I started the conversation wide open to different versioning schemas and policy suggestions. There are lots of alternatives, starting with issuing easy-to-review patch releases which are not overloaded with all the new goodies that slow down putting our fixes into users' hands promptly.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 25, 2018, at 1:50 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> Because of our conflation of patch and enhancement,

It is hardly just "our"... tons and tons of s/w uses the patch number bump for not only patches but also features and enhancements, including our "competitors".

I still fail to see anything which leads me to believe that our numbering is the core issue that must be fixed. I am extremely open to my mind being changed :)

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 3:46 PM, Eric Covener <co...@gmail.com> wrote:
>>> One thing you mention above is "wait for a new minor release". I can
>>> definitely see that being an issue for our current maj.minor layout given
>>> that minor bumps are measured in years. In this proposal, unless there's a
>>> pressing need to send out a patch release right now, the next version WOULD
>>> be that minor bump. Put into practice, I would see major bumps being
>>> measured in years, minor bumps in quarters and patch bumps in weeks/months.
>
> I don't see how the minor releases would be serviceable for very long
> there. If they're not serviceable,
> then users have to move up anyway, then you're back at the status quo
> with the dot in a different place.

I don't see where a version minor will be serviced for a particularly long time after the next minor is released *as GA*. So, if version 3.5.0 comes along and introduces some rather unstable or unproved code, and gets the seal of approval as -alpha... 3.5.1 is a bit better but has known bugs, so it earns a -beta. Finally 3.5.2 is released as GA. In all of that time, I'd expect the project continues to fix defects in 3.4.x on a very regular basis, not expecting anyone to pick up 3.5 during that time. This is what substantially differs from using our least significant revision element for both minor and patch scope changes.

If we adopt this as 3.0.0 to start, the 2.4.x users would continue to need security fixes for some time. When 4.0.0 is done in another decade, again 3.x.n users will be the ones needing help for some time.

What the change accomplishes is that new development is never a gating factor in creating a patch release. Contrariwise, reliable patch delivery is no longer a gating factor to new development. Each lives on its own track, and successful new development supersedes the previous version minor.

Because of our conflation of patch and enhancement, the issue you had brought up, HttpProtocolOptions, occurred "as a release". But I'd suggest that if 2.2 and 2.4 were each "major versions" (as users and developers understand that term), I would have submitted such a radical refactoring as a new version minor of each of those two flavors. Note that some of those actual changes would likely have occurred some 4 years previous, when first proposed, had trunk not been removed from the release continuum for 6 years.

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
>> One thing you mention above is "wait for a new minor release". I can
>> definitely see that being an issue for our current maj.minor layout given
>> that minor bumps are measured in years. In this proposal, unless there's a
>> pressing need to send out a patch release right now, the next version WOULD
>> be that minor bump. Put into practice, I would see major bumps being
>> measured in years, minor bumps in quarters and patch bumps in weeks/months.

I don't see how the minor releases would be serviceable for very long
there. If they're not serviceable,
then users have to move up anyway, then you're back at the status quo
with the dot in a different place.

>>> For me including this would poison almost any proposal it is added to.
>>> In the context above: I want to use directives for opt-in of fixes in
>>> a patch release.
>>
>>
>> FWIW, I propose that a directive addition would be a minor bump because
>> directives are part of a configuration "contract" with users - a set of
>> directives that exist in that major.minor. By adding directives in a patch,
>> we break the contract that would state "Any configuration valid in 3.4.x
>> will always be valid in 3.4.x." We can't do that today, but it would be
>> great if we could. Adding directives only in a minor bump provides a clean
>> point at which a known set of directives are valid.

I don't see the value in a backwards-compatible configuration contract; why would we tie our hands like that? Does anyone see this aspect as an issue if it's orthogonal to new function?

Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
On 24.04.2018 at 19:58, Daniel Ruggeri wrote:
> On 2018-04-24 09:22, Eric Covener wrote:
>> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr 
>> <wr...@rowe-clan.net> wrote:
>>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>>>> Yes, exactly correct. We have three "contracts" to keep that I 
>>>>> think aligns very well with the following semver "contracts":
>>>>> Major => API/ABI compatibility for modules
>>>>> Minor => Feature and directives
>>>>> Patch => Functional and configuration syntax guarantees
>>>>>
>>>>> Demonstrating by way of a few examples:
>>>>> If we add a directive but do not change exported structure, that 
>>>>> would result in a minor bump since the directive is part of the 
>>>>> feature set that would necessitate a config change to use (not 
>>>>> forward compatible).
>>>>
>>>> I don't agree that adding directives is adding function,  in terms of
>>>> versioning or user expectations.  I don't see why it a new directive
>>>> or parameter should necessarily wait for a new minor release
>>>> especially when there's so much sensitivity to behavior changes. It
>>>> seems backwards.
>>>
>>> As a general rule, adding a directive introduces a new feature, along
>>> with new functions, and structure additions.
>>
>> I won't argue the semantics any further, but I don't agree there is
>> any such equivalence or general rule.
> 
> One thing you mention above is "wait for a new minor release". I can 
> definitely see that being an issue for our current maj.minor layout 
> given that minor bumps are measured in years. In this proposal, unless 
> there's a pressing need to send out a patch release right now, the next 
> version WOULD be that minor bump. Put into practice, I would see major 
> bumps being measured in years, minor bumps in quarters and patch bumps 
> in weeks/months.
> 
>>
>> For me including this would poison almost any proposal it is added to.
>> In the context above: I want to use directives for opt-in of fixes in
>> a patch release.
> 
> FWIW, I propose that a directive addition would be a minor bump because 
> directives are part of a configuration "contract" with users - a set of 
> directives that exist in that major.minor. By adding directives in a 
> patch, we break the contract that would state "Any configuration valid 
> in 3.4.x will always be valid in 3.4.x." We can't do that today, but it 
> would be great if we could. Adding directives only in a minor bump 
> provides a clean point at which a known set of directives are valid.

When directives control new features, I would totally agree. An example that might be harder to decide was the security hardening a little while ago, where the parsing of request lines was made much stricter. For security reasons this became the default, but for interoperability with broken clients we allowed the strict parser to be switched off by a new directive.

It was a security patch, so it should become part of a patch release, but due to the changed behavior, the directive would also be needed for people who prefer the old behavior over enhanced security.

If we were to argue that that hardening was a big enough change to only include it in a minor release, then we must be aware that people could only use this security-enhanced version by also getting all of the other new features in that version, which is typically not what you want when you update just for security reasons.
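A toy sketch of the "strict by default, directive to relax" pattern described here; this is not httpd's actual parser (the real hardening behind HttpProtocolOptions enforces far more), just an illustration of the opt-out shape, with 'strict' standing in for the directive:

```python
import re

# Strict grammar (illustrative only): uppercase method, single spaces,
# a well-formed HTTP-version token, nothing extra.
STRICT_RE = re.compile(r"^[A-Z]+ \S+ HTTP/\d\.\d$")

def accept_request_line(line, strict=True):
    """Return True if the request line is acceptable under the policy."""
    if strict:
        return bool(STRICT_RE.match(line))
    # Lenient mode: tolerate lowercase methods and stray whitespace,
    # the kind of breakage older clients produced.
    parts = line.split()
    return len(parts) >= 2

print(accept_request_line("GET / HTTP/1.1"))                  # True
print(accept_request_line("get  /  HTTP/1.1"))                # False
print(accept_request_line("get  /  HTTP/1.1", strict=False))  # True
```

The tension Rainer describes is visible even here: shipping the stricter default is a behavior change, yet shipping the relaxing knob only in a minor release forces security-only updaters to take new features too.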

Regards,

Rainer

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 01:50 PM, Rainer Jung wrote:
> On 24.04.2018 at 13:19, Daniel Ruggeri wrote:
>>
>>
>> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> Von: Rainer Jung <ra...@kippdata.de>
>>>> Sent: Monday, 23 April 2018 16:47
>>>> To: dev@httpd.apache.org
>>>> Subject: Re: A proposal...
>>>>
>>>> On 23.04.2018 at 16:00, Jim Jagielski wrote:
>>>>> It seems that, IMO, if there was not so much concern about
>>>> "regressions" in releases, this whole revisit-versioning debate would
>>>> not have come up. This implies, to me at least, that the root cause
>>> (as
>>>> I've said before) appears to be one related to QA and testing more
>>> than
>>>> anything. Unless we address this, then nothing else really matters.
>>>>>
>>>>> We have a test framework. The questions are:
>>>>>
>>>>>    1. Are we using it?
>>>>>    2. Are we using it sufficiently well?
>>>>>    3. If not, what can we do to improve that?
>>>>>    4. Can we supplement/replace it w/ other frameworks?
>>>>>
>>>>> It does seem to me that each time we patch something, there should
>>> be
>>>> a test added or extended which covers that bug. We have gotten lax in
>>>> that. Same for features. And the more substantial the change (ie, the
>>>> more core code it touches, or the more it refactors something), the
>>> more
>>>> we should envision what tests can be in place which ensure nothing
>>>> breaks.
>>>>>
>>>>> In other words: nothing backported unless it also involves some
>>>> changes to the Perl test framework or some pretty convincing reasons
>>> why
>>>> it's not required.
>>>>
>>>> I agree with the importance of the test framework, but would also
>>> like
>>>> to mention that getting additional test feedback from the community
>>>> seems also important. That's why IMHO the RC style of releasing could
>>> be
>>>> helpful by attracting more test effort before a release.
>>>
>>> I think RC style releasing could help. Another thought that came to my
>>> mind that
>>> I haven't worked out how we could implement this is the following:
>>>
>>> Do "double releases". Taking the current state we would do:
>>>
>>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>>> fixes / security fixes.
>>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>>> far.
>>>
>>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>>> contains bug fixes / security fixes
>>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>>> 2.4.36.
>>> So 2.4.36 would contain the additional features / improvements we had
>>> in 2.4.35 as well, but they
>>> have been in the "wild" for some time and the issues should have been
>>> identified and fixed as part
>>> of 2.4.36.
>>> Users would then have a choice what to take.
>>>
>>> Regards
>>>
>>> Rüdiger
>>
>> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
>> 2.12.4 => 2.12.3 + fixes
>> 2.13.0 => 2.12.4 + features
>> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number
>> alone would be confusing to the user base.
> 
> ... although at least in the Java world that is what happens there since a few years. For example Java 1.8.0_171
> includes security fixes and critical patches, 1.8.0_172 released at the same day includes additional features. Or as
> Oracle phrases it: "Java SE 8u171 includes important bug fixes. Oracle strongly recommends that all Java SE 8 users
> upgrade to this release. Java SE 8u172 is a patch-set update, including all of 8u171 plus additional bug fixes
> (described in the release notes).".

Damn it. You found the source of my idea :-)

> 
> Unfortunately it seems they have given up the idea starting with Java 9. So pointing to the Java 8 situation is not that
> convincing ...

IMHO the whole Java versioning after 8 is not very appealing. But this just follows Oracle's general new versioning strategy, which I regard as confusing with respect to support lifecycles.

Regards

Rüdiger
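The "double release" numbering discussed in this thread (each cycle produces a fixes-only release plus a fixes-and-features release built on top of it) can be modeled in a few lines, purely as an illustration of the sequence:

```python
def double_releases(major, minor, start_patch, cycles):
    """Yield (fixes_only, fixes_plus_features) version pairs per cycle."""
    patch = start_patch
    for _ in range(cycles):
        fixes = f"{major}.{minor}.{patch}"        # bug/security fixes only
        features = f"{major}.{minor}.{patch + 1}" # fixes + new features
        yield fixes, features
        patch += 2

for fixes, features in double_releases(2, 4, 34, 2):
    print(fixes, features)
# 2.4.34 2.4.35
# 2.4.36 2.4.37
```

This makes Daniel's objection concrete: nothing in the patch number alone tells a user which member of each pair is the fixes-only release.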

Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
On 24.04.2018 at 13:19, Daniel Ruggeri wrote:
> 
> 
> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>
>>
>>> -----Original Message-----
>>> From: Rainer Jung <ra...@kippdata.de>
>>> Sent: Monday, 23 April 2018 16:47
>>> To: dev@httpd.apache.org
>>> Subject: Re: A proposal...
>>>
>>> On 23.04.2018 at 16:00, Jim Jagielski wrote:
>>>> It seems that, IMO, if there was not so much concern about
>>> "regressions" in releases, this whole revisit-versioning debate would
>>> not have come up. This implies, to me at least, that the root cause
>> (as
>>> I've said before) appears to be one related to QA and testing more
>> than
>>> anything. Unless we address this, then nothing else really matters.
>>>>
>>>> We have a test framework. The questions are:
>>>>
>>>>    1. Are we using it?
>>>>    2. Are we using it sufficiently well?
>>>>    3. If not, what can we do to improve that?
>>>>    4. Can we supplement/replace it w/ other frameworks?
>>>>
>>>> It does seem to me that each time we patch something, there should
>> be
>>> a test added or extended which covers that bug. We have gotten lax in
>>> that. Same for features. And the more substantial the change (ie, the
>>> more core code it touches, or the more it refactors something), the
>> more
>>> we should envision what tests can be in place which ensure nothing
>>> breaks.
>>>>
>>>> In other words: nothing backported unless it also involves some
>>> changes to the Perl test framework or some pretty convincing reasons
>> why
>>> it's not required.
>>>
>>> I agree with the importance of the test framework, but would also
>> like
>>> to mention that getting additional test feedback from the community
>>> seems also important. That's why IMHO the RC style of releasing could
>> be
>>> helpful by attracting more test effort before a release.
>>
>> I think RC style releasing could help. Another thought that came to my
>> mind that
>> I haven't worked out how we could implement this is the following:
>>
>> Do "double releases". Taking the current state we would do:
>>
>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>> fixes / security fixes.
>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>> far.
>>
>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>> contains bug fixes / security fixes
>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>> 2.4.36.
>> So 2.4.36 would contain the additional features / improvements we had
>> in 2.4.35 as well, but they
>> have been in the "wild" for some time and the issues should have been
>> identified and fixed as part
>> of 2.4.36.
>> Users would then have a choice what to take.
>>
>> Regards
>>
>> Rüdiger
> 
> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
> 2.12.4 => 2.12.3 + fixes
> 2.13.0 => 2.12.4 + features
> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number alone would be confusing to the user base.

... although at least in the Java world that is what has happened for a 
few years now. For example, Java 1.8.0_171 includes security fixes and 
critical patches, while 1.8.0_172, released on the same day, includes 
additional features. Or as Oracle phrases it: "Java SE 8u171 includes 
important bug fixes. Oracle strongly recommends that all Java SE 8 users 
upgrade to this release. Java SE 8u172 is a patch-set update, including 
all of 8u171 plus additional bug fixes (described in the release notes).".

Unfortunately it seems they have given up on the idea starting with Java 9, 
so pointing to the Java 8 situation is not that convincing ...

Regards,

Rainer

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 02:52 PM, Daniel Ruggeri wrote:
> 

> In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config. Ideally, we would also point the changelog entry to the SVN patches which implement the change so downstream has an easier time picking and choosing what they want.
> 

Adding the revision of the backport commit to the CHANGES entry seems like a good idea.

Regards

Rüdiger

Re: A proposal...

Posted by Daniel Ruggeri <dr...@primary.net>.
On 2018-04-24 09:22, Eric Covener wrote:
> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr 
> <wr...@rowe-clan.net> wrote:
>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> 
>> wrote:
>>>> Yes, exactly correct. We have three "contracts" to keep that I think 
>>>> aligns very well with the following semver "contracts":
>>>> Major => API/ABI compatibility for modules
>>>> Minor => Feature and directives
>>>> Patch => Functional and configuration syntax guarantees
>>>> 
>>>> Demonstrating by way of a few examples:
>>>> If we add a directive but do not change exported structure, that 
>>>> would result in a minor bump since the directive is part of the 
>>>> feature set that would necessitate a config change to use (not 
>>>> forward compatible).
>>> 
>>> I don't agree that adding directives is adding function,  in terms of
>>> versioning or user expectations. I don't see why a new directive
>>> or parameter should necessarily wait for a new minor release
>>> especially when there's so much sensitivity to behavior changes. It
>>> seems backwards.
>> 
>> As a general rule, adding a directive introduces a new feature, along
>> with new functions, and structure additions.
> 
> I won't argue the semantics any further, but I don't agree there is
> any such equivalence or general rule.

One thing you mention above is "wait for a new minor release". I can 
definitely see that being an issue for our current maj.minor layout 
given that minor bumps are measured in years. In this proposal, unless 
there's a pressing need to send out a patch release right now, the next 
version WOULD be that minor bump. Put into practice, I would see major 
bumps being measured in years, minor bumps in quarters and patch bumps 
in weeks/months.

> 
> For me including this would poison almost any proposal it is added to.
> In the context above: I want to use directives for opt-in of fixes in
> a patch release.

FWIW, I propose that a directive addition would be a minor bump because 
directives are part of a configuration "contract" with users - a set of 
directives that exist in that major.minor. By adding directives in a 
patch, we break the contract that says "Any configuration valid in one 
3.4.x release will be valid in every 3.4.x release." We can't promise 
that today, but it would be great if we could. Adding directives only in 
a minor bump provides a clean point at which a known set of directives 
is valid.

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, Apr 24, 2018 at 4:22 PM, Eric Covener <co...@gmail.com> wrote:
> On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>>> Yes, exactly correct. We have three "contracts" to keep that I think aligns very well with the following semver "contracts":
>>>> Major => API/ABI compatibility for modules
>>>> Minor => Feature and directives
>>>> Patch => Functional and configuration syntax guarantees
>>>>
>>>> Demonstrating by way of a few examples:
>>>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>>>
>>> I don't agree that adding directives is adding function,  in terms of
>>> versioning or user expectations. I don't see why a new directive
>>> or parameter should necessarily wait for a new minor release
>>> especially when there's so much sensitivity to behavior changes. It
>>> seems backwards.
>>
>> As a general rule, adding a directive introduces a new feature, along
>> with new functions, and structure additions.
>
> I won't argue the semantics any further, but I don't agree there is
> any such equivalence or general rule.
>
> For me including this would poison almost any proposal it is added to.
> In the context above: I want to use directives for opt-in of fixes in
> a patch release.

I agree with Eric here, new directives are sometimes the way to fix
something for those who need to, without breaking the others that
don't.

By the way, if we bump minor for any non-forward-backportable change,
who is going to maintain all the "current minor minus n" versions
while all of the new/fancy things are in current only (and minor keeps
bumping)?
I'm afraid it won't help users stuck at some minor version (because of
API/ABI) if they don't get bugfixes because their version doesn't get
traction/attention anymore.
IOW, what maintenance would we guarantee/apply for some minor version
if we keep bumping minor numbers to get new stuff out?

Not an opposition, just wanting to have a clear picture. Remember that
some (most?) of us have never been an actor in a new httpd minor release,
not to talk of a major one ;)


Regards,
Yann.

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Tue, Apr 24, 2018 at 10:08 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>>> Yes, exactly correct. We have three "contracts" to keep that I think aligns very well with the following semver "contracts":
>>> Major => API/ABI compatibility for modules
>>> Minor => Feature and directives
>>> Patch => Functional and configuration syntax guarantees
>>>
>>> Demonstrating by way of a few examples:
>>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>>
>> I don't agree that adding directives is adding function,  in terms of
>> versioning or user expectations. I don't see why a new directive
>> or parameter should necessarily wait for a new minor release
>> especially when there's so much sensitivity to behavior changes. It
>> seems backwards.
>
> As a general rule, adding a directive introduces a new feature, along
> with new functions, and structure additions.

I won't argue the semantics any further, but I don't agree there is
any such equivalence or general rule.

For me including this would poison almost any proposal it is added to.
In the context above: I want to use directives for opt-in of fixes in
a patch release.

-- 
Eric Covener
covener@gmail.com

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 8:27 AM, Eric Covener <co...@gmail.com> wrote:
>> Yes, exactly correct. We have three "contracts" to keep that I think aligns very well with the following semver "contracts":
>> Major => API/ABI compatibility for modules
>> Minor => Feature and directives
>> Patch => Functional and configuration syntax guarantees
>>
>> Demonstrating by way of a few examples:
>> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
>
> I don't agree that adding directives is adding function,  in terms of
> versioning or user expectations. I don't see why a new directive
> or parameter should necessarily wait for a new minor release
> especially when there's so much sensitivity to behavior changes. It
> seems backwards.

As a general rule, adding a directive introduces a new feature, along
with new functions, and structure additions.

If someone says "try the WizBang directive", it is much clearer if this
appears in 2.7.0 and stays there without being renamed or dropped
until some future minor release. So we can claim the docs apply to
version major.minor with no confusion about the set of features in
this 2.7 flavor of Apache. 3-6 months later, some version 2.8 might
up and change those, but we can be careful about not making any
gratuitous changes without offering some back-compat support of
older directive names. (E.g. NameVirtualHost could have been a
no-op directive for a considerable time with no harm to the user's
config or intent.)

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
> Yes, exactly correct. We have three "contracts" to keep that I think aligns very well with the following semver "contracts":
> Major => API/ABI compatibility for modules
> Minor => Feature and directives
> Patch => Functional and configuration syntax guarantees
>
> Demonstrating by way of a few examples:
> If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).

I don't agree that adding directives is adding function, in terms of
versioning or user expectations. I don't see why a new directive
or parameter should necessarily wait for a new minor release,
especially when there's so much sensitivity to behavior changes. It
seems backwards.

> If we were to fix a security bug that does not impact running configs, that would be a patch bump since a config that works today must work tomorrow for the same maj.min.
> If we were to change default behavior, we would bump minor. This is because although the change doesn't break existing explicit configs of the directive, it would modify behavior due to implicit defaults => a visible change in functionality.

I think it is more illustrative to turn this around and say certain
changes must wait for a minor or major release.

To me the case worth enumerating is what tolerance for behavior change
we want to allow for a security fix that goes into a patch release.

> In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config.

I'm not understanding the "best default config" part. Is this a way to
illustrate the stuff with bad hard-coded default values that won't be
fixed until the next minor? I think the term you're using is a little
broad/abstract for something like that.

Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.

On April 24, 2018 6:53:52 AM CDT, Ruediger Pluem <rp...@apache.org> wrote:
>
>
>On 04/24/2018 01:19 PM, Daniel Ruggeri wrote:
>> 
>> 
>> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group"
><ru...@vodafone.com> wrote:
>>>
>>>
>>>> -----Ursprüngliche Nachricht-----
>>>> Von: Rainer Jung <ra...@kippdata.de>
>>>> Gesendet: Montag, 23. April 2018 16:47
>>>> An: dev@httpd.apache.org
>>>> Betreff: Re: A proposal...
>>>>
>>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>>> It seems that, IMO, if there was not so much concern about
>>>> "regressions" in releases, this whole revisit-versioning debate
>would
>>>> not have come up. This implies, to me at least, that the root cause
>>> (as
>>>> I've said before) appears to be one related to QA and testing more
>>> than
>>>> anything. Unless we address this, then nothing else really matters.
>>>>>
>>>>> We have a test framework. The questions are:
>>>>>
>>>>>   1. Are we using it?
>>>>>   2. Are we using it sufficiently well?
>>>>>   3. If not, what can we do to improve that?
>>>>>   4. Can we supplement/replace it w/ other frameworks?
>>>>>
>>>>> It does seem to me that each time we patch something, there should
>>> be
>>>> a test added or extended which covers that bug. We have gotten lax
>in
>>>> that. Same for features. And the more substantial the change (ie,
>the
>>>> more core code it touches, or the more it refactors something), the
>>> more
>>>> we should envision what tests can be in place which ensure nothing
>>>> breaks.
>>>>>
>>>>> In other words: nothing backported unless it also involves some
>>>> changes to the Perl test framework or some pretty convincing
>reasons
>>> why
>>>> it's not required.
>>>>
>>>> I agree with the importance of the test framework, but would also
>>> like
>>>> to mention that getting additional test feedback from the community
>>>> seems also important. That's why IMHO the RC style of releasing
>could
>>> be
>>>> helpful by attracting more test effort before a release.
>>>
>>> I think RC style releasing could help. Another thought that came to
>my
>>> mind that
>>> I haven't worked out how we could implement this is the following:
>>>
>>> Do "double releases". Taking the current state we would do:
>>>
>>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>>> fixes / security fixes.
>>> 2.4.35 additional features / improvements on top of 2.4.34 as we do
>so
>>> far.
>>>
>>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>>> contains bug fixes / security fixes
>>> on top of 2.4.35, 2.4.37 additional features / improvements on top
>of
>>> 2.4.36.
>>> So 2.4.36 would contain the additional features / improvements we
>had
>>> in 2.4.35 as well, but they
>>> have been in the "wild" for some time and the issues should have
>been
>>> identified and fixed as part
>>> of 2.4.36.
>>> Users would then have a choice what to take.
>>>
>>> Regards
>>>
>>> Rüdiger
>> 
>> Interesting idea. This idea seems to be converging on semver-like
>principles where the double release would look like:
>> 2.12.4 => 2.12.3 + fixes
>> 2.13.0 => 2.12.4 + features
>> ... which I like as a direction. However, I think distinguishing
>between patch/feature releases in the patch number alone would be
>confusing to the user base.
>> 
>
>And for 2.x we would stay API/ABI stable just as we do today with
>a stable release? The next API/ABI incompatible
>version would be 3.x in that scheme?
>
>Regards
>
>Rüdiger

Yes, exactly correct. We have three "contracts" to keep, which I think align very well with the following semver "contracts":
Major => API/ABI compatibility for modules
Minor => Feature and directives
Patch => Functional and configuration syntax guarantees

Demonstrating by way of a few examples:
If we add a directive but do not change exported structure, that would result in a minor bump since the directive is part of the feature set that would necessitate a config change to use (not forward compatible).
If we were to fix a security bug that does not impact running configs, that would be a patch bump since a config that works today must work tomorrow for the same maj.min.
If we were to change default behavior, we would bump minor. This is because although the change doesn't break existing explicit configs of the directive, it would modify behavior due to implicit defaults => a visible change in functionality.
Introducing H2 would have been a minor bump because it adds both new directives and new functionality.
The switch from experimental to GA for H2 would have been a minor bump, not because of functional changes, but because of a change in our "contract" to users of code readiness.
Refactoring exported core structures for better H2 support would be a major bump due to potential ABI breakage.
A bug fix that requires API changes and adds directives would still be a major bump.
Experiments for major changes would be done in a testing branch and merged to trunk as the next major.
A minor bump (feature/functional/etc.) would be cut from current trunk, while a patch bump is made from the maj.minor it fixes (I haven't yet worked out what this proposal would look like in svn).

In all cases, the changelog would clearly state the changes and we would ship what we consider to be the best default config. Ideally, we would also point the changelog entry to the SVN patches which implement the change so downstream has an easier time picking and choosing what they want.
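To make the contracts above concrete, here is a small sketch of the bump rules as a decision helper. This is purely my illustration; the function and flag names are made up, not an actual httpd tool:

```python
# Hypothetical sketch of the proposed semver-style bump contracts.
# Flag names are illustrative only.

def next_version(current, *, abi_break=False, new_directive=False,
                 new_feature=False, default_changed=False):
    """Return the next (major, minor, patch) tuple under the proposed rules."""
    major, minor, patch = current
    if abi_break:
        # Exported structures changed: modules must rebuild.
        return (major + 1, 0, 0)
    if new_directive or new_feature or default_changed:
        # The feature/configuration contract changed.
        return (major, minor + 1, 0)
    # Pure bug/security fixes: configs valid today stay valid tomorrow.
    return (major, minor, patch + 1)

# Adding a directive without touching exported structures:
print(next_version((2, 12, 4), new_directive=True))  # -> (2, 13, 0)
```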

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Ruediger Pluem <rp...@apache.org>.

On 04/24/2018 01:19 PM, Daniel Ruggeri wrote:
> 
> 
> On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>>
>>
>>> -----Ursprüngliche Nachricht-----
>>> Von: Rainer Jung <ra...@kippdata.de>
>>> Gesendet: Montag, 23. April 2018 16:47
>>> An: dev@httpd.apache.org
>>> Betreff: Re: A proposal...
>>>
>>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>>> It seems that, IMO, if there was not so much concern about
>>> "regressions" in releases, this whole revisit-versioning debate would
>>> not have come up. This implies, to me at least, that the root cause
>> (as
>>> I've said before) appears to be one related to QA and testing more
>> than
>>> anything. Unless we address this, then nothing else really matters.
>>>>
>>>> We have a test framework. The questions are:
>>>>
>>>>   1. Are we using it?
>>>>   2. Are we using it sufficiently well?
>>>>   3. If not, what can we do to improve that?
>>>>   4. Can we supplement/replace it w/ other frameworks?
>>>>
>>>> It does seem to me that each time we patch something, there should
>> be
>>> a test added or extended which covers that bug. We have gotten lax in
>>> that. Same for features. And the more substantial the change (ie, the
>>> more core code it touches, or the more it refactors something), the
>> more
>>> we should envision what tests can be in place which ensure nothing
>>> breaks.
>>>>
>>>> In other words: nothing backported unless it also involves some
>>> changes to the Perl test framework or some pretty convincing reasons
>> why
>>> it's not required.
>>>
>>> I agree with the importance of the test framework, but would also
>> like
>>> to mention that getting additional test feedback from the community
>>> seems also important. That's why IMHO the RC style of releasing could
>> be
>>> helpful by attracting more test effort before a release.
>>
>> I think RC style releasing could help. Another thought that came to my
>> mind that
>> I haven't worked out how we could implement this is the following:
>>
>> Do "double releases". Taking the current state we would do:
>>
>> Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>> fixes / security fixes.
>> 2.4.35 additional features / improvements on top of 2.4.34 as we do so
>> far.
>>
>> The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>> contains bug fixes / security fixes
>> on top of 2.4.35, 2.4.37 additional features / improvements on top of
>> 2.4.36.
>> So 2.4.36 would contain the additional features / improvements we had
>> in 2.4.35 as well, but they
>> have been in the "wild" for some time and the issues should have been
>> identified and fixed as part
>> of 2.4.36.
>> Users would then have a choice what to take.
>>
>> Regards
>>
>> Rüdiger
> 
> Interesting idea. This idea seems to be converging on semver-like principles where the double release would look like:
> 2.12.4 => 2.12.3 + fixes
> 2.13.0 => 2.12.4 + features
> ... which I like as a direction. However, I think distinguishing between patch/feature releases in the patch number alone would be confusing to the user base.
> 

And for 2.x we would stay API/ABI stable just as we do today with a stable release? The next API/ABI-incompatible
version would be 3.x in that scheme?

Regards

Rüdiger


Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.

On April 24, 2018 1:38:26 AM CDT, "Plüm, Rüdiger, Vodafone Group" <ru...@vodafone.com> wrote:
>
>
>> -----Ursprüngliche Nachricht-----
>> Von: Rainer Jung <ra...@kippdata.de>
>> Gesendet: Montag, 23. April 2018 16:47
>> An: dev@httpd.apache.org
>> Betreff: Re: A proposal...
>> 
>> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>> > It seems that, IMO, if there was not so much concern about
>> "regressions" in releases, this whole revisit-versioning debate would
>> not have come up. This implies, to me at least, that the root cause
>(as
>> I've said before) appears to be one related to QA and testing more
>than
>> anything. Unless we address this, then nothing else really matters.
>> >
>> > We have a test framework. The questions are:
>> >
>> >   1. Are we using it?
>> >   2. Are we using it sufficiently well?
>> >   3. If not, what can we do to improve that?
>> >   4. Can we supplement/replace it w/ other frameworks?
>> >
>> > It does seem to me that each time we patch something, there should
>be
>> a test added or extended which covers that bug. We have gotten lax in
>> that. Same for features. And the more substantial the change (ie, the
>> more core code it touches, or the more it refactors something), the
>more
>> we should envision what tests can be in place which ensure nothing
>> breaks.
>> >
>> > In other words: nothing backported unless it also involves some
>> changes to the Perl test framework or some pretty convincing reasons
>why
>> it's not required.
>> 
>> I agree with the importance of the test framework, but would also
>like
>> to mention that getting additional test feedback from the community
>> seems also important. That's why IMHO the RC style of releasing could
>be
>> helpful by attracting more test effort before a release.
>
>I think RC style releasing could help. Another thought that came to my
>mind that
>I haven't worked out how we could implement this is the following:
>
>Do "double releases". Taking the current state we would do:
>
>Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug
>fixes / security fixes.
>2.4.35 additional features / improvements on top of 2.4.34 as we do so
>far.
>
>The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only
>contains bug fixes / security fixes
>on top of 2.4.35, 2.4.37 additional features / improvements on top of
>2.4.36.
>So 2.4.36 would contain the additional features / improvements we had
>in 2.4.35 as well, but they
>have been in the "wild" for some time and the issues should have been
>identified and fixed as part
>of 2.4.36.
>Users would then have a choice what to take.
>
>Regards
>
>Rüdiger

Interesting idea. It seems to be converging on semver-like principles, where the double release would look like:
2.12.4 => 2.12.3 + fixes
2.13.0 => 2.12.4 + features
... which I like as a direction. However, I think distinguishing between patch and feature releases in the patch number alone would be confusing to the user base.
-- 
Daniel Ruggeri

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Mon, Apr 23, 2018 at 1:05 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>
>> On Apr 23, 2018, at 12:54 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>>
>> +1; I see any "patch" releases (semver definition) as adopting well-tested bug
>> fixes. In some cases, complex patches could arrive first on a new minor branch
>> for longer alpha/beta scrutiny, before being accepted as-a-patch. This
>> could have
>> helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.
>
> What really helped was having test cases, which are now part
> of the test framework.

More to the point, this would have always been iterative: fix one thing
to break another. You aren't going to anticipate every side effect when
writing the initial test.

It would be great to understand how our PR system failed us in engaging
with PHP users to identify *all* the side effects of whatever change we
were making to the location transcription. Tests were added as things
broke, more tests were added, and those broke other things.

To suggest that tests alone would have solved this is silly. The tests were necessary,
and derived from user reports of testing out our code. That it took so many
releases over a year was sort of inexplicable, and if we can sort that out,
we will end up with a better process no matter how we change test rules
or release versioning.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On Apr 23, 2018, at 12:54 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> 
> +1; I see any "patch" releases (semver definition) as adopting well-tested bug
> fixes. In some cases, complex patches could arrive first on a new minor branch
> for longer alpha/beta scrutiny, before being accepted as-a-patch. This
> could have
> helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.
> 

What really helped was having test cases, which are now part
of the test framework.


Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Mon, Apr 23, 2018 at 9:47 AM, Rainer Jung <ra...@kippdata.de> wrote:
> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
>>
>> It seems that, IMO, if there was not so much concern about "regressions"
>> in releases, this whole revisit-versioning debate would not have come up.

Additional concerns amplify the regressions: last-minute code dumps
with minimal review upon each point release; a three-day review window
for the success of the combined result; insufficient community review of
new features (with or without new directives), with no alpha or beta
releases in over half a decade (h2/md excepted).

>> It does seem to me that each time we patch something, there should be a
>> test added or extended which covers that bug. We have gotten lax in that.
>> Same for features. And the more substantial the change (ie, the more core
>> code it touches, or the more it refactors something), the more we should
>> envision what tests can be in place which ensure nothing breaks.

+1!

>> In other words: nothing backported unless it also involves some changes to
>> the Perl test framework or some pretty convincing reasons why it's not
>> required.

Or, horse-before-the-cart: put in a test for a spec violation or problem
behavior in the code, and add it to TODO. The suite doesn't fail, but it
serves as a flag for a defect to be corrected.
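For what it's worth, that TODO idea isn't specific to the Perl framework. A minimal sketch of the same pattern using Python's stdlib unittest (expectedFailure playing the role of a TODO block; the checked function and its defect are hypothetical):

```python
# Sketch: flag a known defect without failing the suite, analogous to
# the Perl test framework's TODO mechanism.
import unittest

def spec_compliant_status(code):
    # Hypothetical buggy check: should accept only 100-599, but a
    # defect lets 99 slip through. The test below documents this.
    return 100 <= code <= 599 or code == 99

class TestStatusValidation(unittest.TestCase):
    @unittest.expectedFailure
    def test_rejects_out_of_range_status(self):
        # Fails today; once the defect is fixed, the decorator is removed.
        self.assertFalse(spec_compliant_status(99))
```

Running this suite reports the case as an expected failure rather than an error, so the known defect stays visible without blocking the build.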

Even better (and we have been good about this)... make corresponding docs
changes a prereq, in addition to test.

> I agree with the importance of the test framework, but would also like to
> mention that getting additional test feedback from the community seems also
> important. That's why IMHO the RC style of releasing could be helpful by
> attracting more test effort before a release.
>
> And for the more complex modules like mod_proxy, mod_ssl and the event MPM,
> some of the hiccups might have been hard to detect with the test framework.
> That's why I think having a more stable branch 2.4 with less feature
> backports and another branch that evolves faster would give downstreams a
> choice.

+1; I see any "patch" releases (semver definition) as adopting well-tested bug
fixes. In some cases, complex patches could arrive first on a new minor branch
for longer alpha/beta scrutiny, before being accepted as-a-patch. This
could have
helped our php-fpm users with that crazy 2.4.2# cycle of tweak-and-break.

I'd hope we would reintroduce alpha/beta review of new features coinciding
with release m.n.0 with a much longer tail for feature review. Maybe it requires
two or three patch releases before GA, maybe it is accepted as GA on the
very first candidate.

A patch release can be reviewed in a week, but needs to be reviewed in days
to move a security defect fix into users' hands after it is revealed
to our svn/git.
On very rare occasions (once a decade or so), we accelerate this to 24 hours.

A feature release/significant behavior change needs a community, and this is
not a review that happens in a week. I'd expect better adoption of new features
by drawing in our users@ and extended communities to help review additions.

AW: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: Rainer Jung <ra...@kippdata.de>
> Gesendet: Montag, 23. April 2018 16:47
> An: dev@httpd.apache.org
> Betreff: Re: A proposal...
> 
> Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
> > It seems that, IMO, if there was not so much concern about
> "regressions" in releases, this whole revisit-versioning debate would
> not have come up. This implies, to me at least, that the root cause (as
> I've said before) appears to be one related to QA and testing more than
> anything. Unless we address this, then nothing else really matters.
> >
> > We have a test framework. The questions are:
> >
> >   1. Are we using it?
> >   2. Are we using it sufficiently well?
> >   3. If not, what can we do to improve that?
> >   4. Can we supplement/replace it w/ other frameworks?
> >
> > It does seem to me that each time we patch something, there should be
> a test added or extended which covers that bug. We have gotten lax in
> that. Same for features. And the more substantial the change (ie, the
> more core code it touches, or the more it refactors something), the more
> we should envision what tests can be in place which ensure nothing
> breaks.
> >
> > In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons why
> it's not required.
> 
> I agree with the importance of the test framework, but would also like
> to mention that getting additional test feedback from the community
> seems also important. That's why IMHO the RC style of releasing could be
> helpful by attracting more test effort before a release.

I think RC style releasing could help. Another thought that came to my mind,
though I haven't worked out how we could implement it, is the following:

Do "double releases". Taking the current state we would do:

Release 2.4.34 and 2.4.35 at the same time. 2.4.34 only contains bug fixes / security fixes.
2.4.35 additional features / improvements on top of 2.4.34 as we do so far.

The next "double release" would be 2.4.36 / 2.4.37. 2.4.36 only contains bug fixes / security fixes
on top of 2.4.35, 2.4.37 additional features / improvements on top of 2.4.36.
So 2.4.36 would contain the additional features / improvements we had in 2.4.35 as well, but they
have been in the "wild" for some time and the issues should have been identified and fixed as part
of 2.4.36.
Users would then have a choice of what to take.

Regards

Rüdiger


Re: A proposal...

Posted by Rainer Jung <ra...@kippdata.de>.
Am 23.04.2018 um 16:00 schrieb Jim Jagielski:
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:
> 
>   1. Are we using it?
>   2. Are we using it sufficiently well?
>   3. If not, what can we do to improve that?
>   4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

I agree with the importance of the test framework, but would also like 
to mention that getting additional test feedback from the community 
seems just as important. That's why IMHO the RC style of releasing could 
be helpful by attracting more test effort before a release.

And for the more complex modules like mod_proxy, mod_ssl and the event 
MPM, some of the hiccups might have been hard to detect with the test 
framework. That's why I think having a more stable branch 2.4 with less 
feature backports and another branch that evolves faster would give 
downstreams a choice.

Regards,

Rainer


Re: A proposal...

Posted by Daniel Ruggeri <dr...@primary.net>.
On 2018-04-23 09:00, Jim Jagielski wrote:
> It seems that, IMO, if there was not so much concern about
> "regressions" in releases, this whole revisit-versioning debate would
> not have come up. This implies, to me at least, that the root cause
> (as I've said before) appears to be one related to QA and testing more
> than anything. Unless we address this, then nothing else really
> matters.
> 
> We have a test framework. The questions are:
> 
>  1. Are we using it?
>  2. Are we using it sufficiently well?
>  3. If not, what can we do to improve that?
>  4. Can we supplement/replace it w/ other frameworks?

My opinion (I think mentioned here on-list before, too) is that the 
framework is too... mystical. A lot of us do not understand how it works 
and it's a significant cognitive exercise to get started. Getting it 
installed and up and running is also non-trivial.

I am willing to invest time working with anyone who would like to 
generate more documentation to demystify the framework. Pair 
programming, maybe, to go with this newfangled test driven design 
thought??? :-). I do not understand the ins and outs of the framework 
very well, but am willing to learn more to ferret out the things that 
should be better documented. Things like, "How do I add a vhost for a 
specific test?", "Are there any convenient test wrappers for HTTP(s) 
requests?", "How do I write a test case from scratch?" would be a great 
start.


Also, FWIW, at $dayjob we use serverspec (https://serverspec.org/) as a 
testing framework for infrastructure like httpd. After some initial 
thrashing and avoidance, I've come to like it quite well. If we prefer 
to keep with a scripting language for tests (I do), Ruby is a decent 
choice since it has all the niceties that we'd expect (HTTP(s), 
XML/JSON/YML, threading, native testing framework, crypto) built in. I'm 
happy to provide an example or two if anyone is interested in exploring 
the topic in more depth.


> 
> It does seem to me that each time we patch something, there should be
> a test added or extended which covers that bug. We have gotten lax in
> that. Same for features. And the more substantial the change (ie, the
> more core code it touches, or the more it refactors something), the
> more we should envision what tests can be in place which ensure
> nothing breaks.
> 
> In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons
> why it's not required.

I completely support creating this as a procedure, provided we tackle 
the "how do I test stuff" doco challenges, too.

-- 
Daniel Ruggeri

Re: A proposal...

Posted by Paul Querna <pa...@querna.org>.
On Mon, Apr 23, 2018 at 11:17 AM, Christophe Jaillet
<ch...@wanadoo.fr> wrote:
> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>>
>> It seems that, IMO, if there was not so much concern about "regressions"
>> in releases, this whole revisit-versioning debate would not have come up.
>> This implies, to me at least, that the root cause (as I've said before)
>> appears to be one related to QA and testing more than anything. Unless we
>> address this, then nothing else really matters.
>>
>> We have a test framework. The questions are:
>>
>>   1. Are we using it?
>>   2. Are we using it sufficiently well?
>>   3. If not, what can we do to improve that?
>>   4. Can we supplement/replace it w/ other frameworks?
>>
>> It does seem to me that each time we patch something, there should be a
>> test added or extended which covers that bug. We have gotten lax in that.
>> Same for features. And the more substantial the change (ie, the more core
>> code it touches, or the more it refactors something), the more we should
>> envision what tests can be in place which ensure nothing breaks.
>>
>> In other words: nothing backported unless it also involves some changes to
>> the Perl test framework or some pretty convincing reasons why it's not
>> required.
>>
>
> Hi,
> +1000 on my side for more tests.
>
> But, IMHO, the perl framework is complex to understand for most of us.
>
> Last week I tried to play with it. I tried to update proxy_balancer.t
> because only lbmethod=byrequests is tested.
> The current test itself is really simple. It just checks that the module
> didn't crash (i.e. we receive a 200).
> I tried to extend it for the other lbmethods available. This looked like an
> easy task. But figuring out the relation between:
>    <VirtualHost proxy_http_bal1>
> and
>    BalancerMember http://@SERVERNAME@:@PROXY_HTTP_BAL1_PORT@
> still remains a mystery to me.
>
>
> The ./test framework could be useful as well.
> At least it is written in C, so the entry ticket should be cheaper for most
> of us.
> But not everything can be done with it, I guess.
> Maybe, we should at least have some unit testing for each ap_ function? The
> behavior of these functions should not change, as they can be used by 3rd-party
> modules.

I agree that having a quick way to make function level tests would be
very helpful.  It's something largely missing from httpd. (APR has
more)

Even in making mod_log_json, testing it via the test framework is
complicated, as it's not a module that changes the output of an HTTP
request; whereas I could very easily make a few quick C-based tests
that make sure things are being serialized correctly.

> The more tests, the better, but I believe that most regressions come from
> interactions between all that is possible with httpd.
> A test-suite is only a test-suite. Not everything can be tested.
>
>
> IMHO, as a minimum, all CVEs should have a dedicated test which
> explicitly fails with version n, and succeeds with version n+1.
> It would help to make sure that known security issues don't come back.
>
>
>
> Another question with the perl framework.
> Is there a way to send "invalid" data/requests with it?
> All I see is some GET(...). I guess that it sends well-formed data.
> Checking the behavior when invalid queries are received would be great.
> Some kind of RFC compliance check.
>
>
> just my 2c,
> CJ

Re: A proposal...

Posted by Alain Toussaint <al...@vocatus.pub>.
> Hi,
> +1000 on my side for more tests.
> 
> But, IMHO, the perl framework is complex to understand for most of us.

From what I saw, the preferred scripting language with access to httpd's internals seems to be Lua
at the present time. I also think that redoing (recoding) the testing framework to use Lua, and getting
to the point where sufficient testing (for regression and anything else) is possible, is a huge
endeavor. I am also not familiar with the history of httpd or its mailing list archive, but I am willing
to invest some time to make that happen (I also have 5 days of course per week and a part-time job 1
day a week plus editorship on BLFS to work on but BLFS and test case of httpd will be done jointly).

Still, I will ask you all: is a reimplementation of the testing framework in
Lua possible / feasible?

my 0.02$

Alain

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
As expressed by others, please don't!

IMHO, ONE language/framework is all we need. Having a set of 
unrelated materials will bring nightmares and something hard, not to say 
impossible, to maintain/understand.

So we should keep it as-is, or switch to something new. But trying to 
please everyone is not the right way to go.
Even if the existing framework looks hard to me, I still think that it 
is a good option. Others have been able to extend the base, so it is 
possible :)

CJ

Le 24/04/2018 à 14:50, Jim Jagielski a écrit :
> One idea is that we set up, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and then reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process, since those scripts themselves would need to be universal enough that all testers can run them.
>
> I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.


Re: A proposal...

Posted by Daniel Ruggeri <DR...@primary.net>.
Hi, Jim;
   Further to that point, simply reaping the exit code of zero or non-zero should be enough for a test to communicate success or failure.

   My only concern with this concept is that it could make our testing framework require a *very* unique set of system libraries, binaries and interpreters to be installed to run the full suite of tests. For a strawman example, I don't have nodejs on my Linux testing machine (easily fixable), but it doesn't seem clear whether AIX is supported by nodejs (maybe not fixable?). Other languages like golang are in the same boat. Maybe we could have the test framework ask the script/binary whether the execution environment can run the test before executing the test itself?
   The other thing I wonder about is how difficult it will become to maintain the tests, since some concerns with the current framework's language have already been expressed. For its faults and virtues, at least the test framework is in a single language. I suspect most of us can figure out what other languages are doing, so maybe it's not a big deal... WDYT?
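
For what it's worth, the "can this environment run me?" check could be as small as a convention each test script follows. A hedged sketch, assuming we borrowed Automake's exit-code-77 "skip" convention, with nodejs purely as an example prerequisite:

```python
#!/usr/bin/env python3
"""Per-test environment probe: exit 77 ("skip") when a prerequisite
tool is missing. The 77 convention and the nodejs prerequisite are
assumptions for illustration, not agreed project policy."""
import shutil
import sys

REQUIRES = ["node"]  # hypothetical: this particular test needs nodejs


def missing_tools(tools):
    """Return the subset of tools that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


def main():
    missing = missing_tools(REQUIRES)
    if missing:
        print("SKIP: missing " + ", ".join(missing))
        return 77
    # ... the actual test would run here, returning 0 (pass) or 1 (fail)
    return 0

# a wrapper script would call sys.exit(main())
```

A harness would then treat exit 77 as "skipped" rather than "failed", so a machine without nodejs (or golang, etc.) could still run the rest of the suite.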
-- 
Daniel Ruggeri

On April 24, 2018 7:50:18 AM CDT, Jim Jagielski <ji...@jaguNET.com> wrote:
>One idea is that we setup, using the existing perl test framework, a
>sort of "catch all" test, where the framework simply runs all scripts
>from a subdir via system() (or whatever), and then reports success or
>failure. Those scripts could be written in anything. This would mean
>that people could add tests w/o knowing any Perl at all. It would
>require, however, some sort of process since those scripts themselves
>would need to be universal enough that all testers can run them.
>
>I may give that a whirl... I have some nodejs scripts that test
>websockets and I may see how/if I can "wrap" them within the test
>framework.

Re: AW: A proposal...

Posted by Alain Toussaint <al...@vocatus.pub>.
>  I would say that leaves us with Perl, Python or
> something like that as base language.

The reasons I suggested Lua previously is because it's the only programming language modules found
in the sources of httpd:

https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/

specifically: https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/lua/

mod_perl, mod_python and other languages modules are external to the project. I don't know if the
presence of a particular module for a programming language is actually needed but from the
documentation I've read about the Lua module is that it has excellent access to the inard of httpd
which would facilitate white box testing (I'd assume the current perl framework do the job for black
box testing).

As for platforms Lua run on: aix, bsd and {free,net,open}bsd, Linux, OSX, windows, solaris. Probably
some more.

> If we switch the framework we need to consider that with all gaps we have, we already have
> a large amount of tests in the current framework that need to be ported over time.

Sadly, yes.

Alain

Re: A proposal...

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Tue, Apr 24, 2018 at 8:37 AM, Plüm, Rüdiger, Vodafone Group
<ru...@vodafone.com> wrote:
>
> If we switch the framework, we need to consider that, despite all the gaps we have, we already have
> a large number of tests in the current framework that would need to be ported over time.

The OpenSSL project overhauled their test schema for 1.1, IIRC?
Wondering if people have thoughts on that one, whether their logic
would help us? I'm working on getting all our dependencies' test
logic going on Windows, which might kick off some ideas.

Splitting much of the core httpd binary, especially server/util*.c, into
a libhttpd consumable by third parties could be accompanied by the
same C-language test schema for regression checks that the APR
project adopted; that would move many tests to a language we all
ought to be comfortable with.

AW: A proposal...

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.

> -----Ursprüngliche Nachricht-----
> Von: Eric Covener <co...@gmail.com>
> Gesendet: Dienstag, 24. April 2018 15:31
> An: Apache HTTP Server Development List <de...@httpd.apache.org>
> Betreff: Re: A proposal...
> 
> On Tue, Apr 24, 2018 at 8:50 AM, Jim Jagielski <ji...@jagunet.com> wrote:
> > One idea is that we setup, using the existing perl test framework, a
> sort of "catch all" test, where the framework simply runs all scripts
> from a subdir via system() (or whatever), and then reports success or
> failure. Those scripts could be written in anything. This would mean
> that people could add tests w/o knowing any Perl at all. It would
> require, however, some sort of process since those scripts themselves
> would need to be universal enough that all testers can run them.
> >
> > I may give that a whirl... I have some nodejs scripts that test
> websockets and I may see how/if I can "wrap" them within the test
> framework.
> 
> I fear this would lead to M frameworks and N languages which makes it
> harder for maintainers (prereqs, languages, etc) and fragments
> whatever potential there is for improvements to the harness/tools.

My concern as well. It might be more usable for some, but I think overall
this will lead to a less usable framework.
I also have my issues understanding the Perl framework, but I think it should be one
framework that is platform independent. I would say that leaves us with Perl, Python or
something like that as base language.
If we switch the framework, we need to consider that, despite all the gaps we have, we already have
a large number of tests in the current framework that would need to be ported over time.

Regards

Rüdiger

Re: A proposal...

Posted by Eric Covener <co...@gmail.com>.
On Tue, Apr 24, 2018 at 8:50 AM, Jim Jagielski <ji...@jagunet.com> wrote:
> One idea is that we set up, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and then reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process, since those scripts themselves would need to be universal enough that all testers can run them.
>
> I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.

I fear this would lead to M frameworks and N languages which makes it
harder for maintainers (prereqs, languages, etc) and fragments
whatever potential there is for improvements to the harness/tools.

Re: A proposal...

Posted by Jim Jagielski <ji...@jaguNET.com>.
One idea is that we set up, using the existing perl test framework, a sort of "catch all" test, where the framework simply runs all scripts from a subdir via system() (or whatever), and then reports success or failure. Those scripts could be written in anything. This would mean that people could add tests w/o knowing any Perl at all. It would require, however, some sort of process, since those scripts themselves would need to be universal enough that all testers can run them.

I may give that a whirl... I have some nodejs scripts that test websockets and I may see how/if I can "wrap" them within the test framework.
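
To make the idea concrete, here is a minimal standalone sketch of such a wrapper — the directory layout and the exit-code conventions (0 = pass, 77 = skip, anything else = fail) are assumptions, not settled policy:

```python
#!/usr/bin/env python3
"""Run every executable script in a directory and classify each one by
its exit status alone, so tests can be written in any language."""
import os
import subprocess

SKIP_RC = 77  # borrowed from the Automake "skipped test" convention


def run_dir(testdir):
    """Execute each executable file in testdir; return {name: verdict}."""
    results = {}
    for name in sorted(os.listdir(testdir)):
        path = os.path.join(testdir, name)
        if not (os.path.isfile(path) and os.access(path, os.X_OK)):
            continue  # skip subdirs, data files, anything not runnable
        rc = subprocess.call([path])
        if rc == 0:
            results[name] = "pass"
        elif rc == SKIP_RC:
            results[name] = "skip"
        else:
            results[name] = "fail"
    return results
```

The Perl framework (or anything else) would only need to invoke this once and fail the enclosing test if any verdict is "fail".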

Re: A proposal...

Posted by Marion et Christophe JAILLET <ch...@wanadoo.fr>.
Le 23/04/2018 à 23:09, Mark Blackman a écrit :
> 
> 
>> On 23 Apr 2018, at 19:17, Christophe Jaillet <ch...@wanadoo.fr> wrote:
>>
>> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>>> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
>>> We have a test framework. The questions are:
>>>   1. Are we using it?
>>>   2. Are we using it sufficiently well?
>>>   3. If not, what can we do to improve that?
>>>   4. Can we supplement/replace it w/ other frameworks?
>>> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
>>> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
>>
>> Hi,
>> +1000 on my side for more tests.
>>
>> But, IMHO, the perl framework is complex to understand for most of us.
> 
> Do you believe the Perl element is contributing to the complexity? I’d say Perl is perfect for this case in general, although I would have to look at it first to confirm.

For my personal case, Yes, I consider that the Perl syntax itself is 
complex and/or tricky. That is certainly because I've never worked that 
much with it.
I think that this can limit the number of people who can increase our 
test coverage.

> 
> I certainly believe adequate testing is a bigger and more important problem to solve than versioning policies, although some versioning policies might make it simpler to allow enough time for decent testing to happen. I personally have a stronger incentive to help with testing, than I do with versioning policies.
> 
> - Mark
> 


Re: A proposal...

Posted by Mark Blackman <ma...@exonetric.com>.

> On 23 Apr 2018, at 19:17, Christophe Jaillet <ch...@wanadoo.fr> wrote:
> 
> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
>> We have a test framework. The questions are:
>>  1. Are we using it?
>>  2. Are we using it sufficiently well?
>>  3. If not, what can we do to improve that?
>>  4. Can we supplement/replace it w/ other frameworks?
>> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
>> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
> 
> Hi,
> +1000 on my side for more tests.
> 
> But, IMHO, the perl framework is complex to understand for most of us.

Do you believe the Perl element is contributing to the complexity? I’d say Perl is perfect for this case in general, although I would have to look at it first to confirm.

I certainly believe adequate testing is a bigger and more important problem to solve than versioning policies, although some versioning policies might make it simpler to allow enough time for decent testing to happen. I personally have a stronger incentive to help with testing, than I do with versioning policies.

- Mark

Re: A proposal...

Posted by Christophe Jaillet <ch...@wanadoo.fr>.
Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> 
> We have a test framework. The questions are:
> 
>   1. Are we using it?
>   2. Are we using it sufficiently well?
>   3. If not, what can we do to improve that?
>   4. Can we supplement/replace it w/ other frameworks?
> 
> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.
> 
> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.
> 

Hi,
+1000 on my side for more tests.

But, IMHO, the perl framework is complex to understand for most of us.

Last week I tried to play with it. I tried to update proxy_balancer.t 
because only lbmethod=byrequests is tested.
The current test itself is really simple. It just checks that the module 
didn't crash (i.e. we receive a 200).
I tried to extend it for the other lbmethods available. This looked like 
an easy task. But figuring out the relation between:
    <VirtualHost proxy_http_bal1>
and
    BalancerMember http://@SERVERNAME@:@PROXY_HTTP_BAL1_PORT@
still remains a mystery to me.


The ./test framework could be useful as well.
At least it is written in C, so the entry ticket should be cheaper for 
most of us.
But not everything can be done with it, I guess.
Maybe, we should at least have some unit testing for each ap_ function? 
The behavior of these functions should not change, as they can be used 
by 3rd-party modules.


The more tests, the better, but I believe that most regressions come 
from interactions between all that is possible with httpd.
A test-suite is only a test-suite. Not everything can be tested.


IMHO, as a minimum, all CVEs should have a dedicated test which 
explicitly fails with version n, and succeeds with version n+1.
It would help to make sure that known security issues don't come back.



Another question with the perl framework.
Is there a way to send "invalid" data/requests with it?
All I see is some GET(...). I guess that it sends well-formed data. 
Checking the behavior when invalid queries are received would be great.
Some kind of RFC compliance check.
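
One way to do that outside the framework is to open a plain TCP socket and write arbitrary bytes, so nothing can normalize the request on the way out. A standalone sketch (the malformed payload is only an example; a real check would enumerate the grammar violations of interest):

```python
"""Send raw, possibly malformed bytes to a server and capture the reply.
Standalone sketch, independent of the Perl test framework."""
import socket


def send_raw(host, port, payload, timeout=5.0):
    """Write payload verbatim and return whatever the server sends back."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)  # signal EOF so the server can respond
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks)


# Deliberately broken: bogus HTTP version and a header with no colon.
MALFORMED = b"GET / HTTP/9.9\r\nHost 127.0.0.1\r\n\r\n"
```

A compliance-style test would then assert that the server answers with a 400 (and, importantly, stays up) rather than doing anything surprising.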


just my 2c,
CJ

Re: A proposal...

Posted by Micha Lenk <mi...@lenk.info>.
Just a side note: some days ago I realized that the source package 
of the apache2 package in Debian seems to include the test suite for the 
purpose of running it as part of the continuous integration test 
'run-test-suite': https://ci.debian.net/packages/a/apache2/

In my recently provided bugfix (#62186) I included a change of the test 
suite, but so far it looks like it isn't integrated yet (do I really 
need to file a separate bugzilla in the other project for that?).

 From my experience with doing so, I agree with others that in the long 
run maintaining some Perl-based test framework will probably make 
contributions pretty unpopular, especially for contributors who haven't 
worked with Perl before.

For the addition of new regression tests (as others suggested) it would 
be pretty cool if they could be added in a somewhat more popular 
(scripting) language (Python and pytest were already mentioned). Yet the 
number of test frameworks to execute should stay at a manageably low number.

That being said, I am all for extending the use of any test framework.

Regards,
Micha