Posted to dev@httpd.apache.org by Joshua Slive <jo...@slive.ca> on 2002/10/26 21:37:35 UTC

new download page

http://httpd.apache.org/download.html

I believe this is better than the current circumstances because it is 
clearer for the users, and it better enables us to direct people to the 
mirrors for the download and the main site for the signatures.

I've never particularly liked the autoindex/README/HEADER thing.  It is 
confusing trying to figure out exactly what to download, and there are 
notes scattered all over the place.  And it works very poorly on the 
mirrors, since many of them are not configured in exactly the same way 
as www.apache.org.

I will be changing the links on httpd.apache.org to point to this page 
instead of directly at http://www.apache.org/dist/httpd/.  I'll wait a 
day for corrections and comments.

I believe this should go a long way towards reducing daedalus bandwidth 
usage for httpd downloads.  It will, of course, do nothing for 
jakarta/xml/etc.

Joshua.


Re: new download page

Posted by Erik Abele <er...@codefaktor.de>.
Joshua Slive wrote:
> Erik Abele wrote:
> 
>>
>> +1. great idea, but I think the mirror sites should be mentioned more
>> than only once. Perhaps an extra paragraph like the following would help:
> 
> 
> If you look at the actual links, you'll see I'm pretty much forcing 
> people to download from the mirrors.  I provide direct links only to the 
> mirrors.  I do provide a link to the main site at the top, but I think 
> most people will take the direct links.

Oh sorry, I didn't realize that. I only saw the pointers in the text. 
Well, then forget the extra paragraph, all this should be sufficient ;-)

> 
>> Another point concerns the official patches. IMO we should mention
>> http://www.apache.org/dist/httpd/patches/ together with a little note:
>> "When we have patches to a minor bug or two, or features which we
>> haven't yet included in a new release, we will put them in the patches
>> subdirectory so people can get access to it before we roll another
>> complete release."[1]
> 
> Perhaps a short note in the top section.  But there is rarely anything 
> interesting in the patches directory.
> 

That is what I thought of... Isn't the above note short enough?

>> oh, btw the 2.0 paragraph praises 2.0.36 as the best available version
>> instead of 2.0.43. The headline, the text and the check-for-patches-link
>> are wrong.
> 
> 
> 
> Yep, thanks, I copied my text from a non-updated version of the 
> dist/README.html.  I'll fix that.
> 
> Joshua.
> 

erik





Re: new download page

Posted by Bojan Smojver <bo...@rexursive.com>.
On Sun, 2002-10-27 at 09:56, Pier Fumagalli wrote:
> Erik Abele wrote:
> > 
> > +1. great idea, but I think the mirror sites should be mentioned more
> > than only once.
> 
> Agreed, it's one of those things I hate most about SourceForge... I _always_
> screw up, copy the link from my browser to my terminal as the "wget" command
> line parameter, and end up with a few-kb-long HTML file...

Welcome to the club :-)

Bojan


Re: new download page

Posted by Johannes Erdfelt <jo...@erdfelt.com>.
On Sun, Oct 27, 2002, Thom May <th...@planetarytramp.net> wrote:
> * Joshua Slive (joshua@slive.ca) wrote :
> > Pier Fumagalli wrote:
> > 
> > >On 27/10/02 0:54, "David Burry"  wrote:
> > >
> > >
> > 
> > 
> > Right.  If we had very reliable mirrors and a good technique for keeping 
> > them that way, I'd be fine with doing an automatic redirect or fancy DNS 
> > tricks.  But we don't have that at the moment.
> > 
> > >I looked into it back in the days, but the only way would be to go down to
> > >RIPE (IANA in the US) to see where that IP is coming from, doing some 
> > >weirdo
> > >WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this is going waaay
> > >offtopic! :-)
> > 
> > See: http://maxmind.com/geoip/
> 
> Or just ask BGP... http://www.supersparrow.org/

Network routes don't necessarily tell you which server is "best". Bandwidth
varies greatly between routes, and so does server load.

Plus, supersparrow is mostly a proof of concept. Dents (the underlying
DNS server that I mostly wrote) is a long way off from being
production ready, and the method supersparrow uses doesn't scale well
(telnet to a Cisco router).

Anyway, it's next to impossible to make a perfect decision about the
"best" server to use. IMHO, if you make the decision for the user (by
only returning certain servers via DNS, etc) then it should be close to
a perfect choice.

Otherwise, you may just want to list the mirrors and their locations
and let the user choose.

JE


Re: new download page

Posted by Thom May <th...@planetarytramp.net>.
* Joshua Slive (joshua@slive.ca) wrote :
> Pier Fumagalli wrote:
> 
> >On 27/10/02 0:54, "David Burry"  wrote:
> >
> >
> 
> 
> Right.  If we had very reliable mirrors and a good technique for keeping 
> them that way, I'd be fine with doing an automatic redirect or fancy DNS 
> tricks.  But we don't have that at the moment.
> 
> >I looked into it back in the days, but the only way would be to go down to
> >RIPE (IANA in the US) to see where that IP is coming from, doing some 
> >weirdo
> >WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this is going waaay
> >offtopic! :-)
> 
> See: http://maxmind.com/geoip/

Or just ask BGP... http://www.supersparrow.org/
-Thom

Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
David Burry wrote:

> Excellent little utility... however closer network-wise is often
> significantly different than closer geographically,


Well, yeah, but that's what Akamai and the like get the big bucks for.

Sorry, we are really off-topic for dev@httpd.  It would be slightly 
more on-topic on infrastructure@apache.org.

Joshua.



Re: new download page

Posted by David Burry <db...@tagnet.org>.
Excellent little utility... however closer network-wise is often
significantly different than closer geographically, for instance California
is likely a lot closer to Peru than Chile is (as an extreme example), if you
go by how the packets fly instead of how the crow flies...   Also, when a closer
server is overloaded, you will get a download more quickly from a more distant
server (regardless of how you define "closer").  So a good balancing
algorithm really shouldn't care about geographic distance, but about traceroute
hops, ping times, and server loads...

Dave

----- Original Message -----
From: "Joshua Slive" <jo...@slive.ca>
> See: http://maxmind.com/geoip/
>
> If someone wants a little project, it shouldn't be too hard to integrate
> this into the existing closer.cgi script.
>
> Joshua.
>
>


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Jeroen Massar wrote:

>
>
> An easy fix for the "SourceForge syndrome" would be:


Interesting idea.  I'm leaning towards just doing something like <a 
href="...">Mirrors of httpd-...</a>, which would be ugly, but clear.

Joshua.


RE: new download page

Posted by Jeroen Massar <je...@unfix.org>.
Joshua Slive [mailto:joshua@slive.ca] wrote:
> Pier Fumagalli wrote:
> 
> > I looked into it back in the days, but the only way would 
> > be to go down to
> > RIPE (IANA in the US) to see where that IP is coming from, 
> > doing some 
> > weirdo
> > WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this 
> > is going waaay
> > offtopic! :-)
> 
> See: http://maxmind.com/geoip/
> 
> If someone wants a little project, it shouldn't be too hard 
> to integrate this into the existing closer.cgi script.

Being geographically close says nothing about the connection.
If you really want to autodirect users to a certain mirror you
could do this based on AS number. This is for example done by
the folks at scene.org. Contact redhound@scene.org if you want
to know more about this solution.

An easy fix for the "SourceForge syndrome" <grin> would be:

<a href="http://www.apache.org/dyn/closer.cgi/httpd/httpd-2.0.43.tar.gz&desc=this-points-to-a-html-with-mirrors-do-not-wget-this-url">httpd-2.0.43.tar.gz</a>

Et tada.... people wanting to use wget, of which I am one, would notice
that it is not a direct link to the file, and you won't need any changes to
closer.cgi, as it should ignore the second param (desc).

Now you can put back that cool download page, which was much better than
both the current and the old one.

At least that's my opinion.

Greets,
 Jeroen



Re: new download page

Posted by Ask Bjoern Hansen <as...@develooper.com>.
On Sat, 26 Oct 2002, Joshua Slive wrote:

> > WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this is going waaay
> > offtopic! :-)
>
> See: http://maxmind.com/geoip/
>
> If someone wants a little project, it shouldn't be too hard to integrate
> this into the existing closer.cgi script.

FWIW, that's what my dynamic DNS thing I just mentioned is using.
It translates the mirrors.dist file into a configuration file which
is then used by the DNS servers.


 - ask

-- 
ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Pier Fumagalli wrote:

> On 27/10/02 0:54, "David Burry"  wrote:
>
>
> > I agree that a link on a "tar.gz" (etc) filename is a lot more intuitive if
> > it serves an actual tar.gz file... What about a script that randomly
> > redirects to an actual mirrored file?  I realize it may be necessary to
> > monitor all mirrors to automatically take them in and out of the loop when
> > they're up and down, but still....  I also wish there was a way to
> > automatically detect which one is actually closer and fastest network-wise
> > too... hmm..  This is probably getting to be too complex of a suggestion for
> > anyone to do with volunteer time and resources but still just an idea... ;o)


Right.  If we had very reliable mirrors and a good technique for keeping 
them that way, I'd be fine with doing an automatic redirect or fancy DNS 
tricks.  But we don't have that at the moment.

> I looked into it back in the days, but the only way would be to go down to
> RIPE (IANA in the US) to see where that IP is coming from, doing some 
> weirdo
> WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this is going waaay
> offtopic! :-)

See: http://maxmind.com/geoip/

If someone wants a little project, it shouldn't be too hard to integrate 
this into the existing closer.cgi script.

Joshua.
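
A rough sketch of that "little project", for anyone curious.  Everything
below is made up for illustration: the mirror table, the stub
country_code_for_ip() helper (a real version would query MaxMind's GeoIP
database), and the assumption that the requested file arrives in PATH_INFO
the way a closer.cgi-style URL suggests.

#!/usr/bin/env python
# Sketch only: redirect a closer.cgi-style request to a mirror chosen by
# the client's country.  The mirror table and the country lookup stub are
# placeholders, not the real closer.cgi data.
import os
import random

MIRRORS = {
    "de": ["http://apache.mirror.example.de/dist/"],
    "us": ["http://apache.mirror.example.com/dist/"],
}
MAIN_SITE = "http://www.apache.org/dist/"

def country_code_for_ip(ip):
    # A real version would ask the GeoIP database here.
    return None

def choose_mirror(ip):
    candidates = MIRRORS.get(country_code_for_ip(ip), [])
    if candidates:
        return random.choice(candidates)
    return MAIN_SITE

if __name__ == "__main__":
    path = os.environ.get("PATH_INFO", "").lstrip("/")
    base = choose_mirror(os.environ.get("REMOTE_ADDR", ""))
    # A CGI script can hand the redirect back to the server as a header.
    print("Location: %s%s" % (base, path))
    print("")

Falling back to the main site when no country match is found is a guess at
sensible behaviour here, not a description of what closer.cgi actually does.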



Re: new download page

Posted by Pier Fumagalli <pi...@betaversion.org>.
On 27/10/02 0:54, "David Burry" <db...@tagnet.org> wrote:

> I agree that a link on a "tar.gz" (etc) filename is a lot more intuitive if
> it serves an actual tar.gz file... What about a script that randomly
> redirects to an actual mirrored file?  I realize it may be necessary to
> monitor all mirrors to automatically take them in and out of the loop when
> they're up and down, but still....  I also wish there was a way to
> automatically detect which one is actually closer and fastest network-wise
> too... hmm..  This is probably getting to be too complex of a suggestion for
> anyone to do with volunteer time and resources but still just an idea... ;o)

I looked into it back in the days, but the only way would be to go down to
RIPE (IANA in the US) to see where that IP is coming from, doing some weirdo
WHOIS parsing and stuff... _WAY_ overkilling... Anyhow this is going waaay
offtopic! :-)

    Pier (having nothing better to do than work and reply-to email tonight)


Re: new download page

Posted by Pier Fumagalli <pi...@betaversion.org>.
On 27/10/02 19:26, "Ask Bjoern Hansen" <as...@develooper.com> wrote:
> On Sat, 26 Oct 2002, David Burry wrote:
> 
> ftp://ftp.apache.ddns.develooper.com/pub/apache/dist/ should find an
> Apache mirror not on the other side of the world.

We want downloads working with HTTP... Anyhow, how do you do that? Can we
move the logic onto Apache.ORG so that something like "mirror.apache.org" will
point at your closest mirror? (You should improve it, though: I end up in
Hungary, 19 hops away, but have some mirrors at 5/6)...

    Pier


Re: new download page

Posted by Ask Bjoern Hansen <as...@develooper.com>.
On Sat, 26 Oct 2002, David Burry wrote:

[...]
> too... hmm..  This is probably getting to be too complex of a suggestion for
> anyone to do with volunteer time and resources but still just an idea... ;o)

ftp'ing to ftp://ftp.perl.org/pub/CPAN/ generally sends you to a
nearby CPAN mirror.

ftp://ftp.apache.ddns.develooper.com/pub/apache/dist/ should find an
Apache mirror not on the other side of the world.


 - ask

-- 
ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();


Re: new download page

Posted by David Burry <db...@tagnet.org>.
I agree that a link on a "tar.gz" (etc) filename is a lot more intuitive if
it serves an actual tar.gz file... What about a script that randomly
redirects to an actual mirrored file?  I realize it may be necessary to
monitor all mirrors to automatically take them in and out of the loop when
they're up and down, but still....  I also wish there was a way to
automatically detect which one is actually closer and fastest network-wise
too... hmm..  This is probably getting to be too complex of a suggestion for
anyone to do with volunteer time and resources but still just an idea... ;o)

Dave

----- Original Message -----
From: "Pier Fumagalli" <pi...@betaversion.org>

> Ok, as long as it's clear! :-) I'm very dumb, but I know other people
> smarter than me who also have the same problem with SourceForge... You
> simply "forget"! :-)
>
>     Pier
>


Re: new download page

Posted by David Burry <db...@tagnet.org>.
Awesome script...  I hadn't thought of doing it this way; this is better
than what I was thinking of.  It seems to address everyone's concerns, too,
in the best way that's still within our resources.

Dave

----- Original Message -----
From: "Justin Erenkrantz" <je...@apache.org>
To: <de...@httpd.apache.org>
Sent: Sunday, October 27, 2002 1:50 PM
Subject: Re: new download page


> --On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz
> <je...@apache.org> wrote:
>
> > I'm trying to write it up now.  I'm also cleaning up closer.cgi
> > while I'm at it.  -- justin
>
> Well, that took *way* longer than I wanted to.  Anyway, a rough
> sketch of what I'm thinking of is here:
>
> http://www.apache.org/dyn/mirrors/httpd.cgi
>
> And, to prove that this new system isn't any worse than the old one:
>
> http://www.apache.org/dyn/mirrors/list.cgi
>
> This is a python-based CGI script that uses Greg Stein's EZT library
> (much kudos to Greg for this awesome tool).  It allows for the
> separation of the layout from the mirroring data.  Therefore, it
> makes it really easy to do the above with only one CGI script
> (httpd.cgi and list.cgi are symlinked to the same file) that has
> multiple 'views' and templates.
>
> We would probably have to work a bit on the layout and flesh it out
> some, but this is the idea that I had.
>
> Source at:
>
> http://www.apache.org/~jerenkrantz/mirrors.tar.gz
>
> If I could run CGI scripts from my home dir, I wouldn't have stuck
> this in www.apache.org's docroot, but CGI scripts are not allowed
> from user directories.  ISTR mentioning this before and getting no
> response from Greg or Jeff.  -- justin
>


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Justin Erenkrantz wrote:

> --On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz
>  wrote:
>
> > I'm trying to write it up now.  I'm also cleaning up closer.cgi
> > while I'm at it.  -- justin
>
>
> Well, that took *way* longer than I wanted to.  Anyway, a rough sketch
> of what I'm thinking of is here:
>
> http://www.apache.org/dyn/mirrors/httpd.cgi


Looks good.  +1.

Joshua.



Re: new download page

Posted by Justin Erenkrantz <je...@apache.org>.
--On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz 
<je...@apache.org> wrote:

> I'm trying to write it up now.  I'm also cleaning up closer.cgi
> while I'm at it.  -- justin

Well, that took *way* longer than I wanted to.  Anyway, a rough 
sketch of what I'm thinking of is here:

http://www.apache.org/dyn/mirrors/httpd.cgi

And, to prove that this new system isn't any worse than the old one:

http://www.apache.org/dyn/mirrors/list.cgi

This is a python-based CGI script that uses Greg Stein's EZT library 
(much kudos to Greg for this awesome tool).  It allows for the 
separation of the layout from the mirroring data.  Therefore, it 
makes it really easy to do the above with only one CGI script 
(httpd.cgi and list.cgi are symlinked to the same file) that has 
multiple 'views' and templates.

We would probably have to work a bit on the layout and flesh it out 
some, but this is the idea that I had.

Source at:

http://www.apache.org/~jerenkrantz/mirrors.tar.gz

If I could run CGI scripts from my home dir, I wouldn't have stuck 
this in www.apache.org's docroot, but CGI scripts are not allowed 
from user directories.  ISTR mentioning this before and getting no 
response from Greg or Jeff.  -- justin
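
The "one script, several views" mechanism described above can be sketched
roughly as follows.  The template directory, the mirror data and the
parse_mirrors() stub are invented for this sketch; only the general shape
(symlinked names selecting an EZT template) follows the description, not the
actual code in mirrors.tar.gz.

#!/usr/bin/env python
# Sketch: httpd.cgi and list.cgi are symlinks to this file, and the name
# it was invoked under selects which EZT template gets rendered with the
# shared mirror data.  Paths and data below are placeholders.
import os
import sys
import ezt                      # Greg Stein's EZT templating library

TEMPLATE_DIR = "/path/to/templates"

class TemplateData:
    pass

def parse_mirrors(path):
    # Stub: a real script would read the maintained mirror list here.
    return [{"url": "http://www.apache.org/dist/", "country": "us"}]

def main():
    view = os.path.basename(sys.argv[0])           # e.g. "httpd.cgi"
    if view.endswith(".cgi"):
        view = view[:-4]
    data = TemplateData()
    data.mirrors = parse_mirrors("mirrors.txt")
    sys.stdout.write("Content-Type: text/html\r\n\r\n")
    template = os.path.join(TEMPLATE_DIR, view + ".ezt")
    ezt.Template(template).generate(sys.stdout, data)

if __name__ == "__main__":
    main()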

RE: new download page

Posted by James Cox <im...@php.net>.
>
> --On Sunday, October 27, 2002 12:33 PM -0500 Joshua Slive
> <jo...@slive.ca> wrote:
>
> > Sure, you can do that.  But in that case, you really do need to
> > make absolutely sure that every mirror works every time.  What I
> > have implemented allows the user to gracefully fall back to a
> > working mirror.
>
> No, because there would be a selection box that allows the selection
> of which mirror to use.  So, it would still allow for graceful
> fallback in the event that the 'default' mirror is down.
>
> I'm trying to write it up now.  I'm also cleaning up closer.cgi while
> I'm at it.  -- justin
>
FWIW, and if you don't mind using PHP, take a look at

http://cvs.php.net/cvs.php/php-master-web/scripts/mirror-test?login=2

(I suggest the make version; it runs faster, but needs the latest CVS of wget)

and

http://cvs.php.net/cvs.php/php-master-web/scripts/mirror-summary?login=2

Convert this code to look at a separated-values file, if desired, or use the
database. Either way, you can easily adapt this to maintain a dynamic list
of mirrors, or at least provide status updates on mirrors.

 -- james
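
The mirror-testing idea translates into Python as roughly the sketch below:
fetch something every mirror should carry and report which mirrors answer.
The mirror list and the probe path are placeholders, and the PHP scripts
linked above remain the real reference.

#!/usr/bin/env python
# Sketch of a mirror status check.  URLs and the probe path are invented;
# point them at the real mirror list and a file every mirror must serve.
import urllib.request

MIRRORS = [
    "http://www.apache.org/dist/",
    "http://apache.mirror.example.org/dist/",
]
PROBE = "httpd/"        # something every mirror is expected to serve

def alive(base):
    try:
        with urllib.request.urlopen(base + PROBE, timeout=15) as resp:
            return resp.getcode() == 200
    except Exception:
        return False

if __name__ == "__main__":
    for base in MIRRORS:
        print("%-45s %s" % (base, "OK" if alive(base) else "BROKEN"))

Run from cron, a report like this covers the "status updates on mirrors"
part; automatically pulling broken mirrors out of the list is the harder
half.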


Re: new download page

Posted by Justin Erenkrantz <je...@apache.org>.
--On Sunday, October 27, 2002 12:33 PM -0500 Joshua Slive 
<jo...@slive.ca> wrote:

> Sure, you can do that.  But in that case, you really do need to
> make absolutely sure that every mirror works every time.  What I
> have implemented allows the user to gracefully fall back to a
> working mirror.

No, because there would be a selection box that allows the selection 
of which mirror to use.  So, it would still allow for graceful 
fallback in the event that the 'default' mirror is down.

I'm trying to write it up now.  I'm also cleaning up closer.cgi while 
I'm at it.  -- justin

Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Justin Erenkrantz wrote:

>
> No, it isn't.  We'd select a random default mirror.  (The key is the
> closer.cgi functionality would be incorporated into download.html.)


Sure, you can do that.  But in that case, you really do need to make 
absolutely sure that every mirror works every time.  What I have 
implemented allows the user to gracefully fall back to a working mirror.

So go for it.  All you need to do is implement a monitoring system for 
mirrors, and then your proposed shtml page.  I agree it would be 
superior to what I have done.

Until you do that, my system is better than what we have been using.

Joshua.


Re: new download page

Posted by Justin Erenkrantz <je...@apache.org>.
--On Sunday, October 27, 2002 11:46 AM -0500 Joshua Slive 
<jo...@slive.ca> wrote:

> This seems to be exactly the same number of steps to me.  In the
> current page you select the file and then the mirror.  With your
> idea, you select the mirror and then the file.  I don't have any
> problem with your suggestion, other than the fact that it isn't
> implemented.

No, it isn't.  We'd select a random default mirror.  (The key is the 
closer.cgi functionality would be incorporated into download.html.)

> 1. Most of the mirrors are fine.  That particular one is entered in
> our mirror list incorrectly.

And, every time someone breaks the mirrors.list file, we're going to 
break downloads.  A fair number of commits to mirrors.list are bogus 
and break the file.  If we want to switch httpd downloads to relying 
on mirrors, then we have to be careful about the integrity of that 
file.  (Something we have refused to enforce in the past because we 
don't want to hurt people's feelings.)

> 2. Every page lists two guaranteed working sites at the bottom:
> nagoya and daedelus.  I'm thinking of also adding ibiblio to that.

ibiblio is not an affiliated site, but a large and respected mirror. 
Yet, I believe the guaranteed working sites should be only those 
under ASF (or ASF member) control.  There is no accountability for 
problems with ibiblio.  Therefore, I would be hesitant to say that it 
is a guaranteed working site.

> 3. If you find a problem with a mirror listing, why don't you fix
> it rather than complaining about it?

Because I just noticed it, and it wasn't obvious what the failure 
condition was (from a 404, how am I supposed to know that it was 
entered incorrectly?).

> 4. Even deadelus is not a guarenteed working site at the moment.

But, it is the 'master' site (as well as the rsync master).

> 5. Nobody is forced to do anything.  Clear links are still provided
> to http://www.apache.org/dist/httpd/.

I don't believe that the closer.cgi file makes it clear enough where 
to download from in the event of an error.  Hmm, I'll add a 
disclaimer to the top of the page.  -- justin
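
On the mirrors.list integrity point above, a commit-time sanity check is one
way to keep bogus entries from breaking downloads.  The sketch below assumes,
purely for illustration, a whitespace-separated "country-code protocol url"
line format; the real file may well look different.

#!/usr/bin/env python
# Sketch of a sanity check for a mirror list.  The assumed line format is
# invented; adapt the regex to whatever mirrors.list really contains.
import re
import sys

LINE = re.compile(r"^[a-z]{2}\s+(http|ftp)\s+\S+://\S+/$")

def bogus_lines(path):
    bad = []
    for num, line in enumerate(open(path), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue                    # skip blanks and comments
        if not LINE.match(line):
            bad.append((num, line))
    return bad

if __name__ == "__main__":
    problems = bogus_lines(sys.argv[1])
    for num, line in problems:
        print("line %d looks bogus: %s" % (num, line))
    sys.exit(1 if problems else 0)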

Re: new download page

Posted by gr...@apache.org.
Joshua Slive wrote:

> 4. Even deadelus is not a guarenteed working site at the moment.
          ^^^^

I hope that was a Freudian slip.  It seems pretty alive this morning.

Greg

Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Justin Erenkrantz wrote:

> You are missing my point: you are creating an extra step that is not
> needed.  There are plenty of solutions to this problem that do not
> require this level of indirection.
>
> For example, you could incorporate the CGI script logic into a shtml
> file that has a choice list representing each mirror (and method). The
> links on our download page would be recomputed as you select the
> mirror.  I still prefer a round-robin DNS as that doesn't require any
> CGI scripting.

This seems to be exactly the same number of steps to me.  In the current 
page you select the file and then the mirror.  With your idea, you 
select the mirror and then the file.  I don't have any problem with your 
suggestion, other than the fact that it isn't implemented.

> > 2. It is extremely simple to configure and maintain.
>
>
> No, it's not.  Currently, we have bogus mirrors.  For example, I see
> apache.towardex.com listed as a mirror for me.  When I click on the
> link, it gives me a 404.  That is unacceptable.
>
> If you want to force users to do this scheme, then you have to ensure
> that we don't list broken mirrors.


Bullsh**.

1. Most of the mirrors are fine.  That particular one is entered in our 
mirror list incorrectly.

2. Every page lists two guaranteed working sites at the bottom: nagoya 
and daedalus.  I'm thinking of also adding ibiblio to that.

3. If you find a problem with a mirror listing, why don't you fix it 
rather than complaining about it?

4. Even deadelus is not a guarenteed working site at the moment.

5. Nobody is forced to do anything.  Clear links are still provided to 
http://www.apache.org/dist/httpd/.

> > 3. It can be put into place NOW.
>
>
> No, I don't think we can deploy this because we have so many busted
> mirrors.
>
> I'd rather we do the right solution than a broken solution.  This is
> a broken solution that will result in too much confusion for our users.
> Please do not switch to this.  -- justin


So would you prefer a state where some users might need to try two or 
three links to get an actual download, or a state where daedalus is 
completely unresponsive?  If past patterns hold, there is a good 
chance we could have serious capacity problems on both daedalus and 
nagoya tomorrow morning.

If you have a better solution, then do something about it.  We have been 
talking about this for months, but nobody has stepped forward to 
actually do it.  I implemented the solution that was within my technical 
and time limits.  It is a working solution, and I believe it is superior 
both from a user point of view and from a resource management point of view.

Joshua.





Re: new download page

Posted by Justin Erenkrantz <je...@apache.org>.
--On Saturday, October 26, 2002 9:33 PM -0400 Joshua Slive 
<jo...@slive.ca> wrote:

> I like this system better because:
>
> 1. It is perfectly transparent to the users.  They know exactly
> where they are downloading from and are given options for
> alternative locations.

You are missing my point: you are creating an extra step that is not 
needed.  There are plenty of solutions to this problem that do not 
require this level of indirection.

For example, you could incorporate the CGI script logic into a shtml 
file that has a choice list representing each mirror (and method). 
The links on our download page would be recomputed as you select the 
mirror.  I still prefer a round-robin DNS as that doesn't require any 
CGI scripting.

> 2. It is extremely simple to configure and maintain.

No, it's not.  Currently, we have bogus mirrors.  For example, I see 
apache.towardex.com listed as a mirror for me.  When I click on the 
link, it gives me a 404.  That is unacceptable.

If you want to force users to do this scheme, then you have to ensure 
that we don't list broken mirrors.

> 3. It can be put into place NOW.

No, I don't think we can deploy this because we have so many busted 
mirrors.

I'd rather we do the right solution than a broken solution.  This 
is a broken solution that will result in too much confusion for our 
users.  Please do not switch to this.  -- justin
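
The shtml-with-a-choice-list idea can also be approximated server-side; the
sketch below is only meant to show the shape described above (a random
default mirror, a select box to override it, download links rendered against
whichever mirror is current).  The mirror and file lists are placeholders,
and the actual proposal would do the recomputation in the page itself rather
than in a CGI script.

#!/usr/bin/env python
# Sketch: render download links against one mirror.  A random mirror is
# the default; submitting the form re-renders with the chosen one.
import cgi
import random

MIRRORS = ["http://apache.mirror.example.org/dist/",
           "http://www.apache.org/dist/"]
FILES = ["httpd/httpd-2.0.43.tar.gz"]

def page(mirror):
    options = []
    for m in MIRRORS:
        sel = " selected" if m == mirror else ""
        options.append('<option%s>%s</option>' % (sel, m))
    links = ['<li><a href="%s%s">%s</a></li>' % (mirror, f, f) for f in FILES]
    return ('<form><select name="mirror">' + "".join(options) + '</select> '
            '<input type="submit" value="Change mirror"></form>\n'
            '<ul>' + "".join(links) + '</ul>')

if __name__ == "__main__":
    form = cgi.FieldStorage()
    mirror = form.getfirst("mirror") or random.choice(MIRRORS)
    print("Content-Type: text/html")
    print("")
    print(page(mirror))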

Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Justin Erenkrantz wrote:

> Well, I agree with Pier.  I'm an idiot, too.  I absolutely can't stand
> SourceForge's mirroring system (which is essentially what that page is
> moving us to).  It tells me that I'm downloading a file, but when I try
> to download it by hitting the link, I get an HTML file that shows me
> mirrors where I can download it.  Eh, no.
>
> To be blunt, any link from that download page must go directly to a
> tarball not to a page that lists mirrors.  I've offered ASF-wide
> suggestions to the mirroring problem.  I still think the best strategy
> is to do round-robin DNS of dists.apache.org (and indicate that those
> servers aren't necessarily trusted).  -- justin


I like this system better because:

1. It is perfectly transparent to the users.  They know exactly where 
they are downloading from and are given options for alternative locations.

2. It is extremely simple to configure and maintain.

3. It can be put into place NOW.

I understand the presentation issue, and am willing to accept 
suggestions for improvements.  But I won't hold this up for a 
theoretical solution.  There is an immediate problem that needs to be 
addressed, and a little inconvenience may need to be tolerated.

[Incidentally, this system is still much clearer than SourceForge.  It 
just puts the links one level deeper.  SourceForge hides things behind 
layers of redirects/javascript.]

I'm certainly still willing to discuss the round-robin DNS idea for the 
future, notwithstanding my opinion in point 1 above.  I don't see any 
simple way to provide a single URL and still be transparent and allow 
choices to the user.

Joshua.


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Bill Stoddard wrote:

> I don't have a problem at all with the way downloads have been done. 
> FWIW, I
> agree with Justin here.


Sorry, I'm not clear about what the first sentence means.  The problem 
with how downloads are currently done is that we don't have the 
bandwidth to support it.  (In addition, presenting that huge list of 
files to users is confusing, but that is a different issue.)

Joshua.


RE: new download page

Posted by Sander Striker <st...@apache.org>.
> From: Bill Stoddard [mailto:bill@wstoddard.com]
> Sent: 27 October 2002 03:15

>> --On Sunday, October 27, 2002 12:30 AM +0100 Pier Fumagalli
>> <pi...@betaversion.org> wrote:
>>
>>> Ok, as long as it's clear! :-) I'm very dumb, but I know other
>>> people smarter than me who also have the same problem with
>>> SourceForge... You simply "forget"! :-)
>>
>> Well, I agree with Pier.  I'm an idiot, too.  I absolutely can't
>> stand SourceForge's mirroring system (which is essentially what that
>> page is moving us to).  It tells me that I'm downloading a file, but
>> when I try to download it by hitting the link, I get an HTML file
>> that shows me mirrors where I can download it.  Eh, no.

Glad to see that I'm not the only idiot that was bitten by this before ;)

>> To be blunt, any link from that download page must go directly to a
>> tarball not to a page that lists mirrors.  I've offered ASF-wide
>> suggestions to the mirroring problem.  I still think the best
>> strategy is to do round-robin DNS of dists.apache.org (and indicate
>> that those servers aren't necessarily trusted).  -- justin
>>
> 
> I don't have a problem at all with the way downloads have been done. FWIW, I
> agree with Justin here.

I agree as well.

Sander

RE: new download page

Posted by Bill Stoddard <bi...@wstoddard.com>.
> --On Sunday, October 27, 2002 12:30 AM +0100 Pier Fumagalli
> <pi...@betaversion.org> wrote:
>
> > Ok, as long as it's clear! :-) I'm very dumb, but I know other
> > people smarter than me who also have the same problem with
> > SourceForge... You simply "forget"! :-)
>
> Well, I agree with Pier.  I'm an idiot, too.  I absolutely can't
> stand SourceForge's mirroring system (which is essentially what that
> page is moving us to).  It tells me that I'm downloading a file, but
> when I try to download it by hitting the link, I get an HTML file
> that shows me mirrors where I can download it.  Eh, no.
>
> To be blunt, any link from that download page must go directly to a
> tarball not to a page that lists mirrors.  I've offered ASF-wide
> suggestions to the mirroring problem.  I still think the best
> strategy is to do round-robin DNS of dists.apache.org (and indicate
> that those servers aren't necessarily trusted).  -- justin
>

I don't have a problem at all with the way downloads have been done. FWIW, I
agree with Justin here.

Bill


Re: new download page

Posted by Justin Erenkrantz <je...@apache.org>.
--On Sunday, October 27, 2002 12:30 AM +0100 Pier Fumagalli 
<pi...@betaversion.org> wrote:

> Ok, as long as it's clear! :-) I'm very dumb, but I know other
> people smarter than me who also have the same problem with
> SourceForge... You simply "forget"! :-)

Well, I agree with Pier.  I'm an idiot, too.  I absolutely can't 
stand SourceForge's mirroring system (which is essentially what that 
page is moving us to).  It tells me that I'm downloading a file, but 
when I try to download it by hitting the link, I get an HTML file 
that shows me mirrors where I can download it.  Eh, no.

To be blunt, any link from that download page must go directly to a 
tarball not to a page that lists mirrors.  I've offered ASF-wide 
suggestions to the mirroring problem.  I still think the best 
strategy is to do round-robin DNS of dists.apache.org (and indicate 
that those servers aren't necessarily trusted).  -- justin
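
Round-robin DNS here just means publishing several A records for one name
and letting the name server hand the list out in rotating order, so plain
HTTP clients spread themselves across the mirrors with no CGI involved.
dists.apache.org is only the name proposed above and does not resolve; the
snippet below just shows how a client would see such a setup.

#!/usr/bin/env python
# Sketch: resolve a round-robin name and list the addresses behind it.
# "dists.apache.org" is the proposed name from the mail above, not a
# host that exists today.
import socket

def addresses(host):
    infos = socket.getaddrinfo(host, 80, socket.AF_INET, socket.SOCK_STREAM)
    return [sockaddr[0] for (_, _, _, _, sockaddr) in infos]

if __name__ == "__main__":
    print(addresses("dists.apache.org"))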

Re: new download page

Posted by Pier Fumagalli <pi...@betaversion.org>.
On 27/10/02 0:04, "Joshua Slive" <jo...@slive.ca> wrote:
> Pier Fumagalli wrote:
> 
>> 
>> I'd say that it should be more visible that the link is an HTML rather
>> than
>> a TARball... Something like "Click here to find out where you can download
>> HTTPD-2.0.43.tar.gz from"...
> 
> Ewww... Ugly.  I'm open to suggestions on improving the transparency,
> but I don't like that one.  It would be easy enough if it was only one
> or two files, but it is hard when we need to present it clearly for many
> files.

Ok, as long as it's clear! :-) I'm very dumb, but I know other people
smarter than me who also have the same problem with SourceForge... You
simply "forget"! :-)

    Pier


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Pier Fumagalli wrote:

>
> I'd say that it should be more visible that the link is an HTML rather 
> than
> a TARball... Something like "Click here to find out where you can download
> HTTPD-2.0.43.tar.gz from"...


Ewww... Ugly.  I'm open to suggestions on improving the transparency, 
but I don't like that one.  It would be easy enough if it was only one 
or two files, but it is hard when we need to present it clearly for many 
files.

Joshua.


Re: new download page

Posted by Pier Fumagalli <pi...@betaversion.org>.
Erik Abele wrote:
> 
> +1. great idea, but I think the mirror sites should be mentioned more
> than only once.

Agreed, it's one of those things I hate most about SourceForge... I _always_
screw up, copy the link from my browser to my terminal as the "wget" command
line parameter, and end up with a few-kb-long HTML file...

I'd say that it should be more visible that the link is an HTML rather than
a TARball... Something like "Click here to find out where you can download
HTTPD-2.0.43.tar.gz from"...

I know, I'm dumb! :-)

    Pier


Re: new download page

Posted by Joshua Slive <jo...@slive.ca>.
Erik Abele wrote:

>
> +1. great idea, but I think the mirror sites should be mentioned more
> than only once. Perhaps an extra paragraph like the following would help:

If you look at the actual links, you'll see I'm pretty much forcing 
people to download from the mirrors.  I provide direct links only to the 
mirrors.  I do provide a link to the main site at the top, but I think 
most people will take the direct links.

> Another point concerns the official patches. IMO we should mention
> http://www.apache.org/dist/httpd/patches/ together with a little note:
> "When we have patches to a minor bug or two, or features which we
> haven't yet included in a new release, we will put them in the patches
> subdirectory so people can get access to it before we roll another
> complete release."[1]


Perhaps a short note in the top section.  But there is rarely anything 
interesting in the patches directory.

> oh, btw the 2.0 paragraph praises 2.0.36 as the best available version
> instead of 2.0.43. The headline, the text and the check-for-patches-link
> are wrong.


Yep, thanks, I copied my text from a non-updated version of the 
dist/README.html.  I'll fix that.

Joshua.


Re: new download page

Posted by Erik Abele <er...@codefaktor.de>.
Joshua Slive wrote:
> http://httpd.apache.org/download.html
> 
> I believe this is better than the current circumstances because it is 
> clearer for the users, and it better enables us to direct people to the 
> mirrors for the download and the main site for the signatures.
> 
> I've never particularly liked the autoindex/README/HEADER thing.  It is 
> confusing trying to figure out exactly what to download, and there are 
> notes scattered all over the place.  And it works very poorly on the 
> mirrors, since many of them are not configured in exactly the same way 
> as www.apache.org.
> 
> I will be changing the links on httpd.apache.org to point to this page 
> instead of directly at http://www.apache.org/dist/httpd/.  I'll wait a 
> day for corrections and comments.
> 
> I believe this should go a long way towards reducing daedalus bandwidth 
> usage for httpd downloads.  It will, of course, do nothing for 
> jakarta/xml/etc.
> 
> Joshua.
> 

+1. great idea, but I think the mirror sites should be mentioned more 
than only once. Perhaps an extra paragraph like the following would help:

"Please do not download from www.apache.org. Please use a mirror site to 
help us save apache.org bandwidth. Go here to find your nearest mirror."[1]

Another point concerns the official patches. IMO we should mention 
http://www.apache.org/dist/httpd/patches/ together with a little note: 
"When we have patches to a minor bug or two, or features which we 
haven't yet included in a new release, we will put them in the patches 
subdirectory so people can get access to it before we roll another 
complete release."[1]

oh, btw the 2.0 paragraph praises 2.0.36 as the best available version 
instead of 2.0.43. The headline, the text and the check-for-patches-link 
are wrong.

cheers,
erik

[1] taken from http://www.apache.org/dist/httpd/