Posted to dev@httpd.apache.org by sk...@hss.hns.com on 2001/04/26 07:05:01 UTC

Volunteering for enhancing Apache



We are working on figuring out efficient ways of browsing the web securely
with satellite as the transmission medium. If the browser or origin server
supports only HTTP/1.0, the result is major overhead: a full SSL handshake
for every TCP connection that is set up to fetch the HTML as well as the
embedded GIFs, etc.
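As a back-of-the-envelope illustration of that overhead (the ~550 ms
geostationary round-trip time and the 1-page/4-GIF page shape are assumed
numbers, not measurements):

    # Rough latency cost of HTTP/1.0 + SSL over a satellite link.
    RTT = 0.550            # seconds per geostationary round trip (assumed)
    RESOURCES = 5          # 1 HTML page + 4 embedded GIFs

    # Each HTTP/1.0 fetch pays: TCP 3-way handshake (~1 RTT), full SSL
    # handshake (~2 RTTs for the classic exchange), request/response (1 RTT).
    rtts_per_connection = 1 + 2 + 1

    total = RESOURCES * rtts_per_connection * RTT
    print(f"{RESOURCES * rtts_per_connection} RTTs ~= {total:.1f} s of latency")
    # -> 20 RTTs ~= 11.0 s before a single byte of transfer time is counted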

HTTP/1.1 provides some respite, since it supports persistent connections
and pipelining. Still, we would like to take it a bit further and work out
a solution that is more bandwidth-efficient and provides faster access.
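To make the HTTP/1.1 savings concrete, here is a minimal pipelining sketch
in Python: several GETs are written on one persistent TCP connection before
any response is read, so the TCP and SSL setup cost is paid only once. The
host and paths are placeholders, and a real client would still have to split
the response stream using Content-Length/chunked framing:

    import socket

    HOST = "www.example.com"   # placeholder host
    PATHS = ["/page.html", "/a.gif", "/b.gif", "/c.gif", "/d.gif"]

    sock = socket.create_connection((HOST, 80))
    for i, path in enumerate(PATHS):
        last = (i == len(PATHS) - 1)
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            + ("Connection: close\r\n" if last else "")
            + "\r\n"
        )
        sock.sendall(request.encode("ascii"))

    # Responses come back in request order on the same connection.
    data = b""
    while chunk := sock.recv(4096):
        data += chunk
    sock.close()
    print(f"received {len(data)} bytes for {len(PATHS)} pipelined requests")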

This is keeping in mind that the SSL handshake can't be spoofed. Any
thoughts?

Our idea about enhancing HTTP protocol:
---------------------------------------

The way browsing works today is on a request/response paradigm for each
embedded resource in a URI. For example, if an HTML page has 4 GIFs, the
browser sends 5 GET requests: one for the page and 4 for the GIFs in it.
I am wondering why the web server can't go a step further, scan the page
for the embedded GIFs and other GET-able resources, and send them across
on the same TCP connection in continuation. This would save 4 extra GET
requests and hence reduce network traffic, thereby providing a more
bandwidth-efficient mechanism for normal web browsing. The only
shortcoming that comes to my mind as of now is the case in which the
connection breaks down and the server has to transmit all the stuff all
over again. To overcome this demerit, I think we can add a couple more
headers to the protocol to achieve 'resume'-style functionality.
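For what it's worth, here is a sketch of the scanning step the proposal
needs on the server side: parse the HTML page and collect the embedded,
GET-able resources that could be pushed down the same connection. It uses
only the Python standard library, and the tag/attribute list is an
assumption about what counts as "embedded":

    from html.parser import HTMLParser

    class EmbeddedResourceScanner(HTMLParser):
        # tag -> attribute naming an embedded resource (assumed set)
        EMBEDDED = {"img": "src", "script": "src", "link": "href"}

        def __init__(self):
            super().__init__()
            self.resources = []

        def handle_starttag(self, tag, attrs):
            attr = self.EMBEDDED.get(tag)
            if attr:
                for name, value in attrs:
                    if name == attr and value:
                        self.resources.append(value)

    scanner = EmbeddedResourceScanner()
    scanner.feed('<html><body><img src="a.gif"><img src="b.gif"></body></html>')
    print(scanner.resources)   # -> ['a.gif', 'b.gif']

As for the 'resume' headers, note that HTTP/1.1 already defines Range and
If-Range, which give byte-range resume semantics for a single resource;
the new part would be resuming midway through a multi-resource push.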

I am actually wondering whether it's too petty an idea that got rejected
by the masses, or whether it just didn't spring up in anybody's mind...

Any comments?


Apart from all this, we would also like to contribute to the development
of the Apache web server, to bring it as close to the best (in terms of
HTTP/1.1 compliance, etc.) as possible.



RE: Volunteering for enhancing Apache

Posted by "Peter J. Cranstone" <Cr...@remotecommunications.com>.
>> Still, we would like to take it a bit further and work out a solution
>> that is more bandwidth-efficient and provides faster access.

Why not add mod_gzip to Apache? Compress the output prior to encrypting,
and then use mod_ssl for security. The technique for configuring this can
be found in the mailing archives for the mod_gzip forum:
(forum) http://lists.over.net/mailman/listinfo/mod_gzip
(archives) http://lists.over.net/pipermail/mod_gzip/
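For reference, here is a rough httpd.conf sketch of that combination for
Apache 1.3.x. The directive names follow the mod_gzip and mod_ssl
documentation of the period, but the paths and thresholds are placeholders,
so treat it as illustrative rather than authoritative:

    LoadModule gzip_module  libexec/mod_gzip.so

    # mod_ssl terminates SSL, so compression happens before encryption
    SSLEngine             on
    SSLCertificateFile    conf/ssl/server.crt
    SSLCertificateKeyFile conf/ssl/server.key

    # mod_gzip: compress text responses above a minimum size
    mod_gzip_on                 Yes
    mod_gzip_minimum_file_size  500
    mod_gzip_item_include mime  ^text/.*
    mod_gzip_item_include file  \.html$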

Remember, you need HTTP/1.1-compliant browsers on the client side. On the
server side, Apache 1.3.9 or 1.3.12 is the most stable, although mod_gzip
has been tested with all versions.

Regards


Peter J. Cranstone


