Posted to dev@httpd.apache.org on 2002/09/27 11:11:04 UTC

Saving request state when client uses byte ranges

Hi all,

I have a problem which I'm hoping someone can help me to sort out.  I've
written a module that performs two functions:

 - Authorisation of asset retrieval based on client IP, request URI and
   user token.
 - Confirmation of (non-)delivery of the requested asset.

This ties into a back-end over which I have no control.  The confirmation
of delivery works by using the module's logging function to check (see
the sketch below) that:

 - the request wasn't aborted (r->connection->aborted);
 - the total bytes sent (r->bytes_sent) matches the content length
   (r->clength).
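
In code, that boils down to something like this (an untested sketch,
assuming the 2.0 module API; confirm_delivery() is a hypothetical
stand-in name for the back-end calls):

#include "httpd.h"
#include "http_config.h"

/* Hypothetical stand-in for the real back-end integration:
 * commit the financial transaction, remove the auth record. */
static void confirm_delivery(request_rec *r)
{
    /* ... back-end calls not shown ... */
}

/* Runs once per request, after the response has gone out. */
static int asset_log_transaction(request_rec *r)
{
    /* Confirm only if the client stayed connected and we sent
     * exactly the asset's content length. */
    if (!r->connection->aborted && r->bytes_sent == r->clength) {
        confirm_delivery(r);
    }
    return OK;
}

static void asset_register_hooks(apr_pool_t *p)
{
    ap_hook_log_transaction(asset_log_transaction, NULL, NULL,
                            APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA asset_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    asset_register_hooks
};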

This is all fine and dandy, and works like a dream... EXCEPT (you knew it
was coming!) when a smart-ass client downloads byte ranges.

Now if it were just logging that was being performed here, I wouldn't be
overly bothered about this, but the confirmation of delivery also commits a
financial transaction, and then removes the authorisation record for the
asset.  Joe Q Public downloads bytes 0-299 of his 1 meg file, and gets
charged for it, even though he didn't download the whole thing and can't
download the rest of it.

I've kicked back on this and said that there's not much we can do about it
at the Apache level.  If we just don't do the confirmation of delivery on
partial responses, then anything downloaded in parts will never be charged
for.  We'd struggle to reconcile which client had downloaded which parts
of which file (and therefore deduce that they had received the complete
file),
especially if different child processes dealt with two parts of the same
file.  Game over, yes?
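
(Detecting the partial-response case is at least easy, assuming 2.0,
where a ranged response ends up with status 206:

    /* in the log hook, before the confirmation check */
    if (r->status == HTTP_PARTIAL_CONTENT) {
        return OK;  /* skip confirmation; but then ranges never bill */
    }

...so the problem isn't spotting the ranges, it's accounting for them.)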

Well, maybe... but I'm wondering if there's something we could do with
shared memory, to store the state of such downloads and reconcile it that
way.  I'm on shaky ground here, as I've never programmed with shared
memory before (a rough, untested sketch of what I'm picturing follows).
I don't want to have to write an actual file to the file system; that's
just plain messy, not to mention prone to errors (someone deletes the
file, file-locking problems, asynchronous I/O...) and a potential
performance hit.
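
Something like this is what I have in mind: a fixed-size table in shared
memory, keyed on the authorisation token, accumulating bytes delivered
until they add up to the content length.  Completely untested, cobbled
together from the APR headers (apr_shm, apr_global_mutex); dl_account(),
the slot table and the token scheme are all invented for illustration:

#include "httpd.h"
#include "http_config.h"
#include "apr_shm.h"
#include "apr_global_mutex.h"
#include "apr_strings.h"
#include <string.h>

#define DL_SLOTS 1024

typedef struct {
    char token[64];       /* authorisation token identifying the asset */
    apr_off_t delivered;  /* bytes delivered so far, across all ranges */
    apr_off_t total;      /* full content length of the asset          */
} dl_slot;

static apr_shm_t *dl_shm;
static apr_global_mutex_t *dl_lock;
static dl_slot *dl_table;

/* post_config: carve out the shared table before the children fork.
 * (Error checking and apr_global_mutex_child_init() omitted.) */
static int dl_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                          apr_pool_t *ptemp, server_rec *s)
{
    apr_shm_create(&dl_shm, DL_SLOTS * sizeof(dl_slot), NULL, pconf);
    dl_table = apr_shm_baseaddr_get(dl_shm);
    memset(dl_table, 0, DL_SLOTS * sizeof(dl_slot));
    apr_global_mutex_create(&dl_lock, NULL, APR_LOCK_DEFAULT, pconf);
    return OK;
}

/* Called from the logging hook on every (possibly partial) response.
 * Returns 1 once the accumulated bytes cover the whole asset.  Naive
 * on purpose: overlapping ranges are double-counted; real code would
 * have to track the actual byte intervals delivered. */
static int dl_account(const char *token, apr_off_t sent, apr_off_t total)
{
    int i, done = 0;

    apr_global_mutex_lock(dl_lock);
    for (i = 0; i < DL_SLOTS; i++) {
        dl_slot *s = &dl_table[i];
        if (s->token[0] == '\0' || strcmp(s->token, token) == 0) {
            if (s->token[0] == '\0') {          /* claim a free slot */
                apr_cpystrn(s->token, token, sizeof(s->token));
                s->total = total;
            }
            s->delivered += sent;
            if (s->delivered >= s->total) {
                done = 1;
                s->token[0] = '\0';             /* release the slot */
            }
            break;
        }
    }
    apr_global_mutex_unlock(dl_lock);
    return done;
}

Even this obviously falls over on overlapping or repeated ranges (they'd
be double-counted), server restarts and table exhaustion, which is partly
why I'm asking.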

Can anyone suggest a way around this problem?  Is my shared memory idea
workable?  Is there another way that I'm just blindly missing?

As ever, any help would be very greatly appreciated.

Cheers,

JT
-- 
+------------------------------------+------------------------------------+
| James Tait                         | ICQ# 17834893                      |
| MUD programmer and Linux advocate  | http://www.wyrddreams.demon.co.uk/ |
+------------------------------------+------------------------------------+
