Posted to modperl@perl.apache.org by James G Smith <JG...@JameSmith.COM> on 2001/01/07 19:03:27 UTC

Re: getting rid of multiple identical http requests (bad users double-clicking)

Stas Bekman <st...@stason.org> wrote:
>On Fri, 5 Jan 2001, Gunther Birznieks wrote:
>
>> Sorry if this solution has been mentioned before (I didn't read the earlier
>> parts of this thread), and I know it's not as perfect as a server-side
>> solution...
>> 
>> But I've also seen a lot of people use javascript to accomplish the same
>> thing as a quick fix. Few browsers don't support javascript, and of the
>> small number that don't, the intersection of browsers without javascript
>> and users with an itchy trigger finger is very small. The advantage is
>> that it's faster than cluttering your own server-side code with extra
>> logic to prevent double posting.
>
>Nothing stops users from saving the form and resubmitting it without the
>JS code. This may reduce the number of attempts, but it's a partial
>solution and won't stop determined users.

Nothing dependent on the client can be considered a fail-safe 
solution.

I encountered this problem with some PHP pages, but the idea is 
the same regardless of the language.

Not all pages have problems with double submissions.  For
example, a page that provides read-only access to data can
usually be retrieved multiple times without damaging anything.
It's submitting changes to data that becomes the problem.  I
ended up locking on some identifying characteristic of the
object whose data is being modified.  If I can't get the lock, I
send the user a page explaining that there was probably a double
submission and that the first attempt may have succeeded; the
user then needs to go back and check the data to make sure.

In pseudo-perl-code:

sub get_lock {
  my ($objecttype, $objectid) = @_;

  # $dir and $nullfile are package globals configured elsewhere
  # (see the example values below).
  my ($sec, $min, $hr, $mday, $mon, $yr) = gmtime(time);
  my $lockfile = sprintf("%s/%04d%02d%02d%02d%02d%02d-%s",
                         $objecttype, $yr + 1900, $mon + 1, $mday,
                         $hr, $min, $sec, $objectid);

  # link() fails if the new name already exists, so a successful
  # link means we hold the lock.
  my $r = 0;
  for (my $n = 0; $n < 10000 && !$r; $n++) {
    $r = link("$dir/$nullfile", "$dir/$lockfile-$n.lock");
  }

  return $r;
}
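
To show how this fits into the form handling, a caller might look
roughly like the sketch below.  The handler and the response
routines are hypothetical stand-ins for whatever the application
actually does; only get_lock() comes from above.

# Hypothetical handler sketch: update_entry() and the two
# send_*_page() routines stand in for real application code.
sub handle_update {
  my ($objecttype, $objectid, %formdata) = @_;

  if (get_lock($objecttype, $objectid)) {
    update_entry($objecttype, $objectid, %formdata);
    send_success_page();
  } else {
    # Probably a double submission: tell the user the first
    # attempt may already have gone through and ask them to check.
    send_double_submission_page();
  }
}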

So, for example, if I am trying to modify an entry for a test 
organization in our directory service, the lock is

  "/var/md/dsa/shadow/www-ldif-log/roles and organizations/20010107175816-luggage-org-0.lock"

  $dir = "/var/md/dsa/shadow/www-ldif-log";
  $objecttype = "roles and organizations";
  $objectid   = "luggage-org";

This is a specific example, but I'm sure other approaches can
achieve the same result -- basically serializing write access to
individual objects (in this case, entries in our directory
service); one alternative is sketched below.  Once writes are
serialized, double submissions don't hurt anything.
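
As one illustration of another approach (my own sketch, not part
of the setup described above), the same serialization can be had
by holding an exclusive flock on a per-object lock file for the
duration of the update:

use Fcntl qw(:flock);

# Hold an exclusive, non-blocking lock on a per-object file while
# the change is applied; $lockdir is assumed to exist and be writable.
sub with_object_lock {
  my ($lockdir, $objectid, $code) = @_;

  open(my $fh, '>>', "$lockdir/$objectid.lock") or return 0;
  unless (flock($fh, LOCK_EX | LOCK_NB)) {
    close($fh);
    return 0;          # someone else is already updating this object
  }
  $code->();           # apply the change while holding the lock
  close($fh);          # closing the handle releases the lock
  return 1;
}

Unlike the link() approach, this leaves no record behind, so it
only catches submissions that overlap in time.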

Regarding the desire not to add extra code: never let down your
guard when you are designing and programming.  Paranoid
programmers tend to write inherently more secure code.
------------------------------------+-----------------------------------------
James Smith - jgsmith@jamesmith.com | http://www.jamesmith.com/
            jsmith@sourcegarden.org | http://sourcegarden.org/
              jgsmith@tamu.edu      | http://cis.tamu.edu/systems/opensystems/
------------------------------------+------------------------------------------