Posted to modperl@perl.apache.org by Foo Ji-Haw <jh...@nexlabs.com> on 2005/10/07 13:02:01 UTC

how to share data among modperl processes

Hi all,

I have a simple need: a process can take minutes to complete, but I want to display a progress bar of sorts to the user.

My idea is to have the handler call a local URL that does the heavy lifting. Something like this:
http://localhost/job/dojob => launches => http://localhost/job/actualjobworker


That local URL will set some global hash to say that it is still working on the job. The original handler (/job/dojob) becomes a polling script that checks the global hash to see whether the work is completed.

I read on the wiki that it is possible to have a 'global' variable that all processes can read from and write to. Can someone point me in the right direction? I tried perl.apache.org but couldn't find anything there. A URL should do the trick.

Thanks.
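
[A rough sketch of the pattern described above, assuming a shared store such as Cache::FastMmap (suggested further down the thread) in place of a plain Perl hash, which under the usual preforking setup is private to each child. The path, key names and subroutine names are only illustrative.]

use strict;
use warnings;
use Cache::FastMmap;

# One mmap'ed file shared by every Apache child (path is illustrative).
my $cache = Cache::FastMmap->new(
    share_file => '/tmp/job-status.fmm',
);

# Run by the /job/actualjobworker handler: record progress as it goes.
sub run_job {
    my ($job_id) = @_;
    for my $step (1 .. 100) {
        # ... do one slice of the real work here ...
        $cache->set("job:$job_id", $step);      # percent complete
    }
    $cache->set("job:$job_id", 'done');
}

# Run by the /job/dojob handler on each poll from the browser.
sub job_status {
    my ($job_id) = @_;
    return $cache->get("job:$job_id");          # undef until the worker starts
}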

Re: how to share data among modperl processes

Posted by Steven Lembark <le...@wrkhors.com>.

-- Perrin Harkins <pe...@elem.com>

> On Thu, 2005-10-20 at 23:37 +0100, Roger McCalman wrote:
>> I would also be surprised if anyone would want to run NFS on machines
>> that are used as public web servers, due to security issues with NFS.
> 
> It's not that bad in this case.  If you have a cluster of machines, you
> probably have a load balancer that your traffic goes through, so the web
> machines wouldn't actually be exposed to the world in this scenario.  A
> bad idea if you just use round robin DNS though.

Locking would be a bigger issue: NFS does not handle
locking well at all. 


-- 
Steven Lembark                                       85-09 90th Street
Workhorse Computing                                Woodhaven, NY 11421
lembark@wrkhors.com                                     1 888 359 3508

Re: how to share data among modperl processes

Posted by Perrin Harkins <pe...@elem.com>.
On Thu, 2005-10-20 at 23:37 +0100, Roger McCalman wrote:
> I would also be surprised if anyone would want to run NFS on machines
> that are used as public web servers, due to security issues with NFS.

It's not that bad in this case.  If you have a cluster of machines, you
probably have a load balancer that your traffic goes through, so the web
machines wouldn't actually be exposed to the world in this scenario.  A
bad idea if you just use round robin DNS though.

- Perrin


Re: how to share data among modperl processes

Posted by Roger McCalman <ro...@runcircle.co.uk>.
On Thu, Oct 20, 2005 at 04:59:08PM -0400, Perrin Harkins wrote:
> On Thu, 2005-10-20 at 13:52 -0700, Jay Buffington wrote:
> > Even if you are in a multiple server environment you should still be
> > able to use Cache::FastMmap.  You'll just have to make sure that the
> > global param share_file is  a file that would be shared to all servers
> > (perhaps over an NFS mount).
> 
> I would be pretty cautious with that.  Unless your NFS server handles
> fcntl perfectly, you could be in big trouble.  I'd try a torture test
> with multiple processes on different machines banging on it.  Many
> people have reported locking issues with various NFS servers over the
> years.
> 
> Even if it does work, NFS is slow enough that it may be better to use
> Cache::Memcached or just MySQL at that point.

I would also be surprised if anyone would want to run NFS on machines
that are used as public web servers, due to security issues with NFS.

Cheers, Roger

Re: how to share data among modperl processes

Posted by Perrin Harkins <pe...@elem.com>.
On Thu, 2005-10-20 at 13:52 -0700, Jay Buffington wrote:
> Even if you are in a multiple server environment you should still be
> able to use Cache::FastMmap.  You'll just have to make sure that the
> global param share_file is  a file that would be shared to all servers
> (perhaps over an NFS mount).

I would be pretty cautious with that.  Unless your NFS server handles
fcntl perfectly, you could be in big trouble.  I'd try a torture test
with multiple processes on different machines banging on it.  Many
people have reported locking issues with various NFS servers over the
years.

Even if it does work, NFS is slow enough that it may be better to use
Cache::Memcached or just MySQL at that point.

- Perrin
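
[One way to run the kind of torture test described above, assuming Cache::FastMmap with its share_file on the NFS mount; start several copies of this on each machine and watch for hangs, errors or mismatched reads. The path and iteration counts are arbitrary.]

#!/usr/bin/perl
# Crude lock-contention test: several processes hammer the same shared file.
use strict;
use warnings;
use Cache::FastMmap;

my $cache = Cache::FastMmap->new(
    share_file => '/mnt/nfs/fastmmap-torture.fmm',   # same path on every box
);

my $id = "$$." . time();
for my $i (1 .. 10_000) {
    my $key = 'k' . ($i % 50);          # few keys => heavy lock contention
    $cache->set($key, "$id:$i");
    defined $cache->get($key)
        or warn "$id: got undef straight after set on $key\n";
}
print "$id finished without hanging\n";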


Re: how to share data among modperl processes

Posted by Pratik <pr...@gmail.com>.
Instead of dealing with the trouble NFS (or anything similar) can bring,
using something like http://www.danga.com/memcached/ would be a better
idea.

Thanks,
Pratik

On 10/21/05, Jay Buffington <ja...@gmail.com> wrote:
> Even if you are in a multiple server environment you should still be
> able to use Cache::FastMmap.  You'll just have to make sure that the
> global param share_file is  a file that would be shared to all servers
> (perhaps over an NFS mount).
>
> Jay
>
> On 10/9/05, Pratik <pr...@gmail.com> wrote:
> > Unless you are running the application in multiple server environment,
> > http://search.cpan.org/~robm/Cache-FastMmap-1.09/ would be the best
> > choice. But probably you should read up on AJAX as well. It looks like
> > AJAX can help you accomplish what you are trying to do here.
> >
> > Thanks,
> > Pratik
> >
> > On 10/7/05, Foo Ji-Haw <jh...@nexlabs.com> wrote:
> > >
> > > Hi all,
> > >
> > > I have a simple need where a process can take minutes to complete. But I
> > > want to display some progress bar of sorts to the user.
> > >
> > > My idea is to have the handler call the local url which does the heavy
> > > lifting. Something like this:
> > > http://localhost/job/dojob => launches =>
> > > http://localhost/job/actualjobworker
> > >
> > >
> > > The local url will set some global hash to say that it is still working on
> > > the job. The original handler (/job/dojob) becomes a polling script to check
> > > on the global hash to see if the work is completed.
> > >
> > > I read from the wiki that it is possible to have a 'global' variable where
> > > all process can read/write to it. Can someone point me in the direction? I
> > > tried perl.apache.org but can't find anything there. A url should do the
> > > trick.
> > >
> > > Thanks.
> >
> >
> > --
> > http://www.rails.info - Coming Soon !
> >
>


--
http://www.rails.info - Coming Soon !

Re: how to share data among modperl processes

Posted by Jay Buffington <ja...@gmail.com>.
Even if you are in a multiple-server environment you should still be
able to use Cache::FastMmap. You'll just have to make sure that the
global share_file parameter points to a file that is shared by all
servers (perhaps over an NFS mount).

Jay
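
[For example (the mount point is made up), every web server would open the same file, and none of them should pass init_file => 1 at request time, or one box will wipe whatever the others have written.]

use Cache::FastMmap;

my $cache = Cache::FastMmap->new(
    share_file => '/mnt/shared/myapp-status.fmm',  # NFS-mounted on every server
    init_file  => 0,                               # never re-initialise from a web box
);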

On 10/9/05, Pratik <pr...@gmail.com> wrote:
> Unless you are running the application in multiple server environment,
> http://search.cpan.org/~robm/Cache-FastMmap-1.09/ would be the best
> choice. But probably you should read up on AJAX as well. It looks like
> AJAX can help you accomplish what you are trying to do here.
>
> Thanks,
> Pratik
>
> On 10/7/05, Foo Ji-Haw <jh...@nexlabs.com> wrote:
> >
> > Hi all,
> >
> > I have a simple need where a process can take minutes to complete. But I
> > want to display some progress bar of sorts to the user.
> >
> > My idea is to have the handler call the local url which does the heavy
> > lifting. Something like this:
> > http://localhost/job/dojob => launches =>
> > http://localhost/job/actualjobworker
> >
> >
> > The local url will set some global hash to say that it is still working on
> > the job. The original handler (/job/dojob) becomes a polling script to check
> > on the global hash to see if the work is completed.
> >
> > I read from the wiki that it is possible to have a 'global' variable where
> > all process can read/write to it. Can someone point me in the direction? I
> > tried perl.apache.org but can't find anything there. A url should do the
> > trick.
> >
> > Thanks.
>
>
> --
> http://www.rails.info - Coming Soon !
>

Re: how to share data among modperl processes

Posted by Foo Ji-Haw <jh...@nexlabs.com>.
Hello Pratik,

Between your solution and Michael's, I think yours is a closer fit to my
needs (thanks anyway, Michael!). According to the Cache::FastMmap
documentation, I only need to initialise the file during the PerlRequire
stage in httpd.conf, and then the rest of the scripts can just use it. Is
this correct?
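
[Roughly the shape being asked about, assuming the usual preforking setup; the paths and the My::SharedCache package name are invented for the example, not anything the module prescribes.]

# --- startup.pl, loaded from httpd.conf with:  PerlRequire /path/to/startup.pl
package My::SharedCache;
use strict;
use warnings;
use Cache::FastMmap;

# Created once at server startup, before the children fork, so every
# child ends up working against the same mapped file.
our $Cache = Cache::FastMmap->new(
    share_file => '/var/cache/httpd/job-status.fmm',
    init_file  => 1,    # safe here: runs once at server start, not per child
);

1;

# --- later, inside any handler or registry script:
# my $job_id = ...;    # taken from the request however you like
# $My::SharedCache::Cache->set("job:$job_id", 'running');
# my $status = $My::SharedCache::Cache->get("job:$job_id");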

----- Original Message ----- 
From: "Pratik" <pr...@gmail.com>
To: "Foo Ji-Haw" <jh...@nexlabs.com>
Cc: <mo...@perl.apache.org>
Sent: Sunday, October 09, 2005 10:21 PM
Subject: Re: how to share data among modperl processes


> Unless you are running the application in multiple server environment,
> http://search.cpan.org/~robm/Cache-FastMmap-1.09/ would be the best
> choice. But probably you should read up on AJAX as well. It looks like
> AJAX can help you accomplish what you are trying to do here.
>
> Thanks,
> Pratik
>
> On 10/7/05, Foo Ji-Haw <jh...@nexlabs.com> wrote:
> >
> > Hi all,
> >
> > I have a simple need where a process can take minutes to complete. But I
> > want to display some progress bar of sorts to the user.
> >
> > My idea is to have the handler call the local url which does the heavy
> > lifting. Something like this:
> > http://localhost/job/dojob => launches =>
> > http://localhost/job/actualjobworker
> >
> >
> > The local url will set some global hash to say that it is still working on
> > the job. The original handler (/job/dojob) becomes a polling script to check
> > on the global hash to see if the work is completed.
> >
> > I read from the wiki that it is possible to have a 'global' variable where
> > all process can read/write to it. Can someone point me in the direction? I
> > tried perl.apache.org but can't find anything there. A url should do the
> > trick.
> >
> > Thanks.
>
>
> --
> http://www.rails.info - Coming Soon !


Re: how to share data among modperl processes

Posted by Pratik <pr...@gmail.com>.
Unless you are running the application in a multiple-server environment,
http://search.cpan.org/~robm/Cache-FastMmap-1.09/ would be the best
choice. But you should probably read up on AJAX as well; it looks like
AJAX can help you accomplish what you are trying to do here.

Thanks,
Pratik
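
[A minimal sketch of the module's interface, with made-up paths and keys; get_and_set is documented in Cache::FastMmap, but check that the installed version provides it.]

use strict;
use warnings;
use Cache::FastMmap;

my $cache = Cache::FastMmap->new(
    share_file  => '/tmp/progress.fmm',   # illustrative
    expire_time => '1h',                  # drop stale job entries after an hour
);

# Plain reads and writes, visible to every process that opens the same file:
$cache->set('job:7', 0);
my $pct = $cache->get('job:7');

# Atomic read-modify-write, handy when several children bump one counter:
$cache->get_and_set('job:7', sub {
    my ($key, $val) = @_;
    return ($val || 0) + 1;
});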

On 10/7/05, Foo Ji-Haw <jh...@nexlabs.com> wrote:
>
> Hi all,
>
> I have a simple need where a process can take minutes to complete. But I
> want to display some progress bar of sorts to the user.
>
> My idea is to have the handler call the local url which does the heavy
> lifting. Something like this:
> http://localhost/job/dojob => launches =>
> http://localhost/job/actualjobworker
>
>
> The local url will set some global hash to say that it is still working on
> the job. The original handler (/job/dojob) becomes a polling script to check
> on the global hash to see if the work is completed.
>
> I read from the wiki that it is possible to have a 'global' variable where
> all process can read/write to it. Can someone point me in the direction? I
> tried perl.apache.org but can't find anything there. A url should do the
> trick.
>
> Thanks.


--
http://www.rails.info - Coming Soon !

Re: how to share data among modperl processes

Posted by Michael Hall <mi...@gmail.com>.
You could try Cache::Memcached
http://www.danga.com/memcached/
http://search.cpan.org/~bradfitz/Cache-Memcached-1.15/

It implements a dictionary spread across any number of memcached servers.

It will also allow you to run your back-end program under a different
Perl interpreter, on a different computer or operating system, or even
in a different programming language!

Michael
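
[A minimal Cache::Memcached sketch along those lines; the server addresses and keys are invented, and a memcached daemon has to be running at each address already.]

use strict;
use warnings;
use Cache::Memcached;

# Keys are hashed across however many daemons you list here.
my $memd = Cache::Memcached->new({
    servers => ['127.0.0.1:11211', '10.0.0.5:11211'],
});

# The worker records its progress; any process on any machine that talks
# to the same daemons can read it back.
$memd->set('job:123:percent', 42, 3600);    # third argument: expiry in seconds
my $pct = $memd->get('job:123:percent');
print defined $pct ? "job is ${pct}% done\n"
                   : "no status yet (or the entry expired)\n";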

On 10/7/05, Foo Ji-Haw <jh...@nexlabs.com> wrote:
>
> Hi all,
>
> I have a simple need where a process can take minutes to complete. But I
> want to display some progress bar of sorts to the user.
>
> My idea is to have the handler call the local url which does the heavy
> lifting. Something like this:
> http://localhost/job/dojob => launches =>
> http://localhost/job/actualjobworker
>
>
> The local url will set some global hash to say that it is still working on
> the job. The original handler (/job/dojob) becomes a polling script to check
> on the global hash to see if the work is completed.
>
> I read from the wiki that it is possible to have a 'global' variable where
> all process can read/write to it. Can someone point me in the direction? I
> tried perl.apache.org but can't find anything there. A url should do the
> trick.
>
> Thanks.