Posted to jcs-users@jakarta.apache.org by Daniel Rosenbaum <dr...@yahoo.com> on 2005/03/06 03:55:28 UTC

JCS Setup Questions

Hello,

JCS seems to have come a long way since about a year ago.  Can I
assume that most of the bugs the Hibernate project was reporting
have been fixed?

Anyhow, I am thinking about using JCS for its clustered cache
capability.  I hope the community would not mind giving me a few
pointers on how to set this up properly, and some insight as to
which configuration options are best for me.

I have a web app running on Weblogic that is currently clustered
on two servers but may go up to 5 servers or more.  The data I
cache is mostly read only but changes occasionally.  I don't
care so much about sharing data between servers but am more
concerned about not serving stale data, so I would be happy just
to send an invalidate message to the rest of the servers on an
element change so they would not serve stale data.

I am trying to make heads or tails of the docs but find them
difficult to understand.  As far as I can tell, though, a
lateral cache would suit my needs best.

1) Since I am not so concerned about sharing the data, I figure a
remote cache is not so important; plus, it seems that for a remote
cache I would need to start another process besides the web
servers.  This seems like overkill.  Am I correct, though, that
for a lateral cache I would not need to start another process?
If so, I am confused about how the "listener" gets started for a
lateral cache, what sets up the binding of the port, etc.  The doc
was not clear on this.

2) I am unclear how I would specify servers in the properties
files.  Could I use the same .ccf file for each server, or do I
need to create a separate one for each server.  For example, say
my servers are S1 and S2.  Could I simply put:

jcs.auxiliary.LTCP.attributes.TcpServers=S1:1111,S2:1112

and include this .ccf in the war file running on both servers,
or do I need to create one .ccf file with:

jcs.auxiliary.LTCP.attributes.TcpServers=S2:1112

to be in S1's war file and 

jcs.auxiliary.LTCP.attributes.TcpServers=S1:1111

to be in S2's war file?  And if I could just have one .ccf file
for both servers, wouldn't this result in the server sending a
message to itself (besides the message sent to the other
server), which it would then need to handle?  This seems
wasteful.  On the other hand, I would not want to have to create
a separate war file to be deployed for each server with a
different setting in the .ccf file.  I have a feeling I may just
be misunderstanding how this all works.
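For what it's worth, the usual starting point is a single shared
cache.ccf along these lines.  This is only a sketch: the property
names follow the JCS lateral TCP documentation, but the region
names, hosts, and ports are examples, so verify the attribute names
against your JCS version:

```
# Default region: in-memory LRU cache plus the lateral TCP auxiliary
jcs.default=LTCP
jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.default.cacheattributes.MaxObjects=1000
jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache

# Lateral TCP auxiliary: peer server list and the local listener port
jcs.auxiliary.LTCP=org.apache.jcs.auxiliary.lateral.LateralCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.jcs.auxiliary.lateral.LateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TransmissionTypeName=TCP
jcs.auxiliary.LTCP.attributes.TcpServers=S1:1111,S2:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1111
```

The TcpListenerPort would be the one per-server value, which is why
the per-node override question below matters.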

3) Any advice what transmission mechanism to use for the lateral
cache?  I am leaning towards using JGroups for the reliability. 
Just a question about this though, does JGroups send the
invalidate messages right away?  I think I remember reading
somewhere that JGroups channels only work on a pull method, so
servers only wake up every so often to get new messages, but
messages are not received instantaneously.  Is this at all true?
 I hope not, as I would want invalidate messages to reach all
the servers as soon as data is changed.  If this is true, any
advice how else to configure the lateral cache so that messages
would be sent and received right away?

4) Is there a way to set up the lateral caches to only send
invalidate messages but not serialize the objects and send them?
 My objects sometimes get pretty large so I would not want to
incur the expense of this.  I'd rather servers get their own
copies in the uncommon event more than one user needs the same
data.  (most users in my app do not access the same data, but
one user typically would access the same data over and over
again in their web requests, and I am using a sticky load
balancer so most of the time a user would hit the same server.)

5) I will be using this cache with Hibernate.  For some reason
the Hibernate project deprecated the use of JCS as a secondary
cache, but with all the recent bug fixes I see no reason not to
use it.  Does anyone know why not to?  The only thing I saw is
not to use the JCS distributed cache functionality with
Hibernate since JCS does not support locking, but I am not so
concerned about this.  Is there any other reason?

6) Am I correct to assume that if I call cache.removeAll() that
all the elements in that cache region will be cleared on all the
servers through the lateral cache?  Is this also efficient, say
only one network message to clear the entire cache, or would JCS
have to create a separate message for each element in the cache?

7) Any other advice or alternative setups?

I know these are a lot of questions but I would appreciate any
help.  Once I understand this better, in return I would like to
give back to the community by perhaps contributing better
documentation, or writing a proper user's guide.

Thank you,
Daniel Rosenbaum




---------------------------------------------------------------------
To unsubscribe, e-mail: turbine-jcs-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: turbine-jcs-user-help@jakarta.apache.org


Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.
I'm also working on a JMS plugin for the cache, mainly
for sending invalidation messages.  It is probably the
best way to scale a cache.  Pub sub is ideal for what
we are trying to do. 

Aaron

--- Daniel Rosenbaum <dr...@yahoo.com>
wrote:

> Thanks Aaron for your insight.  You answered some of my
> questions, and I now have a better picture of how to set this
> all up, but the following questions still remain.  Could you or
> someone else please be kind enough to respond?
> 
> 1) How does the "listener" get started for a lateral cache, and
> what sets up the binding of the port, etc.?  Is it a separate
> process, or does it start a new thread when the cache is
> initialized in the web app?
> 
> 2) Does the lateral cache broadcast change messages right away,
> or is there a thread that wakes up periodically to send
> messages?  In other words, is there ever a possibility that
> server B would serve stale data after the object was changed on
> server A, simply because the thread had not yet woken up to
> propagate the change?
> 
> 3) Similar to (2), does the lateral cache have a listener, or is
> it the type that wakes up periodically to check if there are
> messages waiting?  Put another way, does it have a push or a
> pull model?
> 
> 4) Is there a way to set up the lateral caches to only send
> invalidate messages but not serialize the objects and send them?
> 
> 5) I will be using this cache with Hibernate.  For some reason
> the Hibernate project deprecated the use of JCS as a secondary
> cache, and the new Hibernate 3.0 release does not even include
> the JCS plugin at all, but with all the recent bug fixes I see
> no reason not to use it.  Does anyone know why not to?  The only
> reason I saw not to use the JCS distributed cache functionality
> with Hibernate is that JCS does not support locking and
> transactions, but I am not so concerned about this.  Is there
> any other reason?  (By the way, since the plugin code no longer
> exists in the new version of Hibernate, perhaps it should be
> moved into the JCS code base so JCS could continue to be used
> with Hibernate?)
> 
> 6) New question: the config idea with JNDI is interesting, but I
> am not sure it is possible with Weblogic clustering.  With
> Weblogic clustering you deploy a war file on just one server and
> Weblogic then automatically deploys it on all the servers in the
> cluster for you.  There is only one place to configure all the
> servers.  I am not sure there is a way to specify a different
> value for each server, and you would actually have only one
> console.  Does anyone have any experience with this, and know of
> a way to specify a different value for each node in a cluster?
> 
> Thanks in advance,
> Daniel
> 
> 
> --- Aaron Smuts <as...@yahoo.com> wrote:
> 
> > Hi Daniel.
> > 
> > A removeAll sends one message, not a message for each item.
> > 
> > JGroups is fine, but it is slower than the other options.
> > 
> > I'd use tcp lateral connections or the remote rmi server.
> > 
> > If you use the tcp lateral, then you need to specify the
> > servers a cache should connect to.  If you have 3 servers,
> > A, B, and C, then A should point to B and C, B should point
> > to A and C, . . .
> > 
> > The problem is that this would require that you have a
> > different war for each app.
> > 
> > There is a solution.  Use JNDI and a startup servlet.  Set
> > the server list as a value in application context through the
> > container.  Make a startup servlet that configures JCS based
> > on a properties object.  Load the cache.ccf file and change
> > the values you need.  Then configure JCS with this, using the
> > CompositeCacheManager.
> > 
> > This way you can deploy the same war to multiple servers.
> > 
> > 
> > Aaron
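The startup-servlet idea quoted above (deploy the same war everywhere,
inject the per-node server list from the container) comes down to
loading the shared cache.ccf as a Properties object and overriding one
key before handing it to JCS.  A rough sketch of that override step,
using only java.util.Properties; the key name matches the TcpServers
attribute discussed in this thread, and in a real deployment this
would run in a startup servlet (reading a JNDI env entry or context
param) that then calls JCS's CompositeCacheManager.configure(props):

```java
import java.io.ByteArrayInputStream;
import java.util.Properties;

// Sketch of "same war, per-node lateral config": load the shared
// cache.ccf, then override the lateral server list with a value
// supplied per node.  The per-node value would come from the
// container (JNDI env entry, context-param, or here a system
// property); the merged Properties would then be passed to
// CompositeCacheManager.configure().
public class CcfOverride {
    static final String KEY = "jcs.auxiliary.LTCP.attributes.TcpServers";

    public static Properties load(String ccfText, String perNodeServers)
            throws Exception {
        Properties props = new Properties();
        // Properties files are ISO-8859-1 by specification.
        props.load(new ByteArrayInputStream(ccfText.getBytes("ISO-8859-1")));
        if (perNodeServers != null) {
            props.setProperty(KEY, perNodeServers); // node-specific override
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        String ccf = KEY + "=S1:1111,S2:1112\n";
        // e.g. -Djcs.tcpServers=S2:1112 on node S1
        Properties p = load(ccf, System.getProperty("jcs.tcpServers"));
        System.out.println(p.getProperty(KEY));
    }
}
```

The point is that the war file stays identical on every node; only the
container-supplied value differs.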




Re: JCS Setup Questions

Posted by Hanson Char <ha...@gmail.com>.
> Hibernate makes it easy to cache and has a feature I find
> particularly attractive, which is the query cache.  Hibernate
> saves the last update timestamp any time a row in any table
> is updated, or an insert or delete is performed.  When a query
> is performed and the result set is already cached, the last
> update timestamps are checked behind the scenes for all tables
> involved in the query.  If any of the timestamps are later than
> the time the query was performed, the cached result set is
> disregarded and a new trip to the database is made.  This is a
> good thing, since I do not have to keep track of all queries
> that may use a particular object, and can rely on Hibernate to
> do this checking for me.

+1 

> To implement this myself would be much more difficult.

Difficulty aside, it's just unnecessary.

H



Re: JCS Setup Questions

Posted by Daniel Rosenbaum <dr...@yahoo.com>.
> Personally, I wouldn't cache inside Hibernate.  I'd
> roll my own outside so I have more control.  Caching
> is too hard to leave it up to a tool with a generic
> solution.  We can move on anyway.

Hibernate makes it easy to cache and has a feature I find
particularly attractive, which is the query cache.  Hibernate
saves the last update timestamp any time a row in any table is
updated, or an insert or delete is performed.  When a query is
performed and the result set is already cached, the last update
timestamps are checked behind the scenes for all tables involved
in the query.  If any of the timestamps are later than the time
the query was performed, the cached result set is disregarded and
a new trip to the database is made.  This is a good thing, since
I do not have to keep track of all queries that may use a
particular object, and can rely on Hibernate to do this checking
for me.  To implement this myself would be much more difficult.
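The timestamp mechanism described above can be sketched as a toy in a
few lines.  This is not Hibernate's code, just an illustration of the
idea: cache each result with the time it was cached and the set of
tables it touches, record a last-update time per table, and treat the
result as stale if any of its tables changed after it was cached.  All
names here are invented:

```java
import java.util.*;

// Toy sketch of Hibernate-style query-cache invalidation.
public class QueryCacheSketch {
    private final Map<String, Long> tableUpdatedAt = new HashMap<>();
    private final Map<String, CachedResult> results = new HashMap<>();

    static class CachedResult {
        final List<String> rows;   // cached result set
        final long cachedAt;       // when the result was cached
        final Set<String> tables;  // tables the query touches
        CachedResult(List<String> rows, long cachedAt, Set<String> tables) {
            this.rows = rows; this.cachedAt = cachedAt; this.tables = tables;
        }
    }

    /** Record that a row in this table was inserted/updated/deleted. */
    public void recordUpdate(String table, long now) {
        tableUpdatedAt.put(table, now);
    }

    public void put(String query, List<String> rows, Set<String> tables, long now) {
        results.put(query, new CachedResult(rows, now, tables));
    }

    /** Cached rows, or null if missing or any involved table changed since caching. */
    public List<String> get(String query) {
        CachedResult r = results.get(query);
        if (r == null) return null;
        for (String t : r.tables) {
            Long updated = tableUpdatedAt.get(t);
            if (updated != null && updated > r.cachedAt) return null; // stale
        }
        return r.rows;
    }
}
```

The attraction, as described above, is that the caller never has to
enumerate which queries a given object change invalidates; the
per-table timestamp check covers them all.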

> 
> If you are not trying to get from other laterals, then
> yes, remove on put is useful for data that is most
> likely associated with a single user, but could affect
> others, in a sticky load balanced situation.  
> 
> If you have get enabled, then this isn't a concern.  B
> would try to get from A and the endless invalidation
> would never have started.  The remote cache is best
> for this.
> 
> It is better to just put on put for other kinds of
> data.  It is not that expensive.
> 

What does it mean to have get enabled?  Does this mean that on a
get from the cache, the first thing it will do is send a message
to the other servers asking if anyone has that object and, if so,
to send it across the wire?  Assuming that is the case, how much
time would the process wait for a response before giving up and
going to the db?  If it is a short time, could the lateral then be
configured so that if a cache entry is put on A an invalidate
message is sent to B, and if B then asks for that entry A would
send it, rather than B going to the database?  If I understand you
correctly this is exactly what should happen.

As a side question, wouldn't gets be slower when getting
something not already in the cache, because of the time it must
wait when asking the other caches whether anyone else has a copy?
What effect would this have on the performance of an app,
especially if many gets are performed on loading a page?

> 
> That depends.  Some data is ok stale.  Bank account
> data isn't.  The description of a book or some other
> item at an online store, like the one I work for, is
> ok.
> 

In my case, some of my data is ok stale, but for other data it is
critical that it not be stale.  Otherwise it would make the user
go "huh?  But I just changed that!  What a piece of junk."  This
is data that is rarely changed, and is expensive to load fresh,
but on the occasion that it is changed it must be correct right
away.


I am particularly interested in whether anyone has any comments on
using JCS with Hibernate.  Has anyone had any issues with
distribution?  Also, I think a problem was reported once with
serialization of Hibernate objects; does anyone know if this has
been corrected?  (If my memory serves me correctly it was Travis
Savo who reported this problem.  Travis, if you are reading this,
could you please comment?  Thanks.)

Daniel



Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.
--- Daniel Rosenbaum <dr...@yahoo.com>
wrote:

> I have a side concern specific to using Hibernate with
> distributed caching, particularly when caching collections.  I
> can best describe my concern with a (somewhat contrived)
> example.

Personally, I wouldn't cache inside Hibernate.  I'd
roll my own outside so I have more control.  Caching
is too hard to leave it up to a tool with a generic
solution.  We can move on anyway.

> 
> Say you have a Department object, along with Employee objects,
> with a Department.departmentEmployees collection, and there are
> 600 employees in a department.  In Hibernate, each employee
> would have its own object, and the
> Department.departmentEmployees collection would be a cache entry
> with the primary keys of all the Employee objects that belong to
> that department.  So in all, you have a total of 600+1=601
> objects, and therefore 601 items would be put into the cache on
> loading this collection, or 601 cache puts.
> 
> Using a lateral cache, this would result in 601 objects being
> serialized and sent over the network.  This seems an expensive
> price for placing this collection in a distributed cache.
> 
> 

For something like this, I assume that employees
don't change often.  This is a startup cost.  The
distribution is so fast that 600 or even 6,000 is not
very expensive.  You can make things faster if you
control the serialization, but this is inside of
Hibernate, so I lose that option.


> Worse, even if only cache invalidate messages were sent on
> new puts, couldn't the following happen:
> 
> 1) A servlet on Server A loads the collection from the db.  601
> invalidate messages are sent to server B.
> 2) A short time after, a servlet on Server B also loads the
> same collection.  This would also result in getting the data
> from the db (since there is no serialization).  This would also
> produce 601 invalidate messages, now to server A.
> 3) A short time after that, Server A loads the same collection
> as in (1) again.  It would no longer be in the cache, since step
> (2) invalidated it, and A would have to go to the database again
> to get it!
> 

Yep.  Invalidation on put for collections can be
nasty.  What we need is a get-all-on-startup feature.
This works for the remote cache, but not for laterals.

> In other words, you would end up with a cache where an entry is
> only valid if that entry is used on one and only one server: if
> Server A is the only user of those entries it stays cached, but
> if server B ever tries to use it, it gets invalidated!  This
> would seem to give the cache very limited use.
> 
> 

If you are not trying to get from other laterals, then
yes, remove on put is useful for data that is most
likely associated with a single user, but could affect
others, in a sticky load balanced situation.  

If you have get enabled, then this isn't a concern.  B
would try to get from A and the endless invalidation
would never have started.  The remote cache is best
for this.

It is better to just put on put for other kinds of
data.  It is not that expensive.



> Another concern I have: say the collection is changed on server
> A, which produces 601 messages, and server B starts to process
> the messages, but before all the messages are processed a
> servlet on server B reads the collection.  If this happened
> after only 250 messages were processed, the servlet would get a
> collection with 250 new objects and 350 old objects.  This would
> be an unacceptable situation where you would have inconsistent
> data.  In the employee example, you would have a collection with
> 250 employees with up-to-date info while the rest have stale
> info.
> 

If that is what hibernate would do, then yes.  

More later.

Aaron



Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.
> True, but part of the reason to use a cache is speed.  It is
> expensive to always have to reload a collection of 600-1000
> rows, so it is a perfect candidate for caching, but that is not
> acceptable if it comes at the cost of having invalid data.
> 

That depends.  Some data is ok stale.  Bank account
data isn't.  The description of a book or some other
item at an online store, like the one I work for, is
ok.

 

> > You write the plugin, send it to us, and I'll put it in a
> > plugin jar along with the struts plugin.
> 
> I may do just that, though the code would be pretty much the
> same as it is in the JCS plugin in the Hibernate 2.1.x source
> tree.  I could convert it to a JCS package and send it.  I'll
> try to do this when I get a chance.
> 

Sounds good.  

> > You can definitely set different values for different
> > servers.
> 
> Does anyone know how?  As far as I understand it, server
> instances in a Weblogic cluster utilize a cluster-wide JNDI
> tree, so how could you configure a separate value for each node?
>  
> > Either way, I think the remote cache is a better option.  It
> > solves your configuration problems, since all the local caches
> > can have the same settings.  It does require that you run a
> > separate process, but it is a better model overall.
> 
> That may be, but I doubt the managers and admins at my company
> would allow another process or server to run besides the web
> servers.
> 

Oh well.  

Aaron






Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.


--- Daniel Rosenbaum <dr...@yahoo.com>
wrote:

> So if I understand correctly, there is a possibility of serving
> stale data on server B if the invalidate event on server A is
> stuck in the queue.  Is this correct?

Stuck in the queue is not the problem.  There is a
period of time during which B has old data after A has
been updated.  That's the problem.  It's a race condition.
Without transactions and distributed locking, all you
can do is minimize the amount of time and lessen the
chance.

Oracle RAC clusters have the same kind of windows.  What
you do is make sure the user goes back to the same
cluster server.   . . .  At a certain scale there are no
good solutions.

There are various strategies for setups like yours.
Go to the db before submitting a change.  . . .  Use
versioning of the objects.  . . .  Store last modified
times . . .

I'll get to your examples next.

Aaron



Re: JCS Setup Questions

Posted by Daniel Rosenbaum <dr...@yahoo.com>.
> The standard option is that every queue has
> its own devoted worker thread, which dies after 60
> seconds of inactivity and is started again when
> something gets put in the queue.  While alive it tries
> to get stuff out of the queue.  If you put it in, it
> will be pulled out as soon as possible.  

So if I understand correctly, there is a possibility of serving
stale data on server B if the invalidate event on server A is
stuck in the queue.  Is this correct?

I have a side concern specific to using Hibernate with
distributed caching, particularly when caching collections.  I
can best describe my concern with a (somewhat contrived)
example.

Say you have a Department object, along with Employee objects,
with a Department.departmentEmployees collection, and there are
600 employees in a department.  In Hibernate, each employee
would have its own object, and the
Department.departmentEmployees collection would be a cache entry
with the primary keys of all the Employee objects that belong to
that department.  So in all, you have a total of 600+1=601
objects, and therefore 601 items would be put into the cache on
loading this collection, or 601 cache puts.

Using a lateral cache, this would result in 601 objects being
serialized and sent on the network.  This seems an expensive
price for placing this collection in a distributed cache.

Worse, even if only cache invalidate messages were sent on
new puts, couldn't the following happen:

1) A servlet on Server A loads the collection from the db.  601
invalidate messages are sent to server B.
2) A short time after, a servlet on Server B also loads the
same collection.  This would also result in getting the data
from the db (since there is no serialization).  This would also
produce 601 invalidate messages, now to server A.
3) A short time after that, Server A loads the same collection
as in (1) again.  It would no longer be in the cache, since step
(2) invalidated it, and A would have to go to the database again
to get it!

In other words, you would end up with a cache where an entry is
only valid if that entry is used on one and only one server: if
Server A is the only user of those entries it stays cached, but
if server B ever tries to use it, it gets invalidated!  This
would seem to give the cache very limited use.
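The remove-on-put ping-pong described here is easy to demonstrate
with a toy two-node model (not JCS code, just the invalidation rule:
every put broadcasts a remove to the peer):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of "remove on put" between two lateral peers.
public class RemoveOnPutSketch {
    final Map<String, String> store = new HashMap<>();
    RemoveOnPutSketch peer;

    void put(String key, String value) {
        store.put(key, value);
        if (peer != null) peer.store.remove(key); // lateral invalidate
    }

    String get(String key) { return store.get(key); }
}
```

Walking through the scenario above: A loads and puts the collection,
then B misses, loads from the db, and puts it too, which removes A's
copy, so A's next read is a miss again even though nothing actually
changed.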

Another concern I have, say the collection is changed on server
A, which produces 601 messages, and server B starts to process
the messages, but before all the messages are processed a
servlet on server B reads the collection.  If this happened
after only 250 messages were processed, the servlet would get a
collection with 250 new objects and 350 old objects.  This would
be an unacceptable situation where you would have inconsistent
data.  In the employee example, you would have a collection with
250 employees with up-to-date info but the rest have stale info.

I am starting to wonder if JCS is the right tool for distributed
caching, at least in my application.  Am I misunderstanding how
all this works?  Perhaps a transactional cache is really needed
for such needs, so there would not be a risk of a mixed bag of
old and new data.
  
> It would be madness to use jboss cache in a big
> cluster in locking mode.  Using jgroups for
> distributed locking is not scalable.  

I don't expect my app to ever be on more than a few servers,
maximum 7.  I think this is an acceptable number, though I do
not like the fact that each collection retrieval would result in
all that serialization and network traffic, as I described
above.  My application frequently needs to retrieve collections
of up to 1000 objects.  But I am researching whether I really
should be using JBoss Cache.  (By the way, my app has nothing to
do with departments and employees; that was only an example.)

> 
> Also, if you need that degree of data integrity, you
> don't need a cache, you need a database.

True, but part of the reason to use a cache is for speed.  It is
expensive to always have to reload a collection of 600-1000
rows, so it is a perfect candidate for caching, but that is not
acceptable if it is at the cost of having invalid data.

> 
> You write the plugin, send it to us, and I'll put it
> in a plugin jar along with the struts plugin. 

I may do just that, though the code would be pretty much the
same as it is in the JCS plugin in the Hibernate 2.1.x source
tree.  I could convert it to a JCS package and send it.  I'll
try to do this when I get a chance.

> 
> You can definitely set different values for different
> servers.

Does anyone know how?  As far as I understand it, server
instances in a Weblogic cluster utilize a cluster-wide JNDI
tree, so how could you configure a separate value for each node?
 
> Either way, I think the remote cache is a better
> option.  It solves your configuration problems, since
> all the local caches can have the same settings.  It
> does require that you run a separate process, but it
> is a better model overall. 

That may be, but I doubt the managers and admins at my company
would allow another process or server to run besides for the web
servers.

Thanks for all your help once again.

Daniel





Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.
--- Daniel Rosenbaum <dr...@yahoo.com>
wrote:

> Thanks Aaron for your insight.  You answered some of my
> questions, and I now have a better picture of how to set this
> all up, but the following questions still remain.  Could you or
> someone else please be kind enough to respond?
> 
> 
> 1) How does the "listener" get started for a lateral
> cache, what
> sets up binding of the port etc.  Is it a separate
> process or
> does it start a new thread when the cache is
> initialized in the
> web app?
> 

If you configure a region to use a lateral cache, the
listener will start up when the cache is initialized.
The first time you get any region or call configure,
all regions are configured.
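
For example, a cache.ccf fragment along these lines
(region name, ports, and attribute names are
illustrative; check them against the JCS docs for your
version) makes the in-process listener bind its port
the first time the region is initialized:

```properties
# Hypothetical fragment: wire region "testCache" to a TCP lateral auxiliary.
# TcpListenerPort is bound by a listener thread inside the web app process
# when the cache is first initialized; no separate server process is needed.
jcs.region.testCache=LTCP
jcs.auxiliary.LTCP=org.apache.jcs.auxiliary.lateral.LateralCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.jcs.auxiliary.lateral.LateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TransmissionTypeName=TCP
jcs.auxiliary.LTCP.attributes.TcpServers=S1:1111,S2:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1111
```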


> 2) Does the lateral cache broadcast change messages
> right away,
> or is there a thread that wakes up periodically to
> send
> messages?  In other words, is there ever a
> possibility server B
> would serve stale data after the object was changed
> on server A,
> simply because the thread did not wake up yet to
> propagate the
> change?
> 

All events to disk, laterals, remotes, etc. are put
into event queues.  There are two options for the
queues.  The standard option is that every queue has
its own devoted worker thread, which dies after 60
seconds of inactivity and is started again when
something gets put in the queue.  While alive it tries
to get stuff out of the queue.  If you put something
in, it will be pulled out as soon as possible.  The
other model is to configure a thread pool to be used
by the queue.  (I won't tell you how to do this right
now.  You don't really need to know.)
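
As an illustrative sketch (not the JCS source) of that
standard per-queue worker model, assuming a plain
blocking queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the per-queue worker model described above: each event queue
// owns a single worker thread that drains events as soon as they arrive
// and dies after 60 seconds of inactivity; the next put spawns a fresh one.
class EventQueueSketch {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private Thread worker; // guarded by synchronized(this)

    public synchronized void put(Runnable event) {
        queue.add(event);
        if (worker == null || !worker.isAlive()) {
            worker = new Thread(this::drain, "cache-event-queue");
            worker.setDaemon(true);
            worker.start();
        }
    }

    private void drain() {
        try {
            Runnable event;
            // poll() returns null after 60 idle seconds, ending this worker.
            // (A real implementation must also handle the race where an event
            // arrives just as the worker decides to exit.)
            while ((event = queue.poll(60, TimeUnit.SECONDS)) != null) {
                event.run(); // processed as soon as possible
            }
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }
}
```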


> 3) Similar to (2), does the lateral cache have a
> listener, or is
> it the type that wakes up periodically to check if
> there are
> messages waiting?  Put another way, does it have a
> push or a
> pull model?
> 

Incoming messages are queued just like outgoing.  They
are processed as soon as they come in.

> 4) Is there a way to set up the lateral caches to
> only send
> invalidate messages but not serialize the objects
> and send them?
> 

I think so.  The remote cache has this.  I'll have to
check.  It is fairly easy to add the feature if not.

> 5) I will be using this cache with Hibernate.  For
> some reason
> the Hibernate project deprecated the use of JCS as a
> secondary
> cache, and the new Hibernate 3.0 release does not
> even include
> the JCS plugin at all, but I see no reason why not
> to, with all
> the bug fixes lately.  Does anyone know why not to?

They deprecated it based on bugs in a 2 or 3 year old
version.  (I'll hold my tongue here.)
 
> The only
> reason I saw not to use the JCS distributed cache
> functionality
> with Hibernate is since JCS does not support locking
> and
> transactions, but I am not so concerned about this. 

It would be madness to use JBoss Cache in a big
cluster in locking mode.  Using JGroups for
distributed locking is not scalable.

It also adds all sorts of complexity for puts.  Each
object is broken down into primitive elements.  Using
AOP they intercept all changes to the objects in the
cache, so if you only modify a small field of a large
object, just that field needs to be sent over the
wire.  This solves an uncommon problem in a way that
has costs for the common situation where you have a
modest-size object.

Also, if you need that degree of data integrity, you
don't need a cache, you need a database.

> Is there
> any other reason?  (By the way, since the plugin
> code no longer
> exists on the new version of Hibernate, perhaps it
> should be
> moved to be part of the jcs code base so JCS could
> continue to
> be used with Hibernate?)

You write the plugin, send it to us, and I'll put it
in a plugin jar along with the struts plugin. 

> 
> 6) New question: the config idea with JNDI is
> interesting but I
> am not sure it is possible using Weblogic
> clustering.  With
> Weblogic clustering you would deploy a war file on
> just one
> server and Weblogic would then automatically deploy
> it on all
> the servers in the cluster for you.  There is only
> one place to
> configure all the servers.  I am not sure if there
> is a way to
> specify a different value for each server, and you
> would
> actually have only one console.  Does anyone have
> any experience
> with this, and know of a way to specify a different
> value for
> each node in a cluster?
> 

You can definitely set different values for different
servers.

Either way, I think the remote cache is a better
option.  It solves your configuration problems, since
all the local caches can have the same settings.  It
does require that you run a separate process, but it
is a better model overall. 
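
As a sketch of that setup (host, port, and class names
are assumptions drawn from old JCS examples; verify
against your version's docs), the same cache.ccf could
then be deployed unchanged on every node:

```properties
# Hypothetical fragment: every node ships this identical file and points at
# one shared remote cache server, so no per-server configuration is needed.
jcs.region.testCache=RC
jcs.auxiliary.RC=org.apache.jcs.auxiliary.remote.RemoteCacheFactory
jcs.auxiliary.RC.attributes=org.apache.jcs.auxiliary.remote.RemoteCacheAttributes
jcs.auxiliary.RC.attributes.RemoteHost=cacheserver.example.com
jcs.auxiliary.RC.attributes.RemotePort=1102
```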

Aaron

> Thanks in advance,
> Daniel




Re: JCS Setup Questions

Posted by Daniel Rosenbaum <dr...@yahoo.com>.
Thanks Aaron for your insight.  You answered some of my
questions, and I now have a better picture how to set this all
up, but the following questions still remain.  Could you or
someone else please be kind enough to respond?

1) How does the "listener" get started for a lateral cache, and
what sets up the binding of the port, etc.?  Is it a separate
process or
does it start a new thread when the cache is initialized in the
web app?

2) Does the lateral cache broadcast change messages right away,
or is there a thread that wakes up periodically to send
messages?  In other words, is there ever a possibility server B
would serve stale data after the object was changed on server A,
simply because the thread did not wake up yet to propagate the
change?

3) Similar to (2), does the lateral cache have a listener, or is
it the type that wakes up periodically to check if there are
messages waiting?  Put another way, does it have a push or a
pull model?

4) Is there a way to set up the lateral caches to only send
invalidate messages but not serialize the objects and send them?

5) I will be using this cache with Hibernate.  For some reason
the Hibernate project deprecated the use of JCS as a secondary
cache, and the new Hibernate 3.0 release does not even include
the JCS plugin at all, but I see no reason why not to, with all
the bug fixes lately.  Does anyone know why not to?  The only
reason I saw not to use the JCS distributed cache functionality
with Hibernate is since JCS does not support locking and
transactions, but I am not so concerned about this.  Is there
any other reason?  (By the way, since the plugin code no longer
exists on the new version of Hibernate, perhaps it should be
moved to be part of the jcs code base so JCS could continue to
be used with Hibernate?)

6) New question: the config idea with JNDI is interesting but I
am not sure it is possible using Weblogic clustering.  With
Weblogic clustering you would deploy a war file on just one
server and Weblogic would then automatically deploy it on all
the servers in the cluster for you.  There is only one place to
configure all the servers.  I am not sure if there is a way to
specify a different value for each server, and you would
actually have only one console.  Does anyone have any experience
with this, and know of a way to specify a different value for
each node in a cluster?

Thanks in advance,
Daniel






Re: JCS Setup Questions

Posted by Aaron Smuts <as...@yahoo.com>.
Hi Daniel.

A removeAll sends one message, not a message for each
item.  

JGroups is fine, but it is slower than the other
options.  

I'd use tcp lateral connections or the remote rmi
server.

If you use the tcp lateral, then you need to specify
the servers a cache should connect to.  If you have 3
servers, A, B, and C, then A should point to B and C, 
B should point to A and C, . . .

The problem is that this would require that you have a
different war for each app.  

There is a solution.  Use JNDI and a startup servlet. 
Set the server list as a value in application context
through the container.  Make a startup servlet that
configures JCS based on a properties object.  Load the
cache.ccf file and change the values you need.  Then
configure JCS with this, using the
CompositeCacheManager. 

This way you can deploy the same war to multiple
servers.  
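
A minimal sketch of that startup step, assuming the
override value comes from JNDI or the application
context (here it is just passed in; the auxiliary name
"LTCP" is the one used in this thread and may differ
in your setup):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Load the shared cache.ccf from the classpath, then override the lateral
// server list with a per-server value.  The resulting Properties object
// would then be handed to JCS, e.g. via
// CompositeCacheManager.getUnconfiguredInstance().configure(props).
class JcsConfigLoader {
    public static Properties load(String classpathResource, String tcpServers)
            throws IOException {
        Properties props = new Properties();
        try (InputStream in =
                JcsConfigLoader.class.getResourceAsStream(classpathResource)) {
            if (in != null) {
                props.load(in); // start from the shared cache.ccf settings
            }
        }
        if (tcpServers != null) {
            // Per-server override, e.g. read from JNDI by a startup servlet.
            props.setProperty(
                "jcs.auxiliary.LTCP.attributes.TcpServers", tcpServers);
        }
        return props;
    }
}
```

With this, the identical war can be deployed on every
server and only the container-supplied value differs
per node.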


Aaron

--- Daniel Rosenbaum <dr...@yahoo.com>
wrote:

> Hello,
> 
> JCS seems to have come a long way since about a year
> ago.  Could
> I assume most of the bugs were fixed that the
> Hibernate project
> was reporting?
> 
> Anyhow, I am thinking about using JCS for its
> clustered cache
> capability.  I hope the community would not mind
> giving me a few
> pointers how to set this up properly, and some
> insight as to
> what configuration options are best for me.
> 
> I have a web app running on Weblogic that is
> currently clustered
> on two servers but may go up to 5 servers or more. 
> The data I
> cache is mostly read only but changes occasionally. 
> I don't
> care so much about sharing data between servers but
> am more
> concerned about not serving stale data, so I would
> be happy just
> to send an invalidate message to the rest of the
> servers on an
> element change so they would not serve stale data.
> 
> I am trying to make heads or tails of the docs but
> find them
> difficult to understand.  As far as I could tell
> though a
> lateral cache would suit my needs best.  
> 
> 1) Since I am not so concerned about sharing the
> data I figure a
> remote cache is not so important, plus it seems for
> a remote
> cache I would need to start another process besides
> for the web
> servers.  This seems like overkill.  Am I correct
> though that
> for a lateral cache I would not need to start
> another process? 
> But if so I am confused how the "listener" gets
> started for a
> lateral cache, what sets up binding of the port etc.
>  The doc
> was not clear on this.
> 
> 2) I am unclear how I would specify servers in the
> properties
> files.  Could I use the same .ccf file for each
> server, or do I
> need to create a separate one for each server.  For
> example, say
> my servers as S1 and S2.  Could I simply put:
> 
>
jcs.auxiliary.LTCP.attributes.TcpServers=S1:1111,S2:1112
> 
> and include this .ccf in the war file running on
> both servers,
> or do I need to create one .ccf file with:
> 
> jcs.auxiliary.LTCP.attributes.TcpServers=S2:1112
> 
> to be in S1's war file and 
> 
> jcs.auxiliary.LTCP.attributes.TcpServers=S1:1112
> 
> to be in S2's war file?  And if I could just have
> one .ccf file
> for both servers, wouldn't this result in the server
> sending a
> message to itself (besides for the message sent to
> the other
> server) which it would then need to handle?  This
> seems
> wasteful.  On the other hand, I would not want to
> have to create
> a separate war file to be deployed for each server
> with a
> different setting in the .ccf file.  I have a
> feeling I just may
> be misunderstanding how this all works.
> 
> 3) Any advice what transmission mechanism to use for
> the lateral
> cache?  I am leaning towards using JGroups for the
> reliability. 
> Just a question about this though, does JGroups send
> the
> invalidate messages right away?  I think I remember
> reading
> somewhere that JGroups channels only work on a pull
> method, so
> servers only wake up every so often to get new
> messages, but
> messages are not received instantaneously.  Is this
> at all true?
>  I hope not, as I would want invalidate messages to
> reach all
> the servers as soon as data is changed.  If this is
> true, any
> advice how else to configure the lateral cache so
> that messages
> would be sent and received right away?
> 
> 4) Is there a way to set up the lateral caches to
> only send
> invalidate messages but not serialize the objects
> and send them?
>  My objects sometimes get pretty large so I would
> not want to
> incur the expense of this.  I'd rather servers get
> their own
> copies in the uncommon event more than one user
> needs the same
> data.  (most users in my app do not access the same
> data, but
> one user typically would access the same data over
> and over
> again in their web requests, and I am using a sticky
> load
> balancer so most of the time a user would hit the
> same server.)
> 
> 5) I will be using this cache with Hibernate.  For
> some reason
> the Hibernate project deprecated the use of JCS as a
> secondary
> cache, but I see no reason why not to, with all the
> bug fixes
> lately.  Does anyone know why not to?  The only
> thing I saw is
> not to use the JCS distributed cache functionality
> with
> Hibernate since JCS does not support locking, but I
> am not so
> concerned about this.  Is there any other reason?
> 
> 6) Am I correct to assume that if I call
> cache.removeAll() that
> all the elements in that cache region will be
> cleared on all the
> servers through the lateral cache?  Is this also
> efficient, say
> only one network message to clear the entire cache,
> or would JCS
> have to create a separate message for each element
> in the cache?
> 
> 7) Any other advice or alternative setups?
> 
> I know these are a lot of questions but I would
> appreciate any
> help.  Once I understand this better, in return I
> would like to
> give back to to community by perhaps contributing
> better
> documentation, or writing a proper user's guide.
> 
> Thank you,
> Daniel Rosenbaum
> 
> 
> 
> 

