Posted to general@lucene.apache.org by sunnyfr <jo...@gmail.com> on 2009/04/10 12:18:02 UTC

cache management with frequent updates

Hi,

I would like to run frequent updates on my master/slave servers.
I have several options:

* the master replicates to the slave servers:
- during replication the CPU goes mad, because the whole index folder gets
pulled over again; segments are merged too frequently.

So my other option would be:
* all my servers are masters and update themselves.

But the problem with both solutions is the cache warmup: it takes too long.

What do people do when they have frequent updates? Do they turn off their
servers during the warmup?
What is the solution, should I lower the autowarm settings?
My updates run every 30 minutes.
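
For reference, the warmup knobs I mean are the autowarm settings and the
warming queries in solrconfig.xml, something like this (values illustrative,
not my actual config):

  <!-- smaller autowarmCount = less work copied from the old cache
       into the new searcher's cache on each commit -->
  <filterCache class="solr.LRUCache"
               size="16384"
               initialSize="4096"
               autowarmCount="256"/>

  <!-- explicit warming queries also run for every new searcher;
       fewer and cheaper queries here mean a faster warmup -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">a popular query</str><str name="rows">10</str></lst>
    </arr>
  </listener>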

Thanks for your help,
Sunny



Re: cache management with frequent updates

Posted by Ted Dunning <te...@gmail.com>.
At Veoh, we pulled machines from the pool during updates, but we did the
updates by copying entirely new indexes to the machines.  We might have been
able to avoid pulling the machines from the pool if we had used better I/O
scheduling, but we were very risk averse by that time.
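
(By "better I/O scheduling" I mean something as simple as throttling the
copy so it does not starve query traffic, e.g. with rsync; the paths and
the bandwidth cap below are illustrative:)

  # cap the index copy at ~10 MB/s (--bwlimit is in KB/s) so the
  # serving node keeps disk and network headroom for queries
  rsync -a --bwlimit=10000 master:/data/index/ /data/index.new/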

My current system instead uses elastic computing based on Katta.  When doing
a full update, we just launch an entirely new search cluster and then flip
traffic to it.  Nodes download shards autonomously from HDFS.  Updates in
the form of additions (our dominant form) are propagated by simply adding a
new shard and letting Katta distribute it (see the sketch below).  This
seems to work *much* better than the Solr-based system we had at Veoh.
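
Roughly, publishing a new shard looks like this (from memory, so check the
Katta docs for the exact CLI arguments; names and paths are illustrative):

  # push the freshly built Lucene index into HDFS
  hadoop fs -put /local/build/shard-042 hdfs://namenode/katta/shard-042

  # tell Katta about it; Katta then replicates the shard across the
  # nodes (the last argument is the desired replication level)
  bin/katta addIndex shard-042 hdfs://namenode/katta/shard-042 3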

On Fri, Apr 10, 2009 at 3:18 AM, sunnyfr <jo...@gmail.com> wrote:

> What do people do when they have frequent updates? Do they turn off their
> servers during the warmup?
> What is the solution, should I lower the autowarm settings?
>



-- 
Ted Dunning, CTO
DeepDyve