Posted to solr-user@lucene.apache.org by Renz Daluz <re...@gmail.com> on 2009/08/03 08:20:34 UTC

Re: Lock timed out 2 worker running

Hi Chris,

Sorry for the very late reply. As a workaround, we set the lock type to
single and turned off one of our workers. To answer your questions,
please see below:

2009/7/17 Chris Hostetter <ho...@fucit.org>

>
> This is really odd.
>
> Just to clarify...
> 1) you are running a normal solr installation (in a servlet
>   container) and using SolrJ to send updates to Solr from another
>   application, correct?

Yep, we are running an out-of-the-box Solr installation using Tomcat as the
servlet container. Both of our index workers use SolrJ to send updates to Solr.


>
> 2) Do you have any special custom plugins running

Nope, everything is out-of-the-box.

>
> 3) do you have any other apps that might be attempting to access the index
>   directly?

Actually, there is a third app (an instance of the index worker, but with
not all functionality enabled). It only sends delete requests to Solr, but
via SolrJ as well. And I double-checked that all these workers are hitting
the same Solr base URL.

>
> 4) what OS are you using? ... what type of filesystem? (local disk or some
>   shared network drive)

CentOS 5.2, local disk.

>
> 5) are these errors appearing after Solr crashes and you restart it?


Yep. I can't find the logs, but it's something like "can't obtain lock for
<somefile>.lck". We need to delete that file in order to start Solr properly.


>
> 6) what version of Solr are you using?


The latest 1.3.0 release.

>
>
> No matter how many worker threads you have, there should only be one
> IndexWriter using the index/lockfile from Solr ... so this error should
> really never happen in normal usage.


I'm not sure what you mean by normal usage. But aside from the 2 workers (or
3), we are running rsync and snapshooter every 30 seconds, and on the slave
we are running snappuller every 30 seconds as well. This is a requirement to
pick up the latest changes right away.
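
For context on the replication setup above: in a stock Solr 1.3 install,
snapshooter is typically wired up as a postCommit event listener in
solrconfig.xml, while snappuller runs from cron on the slave. A minimal
sketch of the master-side hook (the exe/dir values here are illustrative
and depend on the install layout):

```xml
<!-- solrconfig.xml on the master: run snapshooter after each commit.
     exe/dir are illustrative; adjust to where the bin scripts live. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```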

Thanks,
/Laurence

>
>
>
> : Jul 10, 2009 4:01:55 AM org.apache.solr.common.SolrException log
> : SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/projects/msim/indexdata/data/index/lucene-0614ba206dd0e0871ca4eecf8f2e853a-write.lock
> :   at org.apache.lucene.store.Lock.obtain(Lock.java:85)
> :   at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
> :   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:938)
> :   at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:116)
> :   at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
> :   at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
> :   at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
> :   at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
> :   at org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
> :   at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
> :   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> :   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
> :   at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
> :   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
> :   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
> :   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
> :   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
> :   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
> :   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> :   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
> :   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
> :   at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:542)
> :   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151)
> :   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:870)
> :   at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
> :   at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
> :   at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
> :   at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
> :   at java.lang.Thread.run(Thread.java:619)
>
>
>
> -Hoss
>
>
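
As background for the trace above: Lucene's SimpleFSLock is, at its core, an
atomic create of a lock file on disk, retried until a timeout, after which
LockObtainFailedException is thrown. The following stdlib-only sketch (not
Lucene's actual code; the class name, file name, and timeouts are made up for
illustration) shows how a lock file left behind by a killed JVM makes every
later obtain attempt time out:

```java
import java.io.File;
import java.io.IOException;

public class SimpleFSLockSketch {

    // Roughly what SimpleFSLock does: try to atomically create the lock
    // file, retrying until the timeout elapses.  If the file is still
    // there (e.g. left behind by a killed JVM), obtain() fails, which is
    // where Lucene throws LockObtainFailedException.
    public static boolean obtain(File lockFile, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        do {
            if (lockFile.createNewFile()) {
                return true;          // we own the lock now
            }
            Thread.sleep(50);         // lock file exists: wait and retry
        } while (System.currentTimeMillis() < deadline);
        return false;                 // timed out, as in the trace above
    }

    public static void main(String[] args) throws Exception {
        File lock = new File(System.getProperty("java.io.tmpdir"), "sketch-write.lock");
        lock.delete();                // start clean

        System.out.println("first obtain:  " + obtain(lock, 200));  // true
        // Simulate a crash: the JVM dies without deleting the lock file,
        // so the next writer finds a stale lock and times out.
        System.out.println("second obtain: " + obtain(lock, 200));  // false

        lock.delete();                // the manual cleanup described in the thread
    }
}
```

This is also why deleting the leftover .lck file by hand "fixes" the startup
error: nothing was actually holding the lock, only the stale file remained.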

Re: Lock timed out 2 worker running

Posted by renz052496 <re...@gmail.com>.
Yes, I misunderstood your question (re: the crash). Solr did not crash; we
shut down the JVM (Tomcat) gracefully after we killed all our workers. But
upon restarting, Solr just throws the error.
Regards,
/Renz

2009/8/11 Chris Hostetter <ho...@fucit.org>

>
> : > 5) are these errors appearing after Solr crashes and you restart it?
> :
> :
> : Yep. I can't find the logs, but it's something like "can't obtain lock
> : for <somefile>.lck". We need to delete that file in order to start Solr
> : properly.
>
> wait ... either you misunderstood my question, or you just explained
> what's happening.
>
> If you are using SimpleFSLock, and Solr crashes (OOM, kill -9, yank the
> power cord), then it's possible the lock file will get left around, in
> which case this is the expected behavior.  There's a config option you
> can set to tell Solr that on startup you want it to clean up any old lock
> files, but if you switch to the "single" lock manager mode your life gets
> a lot easier anyway.
>
> But you never mentioned anything about the server crashing in your
> original message, so I'm wondering if you really meant to answer "yep" when
> I asked "are these errors appearing *after* Solr crashes".
>
>
> -Hoss
>
>

Re: Lock timed out 2 worker running

Posted by Chris Hostetter <ho...@fucit.org>.
: > 5) are these errors appearing after Solr crashes and you restart it?
: 
: 
: Yep. I can't find the logs, but it's something like "can't obtain lock for
: <somefile>.lck". We need to delete that file in order to start Solr properly.

wait ... either you misunderstood my question, or you just explained
what's happening.

If you are using SimpleFSLock, and Solr crashes (OOM, kill -9, yank the
power cord), then it's possible the lock file will get left around, in
which case this is the expected behavior.  There's a config option you
can set to tell Solr that on startup you want it to clean up any old lock
files, but if you switch to the "single" lock manager mode your life gets
a lot easier anyway.
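
For later readers: in Solr 1.3 both of the settings described here live in
the <mainIndex> section of solrconfig.xml. A minimal sketch (shown out of
context; the surrounding config is omitted):

```xml
<!-- solrconfig.xml: use the in-JVM lock (safe when only one Solr instance
     writes the index), and clear any stale lock files on startup. -->
<mainIndex>
  <lockType>single</lockType>
  <unlockOnStartup>true</unlockOnStartup>
</mainIndex>
```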

But you never mentioned anything about the server crashing in your
original message, so I'm wondering if you really meant to answer "yep" when
I asked "are these errors appearing *after* Solr crashes".


-Hoss