Posted to solr-user@lucene.apache.org by Torsten Krah <tk...@fachschaft.imn.htwk-leipzig.de> on 2012/02/06 17:09:21 UTC

Commit call - ReadTimeoutException -> usage scenario for big update requests and the IOException case

Hi,

I wonder if it is possible to commit data to Solr without having to
catch socket read timeout exceptions.

I am calling commit(false, false) using a streaming server instance,
but I still have to wait > 30 seconds and catch the timeout from the
HTTP method.
It does not matter if it's 30 or 60; the call will fail whenever the
update request takes longer than the timeout to be processed. Or can I
tweak things here?
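
For illustration, this is roughly how the call looks on my side, with
the client-side timeouts raised as a workaround (a minimal SolrJ 3.x
sketch; the URL, queue size, and timeout values are placeholders, not
my real settings):

    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

    public class CommitTimeoutExample {
        public static void main(String[] args) throws Exception {
            // queue size 20, 4 background threads streaming the updates
            StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8983/solr", 20, 4);
            server.setConnectionTimeout(5000);    // ms, fail fast on connect
            server.setSoTimeout(10 * 60 * 1000);  // ms, give slow commits room

            // waitFlush=false, waitSearcher=false - but the HTTP response
            // still only arrives once the commit request itself has been
            // processed, so the socket timeout has to cover the whole commit.
            server.commit(false, false);
        }
    }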

So what's the way to go here? Is there any other option, or must I
catch those exceptions and carry on as I do now?
The operation itself does finish successfully later on, once it's done
on the server side, and everything is committed and searchable.


regards

Torsten

Re: Commit call - ReadTimeoutException -> usage scenario for big update requests and the IOException case

Posted by Torsten Krah <tk...@fachschaft.imn.htwk-leipzig.de>.
On 07.02.2012 15:12, Erick Erickson wrote:
> Right, I suspect you're hitting merges.

Guess so.

> How often are you
> committing?

One time, after all work is done.

> In other words, why are you committing explicitly?
> It's often better to use commitWithin on the add command
> and just let Solr do its work without explicitly committing.

Tika extracts my docs and I fetch the results (memory, disk)
externally.
If all went OK as expected, I take those docs and add them to my Solr
server instance.
After I am done with adds + deletes I do the commit: one commit for all
those docs, adding and deleting.
If something goes wrong before or between adding, updating or deleting
docs, I call rollback and everything is as before (I am doing the
update from one source only, so I can be sure that no one can call
commit in between).

commitWithin would break my ability to roll things back; that's why I
want to call commit explicitly here.
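
In code the flow is roughly this (a simplified sketch; the class,
method, and parameter names are made up for illustration):

    import java.util.List;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexUpdater {
        // One batch of adds + deletes, one explicit commit at the end;
        // roll back everything if any step fails.
        public static void update(SolrServer server,
                                  List<SolrInputDocument> docs,
                                  List<String> staleIds) throws Exception {
            try {
                server.add(docs);
                server.deleteById(staleIds);
                server.commit(false, false);  // single commit for the batch
            } catch (Exception e) {
                server.rollback();  // undo all uncommitted adds and deletes
                throw e;
            }
        }
    }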

>
> Going forward, this is fixed in trunk by the DocumentsWriterPerThread
> improvements.

Will this be backported to the upcoming 3.6 release?

>
> Best
> Erick
>
> On Mon, Feb 6, 2012 at 11:09 AM, Torsten Krah
> <tk...@fachschaft.imn.htwk-leipzig.de>  wrote:
>> Hi,
>>
>> I wonder if it is possible to commit data to Solr without having to
>> catch socket read timeout exceptions.
>>
>> I am calling commit(false, false) using a streaming server instance,
>> but I still have to wait > 30 seconds and catch the timeout from the
>> HTTP method.
>> It does not matter if it's 30 or 60; the call will fail whenever the
>> update request takes longer than the timeout to be processed. Or can
>> I tweak things here?
>>
>> So what's the way to go here? Is there any other option, or must I
>> catch those exceptions and carry on as I do now?
>> The operation itself does finish successfully later on, once it's
>> done on the server side, and everything is committed and searchable.
>>
>>
>> regards
>>
>> Torsten



Re: Commit call - ReadTimeoutException -> usage scenario for big update requests and the IOException case

Posted by Erick Erickson <er...@gmail.com>.
Right, I suspect you're hitting merges. How often are you
committing? In other words, why are you committing explicitly?
It's often better to use commitWithin on the add command
and just let Solr do its work without explicitly committing.
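
For example, something along these lines (a sketch; the 10-second
window is arbitrary):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.request.UpdateRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitWithinExample {
        public static void add(SolrServer server, SolrInputDocument doc)
                throws Exception {
            UpdateRequest req = new UpdateRequest();
            req.add(doc);
            req.setCommitWithin(10000);  // let Solr commit within 10s itself
            req.process(server);         // no explicit commit from the client
        }
    }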

Going forward, this is fixed in trunk by the DocumentsWriterPerThread
improvements.

Best
Erick

On Mon, Feb 6, 2012 at 11:09 AM, Torsten Krah
<tk...@fachschaft.imn.htwk-leipzig.de> wrote:
> Hi,
>
> I wonder if it is possible to commit data to Solr without having to
> catch socket read timeout exceptions.
>
> I am calling commit(false, false) using a streaming server instance,
> but I still have to wait > 30 seconds and catch the timeout from the
> HTTP method.
> It does not matter if it's 30 or 60; the call will fail whenever the
> update request takes longer than the timeout to be processed. Or can
> I tweak things here?
>
> So what's the way to go here? Is there any other option, or must I
> catch those exceptions and carry on as I do now?
> The operation itself does finish successfully later on, once it's
> done on the server side, and everything is committed and searchable.
>
>
> regards
>
> Torsten