Posted to solr-user@lucene.apache.org by Chris Brown <cb...@infoblox.com> on 2012/11/02 19:01:29 UTC

unable to create new native thread while importing

I'm having a problem importing data into Solr 4.0 (the same error happens
in 3.6.1).  Here is the error I get:

2012-11-02 09:50:07.265:WARN:oejs.AbstractConnector:
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:658)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:436)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.dispatch(QueuedThreadPool.java:361)
        at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.dispatch(SocketConnector.java:212)
        at org.eclipse.jetty.server.bio.SocketConnector.accept(SocketConnector.java:116)
        at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:933)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
        at java.lang.Thread.run(Thread.java:680)

This error occurs after approximately 344k documents imported using 4100
calls and containing approximately 40 MB (raw XML, so the data is
smaller).  The full import will be approximately 1300x this size if I'm
able to finish it.  I'm importing using Java's HttpURLConnection and my
imports look something like this:

(data in the name column is redacted but contains a 7-bit-clean string in
this example)

POST http://172.31.1.127:8983/solr/
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<add>
  <doc>
    <field name="id">3841</field>
    <field name="name">...</field>
  </doc>
  <doc>
    <field name="id">3842</field>
    <field name="name">...</field>
  </doc>
    ...etc...
</add>

There is a single import HttpURLConnection - I have multiple threads and
they're mutexing on the connection - and the client seems to operate fine
until the server throws this error; then the client pauses until it times
out, then tries again and generates more OutOfMemoryErrors.  Also, as far
as I can tell, the documents that appear to have been imported never get
indexed.
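The threading pattern looks roughly like this (a heavily simplified sketch:
the class and method names are illustrative, and post() is a stub standing
in for the real HTTP call):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ImportMutexExample {
    private static final Object LOCK = new Object();
    // Counts batches for demonstration; the real code would POST to Solr.
    static final AtomicInteger batchesSent = new AtomicInteger();

    // Stand-in for the actual HTTP POST over the shared connection.
    static void post(String xmlBatch) {
        batchesSent.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                for (int b = 0; b < 10; b++) {
                    String batch = "<add><doc><field name=\"id\">" + id + "-" + b
                            + "</field></doc></add>";
                    synchronized (LOCK) {  // only one POST in flight at a time
                        post(batch);
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println("batches sent: " + batchesSent.get());  // 40
    }
}
```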

The configuration being used is the one in the solr example folder.

How should I do my import into Solr?  I've seen references to changing
the autoCommit settings, which I've tried to no effect.  I also found
mention of a similar problem with the 4.0 Alpha ConcurrentUpdateSolrServer,
but since I'm not sure how to change this I haven't tried it
(http://www.searchworkings.org/forum/-/message_boards/view_message/489575).
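For reference, this is roughly the kind of autoCommit fragment in
solrconfig.xml's update handler I was experimenting with (the threshold
values here are illustrative only):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Commit automatically once either threshold is reached. -->
  <autoCommit>
    <maxDocs>10000</maxDocs>  <!-- commit after this many pending docs -->
    <maxTime>15000</maxTime>  <!-- ...or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```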

Thanks,
Chris...


Re: unable to create new native thread while importing

Posted by Chris Brown <cb...@infoblox.com>.
Thanks for that, I didn't know I could see the thread dump so easily.
That does appear to have been the problem - I wasn't draining my input
stream, and the underlying API held the connection open-and-not-reusable
until garbage collection.
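A sketch of what the fix looks like in client code (the names, the
/solr/update path, and the tiny stand-in server are illustrative
assumptions, not the actual import code): the key is reading the response
body to the end and closing it, so HttpURLConnection can release or reuse
the underlying socket instead of pinning it until GC.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SolrPostExample {
    // POSTs one XML batch and, crucially, drains and closes the response
    // stream so the connection is not held open-and-not-reusable.
    static int postBatch(String endpoint, String xml) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();
        // Drain the body completely; skipping this is the bug described above.
        try (InputStream in = (status < 400) ? conn.getInputStream()
                                             : conn.getErrorStream()) {
            byte[] buf = new byte[8192];
            while (in != null && in.read(buf) != -1) {
                // discard; we only care that the stream is consumed
            }
        }
        return status;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for Solr: a tiny local server that answers every POST.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/solr/update", exchange -> {
            byte[] body = "<response/>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        String endpoint = "http://127.0.0.1:" + server.getAddress().getPort()
                + "/solr/update";
        int status = postBatch(endpoint,
                "<add><doc><field name=\"id\">3841</field></doc></add>");
        System.out.println("status=" + status);
        server.stop(0);
    }
}
```

Without the drain, each request can leave a socket (and a server-side
acceptor thread) tied up until the connection object is garbage collected,
which matches the symptom above.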

Chris...

On 12-11-02 1:05 PM, "Alexandre Rafalovitch" <ar...@gmail.com> wrote:

>Have you tried doing a thread dump and seeing how many threads you have
>and what they are doing?  Maybe a connection is not being closed somehow.
>
>Regards,
>   Alex.
>
>Personal blog: http://blog.outerthoughts.com/
>LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
>- Time is the quality of nature that keeps events from happening all at
>once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)
>
>
>On Fri, Nov 2, 2012 at 2:01 PM, Chris Brown <cb...@infoblox.com> wrote:


Re: unable to create new native thread while importing

Posted by Alexandre Rafalovitch <ar...@gmail.com>.
Have you tried doing a thread dump and seeing how many threads you have and
what they are doing?  Maybe a connection is not being closed somehow.
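One quick way to get counts and states from inside the JVM (running
jstack against the process from a shell works too) is the ThreadMXBean
API; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpExample {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // A connection leak shows up as a live count climbing toward the
        // OS per-process thread limit.
        System.out.println("live: " + mx.getThreadCount()
                + "  peak: " + mx.getPeakThreadCount());
        // Print every thread's name, state, and top frame; a leak usually
        // appears as many threads blocked in the same place.
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            StackTraceElement[] stack = info.getStackTrace();
            System.out.println(info.getThreadName() + " [" + info.getThreadState()
                    + "] " + (stack.length > 0 ? stack[0] : "(no frames)"));
        }
    }
}
```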

Regards,
   Alex.

Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)


On Fri, Nov 2, 2012 at 2:01 PM, Chris Brown <cb...@infoblox.com> wrote:
