Posted to solr-user@lucene.apache.org by Mikhail Khludnev <mk...@apache.org> on 2019/02/12 14:08:39 UTC

Re: unable to create new threads: out-of-memory issues

Hello, Martin.
How do you index? Where did you get this error?
Usually it occurs in custom code with many new Thread() calls and is usually
healed with thread pooling.
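
For illustration, a minimal sketch of the pooling approach, assuming the
indexing code currently starts a new Thread per file; the pool size, the
folder path and indexOneFile() are placeholders rather than anything from
your setup:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class PooledIndexer {
    public static void main(String[] args) throws Exception {
        // Bound the number of worker threads instead of starting one Thread per file.
        ExecutorService pool = Executors.newFixedThreadPool(8);   // 8 is illustrative
        try (Stream<Path> files = Files.walk(Paths.get("/path/to/folder"))) {  // placeholder path
            files.filter(Files::isRegularFile)
                 .forEach(f -> pool.submit(() -> indexOneFile(f)));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);   // wait for queued work to finish
    }

    static void indexOneFile(Path f) {
        // placeholder for the existing per-file indexing code
    }
}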

On Tue, Feb 12, 2019 at 3:25 PM Martin Frank Hansen (MHQ) <MH...@kmd.dk>
wrote:

> Hi,
>
> I am trying to create an index on a small Linux server running Solr-7.5.0,
> but keep running into problems.
>
> When I try to index a file-folder of roughly 18 GB (18000 files) I get the
> following error from the server:
>
> java.lang.OutOfMemoryError: unable to create new native thread.
>
> From the server I can see the following limits:
>
> User$ ulimit -a
> core file size                                 (blocks, -c) 0
> data seg size                                 (kbytes, -d) unlimited
> scheduling priority                     (-e) 0
> file size                                           (blocks, -f) unlimited
> pending signals                          (-i) 257568
> max locked memory                 (kbytes, -l) 64
> max memory size                      (kbytes, -m) unlimited
> open files                                    (-n) 1024
> pipe size                                       (512 bytes, -p) 8
> POSIX message queues            (bytes, -q) 819200
> real-time priority                      (-r) 0
> stack size                                      (kbytes, -s) 8192
> cpu time                                       (seconds, -t) unlimited
> max user processes                  (-u) 257568
> virtual memory                          (kbytes, -v) unlimited
> file locks                                      (-x) unlimited
>
> I do not see any limits on threads, only on open files.
>
> I have added an autoCommit of a maximum of 1000 documents, but that did not
> help. How can I increase the thread limit, or is there another way of
> solving this issue? Any help is appreciated.
>
> Best regards
>
> Martin
>


-- 
Sincerely yours
Mikhail Khludnev

Re: unable to create new threads: out-of-memory issues

Posted by Erick Erickson <er...@gmail.com>.
Absolutely increase the file limit before going down other avenues. I
recommend 65K. This is because I've spent waaaaay more time than I
want to think about finding out that this is the problem, as it can pop
out in unexpected ways, ways that are totally _not_ obvious.

It's one of those things that you can do in 5 minutes and if it's
_not_ the problem no harm done, and if it _is_ the problem it'll be
good for your blood pressure ;)
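
For reference, a minimal sketch of raising the limits persistently, assuming
Solr runs as a user named solr and logins go through PAM; the user name and
the 65000 value are illustrative, and on a systemd-managed service you would
set LimitNOFILE/LimitNPROC in the unit file instead:

# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
solr  soft  nofile  65000
solr  hard  nofile  65000
solr  soft  nproc   65000
solr  hard  nproc   65000

Then log in again, restart Solr, and verify:

ulimit -n    # open files, should now report 65000
ulimit -u    # max user processes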

Best,
Erick

On Tue, Feb 12, 2019 at 8:38 AM Walter Underwood <wu...@wunderwood.org> wrote:
>
> Create one instance of HttpSolrClient and reuse it. It is thread-safe. It also keeps a connection pool, so reusing the same one will be faster.
>
> Do you really need atomic updates? Those are much slower because they have to read the document before updating.
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)

Re: unable to create new threads: out-of-memory issues

Posted by Walter Underwood <wu...@wunderwood.org>.
Create one instance of HttpSolrClient and reuse it. It is thread-safe. It also keeps a connection pool, so reusing the same one will be faster.

Do you really need atomic updates? Those are much slower because they have to read the document before updating.
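
A minimal sketch of that pattern, assuming SolrJ 7.x; the URL, collection
name and error handling are placeholders:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class Indexer {
    // Build the client once and share it; HttpSolrClient is thread-safe
    // and pools its HTTP connections internally.
    private static final SolrClient SOLR =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build(); // placeholder URL

    void index(SolrInputDocument doc) throws Exception {
        SOLR.add(doc);    // reuse the same client for every document
    }

    void shutdown() throws Exception {
        SOLR.commit();
        SOLR.close();     // close once, when indexing is finished
    }
}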

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 12, 2019, at 6:58 AM, Martin Frank Hansen (MHQ) <MH...@kmd.dk> wrote:
> 
> Hi Mikhail, 
> 
> Thanks for your help. I will try it. 


RE: unable to create new threads: out-of-memory issues

Posted by "Martin Frank Hansen (MHQ)" <MH...@kmd.dk>.
Hi Mikhail, 

Thanks for your help. I will try it. 

-----Original Message-----
From: Mikhail Khludnev <mk...@apache.org> 
Sent: 12. februar 2019 15:54
To: solr-user <so...@lucene.apache.org>
Subject: Re: unable to create new threads: out-of-memory issues

1. you can jstack <PID> to find it out.
2. It might create a thread, I don't know.
3. SolrClient is definitely a subject for heavy reuse.

--
Sincerely yours
Mikhail Khludnev

Re: unable to create new threads: out-of-memory issues

Posted by Mikhail Khludnev <mk...@apache.org>.
1. you can jstack <PID> to find it out.
2. It might create a thread, I don't know.
3. SolrClient is definitely a subject for heavy reuse.
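
For example, a rough way to count the Java-level threads from a dump,
assuming the JDK's jstack is on the PATH:

jstack <PID> | grep -c 'java.lang.Thread.State'

Each Java thread in the dump prints one java.lang.Thread.State line, so the
count approximates the number of live threads.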

On Tue, Feb 12, 2019 at 5:16 PM Martin Frank Hansen (MHQ) <MH...@kmd.dk>
wrote:

> Hi Mikhail,
>
> I am using SolrJ but I think I might have found the problem.
>
> I am doing an atomic update on existing documents, and found out that I
> create a new SolrClient for each document. I guess this is where all the
> threads are coming from. Is it correct that when creating a SolrClient, I
> also create a new thread?
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
>
> Thanks
>


-- 
Sincerely yours
Mikhail Khludnev

RE: unable to create new threads: out-of-memory issues

Posted by "Martin Frank Hansen (MHQ)" <MH...@kmd.dk>.
Hi Mikhail, 

I am using SolrJ but I think I might have found the problem.

I am doing an atomic update on existing documents, and found out that I create a new SolrClient for each document. I guess this is where all the threads are coming from. Is it correct that when creating a SolrClient, I also create a new thread?

SolrClient solr = new HttpSolrClient.Builder(urlString).build();
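
For what it's worth, a sketch of the same atomic "set" update going through
one client built once, outside the per-document loop; idsToUpdate and the
status_s field are made-up names (needs org.apache.solr.common.SolrInputDocument
and java.util.Collections):

SolrClient solr = new HttpSolrClient.Builder(urlString).build();   // build once, reuse for every document
for (String id : idsToUpdate) {                                     // idsToUpdate is a placeholder
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", id);
    doc.addField("status_s", Collections.singletonMap("set", "processed"));  // atomic "set" update; field name is made up
    solr.add(doc);
}
solr.commit();
solr.close();   // close once, when all updates are done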

Thanks 

-----Original Message-----
From: Mikhail Khludnev <mk...@apache.org> 
Sent: 12. februar 2019 15:09
To: solr-user <so...@lucene.apache.org>
Subject: Re: unable to create new threads: out-of-memory issues

Hello, Martin.
How do you index? Where did you get this error?
Usually it occurs in custom code with many new Thread() calls and is usually healed with thread pooling.



--
Sincerely yours
Mikhail Khludnev