Posted to user@spark.apache.org by Mohamed Nadjib MAMI <ma...@iai.uni-bonn.de> on 2016/02/05 10:42:34 UTC

Too many open files: why is changing ulimit not taking effect?

Hello all,

I'm getting the famous java.io.FileNotFoundException: ... (Too many
open files) exception. What seems to have helped other people has not
worked for me. I tried to set the limit from the command line with
"ulimit -n", then I tried to add the following lines to the
"/etc/security/limits.conf" file:

* - nofile 1000000
root soft nofile 1000000
root hard nofile 1000000
hduser soft nofile 1000000
hduser hard nofile 1000000

...then I added the line "session required pam_limits.so" to the two
files "/etc/pam.d/common-session" and "session required
pam_limits.so". Then I logged out and logged back in. First I tried
only the first line (* - nofile 1000000), then added the 2nd and 3rd
lines (root...), then the last two lines (hduser...), with no effect.
Weirdly enough, when I check with the command "ulimit -n" it returns
the correct value of 1000000.

I then added "ulimit -n 1000000" to "spark-env.sh" on the master and
on each of my workers, still with no effect.
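
(For reference, the limit that a running worker JVM itself sees, as
opposed to my shell's, can be checked with something like the
following; the pgrep pattern is only a guess for a standalone worker
and <worker-pid> is a placeholder:)

pgrep -f org.apache.spark.deploy.worker.Worker   # find the worker JVM's PID
grep 'open files' /proc/<worker-pid>/limits      # limits the process actually runs with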

What else could it be besides the ulimit setting? If it is only that,
what could cause Spark to ignore it?

I'd appreciate any help. Thanks in advance.

--
PhD Student - EIS Group - Bonn University, Germany.
+49 1575 8482232


Re: Too many open files: why is changing ulimit not taking effect?

Posted by Michael Diamant <di...@gmail.com>.
If you are using systemd, you will need to specify the limit in the service
file.  I had run into this problem and discovered the solution from the
following references:
* https://bugzilla.redhat.com/show_bug.cgi?id=754285#c1
* http://serverfault.com/a/678861
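
Concretely, if the worker is started by a systemd unit, a drop-in along
these lines should raise the limit for that service (the unit name
spark-worker.service below is only illustrative; use whatever unit
actually launches the daemon), followed by a daemon-reload and restart:

# /etc/systemd/system/spark-worker.service.d/limits.conf
[Service]
LimitNOFILE=1000000

# then:
systemctl daemon-reload
systemctl restart spark-worker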


Re: Too many open files: why is changing ulimit not taking effect?

Posted by Nirav Patel <np...@xactlycorp.com>.
For CentOS, there's also /etc/security/limits.d/90-nproc.conf, which may
need modification.

Services that you expect to use the new limits need to be restarted. The
simplest thing to do is to reboot the machine.
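
The files under /etc/security/limits.d/ use the same syntax as
limits.conf and are read after it, so an entry there can silently
override what limits.conf sets. A sketch of an explicit override (the
file name and values are only illustrative):

# /etc/security/limits.d/99-nofile.conf
*       soft    nofile    1000000
*       hard    nofile    1000000
hduser  soft    nofile    1000000
hduser  hard    nofile    1000000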



Re: Too many open files: why is changing ulimit not taking effect?

Posted by Ted Yu <yu...@gmail.com>.
bq. and "session required pam_limits.so".

What was the second file you modified?

Did you make the change on all the nodes?

Please see the verification step in
https://easyengine.io/tutorials/linux/increase-open-files-limit/
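
Since the limit has to be raised on every node that runs an executor, a
quick way to spot a node that was missed is something along these lines
(the host names are placeholders, and the result is only indicative,
since a non-interactive ssh command may not go through the same PAM
stack as a login shell):

for h in worker1 worker2 worker3; do
  echo -n "$h: "; ssh "$h" 'ulimit -Hn'
done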
