Posted to users@httpd.apache.org by Daryl King <al...@gmail.com> on 2015/08/21 15:49:39 UTC

[users@httpd] apache segfault debugging with gdb - need advice

I am running Apache 2.4.10 with mpm_event on a Debian 8 VPS. When I run
Siege against my setup it runs well, except for a segmentation fault at the
very end [child pid xxxx exit signal Segmentation fault (11)]. I ran GDB on
a core dump of the segfault and it returned this:
 [Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/apache2 -k start'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f53a4ac8add in read () at ../sysdeps/unix/syscall-template.S:81
81    ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb)]
I'm at a loss as to how to proceed with this, but am willing to keep digging
until I find the answer. Any advice appreciated.
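
In case it helps the next reader: the "No such file or directory" line only
means glibc's source isn't installed; the read() frame itself is rarely the
culprit. The usual next step on Debian 8 would be something like this
(package names from memory - check apt-cache for the exact ones on your
release):

  $ apt-get install apache2-dbg libapr1-dbg libaprutil1-dbg libc6-dbg
  $ gdb /usr/sbin/apache2 /path/to/core
  (gdb) bt full                # full backtrace of the crashing thread
  (gdb) thread apply all bt    # backtraces for every thread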

Re: [users@httpd] apache segfault debugging with gdb - need advice

Posted by R T <i....@gmail.com>.
> However, in my experience it is unusual for too low a limit on the number
> of open files to result in a segmentation fault, especially in a well
> written program like Apache HTTPD. A well written program will normally
> check whether open (or any syscall that returns a file descriptor)
> failed, and refuse to use the -1 value as if it were a valid file
> descriptor number. So I would be surprised if increasing that value
> resolved the segmentation fault.

Kurtis: I think Daryl's issue here is with the resource intensity of
siege, not so much his LAMP stack. But generally speaking I agree: tweaking
the ulimits is a hack and should not be necessary for most mature software.
That said, I have seen posts (not necessarily here) where people mention
raising the ulimit in the Apache environment to allow for higher
concurrency. Personally, I wouldn't do it in a production environment.
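
For completeness, on a systemd-based system like Debian 8 the usual way to
do that is a drop-in for the service unit rather than a shell ulimit. A
sketch, assuming the stock apache2.service name (the drop-in path and value
are illustrative):

  # /etc/systemd/system/apache2.service.d/limits.conf
  [Service]
  LimitNOFILE=65536

  $ systemctl daemon-reload && systemctl restart apache2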

> I have set my siege concurrency level a bit lower (20 users) and that
> seems to have resolved the segfault issue. It's strange that I hadn't read
> anywhere else that a lack of resources could cause that, but there it is.
> I guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just
> a bit too much to ask of my single-core 512 MB VPS?

Daryl: Siege is a hobby project of one individual (at least last I
checked), and while it is a valuable tool, it needs some optimization and
is highly resource intensive. It simulates each concurrent user with its
own thread and connection - this is where available memory and the ulimit
can become a factor. In my case, I ended up spinning up a separate
virtual machine with 16GB RAM just to run siege against my development LAMP
stack. I raised the limits in /etc/security/limits.conf to allow for
higher concurrency - to the point that in the end the ulimit was no longer
the issue: the server ran out of memory from all of the virtual users
spawned by siege.
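
If you want to watch that happen, something like this on the target while
siege runs will show memory draining in real time (a sketch; any memory
monitor works):

  $ watch -n 1 free -m    # refresh free/used memory every second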

It does sound as though you are tight on resources, though you could
certainly run your LAMP stack acceptably on that server depending on the
expected traffic load. If a light load is expected, you should be OK. But
if you are load testing with siege, I would definitely recommend running it
from a separate machine if you have the architecture available, since siege
and your LAMP stack will obviously compete for resources if you run both on
the same box.
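
A sketch of a remote run (the URL is a placeholder; -c sets the number of
concurrent users, -t the test duration):

  $ siege -c 20 -t 2M http://your-server.example/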


Cheers,

Ryan


Re: [users@httpd] apache segfault debugging with gdb - need advice

Posted by Daryl King <al...@gmail.com>.
I have set my siege concurrency level a bit lower (20 users) and that seems
to have resolved the segfault issue. It's strange that I hadn't read
anywhere else that a lack of resources could cause that, but there it is. I
guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just a
bit too much to ask of my single-core 512 MB VPS?

Re: [users@httpd] apache segfault debugging with gdb - need advice

Posted by Kurtis Rader <kr...@skepticism.us>.
On Fri, Aug 21, 2015 at 6:14 PM, Daryl King <al...@gmail.com>
wrote:

> Thanks, Ryan. Strangely, running "ulimit -n" returns 65536 in an SSH
> session but 1024 in Webmin. Which one is correct?
>

Limits set by the ulimit command (and the setrlimit syscall) are correct if
they are high enough to allow a correctly functioning program to perform
its task. They are incorrect if set too low for the needs of a correctly
functioning program or so high that a malfunctioning program is able to
adversely affect the functioning of other processes. So the answer to your
question is: it depends.
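
Note also that the limit is per-process: each process inherits it from
whatever started it, which is why an SSH login shell and Webmin can report
different numbers. The value that actually applies to httpd is the one on
the running process, which you can read from /proc - a sketch, assuming the
Debian binary name:

  $ grep 'Max open files' /proc/$(pgrep -o apache2)/limits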

Having said that, it is very unusual these days for "ulimit -n" to be set
too high. Supporting thousands of open files in a single process is
normally pretty cheap in terms of kernel memory, CPU cycles, etc. So if you
have reason to think your program (e.g., httpd) has a legitimate need for
more than 1024 simultaneously open files, go ahead and increase
"ulimit -n" (which sets the setrlimit RLIMIT_NOFILE parameter) to a higher
value.
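
For example (values are illustrative; an unprivileged process can raise its
soft limit only up to the hard limit):

  $ ulimit -Sn        # current soft limit on open files
  $ ulimit -Hn        # hard limit, the ceiling for unprivileged changes
  $ ulimit -n 65536   # raise the soft limit for this shell and its children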

However, in my experience it is unusual for too low a limit on the number
of open files to result in a segmentation fault, especially in a well
written program like Apache HTTPD. A well written program will normally
check whether open (or any syscall that returns a file descriptor)
failed, and refuse to use the -1 value as if it were a valid file
descriptor number. So I would be surprised if increasing that value
resolved the segmentation fault.

-- 
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank

Re: [users@httpd] apache segfault debugging with gdb - need advice

Posted by Daryl King <al...@gmail.com>.
Thanks, Ryan. Strangely, running "ulimit -n" returns 65536 in an SSH
session but 1024 in Webmin. Which one is correct?


Re: [users@httpd] apache segfault debugging with gdb - need advice

Posted by R T <i....@gmail.com>.
Hi Daryl,

Typically when I see a core dump while running siege, it is a resource
issue: out of memory, and/or I've hit the ulimit on my machine and need to
raise it. The default open-file limit is usually 1024 (displayed via
ulimit -n) and can be raised for the current shell with ulimit -n <value>.
That change isn't persistent; to make it permanent, edit
/etc/security/limits.conf.
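
A sketch of both (the values are placeholders; nofile is the open-files
limit used by pam_limits):

  $ ulimit -n 65536          # temporary, this shell only

  # /etc/security/limits.conf - persistent, applied at login by pam_limits
  *    soft    nofile    65536
  *    hard    nofile    65536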

I typically set it to something unrealistically high, and the machine will
always run out of memory before hitting the ulimit.


- Ryan
