Posted to dev@ambari.apache.org by "Dmytro Shkvyra (JIRA)" <ji...@apache.org> on 2013/12/12 18:06:09 UTC
[jira] [Updated] (AMBARI-4055) set core file size on hosts to get
core dump when JVM crashes
[ https://issues.apache.org/jira/browse/AMBARI-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dmytro Shkvyra updated AMBARI-4055:
-----------------------------------
Attachment: AMBARI-4055.patch
> set core file size on hosts to get core dump when JVM crashes
> -------------------------------------------------------------
>
> Key: AMBARI-4055
> URL: https://issues.apache.org/jira/browse/AMBARI-4055
> Project: Ambari
> Issue Type: Task
> Components: agent
> Reporter: Dmytro Shkvyra
> Assignee: Dmytro Shkvyra
> Fix For: 1.5.0
>
> Attachments: AMBARI-4055.patch
>
>
> We recently got a customer issue where the NameNode crashed due to an error in native code. Because the default ulimit for the core file size is zero, the customer could not get a core dump, which made the issue very hard to debug.
> As more native code is added to improve system performance, we should expect more JVM crashes caused by errors in that code before it eventually stabilizes.
> We would like to set the core file size to unlimited on hosts running the NameNode and DataNode, or on any host where native code is invoked.
> Let's add this step through Ambari.
> By default on Linux, the limit is zero, and thus core file can't be created.
> $> ulimit -c
> 0
> The command to set unlimited core file size is:
> $> ulimit -c unlimited
> After this, we can double-check the limit:
> $> ulimit -c
> unlimited
> We can apply this setting immediately before starting the Hadoop service. Since the service is started in the same shell, it inherits the raised limit.
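> The start sequence described above could be sketched as a small wrapper script. This is only an illustration, not the actual Ambari agent code; the daemon start command shown in the comment is a hypothetical example.
>
> ```shell
> #!/bin/sh
> # Sketch: raise the core file size limit for this shell, so any
> # process launched afterwards (e.g. a Hadoop daemon) inherits it.
> ulimit -c unlimited
>
> # Double-check before launching the JVM; bail out if the limit
> # could not be raised (e.g. a lower hard limit is in place).
> [ "$(ulimit -c)" = "unlimited" ] || { echo "failed to raise core limit" >&2; exit 1; }
>
> # Start the service in this same shell so it inherits the limit.
> # Hypothetical example command:
> #   /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode
> echo "core limit: $(ulimit -c)"
> ```
>
> Note that `ulimit -c` only changes the soft limit of the current shell; if the hard limit has been lowered system-wide (e.g. via /etc/security/limits.conf), a non-root process cannot raise the soft limit above it.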
--
This message was sent by Atlassian JIRA
(v6.1.4#6159)