Posted to common-dev@hadoop.apache.org by "Koji Noguchi (JIRA)" <ji...@apache.org> on 2009/01/15 20:56:59 UTC
[jira] Commented: (HADOOP-5059) 'whoami', 'topologyscript' calls
failing with java.io.IOException: error=12, Cannot allocate memory
[ https://issues.apache.org/jira/browse/HADOOP-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664234#action_12664234 ]
Koji Noguchi commented on HADOOP-5059:
--------------------------------------
Checking with strace, it was failing on
{noformat}
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x4133c9f0) = -1
ENOMEM (Cannot allocate memory)
...
write(2, "Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory", 85) = 85
{noformat}
This clone call did not have the CLONE_VM flag set.
From the clone(2) manpage:
"If CLONE_VM is not set, the child process runs in a separate copy of the memory space of the calling process
at the time of clone. Memory writes or file mappings/unmappings performed by one of the processes do not affect the
other, as with fork(2)."
So the JVM is probably using fork() and not vfork(). With fork(), the kernel has to be able to commit a copy of the parent's entire address space, so forking from a namenode with a 25G heap can fail with ENOMEM even though the child would only exec a tiny program like whoami.
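A minimal reproducer along these lines (a hypothetical sketch, not the attached TestSysCall.java) is to commit a chunk of heap and then launch a trivial child through ProcessBuilder. With the heap scaled up (e.g. -Xmx25g) on a host that cannot commit a second copy of the address space, the fork() behind ProcessBuilder.start() fails with error=12:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical reproducer, sized down so it runs anywhere; scale -Xmx and
// the ballast array up to trigger "error=12, Cannot allocate memory".
public class ForkTest {
    public static void main(String[] args) throws Exception {
        // Touch a block of heap so the JVM's committed memory grows;
        // fork() must be able to duplicate all of it.
        byte[] ballast = new byte[64 * 1024 * 1024];
        for (int i = 0; i < ballast.length; i += 4096) {
            ballast[i] = 1; // force each page to be committed
        }

        // This is the call path where the namenode fails: every exec of
        // whoami / the topology script goes through a fork() here.
        Process p = new ProcessBuilder("echo", "hello").start();
        BufferedReader r =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        System.out.println(r.readLine());
        p.waitFor();
    }
}
```

At small sizes this prints "hello" normally; the failure only appears once the committed heap plus free swap exceeds what the kernel will overcommit for the fork.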
> 'whoami', 'topologyscript' calls failing with java.io.IOException: error=12, Cannot allocate memory
> ---------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5059
> URL: https://issues.apache.org/jira/browse/HADOOP-5059
> Project: Hadoop Core
> Issue Type: Bug
> Components: util
> Environment: On nodes with
> physical memory 32G
> Swap 16G
> Primary/Secondary Namenode using 25G of heap or more
> Reporter: Koji Noguchi
> Attachments: TestSysCall.java
>
>
> We've seen primary/secondary namenodes fail when calling whoami or topologyscripts.
> (Discussed as part of HADOOP-4998)
> Sample stack traces.
> Primary Namenode
> {noformat}
> 2009-01-12 03:57:27,381 WARN org.apache.hadoop.net.ScriptBasedMapping: java.io.IOException: Cannot run program
> "/path/topologyProgram" (in directory "/path"):
> java.io.IOException: error=12, Cannot allocate memory
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
> at org.apache.hadoop.util.Shell.run(Shell.java:134)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
> at org.apache.hadoop.net.ScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:122)
> at org.apache.hadoop.net.ScriptBasedMapping.resolve(ScriptBasedMapping.java:73)
> at org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1869)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
> at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
> ... 7 more
> 2009-01-12 03:57:27,381 ERROR org.apache.hadoop.fs.FSNamesystem: The resolve call returned null! Using /default-rack
> for some hosts
> 2009-01-12 03:57:27,381 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/55.5.55.55:50010
> {noformat}
> Secondary Namenode
> {noformat}
> 2008-10-09 02:00:58,288 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException:
> javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": java.io.IOException:
> error=12, Cannot allocate memory
> at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
> at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
> at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
> at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:370)
> at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
> at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
> at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
> at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
> at java.lang.Thread.run(Thread.java:619)
> at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:372)
> at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
> at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
> at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
> at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
> at java.lang.Thread.run(Thread.java:619)
> {noformat}
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.