Posted to yarn-dev@hadoop.apache.org by "Jeff Zhang (JIRA)" <ji...@apache.org> on 2014/02/24 04:34:19 UTC

[jira] [Created] (YARN-1754) Container process is not really killed

Jeff Zhang created YARN-1754:
--------------------------------

             Summary: Container process is not really killed
                 Key: YARN-1754
                 URL: https://issues.apache.org/jira/browse/YARN-1754
             Project: Hadoop YARN
          Issue Type: Bug
          Components: nodemanager
    Affects Versions: 2.2.0
         Environment: Mac
            Reporter: Jeff Zhang


I tested the following distributed shell example on my Mac:

hadoop jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar -appname shell -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar -shell_command=sleep -shell_args=1000000000 -num_containers=1

It starts 2 processes for one container: one is the shell process, and the other is the real command I executed (here "sleep 1000000000").
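
(For reference, both processes can be seen with something like the following; the grep pattern and the launch_container script name are only illustrative and may differ by setup:)

    ps -ef | grep -E 'launch_container|sleep 1000000000'   # the container's launch shell plus the sleep child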

Then I kill the application by running the command "yarn application -kill app_id".

This kills the shell process, but not the real command process. The reason is that YARN uses the kill command to kill the process, which does not kill its child processes. Using pkill could resolve this issue.
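
Here is a minimal sketch of the behavior outside of YARN (bash stands in for the container's launch shell; the sleep arguments and the trailing "true" are only for illustration):

    bash -c 'sleep 1000000000; true' &   # the trailing "true" keeps bash from exec'ing sleep directly
    PID=$!                               # PID of the wrapping shell
    kill "$PID"                          # what the NodeManager effectively does on container kill
    pgrep -f 'sleep 1000000000'          # the sleep child is still running, now reparented to init

    # Signalling the children before (or together with) the shell cleans everything up:
    bash -c 'sleep 1000000000; true' &
    PID=$!
    pkill -TERM -P "$PID"                # kill the direct children of the shell
    kill "$PID"                          # then the shell itself

Another option, assuming the launch shell is started as its own process-group leader (e.g. via setsid), would be to signal the whole process group with kill -- -<pgid>.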

I also verified this case on CentOS, where the behavior is the same as on Mac. IMHO, this is a very important issue: it makes resource usage inconsistent and poses a potential security problem.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)