Posted to yarn-issues@hadoop.apache.org by "Jason Lowe (JIRA)" <ji...@apache.org> on 2018/08/23 16:02:00 UTC

[jira] [Comment Edited] (YARN-8638) Allow linux container runtimes to be pluggable

    [ https://issues.apache.org/jira/browse/YARN-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590416#comment-16590416 ] 

Jason Lowe edited comment on YARN-8638 at 8/23/18 4:01 PM:
-----------------------------------------------------------

bq. It would be better if we only allow loading of container runtime from the current package locations and org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime only.

Unless I'm missing something, the whole point of a pluggable interface in this case is to enable runtimes that live outside the Apache Hadoop code base.  There's already a separate property that controls which runtimes are allowed, so this plugin support exists only to load classes that weren't known to Hadoop at compile time.  Limiting the loadable classes to a specific Java package prefix seems arbitrary and won't accomplish the desired effect in practice: if we change the interface, users with custom plugins will break whether the Java package is "correct" or not.

As I see it, the real concern is whether we're ready to mark part or all of the container runtime interface as stable.  If we're not, let's just admit that: mark the interface Private, Unstable, Evolving, whatever, and note that there are no guarantees plugins will work as-is from release to release until we're ready to mark it Public, Stable, etc.  Then people who want to live on the edge and try out new things, fully aware they may have to rewrite parts when moving to new releases, can do so easily without shoehorning their plugin into an arbitrary Java package prefix.
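To make the "mark it Private, Unstable" idea concrete, here is an illustrative sketch in the spirit of Hadoop's org.apache.hadoop.classification annotations. The annotation types below are local stand-ins so the snippet compiles on its own; the real ones live in the hadoop-annotations module.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class StabilityExample {
    // Stand-in for org.apache.hadoop.classification.InterfaceAudience.Private.
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Private { }

    // Stand-in for org.apache.hadoop.classification.InterfaceStability.Unstable.
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Unstable { }

    // An interface annotated this way signals: no cross-release compatibility
    // promises yet; plugin authors who implement it opt in knowingly.
    @Private
    @Unstable
    public interface LinuxContainerRuntime {
        void initialize();
    }
}
```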



> Allow linux container runtimes to be pluggable
> ----------------------------------------------
>
>                 Key: YARN-8638
>                 URL: https://issues.apache.org/jira/browse/YARN-8638
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>    Affects Versions: 3.2.0
>            Reporter: Craig Condit
>            Assignee: Craig Condit
>            Priority: Minor
>         Attachments: YARN-8638.001.patch, YARN-8638.002.patch
>
>
> YARN currently supports three different Linux container runtimes (default, docker, and javasandbox). However, it would be relatively straightforward to support arbitrary runtime implementations. This would enable easier experimentation with new and emerging runtime technologies (runc, containerd, etc.) without requiring a rebuild and redeployment of Hadoop. 
> This could be accomplished via a simple configuration change:
> {code:xml}
> <property>
>  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
>  <value>default,docker,experimental</value>
> </property>
>  
> <property>
>  <name>yarn.nodemanager.runtime.linux.experimental.class</name>
>  <value>com.somecompany.yarn.runtime.ExperimentalLinuxContainerRuntime</value>
> </property>{code}
>  
> In this example, {{yarn.nodemanager.runtime.linux.allowed-runtimes}} would now allow arbitrary values. Additionally, {{yarn.nodemanager.runtime.linux.\{RUNTIME_KEY}.class}} would indicate the {{LinuxContainerRuntime}} implementation to instantiate. A no-argument constructor should be sufficient, as {{LinuxContainerRuntime}} already provides an {{initialize()}} method.
> {{DockerLinuxContainerRuntime.isDockerContainerRequested(Map<String, String> env)}} and {{JavaSandboxLinuxContainerRuntime.isSandboxContainerRequested()}} could be generalized to {{isRuntimeRequested(Map<String, String> env)}} and added to the {{LinuxContainerRuntime}} interface. This would allow {{DelegatingLinuxContainerRuntime}} to select an appropriate runtime based on whether that runtime claimed ownership of the current container execution.
> For backwards compatibility, the existing values (default,docker,javasandbox) would continue to be supported as-is. Under the current logic, the evaluation order is javasandbox, docker, default (with default being chosen if no other candidates are available). Under the new evaluation logic, pluggable runtimes would be evaluated after docker and before default, in the order in which they are defined in the allowed-runtimes list. This will change no behavior on current clusters (as there would be no pluggable runtimes defined), and preserves behavior with respect to ordering of existing runtimes.
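As a rough sketch of the instantiation path the description proposes (a no-argument constructor followed by initialize()), the following illustrates reflective loading from the proposed {{*.class}} property. The loader class and the stand-in interface here are hypothetical, not the actual NodeManager code.

```java
import java.util.Map;

public class PluggableRuntimeLoader {
    // Stand-in for the real LinuxContainerRuntime interface in the
    // o.a.h.yarn.server.nodemanager containermanager runtime package.
    public interface LinuxContainerRuntime {
        void initialize();
    }

    // Example third-party implementation with the required no-arg constructor.
    public static class ExperimentalRuntime implements LinuxContainerRuntime {
        public void initialize() { /* runtime-specific setup would go here */ }
    }

    public static LinuxContainerRuntime load(Map<String, String> conf, String key)
            throws ReflectiveOperationException {
        // Look up the class name from the proposed per-runtime property.
        String className =
            conf.get("yarn.nodemanager.runtime.linux." + key + ".class");
        // No-argument constructor, then initialize(), as described above.
        LinuxContainerRuntime runtime = (LinuxContainerRuntime)
            Class.forName(className).getDeclaredConstructor().newInstance();
        runtime.initialize();
        return runtime;
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Map<String, String> conf = Map.of(
            "yarn.nodemanager.runtime.linux.experimental.class",
            "PluggableRuntimeLoader$ExperimentalRuntime");
        System.out.println(load(conf, "experimental").getClass().getSimpleName());
    }
}
```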
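The generalized {{isRuntimeRequested(Map<String, String> env)}} and the proposed evaluation order (javasandbox, docker, pluggables in configured order, then default) can be sketched as below. This is illustrative, not the actual DelegatingLinuxContainerRuntime; the {{YARN_CONTAINER_RUNTIME_TYPE}} environment variable is used here as the selection signal.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DelegatingRuntimeSketch {
    public interface LinuxContainerRuntime {
        String name();
        // Generalization of isDockerContainerRequested / isSandboxContainerRequested.
        boolean isRuntimeRequested(Map<String, String> env);
    }

    // Helper producing a runtime that claims the container when the env var matches.
    public static LinuxContainerRuntime named(String name, String envKey) {
        return new LinuxContainerRuntime() {
            public String name() { return name; }
            public boolean isRuntimeRequested(Map<String, String> env) {
                return name.equals(env.get(envKey));
            }
        };
    }

    // Evaluation order: javasandbox, docker, pluggables (config order), default.
    public static String pick(List<LinuxContainerRuntime> pluggables,
                              Map<String, String> env) {
        List<LinuxContainerRuntime> order = new ArrayList<>();
        order.add(named("javasandbox", "YARN_CONTAINER_RUNTIME_TYPE"));
        order.add(named("docker", "YARN_CONTAINER_RUNTIME_TYPE"));
        order.addAll(pluggables);
        for (LinuxContainerRuntime r : order) {
            if (r.isRuntimeRequested(env)) {
                return r.name();
            }
        }
        return "default"; // chosen when no other runtime claims the container
    }

    public static void main(String[] args) {
        List<LinuxContainerRuntime> pluggables =
            List.of(named("experimental", "YARN_CONTAINER_RUNTIME_TYPE"));
        // A pluggable runtime is consulted after docker, before default.
        System.out.println(pick(pluggables,
            Map.of("YARN_CONTAINER_RUNTIME_TYPE", "experimental"))); // experimental
        System.out.println(pick(pluggables, Map.of()));              // default
    }
}
```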



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
