Posted to common-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/01/15 17:52:35 UTC

[jira] [Commented] (HADOOP-11485) Pluggable shell integration

    [ https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278931#comment-14278931 ] 

Allen Wittenauer commented on HADOOP-11485:
-------------------------------------------

I'm specifically thinking of doing something like this:

* a directory under HADOOP_CONF_DIR or HADOOP_LIBEXEC_DIR that contains shell fragments
* an initializer that uses hadoop_add_colonpath to add each fragment's prefix to a variable (HADOOP_SHELLFRAG or something)
* each shell fragment could define the following functions:
{code}
_(frag)_hadoop_classpath
_(frag)_hadoop_init
_(frag)_hadoop_finalizer
{code}

i.e., the current hadoop_add_to_classpath_yarn function would get moved out of hadoop-functions.sh into a fragment file and renamed to _yarn_hadoop_classpath.
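To make the shape concrete, here's a minimal sketch of what one such fragment file might look like. The fragment name (hbase), the file location, and the variable names are hypothetical; hadoop_add_classpath is assumed to be the existing helper from hadoop-functions.sh, stubbed here so the sketch runs standalone:

```shell
# Hypothetical fragment file, e.g. ${HADOOP_LIBEXEC_DIR}/shellprofile.d/hbase.sh.
# Function names follow the _(frag)_hadoop_* pattern above.

# Stub of hadoop_add_classpath for standalone illustration; the real
# helper lives in hadoop-functions.sh and also de-dupes entries.
hadoop_add_classpath() {
  CLASSPATH="${CLASSPATH:+${CLASSPATH}:}$1"
}

_hbase_hadoop_classpath() {
  # Append this component's jars; ordering is not guaranteed (see notes below).
  hadoop_add_classpath "${HBASE_HOME}/lib/*"
}

_hbase_hadoop_init() {
  # Runs after hadoop-common option parsing; set component defaults here.
  HBASE_CONF_DIR="${HBASE_CONF_DIR:-${HBASE_HOME}/conf}"
}

_hbase_hadoop_finalizer() {
  # Last-chance tweaks before exec'ing the JVM; no-op in this sketch.
  :
}
```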

A few other notes:

* HADOOP_CONF_DIR would need to get moved from first-in to last-in+prepend.  It must *always* be first in the classpath.  We don't want 3rd parties coming in front of it.
* We could provide no guarantees, really, as to when a jar appears in the classpath using this method.  So this wouldn't be a way to override classes.
* Currently, the only way for a fragment to manage $@ is going to be at the time it gets source'd, and sourcing will happen after we do our normal shell param processing.  So hadoop-common options will need to come first, 3rd party options after, followed by the appropriate shell subcommand, e.g., yarn --conf foo --hbaseconf bar jar myhbase.jar
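Putting the pieces together, the loader side could look roughly like this. The shellprofile.d directory name and the HADOOP_SHELLFRAG variable are assumptions from the proposal above; HADOOP_SHELLFRAG is treated as a colon-separated list, matching how hadoop_add_colonpath builds variables:

```shell
# Sketch of a loader: source every fragment in a profile directory, then
# invoke each registered fragment's _(frag)_hadoop_init hook if it exists.

hadoop_import_shellprofiles() {
  local file
  for file in "${HADOOP_LIBEXEC_DIR}/shellprofile.d"/*.sh; do
    # Guard against an empty directory, where the glob stays literal.
    [ -f "${file}" ] && . "${file}"
  done
}

hadoop_shellprofiles_init() {
  local frag
  # HADOOP_SHELLFRAG is assumed colon-separated, per hadoop_add_colonpath.
  for frag in $(printf '%s' "${HADOOP_SHELLFRAG}" | tr ':' ' '); do
    # Only call the hook if the fragment actually defined it.
    if command -v "_${frag}_hadoop_init" >/dev/null 2>&1; then
      "_${frag}_hadoop_init"
    fi
  done
}
```

Analogous loops would drive the _(frag)_hadoop_classpath and _(frag)_hadoop_finalizer hooks at the appropriate points in the launch sequence.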



> Pluggable shell integration
> ---------------------------
>
>                 Key: HADOOP-11485
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11485
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Allen Wittenauer
>
> It would be useful to provide a way for core and non-core Hadoop components to plug into the shell infrastructure.  This would allow us to pull the HDFS, MapReduce, and YARN components out of common.  Additionally, it should let 3rd parties such as HBase influence things like classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)