Posted to common-dev@hadoop.apache.org by Niels Basjes <Ni...@basjes.nl> on 2014/08/15 00:23:21 UTC

Hortonworks scripting ...

Hi,

In core Hadoop you can make multiple clusters available on your (desktop)
client simply by having multiple directories with configuration files
(e.g. core-site.xml) and selecting the one you want by switching the
environment settings (HADOOP_CONF_DIR and such).
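For reference, this is the kind of setup I mean (the directory names are
just examples):

  # one directory of *-site.xml files per cluster
  ls ~/conf/clusterA    # core-site.xml  hdfs-site.xml  yarn-site.xml  ...
  ls ~/conf/clusterB    # core-site.xml  hdfs-site.xml  ...

  # point the client tools at the cluster I want to talk to
  export HADOOP_CONF_DIR=~/conf/clusterA
  hadoop fs -ls /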

This doesn't work when I run under the Hortonworks 2.1.2 distribution.

There I find that all of the scripts placed in /usr/bin/ "muck about" with
the environment settings: files from /etc/default are sourced and they
override my settings.
I can control part of this by pointing BIGTOP_DEFAULTS_DIR at an empty
directory.
But in /usr/bin/pig the sourcing of /etc/default/hadoop is hardcoded into
the script.
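To illustrate the partial workaround (the empty directory path is just an
example, and the pig line is paraphrased, not quoted verbatim):

  # give the wrapper scripts an empty defaults directory to source from
  mkdir -p ~/empty-defaults
  export BIGTOP_DEFAULTS_DIR=~/empty-defaults

  # that helps for most of the /usr/bin wrappers, but /usr/bin/pig still
  # contains something along these lines, unconditionally:
  #   . /etc/default/hadoop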

Why is this done this way?

P.S. Where is the git(?) repo located where this (apparently HW-specific)
scripting is maintained?

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes

Re: Hortonworks scripting ...

Posted by Colin McCabe <cm...@alumni.cmu.edu>.
This mailing list is for questions about Apache Hadoop, not commercial
Hadoop distributions.  Try asking a Hortonworks-specific mailing list.

best,
Colin

On Thu, Aug 14, 2014 at 3:23 PM, Niels Basjes <Ni...@basjes.nl> wrote:
> Hi,
>
> In core Hadoop you can make multiple clusters available on your (desktop)
> client simply by having multiple directories with configuration files
> (e.g. core-site.xml) and selecting the one you want by switching the
> environment settings (HADOOP_CONF_DIR and such).
>
> This doesn't work when I run under the Hortonworks 2.1.2 distribution.
>
> There I find that all of the scripts placed in /usr/bin/ "muck about" with
> the environment settings: files from /etc/default are sourced and they
> override my settings.
> I can control part of this by pointing BIGTOP_DEFAULTS_DIR at an empty
> directory.
> But in /usr/bin/pig the sourcing of /etc/default/hadoop is hardcoded into
> the script.
>
> Why is this done this way?
>
> P.S. Where is the git(?) repo located where this (apparently HW-specific)
> scripting is maintained?
>
> --
> Best regards / Met vriendelijke groeten,
>
> Niels Basjes