Posted to dev@ambari.apache.org by Erin Boyd <eb...@redhat.com> on 2014/06/23 23:12:11 UTC

help....config dictionary errors galore

Hi,
Scott Creeley and I have been trying to get all the services running in the 2.1.GlusterFS stack.
Several of the services won't start due to an error like this:

  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 75, in __getattr__
    raise Fail("Configuration parameter '"+self.name+"' was not found in configurations dictionary!")
Fail: Configuration parameter 'user_group' was not found in configurations dictionary!
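
For what it's worth, the failing lookups appear to be plain attribute access on the config dictionary (most likely from the stack's params.py). A minimal sketch of a guarded lookup, assuming the default() helper from resource_management is available; the '/configurations/global/user_group' path and the 'hadoop' fallback are purely illustrative:

  # Sketch only: fall back instead of raising Fail when a parameter is absent.
  # default() is assumed to be available in resource_management; the config
  # path and fallback value below are illustrative, not our real values.
  from resource_management.libraries.functions.default import default

  # Attribute access (config['configurations']['global'].user_group) raises
  # Fail when the key is missing; default() returns a fallback instead.
  user_group = default("/configurations/global/user_group", "hadoop")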


Other configuration parameters that fail the same way include slave_hosts, namenode_hosts, etc.


So while some of these are parameters I would expect for Hadoop (like user_group), parameters like namenode_hosts are specific to HDFS.

Therefore, is there a way to create a flag in Ambari, something like 'noHDFS', that could be used to put a conditional around these values? Should this
just be a standard global, or should the value be loaded at a different level, such as a system-level property?
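
To make the conditional idea concrete, here is a rough, purely hypothetical sketch of keying off the cluster topology instead of a new flag; the clusterHostInfo key names and the default() helper usage are assumptions on our part, not something the stack defines today:

  # Hypothetical sketch: treat "no namenode hosts" as "no HDFS" and skip the
  # HDFS-specific parameters. Key names and fallbacks are illustrative only.
  from resource_management.libraries.functions.default import default

  namenode_hosts = default("/clusterHostInfo/namenode_host", [])
  has_hdfs = len(namenode_hosts) > 0

  if has_hdfs:
      slave_hosts = default("/clusterHostInfo/slave_hosts", [])
      dfs_replication = default("/configurations/hdfs-site/dfs.replication", "3")
  else:
      slave_hosts = []
      dfs_replication = None

Whether something like that belongs in each service's params.py or somewhere more central is really the question above.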

HBase seems to make a lot of assumptions about HDFS being installed (e.g. loading values from hdfs-site.xml).

Is there a best practice for such processes?

Let us know.
Thanks,
Erin

Re: help....config dictionary errors galore

Posted by Nate Cole <nc...@hortonworks.com>.
clusterHostInfo is built as the topology of all the components when that 
information is needed by agent scripts, most commonly on 
install/start/restart commands.

See StageUtils.getClusterHostInfo() to see how it gets built.
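
On the agent side, the scripts just read it out of the command's config dictionary. Roughly (the key names below are examples, not the full list):

  # Rough sketch of how an agent script reads the topology section.
  # Key names such as 'namenode_host' and 'slave_hosts' are examples only.
  from resource_management.libraries.script.script import Script

  config = Script.get_config()
  cluster_host_info = config.get('clusterHostInfo', {})

  namenode_hosts = cluster_host_info.get('namenode_host', [])
  slave_hosts = cluster_host_info.get('slave_hosts', [])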

Thanks,
Nate

On 6/24/14 10:06 AM, Erin Boyd wrote:
> How does one go about looking at/adding to clusterHostInfo?
> Is there an API call to make to look at those values?
> Erin



Re: help....config dictionary errors galore

Posted by Erin Boyd <eb...@redhat.com>.
How does one go about looking at/adding to clusterHostInfo?
Is there an API call to make to look at those values?
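
One crude way to peek at what a host actually received is to read the command JSON the agent writes out; the /var/lib/ambari-agent/data location below is just what our agents show, so treat it as an assumption:

  # Sketch: print clusterHostInfo from the newest command JSON on an agent host.
  # The data directory path is an assumption based on our agent installs.
  import glob
  import json
  import os

  files = glob.glob('/var/lib/ambari-agent/data/command-*.json')
  latest = max(files, key=os.path.getmtime)

  with open(latest) as f:
      command = json.load(f)

  for key, hosts in sorted(command.get('clusterHostInfo', {}).items()):
      print key, hosts
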
Erin

