Posted to user@ambari.apache.org by Souvik Sarkhel <so...@gmail.com> on 2016/04/19 07:43:38 UTC

Step 4 in Adding a custom defined service is getting stuck at Customize Services screen

Hi,

I have created a custom service named *HDFSYARN* which installs *Hadoop* on
all the nodes and starts the namenode, datanode, and resource manager in
YARN mode. I want the user to be able to modify the following .xml files:

*capacity-scheduler.xml*
*core-site.xml*
*mapred-site.xml*
*yarn-site.xml*
*hdfs-site.xml*

I have used the following folder structure:

metainfo.xml
|_ configuration
       capacity-scheduler.xml
       core-site.xml
       mapred-site.xml
       yarn-site.xml
       hdfs-site.xml
|_ package
       |_ scripts
              master.py
              slave.py

and my *metainfo.xml* file looks like this:

<?xml version="1.0"?>
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>HDFSYARN</name>
      <displayName>HDFS YARN</displayName>
      <comment>HDFS is a Java-based file system that provides scalable and
        reliable data storage; it is designed to span large clusters of
        commodity servers</comment>
      <version>2.6.0</version>
      <components>
        <component>
          <name>HDFS_NAMENODE</name>
          <displayName>HDFS NameNode</displayName>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <timelineAppid>HDFSYARN</timelineAppid>
          <dependencies>
            <dependency>
              <name>TOMCAT/TOMCAT_SLAVE</name>
              <scope>cluster</scope>
              <auto-deploy>
                <enabled>true</enabled>
              </auto-deploy>
            </dependency>
          </dependencies>
          <commandScript>
            <script>scripts/master.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>1200</timeout>
          </commandScript>
        </component>
        <component>
          <name>HDFS_DATANODE</name>
          <displayName>HDFS DataNode</displayName>
          <cardinality>0+</cardinality>
          <category>SLAVE</category>
          <timelineAppid>HDFSYARN</timelineAppid>
          <commandScript>
            <script>scripts/slave.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>1200</timeout>
          </commandScript>
        </component>
      </components>
      <osSpecifics>
        <osSpecific>
          <osFamily>any</osFamily>
          <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
          <packages>
            <package>
              <name>hadoop-2.6.0</name>
            </package>
          </packages>
        </osSpecific>
      </osSpecifics>
      <requiredServices>
        <service>TOMCAT</service>
      </requiredServices>
      <configuration-dependencies>
        <config-type>core-site</config-type>
        <config-type>hdfs-site</config-type>
        <config-type>mapred-site</config-type>
        <config-type>capacity-scheduler</config-type>
        <config-type>yarn-site</config-type>
      </configuration-dependencies>
    </service>
  </services>
</metainfo>
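
For reference, each file in the configuration folder uses Ambari's
property-definition format. A minimal sketch of what
configuration/hdfs-site.xml could contain (the property and value here are
only illustrative examples, not taken from my actual files):

```xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop/hdfs/data</value>
    <description>Comma-separated list of local directories where the
      DataNode stores its blocks.</description>
  </property>
</configuration>
```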

But the moment I place *hdfs-site.xml* and *yarn-site.xml* in the
configuration folder and try to add the service, it gets stuck at the
Customize Services window
[image: Inline image 1]
whereas when my configuration folder doesn't contain those two files,
everything works properly.
Is it because the *HDP* stack also has *HDFS* and *YARN* services, and
Ambari is somehow still fetching some dependencies from those services
instead of from the custom-defined service?
Thanking you in advance

-- 
Souvik Sarkhel

Re: Step 4 in Adding a custom defined service is getting stuck at Customize Services screen

Posted by Souvik Sarkhel <so...@gmail.com>.
Thanks, Mithun. After changing the component name it worked properly.


Re: Step 4 in Adding a custom defined service is getting stuck at Customize Services screen

Posted by Mithun Mathew <mi...@gmail.com>.
The app.js file gets deployed in /usr/lib/ambari-server/web/javascripts.
You can make changes to this file and test it right away; a hard refresh
of the browser will reload app.js.

If you wish to edit the actual source code, app.js is a minified version of
ambari-web <https://github.com/apache/ambari/tree/trunk/ambari-web>. You
may look at the Frontend Development section on the Ambari Developer Wiki
<https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide> to
get started on this.





-- 
*Mithun Mathew* (Matt)

   - www.linkedin.com/in/mithunmatt/

Re: Step 4 in Adding a custom defined service is getting stuck at Customize Services screen

Posted by Souvik Sarkhel <so...@gmail.com>.
Hi Mithun,

I think I have found what is causing the error. In app.js, at line 159865:

case 'dfs.data.dir':
case 'dfs.datanode.data.dir':
  temp = slaveComponentHostsInDB.findProperty('componentName', 'DATANODE');
  temp.hosts.forEach(function (host) {
    setOfHostNames.push(host.hostName);
  }, this);
If a config file contains *dfs.data.dir*, this piece of code looks for
slaves of the component named *DATANODE*.
But in my stack there is no component named *DATANODE*; it is named
*HDFS_DATANODE*, hence it gives this error:

TypeError: temp is undefined in

temp.hosts.forEach(function (host)

How do I solve this?


Where can I find the app.js file so that I can change the value of
DATANODE to HDFS_DATANODE?
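
For what it's worth, a null-safe variant of that lookup would also avoid
the TypeError. This is only a sketch: the data shape (componentName,
hosts, hostName) mirrors the snippet above, but collectSlaveHostNames is
a name I made up, and plain Array.prototype.filter stands in for Ember's
findProperty:

```javascript
// Sketch of a null-safe version of the app.js lookup quoted above.
// collectSlaveHostNames is a hypothetical helper name; the real code is
// inline in a switch statement inside minified app.js.
function collectSlaveHostNames(slaveComponentHostsInDB, componentName) {
  var setOfHostNames = [];
  // Plain-array stand-in for Ember's findProperty('componentName', ...):
  var temp = slaveComponentHostsInDB.filter(function (c) {
    return c.componentName === componentName;
  })[0];
  // Guard: skip instead of throwing "TypeError: temp is undefined"
  // when no component with that name exists in the stack.
  if (temp) {
    temp.hosts.forEach(function (host) {
      setOfHostNames.push(host.hostName);
    });
  }
  return setOfHostNames;
}
```

With a component list containing only HDFS_DATANODE, looking up 'DATANODE'
then returns an empty array instead of crashing the Customize Services step.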





-- 
Souvik Sarkhel

Re: Step 4 in Adding a custom defined service is getting stuck at Customize Services screen

Posted by Mithun Mathew <mi...@apache.org>.
To my understanding, configuration file names should be unique across the
services installed in the cluster.
To confirm this, you may open the developer console and see what the error
is. If the wizard is stuck on the loading icon, it is highly likely that
there is an error thrown by JS, as a result of which the configs were not
loaded.

On Mon, Apr 18, 2016 at 10:43 PM, Souvik Sarkhel <so...@gmail.com>
wrote:

> Hi,
>
> I have created a custom service named *HDFSYARN* which installs *Hadoop*
> in all the nodes and starts namenode, datanode and resource manager in yarn
> mode. I want the user to be able to modify the following .xml files:
>
>
>
> *capacity-scheduler.xmlcore-site.xmlmapred-site.xml*
>
> *yarn-site.xml*
> *hdfs-site.xml*
>
>  I have placed the followed the following folder structure
> *metainfo.xml*
> |_ *configuration*
>
> *capacity-scheduler.xml*
> *      core-site.xml*
> *      mapred-site.xml*
> *      yarn-site.xml*
> *      hdfs-site.xml*
>
> *|_ package*
>
> *        |_ scripts*
>
> *                 master.py*
>
> *                  slave.py*
>
> and my *metainfo.xml* file looks like this
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> *<?xml version="1.0"?><metainfo>  <schemaVersion>2.0</schemaVersion>
> <services>    <service>      <name>HDFSYARN</name>      <displayName>HDFS
> YARN</displayName>      <comment>HDFS is a Java-based file system that
> provides scalable and reliable data storage, it is designed to span large
> clusters of commodity servers</comment>      <version>2.6.0</version>
> <components>        <component>
> <name>HDFS_NAMENODE</name>          <displayName>HDFS
> NameNode</displayName>          <category>MASTER</category>
> <cardinality>1</cardinality>
> <timelineAppid>HDFSYARN</timelineAppid>
> <dependencies>                    <dependency>
> <name>TOMCAT/TOMCAT_SLAVE</name>
> <scope>cluster</scope>
> <auto-deploy>
> <enabled>true</enabled>
> </auto-deploy>                </dependency>
> </dependencies>          <commandScript>
> <script>scripts/master.py</script>
> <scriptType>PYTHON</scriptType>            <timeout>1200</timeout>
> </commandScript>        </component>        <component>
> <name>HDFS_DATANODE</name>          <displayName>HDFS
> DataNode</displayName>          <cardinality>0+</cardinality>
> <category>SLAVE</category>
> <timelineAppid>HDFSYARN</timelineAppid>          <commandScript>
> <script>scripts/slave.py</script>
> <scriptType>PYTHON</scriptType>            <timeout>1200</timeout>
> </commandScript>        </component>   </components>
> <osSpecifics>        <osSpecific>
> <osFamily>any</osFamily>          <!-- note: use osType rather than
> osFamily for Ambari 1.5.0 and 1.5.1 -->          <packages>
> <package>              <name>hadoop-2.6.0</name>
> </package>          </packages>        </osSpecific>
> </osSpecifics>      <requiredServices>
> <service>TOMCAT</service>      </requiredServices>
> <configuration-dependencies>
> <config-type>core-site</config-type>
> <config-type>hdfs-site</config-type>
> <config-type>mapred-site</config-type>
> <config-type>capacity-scheduler</config-type>
> <config-type>yarn-site</config-type>      </configuration-dependencies>
> </service>  </services></metainfo>*
>
> But the moment I place *hdfs-site.xml* and *yarn-site.xml* in
> configuration folder and try to add the service it gets stuck at Customize
> Services window
> [image: Inline image 1]
> and when my configuration folder doesn't contains those two files
> everything works properly.
> Is it because *HDP* stack also has *HDFS* and *YARN* services and somehow
> Ambari is still fetching some dependencies from those services instead of
> the custom defined service.?
> Thanking you in advance
>
> --
> Souvik Sarkhel
>