Posted to user@ambari.apache.org by Sidharth Kashyap <si...@outlook.com> on 2014/08/11 23:13:00 UTC

Ambari on a Supercomputer

Hello,
I am planning to use Ambari with Blueprints to deploy Hadoop clusters on our system. I expect this will let us rapidly deploy similar clusters (fine-tuned to our needs) and plug components in and out to suit various client needs.
Could you please help me with the following questions:
1) Can we choose to leave out Nagios and Ganglia?
2) Can a single instance of Ambari Server create multiple Hadoop clusters as needed?
3) Can the underlying components (Pig, Hive, Spark, etc.) have a custom installation (non-standard paths)?
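For context on the first question, here is a minimal sketch of how a Blueprint payload could simply omit the monitoring services: components not listed in a host group are not installed. The function, host-group name, and blueprint name below are hypothetical; only the overall Blueprint JSON shape (host_groups / Blueprints) follows Ambari's format.

```python
def build_blueprint(name, stack_version, components):
    """Build an Ambari Blueprint dict with a single host group,
    leaving out the Nagios/Ganglia components we do not want."""
    excluded = {"NAGIOS_SERVER", "GANGLIA_SERVER", "GANGLIA_MONITOR"}
    return {
        "host_groups": [
            {
                "name": "master",  # hypothetical host-group name
                "components": [
                    {"name": c} for c in components if c not in excluded
                ],
                "cardinality": "1",
            }
        ],
        "Blueprints": {
            "blueprint_name": name,
            "stack_name": "HDP",
            "stack_version": stack_version,
        },
    }

# Nagios/Ganglia are filtered out even if requested.
blueprint = build_blueprint(
    "hpc-cluster",  # hypothetical blueprint name
    "2.1",
    ["NAMENODE", "RESOURCEMANAGER", "HIVE_SERVER", "PIG",
     "NAGIOS_SERVER", "GANGLIA_SERVER"],
)
print(sorted(c["name"] for c in blueprint["host_groups"][0]["components"]))
# → ['HIVE_SERVER', 'NAMENODE', 'PIG', 'RESOURCEMANAGER']
```

A dict like this can then be registered with the Ambari REST API (POST to /api/v1/blueprints/&lt;name&gt;) and referenced when creating a cluster.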

I am also curious to know whether anyone has tried this (Ambari on an HPC setup).
Thanks,
Sidharth N. Kashyap
@sidkashyap