Posted to user@bigtop.apache.org by Sanket Sharma <sa...@gmail.com> on 2022/04/16 12:15:48 UTC

Project status

Hi,

I need to provision a new Hadoop cluster on a set of VMs running on
Proxmox, and I was looking for ways to automate and manage the
configuration.

I stumbled upon Bigtop while looking for alternatives to Ambari, which
has now been moved to the Attic. However, I'm struggling to find clear
documentation on various aspects, and most of what I can find seems
quite old, especially for someone starting from scratch.

In my case, I want to install Hadoop, Spark, and HBase on two clusters
- say dev (1 node) and test (6 nodes). I started looking at
bigtop-deploy/puppet/README.md, which has directions for using the
puppet scripts, but the scripts seem to be outdated. They are written
for Puppet 3.x, which doesn't work on the latest versions of Ubuntu,
Debian, etc. To run the scripts with Puppet 3.x I'd have to use Debian
6 or 7, or a similarly old Ubuntu release. In other words, it seems
like I can't provision a cluster running newer versions of Ubuntu or
Debian using these scripts.
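
For reference, this is roughly the procedure I was following from
bigtop-deploy/puppet/README.md (a sketch only - the exact paths and
hiera layout may differ between Bigtop releases):

    # from the top of a Bigtop checkout, as root
    cp bigtop-deploy/puppet/hiera.yaml /etc/puppet/
    mkdir -p /etc/puppet/hieradata
    cp -r bigtop-deploy/puppet/hieradata/bigtop /etc/puppet/hieradata/
    # describe the cluster (head node, components, storage dirs)
    vi /etc/puppet/hieradata/site.yaml
    # apply the manifests
    puppet apply -d \
        --modulepath="bigtop-deploy/puppet/modules:/etc/puppet/modules" \
        bigtop-deploy/puppet/manifests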

What I would like to know:
1. Are the puppet scripts being updated and is that the recommended way to
spin up a cluster?
2. Is there a better way to provision VMs?
3. It would also be good to know whether the Puppet provisioner scripts
and support for custom deployments are a priority for the project - I'd
be happy to update the documentation as I go along if it helps.
4. I could also just use the package repository and do an apt install
hadoop* for instance - however, in that case, unless I'm installing a
single node, all the data node configuration would have to be done
manually (instead of using puppet-based config) - is that correct? A
rough sketch of what I mean is below.
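
To illustrate point 4, this is the manual route I had in mind. The
package names are what I believe the Bigtop repo provides, so treat
them as an assumption, and I'm assuming the repo is already configured
in sources.list:

    # on the head node
    apt-get update
    apt-get install hadoop-hdfs-namenode hadoop-yarn-resourcemanager
    # on every worker node
    apt-get install hadoop-hdfs-datanode hadoop-yarn-nodemanager
    # then hand-edit core-site.xml / hdfs-site.xml on each node, e.g.
    # pointing fs.defaultFS at hdfs://<head-node>:8020 - exactly the
    # per-node work I'd rather have puppet do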

Thank you in advance for your help.


Best regards,
Sanket