Posted to users@cloudstack.apache.org by NOC <no...@logicweb.com> on 2016/09/27 12:27:11 UTC

New Initial HW Setup

Hello,

 

Looking to start up a virtualized setup using CloudStack and would like some
feedback / advice as I do some research.

 

To start off, I was looking to use a single node instead of separate nodes
for CPU/RAM and storage.

 

Example:

 

Dell R815

4 x Opteron 16-Core CPUs

256GB RAM

6 x 2TB SSD Drives

Perc H700 RAID

CentOS 7, 64-bit

 

Wouldn't that be sufficient to get going, with similar nodes added
seamlessly down the road for additional processing power and RAM?

 

One of my main points of confusion is how the primary management server
hosting the CS panel utilizes processing power from additional external
nodes. I cannot quite see how this happens and would appreciate some explanation.

 

From my understanding, CS basically creates the equivalent of virtual
private servers (VPS). So in essence, by scaling up or down you can offer
cloud hosting (i.e. like shared hosting) and virtual servers. But to sell
the equivalent of standalone dedicated servers, how would that work? You
cannot offer the client KVM/IPMI, yet how does one prove, on the client
side, whether the server is virtualized or not? If I'm guaranteeing 500GB
SSD storage, 4 CPU cores and 32GB RAM, he/she would have no way of knowing
whether it's cloud based or standalone. Am I right or wrong?

 

Thanks in advance for the tips.

 


RE: New Initial HW Setup

Posted by Jeremy Peterson <jp...@acentek.net>.
I came from SolusVM and switched to CloudStack.

In SolusVM you are used to nodes, which are hypervisors. We used KVM, but after talking to the group here we found ShapeBlue, who pointed us toward Tim Mackey and XenServer. I HIGHLY suggest you look into XenServer; it has helped us greatly. Being a VMware shop in our corporate environment and seeing our customer environment running on XenServer, we don't see a difference. CloudStack is the brains behind the environment: it knows the preset parameters we define for CPU, memory and HDD, and since we've built templates it deploys servers in minutes.

We are using iSCSI for primary storage, which before was just local storage on the node in SolusVM. Remote iSCSI primary storage allows us to have VMs running and seamlessly migrate them from one host to another.

We use NFS for secondary storage, which is basically template, ISO and snapshot storage. I wish CloudStack had a more defined nightly backup feature, or that I could leverage something on my XenServer, but for now we use recurring VM snapshots. Previously, in SolusVM, we used a FreeNAS FTP server and scheduled backups from there.

We run a server farm of VMs with two of everything (thanks to Geoff Higginbottom): internal DNS servers, NTP servers, CloudStack management servers and HAProxy servers, for full redundancy.

Good luck I hope you find CloudStack as useful as we have.

Jeremy



Re: New Initial HW Setup

Posted by Tim Mackey <tm...@gmail.com>.
On Tue, Sep 27, 2016 at 1:45 PM, NOC <no...@logicweb.com> wrote:

> > You've one thing to decide - do you want to cluster those hosts (main
> benefit being shared storage and network config), or run as single host
> clusters.
>
> To make sure we're on the same page and I'm understanding you right, the
> "hosts" you're referring I assume are the NODES in the cluster. Clustering
> them you said shares the storage, but what about CPU/RAM? I assume it would
> share those too? As for network config, what do you mean by sharing network
> config?
>

Sorry, when I say hosts, that's the same as what you're calling nodes. When
I'm talking about shared storage, I'm referring to a SAN or NAS device. You
haven't indicated you have that, so there is less value in having multiple
hosts per cluster for you. btw, if you're thinking of "cluster" in the
sense of oVirt, that's not how CloudStack works. Your entire cloud becomes
a pool of CPU/RAM (with the obvious caveat that you can't run a VM on
multiple hosts at the same time). e.g. if I have ten hosts, each with 24
cores and 256GB RAM, I'll have a cloud capable of hosting 240 vCPUs and
2.5TB RAM with no overcommit. VMs can live anywhere in there (provided
sufficient capacity exists).
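Tim's capacity arithmetic is easy to sanity-check. The sketch below is just that back-of-the-envelope math; the function name and the overcommit factors are my own illustration, not anything CloudStack itself exposes:

```python
def cloud_capacity(hosts, cores_per_host, ram_gb_per_host,
                   cpu_overcommit=1.0, ram_overcommit=1.0):
    """Rough usable capacity of a cloud built from identical hosts.

    With factors of 1.0 (no overcommit) this is simply the sum of the
    hosts' physical resources; raising a factor models oversubscription.
    """
    vcpus = int(hosts * cores_per_host * cpu_overcommit)
    ram_gb = int(hosts * ram_gb_per_host * ram_overcommit)
    return vcpus, ram_gb

# Tim's example: ten hosts, each 24 cores / 256GB, no overcommit.
print(cloud_capacity(10, 24, 256))  # (240, 2560) -> 240 vCPUs, 2.5TB RAM
```

With a 2x CPU overcommit the same ten hosts would advertise 480 vCPUs, which is why the overcommit ratios deserve as much thought as the hardware itself.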

The topic of primary storage (where the VMs actually run) is important in
this discussion. In your case you've said you're wanting to run using local
storage, which translates to running with local primary storage. You could
run with shared storage as well (shared primary storage), and were that an
option for you, CloudStack would manage the storage for you at the cluster
level. CloudStack will also manage the VM networks at the cluster level,
which could give some efficiencies with some network topologies.

btw, since you're wanting a VPS-like experience, you'll probably want to
read the docs on both "dedicated service offerings" [1] and VPC networks
[2].


> Is there any general advantage to running them as single host nodes?
>

So there is one thing I need to keep in mind, and that's your usage of KVM.
To a large extent, CloudStack clustering was designed with XenServer and
vSphere in mind, and then applied to KVM because the construct existed.
There is nothing wrong with your desire to have a single KVM host with
local storage in a cluster of its own. CloudStack will take care of a
really decent chunk of the heavy lifting to make it all work.




> Thanks again for your assistance.
>

[1]
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/service_offerings.html

[2]
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/service_offerings.html


RE: New Initial HW Setup

Posted by NOC <no...@logicweb.com>.
> You've one thing to decide - do you want to cluster those hosts (main benefit being shared storage and network config), or run as single host clusters.

To make sure we're on the same page and I'm understanding you right: the "hosts" you're referring to are, I assume, the nodes in the cluster. You said clustering them shares the storage, but what about CPU/RAM? I assume it would share those too? And as for network config, what do you mean by sharing the network config?

Is there any general advantage to running them as single host nodes?

Thanks again for your assistance. 

-----Original Message-----
From: Tim Mackey [mailto:tmackey@gmail.com] 
Sent: Tuesday, September 27, 2016 1:26 PM
To: users@cloudstack.apache.org
Subject: Re: New Initial HW Setup

On Tue, Sep 27, 2016 at 12:34 PM, NOC <no...@logicweb.com> wrote:

> Thanks for the feedback!
>
> Ok so management server is just a standalone server (no fancy specs 
> generally speaking) for the CS control panel itself.
>

Yes. I've run smallish installations with a single management server running in a VM. If the management server goes down, everything keeps going; you've just lost the UI/API for the duration of the outage.

>
> Compute NODE: CPU, RAM, Local Storage. That's my goal, using KVM as a 
> platform. So essentially, I can do this for example:
>
> Compute NODE Specs:
>
> Dell R815
> 4 x Opteron 16-Core CPUs
> 256GB RAM
> 6 x 2TB SSD Drives
> Perc H700 RAID
> KVM Platform (offering Linux & Windows templates)
>
> I can say, down the road add the above similar NODES into the cluster, 
> seamlessly via the CS management panel. Just like that, correct? 
> Nothing else fancy involved?
>
You've one thing to decide - do you want to cluster those hosts (main benefit being shared storage and network config), or run as single host clusters. It's really up to you, but I'd advise you to keep the cluster size/capacity consistent between clusters. e.g. If you decide to put three hosts in a cluster, always scale in three host chunks. btw, you'll want to enable local storage as a global config option on the management server. If you don't do this, the system assumes shared storage and you won't be able to start any VMs (including the system ones).
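For reference, global settings like the local-storage switch Tim mentions can be changed through the API (updateConfiguration) as well as the UI, and every CloudStack API call must be signed with your API/secret key pair. Below is a minimal sketch of the documented signing scheme; the setting name `system.vm.use.local.storage` is quoted from memory and the keys are placeholders, so verify both against your version's Global Settings page and API docs before relying on them:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    """Build a signed CloudStack API query string (sketch).

    Documented scheme: include the apiKey, sort parameters
    alphabetically, URL-encode the values, lowercase the whole string,
    HMAC-SHA1 it with the secret key, then base64- and URL-encode the
    digest as the 'signature' parameter.
    """
    params = dict(params, apiKey=api_key)
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"

# Hypothetical example: enable local storage for system VMs.
qs = sign_request(
    {"command": "updateConfiguration",
     "name": "system.vm.use.local.storage",  # assumed setting name
     "value": "true",
     "response": "json"},
    api_key="API_KEY_PLACEHOLDER",
    secret_key="SECRET_KEY_PLACEHOLDER")
# A GET to http://<management-server>:8080/client/api?<qs> would apply it.
```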

>
> Regarding selling them like VPS, I'm assuming the option to provide 
> the end user/customer a full list of available templates for them to 
> install and reinstall at their disposal can be done easily? Say we 
> wanted 10 Linux OS flavors to offer and 2 Windows. We can set this up 
> in advance and grant the user the ability via some predefined package per se?
>
Yup, but double-check the Windows licensing before offering them up. IIRC, there's nastiness in there about how licenses are counted. You'll also want to ensure any Windows images start from a sysprep state, otherwise they'll have the same SIDs and weird things will happen on the network.

>
> We currently use SolusVM for virtualization (VPS plans). So, generally 
> speaking, I'm not sure I see much of a difference overall between this 
> and CS? Please correct me if I'm wrong. Because as it stands now, 
> SolusVM works in generally the same exact way.
>
I'm not familiar with SolusVM, so can't comment on comparisons.

>
> -----Original Message-----
> From: Tim Mackey [mailto:tmackey@gmail.com]
> Sent: Tuesday, September 27, 2016 9:59 AM
> To: users@cloudstack.apache.org
> Subject: Re: New Initial HW Setup
>
> Good morning.
>
> I think it's probably best to take a step back and define a couple of 
> things.
>
>
> 1. The management server is really a highly efficient cluster manager. 
> It runs external to the compute nodes.
> 2. A compute node contains CPU and RAM, has a network fabric, and may 
> have local storage. Compute nodes can be clustered based on the native 
> capabilities of the chosen hypervisor (e.g. XenServer uses the XAPI 
> cluster manager with its rules, while KVM is a collection of hosts).
> 3. A compute node can be bare metal, but those rules are very different.
>
> I used to present a hypervisor matrix, and here's my most recent deck:
> http://www.slideshare.net/TimMackey/selecting-the-correct-hypervisor-for-cloudstack-45
> Much of what's in there will be relevant to you at this point.
>
> Looking at your specific questions:
>
>  - " *how does the primary management server hosting the CS panel, 
> utilize processing power from external additional NODES*". First you 
> will configure the management server with knowledge of the compute 
> node. The management server then understands the capacity of the 
> compute node, and from there you can do stuff like provision VMs. For 
> example, if you've a template which has a compute offering with 
> 2 vCPUs, 8GB RAM and two vNICs, that's how the management server will 
> set up the VM, which will be based on the template the user chooses.
>
> - "*sell equivalent of **standalone dedicated servers, how would that 
> work*".
> If the goal is to provide an equivalent of a bare metal virtual 
> server, then things are much more involved from the user perspective 
> (e.g. you need to start with a predefined ISO). If the goal is to 
> provide a VPS from a set of predefined OS types, then that's easier - 
> just upload a template for each one. The user then selects which 
> template they want and it gets provisioned.
>
> - "*If I'm guaranteeing 500GB SSD storage, 4 CPU Cores and **32GB RAM 
> he/she would have no way of knowing if it's cloud based or 
> **standalone, am I right or wrong*". It depends upon what you're 
> guaranteeing. Within the guest it would be easy to tell if you've 4 cores, 32 GB RAM and 500GB disk.
> What would be hard to tell is whether the vCPUs are dedicated or 
> oversubscribed, and whether the disk is SSD. As a user, I honestly care 
> less about SSD than IOPS, and there are ways to tell that.
>
> btw, everywhere I mention "user selects" that could be a workflow you 
> kick off on behalf of the user. It's entirely possible to provide only 
> guest VM access via SSH if you don't want users to have access to the 
> CloudStack management console.
>
> Hope this helps some, and if I've misspoken something, I'm certain 
> others will set me right!
>
> -tim
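The placement behaviour Tim describes in his first answer (the management server knows each compute node's remaining capacity and provisions VMs against it) reduces to a simple fit check. This toy sketch is mine, not CloudStack's actual deployment planner, which also weighs clusters, tags, affinity groups and overcommit:

```python
def pick_host(hosts, need_vcpus, need_ram_gb):
    """Return the name of the first host that can fit the requested
    compute offering, or None if no host has enough free capacity.

    `hosts` is a list of (name, free_vcpus, free_ram_gb) tuples.
    """
    for name, free_vcpus, free_ram_gb in hosts:
        if free_vcpus >= need_vcpus and free_ram_gb >= need_ram_gb:
            return name
    return None

hosts = [("node1", 1, 4), ("node2", 8, 64)]
# A 2 vCPU / 8GB compute offering skips node1 and lands on node2.
print(pick_host(hosts, 2, 8))  # node2
```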
>
>


Re: New Initial HW Setup

Posted by Tim Mackey <tm...@gmail.com>.
On Tue, Sep 27, 2016 at 12:34 PM, NOC <no...@logicweb.com> wrote:

> Thanks for the feedback!
>
> Ok so management server is just a standalone server (no fancy specs
> generally speaking) for the CS control panel itself.
>

Yes. I've run smallish installations with a single management server
running in a VM. If the management server goes down, everything keeps
going; you've just lost the UI/API for the duration of the outage.

>
> Compute NODE: CPU, RAM, Local Storage. That's my goal, using KVM as a
> platform. So essentially, I can do this for example:
>
> Compute NODE Specs:
>
> Dell R815
> 4 x Opteron 16-Core CPUs
> 256GB RAM
> 6 x 2TB SSD Drives
> Perc H700 RAID
> KVM Platform (offering Linux & Windows templates)
>
> I can then, down the road, add similar NODES into the cluster
> seamlessly via the CS management panel. Just like that, correct?
> Nothing else fancy involved?
>
You've one thing to decide: do you want to cluster those hosts (the main
benefit being shared storage and network config), or run them as
single-host clusters? It's really up to you, but I'd advise you to keep
the cluster size/capacity consistent between clusters. e.g. If you decide
to put three hosts in a cluster, always scale in three-host chunks. btw,
you'll want to enable local storage as a global config option on the
management server. If you don't do this, the system assumes shared
storage and you won't be able to start any VMs (including the system
ones).
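To make the "scale in consistent chunks" advice concrete, here's a rough
capacity sketch. The host specs come from the R815 example above; the
overcommit ratios are placeholder assumptions, not CloudStack defaults,
so substitute whatever you actually configure:

```python
# Rough capacity math for scaling a cluster in fixed-size chunks.
# Host specs match the R815 example; overcommit ratios are assumptions --
# set them to whatever you actually configure in CloudStack.

CORES_PER_HOST = 64        # 4 x 16-core Opterons
RAM_GB_PER_HOST = 256
CPU_OVERCOMMIT = 2.0       # assumed CPU overprovisioning factor
RAM_OVERCOMMIT = 1.0       # assumed memory overprovisioning factor

def cluster_capacity(hosts: int) -> dict:
    """Usable vCPUs and RAM for a cluster of `hosts` identical nodes."""
    return {
        "vcpus": int(hosts * CORES_PER_HOST * CPU_OVERCOMMIT),
        "ram_gb": int(hosts * RAM_GB_PER_HOST * RAM_OVERCOMMIT),
    }

# Scaling in three-host chunks, as suggested above:
for chunk in (1, 2, 3):
    cap = cluster_capacity(chunk * 3)
    print(f"{chunk * 3} hosts -> {cap['vcpus']} vCPUs, {cap['ram_gb']} GB RAM")
```

The point of keeping chunks identical is that each increment adds a
predictable, known quantity of capacity.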

>
> Regarding selling them like VPS, I'm assuming the option to provide the
> end user/customer a full list of available templates for them to install
> and reinstall at their disposal can be done easily? Say we wanted 10 Linux
> OS flavors to offer and 2 Windows. We can set this up in advance and grant
> the user the ability via some predefined package per se?
>
Yup, but double check the Windows licensing before offering them up.
iirc, there's nastiness in there about how licenses are counted. You'll
also want to ensure any Windows images start from a sysprepped state,
otherwise they'll have the same SIDs and weird things will happen on the
network.
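As a sketch of what a predefined catalogue could look like, the
10-Linux-plus-2-Windows list might be expressed as data and turned into
parameter dicts for CloudStack's registerTemplate call. The OS names,
image URLs and UUIDs below are placeholders, and the parameter names are
from the CloudStack API as I recall them, so double-check them against
your version:

```python
# Sketch: a predefined template catalogue (10 Linux + 2 Windows) as data,
# with each entry turned into a parameter dict for a registerTemplate API
# call. URLs, OS names and UUIDs are placeholders.

LINUX = ["CentOS 7", "CentOS 6", "Ubuntu 16.04", "Ubuntu 14.04",
         "Debian 8", "Debian 7", "Fedora 24", "openSUSE Leap",
         "Scientific Linux 7", "Arch Linux"]
WINDOWS = ["Windows Server 2012 R2", "Windows Server 2008 R2"]

def register_template_params(name: str, zone_id: str) -> dict:
    """Build the parameters for a registerTemplate call (QCOW2 on KVM)."""
    slug = name.lower().replace(" ", "-").replace(".", "")
    return {
        "command": "registerTemplate",
        "name": name,
        "displaytext": name,
        "format": "QCOW2",       # KVM disk image format
        "hypervisor": "KVM",
        "ostypeid": "<os-type-uuid>",  # placeholder: look up via listOsTypes
        "url": f"http://images.example.com/{slug}.qcow2",  # placeholder URL
        "zoneid": zone_id,
    }

catalogue = [register_template_params(n, "<zone-uuid>")
             for n in LINUX + WINDOWS]
print(len(catalogue), "templates to register")
```

Once registered, the user just sees the list of templates and picks one
at deploy time.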

>
> We currently use SolusVM for virtualization (VPS plans). So, generally
> speaking, I'm not sure I see much of a difference overall between this and
> CS? Please correct me if I'm wrong. Because as it stands now, SolusVM works
> in generally the same exact way.
>
I'm not familiar with SolusVM, so can't comment on comparisons.


RE: New Initial HW Setup

Posted by NOC <no...@logicweb.com>.
Thanks for the feedback!

Ok, so the management server is just a standalone server (no fancy specs, generally speaking) for the CS control panel itself.

Compute NODE: CPU, RAM, Local Storage. That's my goal, using KVM as a platform. So essentially, I can do this for example:

Compute NODE Specs:

Dell R815
4 x Opteron 16-Core CPUs
256GB RAM
6 x 2TB SSD Drives
Perc H700 RAID
KVM Platform (offering Linux & Windows templates)

I can then, down the road, add similar NODES into the cluster seamlessly via the CS management panel. Just like that, correct? Nothing else fancy involved?

Regarding selling them like VPS, I'm assuming the option to provide the end user/customer a full list of available templates for them to install and reinstall at their disposal can be done easily? Say we wanted 10 Linux OS flavors to offer and 2 Windows. We can set this up in advance and grant the user the ability via some predefined package per se?

We currently use SolusVM for virtualization (VPS plans). So, generally speaking, I'm not sure I see much of a difference overall between this and CS? Please correct me if I'm wrong. Because as it stands now, SolusVM works in generally the same exact way. 



Re: New Initial HW Setup

Posted by Tim Mackey <tm...@gmail.com>.
Good morning.

I think it's probably best to take a step back and define a couple of
things.


1. The management server is really a highly efficient cluster manager. It
runs external to the compute nodes.
2. A compute node contains CPU and RAM, has a network fabric, and may have
local storage. Compute nodes can be clustered based on the native
capabilities of the chosen hypervisor (e.g. XenServer uses the XAPI cluster
manager with its rules, while KVM is a collection of hosts).
3. A compute node can be bare metal, but those rules are very different.

I used to present a hypervisor matrix, and here's my most recent deck:
http://www.slideshare.net/TimMackey/selecting-the-correct-hypervisor-for-cloudstack-45.
Much of what's in there will be relevant to you at this point.

Looking at your specific questions:

 - " *how does the primary management server hosting the CS panel, utilize
processing power from external additional NODES*". First you will configure
the management server with knowledge of the compute node. The management
server then understands the capacity of the compute node, and from there
you can do stuff like provision VMs. For example, if you've a template
whose compute offering has 2 vCPUs, 8 GB RAM and two vNICs, that's how
the management server will set up the VM based on the template the user
chooses.
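For the curious, here's roughly what that looks like on the wire: a
Python sketch that signs a deployVirtualMachine request the way the
CloudStack API docs describe it (sorted, lowercased parameters,
HMAC-SHA1, base64). The endpoint, keys and UUIDs are all placeholders:

```python
# Sketch of how a client (or the UI) asks the management server to
# provision a VM: sign a deployVirtualMachine request for the API
# endpoint. Endpoint, keys and UUIDs are placeholders; the signing
# scheme follows the CloudStack API documentation.

import base64
import hashlib
import hmac
import urllib.parse

def sign(params: dict, secret_key: str) -> str:
    """Return the base64 HMAC-SHA1 signature for a CloudStack request."""
    encoded = [f"{k.lower()}={urllib.parse.quote(str(v), safe='').lower()}"
               for k, v in params.items()]
    msg = "&".join(sorted(encoded))  # params sorted alphabetically
    digest = hmac.new(secret_key.encode(), msg.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "deployVirtualMachine",
    "serviceofferingid": "<offering-uuid>",  # e.g. the 2 vCPU / 8 GB offering
    "templateid": "<template-uuid>",
    "zoneid": "<zone-uuid>",
    "apikey": "<api-key>",
    "response": "json",
}
params["signature"] = sign(params, "<secret-key>")
url = "http://mgmt.example.com:8080/client/api?" + urllib.parse.urlencode(params)
# A GET on `url` would return an async job id to poll for the new VM.
```

The management server validates the signature, checks capacity across the
compute nodes it knows about, and places the VM accordingly.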

- "*sell equivalent of **standalone dedicated servers, how would that work*".
If the goal is to provide an equivalent of a bare metal virtual server,
then things are much more involved from the user perspective (e.g. you need
to start with a predefined ISO). If the goal is to provide a VPS from a set
of predefined OS types, then that's easier - just upload a template for
each one. The user then selects which template they want and it gets
provisioned.

- "*If I'm guaranteeing 500GB SSD storage, 4 CPU Cores and **32GB RAM
he/she would have no way of knowing if it's cloud based or **standalone, am
I right or wrong*". It depends upon what you're guaranteeing. Within the
guest it would be easy to tell if you've 4 cores, 32 GB RAM and 500GB disk.
What would be hard to tell is whether the vCPUs are dedicated or
oversubscribed, and whether the disk is actually SSD. As a user, I
honestly care less about SSD than IOPS, and there are ways to tell that.
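One crude way to probe that from inside a guest (a toy sketch, not a
substitute for a proper tool like fio): time random reads and compute
IOPS. Note the page cache will inflate the number unless you bypass it:

```python
# Crude sketch of "ways to tell" storage performance from inside a guest:
# time random 4 KiB reads against a scratch file. Results are inflated by
# the page cache (a real test would use O_DIRECT or fio), but it shows
# measuring IOPS rather than trusting an "SSD" label.

import os
import random
import tempfile
import time

def measure_read_iops(file_size=4 * 1024 * 1024, reads=500,
                      block=4096) -> float:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(file_size))
        path = f.name
    fd = os.open(path, os.O_RDONLY)
    try:
        offsets = [random.randrange(0, file_size - block)
                   for _ in range(reads)]
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, block, off)  # random 4 KiB read
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return reads / elapsed

print(f"~{measure_read_iops():.0f} read IOPS (cache-inflated)")
```

A much larger file than RAM plus O_DIRECT would give honest numbers, but
even this distinguishes wildly different storage tiers.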

btw, everywhere I mention "user selects" that could be a workflow you kick
off on behalf of the user. It's entirely possible to provide only guest VM
access via SSH if you don't want users to have access to the CloudStack
management console.

Hope this helps some, and if I've misspoken something, I'm certain others
will set me right!

-tim


On Tue, Sep 27, 2016 at 8:27 AM, NOC <no...@logicweb.com> wrote:

> Hello,
>
>
>
> Looking to start up a virtualized setup using CloudStack and would like
> some
> feedback / advice as I do some researching.
>
>
>
> For starting off, was looking to do a single NODE instead of separate NODES
> for CPU/RAM, Storage.
>
>
>
> Example:
>
>
>
> Dell R815
>
> 4 x Opteron 16-Core CPUs
>
> 256GB RAM
>
> 6 x 2TB SSD Drives
>
> Perc H700 RAID
>
> Centos 7 64 bit
>
>
>
> Wouldn't that be sufficient enough to get going and just add more, similar
> nodes down the road seamlessly for additional processing power and RAM?
>
>
>
> One of my main confusion is how does the primary management server hosting
> the CS panel, utilize processing power from external additional NODES? I
> cannot understand how this happens and would appreciate some explanation.
>
>
>
> From my understanding, CS basically creates the equivalent of virtual
> servers (VPS). So in essence, scaling up or down you can offer cloud
> hosting
> (ie like shared hosting) and virtual servers. But, to sell equivalent of
> standalone dedicated servers, how would that work? You cannot offer the
> client KVM/IPMI, yet how does one prove if the server is virtualized or
> not,
> on the client side? If I'm guaranteeing 500GB SSD storage, 4 CPU Cores and
> 32GB RAM he/she would have no way of knowing if it's cloud based or
> standalone, am I right or wrong?
>
>
>
> Thanks in advance for the tips.
>
>
>
>