Posted to users@cloudstack.apache.org by Yiping Zhang <yz...@marketo.com> on 2016/03/03 01:56:36 UTC

working with over provisioning factors

Hi,

I have to change the global and cluster setting memory.overprovisioning.factor for one of my clusters, which has hundreds of running VM instances.  For the changes to take effect, I would need to restart all running instances.  Is there a way to update all running VM instances to reflect the new memory allocations without restarting every single one?

While I am on the subject of restarting VM instances: I went into the database to read the POWER_STATE_UPDATE_TIME value from the vm_instance table directly. I noticed that this value is not consistently updated for the VM instances I restarted (either in the UI or via API calls, with a stop followed by a start). Is this a bug?
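
For reference, the query I used looked roughly like this (a sketch;
table and column names as I found them in the cloud schema, so adjust
for your version):

  -- Check the recorded power state and its last update time for one VM.
  SELECT id, instance_name, state, power_state, power_state_update_time
    FROM cloud.vm_instance
   WHERE instance_name = 'i-2-345-VM'   -- hypothetical instance name
     AND removed IS NULL;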

Yiping

Re: working with over provisioning factors

Posted by Prashant Mishra <pr...@accelerite.com>.
1. You need to stop and start the VM for the change to take real effect
(i.e., to change the VM's settings at the hypervisor level).
2. If you want to change the available resources at the DB level, try
updating the cpu/memory overprovisioning values in user_vm_details and
cluster_details; see the sketch below. I have never tried this myself,
but it should work.
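
Something like the following (a sketch only; the key names
'memory.overprovisioning.factor' and 'memoryOvercommitRatio' are what I
would expect in these key/value tables, so verify them in your DB and
take a backup first):

  -- Cluster-level factor (cluster_details is a key/value table).
  UPDATE cloud.cluster_details
     SET value = '2.0'                             -- new factor
   WHERE cluster_id = 42                           -- hypothetical cluster id
     AND name = 'memory.overprovisioning.factor';  -- key name assumed

  -- Per-VM ratio (user_vm_details is also key/value).
  UPDATE cloud.user_vm_details
     SET value = '2.0'
   WHERE vm_id = 345                               -- hypothetical VM id
     AND name = 'memoryOvercommitRatio';           -- key name assumed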

~prashant
On 3/3/16, 6:26 AM, "Yiping Zhang" <yz...@marketo.com> wrote:

>Hi,
>
>I have to change the global and cluster setting
>memory.overprovisioning.factor for one of my clusters, which has
>hundreds of running VM instances.  For the changes to take effect, I
>would need to restart all running instances.  Is there a way to update
>all running VM instances to reflect the new memory allocations without
>restarting every single one?
>
>While I am on the subject of restarting VM instances: I went into the
>database to read the POWER_STATE_UPDATE_TIME value from the vm_instance
>table directly. I noticed that this value is not consistently updated
>for the VM instances I restarted (either in the UI or via API calls,
>with a stop followed by a start). Is this a bug?
>
>Yiping





Re: working with over provisioning factors

Posted by Rafael Weingärtner <ra...@gmail.com>.
You need to restart the VMs. In some environments, such as XenServer,
the hypervisor will only allow overcommitment of resources as long as
it can fulfil the minimum amount of resources guaranteed to each VM,
and that minimum is computed dynamically by ACS based on the
over-provisioning factor. Therefore, to change the minimum amount of
resources allocated to the VMs, you need to restart them.
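
If you want to check which ratio each running VM was last started with,
you can inspect the per-VM detail rows (a sketch; the
'memoryOvercommitRatio' key name is an assumption, check your schema):

  -- List the memory overcommit ratio recorded for each running VM.
  SELECT vm.id, vm.instance_name, d.value AS memory_overcommit_ratio
    FROM cloud.vm_instance vm
    JOIN cloud.user_vm_details d ON d.vm_id = vm.id
   WHERE d.name = 'memoryOvercommitRatio'  -- key name assumed
     AND vm.removed IS NULL;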

On Sat, Mar 5, 2016 at 3:50 AM, ilya <il...@gmail.com> wrote:

> Yiping
>
> I've done this before - several times in very large environments.
>
> I'm not certain why you would need to restart the VMs. Restarting them
> will not reflect the change in CloudStack's resource accounting until
> you modify the DB.
>
> If you want the memory over-provisioning change to take effect, you
> need to modify the corresponding row for each applicable VM id in the
> cloud.user_vm_details table.
>
> The iterative SQL can be a bit painful to get right, so I'd suggest
> you do a DB dump beforehand.
>
> Regards
> ilya
>
> PS: memory over-provisioning is generally a bad idea, but I'm sure you
> know what works best for your environment.
>
>
>
> On 3/2/16 4:56 PM, Yiping Zhang wrote:
> > Hi,
> >
> > I have to change the global and cluster setting
> > memory.overprovisioning.factor for one of my clusters, which has
> > hundreds of running VM instances.  For the changes to take effect, I
> > would need to restart all running instances.  Is there a way to
> > update all running VM instances to reflect the new memory allocations
> > without restarting every single one?
> >
> > While I am on the subject of restarting VM instances: I went into the
> > database to read the POWER_STATE_UPDATE_TIME value from the
> > vm_instance table directly. I noticed that this value is not
> > consistently updated for the VM instances I restarted (either in the
> > UI or via API calls, with a stop followed by a start). Is this a bug?
> >
> > Yiping
> >
>



-- 
Rafael Weingärtner

Re: working with over provisioning factors

Posted by ilya <il...@gmail.com>.
Yiping

I've done this before - several times in very large environments.

I'm not certain why you would need to restart the VMs. Restarting them
will not reflect the change in CloudStack's resource accounting until
you modify the DB.

If you want the memory over-provisioning change to take effect, you
need to modify the corresponding row for each applicable VM id in the
cloud.user_vm_details table.

The iterative SQL can be a bit painful to get right, so I'd suggest you
do a DB dump beforehand.
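
To avoid hand-iterating over VM ids, a single multi-table UPDATE can
cover every VM in the affected cluster. A sketch only - the
'memoryOvercommitRatio' key name and the vm_instance -> host -> cluster
join path are my assumptions, so verify them against your schema (and
dump the DB first, e.g. with mysqldump):

  -- Set the recorded memory overcommit ratio for all running VMs in
  -- one cluster (key name and join path assumed; verify before running).
  UPDATE cloud.user_vm_details d
    JOIN cloud.vm_instance vm ON vm.id = d.vm_id
    JOIN cloud.host h ON h.id = vm.host_id
     SET d.value = '2.0'                  -- new ratio
   WHERE d.name = 'memoryOvercommitRatio'
     AND h.cluster_id = 42                -- hypothetical cluster id
     AND vm.removed IS NULL;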

Regards
ilya

PS: memory over-provisioning is generally a bad idea, but I'm sure you
know what works best for your environment.



On 3/2/16 4:56 PM, Yiping Zhang wrote:
> Hi,
> 
> I have to change the global and cluster setting memory.overprovisioning.factor for one of my clusters, which has hundreds of running VM instances.  For the changes to take effect, I would need to restart all running instances.  Is there a way to update all running VM instances to reflect the new memory allocations without restarting every single one?
> 
> While I am on the subject of restarting VM instances: I went into the database to read the POWER_STATE_UPDATE_TIME value from the vm_instance table directly. I noticed that this value is not consistently updated for the VM instances I restarted (either in the UI or via API calls, with a stop followed by a start). Is this a bug?
> 
> Yiping
>