Posted to users@cloudstack.apache.org by Piotr Pisz <pi...@piszki.pl> on 2019/03/08 07:31:45 UTC

downloaded template vs disk service offering

Hi Users :-)

I have a question.
If I make a template from a disk that had the cache = writeback parameter set, all new machines get cache = writeback, and that's fine.
If I load a template from outside, the volume has cache = none, and I have not found a place in the DB where I could change this parameter.
Do you know where we can set the template cache mode?

PS. A disk offering created with the GUI does not set the cache parameter in the DB...

Regards,
Piotr


Re: downloaded template vs disk service offering

Posted by Ivan Kudryavtsev <ku...@bw-sw.com>.
Avoid hypervisor caching when possible; it is better to add RAM to the VM or manage writeback settings inside the guest. Going with hypervisor writeback, you will end up one day with angry users who have lost much more data than you can imagine, even if you don't use migrations at all.

Want faster operations? Improve your storage: build RAID 0 across attached disks, combine SSD and HDD in a single VM to deliver bcache, lvmcache, etc. Use Ceph cache pools on NVMe over low-latency DC switches, but don't use writeback.


RE: downloaded template vs disk service offering

Posted by Andrija Panic <an...@shapeblue.com>.
Hi Piotr,

https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration 

So yes, since Ceph is considered "clustered storage", live migration works; but in the case of QCOW2 on NFS it doesn't actually work.

BTW, as for Ceph, you would probably also want to check the RBD client-side write-back cache (versus/instead of qemu cache=writeback), i.e. the 32 MB writeback cache in librbd per volume, etc.
I believe I did test one caching mode against the other (I was operating a Ceph-backed CloudStack installation myself a while ago); afaik, there were no visible performance/latency differences between RBD write-back caching and qemu writeback caching (with both active there were performance issues).
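For reference, the librbd client-side cache mentioned above is controlled from ceph.conf on the hypervisor side. A minimal sketch (option names from the Ceph documentation; the sizes shown are illustrative, roughly matching the 32 MB default mentioned above):

```ini
[client]
rbd cache = true
# per-volume cache size in bytes (32 MiB)
rbd cache size = 33554432
rbd cache max dirty = 25165824
# safety valve: behave as writethrough until the guest issues its first flush
rbd cache writethrough until flush = true
```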

Kind regards,
Andrija

andrija.panic@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
  
 





RE: downloaded template vs disk service offering

Posted by Piotr Pisz <pi...@piszki.pl>.
Hey Andrija,

Thank you for the explanation, now I finally understand how it works :-)
As for live migration, migrating such machines (with cache = writeback) on Ceph RBD (CentOS 7 KVM) works without any problem.

Regards,
Piotr





RE: downloaded template vs disk service offering

Posted by Andrija Panic <an...@shapeblue.com>.
Hi Piotr,

It's true that setting the cache mode for a Disk offering via the GUI doesn't get written to the DB (does the API work fine, did you test it? If so, please raise a GitHub issue with a description).

In general, you can initially set the cache mode for a disk only on a Disk Offering (and possibly also on a Compute Offering for the root disk).
When you make a new template from an existing disk, this new template will have the source_template_id field in the vm_templates table (on its row) set to the original template from which you created the volume (template --> disk --> new template).

Also worth noting: all volumes inherit this cache mode setting "on the fly" (when you start the VM) from their template (all volumes have a "template_id" field in the "volumes" table).

So if you set cache_mode (via the DB) for a specific template, it will affect ALL VMs created from that template (once you stop and start those VMs, obviously). I.e. when you deploy a new VM, some column values are copied over to the actual volume row, but some, like this cache_mode, are just read on the fly.
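A minimal sketch of the kind of DB change described above, composing (not executing) the SQL. The table and column names (vm_templates, cache_mode) are taken from this thread, not verified against any particular CloudStack release; check your actual schema and back up the database before running anything like this.

```python
# Hypothetical helper: build the UPDATE statement to set a template's
# cache mode. Schema names are assumptions from the mailing-list thread.
ALLOWED_CACHE_MODES = {"none", "writeback", "writethrough"}

def cache_mode_update_sql(template_id: int, cache_mode: str) -> str:
    """Compose (but do not run) the UPDATE for a template's cache_mode."""
    if cache_mode not in ALLOWED_CACHE_MODES:
        raise ValueError(f"unexpected cache mode: {cache_mode!r}")
    # int() guards against injection via the id argument
    return (f"UPDATE vm_templates SET cache_mode = '{cache_mode}' "
            f"WHERE id = {int(template_id)};")

print(cache_mode_update_sql(42, "writeback"))
# -> UPDATE vm_templates SET cache_mode = 'writeback' WHERE id = 42;
```

Remember that, as noted above, the new value only takes effect for a VM after a stop/start, since the mode is read on the fly from the template row.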

Nevertheless, I would strongly discourage using write-back cache for disks, since:

- it can be severely risky; in case of power loss, kernel panic, etc., you will end up with corrupted volumes.
- VMs can NOT be live migrated (at least with KVM) with cache set to anything other than none (google it yourself); happy to learn whether this limitation is present for other hypervisors as well.
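For context, the cache mode the points above refer to ends up in the libvirt domain XML as the driver's cache attribute. A rough sketch of an RBD-backed disk with cache='none' (pool, volume name, and monitor host are placeholders, not values from this thread):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='cloudstack/volume-uuid'>
    <host name='ceph-mon.example' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

You can inspect what your hypervisor actually uses with `virsh dumpxml <vm>` on the KVM host.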

Fine to play with, but I would skip it in production.

Kind regards,
Andrija
