Posted to dev@cloudstack.apache.org by Paul Angus <pa...@shapeblue.com> on 2018/02/20 08:46:47 UTC

Caching modes

Hey guys,

Can anyone shed any light on write caching in CloudStack? cacheMode is available through the UI for data disks (but not root disks), but is not documented as an API option for data or root disks (although it is documented as a response field for data disks).
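[For context: on KVM, this setting ultimately surfaces as the `cache` attribute on the disk `<driver>` element of the libvirt domain XML, as the `virsh dumpxml` output later in this thread shows. A minimal sketch of pulling those values out of a domain XML; the sample XML below is illustrative, not taken from a real host:]

```python
import xml.etree.ElementTree as ET

def disk_cache_modes(domain_xml: str) -> dict:
    """Map each disk target (vda, vdb, ...) to its libvirt cache mode.

    Disks whose <driver> carries no cache attribute are reported as
    'default' (libvirt's implicit behavior when the attribute is absent).
    """
    root = ET.fromstring(domain_xml)
    modes = {}
    for disk in root.findall(".//devices/disk"):
        driver = disk.find("driver")
        target = disk.find("target")
        if driver is not None and target is not None:
            modes[target.get("dev")] = driver.get("cache", "default")
    return modes

# Illustrative sample mirroring the shape of `virsh dumpxml` output.
sample = """<domain>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

print(disk_cache_modes(sample))  # {'vda': 'none', 'vdb': 'writeback'}
```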

#huh?

thanks



paul.angus@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 


Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
You merged branch andrijapanic-patch-1 into master. However, this merge
was done in your own repository and not in the ACS one.
You had this PR opened: https://github.com/apache/cloudstack-docs/pull/22,
and it was merged on Nov 9, 2017.

That is the content you are seeing at the link http://docs.cloudstack.apache.org/en/latest/networking/vxlan.html#important-note-on-max-number-of-multicast-groups-and-thus-vxlan-intefaces



On Wed, Feb 21, 2018 at 9:08 AM, Andrija Panic <an...@gmail.com>
wrote:

> Rafael, I just successfully merged (strange?)
> https://github.com/andrijapanic/cloudstack-docs/pull/1 and I can see
> changes are publicly available on
> http://docs.cloudstack.apache.org/en/latest/networking/
> vxlan.html#important-note-on-max-number-of-multicast-
> groups-and-thus-vxlan-intefaces
>
> Is it normal that I can merge my own pull request on the cloudstack-docs
> repo? There are no restrictions (I was able to open a PR and merge it myself).
>
> On 21 February 2018 at 00:58, Andrija Panic <an...@gmail.com>
> wrote:
>
> > pls merge also https://github.com/apache/cloudstack-docs-admin/pull/48
> >
> > just correct code block syntax (to display code properly)
> >
> > On 20 February 2018 at 21:02, Rafael Weingärtner <
> > rafaelweingartner@gmail.com> wrote:
> >
> >> Thanks, we will proceed with reviewing.
> >>
> >> On Tue, Feb 20, 2018 at 3:12 PM, Andrija Panic <andrija.panic@gmail.com
> >
> >> wrote:
> >>
> >> > Here it is:
> >> >
> >> > https://github.com/apache/cloudstack-docs-admin/pull/47
> >> >
> >> > Added KVM online storage migration (atm only CEPH/NFS to SolidFire,
> >> > new in the 4.11 release).
> >> > Added KVM cache mode setup and limitations.
> >> >
> >> >
> >> > Cheers
> >> >
> >> > On 20 February 2018 at 16:49, Rafael Weingärtner <
> >> > rafaelweingartner@gmail.com> wrote:
> >> >
> >> > > If you are willing to write it down, please do so and open a PR. We
> >> > > will review and merge it afterwards.
> >> > >
> >> > > On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <
> >> andrija.panic@gmail.com
> >> > >
> >> > > wrote:
> >> > >
> >> > > > I advise (or not... depends on the point of view) that it stay
> >> > > > that way, because when you activate write-back cache, live
> >> > > > migrations will stop working, and this makes *Enable maintenance
> >> > > > mode (put host into maintenance)* impossible.
> >> > > >
> >> > > > I would perhaps suggest adding documentation for "advanced users"
> >> > > > or similar, saying "it's possible to enable this via a DB hack, but
> >> > > > be warned about the live-migration consequences, etc.", since it
> >> > > > will cause more problems if people start using it.
> >> > > >
> >> > > > If you choose to do so, let me know; I can briefly write that
> >> > > > documentation.
> >> > > >
> >> > > > Not to mention it can be unsafe (a power failure is less likely, I
> >> > > > guess, but a rare kernel panic etc. might have its consequences, I
> >> > > > assume).
> >> > > >
> >> > > > It does indeed increase performance on NFS a lot, but not
> >> > > > necessarily on CEPH (if you are using the librbd cache on the
> >> > > > client side, as explained above).
> >> > > >
> >> > > > On 20 February 2018 at 15:48, Rafael Weingärtner <
> >> > > > rafaelweingartner@gmail.com> wrote:
> >> > > >
> >> > > > > Yes. Weirdly enough, the code uses the value from the database
> >> > > > > if it is provided there, but there is no easy way for users to
> >> > > > > change that configuration in the database. ¯\_(ツ)_/¯
> >> > > > >
> >> > > > > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <
> >> > > andrija.panic@gmail.com
> >> > > > >
> >> > > > > wrote:
> >> > > > >
> >> > > > > > So it seems that passing the cachemode value to the API is not
> >> > > > > > there, or is somehow messed up, but the deployVM process does
> >> > > > > > read the DB values from the disk_offering table for sure, and
> >> > > > > > applies them to the XML file for KVM.
> >> > > > > > This is on ACS versions above 4.8.x.
> >> > > > > >
> >> > > > > >
> >> > > > > > On 20 February 2018 at 15:44, Andrija Panic <
> >> > andrija.panic@gmail.com
> >> > > >
> >> > > > > > wrote:
> >> > > > > >
> >> > > > > > > I have edited the disk_offering table; in the cache_mode
> >> > > > > > > column just enter "writeback". Stop and start the VM, and it
> >> > > > > > > will pick up/inherit the cache_mode from its parent offering.
> >> > > > > > > This also applies to Compute/Service offerings, again inside
> >> > > > > > > the disk_offering table - just tested both.
> >> > > > > > >
> >> > > > > > > i.e.
> >> > > > > > >
> >> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
> >> > > > > > > WHERE `id`=102; # Compute Offering (Service Offering)
> >> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
> >> > > > > > > WHERE `id`=114; # data disk offering
> >> > > > > > >
> >> > > > > > > Before SQL:
> >> > > > > > >
> >> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> >> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> >> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
> >> > > > > > >       <target dev='vda' bus='virtio'/>
> >> > > > > > > --
> >> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> >> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
> >> > > > > > >       <target dev='vdb' bus='virtio'/>
> >> > > > > > > --
> >> > > > > > >
> >> > > > > > > STOP and START VM = after SQL
> >> > > > > > >
> >> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> >> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> >> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
> >> > > > > > >       <target dev='vda' bus='virtio'/>
> >> > > > > > > --
> >> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> >> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
> >> > > > > > >       <target dev='vdb' bus='virtio'/>
> >> > > > > > > --
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> >> > > > > > > rafaelweingartner@gmail.com> wrote:
> >> > > > > > >
> >> > > > > > >> I have no idea how it can change the performance. If you
> >> look at
> >> > > the
> >> > > > > > >> content of the commit you provided, it is only the commit
> >> that
> >> > > > enabled
> >> > > > > > the
> >> > > > > > >> use of getCacheMode from disk offerings. However, it is not
> >> > > exposing
> >> > > > > any
> >> > > > > > >> way to users to change that value/configuration in the
> >> > database. I
> >> > > > > might
> >> > > > > > >> have missed it; do you see any API methods that receive the
> >> > > > parameter
> >> > > > > > >> "cacheMode" and then pass this parameter to a
> "diskOffering"
> >> > > object,
> >> > > > > and
> >> > > > > > >> then persist/update this object in the database?
> >> > > > > > >>
> >> > > > > > >> May I ask how are you guys changing the cacheMode
> >> configuration?
> >> > > > > > >>
> >> > > > > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
> >> > > > paul.angus@shapeblue.com
> >> > > > > >
> >> > > > > > >> wrote:
> >> > > > > > >>
> >> > > > > >> > I'm working with some guys who are experimenting with the
> >> > > > > >> > setting, as it definitely seems to change the performance of
> >> > > > > >> > data disks. It also changes the XML of the VM which is
> >> > > > > >> > created.
> >> > > > > > >> >
> >> > > > > > >> > p.s.
> >> > > > > > >> > I've found this commit;
> >> > > > > > >> >
> >> > > > > >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a42339d5f267d49c82343aefb
> >> > > > > > >> >
> >> > > > > > >> > so I've got something to investigate now, but API
> >> > documentation
> >> > > > must
> >> > > > > > >> > definitely be askew.
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> > -----Original Message-----
> >> > > > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> >> > > > > > >> > Sent: 20 February 2018 12:31
> >> > > > > > >> > To: dev <de...@cloudstack.apache.org>
> >> > > > > > >> > Subject: Re: Caching modes
> >> > > > > > >> >
> >> > > > > > >> > This cache mode parameter does not exist in
> >> > > > "CreateDiskOfferingCmd"
> >> > > > > > >> > command. I also checked some commits from 2, 3, 4 and 5
> >> years
> >> > > ago,
> >> > > > > and
> >> > > > > > >> > this parameter was never there. If you check the API in
> >> [1],
> >> > you
> >> > > > can
> >> > > > > > see
> >> > > > > > >> > that it is not an expected parameter. Moreover, I do not
> >> see
> >> > any
> >> > > > use
> >> > > > > > of
> >> > > > > > >> > "setCacheMode" in the code (in case it is updated by some
> >> > other
> >> > > > > > method).
> >> > > > > > >> > Interestingly enough, the code uses "getCacheMode".
> >> > > > > > >> >
> >> > > > > >> > In summary, it is not a feature, and it does not work. It
> >> > > > > >> > looks like some leftover from the dark ages, when people
> >> > > > > >> > could commit anything and then just leave a half
> >> > > > > >> > implementation in our code base.
> >> > > > > > >> >
> >> > > > > > >> > [1]
> >> > > > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/createDiskOffering.html
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> >> > > > > > andrija.panic@gmail.com
> >> > > > > > >> >
> >> > > > > > >> > wrote:
> >> > > > > > >> >
> >> > > > > >> > > I can also assume that "cachemode" as an API parameter is
> >> > > > > >> > > not supported, since creating a data disk offering via the
> >> > > > > >> > > GUI also doesn't set it in the DB table.
> >> > > > > > >> > >
> >> > > > > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> >> > > > > > storagetype=shared
> >> > > > > > >> > > disksize=1024 cachemode=writeback
> >> > > > > > >> > >
> >> > > > > >> > > this also does not set cachemode in the table... my guess
> >> > > > > >> > > is it's not implemented in the API.
> >> > > > > > >> > >
> >> > > > > > >> > > Let me know if I can help with any testing here.
> >> > > > > > >> > >
> >> > > > > > >> > > Cheers
> >> > > > > > >> > >
> >> > > > > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> >> > > > > > andrija.panic@gmail.com>
> >> > > > > > >> > > wrote:
> >> > > > > > >> > >
> >> > > > > > >> > > > Hi Paul,
> >> > > > > > >> > > >
> >> > > > > >> > > > not directly answering your question, but here are some
> >> > > > > >> > > > observations and a "warning" if clients are using
> >> > > > > >> > > > write-back cache at the KVM level.
> >> > > > > > >> > > >
> >> > > > > > >> > > >
> >> > > > > >> > > > I have (a long time ago) tested performance in 3
> >> > > > > >> > > > combinations (this was not really thorough testing, but
> >> > > > > >> > > > brief testing with FIO and random IO WRITE):
> >> > > > > > >> > > >
> >> > > > > >> > > > - just CEPH rbd cache (on KVM side)
> >> > > > > >> > > >            i.e. [client]
> >> > > > > >> > > >                  rbd cache = true
> >> > > > > >> > > >                  rbd cache writethrough until flush = true
> >> > > > > >> > > >                  # (this is the default, 32MB per volume, afaik)
> >> > > > > > >> > > >
> >> > > > > >> > > > - just KVM write-back cache (had to manually edit the
> >> > > > > >> > > > disk_offering table to activate the cache mode, since
> >> > > > > >> > > > when creating a new disk offering via the GUI, the
> >> > > > > >> > > > disk_offering table was NOT populated with the
> >> > > > > >> > > > "write-back" setting/value!)
> >> > > > > >> > > >
> >> > > > > >> > > > - both CEPH and KVM write-back cache active
> >> > > > > > >> > > >
> >> > > > > >> > > > My observations were as follows, but it would be good
> >> > > > > >> > > > to have someone else actually confirm them:
> >> > > > > > >> > > >
> >> > > > > >> > > > - same performance with only CEPH caching or with only
> >> > > > > >> > > > KVM caching
> >> > > > > >> > > > - a bit worse performance with both CEPH and KVM caching
> >> > > > > >> > > > active (nonsense combination, I know...)
> >> > > > > > >> > > >
> >> > > > > > >> > > >
> >> > > > > >> > > > Please keep in mind that some ACS functionality, such
> >> > > > > >> > > > as KVM live migrations on shared storage (NFS/CEPH), is
> >> > > > > >> > > > NOT supported when you use KVM write-back cache, since
> >> > > > > >> > > > this is considered an "unsafe" migration; more info here:
> >> > > > > >> > > > https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration
> >> > > > > > >> > > >
> >> > > > > > >> > > > or in short:
> >> > > > > > >> > > > "
> >> > > > > > >> > > > The libvirt management layer includes checks for
> >> migration
> >> > > > > > >> > > > compatibility based on several factors. If the guest
> >> > storage
> >> > > > is
> >> > > > > > >> > > > hosted on a clustered file system, is read-only or is
> >> > marked
> >> > > > > > >> > > > shareable, then the cache mode is ignored when
> >> determining
> >> > > if
> >> > > > > > >> > > > migration can be allowed. Otherwise libvirt will not
> >> allow
> >> > > > > > migration
> >> > > > > > >> > > > unless the cache mode is set to none. However, this
> >> > > > restriction
> >> > > > > > can
> >> > > > > > >> > > > be overridden with the “unsafe” option to the
> migration
> >> > > APIs,
> >> > > > > > which
> >> > > > > > >> > > > is also supported by virsh, as for example in
> >> > > > > > >> > > >
> >> > > > > > >> > > > virsh migrate --live --unsafe
> >> > > > > > >> > > > "
> >> > > > > > >> > > >
> >> > > > > > >> > > > Cheers
> >> > > > > > >> > > > Andrija
> >> > > > > > >> > > >
> >> > > > > > >> > > >
> >> > > > > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> >> > > > > > paul.angus@shapeblue.com>
> >> > > > > > >> > > wrote:
> >> > > > > > >> > > >
> >> > > > > > >> > > >> Hi Wido,
> >> > > > > > >> > > >>
> >> > > > > >> > > >> This is for KVM (with a Ceph backend, as it happens);
> >> > > > > >> > > >> the API documentation is out of sync with the UI
> >> > > > > >> > > >> capabilities, so I'm trying to figure out if we
> >> > > > > >> > > >> *should* be able to set cacheMode for root disks. It
> >> > > > > >> > > >> seems to make quite a difference to performance.
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> -----Original Message-----
> >> > > > > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> >> > > > > > >> > > >> Sent: 20 February 2018 09:03
> >> > > > > > >> > > >> To: dev@cloudstack.apache.org
> >> > > > > > >> > > >> Subject: Re: Caching modes
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> >> > > > > > >> > > >> > Hey guys,
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >> > Can anyone shed any light on write caching in
> >> > CloudStack?
> >> > > > > > >> >  cacheMode
> >> > > > > > >> > > >> is available through the UI for data disks (but not
> >> root
> >> > > > > disks),
> >> > > > > > >> but
> >> > > > > > >> > not
> >> > > > > > >> > > >> documented as an API option for data or root disks
> >> > > (although
> >> > > > is
> >> > > > > > >> > > documented
> >> > > > > > >> > > >> as a response for data disks).
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> What hypervisor?
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> In case of KVM it's passed down to XML which then
> >> passes
> >> > it
> >> > > > to
> >> > > > > > >> > Qemu/KVM
> >> > > > > > >> > > >> which then handles the caching.
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> The implementation varies per hypervisor, so that
> >> should
> >> > be
> >> > > > the
> >> > > > > > >> > > question.
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> Wido
> >> > > > > > >> > > >>
> >> > > > > > >> > > >>
> >> > > > > > >> > > >> > #huh?
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >> > thanks
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >> >
> >> > > > > > >> > > >>
> >> > > > > > >> > > >
> >> > > > > > >> > > >
> >> > > > > > >> > > >
> >> > > > > > >> > > > --
> >> > > > > > >> > > >
> >> > > > > > >> > > > Andrija Panić
> >> > > > > > >> > > >
> >> > > > > > >> > >
> >> > > > > > >> > >
> >> > > > > > >> > >
> >> > > > > > >> > > --
> >> > > > > > >> > >
> >> > > > > > >> > > Andrija Panić
> >> > > > > > >> > >
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> >
> >> > > > > > >> > --
> >> > > > > > >> > Rafael Weingärtner
> >> > > > > > >> >
> >> > > > > > >>
> >> > > > > > >>
> >> > > > > > >>
> >> > > > > > >> --
> >> > > > > > >> Rafael Weingärtner
> >> > > > > > >>
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > --
> >> > > > > > >
> >> > > > > > > Andrija Panić
> >> > > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > > > --
> >> > > > > >
> >> > > > > > Andrija Panić
> >> > > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Rafael Weingärtner
> >> > > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > >
> >> > > > Andrija Panić
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > Rafael Weingärtner
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> >
> >> > Andrija Panić
> >> >
> >>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
Rafael, I just successfully merged (strange?)
https://github.com/andrijapanic/cloudstack-docs/pull/1 and I can see
changes are publicly available on
http://docs.cloudstack.apache.org/en/latest/networking/vxlan.html#important-note-on-max-number-of-multicast-groups-and-thus-vxlan-intefaces

Is this normal, that I can merge my own pull request on cloudstack-doc repo
? There is no limitations (I was able to make PR and merge myself)

On 21 February 2018 at 00:58, Andrija Panic <an...@gmail.com> wrote:

> pls merge also https://github.com/apache/cloudstack-docs-admin/pull/48
>
> just correct code block syntax (to display code properly)
>
> On 20 February 2018 at 21:02, Rafael Weingärtner <
> rafaelweingartner@gmail.com> wrote:
>
>> Thanks, we will proceed reviweing
>>
>> On Tue, Feb 20, 2018 at 3:12 PM, Andrija Panic <an...@gmail.com>
>> wrote:
>>
>> > Here it is:
>> >
>> > https://github.com/apache/cloudstack-docs-admin/pull/47
>> >
>> > Added KVM online storage migration (atm only CEPH/NFS to SolidFire, new
>> in
>> > 4.11 release)
>> > Added KVM cache mode setup and limitations.
>> >
>> >
>> > Cheers
>> >
>> > On 20 February 2018 at 16:49, Rafael Weingärtner <
>> > rafaelweingartner@gmail.com> wrote:
>> >
>> > > If you are willing to write it down, please do so, and open a PR. We
>> will
>> > > review and merged it afterwards.
>> > >
>> > > On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <
>> andrija.panic@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > I advise (or not...depends on point of view) to stay that way -
>> because
>> > > > when you activate write-back cache - live migrations will stop, and
>> > this
>> > > > makes *Enable maintenance mode (put host into maintenance)*
>> impossible.
>> > > >
>> > > > I would perhaps suggest that there is documentation for "advanced
>> > users"
>> > > or
>> > > > similar, that will say "it's possible to enable this and this way
>> via
>> > DB
>> > > > hack, but be warned on live migration consequences etc..." since
>> this
>> > > will
>> > > > render more problems if people start using it.
>> > > >
>> > > > If you choose to do, let me know, I can write that (documentation)
>> > > briefly.
>> > > >
>> > > > Not to mention it can be unsafe (power failure - less possible I
>> guess,
>> > > but
>> > > > rare kernel panic etc might have it's consequences I assume)
>> > > >
>> > > > It does indeed increase performance on NFS much, but not
>> necessarily on
>> > > > CEPH (if you are using lirbd cache on client side as explained
>> above)
>> > > >
>> > > > On 20 February 2018 at 15:48, Rafael Weingärtner <
>> > > > rafaelweingartner@gmail.com> wrote:
>> > > >
>> > > > > Yes. Weird enough, the code is using the value in the database if
>> it
>> > is
>> > > > > provided there, but there is no easy way for users to change that
>> > > > > configuration in the database. ¯\_(ツ)_/¯
>> > > > >
>> > > > > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <
>> > > andrija.panic@gmail.com
>> > > > >
>> > > > > wrote:
>> > > > >
>> > > > > > So it seems that just passing the cachemode value to API is not
>> > > there,
>> > > > or
>> > > > > > somehow messedup, but deployVM process does read DB values from
>> > > > > > disk_offering table for sure, and applies it to XML file for
>> KVM.
>> > > > > > This is above ACS 4.8.x.
>> > > > > >
>> > > > > >
>> > > > > > On 20 February 2018 at 15:44, Andrija Panic <
>> > andrija.panic@gmail.com
>> > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > I have edited the disk_offering table, in the cache_mode just
>> > enter
>> > > > > > > "writeback". Stop and start VM, and it will pickup/inherit the
>> > > > > cache_mode
>> > > > > > > from it's parrent offering
>> > > > > > > This also applies to Compute/Service offering, again inside
>> > > > > disk_offering
>> > > > > > > table - just tested both
>> > > > > > >
>> > > > > > > i.e.
>> > > > > > >
>> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
>> WHERE
>> > > > > > > `id`=102; # Compute Offering (Service offering)
>> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
>> WHERE
>> > > > > > > `id`=114; #data disk offering
>> > > > > > >
>> > > > > > > Before SQL:
>> > > > > > >
>> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
>> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
>> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
>> > > > > a772-1ea939ef6ec3/1b655159-
>> > > > > > > ae10-41cf-8987-f1cfb47fe453'/>
>> > > > > > >       <target dev='vda' bus='virtio'/>
>> > > > > > > --
>> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
>> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
>> > > > > a772-1ea939ef6ec3/09bdadcb-
>> > > > > > > ec6e-4dda-b37b-17b1a749257f'/>
>> > > > > > >       <target dev='vdb' bus='virtio'/>
>> > > > > > > --
>> > > > > > >
>> > > > > > > STOP and START VM = after SQL
>> > > > > > >
>> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
>> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
>> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
>> > > > > a772-1ea939ef6ec3/1b655159-
>> > > > > > > ae10-41cf-8987-f1cfb47fe453'/>
>> > > > > > >       <target dev='vda' bus='virtio'/>
>> > > > > > > --
>> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
>> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
>> > > > > a772-1ea939ef6ec3/09bdadcb-
>> > > > > > > ec6e-4dda-b37b-17b1a749257f'/>
>> > > > > > >       <target dev='vdb' bus='virtio'/>
>> > > > > > > --
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
>> > > > > > > rafaelweingartner@gmail.com> wrote:
>> > > > > > >
>> > > > > > >> I have no idea how it can change the performance. If you
>> look at
>> > > the
>> > > > > > >> content of the commit you provided, it is only the commit
>> that
>> > > > enabled
>> > > > > > the
>> > > > > > >> use of getCacheMode from disk offerings. However, it is not
>> > > exposing
>> > > > > any
>> > > > > > >> way to users to change that value/configuration in the
>> > database. I
>> > > > > might
>> > > > > > >> have missed it; do you see any API methods that receive the
>> > > > parameter
>> > > > > > >> "cacheMode" and then pass this parameter to a "diskOffering"
>> > > object,
>> > > > > and
>> > > > > > >> then persist/update this object in the database?
>> > > > > > >>
>> > > > > > >> May I ask how are you guys changing the cacheMode
>> configuration?
>> > > > > > >>
>> > > > > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
>> > > > paul.angus@shapeblue.com
>> > > > > >
>> > > > > > >> wrote:
>> > > > > > >>
>> > > > > > >> > I'm working with some guys who are experimenting with the
>> > > setting
>> > > > as
>> > > > > > if
>> > > > > > >> > definitely seems to change the performance of data disks.
>> It
>> > > also
>> > > > > > >> changes
>> > > > > > >> > the XML of the VM which is created.
>> > > > > > >> >
>> > > > > > >> > p.s.
>> > > > > > >> > I've found this commit;
>> > > > > > >> >
>> > > > > > >> > https://github.com/apache/clou
>> dstack/commit/1edaa36cc68e845a
>> > > > > > >> 42339d5f267d49
>> > > > > > >> > c82343aefb
>> > > > > > >> >
>> > > > > > >> > so I've got something to investigate now, but API
>> > documentation
>> > > > must
>> > > > > > >> > definitely be askew.
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > paul.angus@shapeblue.com
>> > > > > > >> > www.shapeblue.com
>> > > > > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > > > > > >> > @shapeblue
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > -----Original Message-----
>> > > > > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmai
>> l.com]
>> > > > > > >> > Sent: 20 February 2018 12:31
>> > > > > > >> > To: dev <de...@cloudstack.apache.org>
>> > > > > > >> > Subject: Re: Caching modes
>> > > > > > >> >
>> > > > > > >> > This cache mode parameter does not exist in
>> > > > "CreateDiskOfferingCmd"
>> > > > > > >> > command. I also checked some commits from 2, 3, 4 and 5
>> years
>> > > ago,
>> > > > > and
>> > > > > > >> > this parameter was never there. If you check the API in
>> [1],
>> > you
>> > > > can
>> > > > > > see
>> > > > > > >> > that it is not an expected parameter. Moreover, I do not
>> see
>> > any
>> > > > use
>> > > > > > of
>> > > > > > >> > "setCacheMode" in the code (in case it is updated by some
>> > other
>> > > > > > method).
>> > > > > > >> > Interestingly enough, the code uses "getCacheMode".
>> > > > > > >> >
>> > > > > > >> > In summary, it is not a feature, and it does not work. It
>> > looks
>> > > > like
>> > > > > > >> some
>> > > > > > >> > leftover from dark ages when people could commit anything
>> and
>> > > then
>> > > > > > they
>> > > > > > >> > would just leave a half implementation there in our code
>> base.
>> > > > > > >> >
>> > > > > > >> > [1]
>> > > > > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
>> > > > > > >> > createDiskOffering.html
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
>> > > > > > andrija.panic@gmail.com
>> > > > > > >> >
>> > > > > > >> > wrote:
>> > > > > > >> >
>> > > > > > >> > > I can also assume that "cachemode" as API parameter is
>> not
>> > > > > > supported,
>> > > > > > >> > > since when creating data disk offering via GUI also
>> doesn't
>> > > set
>> > > > it
>> > > > > > in
>> > > > > > >> > DB/table.
>> > > > > > >> > >
>> > > > > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
>> > > > > > storagetype=shared
>> > > > > > >> > > disksize=1024 cachemode=writeback
>> > > > > > >> > >
>> > > > > > >> > > this also does not set cachemode in table... my guess
>> it's
>> > not
>> > > > > > >> > > implemented in API
>> > > > > > >> > >
>> > > > > > >> > > Let me know if I can help with any testing here.
>> > > > > > >> > >
>> > > > > > >> > > Cheers
>> > > > > > >> > >
>> > > > > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
>> > > > > > andrija.panic@gmail.com>
>> > > > > > >> > > wrote:
>> > > > > > >> > >
>> > > > > > >> > > > Hi Paul,
>> > > > > > >> > > >
>> > > > > > >> > > > not helping directly answering your question, but here
>> are
>> > > > some
>> > > > > > >> > > > observations and "warning" if client's are using
>> > write-back
>> > > > > cache
>> > > > > > on
>> > > > > > >> > > > KVM level
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > > I have (long time ago) tested performance in 3
>> > combinations
>> > > > > (this
>> > > > > > >> > > > was not really thorough testing but a brief testing
>> with
>> > FIO
>> > > > and
>> > > > > > >> > > > random IO WRITE)
>> > > > > > >> > > >
>> > > > > > >> > > > - just CEPH rbd cache (on KVM side)
>> > > > > > >> > > >            i.e. [client]
>> > > > > > >> > > >                  rbd cache = true
>> > > > > > >> > > >                  rbd cache writethrough until flush =
>> true
>> > > > > > >> > > >                  #(this is default 32MB per volume,
>> afaik
>> > > > > > >> > > >
>> > > > > > >> > > > - just KMV write-back cache (had to manually edit
>> > > > disk_offering
>> > > > > > >> > > > table to activate cache mode, since when creating new
>> disk
>> > > > > > offering
>> > > > > > >> > > > via GUI, the disk_offering tables was NOT populated
>> with
>> > > > > > >> > > > "write-back" sertting/value
>> > > > > > >> > > ! )
>> > > > > > >> > > >
>> > > > > > >> > > > - both CEPH and KVM write-back cahce active
>> > > > > > >> > > >
>> > > > > > >> > > > My observations were like following, but would be good
>> to
>> > > > > actually
>> > > > > > >> > > confirm
>> > > > > > >> > > > by someone else:
>> > > > > > >> > > >
>> > > > > > >> > > > - same performance with only CEPH caching or with only
>> KVM
>> > > > > caching
>> > > > > > >> > > > - a bit worse performance with both CEPH and KVM
>> caching
>> > > > active
>> > > > > > >> > > > (nonsense combination, I know...)
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > > Please keep in mind that some ACS functionality, like KVM
>> > > > > > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT
>> > > supported
>> > > > > > when
>> > > > > > >> > > > you use KVM write-back cache, since this is considered
>> > > > "unsafe"
>> > > > > > >> > migration, more info here:
>> > > > > > >> > > > https://doc.opensuse.org/documentation/leap/
>> > virtualization/
>> > > > > > >> > > html/book.virt/
>> > > > > > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
>> > > > > > >> > > >
>> > > > > > >> > > > or in short:
>> > > > > > >> > > > "
>> > > > > > >> > > > The libvirt management layer includes checks for
>> migration
>> > > > > > >> > > > compatibility based on several factors. If the guest
>> > storage
>> > > > is
>> > > > > > >> > > > hosted on a clustered file system, is read-only or is
>> > marked
>> > > > > > >> > > > shareable, then the cache mode is ignored when
>> determining
>> > > if
>> > > > > > >> > > > migration can be allowed. Otherwise libvirt will not
>> allow
>> > > > > > migration
>> > > > > > >> > > > unless the cache mode is set to none. However, this
>> > > > restriction
>> > > > > > can
>> > > > > > >> > > > be overridden with the “unsafe” option to the migration
>> > > APIs,
>> > > > > > which
>> > > > > > >> > > > is also supported by virsh, as for example in
>> > > > > > >> > > >
>> > > > > > >> > > > virsh migrate --live --unsafe
>> > > > > > >> > > > "
>> > > > > > >> > > >
>> > > > > > >> > > > Cheers
>> > > > > > >> > > > Andrija
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
>> > > > > > paul.angus@shapeblue.com>
>> > > > > > >> > > wrote:
>> > > > > > >> > > >
>> > > > > > >> > > >> Hi Wido,
>> > > > > > >> > > >>
>> > > > > > >> > > >> This is for KVM (with Ceph backend as it happens), the
>> > API
>> > > > > > >> > > >> documentation is out of sync with UI capabilities, so
>> I'm
>> > > > > trying
>> > > > > > to
>> > > > > > >> > > >> figure out if we
>> > > > > > >> > > >> *should* be able to set cacheMode for root disks.  It
>> > seems
>> > > > to
>> > > > > > make
>> > > > > > >> > > quite a
>> > > > > > >> > > >> difference to performance.
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >> -----Original Message-----
>> > > > > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
>> > > > > > >> > > >> Sent: 20 February 2018 09:03
>> > > > > > >> > > >> To: dev@cloudstack.apache.org
>> > > > > > >> > > >> Subject: Re: Caching modes
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
>> > > > > > >> > > >> > Hey guys,
>> > > > > > >> > > >> >
>> > > > > > >> > > >> > Can anyone shed any light on write caching in
>> > CloudStack?
>> > > > > > >> >  cacheMode
>> > > > > > >> > > >> is available through the UI for data disks (but not
>> root
>> > > > > disks),
>> > > > > > >> but
>> > > > > > >> > not
>> > > > > > >> > > >> documented as an API option for data or root disks
>> > > (although
>> > > > is
>> > > > > > >> > > documented
>> > > > > > >> > > >> as a response for data disks).
>> > > > > > >> > > >> >
>> > > > > > >> > > >>
>> > > > > > >> > > >> What hypervisor?
>> > > > > > >> > > >>
>> > > > > > >> > > >> In case of KVM it's passed down to XML which then
>> passes
>> > it
>> > > > to
>> > > > > > >> > Qemu/KVM
>> > > > > > >> > > >> which then handles the caching.
>> > > > > > >> > > >>
>> > > > > > >> > > >> The implementation varies per hypervisor, so that
>> should
>> > be
>> > > > the
>> > > > > > >> > > question.
>> > > > > > >> > > >>
>> > > > > > >> > > >> Wido
>> > > > > > >> > > >>
>> > > > > > >> > > >>
>> > > > > > >> > > >> > #huh?
>> > > > > > >> > > >> >
>> > > > > > >> > > >> > thanks
>> > > > > > >> > > >> >
>> > > > > > >> > > >> >
>> > > > > > >> > > >> >
>> > > > > > >> > > >> >
>> > > > > > >> > > >> >
>> > > > > > >> > > >> >
>> > > > > > >> > > >>
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > > --
>> > > > > > >> > > >
>> > > > > > >> > > > Andrija Panić
>> > > > > > >> > > >
>> > > > > > >> > >
>> > > > > > >> > >
>> > > > > > >> > >
>> > > > > > >> > > --
>> > > > > > >> > >
>> > > > > > >> > > Andrija Panić
>> > > > > > >> > >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > --
>> > > > > > >> > Rafael Weingärtner
>> > > > > > >> >
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> --
>> > > > > > >> Rafael Weingärtner
>> > > > > > >>
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > --
>> > > > > > >
>> > > > > > > Andrija Panić
>> > > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > --
>> > > > > >
>> > > > > > Andrija Panić
>> > > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > --
>> > > > > Rafael Weingärtner
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > >
>> > > > Andrija Panić
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Rafael Weingärtner
>> > >
>> >
>> >
>> >
>> > --
>> >
>> > Andrija Panić
>> >
>>
>>
>>
>> --
>> Rafael Weingärtner
>>
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić
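
The libvirt migration-compatibility rule quoted above from the openSUSE documentation can be sketched roughly as follows. This is an illustrative model only, not libvirt's actual implementation; the function and parameter names are made up:

```python
# Rough model of the quoted libvirt rule: cache mode is ignored for
# clustered, read-only, or shareable guest storage; otherwise live
# migration is only allowed with cache mode "none", unless the caller
# forces it with the "unsafe" option (virsh migrate --live --unsafe).
def migration_allowed(cache_mode, clustered_fs=False, read_only=False,
                      shareable=False, unsafe=False):
    if clustered_fs or read_only or shareable:
        return True  # cache mode is ignored for this kind of storage
    if cache_mode == "none":
        return True  # the only cache mode considered migration-safe
    return unsafe    # e.g. writeback needs the --unsafe override

print(migration_allowed("writeback"))               # False: blocked
print(migration_allowed("writeback", unsafe=True))  # True: forced
```

This is why the thread warns that enabling write-back caching breaks KVM live migration and, with it, host maintenance mode in ACS.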

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
pls merge also https://github.com/apache/cloudstack-docs-admin/pull/48

just correct code block syntax (to display code properly)

On 20 February 2018 at 21:02, Rafael Weingärtner <
rafaelweingartner@gmail.com> wrote:

> Thanks, we will proceed reviewing
>
> On Tue, Feb 20, 2018 at 3:12 PM, Andrija Panic <an...@gmail.com>
> wrote:
>
> > Here it is:
> >
> > https://github.com/apache/cloudstack-docs-admin/pull/47
> >
> > Added KVM online storage migration (atm only CEPH/NFS to SolidFire, new
> in
> > 4.11 release)
> > Added KVM cache mode setup and limitations.
> >
> >
> > Cheers
> >
> > On 20 February 2018 at 16:49, Rafael Weingärtner <
> > rafaelweingartner@gmail.com> wrote:
> >
> > > If you are willing to write it down, please do so, and open a PR. We
> will
> > > review and merge it afterwards.
> > >
> > > On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <
> andrija.panic@gmail.com
> > >
> > > wrote:
> > >
> > > > I advise (or not...depends on point of view) to stay that way -
> because
> > > > when you activate write-back cache - live migrations will stop, and
> > this
> > > > makes *Enable maintenance mode (put host into maintenance)*
> impossible.
> > > >
> > > > I would perhaps suggest that there is documentation for "advanced
> > users"
> > > or
> > > > similar, that will say "it's possible to enable this and this way via
> > DB
> > > > hack, but be warned on live migration consequences etc..." since this
> > > will
> > > > render more problems if people start using it.
> > > >
> > > > If you choose to do, let me know, I can write that (documentation)
> > > briefly.
> > > >
> > > > Not to mention it can be unsafe (power failure - less possible I
> guess,
> > > but
> > > > rare kernel panic etc might have it's consequences I assume)
> > > >
> > > > It does indeed increase performance on NFS much, but not necessarily
> on
> > > > CEPH (if you are using librbd cache on the client side as explained above)
> > > >
> > > > On 20 February 2018 at 15:48, Rafael Weingärtner <
> > > > rafaelweingartner@gmail.com> wrote:
> > > >
> > > > > Yes. Weirdly enough, the code is using the value in the database if
> it
> > is
> > > > > provided there, but there is no easy way for users to change that
> > > > > configuration in the database. ¯\_(ツ)_/¯
> > > > >
> > > > > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <
> > > andrija.panic@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > So it seems that just passing the cachemode value to API is not
> > > there,
> > > > or
> > > > > > somehow messed up, but the deployVM process does read DB values from
> > > > > > disk_offering table for sure, and applies it to XML file for KVM.
> > > > > > This is above ACS 4.8.x.
> > > > > >
> > > > > >
> > > > > > On 20 February 2018 at 15:44, Andrija Panic <
> > andrija.panic@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > I have edited the disk_offering table, in the cache_mode just
> > enter
> > > > > > > "writeback". Stop and start the VM, and it will pick up/inherit the
> > > > > cache_mode
> > > > > > > from its parent offering
> > > > > > > This also applies to Compute/Service offering, again inside
> > > > > disk_offering
> > > > > > > table - just tested both
> > > > > > >
> > > > > > > i.e.
> > > > > > >
> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
> WHERE
> > > > > > > `id`=102; # Compute Offering (Service offering)
> > > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback'
> WHERE
> > > > > > > `id`=114; #data disk offering
> > > > > > >
> > > > > > > Before SQL:
> > > > > > >
> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > > a772-1ea939ef6ec3/1b655159-
> > > > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > > > >       <target dev='vda' bus='virtio'/>
> > > > > > > --
> > > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > > a772-1ea939ef6ec3/09bdadcb-
> > > > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > > > >       <target dev='vdb' bus='virtio'/>
> > > > > > > --
> > > > > > >
> > > > > > > STOP and START VM = after SQL
> > > > > > >
> > > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > > a772-1ea939ef6ec3/1b655159-
> > > > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > > > >       <target dev='vda' bus='virtio'/>
> > > > > > > --
> > > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > > a772-1ea939ef6ec3/09bdadcb-
> > > > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > > > >       <target dev='vdb' bus='virtio'/>
> > > > > > > --
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > > > > > > rafaelweingartner@gmail.com> wrote:
> > > > > > >
> > > > > > >> I have no idea how it can change the performance. If you look
> at
> > > the
> > > > > > >> content of the commit you provided, it is only the commit that
> > > > enabled
> > > > > > the
> > > > > > >> use of getCacheMode from disk offerings. However, it is not
> > > exposing
> > > > > any
> > > > > > >> way to users to change that value/configuration in the
> > database. I
> > > > > might
> > > > > > >> have missed it; do you see any API methods that receive the
> > > > parameter
> > > > > > >> "cacheMode" and then pass this parameter to a "diskOffering"
> > > object,
> > > > > and
> > > > > > >> then persist/update this object in the database?
> > > > > > >>
> > > > > > >> May I ask how you guys are changing the cacheMode
> configuration?
> > > > > > >>
> > > > > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
> > > > paul.angus@shapeblue.com
> > > > > >
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >> > I'm working with some guys who are experimenting with the
> > > setting
> > > > as
> > > > > > it
> > > > > > >> > definitely seems to change the performance of data disks.
> It
> > > also
> > > > > > >> changes
> > > > > > >> > the XML of the VM which is created.
> > > > > > >> >
> > > > > > >> > p.s.
> > > > > > >> > I've found this commit;
> > > > > > >> >
> > > > > > >> > https://github.com/apache/cloudstack/commit/
> 1edaa36cc68e845a
> > > > > > >> 42339d5f267d49
> > > > > > >> > c82343aefb
> > > > > > >> >
> > > > > > >> > so I've got something to investigate now, but API
> > documentation
> > > > must
> > > > > > >> > definitely be askew.
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > -----Original Message-----
> > > > > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@
> gmail.com]
> > > > > > >> > Sent: 20 February 2018 12:31
> > > > > > >> > To: dev <de...@cloudstack.apache.org>
> > > > > > >> > Subject: Re: Caching modes
> > > > > > >> >
> > > > > > >> > This cache mode parameter does not exist in
> > > > "CreateDiskOfferingCmd"
> > > > > > >> > command. I also checked some commits from 2, 3, 4 and 5
> years
> > > ago,
> > > > > and
> > > > > > >> > this parameter was never there. If you check the API in [1],
> > you
> > > > can
> > > > > > see
> > > > > > >> > that it is not an expected parameter. Moreover, I do not see
> > any
> > > > use
> > > > > > of
> > > > > > >> > "setCacheMode" in the code (in case it is updated by some
> > other
> > > > > > method).
> > > > > > >> > Interestingly enough, the code uses "getCacheMode".
> > > > > > >> >
> > > > > > >> > In summary, it is not a feature, and it does not work. It
> > looks
> > > > like
> > > > > > >> some
> > > > > > >> > leftover from dark ages when people could commit anything
> and
> > > then
> > > > > > they
> > > > > > >> > would just leave a half implementation there in our code
> base.
> > > > > > >> >
> > > > > > >> > [1]
> > > > > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > > > > > >> > createDiskOffering.html
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> > > > > > andrija.panic@gmail.com
> > > > > > >> >
> > > > > > >> > wrote:
> > > > > > >> >
> > > > > > >> > > I can also assume that "cachemode" as API parameter is not
> > > > > > supported,
> > > > > > >> > > since when creating data disk offering via GUI also
> doesn't
> > > set
> > > > it
> > > > > > in
> > > > > > >> > DB/table.
> > > > > > >> > >
> > > > > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> > > > > > storagetype=shared
> > > > > > >> > > disksize=1024 cachemode=writeback
> > > > > > >> > >
> > > > > > >> > > this also does not set cachemode in table... my guess it's
> > not
> > > > > > >> > > implemented in API
> > > > > > >> > >
> > > > > > >> > > Let me know if I can help with any testing here.
> > > > > > >> > >
> > > > > > >> > > Cheers
> > > > > > >> > >
> > > > > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> > > > > > andrija.panic@gmail.com>
> > > > > > >> > > wrote:
> > > > > > >> > >
> > > > > > >> > > > Hi Paul,
> > > > > > >> > > >
> > > > > > >> > > > not helping directly answering your question, but here
> are
> > > > some
> > > > > > >> > > > observations and "warning" if clients are using
> > write-back
> > > > > cache
> > > > > > on
> > > > > > >> > > > KVM level
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I have (long time ago) tested performance in 3
> > combinations
> > > > > (this
> > > > > > >> > > > was not really thorough testing but a brief testing with
> > FIO
> > > > and
> > > > > > >> > > > random IO WRITE)
> > > > > > >> > > >
> > > > > > >> > > > - just CEPH rbd cache (on KVM side)
> > > > > > >> > > >            i.e. [client]
> > > > > > >> > > >                  rbd cache = true
> > > > > > >> > > >                  rbd cache writethrough until flush =
> true
> > > > > > >> > > >                  #(this is the default, 32MB per volume, afaik)
> > > > > > >> > > >
> > > > > > >> > > > - just KVM write-back cache (had to manually edit
> > > > disk_offering
> > > > > > >> > > > table to activate cache mode, since when creating new
> disk
> > > > > > offering
> > > > > > >> > > > via GUI, the disk_offering table was NOT populated with
> > > > > > >> > > > the "write-back" setting/value
> > > > > > >> > > ! )
> > > > > > >> > > >
> > > > > > >> > > > - both CEPH and KVM write-back cache active
> > > > > > >> > > >
> > > > > > >> > > > My observations were as follows, but it would be good
> to
> > > > > actually
> > > > > > >> > > confirm
> > > > > > >> > > > by someone else:
> > > > > > >> > > >
> > > > > > >> > > > - same performance with only CEPH caching or with only
> KVM
> > > > > caching
> > > > > > >> > > > - a bit worse performance with both CEPH and KVM caching
> > > > active
> > > > > > >> > > > (nonsense combination, I know...)
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Please keep in mind that some ACS functionality, like KVM
> > > > > > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT
> > > supported
> > > > > > when
> > > > > > >> > > > you use KVM write-back cache, since this is considered
> > > > "unsafe"
> > > > > > >> > migration, more info here:
> > > > > > >> > > > https://doc.opensuse.org/documentation/leap/
> > virtualization/
> > > > > > >> > > html/book.virt/
> > > > > > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > > > > > >> > > >
> > > > > > >> > > > or in short:
> > > > > > >> > > > "
> > > > > > >> > > > The libvirt management layer includes checks for
> migration
> > > > > > >> > > > compatibility based on several factors. If the guest
> > storage
> > > > is
> > > > > > >> > > > hosted on a clustered file system, is read-only or is
> > marked
> > > > > > >> > > > shareable, then the cache mode is ignored when
> determining
> > > if
> > > > > > >> > > > migration can be allowed. Otherwise libvirt will not
> allow
> > > > > > migration
> > > > > > >> > > > unless the cache mode is set to none. However, this
> > > > restriction
> > > > > > can
> > > > > > >> > > > be overridden with the “unsafe” option to the migration
> > > APIs,
> > > > > > which
> > > > > > >> > > > is also supported by virsh, as for example in
> > > > > > >> > > >
> > > > > > >> > > > virsh migrate --live --unsafe
> > > > > > >> > > > "
> > > > > > >> > > >
> > > > > > >> > > > Cheers
> > > > > > >> > > > Andrija
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> > > > > > paul.angus@shapeblue.com>
> > > > > > >> > > wrote:
> > > > > > >> > > >
> > > > > > >> > > >> Hi Wido,
> > > > > > >> > > >>
> > > > > > >> > > >> This is for KVM (with Ceph backend as it happens), the
> > API
> > > > > > >> > > >> documentation is out of sync with UI capabilities, so
> I'm
> > > > > trying
> > > > > > to
> > > > > > >> > > >> figure out if we
> > > > > > >> > > >> *should* be able to set cacheMode for root disks.  It
> > seems
> > > > to
> > > > > > make
> > > > > > >> > > quite a
> > > > > > >> > > >> difference to performance.
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >> -----Original Message-----
> > > > > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > > > > > >> > > >> Sent: 20 February 2018 09:03
> > > > > > >> > > >> To: dev@cloudstack.apache.org
> > > > > > >> > > >> Subject: Re: Caching modes
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > > > > >> > > >> > Hey guys,
> > > > > > >> > > >> >
> > > > > > >> > > >> > Can anyone shed any light on write caching in
> > CloudStack?
> > > > > > >> >  cacheMode
> > > > > > >> > > >> is available through the UI for data disks (but not
> root
> > > > > disks),
> > > > > > >> but
> > > > > > >> > not
> > > > > > >> > > >> documented as an API option for data or root disks
> > > (although
> > > > is
> > > > > > >> > > documented
> > > > > > >> > > >> as a response for data disks).
> > > > > > >> > > >> >
> > > > > > >> > > >>
> > > > > > >> > > >> What hypervisor?
> > > > > > >> > > >>
> > > > > > >> > > >> In case of KVM it's passed down to XML which then
> passes
> > it
> > > > to
> > > > > > >> > Qemu/KVM
> > > > > > >> > > >> which then handles the caching.
> > > > > > >> > > >>
> > > > > > >> > > >> The implementation varies per hypervisor, so that
> should
> > be
> > > > the
> > > > > > >> > > question.
> > > > > > >> > > >>
> > > > > > >> > > >> Wido
> > > > > > >> > > >>
> > > > > > >> > > >>
> > > > > > >> > > >> > #huh?
> > > > > > >> > > >> >
> > > > > > >> > > >> > thanks
> > > > > > >> > > >> >
> > > > > > >> > > >> >
> > > > > > >> > > >> >
> > > > > > >> > > >> >
> > > > > > >> > > >> >
> > > > > > >> > > >> >
> > > > > > >> > > >>
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > --
> > > > > > >> > > >
> > > > > > >> > > > Andrija Panić
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >> > >
> > > > > > >> > >
> > > > > > >> > > --
> > > > > > >> > >
> > > > > > >> > > Andrija Panić
> > > > > > >> > >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > --
> > > > > > >> > Rafael Weingärtner
> > > > > > >> >
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> --
> > > > > > >> Rafael Weingärtner
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > Andrija Panić
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Andrija Panić
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Rafael Weingärtner
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić
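
As the before/after virsh dumpxml output quoted in the thread shows, the cache_mode stored on the disk offering simply surfaces as the cache attribute of the disk's <driver> element in the generated libvirt XML. A minimal sketch of that mapping (a hypothetical helper, not the actual CloudStack agent code; "none" is the default observed in the dumps):

```python
# Hypothetical illustration of how a disk offering's cache_mode could be
# rendered into the libvirt <driver> element, matching the before/after
# dumpxml output quoted in the thread. Not actual CloudStack code.
def driver_element(disk_format="qcow2", cache_mode=None):
    cache = cache_mode if cache_mode else "none"
    return "<driver name='qemu' type='{}' cache='{}'/>".format(disk_format, cache)

print(driver_element())                        # before the DB update
print(driver_element(cache_mode="writeback"))  # after UPDATE + VM stop/start
```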

Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
Thanks, we will proceed reviewing

On Tue, Feb 20, 2018 at 3:12 PM, Andrija Panic <an...@gmail.com>
wrote:

> Here it is:
>
> https://github.com/apache/cloudstack-docs-admin/pull/47
>
> Added KVM online storage migration (atm only CEPH/NFS to SolidFire, new in
> 4.11 release)
> Added KVM cache mode setup and limitations.
>
>
> Cheers
>
> On 20 February 2018 at 16:49, Rafael Weingärtner <
> rafaelweingartner@gmail.com> wrote:
>
> > If you are willing to write it down, please do so, and open a PR. We will
> > review and merge it afterwards.
> >
> > On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <andrija.panic@gmail.com
> >
> > wrote:
> >
> > > I advise (or not...depends on point of view) to stay that way - because
> > > when you activate write-back cache - live migrations will stop, and
> this
> > > makes *Enable maintenance mode (put host into maintenance)* impossible.
> > >
> > > I would perhaps suggest that there is documentation for "advanced
> users"
> > or
> > > similar, that will say "it's possible to enable this and this way via
> DB
> > > hack, but be warned on live migration consequences etc..." since this
> > will
> > > render more problems if people start using it.
> > >
> > > If you choose to do, let me know, I can write that (documentation)
> > briefly.
> > >
> > > Not to mention it can be unsafe (power failure - less possible I guess,
> > but
> > > rare kernel panic etc might have it's consequences I assume)
> > >
> > > It does indeed increase performance on NFS much, but not necessarily on
> > > CEPH (if you are using librbd cache on the client side as explained above)
> > >
> > > On 20 February 2018 at 15:48, Rafael Weingärtner <
> > > rafaelweingartner@gmail.com> wrote:
> > >
> > > > Yes. Weirdly enough, the code is using the value in the database if it
> is
> > > > provided there, but there is no easy way for users to change that
> > > > configuration in the database. ¯\_(ツ)_/¯
> > > >
> > > > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <
> > andrija.panic@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > So it seems that just passing the cachemode value to API is not
> > there,
> > > or
> > > > > somehow messed up, but the deployVM process does read DB values from
> > > > > disk_offering table for sure, and applies it to XML file for KVM.
> > > > > This is above ACS 4.8.x.
> > > > >
> > > > >
> > > > > On 20 February 2018 at 15:44, Andrija Panic <
> andrija.panic@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > I have edited the disk_offering table, in the cache_mode just
> enter
> > > > > > "writeback". Stop and start the VM, and it will pick up/inherit the
> > > > cache_mode
> > > > > > from its parent offering
> > > > > > This also applies to Compute/Service offering, again inside
> > > > disk_offering
> > > > > > table - just tested both
> > > > > >
> > > > > > i.e.
> > > > > >
> > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > > > `id`=102; # Compute Offering (Service offering)
> > > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > > > `id`=114; #data disk offering
> > > > > >
> > > > > > Before SQL:
> > > > > >
> > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > a772-1ea939ef6ec3/1b655159-
> > > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > > >       <target dev='vda' bus='virtio'/>
> > > > > > --
> > > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > a772-1ea939ef6ec3/09bdadcb-
> > > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > > >       <target dev='vdb' bus='virtio'/>
> > > > > > --
> > > > > >
> > > > > > STOP and START VM = after SQL
> > > > > >
> > > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > a772-1ea939ef6ec3/1b655159-
> > > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > > >       <target dev='vda' bus='virtio'/>
> > > > > > --
> > > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > > a772-1ea939ef6ec3/09bdadcb-
> > > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > > >       <target dev='vdb' bus='virtio'/>
> > > > > > --
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > > > > > rafaelweingartner@gmail.com> wrote:
> > > > > >
> > > > > >> I have no idea how it can change the performance. If you look at
> > the
> > > > > >> content of the commit you provided, it is only the commit that
> > > enabled
> > > > > the
> > > > > >> use of getCacheMode from disk offerings. However, it is not
> > exposing
> > > > any
> > > > > >> way to users to change that value/configuration in the
> database. I
> > > > might
> > > > > >> have missed it; do you see any API methods that receive the
> > > parameter
> > > > > >> "cacheMode" and then pass this parameter to a "diskOffering"
> > object,
> > > > and
> > > > > >> then persist/update this object in the database?
> > > > > >>
> > > > > >> May I ask how you guys are changing the cacheMode configuration?
> > > > > >>
> > > > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
> > > paul.angus@shapeblue.com
> > > > >
> > > > > >> wrote:
> > > > > >>
> > > > > >> > I'm working with some guys who are experimenting with the
> > setting
> > > as
> > > > > it
> > > > > >> > definitely seems to change the performance of data disks.  It
> > also
> > > > > >> changes
> > > > > >> > the XML of the VM which is created.
> > > > > >> >
> > > > > >> > p.s.
> > > > > >> > I've found this commit;
> > > > > >> >
> > > > > >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
> > > > > >> 42339d5f267d49
> > > > > >> > c82343aefb
> > > > > >> >
> > > > > >> > so I've got something to investigate now, but API
> documentation
> > > must
> > > > > >> > definitely be askew.
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> > -----Original Message-----
> > > > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> > > > > >> > Sent: 20 February 2018 12:31
> > > > > >> > To: dev <de...@cloudstack.apache.org>
> > > > > >> > Subject: Re: Caching modes
> > > > > >> >
> > > > > >> > This cache mode parameter does not exist in
> > > "CreateDiskOfferingCmd"
> > > > > >> > command. I also checked some commits from 2, 3, 4 and 5 years
> > ago,
> > > > and
> > > > > >> > this parameter was never there. If you check the API in [1],
> you
> > > can
> > > > > see
> > > > > >> > that it is not an expected parameter. Moreover, I do not see
> any
> > > use
> > > > > of
> > > > > >> > "setCacheMode" in the code (in case it is updated by some
> other
> > > > > method).
> > > > > >> > Interestingly enough, the code uses "getCacheMode".
> > > > > >> >
> > > > > >> > In summary, it is not a feature, and it does not work. It
> looks
> > > like
> > > > > >> some
> > > > > >> > leftover from dark ages when people could commit anything and
> > then
> > > > > they
> > > > > >> > would just leave a half implementation there in our code base.
> > > > > >> >
> > > > > >> > [1]
> > > > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > > > > >> > createDiskOffering.html
> > > > > >> >
> > > > > >> >
> > > > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> > > > > andrija.panic@gmail.com
> > > > > >> >
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > I can also assume that "cachemode" as API parameter is not
> > > > > supported,
> > > > > >> > > since when creating data disk offering via GUI also doesn't
> > set
> > > it
> > > > > in
> > > > > >> > DB/table.
> > > > > >> > >
> > > > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> > > > > storagetype=shared
> > > > > >> > > disksize=1024 cachemode=writeback
> > > > > >> > >
> > > > > >> > > this also does not set cachemode in table... my guess it's
> not
> > > > > >> > > implemented in API
> > > > > >> > >
> > > > > >> > > Let me know if I can help with any testing here.
> > > > > >> > >
> > > > > >> > > Cheers
> > > > > >> > >
> > > > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> > > > > andrija.panic@gmail.com>
> > > > > >> > > wrote:
> > > > > >> > >
> > > > > >> > > > Hi Paul,
> > > > > >> > > >
> > > > > >> > > > not helping directly answering your question, but here are
> > > some
> > > > > >> > > > observations and "warning" if clients are using
> write-back
> > > > cache
> > > > > on
> > > > > >> > > > KVM level
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > I have (long time ago) tested performance in 3
> combinations
> > > > (this
> > > > > >> > > > was not really thorough testing but a brief testing with
> FIO
> > > and
> > > > > >> > > > random IO WRITE)
> > > > > >> > > >
> > > > > >> > > > - just CEPH rbd cache (on KVM side)
> > > > > >> > > >            i.e. [client]
> > > > > >> > > >                  rbd cache = true
> > > > > >> > > >                  rbd cache writethrough until flush = true
> > > > > >> > > >                  #(this is the default, 32MB per volume, afaik)
> > > > > >> > > >
> > > > > >> > > > - just KVM write-back cache (had to manually edit
> > > disk_offering
> > > > > >> > > > table to activate cache mode, since when creating new disk
> > > > > offering
> > > > > >> > > > via GUI, the disk_offering table was NOT populated with the
> > > > > >> > > > "write-back" setting/value
> > > > > >> > > ! )
> > > > > >> > > >
> > > > > >> > > > - both CEPH and KVM write-back cache active
> > > > > >> > > >
> > > > > >> > > > My observations were like the following, but it would be good to
> > > > actually
> > > > > >> > > confirm
> > > > > >> > > > by someone else:
> > > > > >> > > >
> > > > > >> > > > - same performance with only CEPH caching or with only KVM
> > > > caching
> > > > > >> > > > - a bit worse performance with both CEPH and KVM caching
> > > active
> > > > > >> > > > (nonsense combination, I know...)
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > Please keep in mind, that some ACS functionality, KVM
> > > > > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT
> > supported
> > > > > when
> > > > > >> > > > you use KVM write-back cache, since this is considered
> > > "unsafe"
> > > > > >> > migration, more info here:
> > > > > >> > > > https://doc.opensuse.org/documentation/leap/
> virtualization/
> > > > > >> > > html/book.virt/
> > > > > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > > > > >> > > >
> > > > > >> > > > or in short:
> > > > > >> > > > "
> > > > > >> > > > The libvirt management layer includes checks for migration
> > > > > >> > > > compatibility based on several factors. If the guest
> storage
> > > is
> > > > > >> > > > hosted on a clustered file system, is read-only or is
> marked
> > > > > >> > > > shareable, then the cache mode is ignored when determining
> > if
> > > > > >> > > > migration can be allowed. Otherwise libvirt will not allow
> > > > > migration
> > > > > >> > > > unless the cache mode is set to none. However, this
> > > restriction
> > > > > can
> > > > > >> > > > be overridden with the “unsafe” option to the migration
> > APIs,
> > > > > which
> > > > > >> > > > is also supported by virsh, as for example in
> > > > > >> > > >
> > > > > >> > > > virsh migrate --live --unsafe
> > > > > >> > > > "
> > > > > >> > > >
> > > > > >> > > > Cheers
> > > > > >> > > > Andrija
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> > > > > paul.angus@shapeblue.com>
> > > > > >> > > wrote:
> > > > > >> > > >
> > > > > >> > > >> Hi Wido,
> > > > > >> > > >>
> > > > > >> > > >> This is for KVM (with Ceph backend as it happens), the
> API
> > > > > >> > > >> documentation is out of sync with UI capabilities, so I'm
> > > > trying
> > > > > to
> > > > > >> > > >> figure out if we
> > > > > >> > > >> *should* be able to set cacheMode for root disks.  It
> seems
> > > to
> > > > > make
> > > > > >> > > quite a
> > > > > >> > > >> difference to performance.
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >> paul.angus@shapeblue.com
> > > > > >> > > >> www.shapeblue.com
> > > > > >> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >> -----Original Message-----
> > > > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > > > > >> > > >> Sent: 20 February 2018 09:03
> > > > > >> > > >> To: dev@cloudstack.apache.org
> > > > > >> > > >> Subject: Re: Caching modes
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > > > >> > > >> > Hey guys,
> > > > > >> > > >> >
> > > > > >> > > >> > Can anyone shed any light on write caching in
> CloudStack?
> > > > > >> >  cacheMode
> > > > > >> > > >> is available through the UI for data disks (but not root
> > > > disks),
> > > > > >> but
> > > > > >> > not
> > > > > >> > > >> documented as an API option for data or root disks
> > (although
> > > is
> > > > > >> > > documented
> > > > > >> > > >> as a response for data disks).
> > > > > >> > > >> >
> > > > > >> > > >>
> > > > > >> > > >> What hypervisor?
> > > > > >> > > >>
> > > > > >> > > >> In case of KVM it's passed down to XML which then passes
> it
> > > to
> > > > > >> > Qemu/KVM
> > > > > >> > > >> which then handles the caching.
> > > > > >> > > >>
> > > > > >> > > >> The implementation varies per hypervisor, so that should
> be
> > > the
> > > > > >> > > question.
> > > > > >> > > >>
> > > > > >> > > >> Wido
> > > > > >> > > >>
> > > > > >> > > >>
> > > > > >> > > >> > #huh?
> > > > > >> > > >> >
> > > > > >> > > >> > thanks
> > > > > >> > > >> >
> > > > > >> > > >> >
> > > > > >> > > >> >
> > > > > >> > > >> > paul.angus@shapeblue.com
> > > > > >> > > >> > www.shapeblue.com
> > > > > >> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > @shapeblue
> > > > > >> > > >> >
> > > > > >> > > >> >
> > > > > >> > > >> >
> > > > > >> > > >>
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > --
> > > > > >> > > >
> > > > > >> > > > Andrija Panić
> > > > > >> > > >
> > > > > >> > >
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > --
> > > > > >> > >
> > > > > >> > > Andrija Panić
> > > > > >> > >
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> > --
> > > > > >> > Rafael Weingärtner
> > > > > >> >
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >> Rafael Weingärtner
> > > > > >>
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Andrija Panić
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Andrija Panić
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Rafael Weingärtner
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner
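
The `virsh dumpxml ... | grep cache` check used throughout this thread can also be done programmatically by parsing the domain XML. Below is a minimal illustrative sketch; the sample fragment is made up in the shape of the dumpxml output quoted above, and `disk_cache_modes` is a hypothetical helper, not a CloudStack or libvirt function:

```python
import xml.etree.ElementTree as ET

def disk_cache_modes(domain_xml):
    """Return the cache attribute of every disk <driver> element in a
    libvirt domain XML; a missing attribute means the hypervisor's own
    default applies, reported here as 'hypervisor-default'."""
    root = ET.fromstring(domain_xml)
    return [drv.get("cache", "hypervisor-default")
            for drv in root.findall("./devices/disk/driver")]

# Made-up fragment shaped like the dumpxml output quoted in this thread:
sample = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""
print(disk_cache_modes(sample))  # ['writeback', 'hypervisor-default'... no
```

Running the function over the sample lists one cache mode per disk, in device order, so a quick assertion can confirm a DB-level change really reached the domain definition.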

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
Here it is:

https://github.com/apache/cloudstack-docs-admin/pull/47

Added KVM online storage migration (atm only CEPH/NFS to SolidFire, new in
4.11 release)
Added KVM cache mode setup and limitations.
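
The live-migration limitation those docs describe (quoted from the openSUSE page earlier in the thread) reduces to a small decision rule. This is a simplified sketch for illustration only; the authoritative check lives inside libvirt's QEMU driver, and `migration_allowed` is a made-up name:

```python
def migration_allowed(cache_mode, shareable=False, read_only=False,
                      clustered_fs=False, unsafe=False):
    """Sketch of libvirt's migration-compatibility rule: the cache mode is
    ignored for shareable, read-only, or cluster-FS-backed disks; otherwise
    live migration requires cache mode 'none' unless the caller forces it
    (the equivalent of `virsh migrate --live --unsafe`)."""
    if shareable or read_only or clustered_fs:
        return True  # cache mode is not considered for these disks
    if cache_mode == "none":
        return True
    return unsafe    # any other cache mode needs the 'unsafe' override

# A 'writeback' disk on plain shared storage blocks a normal live migration:
print(migration_allowed("writeback"))               # False
print(migration_allowed("writeback", unsafe=True))  # True
```

This is why enabling write-back via the DB workaround also breaks host maintenance: maintenance relies on live migration, which this rule rejects for non-'none' cache modes.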


Cheers

On 20 February 2018 at 16:49, Rafael Weingärtner <
rafaelweingartner@gmail.com> wrote:

> If you are willing to write it down, please do so, and open a PR. We will
> review and merge it afterwards.
>
> On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <an...@gmail.com>
> wrote:
>
> > I advise (or not... it depends on your point of view) leaving it that way - because
> > when you activate write-back cache - live migrations will stop, and this
> > makes *Enable maintenance mode (put host into maintenance)* impossible.
> >
> > I would perhaps suggest that there is documentation for "advanced users"
> or
> > similar, that will say "it's possible to enable this and this way via DB
> > hack, but be warned on live migration consequences etc..." since this
> will
> > render more problems if people start using it.
> >
> > If you choose to do, let me know, I can write that (documentation)
> briefly.
> >
> > Not to mention it can be unsafe (power failure - less possible I guess,
> but
> > rare kernel panic etc might have its consequences I assume)
> >
> > It does indeed increase performance on NFS a lot, but not necessarily on
> > CEPH (if you are using librbd cache on the client side as explained above)
> >
> > On 20 February 2018 at 15:48, Rafael Weingärtner <
> > rafaelweingartner@gmail.com> wrote:
> >
> > > Yes. Weirdly enough, the code is using the value in the database if it is
> > > provided there, but there is no easy way for users to change that
> > > configuration in the database. ¯\_(ツ)_/¯
> > >
> > > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <
> andrija.panic@gmail.com
> > >
> > > wrote:
> > >
> > > > So it seems that just passing the cachemode value to API is not
> there,
> > or
> > > > somehow messed up, but deployVM process does read DB values from
> > > > disk_offering table for sure, and applies it to XML file for KVM.
> > > > This is above ACS 4.8.x.
> > > >
> > > >
> > > > On 20 February 2018 at 15:44, Andrija Panic <andrija.panic@gmail.com
> >
> > > > wrote:
> > > >
> > > > > I have edited the disk_offering table: in the cache_mode column just enter
> > > > > "writeback". Stop and start VM, and it will pickup/inherit the
> > > cache_mode
> > > > > from its parent offering
> > > > > This also applies to Compute/Service offering, again inside
> > > disk_offering
> > > > > table - just tested both
> > > > >
> > > > > i.e.
> > > > >
> > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > > `id`=102; # Compute Offering (Service offering)
> > > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > > `id`=114; #data disk offering
> > > > >
> > > > > Before SQL:
> > > > >
> > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > a772-1ea939ef6ec3/1b655159-
> > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > >       <target dev='vda' bus='virtio'/>
> > > > > --
> > > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > a772-1ea939ef6ec3/09bdadcb-
> > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > >       <target dev='vdb' bus='virtio'/>
> > > > > --
> > > > >
> > > > > STOP and START VM = after SQL
> > > > >
> > > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > a772-1ea939ef6ec3/1b655159-
> > > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > > >       <target dev='vda' bus='virtio'/>
> > > > > --
> > > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > > a772-1ea939ef6ec3/09bdadcb-
> > > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > > >       <target dev='vdb' bus='virtio'/>
> > > > > --
> > > > >
> > > > >
> > > > >
> > > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > > > > rafaelweingartner@gmail.com> wrote:
> > > > >
> > > > >> I have no idea how it can change the performance. If you look at
> the
> > > > >> content of the commit you provided, it is only the commit that
> > enabled
> > > > the
> > > > >> use of getCacheMode from disk offerings. However, it is not
> exposing
> > > any
> > > > >> way to users to change that value/configuration in the database. I
> > > might
> > > > >> have missed it; do you see any API methods that receive the
> > parameter
> > > > >> "cacheMode" and then pass this parameter to a "diskOffering"
> object,
> > > and
> > > > >> then persist/update this object in the database?
> > > > >>
> > > > >> May I ask how are you guys changing the cacheMode configuration?
> > > > >>
> > > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
> > paul.angus@shapeblue.com
> > > >
> > > > >> wrote:
> > > > >>
> > > > >> > I'm working with some guys who are experimenting with the
> setting
> > as
> > > > it
> > > > >> > definitely seems to change the performance of data disks.  It
> also
> > > > >> changes
> > > > >> > the XML of the VM which is created.
> > > > >> >
> > > > >> > p.s.
> > > > >> > I've found this commit;
> > > > >> >
> > > > >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
> > > > >> 42339d5f267d49
> > > > >> > c82343aefb
> > > > >> >
> > > > >> > so I've got something to investigate now, but API documentation
> > must
> > > > >> > definitely be askew.
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > paul.angus@shapeblue.com
> > > > >> > www.shapeblue.com
> > > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > >> > @shapeblue
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > -----Original Message-----
> > > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> > > > >> > Sent: 20 February 2018 12:31
> > > > >> > To: dev <de...@cloudstack.apache.org>
> > > > >> > Subject: Re: Caching modes
> > > > >> >
> > > > >> > This cache mode parameter does not exist in
> > "CreateDiskOfferingCmd"
> > > > >> > command. I also checked some commits from 2, 3, 4 and 5 years
> ago,
> > > and
> > > > >> > this parameter was never there. If you check the API in [1], you
> > can
> > > > see
> > > > >> > that it is not an expected parameter. Moreover, I do not see any
> > use
> > > > of
> > > > >> > "setCacheMode" in the code (in case it is updated by some other
> > > > method).
> > > > >> > Interestingly enough, the code uses "getCacheMode".
> > > > >> >
> > > > >> > In summary, it is not a feature, and it does not work. It looks
> > like
> > > > >> some
> > > > >> > leftover from dark ages when people could commit anything and
> then
> > > > they
> > > > >> > would just leave a half implementation there in our code base.
> > > > >> >
> > > > >> > [1]
> > > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > > > >> > createDiskOffering.html
> > > > >> >
> > > > >> >
> > > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> > > > andrija.panic@gmail.com
> > > > >> >
> > > > >> > wrote:
> > > > >> >
> > > > >> > > I can also assume that "cachemode" as API parameter is not
> > > > supported,
> > > > >> > > since when creating data disk offering via GUI also doesn't
> set
> > it
> > > > in
> > > > >> > DB/table.
> > > > >> > >
> > > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> > > > storagetype=shared
> > > > >> > > disksize=1024 cachemode=writeback
> > > > >> > >
> > > > >> > > this also does not set cachemode in table... my guess it's not
> > > > >> > > implemented in API
> > > > >> > >
> > > > >> > > Let me know if I can help with any testing here.
> > > > >> > >
> > > > >> > > Cheers
> > > > >> > >
> > > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> > > > andrija.panic@gmail.com>
> > > > >> > > wrote:
> > > > >> > >
> > > > >> > > > Hi Paul,
> > > > >> > > >
> > > > >> > > > not helping directly answering your question, but here are
> > some
> > > > >> > > > observations and "warning" if clients are using write-back
> > > cache
> > > > on
> > > > >> > > > KVM level
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > I have (long time ago) tested performance in 3 combinations
> > > (this
> > > > >> > > > was not really thorough testing but a brief testing with FIO
> > and
> > > > >> > > > random IO WRITE)
> > > > >> > > >
> > > > >> > > > - just CEPH rbd cache (on KVM side)
> > > > >> > > >            i.e. [client]
> > > > >> > > >                  rbd cache = true
> > > > >> > > >                  rbd cache writethrough until flush = true
> > > > >> > > >                  #(this is the default, 32MB per volume, afaik)
> > > > >> > > >
> > > > >> > > > - just KVM write-back cache (had to manually edit
> > disk_offering
> > > > >> > > > table to activate cache mode, since when creating new disk
> > > > offering
> > > > >> > > > via GUI, the disk_offering table was NOT populated with the
> > > > >> > > > "write-back" setting/value
> > > > >> > > ! )
> > > > >> > > >
> > > > >> > > > - both CEPH and KVM write-back cache active
> > > > >> > > >
> > > > >> > > > My observations were like the following, but it would be good to
> > > actually
> > > > >> > > confirm
> > > > >> > > > by someone else:
> > > > >> > > >
> > > > >> > > > - same performance with only CEPH caching or with only KVM
> > > caching
> > > > >> > > > - a bit worse performance with both CEPH and KVM caching
> > active
> > > > >> > > > (nonsense combination, I know...)
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > Please keep in mind, that some ACS functionality, KVM
> > > > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT
> supported
> > > > when
> > > > >> > > > you use KVM write-back cache, since this is considered
> > "unsafe"
> > > > >> > migration, more info here:
> > > > >> > > > https://doc.opensuse.org/documentation/leap/virtualization/
> > > > >> > > html/book.virt/
> > > > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > > > >> > > >
> > > > >> > > > or in short:
> > > > >> > > > "
> > > > >> > > > The libvirt management layer includes checks for migration
> > > > >> > > > compatibility based on several factors. If the guest storage
> > is
> > > > >> > > > hosted on a clustered file system, is read-only or is marked
> > > > >> > > > shareable, then the cache mode is ignored when determining
> if
> > > > >> > > > migration can be allowed. Otherwise libvirt will not allow
> > > > migration
> > > > >> > > > unless the cache mode is set to none. However, this
> > restriction
> > > > can
> > > > >> > > > be overridden with the “unsafe” option to the migration
> APIs,
> > > > which
> > > > >> > > > is also supported by virsh, as for example in
> > > > >> > > >
> > > > >> > > > virsh migrate --live --unsafe
> > > > >> > > > "
> > > > >> > > >
> > > > >> > > > Cheers
> > > > >> > > > Andrija
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> > > > paul.angus@shapeblue.com>
> > > > >> > > wrote:
> > > > >> > > >
> > > > >> > > >> Hi Wido,
> > > > >> > > >>
> > > > >> > > >> This is for KVM (with Ceph backend as it happens), the API
> > > > >> > > >> documentation is out of sync with UI capabilities, so I'm
> > > trying
> > > > to
> > > > >> > > >> figure out if we
> > > > >> > > >> *should* be able to set cacheMode for root disks.  It seems
> > to
> > > > make
> > > > >> > > quite a
> > > > >> > > >> difference to performance.
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >> paul.angus@shapeblue.com
> > > > >> > > >> www.shapeblue.com
> > > > >> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >> -----Original Message-----
> > > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > > > >> > > >> Sent: 20 February 2018 09:03
> > > > >> > > >> To: dev@cloudstack.apache.org
> > > > >> > > >> Subject: Re: Caching modes
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > > >> > > >> > Hey guys,
> > > > >> > > >> >
> > > > >> > > >> > Can anyone shed any light on write caching in CloudStack?
> > > > >> >  cacheMode
> > > > >> > > >> is available through the UI for data disks (but not root
> > > disks),
> > > > >> but
> > > > >> > not
> > > > >> > > >> documented as an API option for data or root disks
> (although
> > is
> > > > >> > > documented
> > > > >> > > >> as a response for data disks).
> > > > >> > > >> >
> > > > >> > > >>
> > > > >> > > >> What hypervisor?
> > > > >> > > >>
> > > > >> > > >> In case of KVM it's passed down to XML which then passes it
> > to
> > > > >> > Qemu/KVM
> > > > >> > > >> which then handles the caching.
> > > > >> > > >>
> > > > >> > > >> The implementation varies per hypervisor, so that should be
> > the
> > > > >> > > question.
> > > > >> > > >>
> > > > >> > > >> Wido
> > > > >> > > >>
> > > > >> > > >>
> > > > >> > > >> > #huh?
> > > > >> > > >> >
> > > > >> > > >> > thanks
> > > > >> > > >> >
> > > > >> > > >> >
> > > > >> > > >> >
> > > > >> > > >> > paul.angus@shapeblue.com
> > > > >> > > >> > www.shapeblue.com
> > > > >> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > > > >> > > >> >
> > > > >> > > >> >
> > > > >> > > >> >
> > > > >> > > >>
> > > > >> > > >
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > --
> > > > >> > > >
> > > > >> > > > Andrija Panić
> > > > >> > > >
> > > > >> > >
> > > > >> > >
> > > > >> > >
> > > > >> > > --
> > > > >> > >
> > > > >> > > Andrija Panić
> > > > >> > >
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > --
> > > > >> > Rafael Weingärtner
> > > > >> >
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Rafael Weingärtner
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Andrija Panić
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić

Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
If you are willing to write it down, please do so, and open a PR. We will
review and merge it afterwards.

On Tue, Feb 20, 2018 at 12:41 PM, Andrija Panic <an...@gmail.com>
wrote:

> I advise (or not... it depends on your point of view) leaving it that way - because
> when you activate write-back cache - live migrations will stop, and this
> makes *Enable maintenance mode (put host into maintenance)* impossible.
>
> I would perhaps suggest that there is documentation for "advanced users" or
> similar, that will say "it's possible to enable this and this way via DB
> hack, but be warned on live migration consequences etc..." since this will
> render more problems if people start using it.
>
> If you choose to do, let me know, I can write that (documentation) briefly.
>
> Not to mention it can be unsafe (power failure - less possible I guess, but
> rare kernel panic etc might have its consequences I assume)
>
> It does indeed increase performance on NFS a lot, but not necessarily on
> CEPH (if you are using librbd cache on the client side as explained above)
>
> On 20 February 2018 at 15:48, Rafael Weingärtner <
> rafaelweingartner@gmail.com> wrote:
>
> > Yes. Weirdly enough, the code is using the value in the database if it is
> > provided there, but there is no easy way for users to change that
> > configuration in the database. ¯\_(ツ)_/¯
> >
> > On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <andrija.panic@gmail.com
> >
> > wrote:
> >
> > > So it seems that just passing the cachemode value to API is not there,
> or
> > > somehow messed up, but deployVM process does read DB values from
> > > disk_offering table for sure, and applies it to XML file for KVM.
> > > This is above ACS 4.8.x.
> > >
> > >
> > > On 20 February 2018 at 15:44, Andrija Panic <an...@gmail.com>
> > > wrote:
> > >
> > > > I have edited the disk_offering table: in the cache_mode column just enter
> > > > "writeback". Stop and start VM, and it will pickup/inherit the
> > cache_mode
> > > > from its parent offering
> > > > This also applies to Compute/Service offering, again inside
> > disk_offering
> > > > table - just tested both
> > > >
> > > > i.e.
> > > >
> > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > `id`=102; # Compute Offering (Service offering)
> > > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > > `id`=114; #data disk offering
> > > >
> > > > Before SQL:
> > > >
> > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > a772-1ea939ef6ec3/1b655159-
> > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > >       <target dev='vda' bus='virtio'/>
> > > > --
> > > >       <driver name='qemu' type='qcow2' cache='none'/>
> > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > a772-1ea939ef6ec3/09bdadcb-
> > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > >       <target dev='vdb' bus='virtio'/>
> > > > --
> > > >
> > > > STOP and START VM = after SQL
> > > >
> > > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > a772-1ea939ef6ec3/1b655159-
> > > > ae10-41cf-8987-f1cfb47fe453'/>
> > > >       <target dev='vda' bus='virtio'/>
> > > > --
> > > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> > a772-1ea939ef6ec3/09bdadcb-
> > > > ec6e-4dda-b37b-17b1a749257f'/>
> > > >       <target dev='vdb' bus='virtio'/>
> > > > --
> > > >
> > > >
> > > >
> > > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > > > rafaelweingartner@gmail.com> wrote:
> > > >
> > > >> I have no idea how it can change the performance. If you look at the
> > > >> content of the commit you provided, it is only the commit that
> enabled
> > > the
> > > >> use of getCacheMode from disk offerings. However, it is not exposing
> > any
> > > >> way to users to change that value/configuration in the database. I
> > might
> > > >> have missed it; do you see any API methods that receive the
> parameter
> > > >> "cacheMode" and then pass this parameter to a "diskOffering" object,
> > and
> > > >> then persist/update this object in the database?
> > > >>
> > > >> May I ask how are you guys changing the cacheMode configuration?
> > > >>
> > > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <
> paul.angus@shapeblue.com
> > >
> > > >> wrote:
> > > >>
> > > >> > I'm working with some guys who are experimenting with the setting
> as
> > > it
> > > >> > definitely seems to change the performance of data disks.  It also
> > > >> changes
> > > >> > the XML of the VM which is created.
> > > >> >
> > > >> > p.s.
> > > >> > I've found this commit;
> > > >> >
> > > >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
> > > >> 42339d5f267d49
> > > >> > c82343aefb
> > > >> >
> > > >> > so I've got something to investigate now, but API documentation
> must
> > > >> > definitely be askew.
> > > >> >
> > > >> >
> > > >> >
> > > >> > paul.angus@shapeblue.com
> > > >> > www.shapeblue.com
> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > >> > @shapeblue
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > -----Original Message-----
> > > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> > > >> > Sent: 20 February 2018 12:31
> > > >> > To: dev <de...@cloudstack.apache.org>
> > > >> > Subject: Re: Caching modes
> > > >> >
> > > >> > This cache mode parameter does not exist in
> "CreateDiskOfferingCmd"
> > > >> > command. I also checked some commits from 2, 3, 4 and 5 years ago,
> > and
> > > >> > this parameter was never there. If you check the API in [1], you
> can
> > > see
> > > >> > that it is not an expected parameter. Moreover, I do not see any
> use
> > > of
> > > >> > "setCacheMode" in the code (in case it is updated by some other
> > > method).
> > > >> > Interestingly enough, the code uses "getCacheMode".
> > > >> >
> > > >> > In summary, it is not a feature, and it does not work. It looks
> like
> > > >> some
> > > >> > leftover from dark ages when people could commit anything and then
> > > they
> > > >> > would just leave a half implementation there in our code base.
> > > >> >
> > > >> > [1]
> > > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > > >> > createDiskOffering.html
> > > >> >
> > > >> >
> > > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> > > andrija.panic@gmail.com
> > > >> >
> > > >> > wrote:
> > > >> >
> > > >> > > I can also assume that "cachemode" as API parameter is not
> > > supported,
> > > >> > > since when creating data disk offering via GUI also doesn't set
> it
> > > in
> > > >> > DB/table.
> > > >> > >
> > > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> > > storagetype=shared
> > > >> > > disksize=1024 cachemode=writeback
> > > >> > >
> > > >> > > this also does not set cachemode in table... my guess it's not
> > > >> > > implemented in API
> > > >> > >
> > > >> > > Let me know if I can help with any testing here.
> > > >> > >
> > > >> > > Cheers
> > > >> > >
> > > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> > > andrija.panic@gmail.com>
> > > >> > > wrote:
> > > >> > >
> > > >> > > > Hi Paul,
> > > >> > > >
> > > >> > > > not helping directly answering your question, but here are
> some
> > > >> > > > observations and "warning" if clients are using write-back
> > cache
> > > on
> > > >> > > > KVM level
> > > >> > > >
> > > >> > > >
> > > >> > > > I have (long time ago) tested performance in 3 combinations
> > (this
> > > >> > > > was not really thorough testing but a brief testing with FIO
> and
> > > >> > > > random IO WRITE)
> > > >> > > >
> > > >> > > > - just CEPH rbd cache (on KVM side)
> > > >> > > >            i.e. [client]
> > > >> > > >                  rbd cache = true
> > > >> > > >                  rbd cache writethrough until flush = true
> > > >> > > >                  #(this is the default, 32MB per volume, afaik)
> > > >> > > >
> > > >> > > > - just KVM write-back cache (had to manually edit
> disk_offering
> > > >> > > > table to activate cache mode, since when creating new disk
> > > offering
> > > >> > > > via GUI, the disk_offering table was NOT populated with the
> > > >> > > > "write-back" setting/value
> > > >> > > ! )
> > > >> > > >
> > > >> > > > - both CEPH and KVM write-back cache active
> > > >> > > >
> > > >> > > > My observations were like the following, but it would be good to
> > actually
> > > >> > > confirm
> > > >> > > > by someone else:
> > > >> > > >
> > > >> > > > - same performance with only CEPH caching or with only KVM
> > caching
> > > >> > > > - a bit worse performance with both CEPH and KVM caching
> active
> > > >> > > > (nonsense combination, I know...)
> > > >> > > >
> > > >> > > >
> > > >> > > > Please keep in mind, that some ACS functionality, KVM
> > > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT supported
> > > when
> > > >> > > > you use KVM write-back cache, since this is considered
> "unsafe"
> > > >> > migration, more info here:
> > > >> > > > https://doc.opensuse.org/documentation/leap/virtualization/
> > > >> > > html/book.virt/
> > > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > > >> > > >
> > > >> > > > or in short:
> > > >> > > > "
> > > >> > > > The libvirt management layer includes checks for migration
> > > >> > > > compatibility based on several factors. If the guest storage
> is
> > > >> > > > hosted on a clustered file system, is read-only or is marked
> > > >> > > > shareable, then the cache mode is ignored when determining if
> > > >> > > > migration can be allowed. Otherwise libvirt will not allow
> > > migration
> > > >> > > > unless the cache mode is set to none. However, this
> restriction
> > > can
> > > >> > > > be overridden with the “unsafe” option to the migration APIs,
> > > which
> > > >> > > > is also supported by virsh, as for example in
> > > >> > > >
> > > >> > > > virsh migrate --live --unsafe
> > > >> > > > "
> > > >> > > >
> > > >> > > > Cheers
> > > >> > > > Andrija
> > > >> > > >
> > > >> > > >
> > > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> > > paul.angus@shapeblue.com>
> > > >> > > wrote:
> > > >> > > >
> > > >> > > >> Hi Wido,
> > > >> > > >>
> > > >> > > >> This is for KVM (with Ceph backend as it happens), the API
> > > >> > > >> documentation is out of sync with UI capabilities, so I'm
> > trying
> > > to
> > > >> > > >> figure out if we
> > > >> > > >> *should* be able to set cacheMode for root disks.  It seems
> to
> > > make
> > > >> > > quite a
> > > >> > > >> difference to performance.
> > > >> > > >>
> > > >> > > >>
> > > >> > > >>
> > > >> > > >> paul.angus@shapeblue.com
> > > >> > > >> www.shapeblue.com
> > > >> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> > > >> > > >>
> > > >> > > >>
> > > >> > > >>
> > > >> > > >>
> > > >> > > >> -----Original Message-----
> > > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > > >> > > >> Sent: 20 February 2018 09:03
> > > >> > > >> To: dev@cloudstack.apache.org
> > > >> > > >> Subject: Re: Caching modes
> > > >> > > >>
> > > >> > > >>
> > > >> > > >>
> > > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > >> > > >> > Hey guys,
> > > >> > > >> >
> > > >> > > >> > Can anyone shed any light on write caching in CloudStack?
> > > >> >  cacheMode
> > > >> > > >> is available through the UI for data disks (but not root
> > disks),
> > > >> but
> > > >> > not
> > > >> > > >> documented as an API option for data or root disks (although
> is
> > > >> > > documented
> > > >> > > >> as a response for data disks).
> > > >> > > >> >
> > > >> > > >>
> > > >> > > >> What hypervisor?
> > > >> > > >>
> > > >> > > >> In case of KVM it's passed down to XML which then passes it
> to
> > > >> > Qemu/KVM
> > > >> > > >> which then handles the caching.
> > > >> > > >>
> > > >> > > >> The implementation varies per hypervisor, so that should be
> the
> > > >> > > question.
> > > >> > > >>
> > > >> > > >> Wido
> > > >> > > >>
> > > >> > > >>
> > > >> > > >> > #huh?
> > > >> > > >> >
> > > >> > > >> > thanks
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > paul.angus@shapeblue.com
> > > >> > > >> > www.shapeblue.com
> > > >> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >>
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > > --
> > > >> > > >
> > > >> > > > Andrija Panić
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > >
> > > >> > > Andrija Panić
> > > >> > >
> > > >> >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > Rafael Weingärtner
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> Rafael Weingärtner
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
I advise (or not... depends on the point of view) that it stay that way,
because once you activate the write-back cache, live migrations stop
working, and that makes *Enable maintenance mode* (putting a host into
maintenance) impossible.

I would perhaps suggest adding documentation for "advanced users" or
similar, saying "it's possible to enable this via a DB hack, but be warned
about the live-migration consequences etc...", since this will cause more
problems if people start using it.

If you choose to do so, let me know; I can write that documentation briefly.

Not to mention it can be unsafe (a power failure is less likely, I guess,
but a rare kernel panic etc. might have its consequences, I assume).

It does indeed increase performance a lot on NFS, but not necessarily on
CEPH (if you are using the librbd cache on the client side, as explained above).
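For anyone who wants to check a host before migrating, here is a minimal sketch (not from the thread; a hypothetical helper, with the XML shape following typical `virsh dumpxml` output) that flags disks whose cache mode would make libvirt refuse a plain live migration:

```python
# Sketch: list disks in a libvirt domain XML that do NOT use cache='none'.
# libvirt refuses a plain live migration for such disks unless --unsafe
# is passed. Hypothetical helper, not part of CloudStack.
import xml.etree.ElementTree as ET

def blocking_disks(domain_xml):
    """Return (target_dev, cache_mode) pairs for disks not using cache='none'."""
    root = ET.fromstring(domain_xml)
    offenders = []
    for disk in root.findall("./devices/disk"):
        driver = disk.find("driver")
        target = disk.find("target")
        cache = driver.get("cache", "default") if driver is not None else "default"
        if cache != "none":
            dev = target.get("dev") if target is not None else "?"
            offenders.append((dev, cache))
    return offenders

sample = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

print(blocking_disks(sample))  # [('vda', 'writeback')]
```

Disks listed by this check would need `virsh migrate --live --unsafe`, with the caveats described above.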

On 20 February 2018 at 15:48, Rafael Weingärtner <
rafaelweingartner@gmail.com> wrote:

> Yes. Weird enough, the code is using the value in the database if it is
> provided there, but there is no easy way for users to change that
> configuration in the database. ¯\_(ツ)_/¯
>
> On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <an...@gmail.com>
> wrote:
>
> > So it seems that just passing the cachemode value to API is not there, or
> > somehow messedup, but deployVM process does read DB values from
> > disk_offering table for sure, and applies it to XML file for KVM.
> > This is above ACS 4.8.x.
> >
> >
> > On 20 February 2018 at 15:44, Andrija Panic <an...@gmail.com>
> > wrote:
> >
> > > I have edited the disk_offering table, in the cache_mode just enter
> > > "writeback". Stop and start VM, and it will pickup/inherit the
> cache_mode
> > > from it's parrent offering
> > > This also applies to Compute/Service offering, again inside
> disk_offering
> > > table - just tested both
> > >
> > > i.e.
> > >
> > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > `id`=102; # Compute Offering (Service offering)
> > > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > > `id`=114; #data disk offering
> > >
> > > Before SQL:
> > >
> > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > >       <driver name='qemu' type='qcow2' cache='none'/>
> > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> a772-1ea939ef6ec3/1b655159-
> > > ae10-41cf-8987-f1cfb47fe453'/>
> > >       <target dev='vda' bus='virtio'/>
> > > --
> > >       <driver name='qemu' type='qcow2' cache='none'/>
> > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> a772-1ea939ef6ec3/09bdadcb-
> > > ec6e-4dda-b37b-17b1a749257f'/>
> > >       <target dev='vdb' bus='virtio'/>
> > > --
> > >
> > > STOP and START VM = after SQL
> > >
> > > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> a772-1ea939ef6ec3/1b655159-
> > > ae10-41cf-8987-f1cfb47fe453'/>
> > >       <target dev='vda' bus='virtio'/>
> > > --
> > >       <driver name='qemu' type='qcow2' cache='writeback'/>
> > >       <source file='/mnt/63a3ae7b-9ea9-3884-
> a772-1ea939ef6ec3/09bdadcb-
> > > ec6e-4dda-b37b-17b1a749257f'/>
> > >       <target dev='vdb' bus='virtio'/>
> > > --
> > >
> > >
> > >
> > > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > > rafaelweingartner@gmail.com> wrote:
> > >
> > >> I have no idea how it can change the performance. If you look at the
> > >> content of the commit you provided, it is only the commit that enabled
> > the
> > >> use of getCacheMode from disk offerings. However, it is not exposing
> any
> > >> way to users to change that value/configuration in the database. I
> might
> > >> have missed it; do you see any API methods that receive the parameter
> > >> "cacheMode" and then pass this parameter to a "diskOffering" object,
> and
> > >> then persist/update this object in the database?
> > >>
> > >> May I ask how are you guys changing the cacheMode configuration?
> > >>
> > >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <paul.angus@shapeblue.com
> >
> > >> wrote:
> > >>
> > >> > I'm working with some guys who are experimenting with the setting as
> > if
> > >> > definitely seems to change the performance of data disks.  It also
> > >> changes
> > >> > the XML of the VM which is created.
> > >> >
> > >> > p.s.
> > >> > I've found this commit;
> > >> >
> > >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
> > >> 42339d5f267d49
> > >> > c82343aefb
> > >> >
> > >> > so I've got something to investigate now, but API documentation must
> > >> > definitely be askew.
> > >> >
> > >> >
> > >> >
> > >> > paul.angus@shapeblue.com
> > >> > www.shapeblue.com
> > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > >> > @shapeblue
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > -----Original Message-----
> > >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> > >> > Sent: 20 February 2018 12:31
> > >> > To: dev <de...@cloudstack.apache.org>
> > >> > Subject: Re: Caching modes
> > >> >
> > >> > This cache mode parameter does not exist in "CreateDiskOfferingCmd"
> > >> > command. I also checked some commits from 2, 3, 4 and 5 years ago,
> and
> > >> > this parameter was never there. If you check the API in [1], you can
> > see
> > >> > that it is not an expected parameter. Moreover, I do not see any use
> > of
> > >> > "setCacheMode" in the code (in case it is updated by some other
> > method).
> > >> > Interestingly enough, the code uses "getCacheMode".
> > >> >
> > >> > In summary, it is not a feature, and it does not work. It looks like
> > >> some
> > >> > leftover from dark ages when people could commit anything and then
> > they
> > >> > would just leave a half implementation there in our code base.
> > >> >
> > >> > [1]
> > >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > >> > createDiskOffering.html
> > >> >
> > >> >
> > >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> > andrija.panic@gmail.com
> > >> >
> > >> > wrote:
> > >> >
> > >> > > I can also assume that "cachemode" as API parameter is not
> > supported,
> > >> > > since when creating data disk offering via GUI also doesn't set it
> > in
> > >> > DB/table.
> > >> > >
> > >> > > CM:    create diskoffering name=xxx displaytext=xxx
> > storagetype=shared
> > >> > > disksize=1024 cachemode=writeback
> > >> > >
> > >> > > this also does not set cachemode in table... my guess it's not
> > >> > > implemented in API
> > >> > >
> > >> > > Let me know if I can help with any testing here.
> > >> > >
> > >> > > Cheers
> > >> > >
> > >> > > On 20 February 2018 at 13:09, Andrija Panic <
> > andrija.panic@gmail.com>
> > >> > > wrote:
> > >> > >
> > >> > > > Hi Paul,
> > >> > > >
> > >> > > > not helping directly answering your question, but here are some
> > >> > > > observations and "warning" if client's are using write-back
> cache
> > on
> > >> > > > KVM level
> > >> > > >
> > >> > > >
> > >> > > > I have (long time ago) tested performance in 3 combinations
> (this
> > >> > > > was not really thorough testing but a brief testing with FIO and
> > >> > > > random IO WRITE)
> > >> > > >
> > >> > > > - just CEPH rbd cache (on KVM side)
> > >> > > >            i.e. [client]
> > >> > > >                  rbd cache = true
> > >> > > >                  rbd cache writethrough until flush = true
> > >> > > >                  #(this is default 32MB per volume, afaik
> > >> > > >
> > >> > > > - just KMV write-back cache (had to manually edit disk_offering
> > >> > > > table to activate cache mode, since when creating new disk
> > offering
> > >> > > > via GUI, the disk_offering tables was NOT populated with
> > >> > > > "write-back" sertting/value
> > >> > > ! )
> > >> > > >
> > >> > > > - both CEPH and KVM write-back cahce active
> > >> > > >
> > >> > > > My observations were like following, but would be good to
> actually
> > >> > > confirm
> > >> > > > by someone else:
> > >> > > >
> > >> > > > - same performance with only CEPH caching or with only KVM
> caching
> > >> > > > - a bit worse performance with both CEPH and KVM caching active
> > >> > > > (nonsense combination, I know...)
> > >> > > >
> > >> > > >
> > >> > > > Please keep in mind, that some ACS functionality, KVM
> > >> > > > live-migrations on shared storage (NFS/CEPH) are NOT supported
> > when
> > >> > > > you use KVM write-back cache, since this is considered "unsafe"
> > >> > migration, more info here:
> > >> > > > https://doc.opensuse.org/documentation/leap/virtualization/
> > >> > > html/book.virt/
> > >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > >> > > >
> > >> > > > or in short:
> > >> > > > "
> > >> > > > The libvirt management layer includes checks for migration
> > >> > > > compatibility based on several factors. If the guest storage is
> > >> > > > hosted on a clustered file system, is read-only or is marked
> > >> > > > shareable, then the cache mode is ignored when determining if
> > >> > > > migration can be allowed. Otherwise libvirt will not allow
> > migration
> > >> > > > unless the cache mode is set to none. However, this restriction
> > can
> > >> > > > be overridden with the “unsafe” option to the migration APIs,
> > which
> > >> > > > is also supported by virsh, as for example in
> > >> > > >
> > >> > > > virsh migrate --live --unsafe
> > >> > > > "
> > >> > > >
> > >> > > > Cheers
> > >> > > > Andrija
> > >> > > >
> > >> > > >
> > >> > > > On 20 February 2018 at 11:24, Paul Angus <
> > paul.angus@shapeblue.com>
> > >> > > wrote:
> > >> > > >
> > >> > > >> Hi Wido,
> > >> > > >>
> > >> > > >> This is for KVM (with Ceph backend as it happens), the API
> > >> > > >> documentation is out of sync with UI capabilities, so I'm
> trying
> > to
> > >> > > >> figure out if we
> > >> > > >> *should* be able to set cacheMode for root disks.  It seems to
> > make
> > >> > > quite a
> > >> > > >> difference to performance.
> > >> > > >>
> > >> > > >>
> > >> > > >>
> > >> > > >> paul.angus@shapeblue.com
> > >> > > >> www.shapeblue.com
> > >> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> > >> > > >>
> > >> > > >>
> > >> > > >>
> > >> > > >>
> > >> > > >> -----Original Message-----
> > >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > >> > > >> Sent: 20 February 2018 09:03
> > >> > > >> To: dev@cloudstack.apache.org
> > >> > > >> Subject: Re: Caching modes
> > >> > > >>
> > >> > > >>
> > >> > > >>
> > >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > >> > > >> > Hey guys,
> > >> > > >> >
> > >> > > >> > Can anyone shed any light on write caching in CloudStack?
> > >> >  cacheMode
> > >> > > >> is available through the UI for data disks (but not root
> disks),
> > >> but
> > >> > not
> > >> > > >> documented as an API option for data or root disks (although is
> > >> > > documented
> > >> > > >> as a response for data disks).
> > >> > > >> >
> > >> > > >>
> > >> > > >> What hypervisor?
> > >> > > >>
> > >> > > >> In case of KVM it's passed down to XML which then passes it to
> > >> > Qemu/KVM
> > >> > > >> which then handles the caching.
> > >> > > >>
> > >> > > >> The implementation varies per hypervisor, so that should be the
> > >> > > question.
> > >> > > >>
> > >> > > >> Wido
> > >> > > >>
> > >> > > >>
> > >> > > >> > #huh?
> > >> > > >> >
> > >> > > >> > thanks
> > >> > > >> >
> > >> > > >> >
> > >> > > >> >
> > >> > > >> > paul.angus@shapeblue.com
> > >> > > >> > www.shapeblue.com
> > >> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> > >> > > >> >
> > >> > > >> >
> > >> > > >> >
> > >> > > >>
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > --
> > >> > > >
> > >> > > > Andrija Panić
> > >> > > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > --
> > >> > >
> > >> > > Andrija Panić
> > >> > >
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > Rafael Weingärtner
> > >> >
> > >>
> > >>
> > >>
> > >> --
> > >> Rafael Weingärtner
> > >>
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić

Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
Yes. Weirdly enough, the code uses the value from the database if it is
provided there, but there is no easy way for users to change that
configuration. ¯\_(ツ)_/¯

On Tue, Feb 20, 2018 at 11:45 AM, Andrija Panic <an...@gmail.com>
wrote:

> So it seems that just passing the cachemode value to API is not there, or
> somehow messedup, but deployVM process does read DB values from
> disk_offering table for sure, and applies it to XML file for KVM.
> This is above ACS 4.8.x.
>
>
> On 20 February 2018 at 15:44, Andrija Panic <an...@gmail.com>
> wrote:
>
> > I have edited the disk_offering table, in the cache_mode just enter
> > "writeback". Stop and start VM, and it will pickup/inherit the cache_mode
> > from it's parrent offering
> > This also applies to Compute/Service offering, again inside disk_offering
> > table - just tested both
> >
> > i.e.
> >
> > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > `id`=102; # Compute Offering (Service offering)
> > UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> > `id`=114; #data disk offering
> >
> > Before SQL:
> >
> > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> >       <driver name='qemu' type='qcow2' cache='none'/>
> >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-
> > ae10-41cf-8987-f1cfb47fe453'/>
> >       <target dev='vda' bus='virtio'/>
> > --
> >       <driver name='qemu' type='qcow2' cache='none'/>
> >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-
> > ec6e-4dda-b37b-17b1a749257f'/>
> >       <target dev='vdb' bus='virtio'/>
> > --
> >
> > STOP and START VM = after SQL
> >
> > root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
> >       <driver name='qemu' type='qcow2' cache='writeback'/>
> >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-
> > ae10-41cf-8987-f1cfb47fe453'/>
> >       <target dev='vda' bus='virtio'/>
> > --
> >       <driver name='qemu' type='qcow2' cache='writeback'/>
> >       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-
> > ec6e-4dda-b37b-17b1a749257f'/>
> >       <target dev='vdb' bus='virtio'/>
> > --
> >
> >
> >
> > On 20 February 2018 at 14:03, Rafael Weingärtner <
> > rafaelweingartner@gmail.com> wrote:
> >
> >> I have no idea how it can change the performance. If you look at the
> >> content of the commit you provided, it is only the commit that enabled
> the
> >> use of getCacheMode from disk offerings. However, it is not exposing any
> >> way to users to change that value/configuration in the database. I might
> >> have missed it; do you see any API methods that receive the parameter
> >> "cacheMode" and then pass this parameter to a "diskOffering" object, and
> >> then persist/update this object in the database?
> >>
> >> May I ask how are you guys changing the cacheMode configuration?
> >>
> >> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <pa...@shapeblue.com>
> >> wrote:
> >>
> >> > I'm working with some guys who are experimenting with the setting as
> if
> >> > definitely seems to change the performance of data disks.  It also
> >> changes
> >> > the XML of the VM which is created.
> >> >
> >> > p.s.
> >> > I've found this commit;
> >> >
> >> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
> >> 42339d5f267d49
> >> > c82343aefb
> >> >
> >> > so I've got something to investigate now, but API documentation must
> >> > definitely be askew.
> >> >
> >> >
> >> >
> >> > paul.angus@shapeblue.com
> >> > www.shapeblue.com
> >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >> > @shapeblue
> >> >
> >> >
> >> >
> >> >
> >> > -----Original Message-----
> >> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> >> > Sent: 20 February 2018 12:31
> >> > To: dev <de...@cloudstack.apache.org>
> >> > Subject: Re: Caching modes
> >> >
> >> > This cache mode parameter does not exist in "CreateDiskOfferingCmd"
> >> > command. I also checked some commits from 2, 3, 4 and 5 years ago, and
> >> > this parameter was never there. If you check the API in [1], you can
> see
> >> > that it is not an expected parameter. Moreover, I do not see any use
> of
> >> > "setCacheMode" in the code (in case it is updated by some other
> method).
> >> > Interestingly enough, the code uses "getCacheMode".
> >> >
> >> > In summary, it is not a feature, and it does not work. It looks like
> >> some
> >> > leftover from dark ages when people could commit anything and then
> they
> >> > would just leave a half implementation there in our code base.
> >> >
> >> > [1]
> >> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> >> > createDiskOffering.html
> >> >
> >> >
> >> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <
> andrija.panic@gmail.com
> >> >
> >> > wrote:
> >> >
> >> > > I can also assume that "cachemode" as API parameter is not
> supported,
> >> > > since when creating data disk offering via GUI also doesn't set it
> in
> >> > DB/table.
> >> > >
> >> > > CM:    create diskoffering name=xxx displaytext=xxx
> storagetype=shared
> >> > > disksize=1024 cachemode=writeback
> >> > >
> >> > > this also does not set cachemode in table... my guess it's not
> >> > > implemented in API
> >> > >
> >> > > Let me know if I can help with any testing here.
> >> > >
> >> > > Cheers
> >> > >
> >> > > On 20 February 2018 at 13:09, Andrija Panic <
> andrija.panic@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > Hi Paul,
> >> > > >
> >> > > > not helping directly answering your question, but here are some
> >> > > > observations and "warning" if client's are using write-back cache
> on
> >> > > > KVM level
> >> > > >
> >> > > >
> >> > > > I have (long time ago) tested performance in 3 combinations (this
> >> > > > was not really thorough testing but a brief testing with FIO and
> >> > > > random IO WRITE)
> >> > > >
> >> > > > - just CEPH rbd cache (on KVM side)
> >> > > >            i.e. [client]
> >> > > >                  rbd cache = true
> >> > > >                  rbd cache writethrough until flush = true
> >> > > >                  #(this is default 32MB per volume, afaik
> >> > > >
> >> > > > - just KMV write-back cache (had to manually edit disk_offering
> >> > > > table to activate cache mode, since when creating new disk
> offering
> >> > > > via GUI, the disk_offering tables was NOT populated with
> >> > > > "write-back" sertting/value
> >> > > ! )
> >> > > >
> >> > > > - both CEPH and KVM write-back cahce active
> >> > > >
> >> > > > My observations were like following, but would be good to actually
> >> > > confirm
> >> > > > by someone else:
> >> > > >
> >> > > > - same performance with only CEPH caching or with only KVM caching
> >> > > > - a bit worse performance with both CEPH and KVM caching active
> >> > > > (nonsense combination, I know...)
> >> > > >
> >> > > >
> >> > > > Please keep in mind, that some ACS functionality, KVM
> >> > > > live-migrations on shared storage (NFS/CEPH) are NOT supported
> when
> >> > > > you use KVM write-back cache, since this is considered "unsafe"
> >> > migration, more info here:
> >> > > > https://doc.opensuse.org/documentation/leap/virtualization/
> >> > > html/book.virt/
> >> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> >> > > >
> >> > > > or in short:
> >> > > > "
> >> > > > The libvirt management layer includes checks for migration
> >> > > > compatibility based on several factors. If the guest storage is
> >> > > > hosted on a clustered file system, is read-only or is marked
> >> > > > shareable, then the cache mode is ignored when determining if
> >> > > > migration can be allowed. Otherwise libvirt will not allow
> migration
> >> > > > unless the cache mode is set to none. However, this restriction
> can
> >> > > > be overridden with the “unsafe” option to the migration APIs,
> which
> >> > > > is also supported by virsh, as for example in
> >> > > >
> >> > > > virsh migrate --live --unsafe
> >> > > > "
> >> > > >
> >> > > > Cheers
> >> > > > Andrija
> >> > > >
> >> > > >
> >> > > > On 20 February 2018 at 11:24, Paul Angus <
> paul.angus@shapeblue.com>
> >> > > wrote:
> >> > > >
> >> > > >> Hi Wido,
> >> > > >>
> >> > > >> This is for KVM (with Ceph backend as it happens), the API
> >> > > >> documentation is out of sync with UI capabilities, so I'm trying
> to
> >> > > >> figure out if we
> >> > > >> *should* be able to set cacheMode for root disks.  It seems to
> make
> >> > > quite a
> >> > > >> difference to performance.
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> paul.angus@shapeblue.com
> >> > > >> www.shapeblue.com
> >> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> -----Original Message-----
> >> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> >> > > >> Sent: 20 February 2018 09:03
> >> > > >> To: dev@cloudstack.apache.org
> >> > > >> Subject: Re: Caching modes
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> >> > > >> > Hey guys,
> >> > > >> >
> >> > > >> > Can anyone shed any light on write caching in CloudStack?
> >> >  cacheMode
> >> > > >> is available through the UI for data disks (but not root disks),
> >> but
> >> > not
> >> > > >> documented as an API option for data or root disks (although is
> >> > > documented
> >> > > >> as a response for data disks).
> >> > > >> >
> >> > > >>
> >> > > >> What hypervisor?
> >> > > >>
> >> > > >> In case of KVM it's passed down to XML which then passes it to
> >> > Qemu/KVM
> >> > > >> which then handles the caching.
> >> > > >>
> >> > > >> The implementation varies per hypervisor, so that should be the
> >> > > question.
> >> > > >>
> >> > > >> Wido
> >> > > >>
> >> > > >>
> >> > > >> > #huh?
> >> > > >> >
> >> > > >> > thanks
> >> > > >> >
> >> > > >> >
> >> > > >> >
> >> > > >> > paul.angus@shapeblue.com
> >> > > >> > www.shapeblue.com
> >> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >> > > >> >
> >> > > >> >
> >> > > >> >
> >> > > >>
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > >
> >> > > > Andrija Panić
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > >
> >> > > Andrija Panić
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Rafael Weingärtner
> >> >
> >>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
So it seems that passing the cachemode value via the API is either missing
or somehow messed up, but the deployVM process definitely reads the DB
values from the disk_offering table and applies them to the XML file for KVM.
This is on ACS 4.8.x and later.
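As a rough sketch of that behavior (hypothetical code, not CloudStack's actual implementation): the `cache_mode` column from `disk_offering` ends up as the `cache` attribute on the `<driver>` element of the generated KVM domain XML, apparently defaulting to `none` when the column is NULL, as the virsh dumps below show:

```python
# Sketch of the mapping the deploy path appears to perform: disk_offering.cache_mode
# becomes the cache attribute of the <driver> element in the domain XML.
# Hypothetical function and names, for illustration only.
VALID_CACHE_MODES = {"none", "writeback", "writethrough"}

def driver_xml(disk_format, cache_mode):
    # A NULL/unknown cache_mode column falls back to the safe default 'none'.
    mode = cache_mode if cache_mode in VALID_CACHE_MODES else "none"
    return "<driver name='qemu' type='%s' cache='%s'/>" % (disk_format, mode)

print(driver_xml("qcow2", "writeback"))
# <driver name='qemu' type='qcow2' cache='writeback'/>
print(driver_xml("qcow2", None))
# <driver name='qemu' type='qcow2' cache='none'/>
```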


On 20 February 2018 at 15:44, Andrija Panic <an...@gmail.com> wrote:

> I have edited the disk_offering table, in the cache_mode just enter
> "writeback". Stop and start VM, and it will pickup/inherit the cache_mode
> from it's parrent offering
> This also applies to Compute/Service offering, again inside disk_offering
> table - just tested both
>
> i.e.
>
> UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> `id`=102; # Compute Offering (Service offering)
> UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
> `id`=114; #data disk offering
>
> Before SQL:
>
> root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-
> ae10-41cf-8987-f1cfb47fe453'/>
>       <target dev='vda' bus='virtio'/>
> --
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-
> ec6e-4dda-b37b-17b1a749257f'/>
>       <target dev='vdb' bus='virtio'/>
> --
>
> STOP and START VM = after SQL
>
> root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
>       <driver name='qemu' type='qcow2' cache='writeback'/>
>       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-
> ae10-41cf-8987-f1cfb47fe453'/>
>       <target dev='vda' bus='virtio'/>
> --
>       <driver name='qemu' type='qcow2' cache='writeback'/>
>       <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-
> ec6e-4dda-b37b-17b1a749257f'/>
>       <target dev='vdb' bus='virtio'/>
> --
>
>
>
> On 20 February 2018 at 14:03, Rafael Weingärtner <
> rafaelweingartner@gmail.com> wrote:
>
>> I have no idea how it can change the performance. If you look at the
>> content of the commit you provided, it is only the commit that enabled the
>> use of getCacheMode from disk offerings. However, it is not exposing any
>> way to users to change that value/configuration in the database. I might
>> have missed it; do you see any API methods that receive the parameter
>> "cacheMode" and then pass this parameter to a "diskOffering" object, and
>> then persist/update this object in the database?
>>
>> May I ask how are you guys changing the cacheMode configuration?
>>
>> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <pa...@shapeblue.com>
>> wrote:
>>
>> > I'm working with some guys who are experimenting with the setting as if
>> > definitely seems to change the performance of data disks.  It also
>> changes
>> > the XML of the VM which is created.
>> >
>> > p.s.
>> > I've found this commit;
>> >
>> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a
>> 42339d5f267d49
>> > c82343aefb
>> >
>> > so I've got something to investigate now, but API documentation must
>> > definitely be askew.
>> >
>> >
>> >
>> > paul.angus@shapeblue.com
>> > www.shapeblue.com
>> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > @shapeblue
>> >
>> >
>> >
>> >
>> > -----Original Message-----
>> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
>> > Sent: 20 February 2018 12:31
>> > To: dev <de...@cloudstack.apache.org>
>> > Subject: Re: Caching modes
>> >
>> > This cache mode parameter does not exist in "CreateDiskOfferingCmd"
>> > command. I also checked some commits from 2, 3, 4 and 5 years ago, and
>> > this parameter was never there. If you check the API in [1], you can see
>> > that it is not an expected parameter. Moreover, I do not see any use of
>> > "setCacheMode" in the code (in case it is updated by some other method).
>> > Interestingly enough, the code uses "getCacheMode".
>> >
>> > In summary, it is not a feature, and it does not work. It looks like
>> some
>> > leftover from dark ages when people could commit anything and then they
>> > would just leave a half implementation there in our code base.
>> >
>> > [1]
>> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
>> > createDiskOffering.html
>> >
>> >
>> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <andrija.panic@gmail.com
>> >
>> > wrote:
>> >
>> > > I can also assume that "cachemode" as API parameter is not supported,
>> > > since when creating data disk offering via GUI also doesn't set it in
>> > DB/table.
>> > >
>> > > CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
>> > > disksize=1024 cachemode=writeback
>> > >
>> > > this also does not set cachemode in table... my guess it's not
>> > > implemented in API
>> > >
>> > > Let me know if I can help with any testing here.
>> > >
>> > > Cheers
>> > >
>> > > On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com>
>> > > wrote:
>> > >
>> > > > Hi Paul,
>> > > >
>> > > > not helping directly answering your question, but here are some
>> > > > observations and "warning" if client's are using write-back cache on
>> > > > KVM level
>> > > >
>> > > >
>> > > > I have (long time ago) tested performance in 3 combinations (this
>> > > > was not really thorough testing but a brief testing with FIO and
>> > > > random IO WRITE)
>> > > >
>> > > > - just CEPH rbd cache (on KVM side)
>> > > >            i.e. [client]
>> > > >                  rbd cache = true
>> > > >                  rbd cache writethrough until flush = true
>> > > >                  #(this is default 32MB per volume, afaik
>> > > >
>> > > > - just KMV write-back cache (had to manually edit disk_offering
>> > > > table to activate cache mode, since when creating new disk offering
>> > > > via GUI, the disk_offering tables was NOT populated with
>> > > > "write-back" sertting/value
>> > > ! )
>> > > >
>> > > > - both CEPH and KVM write-back cahce active
>> > > >
>> > > > My observations were like following, but would be good to actually
>> > > confirm
>> > > > by someone else:
>> > > >
>> > > > - same performance with only CEPH caching or with only KVM caching
>> > > > - a bit worse performance with both CEPH and KVM caching active
>> > > > (nonsense combination, I know...)
>> > > >
>> > > >
>> > > > Please keep in mind, that some ACS functionality, KVM
>> > > > live-migrations on shared storage (NFS/CEPH) are NOT supported when
>> > > > you use KVM write-back cache, since this is considered "unsafe"
>> > migration, more info here:
>> > > > https://doc.opensuse.org/documentation/leap/virtualization/
>> > > html/book.virt/
>> > > > cha.cachemodes.html#sec.cache.mode.live.migration
>> > > >
>> > > > or in short:
>> > > > "
>> > > > The libvirt management layer includes checks for migration
>> > > > compatibility based on several factors. If the guest storage is
>> > > > hosted on a clustered file system, is read-only or is marked
>> > > > shareable, then the cache mode is ignored when determining if
>> > > > migration can be allowed. Otherwise libvirt will not allow migration
>> > > > unless the cache mode is set to none. However, this restriction can
>> > > > be overridden with the “unsafe” option to the migration APIs, which
>> > > > is also supported by virsh, as for example in
>> > > >
>> > > > virsh migrate --live --unsafe
>> > > > "
>> > > >
>> > > > Cheers
>> > > > Andrija
>> > > >
>> > > >
>> > > > On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com>
>> > > wrote:
>> > > >
>> > > >> Hi Wido,
>> > > >>
>> > > >> This is for KVM (with Ceph backend as it happens), the API
>> > > >> documentation is out of sync with UI capabilities, so I'm trying to
>> > > >> figure out if we
>> > > >> *should* be able to set cacheMode for root disks.  It seems to make
>> > > quite a
>> > > >> difference to performance.
>> > > >>
>> > > >>
>> > > >>
>> > > >> paul.angus@shapeblue.com
>> > > >> www.shapeblue.com
>> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>> > > >>
>> > > >>
>> > > >>
>> > > >>
>> > > >> -----Original Message-----
>> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
>> > > >> Sent: 20 February 2018 09:03
>> > > >> To: dev@cloudstack.apache.org
>> > > >> Subject: Re: Caching modes
>> > > >>
>> > > >>
>> > > >>
>> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
>> > > >> > Hey guys,
>> > > >> >
>> > > >> > Can anyone shed any light on write caching in CloudStack?
>> >  cacheMode
>> > > >> is available through the UI for data disks (but not root disks),
>> but
>> > not
>> > > >> documented as an API option for data or root disks (although is
>> > > documented
>> > > >> as a response for data disks).
>> > > >> >
>> > > >>
>> > > >> What hypervisor?
>> > > >>
>> > > >> In case of KVM it's passed down to XML which then passes it to
>> > Qemu/KVM
>> > > >> which then handles the caching.
>> > > >>
>> > > >> The implementation varies per hypervisor, so that should be the
>> > > question.
>> > > >>
>> > > >> Wido
>> > > >>
>> > > >>
>> > > >> > #huh?
>> > > >> >
>> > > >> > thanks
>> > > >> >
>> > > >> >
>> > > >> >
>> > > >> > paul.angus@shapeblue.com
>> > > >> > www.shapeblue.com
>> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>> > > >> >
>> > > >> >
>> > > >> >
>> > > >>
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > >
>> > > > Andrija Panić
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > >
>> > > Andrija Panić
>> > >
>> >
>> >
>> >
>> > --
>> > Rafael Weingärtner
>> >
>>
>>
>>
>> --
>> Rafael Weingärtner
>>
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
I have edited the disk_offering table; in the cache_mode column, just enter
"writeback". Stop and start the VM, and it will pick up/inherit the cache_mode
from its parent offering.
This also applies to the Compute/Service offering, again inside the
disk_offering table - just tested both.

i.e.

UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
`id`=102; # Compute Offering (Service offering)
UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE
`id`=114; # data disk offering

Before SQL:

root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
      <driver name='qemu' type='qcow2' cache='none'/>
      <source
file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
      <target dev='vda' bus='virtio'/>
--
      <driver name='qemu' type='qcow2' cache='none'/>
      <source
file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
      <target dev='vdb' bus='virtio'/>
--

After STOP and START of the VM (after SQL):

root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source
file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
      <target dev='vda' bus='virtio'/>
--
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source
file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
      <target dev='vdb' bus='virtio'/>
--
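For anyone wanting to spot-check the effective cache mode across disks, the driver attributes can be parsed out of the `virsh dumpxml` output. A minimal sketch follows; the inlined XML mirrors the trimmed output above, and on a real host you would feed it the actual `virsh dumpxml <vm>` output:

```python
import xml.etree.ElementTree as ET

# The inlined XML mirrors the trimmed dumpxml output above;
# on a host you would capture it with: virsh dumpxml i-2-10-VM
domain_xml = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def disk_cache_modes(xml_text):
    """Return {target_dev: cache_mode} for every disk in a libvirt domain XML."""
    root = ET.fromstring(xml_text)
    modes = {}
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        driver = disk.find("driver")
        if target is not None and driver is not None:
            modes[target.get("dev")] = driver.get("cache", "default")
    return modes

print(disk_cache_modes(domain_xml))  # {'vda': 'writeback', 'vdb': 'none'}
```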



On 20 February 2018 at 14:03, Rafael Weingärtner <
rafaelweingartner@gmail.com> wrote:

> I have no idea how it can change the performance. If you look at the
> content of the commit you provided, it is only the commit that enabled the
> use of getCacheMode from disk offerings. However, it is not exposing any
> way to users to change that value/configuration in the database. I might
> have missed it; do you see any API methods that receive the parameter
> "cacheMode" and then pass this parameter to a "diskOffering" object, and
> then persist/update this object in the database?
>
> May I ask how you guys are changing the cacheMode configuration?
>
> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <pa...@shapeblue.com>
> wrote:
>
> > I'm working with some guys who are experimenting with the setting, as it
> > definitely seems to change the performance of data disks.  It also
> changes
> > the XML of the VM which is created.
> >
> > p.s.
> > I've found this commit;
> >
> > https://github.com/apache/cloudstack/commit/
> 1edaa36cc68e845a42339d5f267d49
> > c82343aefb
> >
> > so I've got something to investigate now, but API documentation must
> > definitely be askew.
> >
> >
> >
> > paul.angus@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> > -----Original Message-----
> > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> > Sent: 20 February 2018 12:31
> > To: dev <de...@cloudstack.apache.org>
> > Subject: Re: Caching modes
> >
> > This cache mode parameter does not exist in "CreateDiskOfferingCmd"
> > command. I also checked some commits from 2, 3, 4 and 5 years ago, and
> > this parameter was never there. If you check the API in [1], you can see
> > that it is not an expected parameter. Moreover, I do not see any use of
> > "setCacheMode" in the code (in case it is updated by some other method).
> > Interestingly enough, the code uses "getCacheMode".
> >
> > In summary, it is not a feature, and it does not work. It looks like some
> > leftover from dark ages when people could commit anything and then they
> > would just leave a half implementation there in our code base.
> >
> > [1]
> > https://cloudstack.apache.org/api/apidocs-4.11/apis/
> > createDiskOffering.html
> >
> >
> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <an...@gmail.com>
> > wrote:
> >
> > > I can also assume that "cachemode" as API parameter is not supported,
> > > since when creating data disk offering via GUI also doesn't set it in
> > DB/table.
> > >
> > > CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
> > > disksize=1024 cachemode=writeback
> > >
> > > this also does not set cachemode in the table... my guess is it's not
> > > implemented in the API
> > >
> > > Let me know if I can help with any testing here.
> > >
> > > Cheers
> > >
> > > On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com>
> > > wrote:
> > >
> > > > Hi Paul,
> > > >
> > > > not directly answering your question, but here are some
> > > > observations and a "warning" if clients are using write-back cache on
> > > > KVM level
> > > >
> > > >
> > > > I have (long time ago) tested performance in 3 combinations (this
> > > > was not really thorough testing but a brief testing with FIO and
> > > > random IO WRITE)
> > > >
> > > > - just CEPH rbd cache (on KVM side)
> > > >            i.e. [client]
> > > >                  rbd cache = true
> > > >                  rbd cache writethrough until flush = true
> > > >                  # (this is the default 32MB per volume, afaik)
> > > >
> > > > - just KVM write-back cache (had to manually edit the disk_offering
> > > > table to activate cache mode, since when creating a new disk offering
> > > > via GUI, the disk_offering table was NOT populated with the
> > > > "write-back" setting/value
> > > ! )
> > > >
> > > > - both CEPH and KVM write-back cache active
> > > >
> > > > My observations were as follows, but it would be good to actually
> > > confirm
> > > > by someone else:
> > > >
> > > > - same performance with only CEPH caching or with only KVM caching
> > > > - a bit worse performance with both CEPH and KVM caching active
> > > > (nonsense combination, I know...)
> > > >
> > > >
> > > > Please keep in mind, that some ACS functionality, KVM
> > > > live-migrations on shared storage (NFS/CEPH) are NOT supported when
> > > > you use KVM write-back cache, since this is considered "unsafe"
> > migration, more info here:
> > > > https://doc.opensuse.org/documentation/leap/virtualization/
> > > html/book.virt/
> > > > cha.cachemodes.html#sec.cache.mode.live.migration
> > > >
> > > > or in short:
> > > > "
> > > > The libvirt management layer includes checks for migration
> > > > compatibility based on several factors. If the guest storage is
> > > > hosted on a clustered file system, is read-only or is marked
> > > > shareable, then the cache mode is ignored when determining if
> > > > migration can be allowed. Otherwise libvirt will not allow migration
> > > > unless the cache mode is set to none. However, this restriction can
> > > > be overridden with the “unsafe” option to the migration APIs, which
> > > > is also supported by virsh, as for example in
> > > >
> > > > virsh migrate --live --unsafe
> > > > "
> > > >
> > > > Cheers
> > > > Andrija
> > > >
> > > >
> > > > On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com>
> > > wrote:
> > > >
> > > >> Hi Wido,
> > > >>
> > > >> This is for KVM (with Ceph backend as it happens), the API
> > > >> documentation is out of sync with UI capabilities, so I'm trying to
> > > >> figure out if we
> > > >> *should* be able to set cacheMode for root disks.  It seems to make
> > > quite a
> > > >> difference to performance.
> > > >>
> > > >>
> > > >>
> > > >> paul.angus@shapeblue.com
> > > >> www.shapeblue.com
> > > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> -----Original Message-----
> > > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > > >> Sent: 20 February 2018 09:03
> > > >> To: dev@cloudstack.apache.org
> > > >> Subject: Re: Caching modes
> > > >>
> > > >>
> > > >>
> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > >> > Hey guys,
> > > >> >
> > > >> > Can anyone shed any light on write caching in CloudStack?
> >  cacheMode
> > > >> is available through the UI for data disks (but not root disks), but
> > not
> > > >> documented as an API option for data or root disks (although is
> > > documented
> > > >> as a response for data disks).
> > > >> >
> > > >>
> > > >> What hypervisor?
> > > >>
> > > >> In case of KVM it's passed down to XML which then passes it to
> > Qemu/KVM
> > > >> which then handles the caching.
> > > >>
> > > >> The implementation varies per hypervisor, so that should be the
> > > question.
> > > >>
> > > >> Wido
> > > >>
> > > >>
> > > >> > #huh?
> > > >> >
> > > >> > thanks
> > > >> >
> > > >> >
> > > >> >
> > > >> > paul.angus@shapeblue.com
> > > >> > www.shapeblue.com
> > > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić

Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
I have no idea how it can change the performance. If you look at the
content of the commit you provided, it is only the commit that enabled the
use of getCacheMode from disk offerings. However, it is not exposing any
way to users to change that value/configuration in the database. I might
have missed it; do you see any API methods that receive the parameter
"cacheMode" and then pass this parameter to a "diskOffering" object, and
then persist/update this object in the database?

May I ask how you guys are changing the cacheMode configuration?

On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <pa...@shapeblue.com>
wrote:

> I'm working with some guys who are experimenting with the setting, as it
> definitely seems to change the performance of data disks.  It also changes
> the XML of the VM which is created.
>
> p.s.
> I've found this commit;
>
> https://github.com/apache/cloudstack/commit/1edaa36cc68e845a42339d5f267d49
> c82343aefb
>
> so I've got something to investigate now, but API documentation must
> definitely be askew.
>
>
>
> paul.angus@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -----Original Message-----
> From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
> Sent: 20 February 2018 12:31
> To: dev <de...@cloudstack.apache.org>
> Subject: Re: Caching modes
>
> This cache mode parameter does not exist in "CreateDiskOfferingCmd"
> command. I also checked some commits from 2, 3, 4 and 5 years ago, and
> this parameter was never there. If you check the API in [1], you can see
> that it is not an expected parameter. Moreover, I do not see any use of
> "setCacheMode" in the code (in case it is updated by some other method).
> Interestingly enough, the code uses "getCacheMode".
>
> In summary, it is not a feature, and it does not work. It looks like some
> leftover from dark ages when people could commit anything and then they
> would just leave a half implementation there in our code base.
>
> [1]
> https://cloudstack.apache.org/api/apidocs-4.11/apis/
> createDiskOffering.html
>
>
> On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <an...@gmail.com>
> wrote:
>
> > I can also assume that "cachemode" as API parameter is not supported,
> > since when creating data disk offering via GUI also doesn't set it in
> DB/table.
> >
> > CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
> > disksize=1024 cachemode=writeback
> >
> > this also does not set cachemode in the table... my guess is it's not
> > implemented in the API
> >
> > Let me know if I can help with any testing here.
> >
> > Cheers
> >
> > On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com>
> > wrote:
> >
> > > Hi Paul,
> > >
> > > not directly answering your question, but here are some
> > > observations and a "warning" if clients are using write-back cache on
> > > KVM level
> > >
> > >
> > > I have (long time ago) tested performance in 3 combinations (this
> > > was not really thorough testing but a brief testing with FIO and
> > > random IO WRITE)
> > >
> > > - just CEPH rbd cache (on KVM side)
> > >            i.e. [client]
> > >                  rbd cache = true
> > >                  rbd cache writethrough until flush = true
> > >                  # (this is the default 32MB per volume, afaik)
> > >
> > > - just KVM write-back cache (had to manually edit the disk_offering
> > > table to activate cache mode, since when creating a new disk offering
> > > via GUI, the disk_offering table was NOT populated with the
> > > "write-back" setting/value
> > ! )
> > >
> > > - both CEPH and KVM write-back cache active
> > >
> > > My observations were as follows, but it would be good to actually
> > confirm
> > > by someone else:
> > >
> > > - same performance with only CEPH caching or with only KVM caching
> > > - a bit worse performance with both CEPH and KVM caching active
> > > (nonsense combination, I know...)
> > >
> > >
> > > Please keep in mind, that some ACS functionality, KVM
> > > live-migrations on shared storage (NFS/CEPH) are NOT supported when
> > > you use KVM write-back cache, since this is considered "unsafe"
> migration, more info here:
> > > https://doc.opensuse.org/documentation/leap/virtualization/
> > html/book.virt/
> > > cha.cachemodes.html#sec.cache.mode.live.migration
> > >
> > > or in short:
> > > "
> > > The libvirt management layer includes checks for migration
> > > compatibility based on several factors. If the guest storage is
> > > hosted on a clustered file system, is read-only or is marked
> > > shareable, then the cache mode is ignored when determining if
> > > migration can be allowed. Otherwise libvirt will not allow migration
> > > unless the cache mode is set to none. However, this restriction can
> > > be overridden with the “unsafe” option to the migration APIs, which
> > > is also supported by virsh, as for example in
> > >
> > > virsh migrate --live --unsafe
> > > "
> > >
> > > Cheers
> > > Andrija
> > >
> > >
> > > On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com>
> > wrote:
> > >
> > >> Hi Wido,
> > >>
> > >> This is for KVM (with Ceph backend as it happens), the API
> > >> documentation is out of sync with UI capabilities, so I'm trying to
> > >> figure out if we
> > >> *should* be able to set cacheMode for root disks.  It seems to make
> > quite a
> > >> difference to performance.
> > >>
> > >>
> > >>
> > >> paul.angus@shapeblue.com
> > >> www.shapeblue.com
> > >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> > >>
> > >>
> > >>
> > >>
> > >> -----Original Message-----
> > >> From: Wido den Hollander [mailto:wido@widodh.nl]
> > >> Sent: 20 February 2018 09:03
> > >> To: dev@cloudstack.apache.org
> > >> Subject: Re: Caching modes
> > >>
> > >>
> > >>
> > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > >> > Hey guys,
> > >> >
> > >> > Can anyone shed any light on write caching in CloudStack?
>  cacheMode
> > >> is available through the UI for data disks (but not root disks), but
> not
> > >> documented as an API option for data or root disks (although is
> > documented
> > >> as a response for data disks).
> > >> >
> > >>
> > >> What hypervisor?
> > >>
> > >> In case of KVM it's passed down to XML which then passes it to
> Qemu/KVM
> > >> which then handles the caching.
> > >>
> > >> The implementation varies per hypervisor, so that should be the
> > question.
> > >>
> > >> Wido
> > >>
> > >>
> > >> > #huh?
> > >> >
> > >> > thanks
> > >> >
> > >> >
> > >> >
> > >> > paul.angus@shapeblue.com
> > >> > www.shapeblue.com
> > >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> > >> >
> > >> >
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 
Rafael Weingärtner

RE: Caching modes

Posted by Paul Angus <pa...@shapeblue.com>.
I'm working with some guys who are experimenting with the setting, as it definitely seems to change the performance of data disks.  It also changes the XML of the VM which is created.

p.s.
I've found this commit; 

https://github.com/apache/cloudstack/commit/1edaa36cc68e845a42339d5f267d49c82343aefb

so I've got something to investigate now, but API documentation must definitely be askew.



paul.angus@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-----Original Message-----
From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com] 
Sent: 20 February 2018 12:31
To: dev <de...@cloudstack.apache.org>
Subject: Re: Caching modes

This cache mode parameter does not exist in "CreateDiskOfferingCmd"
command. I also checked some commits from 2, 3, 4 and 5 years ago, and this parameter was never there. If you check the API in [1], you can see that it is not an expected parameter. Moreover, I do not see any use of "setCacheMode" in the code (in case it is updated by some other method).
Interestingly enough, the code uses "getCacheMode".

In summary, it is not a feature, and it does not work. It looks like some leftover from dark ages when people could commit anything and then they would just leave a half implementation there in our code base.

[1]
https://cloudstack.apache.org/api/apidocs-4.11/apis/createDiskOffering.html


On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <an...@gmail.com>
wrote:

> I can also assume that "cachemode" as API parameter is not supported, 
> since when creating data disk offering via GUI also doesn't set it in DB/table.
>
> CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
> disksize=1024 cachemode=writeback
>
> this also does not set cachemode in the table... my guess is it's not
> implemented in the API
>
> Let me know if I can help with any testing here.
>
> Cheers
>
> On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com>
> wrote:
>
> > Hi Paul,
> >
> > not directly answering your question, but here are some
> > observations and a "warning" if clients are using write-back cache on
> > KVM level
> >
> >
> > I have (long time ago) tested performance in 3 combinations (this 
> > was not really thorough testing but a brief testing with FIO and 
> > random IO WRITE)
> >
> > - just CEPH rbd cache (on KVM side)
> >            i.e. [client]
> >                  rbd cache = true
> >                  rbd cache writethrough until flush = true
> >                  # (this is the default 32MB per volume, afaik)
> >
> > - just KVM write-back cache (had to manually edit the disk_offering
> > table to activate cache mode, since when creating a new disk offering
> > via GUI, the disk_offering table was NOT populated with the
> > "write-back" setting/value
> ! )
> >
> > - both CEPH and KVM write-back cache active
> >
> > My observations were as follows, but it would be good to actually
> confirm
> > by someone else:
> >
> > - same performance with only CEPH caching or with only KVM caching
> > - a bit worse performance with both CEPH and KVM caching active 
> > (nonsense combination, I know...)
> >
> >
> > Please keep in mind, that some ACS functionality, KVM 
> > live-migrations on shared storage (NFS/CEPH) are NOT supported when 
> > you use KVM write-back cache, since this is considered "unsafe" migration, more info here:
> > https://doc.opensuse.org/documentation/leap/virtualization/
> html/book.virt/
> > cha.cachemodes.html#sec.cache.mode.live.migration
> >
> > or in short:
> > "
> > The libvirt management layer includes checks for migration 
> > compatibility based on several factors. If the guest storage is 
> > hosted on a clustered file system, is read-only or is marked 
> > shareable, then the cache mode is ignored when determining if 
> > migration can be allowed. Otherwise libvirt will not allow migration 
> > unless the cache mode is set to none. However, this restriction can 
> > be overridden with the “unsafe” option to the migration APIs, which 
> > is also supported by virsh, as for example in
> >
> > virsh migrate --live --unsafe
> > "
> >
> > Cheers
> > Andrija
> >
> >
> > On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com>
> wrote:
> >
> >> Hi Wido,
> >>
> >> This is for KVM (with Ceph backend as it happens), the API 
> >> documentation is out of sync with UI capabilities, so I'm trying to 
> >> figure out if we
> >> *should* be able to set cacheMode for root disks.  It seems to make
> quite a
> >> difference to performance.
> >>
> >>
> >>
> >> paul.angus@shapeblue.com
> >> www.shapeblue.com
> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> From: Wido den Hollander [mailto:wido@widodh.nl]
> >> Sent: 20 February 2018 09:03
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Caching modes
> >>
> >>
> >>
> >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> >> > Hey guys,
> >> >
> >> > Can anyone shed any light on write caching in CloudStack?   cacheMode
> >> is available through the UI for data disks (but not root disks), but not
> >> documented as an API option for data or root disks (although is
> documented
> >> as a response for data disks).
> >> >
> >>
> >> What hypervisor?
> >>
> >> In case of KVM it's passed down to XML which then passes it to Qemu/KVM
> >> which then handles the caching.
> >>
> >> The implementation varies per hypervisor, so that should be the
> question.
> >>
> >> Wido
> >>
> >>
> >> > #huh?
> >> >
> >> > thanks
> >> >
> >> >
> >> >
> >> > paul.angus@shapeblue.com
> >> > www.shapeblue.com
> >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >> >
> >> >
> >> >
> >>
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner

Re: Caching modes

Posted by Rafael Weingärtner <ra...@gmail.com>.
This cache mode parameter does not exist in "CreateDiskOfferingCmd"
command. I also checked some commits from 2, 3, 4 and 5 years ago, and this
parameter was never there. If you check the API in [1], you can see that it
is not an expected parameter. Moreover, I do not see any use of
"setCacheMode" in the code (in case it is updated by some other method).
Interestingly enough, the code uses "getCacheMode".

In summary, it is not a feature, and it does not work. It looks like some
leftover from dark ages when people could commit anything and then they
would just leave a half implementation there in our code base.

[1]
https://cloudstack.apache.org/api/apidocs-4.11/apis/createDiskOffering.html


On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <an...@gmail.com>
wrote:

> I can also assume that "cachemode" as API parameter is not supported, since
> when creating data disk offering via GUI also doesn't set it in DB/table.
>
> CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
> disksize=1024 cachemode=writeback
>
> this also does not set cachemode in the table... my guess is it's not implemented
> in the API
>
> Let me know if I can help with any testing here.
>
> Cheers
>
> On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com>
> wrote:
>
> > Hi Paul,
> >
> > not directly answering your question, but here are some
> > observations and a "warning" if clients are using write-back cache on KVM
> > level
> >
> >
> > I have (long time ago) tested performance in 3 combinations (this was not
> > really thorough testing but a brief testing with FIO and random IO WRITE)
> >
> > - just CEPH rbd cache (on KVM side)
> >            i.e. [client]
> >                  rbd cache = true
> >                  rbd cache writethrough until flush = true
> >                  # (this is the default 32MB per volume, afaik)
> >
> > - just KVM write-back cache (had to manually edit the disk_offering table to
> > activate cache mode, since when creating a new disk offering via GUI, the
> > disk_offering table was NOT populated with the "write-back" setting/value
> ! )
> >
> > - both CEPH and KVM write-back cache active
> >
> > My observations were as follows, but it would be good to actually
> confirm
> > by someone else:
> >
> > - same performance with only CEPH caching or with only KVM caching
> > - a bit worse performance with both CEPH and KVM caching active (nonsense
> > combination, I know...)
> >
> >
> > Please keep in mind, that some ACS functionality, KVM live-migrations on
> > shared storage (NFS/CEPH) are NOT supported when you use KVM write-back
> > cache, since this is considered "unsafe" migration, more info here:
> > https://doc.opensuse.org/documentation/leap/virtualization/
> html/book.virt/
> > cha.cachemodes.html#sec.cache.mode.live.migration
> >
> > or in short:
> > "
> > The libvirt management layer includes checks for migration compatibility
> > based on several factors. If the guest storage is hosted on a clustered
> > file system, is read-only or is marked shareable, then the cache mode is
> > ignored when determining if migration can be allowed. Otherwise libvirt
> > will not allow migration unless the cache mode is set to none. However,
> > this restriction can be overridden with the “unsafe” option to the
> > migration APIs, which is also supported by virsh, as for example in
> >
> > virsh migrate --live --unsafe
> > "
> >
> > Cheers
> > Andrija
> >
> >
> > On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com>
> wrote:
> >
> >> Hi Wido,
> >>
> >> This is for KVM (with Ceph backend as it happens), the API documentation
> >> is out of sync with UI capabilities, so I'm trying to figure out if we
> >> *should* be able to set cacheMode for root disks.  It seems to make
> quite a
> >> difference to performance.
> >>
> >>
> >>
> >> paul.angus@shapeblue.com
> >> www.shapeblue.com
> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >> @shapeblue
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> From: Wido den Hollander [mailto:wido@widodh.nl]
> >> Sent: 20 February 2018 09:03
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Caching modes
> >>
> >>
> >>
> >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> >> > Hey guys,
> >> >
> >> > Can anyone shed any light on write caching in CloudStack?   cacheMode
> >> is available through the UI for data disks (but not root disks), but not
> >> documented as an API option for data or root disks (although is
> documented
> >> as a response for data disks).
> >> >
> >>
> >> What hypervisor?
> >>
> >> In case of KVM it's passed down to XML which then passes it to Qemu/KVM
> >> which then handles the caching.
> >>
> >> The implementation varies per hypervisor, so that should be the
> question.
> >>
> >> Wido
> >>
> >>
> >> > #huh?
> >> >
> >> > thanks
> >> >
> >> >
> >> >
> >> > paul.angus@shapeblue.com
> >> > www.shapeblue.com
> >> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >> >
> >> >
> >> >
> >>
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
Rafael Weingärtner

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
I can also assume that "cachemode" as API parameter is not supported, since
when creating data disk offering via GUI also doesn't set it in DB/table.

CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
disksize=1024 cachemode=writeback

this also does not set cachemode in the table... my guess is it's not implemented
in the API

Let me know if I can help with any testing here.

Cheers

On 20 February 2018 at 13:09, Andrija Panic <an...@gmail.com> wrote:

> Hi Paul,
>
> not directly answering your question, but here are some
> observations and a "warning" if clients are using write-back cache on KVM
> level
>
>
> I have (long time ago) tested performance in 3 combinations (this was not
> really thorough testing but a brief testing with FIO and random IO WRITE)
>
> - just CEPH rbd cache (on KVM side)
>            i.e. [client]
>                  rbd cache = true
>                  rbd cache writethrough until flush = true
>                  # (this is the default 32MB per volume, afaik)
>
> - just KVM write-back cache (had to manually edit the disk_offering table to
> activate cache mode, since when creating a new disk offering via GUI, the
> disk_offering table was NOT populated with the "write-back" setting/value ! )
>
> - both CEPH and KVM write-back cache active
>
> My observations were as follows, but it would be good to actually confirm
> by someone else:
>
> - same performance with only CEPH caching or with only KVM caching
> - a bit worse performance with both CEPH and KVM caching active (nonsense
> combination, I know...)
>
>
> Please keep in mind, that some ACS functionality, KVM live-migrations on
> shared storage (NFS/CEPH) are NOT supported when you use KVM write-back
> cache, since this is considered "unsafe" migration, more info here:
> https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/
> cha.cachemodes.html#sec.cache.mode.live.migration
>
> or in short:
> "
> The libvirt management layer includes checks for migration compatibility
> based on several factors. If the guest storage is hosted on a clustered
> file system, is read-only or is marked shareable, then the cache mode is
> ignored when determining if migration can be allowed. Otherwise libvirt
> will not allow migration unless the cache mode is set to none. However,
> this restriction can be overridden with the “unsafe” option to the
> migration APIs, which is also supported by virsh, as for example in
>
> virsh migrate --live --unsafe
> "
>
> Cheers
> Andrija
>
>
> On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com> wrote:
>
>> Hi Wido,
>>
>> This is for KVM (with Ceph backend as it happens), the API documentation
>> is out of sync with UI capabilities, so I'm trying to figure out if we
>> *should* be able to set cacheMode for root disks.  It seems to make quite a
>> difference to performance.
>>
>>
>>
>> paul.angus@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>>
>> -----Original Message-----
>> From: Wido den Hollander [mailto:wido@widodh.nl]
>> Sent: 20 February 2018 09:03
>> To: dev@cloudstack.apache.org
>> Subject: Re: Caching modes
>>
>>
>>
>> On 02/20/2018 09:46 AM, Paul Angus wrote:
>> > Hey guys,
>> >
>> > Can anyone shed any light on write caching in CloudStack?   cacheMode
>> is available through the UI for data disks (but not root disks), but not
>> documented as an API option for data or root disks (although is documented
>> as a response for data disks).
>> >
>>
>> What hypervisor?
>>
>> In case of KVM it's passed down to XML which then passes it to Qemu/KVM
>> which then handles the caching.
>>
>> The implementation varies per hypervisor, so that should be the question.
>>
>> Wido
>>
>>
>> > #huh?
>> >
>> > thanks
>> >
>> >
>> >
>> > paul.angus@shapeblue.com
>> > www.shapeblue.com
>> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>> >
>> >
>> >
>>
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić

Re: Caching modes

Posted by Andrija Panic <an...@gmail.com>.
Hi Paul,

not directly answering your question, but here are some observations and a
"warning" in case clients are using write-back cache at the KVM level.


I tested performance (a long time ago) in 3 combinations (this was not
really thorough testing, but a brief test with FIO and random IO WRITE):

- just CEPH rbd cache (on KVM side)
           i.e. [client]
                 rbd cache = true
                 rbd cache writethrough until flush = true
                 #(this is the default, 32MB per volume, afaik)

- just KVM write-back cache (had to manually edit the disk_offering table
to activate the cache mode, since when creating a new disk offering via the
GUI, the disk_offering table was NOT populated with the "write-back"
setting/value!)

- both CEPH and KVM write-back cache active
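
For reference, the manual disk_offering tweak mentioned above would look
roughly like the SQL below. This is only a sketch: the exact column name
(cache_mode), value string and offering id are assumptions, so verify them
against your CloudStack version's schema before touching the database.

```sql
-- Hypothetical example: force write-back caching on an existing disk
-- offering by editing the cloud database directly. Column name, value
-- and id are placeholders; check your schema first.
UPDATE cloud.disk_offering
SET cache_mode = 'writeback'
WHERE id = 42;  -- id of the disk offering created via the GUI
```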

My observations were as follows, but it would be good for someone else to
confirm them:

- same performance with only CEPH caching or with only KVM caching
- a bit worse performance with both CEPH and KVM caching active (nonsense
combination, I know...)
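
The kind of brief FIO random-write test described above can be expressed as
a job file along these lines. The device path, runtime and block size are
placeholder values, not the parameters from the original test:

```ini
; Hypothetical fio job for a quick random-write comparison between cache
; modes. /dev/vdb, runtime and sizes are illustrative only; point
; filename at a scratch disk you can safely overwrite.
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randwrite-test]
filename=/dev/vdb
rw=randwrite
bs=4k
iodepth=32
```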


Please keep in mind that some ACS functionality, such as KVM live migration
on shared storage (NFS/CEPH), is NOT supported when you use the KVM
write-back cache, since this is considered an "unsafe" migration; more info
here:
https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration

or in short:
"
The libvirt management layer includes checks for migration compatibility
based on several factors. If the guest storage is hosted on a clustered
file system, is read-only or is marked shareable, then the cache mode is
ignored when determining if migration can be allowed. Otherwise libvirt
will not allow migration unless the cache mode is set to none. However,
this restriction can be overridden with the “unsafe” option to the
migration APIs, which is also supported by virsh, as for example in

virsh migrate --live --unsafe
"

Cheers
Andrija


On 20 February 2018 at 11:24, Paul Angus <pa...@shapeblue.com> wrote:

> Hi Wido,
>
> This is for KVM (with Ceph backend as it happens), the API documentation
> is out of sync with UI capabilities, so I'm trying to figure out if we
> *should* be able to set cacheMode for root disks.  It seems to make quite a
> difference to performance.
>
>
>
> paul.angus@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -----Original Message-----
> From: Wido den Hollander [mailto:wido@widodh.nl]
> Sent: 20 February 2018 09:03
> To: dev@cloudstack.apache.org
> Subject: Re: Caching modes
>
>
>
> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > Hey guys,
> >
> > Can anyone shed any light on write caching in CloudStack?   cacheMode is
> available through the UI for data disks (but not root disks), but not
> documented as an API option for data or root disks (although is documented
> as a response for data disks).
> >
>
> What hypervisor?
>
> In case of KVM it's passed down to XML which then passes it to Qemu/KVM
> which then handles the caching.
>
> The implementation varies per hypervisor, so that should be the question.
>
> Wido
>
>
> > #huh?
> >
> > thanks
> >
> >
> >
> > paul.angus@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >
> >
> >
>



-- 

Andrija Panić

RE: Caching modes

Posted by Paul Angus <pa...@shapeblue.com>.
Hi Wido,

This is for KVM (with Ceph backend as it happens), the API documentation is out of sync with UI capabilities, so I'm trying to figure out if we *should* be able to set cacheMode for root disks.  It seems to make quite a difference to performance.



paul.angus@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-----Original Message-----
From: Wido den Hollander [mailto:wido@widodh.nl] 
Sent: 20 February 2018 09:03
To: dev@cloudstack.apache.org
Subject: Re: Caching modes



On 02/20/2018 09:46 AM, Paul Angus wrote:
> Hey guys,
> 
> Can anyone shed any light on write caching in CloudStack?   cacheMode is available through the UI for data disks (but not root disks), but not documented as an API option for data or root disks (although is documented as a response for data disks).
> 

What hypervisor?

In case of KVM it's passed down to XML which then passes it to Qemu/KVM which then handles the caching.

The implementation varies per hypervisor, so that should be the question.

Wido


> #huh?
> 
> thanks
> 
> 
> 
> paul.angus@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>    
>   
> 

Re: Caching modes

Posted by Wido den Hollander <wi...@widodh.nl>.

On 02/20/2018 09:46 AM, Paul Angus wrote:
> Hey guys,
> 
> Can anyone shed any light on write caching in CloudStack?   cacheMode is available through the UI for data disks (but not root disks), but not documented as an API option for data or root disks (although is documented as a response for data disks).
> 

What hypervisor?

In case of KVM it's passed down to the libvirt domain XML, which is then 
passed to Qemu/KVM, which handles the caching.
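
Concretely, that ends up as a cache attribute on the disk's driver element
in the domain XML, something like the fragment below. The source and target
names are illustrative placeholders, not from an actual CloudStack
deployment:

```xml
<!-- Illustrative libvirt domain XML fragment: the cache attribute on the
     driver element is what Qemu/KVM uses for the caching mode. Pool and
     volume names are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='cloudstack/example-volume-uuid'/>
  <target dev='vda' bus='virtio'/>
</disk>
```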

The implementation varies per hypervisor, so that should be the question.

Wido


> #huh?
> 
> thanks
> 
> 
> 
> paul.angus@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>    
>   
>