Posted to dev@cloudstack.apache.org by Slavka Peleva <sl...@storpool.com> on 2019/11/07 13:17:28 UTC

[DISCUSS] Storage-based Snapshots for KVM VMs

Hello everyone,

My name is Slavka Peleva and I work for StorPool Storage. I have recently
joined the mailing list.

My colleagues and I have been working on a new feature for storage-based
live VM snapshots under the KVM hypervisor. With the current implementation
(which uses libvirt to perform the snapshotting), it is not possible to take
VM snapshots for storage providers that keep the VMs' disks in RAW format.

That's why we have decided to implement an alternative for VM snapshots
under the KVM hypervisor. It will be useful mostly for storage providers that
keep their disks in RAW format, or for VMs with a mixed set of disks (RAW and
QCOW).

The solution uses the third-party storage providers' plugins to
create/revert/delete disk snapshots. The snapshots will be consistent,
because the virtual machine will be frozen until the snapshotting is done.

For qcow images we have an alternative solution based on the qemu
drive-backup command. It can be used if a VM has a mix of raw and qcow disks
(in this case "snapshot.backup.to.secondary" should be set to true).
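
For anyone curious what that looks like at the QEMU level, here is a rough,
hand-run equivalent of a full drive-backup (just a sketch - the domain name,
device alias and target path are made-up examples, not what the code
generates):

import json
import subprocess

# Illustrative values only - adjust to your own domain/device/target.
backup = {
    "execute": "drive-backup",
    "arguments": {
        "device": "drive-virtio-disk0",  # QEMU block device alias of the qcow disk
        "sync": "full",                  # copy the whole disk as it is right now
        "format": "qcow2",
        "target": "/mnt/secondary/snapshots/i-2-10-VM-root.qcow2",
    },
}

subprocess.run(
    ["virsh", "qemu-monitor-command", "i-2-10-VM", json.dumps(backup)],
    check=True,
)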

QEMU version 1.6+ is required; recent Ubuntu releases should already cover
this, and for RHEL/CentOS 7 the qemu-kvm-ev package is required.

And some details about the implementation - we have added a new
configuration option in the database, named "kvm.vmsnapshot.enabled".

For the VM snapshot we have split the disk takeSnapshot method into two
parts - takeSnapshot and backupSnapshot. To make all snapshots consistent,
the virtual machine is first frozen with the domfsfreeze command. While it is
frozen, an asynchronous snapshot is taken for every disk with the appropriate
datastore driver implementation. Once that completes, the virtual machine is
unfrozen with the domfsthaw command. Finally, a backup to secondary storage
is invoked. The final backup operation is mandatory for VMs with qcow disks
attached.
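
To illustrate the flow, here is a minimal sketch - it just shells out to
virsh, and take_disk_snapshot() is a hypothetical stand-in for the datastore
driver's takeSnapshot; the real code of course goes through the CloudStack
agent and the storage plugin API:

import subprocess
from concurrent.futures import ThreadPoolExecutor

def virsh(*args):
    # Run a virsh command and raise if it fails.
    subprocess.run(("virsh",) + args, check=True)

def take_disk_snapshot(volume):
    # Hypothetical stand-in for the storage provider's takeSnapshot call.
    print("snapshotting", volume)

def vm_snapshot(domain, volumes):
    virsh("domfsfreeze", domain)           # quiesce the guest filesystems
    try:
        # take the per-disk snapshots in parallel while the guest is frozen
        with ThreadPoolExecutor() as pool:
            list(pool.map(take_disk_snapshot, volumes))
    finally:
        virsh("domfsthaw", domain)         # always unfreeze, even on failure
    # backupSnapshot (the copy to secondary storage) would run here, after the thaw

vm_snapshot("i-2-10-VM", ["ROOT-10", "DATA-11"])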

Does this sound like a feature that could be accepted? Please let us know if
you have any questions, comments, or concerns about it.


Kind regards,

Slavka Peleva

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Slavka Peleva <sl...@storpool.com>.
Hello Andrija,

We totally agree that it needs to be tested. This is one of the reasons we
have implemented it as an opt-in feature. As a side note - there are no
changes to the current storage provider implementations; we just use their
existing logic to take disk snapshots.
So far we have tested with NFS, Ceph and StorPool. We will check local
storage too, thanks for the hint. We don't have the infrastructure to test
the other storage providers, so help will be needed to validate them as well.
The QEMU 1.6+ requirement does not apply to storage providers that have
implemented takeSnapshot - such as StorPool, Ceph, SolidFire, etc.
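
If someone wants to check whether the QEMU running a given domain is new
enough for the drive-backup path, one way is to ask it directly (sketch only;
the domain name is an example):

import json
import subprocess

out = subprocess.run(
    ["virsh", "qemu-monitor-command", "i-2-10-VM", '{"execute": "query-commands"}'],
    check=True, capture_output=True, text=True,
).stdout
supported = {cmd["name"] for cmd in json.loads(out)["return"]}
print("drive-backup available:", "drive-backup" in supported)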

Best regards,
Slavka

On Fri, Nov 8, 2019 at 6:01 PM Andrija Panic <an...@gmail.com>
wrote:

> Hi Slavka,
>
> If I read that correctly, your proposal seems to be around allowing to take
> snapshots of all volumes simultaneously, thus improving the current
> (useless...) logic where you can snap only single disk at a time (which
> makes i.e. DB server restoration impossible).
>
> This sounds interesting, but I might add, requires really extensive testing
> and code review, since I expect a lot of storage-related code changes.
> This needs to be tested with: local storage, NFS storage, Ceph, SolidFire,
> StorePool, and then some more probably.
>
> Qemu/libvirt versions will probably need to be taken into the
> considerations (where are the limitations etc.)
>
> Sounds very nice and as a lot of work (testing specifically).
>
> Andrija
>
> On Fri, 8 Nov 2019 at 09:57, Slavka Peleva <sl...@storpool.com> wrote:
>
> > Hi Sven,
> >
> > The procedure is:
> >
> > 1.  fsfreeze all disks of the VM with help of qemu-guest-agent. If this
> > fails, the operation is aborted
> > 2.  Take snapshot of each disk of the VM through the storage provider's
> > takeSnapshot call
> > 3.  fsthaw the VM
> > 4.  Backup the snapshots on secondary storage if it is enabled
> >
> > So the virtual machine is quiescent while the snapshots are being taken.
> >
> > Best regards,
> > Slavka
> >
> > On Fri, Nov 8, 2019 at 12:31 AM Sven Vogel <S....@ewerk.com> wrote:
> >
> > > Hi Slavka,
> > >
> > > Thanks for the answers Slavka! I have another question.
> > >
> > > You wrote:
> > >
> > > Those storage providers plugins are already in CloudStack - like
> > Solidfire,
> > > Cloudbyte and etc. It doesn't break the functionality we just use the
> > > existing implementation of every storage plugin for do the
> snapshotting.
> > >
> > > How does this work for all storages? How do you quiesce the virtual
> > > machine? Can you explain it a little bit more?
> > >
> > > Thanks and Cheers
> > >
> > > Sven
> > >
> > >
> > > __
> > >
> > > Sven Vogel
> > > Teamlead Platform
> > >
> > > EWERK DIGITAL GmbH
> > > Brühl 24, D-04109 Leipzig
> > > P +49 341 42649 - 99
> > > F +49 341 42649 - 98
> > > S.Vogel@ewerk.com
> > > www.ewerk.com
> > >
> > > Geschäftsführer:
> > > Dr. Erik Wende, Hendrik Schubert, Frank Richter
> > > Registergericht: Leipzig HRB 9065
> > >
> > > Support:
> > > +49 341 42649 555
> > >
> > > Zertifiziert nach:
> > > ISO/IEC 27001:2013
> > > DIN EN ISO 9001:2015
> > > DIN ISO/IEC 20000-1:2011
> > >
> > > ISAE 3402 Typ II Assessed
> > >
> > > EWERK-Blog<https://blog.ewerk.com/> | LinkedIn<
> > > https://www.linkedin.com/company/ewerk-group> | Xing<
> > > https://www.xing.com/company/ewerk> | Twitter<
> > > https://twitter.com/EWERK_Group> | Facebook<
> > > https://de-de.facebook.com/EWERK.IT/>
> > >
> > > Mit Handelsregistereintragung vom 09.07.2019 ist die EWERK RZ GmbH auf
> > die
> > > EWERK IT GmbH verschmolzen und firmiert nun gemeinsam unter dem Namen:
> > > EWERK DIGITAL GmbH, für weitere Informationen klicken Sie hier<
> > > https://www.ewerk.com/ewerkdigital>.
> > >
> > > Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> > >
> > > Disclaimer Privacy:
> > > Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> > ist
> > > vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> > > bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> > > Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> > > informieren Sie in diesem Fall unverzüglich den Absender und löschen
> Sie
> > > die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> > System.
> > > Vielen Dank.
> > >
> > > The contents of this e-mail (including any attachments) are
> confidential
> > > and may be legally privileged. If you are not the intended recipient of
> > > this e-mail, any disclosure, copying, distribution or use of its
> contents
> > > is strictly prohibited, and you should please notify the sender
> > immediately
> > > and then delete it (including any attachments) from your system. Thank
> > you.
> > >
> > > Am 07.11.2019 um 18:12 schrieb Slavka Peleva <slavkap@storpool.com
> > <mailto:
> > > slavkap@storpool.com>>:
> > >
> > > Hi Sven,
> > >
> > > Thank you for your questions!
> > >
> > > My answers are below.
> > >
> > > Kind regards,
> > > Slavka
> > >
> > >
> > > Thanks for your mail/contribution. You implemention sounds interesting.
> > >
> > > Let me ask some questions.
> > >
> > > 1. If you speak for RAW Disks you mean. Right?
> > > <disk type='block' device='disk‘>
> > > <driver name='qemu' type='raw‘/>
> > >
> > >
> > > Yes, disks in raw format like this.
> > >
> > >
> > > 2.
> > > You wrote:
> > >
> > > The solution is using the third party storages' plugins to
> > > create/revert/delete disk snapshots. The snapshots will be consistent,
> > > because the virtual machines will be frozen until the snapshotting is
> > done.
> > >
> > > We use Netapp Solidfire Storage. I know they are using too RAW Format.
> > > You spoke about a third party plugin.
> > >
> > >
> > > "The solution is using the third party storages' plugins to
> > > create/revert/delete disk snapshots."
> > > The Plugins where they come and does they break any existing
> > functionality?
> > >
> > >
> > > Those storage providers plugins are already in CloudStack - like
> > Solidfire,
> > > Cloudbyte and etc. It doesn't break the functionality we just use the
> > > existing implementation of every storage plugin for do the
> snapshotting.
> > >
> > >
> > >
> > > I don’t see that we if we use Solidfire we can use this plugin from
> > > Storepool. Right?
> > >
> > >
> > > The proposed solution is not bound to a specific Storage Provider
> plugin.
> > > It will work with any and all Storage Providers that has implemented
> > > takeSnapshot call.
> > >
> > >
> > > 3.
> > > And some details about the implementation -  we have added a new
> > > configuration option in the database named "kvm.vmsnapshot.enabled".
> > >
> > > At the moment there is already an flag „kvm.snapshot.enabled“. Well, I
> > > mean „kvm.vmsnapshot.enabled“ is to uniq to „kvm.snapshot.enabled“.
> > >
> > > Maybe this name should rather to be called
> > > „kvm.vmsnapshotexternalprovider.enabled“ or something like that… Naming
> > is
> > > a bit long but maybe you will find another cool name. I think it should
> > be
> > > more generic if anybody can create a plugin for a storage system like
> > > storepool, Netapp Solidfire ... .
> > >
> > >
> > > Thanks for this! I will think about more appropriate name for this
> flag.
> > >
> > >
> > > Thanks and
> > >
> > > Cheers
> > >
> > > Sven
> > >
> > >
> > >
> > > __
> > >
> > > Sven Vogel
> > > Teamlead Platform
> > >
> > > EWERK DIGITAL GmbH
> > > Brühl 24, D-04109 Leipzig
> > > P +49 341 42649 - 99
> > > F +49 341 42649 - 98
> > > S.Vogel@ewerk.com<ma...@ewerk.com>
> > > www.ewerk.com
> > >
> > > Geschäftsführer:
> > > Dr. Erik Wende, Hendrik Schubert, Frank Richter
> > > Registergericht: Leipzig HRB 9065
> > >
> > > Zertifiziert nach:
> > > ISO/IEC 27001:2013
> > > DIN EN ISO 9001:2015
> > > DIN ISO/IEC 20000-1:2011
> > >
> > > EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
> > >
> > > Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> > >
> > > Disclaimer Privacy:
> > > Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> > ist
> > > vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> > > bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> > > Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> > > informieren Sie in diesem Fall unverzüglich den Absender und löschen
> Sie
> > > die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> > System.
> > > Vielen Dank.
> > >
> > > The contents of this e-mail (including any attachments) are
> confidential
> > > and may be legally privileged. If you are not the intended recipient of
> > > this e-mail, any disclosure, copying, distribution or use of its
> contents
> > > is strictly prohibited, and you should please notify the sender
> > immediately
> > > and then delete it (including any attachments) from your system. Thank
> > you.
> > > Am 07.11.2019 um 14:17 schrieb Slavka Peleva <sl...@storpool.com>:
> > >
> > > Hello everyone,
> > >
> > > My name is Slavka Peleva and I work for StorPool Storage. I have
> recently
> > > joined the mailing list.
> > >
> > > Me and my colleagues have been working over а new feature for
> > > storage-based
> > > live VM snapshots under the KVM hypervisor. With the current
> > > implementation
> > > (which is using libvirt to perform the snapshotting) it is not possible
> > > for
> > > storage providers that keep VM's disks in RAW format to take VM
> > > snapshots.
> > >
> > > That's why we have decided to implement an alternative for VM snapshots
> > > under the KVM hypervisor. It will be useful mostly for storages that
> are
> > > keeping their disks in RAW format or in case of  VMs with mixed set of
> > > disks (RAW and QCOW).
> > >
> > > The solution is using the third party storages' plugins to
> > > create/revert/delete disk snapshots. The snapshots will be consistent,
> > > because the virtual machines will be frozen until the snapshotting is
> > > done.
> > >
> > > For the qcow images we have an alternative solution with the qemu
> > > drive-backup command. It could be used if a VM has mixed disks with raw
> > > and
> > > qcow images (in this case “snapshot.backup.to.secondary” should be set
> to
> > > true).
> > >
> > > Qemu version 1.6+ is required, latest Ubuntu should already cover this,
> > > for
> > > RHEL/CentOS7 the qemu-kvm-ev package is required.
> > >
> > > And some details about the implementation -  we have added a new
> > > configuration option in the database named "kvm.vmsnapshot.enabled".
> > >
> > > For the VM takeSnapshot method we have split the takeSnapshot (of disk
> > > method) into two parts - takeSnapshot and backupSnapshot. In order to
> > > make
> > > all snapshots consistent, the virtual machine would have to be frozen
> > > with
> > > the domfsfreeze command. While it’s frozen an asynchronous snapshot
> will
> > > be
> > > taken for all disks with the appropriate datastore driver
> implementation.
> > > Once the execution is complete, the virtual machine will be unfreezed
> by
> > > using the domfsthaw command. Finally a backup to secondary storage is
> > > invoked. The final backup operation is mandatory for VMs qcow disks
> > > attached.
> > >
> > > Does this feature sound right to be accepted? Please let us know if you
> > > have any questions, comments, concerns about this.
> > >
> > >
> > > Kind regards,
> > >
> > > Slavka Peleva
> > >
> > >
> >
>
>
> --
>
> Andrija Panić
>

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Slavka Peleva <sl...@storpool.com>.
Hi Sven,

If you are asking whether a VM will break if it is left fs-frozen for an
extended period of time - in our experience, we have not observed such a
case.
We have seen VMs left fs-frozen by mistake for hours, and after fs-thaw they
continued operating normally. In the code there is currently no hard limit
on the freeze time.
Our initial tests, at least on idle clusters, show that the time the VM is
frozen is minimal - about 1.2 seconds for a VM with 3 disks (Ceph, StorPool
and NFS qcow).
Adding a hard limit for production VMs sounds like an interesting extension.
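
If we add such a limit, the simplest shape is probably a watchdog that thaws
the guest when the snapshot step overruns some budget. Roughly (a sketch
only, shelling out to virsh; the 60-second budget is an arbitrary example):

import subprocess
import threading

def thaw(domain):
    # No check=True here: the domain may already have been thawed.
    subprocess.run(["virsh", "domfsthaw", domain])

def freeze_with_deadline(domain, work, deadline_seconds=60):
    watchdog = threading.Timer(deadline_seconds, thaw, args=(domain,))
    subprocess.run(["virsh", "domfsfreeze", domain], check=True)
    watchdog.start()
    try:
        work()                 # take the per-disk snapshots here
    finally:
        watchdog.cancel()
        thaw(domain)           # harmless if the watchdog fired already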

Best regards,
Slavka

On Fri, Nov 8, 2019 at 7:28 PM Sven Vogel <S....@ewerk.com> wrote:

> Hi Slavka, Hi Andrija,
>
> Sounds really cool.
>
> From you explaination I understand that if the machine is freezed you call
> a snapshot command for each disk and then thaw the vm. Right?
>
> How long we can freeze the vm? Is there any known time how long it’s
> possible?
>
> Thanks
>
> Sven
>
> Von meinem iPhone gesendet
>
>
> __
>
> Sven Vogel
> Teamlead Platform
>
> EWERK DIGITAL GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 99
> F +49 341 42649 - 98
> S.Vogel@ewerk.com
> www.ewerk.com
>
> Geschäftsführer:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> Registergericht: Leipzig HRB 9065
>
> Zertifiziert nach:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 20000-1:2011
>
> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>
> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>
> Disclaimer Privacy:
> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien) ist
> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem System.
> Vielen Dank.
>
> The contents of this e-mail (including any attachments) are confidential
> and may be legally privileged. If you are not the intended recipient of
> this e-mail, any disclosure, copying, distribution or use of its contents
> is strictly prohibited, and you should please notify the sender immediately
> and then delete it (including any attachments) from your system. Thank you.
> > Am 08.11.2019 um 17:01 schrieb Andrija Panic <an...@gmail.com>:
> >
> > Hi Slavka,
> >
> > If I read that correctly, your proposal seems to be around allowing to
> take
> > snapshots of all volumes simultaneously, thus improving the current
> > (useless...) logic where you can snap only single disk at a time (which
> > makes i.e. DB server restoration impossible).
> >
> > This sounds interesting, but I might add, requires really extensive
> testing
> > and code review, since I expect a lot of storage-related code changes.
> > This needs to be tested with: local storage, NFS storage, Ceph,
> SolidFire,
> > StorePool, and then some more probably.
> >
> > Qemu/libvirt versions will probably need to be taken into the
> > considerations (where are the limitations etc.)
> >
> > Sounds very nice and as a lot of work (testing specifically).
> >
> > Andrija
> >
> >> On Fri, 8 Nov 2019 at 09:57, Slavka Peleva <sl...@storpool.com>
> wrote:
> >>
> >> Hi Sven,
> >>
> >> The procedure is:
> >>
> >> 1.  fsfreeze all disks of the VM with help of qemu-guest-agent. If this
> >> fails, the operation is aborted
> >> 2.  Take snapshot of each disk of the VM through the storage provider's
> >> takeSnapshot call
> >> 3.  fsthaw the VM
> >> 4.  Backup the snapshots on secondary storage if it is enabled
> >>
> >> So the virtual machine is quiescent while the snapshots are being taken.
> >>
> >> Best regards,
> >> Slavka
> >>
> >>> On Fri, Nov 8, 2019 at 12:31 AM Sven Vogel <S....@ewerk.com> wrote:
> >>>
> >>> Hi Slavka,
> >>>
> >>> Thanks for the answers Slavka! I have another question.
> >>>
> >>> You wrote:
> >>>
> >>> Those storage providers plugins are already in CloudStack - like
> >> Solidfire,
> >>> Cloudbyte and etc. It doesn't break the functionality we just use the
> >>> existing implementation of every storage plugin for do the
> snapshotting.
> >>>
> >>> How does this work for all storages? How do you quiesce the virtual
> >>> machine? Can you explain it a little bit more?
> >>>
> >>> Thanks and Cheers
> >>>
> >>> Sven
> >>>
> >>>
> >>> __
> >>>
> >>> Sven Vogel
> >>> Teamlead Platform
> >>>
> >>> EWERK DIGITAL GmbH
> >>> Brühl 24, D-04109 Leipzig
> >>> P +49 341 42649 - 99
> >>> F +49 341 42649 - 98
> >>> S.Vogel@ewerk.com
> >>> www.ewerk.com
> >>>
> >>> Geschäftsführer:
> >>> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> >>> Registergericht: Leipzig HRB 9065
> >>>
> >>> Support:
> >>> +49 341 42649 555
> >>>
> >>> Zertifiziert nach:
> >>> ISO/IEC 27001:2013
> >>> DIN EN ISO 9001:2015
> >>> DIN ISO/IEC 20000-1:2011
> >>>
> >>> ISAE 3402 Typ II Assessed
> >>>
> >>> EWERK-Blog<https://blog.ewerk.com/> | LinkedIn<
> >>> https://www.linkedin.com/company/ewerk-group> | Xing<
> >>> https://www.xing.com/company/ewerk> | Twitter<
> >>> https://twitter.com/EWERK_Group> | Facebook<
> >>> https://de-de.facebook.com/EWERK.IT/>
> >>>
> >>> Mit Handelsregistereintragung vom 09.07.2019 ist die EWERK RZ GmbH auf
> >> die
> >>> EWERK IT GmbH verschmolzen und firmiert nun gemeinsam unter dem Namen:
> >>> EWERK DIGITAL GmbH, für weitere Informationen klicken Sie hier<
> >>> https://www.ewerk.com/ewerkdigital>.
> >>>
> >>> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> >>>
> >>> Disclaimer Privacy:
> >>> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> >> ist
> >>> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> >>> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> >>> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> >>> informieren Sie in diesem Fall unverzüglich den Absender und löschen
> Sie
> >>> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> >> System.
> >>> Vielen Dank.
> >>>
> >>> The contents of this e-mail (including any attachments) are
> confidential
> >>> and may be legally privileged. If you are not the intended recipient of
> >>> this e-mail, any disclosure, copying, distribution or use of its
> contents
> >>> is strictly prohibited, and you should please notify the sender
> >> immediately
> >>> and then delete it (including any attachments) from your system. Thank
> >> you.
> >>>
> >>> Am 07.11.2019 um 18:12 schrieb Slavka Peleva <slavkap@storpool.com
> >> <mailto:
> >>> slavkap@storpool.com>>:
> >>>
> >>> Hi Sven,
> >>>
> >>> Thank you for your questions!
> >>>
> >>> My answers are below.
> >>>
> >>> Kind regards,
> >>> Slavka
> >>>
> >>>
> >>> Thanks for your mail/contribution. You implemention sounds interesting.
> >>>
> >>> Let me ask some questions.
> >>>
> >>> 1. If you speak for RAW Disks you mean. Right?
> >>> <disk type='block' device='disk‘>
> >>> <driver name='qemu' type='raw‘/>
> >>>
> >>>
> >>> Yes, disks in raw format like this.
> >>>
> >>>
> >>> 2.
> >>> You wrote:
> >>>
> >>> The solution is using the third party storages' plugins to
> >>> create/revert/delete disk snapshots. The snapshots will be consistent,
> >>> because the virtual machines will be frozen until the snapshotting is
> >> done.
> >>>
> >>> We use Netapp Solidfire Storage. I know they are using too RAW Format.
> >>> You spoke about a third party plugin.
> >>>
> >>>
> >>> "The solution is using the third party storages' plugins to
> >>> create/revert/delete disk snapshots."
> >>> The Plugins where they come and does they break any existing
> >> functionality?
> >>>
> >>>
> >>> Those storage providers plugins are already in CloudStack - like
> >> Solidfire,
> >>> Cloudbyte and etc. It doesn't break the functionality we just use the
> >>> existing implementation of every storage plugin for do the
> snapshotting.
> >>>
> >>>
> >>>
> >>> I don’t see that we if we use Solidfire we can use this plugin from
> >>> Storepool. Right?
> >>>
> >>>
> >>> The proposed solution is not bound to a specific Storage Provider
> plugin.
> >>> It will work with any and all Storage Providers that has implemented
> >>> takeSnapshot call.
> >>>
> >>>
> >>> 3.
> >>> And some details about the implementation -  we have added a new
> >>> configuration option in the database named "kvm.vmsnapshot.enabled".
> >>>
> >>> At the moment there is already an flag „kvm.snapshot.enabled“. Well, I
> >>> mean „kvm.vmsnapshot.enabled“ is to uniq to „kvm.snapshot.enabled“.
> >>>
> >>> Maybe this name should rather to be called
> >>> „kvm.vmsnapshotexternalprovider.enabled“ or something like that… Naming
> >> is
> >>> a bit long but maybe you will find another cool name. I think it should
> >> be
> >>> more generic if anybody can create a plugin for a storage system like
> >>> storepool, Netapp Solidfire ... .
> >>>
> >>>
> >>> Thanks for this! I will think about more appropriate name for this
> flag.
> >>>
> >>>
> >>> Thanks and
> >>>
> >>> Cheers
> >>>
> >>> Sven
> >>>
> >>>
> >>>
> >>> __
> >>>
> >>> Sven Vogel
> >>> Teamlead Platform
> >>>
> >>> EWERK DIGITAL GmbH
> >>> Brühl 24, D-04109 Leipzig
> >>> P +49 341 42649 - 99
> >>> F +49 341 42649 - 98
> >>> S.Vogel@ewerk.com<ma...@ewerk.com>
> >>> www.ewerk.com
> >>>
> >>> Geschäftsführer:
> >>> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> >>> Registergericht: Leipzig HRB 9065
> >>>
> >>> Zertifiziert nach:
> >>> ISO/IEC 27001:2013
> >>> DIN EN ISO 9001:2015
> >>> DIN ISO/IEC 20000-1:2011
> >>>
> >>> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
> >>>
> >>> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> >>>
> >>> Disclaimer Privacy:
> >>> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> >> ist
> >>> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> >>> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> >>> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> >>> informieren Sie in diesem Fall unverzüglich den Absender und löschen
> Sie
> >>> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> >> System.
> >>> Vielen Dank.
> >>>
> >>> The contents of this e-mail (including any attachments) are
> confidential
> >>> and may be legally privileged. If you are not the intended recipient of
> >>> this e-mail, any disclosure, copying, distribution or use of its
> contents
> >>> is strictly prohibited, and you should please notify the sender
> >> immediately
> >>> and then delete it (including any attachments) from your system. Thank
> >> you.
> >>>> Am 07.11.2019 um 14:17 schrieb Slavka Peleva <sl...@storpool.com>:
> >>>
> >>> Hello everyone,
> >>>
> >>> My name is Slavka Peleva and I work for StorPool Storage. I have
> recently
> >>> joined the mailing list.
> >>>
> >>> Me and my colleagues have been working over а new feature for
> >>> storage-based
> >>> live VM snapshots under the KVM hypervisor. With the current
> >>> implementation
> >>> (which is using libvirt to perform the snapshotting) it is not possible
> >>> for
> >>> storage providers that keep VM's disks in RAW format to take VM
> >>> snapshots.
> >>>
> >>> That's why we have decided to implement an alternative for VM snapshots
> >>> under the KVM hypervisor. It will be useful mostly for storages that
> are
> >>> keeping their disks in RAW format or in case of  VMs with mixed set of
> >>> disks (RAW and QCOW).
> >>>
> >>> The solution is using the third party storages' plugins to
> >>> create/revert/delete disk snapshots. The snapshots will be consistent,
> >>> because the virtual machines will be frozen until the snapshotting is
> >>> done.
> >>>
> >>> For the qcow images we have an alternative solution with the qemu
> >>> drive-backup command. It could be used if a VM has mixed disks with raw
> >>> and
> >>> qcow images (in this case “snapshot.backup.to.secondary” should be set
> to
> >>> true).
> >>>
> >>> Qemu version 1.6+ is required, latest Ubuntu should already cover this,
> >>> for
> >>> RHEL/CentOS7 the qemu-kvm-ev package is required.
> >>>
> >>> And some details about the implementation -  we have added a new
> >>> configuration option in the database named "kvm.vmsnapshot.enabled".
> >>>
> >>> For the VM takeSnapshot method we have split the takeSnapshot (of disk
> >>> method) into two parts - takeSnapshot and backupSnapshot. In order to
> >>> make
> >>> all snapshots consistent, the virtual machine would have to be frozen
> >>> with
> >>> the domfsfreeze command. While it’s frozen an asynchronous snapshot
> will
> >>> be
> >>> taken for all disks with the appropriate datastore driver
> implementation.
> >>> Once the execution is complete, the virtual machine will be unfreezed
> by
> >>> using the domfsthaw command. Finally a backup to secondary storage is
> >>> invoked. The final backup operation is mandatory for VMs qcow disks
> >>> attached.
> >>>
> >>> Does this feature sound right to be accepted? Please let us know if you
> >>> have any questions, comments, concerns about this.
> >>>
> >>>
> >>> Kind regards,
> >>>
> >>> Slavka Peleva
> >>>
> >>>
> >>
> >
> >
> > --
> >
> > Andrija Panić
>

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Sven Vogel <S....@ewerk.com>.
Hi Slavka, Hi Andrija,

Sounds really cool.

From your explanation I understand that once the machine is frozen you call a snapshot command for each disk and then thaw the VM. Right?

How long can we keep the VM frozen? Is there a known limit on how long this is possible?

Thanks

Sven

Sent from my iPhone


__

Sven Vogel
Teamlead Platform

EWERK DIGITAL GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 99
F +49 341 42649 - 98
S.Vogel@ewerk.com
www.ewerk.com

Geschäftsführer:
Dr. Erik Wende, Hendrik Schubert, Frank Richter
Registergericht: Leipzig HRB 9065

Zertifiziert nach:
ISO/IEC 27001:2013
DIN EN ISO 9001:2015
DIN ISO/IEC 20000-1:2011

EWERK-Blog | LinkedIn | Xing | Twitter | Facebook

Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.

Disclaimer Privacy:
Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien) ist vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung, Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem System. Vielen Dank.

The contents of this e-mail (including any attachments) are confidential and may be legally privileged. If you are not the intended recipient of this e-mail, any disclosure, copying, distribution or use of its contents is strictly prohibited, and you should please notify the sender immediately and then delete it (including any attachments) from your system. Thank you.
> Am 08.11.2019 um 17:01 schrieb Andrija Panic <an...@gmail.com>:
>
> Hi Slavka,
>
> If I read that correctly, your proposal seems to be around allowing to take
> snapshots of all volumes simultaneously, thus improving the current
> (useless...) logic where you can snap only single disk at a time (which
> makes i.e. DB server restoration impossible).
>
> This sounds interesting, but I might add, requires really extensive testing
> and code review, since I expect a lot of storage-related code changes.
> This needs to be tested with: local storage, NFS storage, Ceph, SolidFire,
> StorePool, and then some more probably.
>
> Qemu/libvirt versions will probably need to be taken into the
> considerations (where are the limitations etc.)
>
> Sounds very nice and as a lot of work (testing specifically).
>
> Andrija
>
>> On Fri, 8 Nov 2019 at 09:57, Slavka Peleva <sl...@storpool.com> wrote:
>>
>> Hi Sven,
>>
>> The procedure is:
>>
>> 1.  fsfreeze all disks of the VM with help of qemu-guest-agent. If this
>> fails, the operation is aborted
>> 2.  Take snapshot of each disk of the VM through the storage provider's
>> takeSnapshot call
>> 3.  fsthaw the VM
>> 4.  Backup the snapshots on secondary storage if it is enabled
>>
>> So the virtual machine is quiescent while the snapshots are being taken.
>>
>> Best regards,
>> Slavka
>>
>>> On Fri, Nov 8, 2019 at 12:31 AM Sven Vogel <S....@ewerk.com> wrote:
>>>
>>> Hi Slavka,
>>>
>>> Thanks for the answers Slavka! I have another question.
>>>
>>> You wrote:
>>>
>>> Those storage providers plugins are already in CloudStack - like
>> Solidfire,
>>> Cloudbyte and etc. It doesn't break the functionality we just use the
>>> existing implementation of every storage plugin for do the snapshotting.
>>>
>>> How does this work for all storages? How do you quiesce the virtual
>>> machine? Can you explain it a little bit more?
>>>
>>> Thanks and Cheers
>>>
>>> Sven
>>>
>>>
>>> __
>>>
>>> Sven Vogel
>>> Teamlead Platform
>>>
>>> EWERK DIGITAL GmbH
>>> Brühl 24, D-04109 Leipzig
>>> P +49 341 42649 - 99
>>> F +49 341 42649 - 98
>>> S.Vogel@ewerk.com
>>> www.ewerk.com
>>>
>>> Geschäftsführer:
>>> Dr. Erik Wende, Hendrik Schubert, Frank Richter
>>> Registergericht: Leipzig HRB 9065
>>>
>>> Support:
>>> +49 341 42649 555
>>>
>>> Zertifiziert nach:
>>> ISO/IEC 27001:2013
>>> DIN EN ISO 9001:2015
>>> DIN ISO/IEC 20000-1:2011
>>>
>>> ISAE 3402 Typ II Assessed
>>>
>>> EWERK-Blog<https://blog.ewerk.com/> | LinkedIn<
>>> https://www.linkedin.com/company/ewerk-group> | Xing<
>>> https://www.xing.com/company/ewerk> | Twitter<
>>> https://twitter.com/EWERK_Group> | Facebook<
>>> https://de-de.facebook.com/EWERK.IT/>
>>>
>>> Mit Handelsregistereintragung vom 09.07.2019 ist die EWERK RZ GmbH auf
>> die
>>> EWERK IT GmbH verschmolzen und firmiert nun gemeinsam unter dem Namen:
>>> EWERK DIGITAL GmbH, für weitere Informationen klicken Sie hier<
>>> https://www.ewerk.com/ewerkdigital>.
>>>
>>> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>>>
>>> Disclaimer Privacy:
>>> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
>> ist
>>> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
>>> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
>>> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
>>> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
>>> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
>> System.
>>> Vielen Dank.
>>>
>>> The contents of this e-mail (including any attachments) are confidential
>>> and may be legally privileged. If you are not the intended recipient of
>>> this e-mail, any disclosure, copying, distribution or use of its contents
>>> is strictly prohibited, and you should please notify the sender
>> immediately
>>> and then delete it (including any attachments) from your system. Thank
>> you.
>>>
>>> Am 07.11.2019 um 18:12 schrieb Slavka Peleva <slavkap@storpool.com
>> <mailto:
>>> slavkap@storpool.com>>:
>>>
>>> Hi Sven,
>>>
>>> Thank you for your questions!
>>>
>>> My answers are below.
>>>
>>> Kind regards,
>>> Slavka
>>>
>>>
>>> Thanks for your mail/contribution. You implemention sounds interesting.
>>>
>>> Let me ask some questions.
>>>
>>> 1. If you speak for RAW Disks you mean. Right?
>>> <disk type='block' device='disk‘>
>>> <driver name='qemu' type='raw‘/>
>>>
>>>
>>> Yes, disks in raw format like this.
>>>
>>>
>>> 2.
>>> You wrote:
>>>
>>> The solution is using the third party storages' plugins to
>>> create/revert/delete disk snapshots. The snapshots will be consistent,
>>> because the virtual machines will be frozen until the snapshotting is
>> done.
>>>
>>> We use Netapp Solidfire Storage. I know they are using too RAW Format.
>>> You spoke about a third party plugin.
>>>
>>>
>>> "The solution is using the third party storages' plugins to
>>> create/revert/delete disk snapshots."
>>> The Plugins where they come and does they break any existing
>> functionality?
>>>
>>>
>>> Those storage providers plugins are already in CloudStack - like
>> Solidfire,
>>> Cloudbyte and etc. It doesn't break the functionality we just use the
>>> existing implementation of every storage plugin for do the snapshotting.
>>>
>>>
>>>
>>> I don’t see that we if we use Solidfire we can use this plugin from
>>> Storepool. Right?
>>>
>>>
>>> The proposed solution is not bound to a specific Storage Provider plugin.
>>> It will work with any and all Storage Providers that has implemented
>>> takeSnapshot call.
>>>
>>>
>>> 3.
>>> And some details about the implementation -  we have added a new
>>> configuration option in the database named "kvm.vmsnapshot.enabled".
>>>
>>> At the moment there is already an flag „kvm.snapshot.enabled“. Well, I
>>> mean „kvm.vmsnapshot.enabled“ is to uniq to „kvm.snapshot.enabled“.
>>>
>>> Maybe this name should rather to be called
>>> „kvm.vmsnapshotexternalprovider.enabled“ or something like that… Naming
>> is
>>> a bit long but maybe you will find another cool name. I think it should
>> be
>>> more generic if anybody can create a plugin for a storage system like
>>> storepool, Netapp Solidfire ... .
>>>
>>>
>>> Thanks for this! I will think about more appropriate name for this flag.
>>>
>>>
>>> Thanks and
>>>
>>> Cheers
>>>
>>> Sven
>>>
>>>
>>>
>>> __
>>>
>>> Sven Vogel
>>> Teamlead Platform
>>>
>>> EWERK DIGITAL GmbH
>>> Brühl 24, D-04109 Leipzig
>>> P +49 341 42649 - 99
>>> F +49 341 42649 - 98
>>> S.Vogel@ewerk.com<ma...@ewerk.com>
>>> www.ewerk.com
>>>
>>> Geschäftsführer:
>>> Dr. Erik Wende, Hendrik Schubert, Frank Richter
>>> Registergericht: Leipzig HRB 9065
>>>
>>> Zertifiziert nach:
>>> ISO/IEC 27001:2013
>>> DIN EN ISO 9001:2015
>>> DIN ISO/IEC 20000-1:2011
>>>
>>> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>>>
>>> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>>>
>>> Disclaimer Privacy:
>>> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
>> ist
>>> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
>>> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
>>> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
>>> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
>>> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
>> System.
>>> Vielen Dank.
>>>
>>> The contents of this e-mail (including any attachments) are confidential
>>> and may be legally privileged. If you are not the intended recipient of
>>> this e-mail, any disclosure, copying, distribution or use of its contents
>>> is strictly prohibited, and you should please notify the sender
>> immediately
>>> and then delete it (including any attachments) from your system. Thank
>> you.
>>>> Am 07.11.2019 um 14:17 schrieb Slavka Peleva <sl...@storpool.com>:
>>>
>>> Hello everyone,
>>>
>>> My name is Slavka Peleva and I work for StorPool Storage. I have recently
>>> joined the mailing list.
>>>
>>> Me and my colleagues have been working over а new feature for
>>> storage-based
>>> live VM snapshots under the KVM hypervisor. With the current
>>> implementation
>>> (which is using libvirt to perform the snapshotting) it is not possible
>>> for
>>> storage providers that keep VM's disks in RAW format to take VM
>>> snapshots.
>>>
>>> That's why we have decided to implement an alternative for VM snapshots
>>> under the KVM hypervisor. It will be useful mostly for storages that are
>>> keeping their disks in RAW format or in case of  VMs with mixed set of
>>> disks (RAW and QCOW).
>>>
>>> The solution is using the third party storages' plugins to
>>> create/revert/delete disk snapshots. The snapshots will be consistent,
>>> because the virtual machines will be frozen until the snapshotting is
>>> done.
>>>
>>> For the qcow images we have an alternative solution with the qemu
>>> drive-backup command. It could be used if a VM has mixed disks with raw
>>> and
>>> qcow images (in this case “snapshot.backup.to.secondary” should be set to
>>> true).
>>>
>>> Qemu version 1.6+ is required, latest Ubuntu should already cover this,
>>> for
>>> RHEL/CentOS7 the qemu-kvm-ev package is required.
>>>
>>> And some details about the implementation -  we have added a new
>>> configuration option in the database named "kvm.vmsnapshot.enabled".
>>>
>>> For the VM takeSnapshot method we have split the takeSnapshot (of disk
>>> method) into two parts - takeSnapshot and backupSnapshot. In order to
>>> make
>>> all snapshots consistent, the virtual machine would have to be frozen
>>> with
>>> the domfsfreeze command. While it’s frozen an asynchronous snapshot will
>>> be
>>> taken for all disks with the appropriate datastore driver implementation.
>>> Once the execution is complete, the virtual machine will be unfreezed by
>>> using the domfsthaw command. Finally a backup to secondary storage is
>>> invoked. The final backup operation is mandatory for VMs qcow disks
>>> attached.
>>>
>>> Does this feature sound right to be accepted? Please let us know if you
>>> have any questions, comments, concerns about this.
>>>
>>>
>>> Kind regards,
>>>
>>> Slavka Peleva
>>>
>>>
>>
>
>
> --
>
> Andrija Panić

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Andrija Panic <an...@gmail.com>.
Hi Slavka,

If I read that correctly, your proposal seems to be about allowing snapshots
of all volumes to be taken simultaneously, thus improving the current
(rather useless...) logic where you can snapshot only a single disk at a time
(which makes e.g. DB server restoration impossible).

This sounds interesting but, I might add, it requires really extensive testing
and code review, since I expect a lot of storage-related code changes.
This needs to be tested with local storage, NFS storage, Ceph, SolidFire,
StorPool, and probably a few more.

QEMU/libvirt versions will probably also need to be taken into
consideration (what the limitations are, etc.).

Sounds very nice, and like a lot of work (testing specifically).

Andrija

On Fri, 8 Nov 2019 at 09:57, Slavka Peleva <sl...@storpool.com> wrote:

> Hi Sven,
>
> The procedure is:
>
> 1.  fsfreeze all disks of the VM with help of qemu-guest-agent. If this
> fails, the operation is aborted
> 2.  Take snapshot of each disk of the VM through the storage provider's
> takeSnapshot call
> 3.  fsthaw the VM
> 4.  Backup the snapshots on secondary storage if it is enabled
>
> So the virtual machine is quiescent while the snapshots are being taken.
>
> Best regards,
> Slavka
>
> On Fri, Nov 8, 2019 at 12:31 AM Sven Vogel <S....@ewerk.com> wrote:
>
> > Hi Slavka,
> >
> > Thanks for the answers Slavka! I have another question.
> >
> > You wrote:
> >
> > Those storage providers plugins are already in CloudStack - like
> Solidfire,
> > Cloudbyte and etc. It doesn't break the functionality we just use the
> > existing implementation of every storage plugin for do the snapshotting.
> >
> > How does this work for all storages? How do you quiesce the virtual
> > machine? Can you explain it a little bit more?
> >
> > Thanks and Cheers
> >
> > Sven
> >
> >
> > __
> >
> > Sven Vogel
> > Teamlead Platform
> >
> > EWERK DIGITAL GmbH
> > Brühl 24, D-04109 Leipzig
> > P +49 341 42649 - 99
> > F +49 341 42649 - 98
> > S.Vogel@ewerk.com
> > www.ewerk.com
> >
> > Geschäftsführer:
> > Dr. Erik Wende, Hendrik Schubert, Frank Richter
> > Registergericht: Leipzig HRB 9065
> >
> > Support:
> > +49 341 42649 555
> >
> > Zertifiziert nach:
> > ISO/IEC 27001:2013
> > DIN EN ISO 9001:2015
> > DIN ISO/IEC 20000-1:2011
> >
> > ISAE 3402 Typ II Assessed
> >
> > EWERK-Blog<https://blog.ewerk.com/> | LinkedIn<
> > https://www.linkedin.com/company/ewerk-group> | Xing<
> > https://www.xing.com/company/ewerk> | Twitter<
> > https://twitter.com/EWERK_Group> | Facebook<
> > https://de-de.facebook.com/EWERK.IT/>
> >
> > Mit Handelsregistereintragung vom 09.07.2019 ist die EWERK RZ GmbH auf
> die
> > EWERK IT GmbH verschmolzen und firmiert nun gemeinsam unter dem Namen:
> > EWERK DIGITAL GmbH, für weitere Informationen klicken Sie hier<
> > https://www.ewerk.com/ewerkdigital>.
> >
> > Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> >
> > Disclaimer Privacy:
> > Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> ist
> > vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> > bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> > Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> > informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> > die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> System.
> > Vielen Dank.
> >
> > The contents of this e-mail (including any attachments) are confidential
> > and may be legally privileged. If you are not the intended recipient of
> > this e-mail, any disclosure, copying, distribution or use of its contents
> > is strictly prohibited, and you should please notify the sender
> immediately
> > and then delete it (including any attachments) from your system. Thank
> you.
> >
> > Am 07.11.2019 um 18:12 schrieb Slavka Peleva <slavkap@storpool.com
> <mailto:
> > slavkap@storpool.com>>:
> >
> > Hi Sven,
> >
> > Thank you for your questions!
> >
> > My answers are below.
> >
> > Kind regards,
> > Slavka
> >
> >
> > Thanks for your mail/contribution. You implemention sounds interesting.
> >
> > Let me ask some questions.
> >
> > 1. If you speak for RAW Disks you mean. Right?
> > <disk type='block' device='disk‘>
> > <driver name='qemu' type='raw‘/>
> >
> >
> > Yes, disks in raw format like this.
> >
> >
> > 2.
> > You wrote:
> >
> > The solution is using the third party storages' plugins to
> > create/revert/delete disk snapshots. The snapshots will be consistent,
> > because the virtual machines will be frozen until the snapshotting is
> done.
> >
> > We use Netapp Solidfire Storage. I know they are using too RAW Format.
> > You spoke about a third party plugin.
> >
> >
> > "The solution is using the third party storages' plugins to
> > create/revert/delete disk snapshots."
> > The Plugins where they come and does they break any existing
> functionality?
> >
> >
> > Those storage providers plugins are already in CloudStack - like
> Solidfire,
> > Cloudbyte and etc. It doesn't break the functionality we just use the
> > existing implementation of every storage plugin for do the snapshotting.
> >
> >
> >
> > I don’t see that we if we use Solidfire we can use this plugin from
> > Storepool. Right?
> >
> >
> > The proposed solution is not bound to a specific Storage Provider plugin.
> > It will work with any and all Storage Providers that has implemented
> > takeSnapshot call.
> >
> >
> > 3.
> > And some details about the implementation -  we have added a new
> > configuration option in the database named "kvm.vmsnapshot.enabled".
> >
> > At the moment there is already an flag „kvm.snapshot.enabled“. Well, I
> > mean „kvm.vmsnapshot.enabled“ is to uniq to „kvm.snapshot.enabled“.
> >
> > Maybe this name should rather to be called
> > „kvm.vmsnapshotexternalprovider.enabled“ or something like that… Naming
> is
> > a bit long but maybe you will find another cool name. I think it should
> be
> > more generic if anybody can create a plugin for a storage system like
> > storepool, Netapp Solidfire ... .
> >
> >
> > Thanks for this! I will think about more appropriate name for this flag.
> >
> >
> > Thanks and
> >
> > Cheers
> >
> > Sven
> >
> >
> >
> > __
> >
> > Sven Vogel
> > Teamlead Platform
> >
> > EWERK DIGITAL GmbH
> > Brühl 24, D-04109 Leipzig
> > P +49 341 42649 - 99
> > F +49 341 42649 - 98
> > S.Vogel@ewerk.com<ma...@ewerk.com>
> > www.ewerk.com
> >
> > Geschäftsführer:
> > Dr. Erik Wende, Hendrik Schubert, Frank Richter
> > Registergericht: Leipzig HRB 9065
> >
> > Zertifiziert nach:
> > ISO/IEC 27001:2013
> > DIN EN ISO 9001:2015
> > DIN ISO/IEC 20000-1:2011
> >
> > EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
> >
> > Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
> >
> > Disclaimer Privacy:
> > Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
> ist
> > vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> > bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> > Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> > informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> > die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
> System.
> > Vielen Dank.
> >
> > The contents of this e-mail (including any attachments) are confidential
> > and may be legally privileged. If you are not the intended recipient of
> > this e-mail, any disclosure, copying, distribution or use of its contents
> > is strictly prohibited, and you should please notify the sender
> immediately
> > and then delete it (including any attachments) from your system. Thank
> you.
> > Am 07.11.2019 um 14:17 schrieb Slavka Peleva <sl...@storpool.com>:
> >
> > Hello everyone,
> >
> > My name is Slavka Peleva and I work for StorPool Storage. I have recently
> > joined the mailing list.
> >
> > Me and my colleagues have been working over а new feature for
> > storage-based
> > live VM snapshots under the KVM hypervisor. With the current
> > implementation
> > (which is using libvirt to perform the snapshotting) it is not possible
> > for
> > storage providers that keep VM's disks in RAW format to take VM
> > snapshots.
> >
> > That's why we have decided to implement an alternative for VM snapshots
> > under the KVM hypervisor. It will be useful mostly for storages that are
> > keeping their disks in RAW format or in case of  VMs with mixed set of
> > disks (RAW and QCOW).
> >
> > The solution is using the third party storages' plugins to
> > create/revert/delete disk snapshots. The snapshots will be consistent,
> > because the virtual machines will be frozen until the snapshotting is
> > done.
> >
> > For the qcow images we have an alternative solution with the qemu
> > drive-backup command. It could be used if a VM has mixed disks with raw
> > and
> > qcow images (in this case “snapshot.backup.to.secondary” should be set to
> > true).
> >
> > Qemu version 1.6+ is required, latest Ubuntu should already cover this,
> > for
> > RHEL/CentOS7 the qemu-kvm-ev package is required.
> >
> > And some details about the implementation -  we have added a new
> > configuration option in the database named "kvm.vmsnapshot.enabled".
> >
> > For the VM takeSnapshot method we have split the takeSnapshot (of disk
> > method) into two parts - takeSnapshot and backupSnapshot. In order to
> > make
> > all snapshots consistent, the virtual machine would have to be frozen
> > with
> > the domfsfreeze command. While it’s frozen an asynchronous snapshot will
> > be
> > taken for all disks with the appropriate datastore driver implementation.
> > Once the execution is complete, the virtual machine will be unfreezed by
> > using the domfsthaw command. Finally a backup to secondary storage is
> > invoked. The final backup operation is mandatory for VMs qcow disks
> > attached.
> >
> > Does this feature sound right to be accepted? Please let us know if you
> > have any questions, comments, concerns about this.
> >
> >
> > Kind regards,
> >
> > Slavka Peleva
> >
> >
>


-- 

Andrija Panić

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Slavka Peleva <sl...@storpool.com>.
Hi Sven,

The procedure is:

1.  fsfreeze all disks of the VM with the help of the qemu-guest-agent. If
this fails, the operation is aborted.
2.  Take a snapshot of each disk of the VM through the storage provider's
takeSnapshot call.
3.  fsthaw the VM.
4.  Back up the snapshots to secondary storage, if that is enabled.

So the virtual machine is quiescent while the snapshots are being taken.
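
Steps 1 and 3 go through the qemu-guest-agent inside the guest; conceptually
they are the same as running the agent commands by hand, e.g. (sketch only,
the domain name is an example):

import json
import subprocess

def agent(domain, command):
    # Send a command to the qemu-guest-agent and return its "return" value.
    out = subprocess.run(
        ["virsh", "qemu-agent-command", domain, json.dumps({"execute": command})],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["return"]

domain = "i-2-10-VM"
frozen = agent(domain, "guest-fsfreeze-freeze")   # step 1; raises (aborts) on failure
try:
    print("frozen filesystems:", frozen)
    # step 2: the storage provider's takeSnapshot for each disk goes here
finally:
    agent(domain, "guest-fsfreeze-thaw")          # step 3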

Best regards,
Slavka

On Fri, Nov 8, 2019 at 12:31 AM Sven Vogel <S....@ewerk.com> wrote:

> Hi Slavka,
>
> Thanks for the answers Slavka! I have another question.
>
> You wrote:
>
> Those storage providers plugins are already in CloudStack - like Solidfire,
> Cloudbyte and etc. It doesn't break the functionality we just use the
> existing implementation of every storage plugin for do the snapshotting.
>
> How does this work for all storages? How do you quiesce the virtual
> machine? Can you explain it a little bit more?
>
> Thanks and Cheers
>
> Sven
>
>
> __
>
> Sven Vogel
> Teamlead Platform
>
> EWERK DIGITAL GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 99
> F +49 341 42649 - 98
> S.Vogel@ewerk.com
> www.ewerk.com
>
> Geschäftsführer:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> Registergericht: Leipzig HRB 9065
>
> Support:
> +49 341 42649 555
>
> Zertifiziert nach:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 20000-1:2011
>
> ISAE 3402 Typ II Assessed
>
> EWERK-Blog<https://blog.ewerk.com/> | LinkedIn<
> https://www.linkedin.com/company/ewerk-group> | Xing<
> https://www.xing.com/company/ewerk> | Twitter<
> https://twitter.com/EWERK_Group> | Facebook<
> https://de-de.facebook.com/EWERK.IT/>
>
> Mit Handelsregistereintragung vom 09.07.2019 ist die EWERK RZ GmbH auf die
> EWERK IT GmbH verschmolzen und firmiert nun gemeinsam unter dem Namen:
> EWERK DIGITAL GmbH, für weitere Informationen klicken Sie hier<
> https://www.ewerk.com/ewerkdigital>.
>
> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>
> Disclaimer Privacy:
> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien) ist
> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem System.
> Vielen Dank.
>
> The contents of this e-mail (including any attachments) are confidential
> and may be legally privileged. If you are not the intended recipient of
> this e-mail, any disclosure, copying, distribution or use of its contents
> is strictly prohibited, and you should please notify the sender immediately
> and then delete it (including any attachments) from your system. Thank you.
>
> Am 07.11.2019 um 18:12 schrieb Slavka Peleva <slavkap@storpool.com<mailto:
> slavkap@storpool.com>>:
>
> Hi Sven,
>
> Thank you for your questions!
>
> My answers are below.
>
> Kind regards,
> Slavka
>
>
> Thanks for your mail/contribution. You implemention sounds interesting.
>
> Let me ask some questions.
>
> 1. If you speak for RAW Disks you mean. Right?
> <disk type='block' device='disk‘>
> <driver name='qemu' type='raw‘/>
>
>
> Yes, disks in raw format like this.
>
>
> 2.
> You wrote:
>
> The solution is using the third party storages' plugins to
> create/revert/delete disk snapshots. The snapshots will be consistent,
> because the virtual machines will be frozen until the snapshotting is done.
>
> We use Netapp Solidfire Storage. I know they are using too RAW Format.
> You spoke about a third party plugin.
>
>
> "The solution is using the third party storages' plugins to
> create/revert/delete disk snapshots."
> The Plugins where they come and does they break any existing functionality?
>
>
> Those storage providers plugins are already in CloudStack - like Solidfire,
> Cloudbyte and etc. It doesn't break the functionality we just use the
> existing implementation of every storage plugin for do the snapshotting.
>
>
>
> I don’t see that we if we use Solidfire we can use this plugin from
> Storepool. Right?
>
>
> The proposed solution is not bound to a specific Storage Provider plugin.
> It will work with any and all Storage Providers that has implemented
> takeSnapshot call.
>
>
> 3.
> And some details about the implementation -  we have added a new
> configuration option in the database named "kvm.vmsnapshot.enabled".
>
> At the moment there is already an flag „kvm.snapshot.enabled“. Well, I
> mean „kvm.vmsnapshot.enabled“ is to uniq to „kvm.snapshot.enabled“.
>
> Maybe this name should rather to be called
> „kvm.vmsnapshotexternalprovider.enabled“ or something like that… Naming is
> a bit long but maybe you will find another cool name. I think it should be
> more generic if anybody can create a plugin for a storage system like
> storepool, Netapp Solidfire ... .
>
>
> Thanks for this! I will think about more appropriate name for this flag.
>
>
> Thanks and
>
> Cheers
>
> Sven
>
>
>
> __
>
> Sven Vogel
> Teamlead Platform
>
> EWERK DIGITAL GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 99
> F +49 341 42649 - 98
> S.Vogel@ewerk.com<ma...@ewerk.com>
> www.ewerk.com
>
> Geschäftsführer:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> Registergericht: Leipzig HRB 9065
>
> Zertifiziert nach:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 20000-1:2011
>
> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>
> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>
> Disclaimer Privacy:
> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien) ist
> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem System.
> Vielen Dank.
>
> The contents of this e-mail (including any attachments) are confidential
> and may be legally privileged. If you are not the intended recipient of
> this e-mail, any disclosure, copying, distribution or use of its contents
> is strictly prohibited, and you should please notify the sender immediately
> and then delete it (including any attachments) from your system. Thank you.

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Sven Vogel <S....@ewerk.com>.
Hi Slavka,

Thanks for the answers, Slavka! I have another question.

You wrote:

Those storage provider plugins are already in CloudStack - Solidfire,
Cloudbyte, etc. They don't break any existing functionality; we just use
each storage plugin's existing implementation to do the snapshotting.

How does this work for all storage types? How do you quiesce the virtual machine? Can you explain it in a little more detail?

Thanks and Cheers

Sven


__

Sven Vogel
Teamlead Platform

EWERK DIGITAL GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 99
F +49 341 42649 - 98
S.Vogel@ewerk.com
www.ewerk.com



Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Slavka Peleva <sl...@storpool.com>.
Hi Sven,

Thank you for your questions!

My answers are below.

Kind regards,
Slavka

>
> Thanks for your mail/contribution. Your implementation sounds interesting.
>
> Let me ask some questions.
>
> 1. When you say RAW disks, you mean something like this, right?
> <disk type='block' device='disk'>
> <driver name='qemu' type='raw'/>
>
>
 Yes, disks in raw format like this.


> 2.
> You wrote:
>
> The solution is using the third party storages' plugins to
> create/revert/delete disk snapshots. The snapshots will be consistent,
> because the virtual machines will be frozen until the snapshotting is done.
>
> We use Netapp Solidfire storage. I know it also uses the RAW format.
> You spoke about a third-party plugin.
>
>
> "The solution is using the third party storages' plugins to
> create/revert/delete disk snapshots."
> Where do the plugins come from, and do they break any existing functionality?
>
>
Those storage provider plugins are already in CloudStack - Solidfire,
Cloudbyte, etc. They don't break any existing functionality; we just use
each storage plugin's existing implementation to do the snapshotting.
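
Roughly, the flow on the KVM host side is the one sketched below. Please take
it only as an illustration of the ordering, not as the actual code: the class
and interface names (VmSnapshotFlowSketch, DiskSnapshotter) are made up for
this mail, and in the real patch the per-disk call goes through CloudStack's
datastore driver framework and is asynchronous. The domfsfreeze/domfsthaw
calls need the QEMU guest agent running inside the VM.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only; names are placeholders, not the actual classes.
// It shows the intended order: freeze the guest, snapshot every disk through
// its storage plugin, thaw the guest again.
public class VmSnapshotFlowSketch {

    /** Stand-in for the takeSnapshot a storage plugin already implements. */
    public interface DiskSnapshotter {
        void takeSnapshot(String volumeUuid);
    }

    private static void virsh(String... args) throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>();
        cmd.add("virsh");
        cmd.addAll(Arrays.asList(args));
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("virsh " + String.join(" ", args) + " exited non-zero");
        }
    }

    public static void snapshotVm(String domain, List<String> volumeUuids,
            DiskSnapshotter driver) throws IOException, InterruptedException {
        virsh("domfsfreeze", domain);             // quiesce the guest file systems
        try {
            for (String uuid : volumeUuids) {     // one storage-side snapshot per disk
                driver.takeSnapshot(uuid);
            }
        } finally {
            virsh("domfsthaw", domain);           // always thaw, even if a snapshot fails
        }
    }
}

After the thaw, qcow2 disks are additionally backed up to secondary storage,
as described in the original mail, but the freeze/snapshot/thaw ordering above
is what makes the snapshots consistent.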


>
> If we use Solidfire, I don't see how we could use this plugin from
> StorPool. Right?
>
>
The proposed solution is not bound to a specific storage provider plugin.
It will work with any and all storage providers that have implemented the
takeSnapshot call.
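
To put it differently, the only contract this feature relies on is the
per-volume snapshot handling that such a plugin already exposes. A purely
illustrative sketch of that contract (this interface does not exist under
this name in CloudStack; it just names the operations we call):

// Placeholder interface for illustration; the real driver API is richer.
public interface VolumeSnapshotProvider {
    /** Create a storage-side snapshot of the given volume and return its id. */
    String takeSnapshot(String volumeUuid);

    /** Roll the volume back to a previously taken snapshot. */
    void revertSnapshot(String volumeUuid, String snapshotId);

    /** Delete a snapshot that is no longer needed. */
    void deleteSnapshot(String snapshotId);
}

Any plugin that already provides these operations can be used as-is; the
feature only changes when and in what order they are invoked.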


> 3.
> And some details about the implementation -  we have added a new
> configuration option in the database named "kvm.vmsnapshot.enabled".
>
> At the moment there is already a flag „kvm.snapshot.enabled“. What I mean
> is that „kvm.vmsnapshot.enabled“ is too close to „kvm.snapshot.enabled“.
>
> Maybe this flag should rather be called
> „kvm.vmsnapshotexternalprovider.enabled“ or something like that… The name
> is a bit long, but maybe you will find a better one. I think it should be
> more generic, since anybody can create a plugin for a storage system like
> StorPool, Netapp Solidfire, etc.
>
>
Thanks for this! I will think about a more appropriate name for this flag.


> Thanks and
>
> Cheers
>
> Sven
>

Re: [DISCUSS] Storage-based Snapshots for KVM VMs

Posted by Sven Vogel <S....@ewerk.com>.
Hi Slavka,

Thanks for your mail/contribution. Your implementation sounds interesting.

Let me ask some questions.

1. When you say RAW disks, you mean something like this, right?
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>

2.
You wrote:

The solution is using the third party storages' plugins to
create/revert/delete disk snapshots. The snapshots will be consistent,
because the virtual machines will be frozen until the snapshotting is done.

We use Netapp Solidfire storage. I know it also uses the RAW format.
You spoke about a third-party plugin.


"The solution is using the third party storages' plugins to
create/revert/delete disk snapshots."
Where do the plugins come from, and do they break any existing functionality?


If we use Solidfire, I don't see how we could use this plugin from StorPool. Right?

3.
And some details about the implementation -  we have added a new
configuration option in the database named "kvm.vmsnapshot.enabled".

At the moment there is already a flag „kvm.snapshot.enabled“. What I mean is that „kvm.vmsnapshot.enabled“ is too close to „kvm.snapshot.enabled“.

Maybe this flag should rather be called „kvm.vmsnapshotexternalprovider.enabled“ or something like that… The name is a bit long, but maybe you will find a better one. I think it should be more generic, since anybody can create a plugin for a storage system like StorPool, Netapp Solidfire, etc.

Thanks and

Cheers

Sven



__

Sven Vogel
Teamlead Platform

EWERK DIGITAL GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 99
F +49 341 42649 - 98
S.Vogel@ewerk.com
www.ewerk.com
