Posted to users@cloudstack.apache.org by Adam Witwicki <aw...@oakfordis.com> on 2018/06/26 08:57:43 UTC

CloudStack - KVM / Ceph Performance

We are days away from completing our production CloudStack / Ceph deployment but have run into a small challenge with Ceph performance. As you run a setup similar to ours, I was wondering if you've seen a similar issue?

Our Ceph cluster consists of 60 x 7.2K SAS drives, spread over 5 nodes, each with a 960GB Samsung NVMe. Our KVM hypervisors are connected to these nodes via 2 x 10Gbps links per node.

3 CentOS CloudStack 4.11.0 management servers
4 CentOS KVM hosts

The RAW Ceph performance looks great!

CEPH RAW

             Average IOPS    Max IOPS    MB/sec
Write 4K     5,885           6,190       22.9907
Read 4K      28,985          35,025      113
Write 4MB    204             219         816
Read 4MB     361             399         1447.51
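
(The post doesn't name the benchmark tool; as a rough sketch, raw 4K/4MB write and read figures like these are often gathered with rados bench against a scratch pool - the pool name "bench", the 60s runtime and 16 threads below are assumptions.)

  # 4K writes, then sequential reads of the same objects
  rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup
  rados bench -p bench 60 seq -t 16
  rados -p bench cleanup
  # 4MB writes (the default object size), then sequential reads
  rados bench -p bench 60 write -t 16 --no-cleanup
  rados bench -p bench 60 seq -t 16
  rados -p bench cleanup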


However, when we run the tests from within a VM on KVM, we get significantly reduced performance, particularly in writes:

VM Running on KVM Hypervisor

             Average IOPS    Max IOPS    MB/sec
Write 4K     946.07          1390        3.69
Write 4MB    472             614         116
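
(Again the tool isn't named; a minimal sketch of an equivalent in-guest run, assuming fio against a scratch virtio data disk at /dev/vdb - the device path and job parameters are assumptions, and the job overwrites that disk:)

  # 4K random writes, direct I/O, 60 seconds
  fio --name=write4k --filename=/dev/vdb --rw=randwrite --bs=4k --direct=1 \
      --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting
  # 4MB sequential writes
  fio --name=write4m --filename=/dev/vdb --rw=write --bs=4M --direct=1 \
      --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting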


We believe the libvirt/Ceph library could be causing the problem within CentOS, as we don't see this on an Ubuntu KVM host.
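
(A quick way to test that theory is to compare the client-side stack on the two hosts; a sketch - exact package names vary by distro release, so treat them as assumptions:)

  # CentOS KVM host
  rpm -q librbd1 qemu-kvm libvirt-daemon; uname -r
  # Ubuntu KVM host
  dpkg -l | grep -E 'librbd1|qemu-system-x86|libvirt'; uname -r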

Kind Regards,

[Oakford Internet Services] <http://www.oakfordis.com/>
Adam Witwicki | Hosted Systems Specialist
01380 710278 / awitwicki@oakfordis.com
Oakford Internet Services Office: 01380 888088
10 Prince Maurice Court, Devizes, Wiltshire. SN10 2RT
www.oakfordis.com / sales@oakfordis.com




Re: CloudStack - KVM / Ceph Performance

Posted by Andrija Panic <an...@gmail.com>.
https://ceph.com/geen-categorie/ceph-validate-that-the-rbd-cache-is-active/
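
(A minimal sketch of what checks like the linked one boil down to: expose a client admin socket in ceph.conf on the KVM host, then query it while a VM is running - the socket path pattern and config section are assumptions that depend on your ceph.conf:)

  # /etc/ceph/ceph.conf on the KVM host, under [client]:
  #   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
  # then start (or restart) a VM and query the qemu client's socket:
  ls /var/run/ceph/
  ceph --admin-daemon /var/run/ceph/<client-socket>.asok config show | grep rbd_cache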


RE: CloudStack - KVM / Ceph Performance

Posted by Adam Witwicki <aw...@oakfordis.com>.
The speed difference between the CentOS and Ubuntu KVM hosts is threefold.

How do I check the rbd cache configuration on KVM hosts?

Thanks

Adam
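
(Related sketch: the cache mode qemu/libvirt is actually using for an RBD disk can also be read from the running domain XML on the host - the instance name is a placeholder:)

  virsh list --all
  virsh dumpxml <instance-name> | grep -B2 -A4 "protocol='rbd'"
  # check the cache='...' attribute on that disk's <driver> element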



Re: CloudStack - KVM / Ceph Performance

Posted by Andrija Panic <an...@gmail.com>.
You are obviously hitting some issue on CentOS vs Ubuntu as the KVM host, and it
could be due to differences in the kernel, Ceph (librbd), libvirt/qemu...

But we have been using Ceph (the older Hammer release) with Ubuntu 14.04, and
don't expect any miracle on a single volume. BUT if you, for example, start
BitLocker drive encryption on multiple VMs at the same time, you will be able to
nicely saturate the cluster to its full performance capacity.

It's a known characteristic that Ceph is excellent on parallel streams/IO, i.e. a
single volume can't get anywhere near the full performance of the cluster.
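
(A minimal sketch of seeing that aggregate behaviour: run the same small-block write job against several RBD-backed scratch disks at once - the device names and job parameters are assumptions:)

  fio --name=parallel4k --filename=/dev/vdb:/dev/vdc --rw=randwrite --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=16 --numjobs=4 --runtime=60 \
      --time_based --group_reporting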

Again, what about the rbd cache configuration on the KVM hosts?

If you eventually put more serious workloads on this specific cluster, it isn't
going to perform satisfactorily for any customer - that is the reality - been
there, done that... moved away...
Sorry for being too honest...
(We had a same-size cluster, just used Intel DC SSDs for journals, and collocated
a few journals on a single SSD - not very optimal though.)

Cheers


RE: CloudStack - KVM / Ceph Performance

Posted by Adam Witwicki <aw...@oakfordis.com>.
Used the same guest VM with an Ubuntu and a CentOS KVM host.

The Ubuntu host performed as expected; the guest on the CentOS KVM host is very slow.

I have been testing with the KVM virtio drivers packaged in CentOS 7, and downloaded them for the Windows guests.

Thanks

Adam



Re: CloudStack - KVM / Ceph Performance

Posted by Simon Weller <sw...@ena.com.INVALID>.
I assume you've checked to make sure you're using the KVM virtio drivers, right?


- Si
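
(A minimal sketch of confirming that from the host side - the instance name is a placeholder:)

  virsh dumpxml <instance-name> | grep "bus="
  # disks should show bus='virtio' rather than bus='ide' or bus='sata'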



Re: CloudStack - KVM / Ceph Performance

Posted by Ivan Kudryavtsev <ku...@bw-sw.com>.
Hello, Adam. Try attaching an md RAID0 of two Ceph volumes inside the VM. I have
heard that people who run Ceph use that practice to improve results. I don't know
what you are expecting, but I read somewhere that all ops to a single RBD volume
are queued in a single IO queue, so RAID0 helps here, almost doubling the speed.
In other words, your tests don't show the Ceph cluster's overall performance;
they show librbd+qemu performance.
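
(A minimal sketch of that suggestion, assuming two scratch data disks are attached to the guest as /dev/vdb and /dev/vdc - the device names are assumptions, and this destroys their contents:)

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
  mkfs.ext4 /dev/md0
  mount /dev/md0 /mnt
  # then re-run the benchmark against /mnt (or /dev/md0 directly)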


Re: CloudStack - KVM / Ceph Performance

Posted by Andrija Panic <an...@gmail.com>.
Is the rbd cache configured and active on the KVM side? What guest OS are you
testing in - you got these last results from inside the VM, as I understand?

Do you have a Ceph version difference (client side, on the KVM hosts) between the
CentOS and Ubuntu hosts? Does a kernel upgrade on CentOS (to 4.x) make any
difference? (Very easy to test, e.g. via the elrepo kernel for CentOS.)
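
(A minimal sketch of that kernel test on the CentOS host, assuming the elrepo-release package is already installed from elrepo.org; a reboot is required:)

  yum --enablerepo=elrepo-kernel install kernel-ml
  grub2-set-default 0
  reboot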

Best
Andrija
