Posted to users@cloudstack.apache.org by Nishan Sanjeewa Gunasekara <ni...@gmail.com> on 2014/03/05 05:31:41 UTC

Disk IO on Cloudstack VMs

Hi,

I'm running CS 4.2.1 on XenServer 6.2 hosts.

My primary storage is a ZFS file system provided via NFS.

When I do a dd test directly from one of the hosts on the NFS mount, I get a
write speed of about 150 MB/s:

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s

But when I do the same test on a CloudStack VM running on the same host
(root disk on the same NFS mount, of course) I get a very low write speed
of about 20 MB/s:

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
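As a side note, one way to separate guest page-cache effects from backend throughput is to compare the buffered-plus-flush run above with a direct-I/O run. A sketch only: the file name and count are placeholders, and oflag=direct fails on filesystems without O_DIRECT support.

```shell
# Buffered writes with one fdatasync before dd reports the rate
# (the same invocation used above):
dd if=/dev/zero of=test.dmp bs=1M count=1000 conv=fdatasync

# Direct I/O bypasses the guest page cache entirely; a large gap
# between the two rates points at caching, not the storage backend.
dd if=/dev/zero of=test.dmp bs=1M count=1000 oflag=direct

rm -f test.dmp
```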

Any ideas on how I can improve this?

Regards,
Nishan

Re: Disk IO on Cloudstack VMs

Posted by Shanker Balan <sh...@shapeblue.com>.
Comments inline.

On 05-Mar-2014, at 10:45 am, Sanjeev Neelarapu <sa...@citrix.com> wrote:

> Hi Nishan,
>
> If your vm is deployed in an isolated network all the traffic goes
> via VR. So please check the rate limiting on the VR.
> You may have to increase the rate limit value to get more BW.


Hi Sanjeev,

Does primary storage traffic also go over the VR? I thought primary storage
traffic went over the host’s interface and not over any VR.

Could you please clarify?



--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.balan@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, Bangalore - 560 055

Need Enterprise Grade Support for Apache CloudStack?
Our CloudStack Infrastructure Support<http://shapeblue.com/cloudstack-infrastructure-support/> offers the best 24/7 SLA for CloudStack Environments.

Apache CloudStack Bootcamp training courses

**NEW!** CloudStack 4.2.1 training<http://shapeblue.com/cloudstack-training/>
18th-19th February 2014, Brazil. Classroom<http://shapeblue.com/cloudstack-training/>
17th-23rd March 2014, Region A. Instructor led, On-line<http://shapeblue.com/cloudstack-training/>
24th-28th March 2014, Region B. Instructor led, On-line<http://shapeblue.com/cloudstack-training/>
16th-20th June 2014, Region A. Instructor led, On-line<http://shapeblue.com/cloudstack-training/>
23rd-27th June 2014, Region B. Instructor led, On-line<http://shapeblue.com/cloudstack-training/>

This email and any attachments to it may be confidential and are intended solely for the use of the individual to whom it is addressed. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Shape Blue Ltd or related companies. If you are not the intended recipient of this email, you must neither take any action based upon its contents, nor copy or show it to anyone. Please contact the sender if you believe you have received this email in error. Shape Blue Ltd is a company incorporated in England & Wales. ShapeBlue Services India LLP is a company incorporated in India and is operated under license from Shape Blue Ltd. Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is operated under license from Shape Blue Ltd. ShapeBlue is a registered trademark.

RE: Disk IO on Cloudstack VMs

Posted by Sanjeev Neelarapu <sa...@citrix.com>.
Hi Nishan,

If your VM is deployed in an isolated network, all the traffic goes via the VR, so please check the rate limiting on the VR.
You may have to increase the rate limit value to get more bandwidth.
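For what it's worth, on XenServer a per-VIF rate limit can also be inspected from dom0 with the xe CLI. A sketch, assuming the standard XenServer VIF QoS parameters; the VM name and the kbps value are placeholders:

```shell
# List each VIF on the VM with its QoS settings (VM name is a placeholder):
xe vif-list vm-name-label=my-vm params=uuid,device,qos_algorithm_type,qos_algorithm_params

# If qos_algorithm_type is "ratelimit", the cap in KB/s appears as
# qos_algorithm_params:kbps; raising it (hypothetical value shown):
xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=102400
```

Note, though, that a VIF limit should only matter if the VM's disk traffic actually traverses that VIF; an SR-backed ROOT disk is served by dom0.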

Thanks,
Sanjeev

-----Original Message-----
From: Nishan Sanjeewa Gunasekara [mailto:nishan.sanjeewa@gmail.com] 
Sent: Wednesday, March 05, 2014 10:02 AM
To: users
Subject: Disk IO on Cloudstack VMs

Hi,

I'm running CS 4.2.1 on XenServer 6.2 hosts.

My primary storage is a ZFS file system provided via NFS.

When I do a dd test directly from one of the hosts on the NFS mount, I get a write speed of about 150 MB/s:

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s

But when I do the same test on a CloudStack VM running on the same host (root disk on the same NFS mount, of course) I get a very low write speed of about 20 MB/s:

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s

Any ideas on how I can improve this?

Regards,
Nishan

Re: Disk IO on Cloudstack VMs

Posted by Shanker Balan <sh...@shapeblue.com>.
Comments inline.

On 07-Mar-2014, at 6:44 am, Nishan Sanjeewa Gunasekara <ni...@gmail.com> wrote:

> Hi Shankar,
>
> 10% overhead for virtual disk seems reasonable.
> My VM is PV and I've installed xenserver tools as well. But still getting
> 20Mbps where as on the host its 150Mbps.
>
> Have you tuned up your VM or changed any parameters?

No. This is a stock CentOS 6 install.



--
@shankerbalan


Re: Disk IO on Cloudstack VMs

Posted by Carlos Reátegui <cr...@gmail.com>.
BTW, the CentOS machine below is a 1-core machine and the Ubuntu one is a 2-core machine. Not sure if that makes a difference; you may want to experiment.

On Mar 6, 2014, at 8:32 PM, Carlos Reátegui <cr...@gmail.com> wrote:

> Nishan,
> How did you originally setup your networks?  Did you do it all through XenCenter, xe CLI, OS?  I believe I did it all through XC.  I am running XS 6.0.2 with CS 4.1.  Did you switch to bridge networking?  I did not as it only supports 2 nic bonds.
> 
> My hosts are set up with 4x1Gbe bond for NAS network (storage label) and 4x1Gbe bond for CS Management/CS Storage/CS Guest (cloud-public label).  Both bonds are active active.  I am using basic networking but without security groups since that requires bridge networking.  Security is not really an issue for this setup as it is used internally by QA and dev.
> 
> On one of my hosts:
> # dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 9.97504 seconds, 105 MB/s
> 
> On a CentOS 5 guest:
> # dd if=/dev/zero of=/mnt/test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 13.1997 seconds, 79.4 MB/s
> 
> On an ubuntu 12.04 guest:
> $ dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 9.85764 s, 106 MB/s
> 
> I tried the above a handful of times on each and they were within a couple % of each other every time.  Both of the guests are on the same host I tested above.  The SR is on NFS on the same machine as the CS Management stack (Ubuntu 12.04).  That machine is connected to the storage via 10Gbe and a 2x1Gbe bond to the cloud-public network.  No routes exist between these 2 networks.  MTU is 1500 for both networks.  I have another setup at a different location where I set the MTU to 9000 and I seem to remember getting a bit faster throughput at least on the hosts.  I’ll check later and let you know.
> 
> Let me know if there is anything you want me to check in my setup to compare.
> 
> Carlos
> 
> 
> On Mar 6, 2014, at 5:14 PM, Nishan Sanjeewa Gunasekara <ni...@gmail.com> wrote:
> 
>> Hi Shankar,
>> 
>> 10% overhead for virtual disk seems reasonable.
>> My VM is PV and I've installed xenserver tools as well. But still getting
>> 20Mbps where as on the host its 150Mbps.
>> 
>> Have you tuned up your VM or changed any parameters?
>> 
>> Regards,
>> Nishan
>> 
>> 
>> On Wed, Mar 5, 2014 at 4:15 PM, Shanker Balan
>> <sh...@shapeblue.com>wrote:
>> 
>>> Comments inline.
>>> 
>>> On 05-Mar-2014, at 10:01 am, Nishan Sanjeewa Gunasekara <
>>> nishan.sanjeewa@gmail.com> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I'm running CS 4.2.1 on XenServer 6.2 hosts.
>>>> 
>>>> My primary storage is a ZFS file system provided via NFS.
>>>> 
>>>> When I do a dd test directly from one of the hosts on the NFS mount I
>>> get a
>>>> write speed of about 150MBps
>>>> 
>>>> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
>>>> 1000+0 records in
>>>> 1000+0 records out
>>>> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
>>>> 
>>>> But when I do the same test on a Cloudstack VM running on the same host
>>>> (root disk on the same nfs mount ofcourse) I get a very low write speed.
>>>> 20MBps.
>>>> 
>>>> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
>>>> 1000+0 records in
>>>> 1000+0 records out
>>>> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
>>>> 
>>>> Any ideas how I can improve this ?
>>> 
>>> 
>>> Is this a PV VM?
>>> 
>>> I just did a unscientific test in my lab on a PV VM. Results below:
>>> 
>>> On the XenServer:
>>> 
>>> [root@vxen1-1 ebb66062-d46f-7b3a-07be-b9ec583ec1a9]# dd if=/dev/zero
>>> of=test.dmp bs=1M conv=fdatasync count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 26.1396 seconds, 40.1 MB/s
>>> 
>>> On a VM:
>>> 
>>> [root@scan1 ~]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync
>>> count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 29.4688 s, 35.6 MB/s
>>> 
>>> 
>>> I guess 10% is the virtual disk overhead.
>>> 
>>> 
>>> 
>>> 
>>> --
>>> @shankerbalan
>>> 
> 


Re: Disk IO on Cloudstack VMs

Posted by Carlos Reátegui <cr...@gmail.com>.
Nishan,
How did you originally set up your networks?  Did you do it all through XenCenter, the xe CLI, or the OS?  I believe I did it all through XenCenter.  I am running XS 6.0.2 with CS 4.1.  Did you switch to bridge networking?  I did not, as it only supports 2-NIC bonds.

My hosts are set up with a 4x1GbE bond for the NAS network (storage label) and a 4x1GbE bond for CS Management/CS Storage/CS Guest (cloud-public label).  Both bonds are active-active.  I am using basic networking but without security groups, since that requires bridge networking.  Security is not really an issue for this setup as it is used internally by QA and dev.

On one of my hosts:
# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.97504 seconds, 105 MB/s

On a CentOS 5 guest:
# dd if=/dev/zero of=/mnt/test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.1997 seconds, 79.4 MB/s

On an Ubuntu 12.04 guest:
$ dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.85764 s, 106 MB/s

I tried the above a handful of times on each and they were within a couple of percent of each other every time.  Both of the guests are on the same host I tested above.  The SR is on NFS on the same machine as the CS Management stack (Ubuntu 12.04).  That machine is connected to the storage via 10GbE and by a 2x1GbE bond to the cloud-public network.  No routes exist between these 2 networks.  MTU is 1500 for both networks.  I have another setup at a different location where I set the MTU to 9000, and I seem to remember getting a bit faster throughput, at least on the hosts.  I’ll check later and let you know.
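If you experiment with a 9000 MTU, a quick way to confirm jumbo frames actually pass end to end (the host name is a placeholder) is to ping with the don't-fragment bit set and a payload just under the MTU:

```shell
# A 9000-byte MTU leaves 8972 bytes of ICMP payload (9000 - 20 IP - 8 ICMP):
ping -c 3 -M do -s 8972 nfs-server

# The equivalent check for a standard 1500 MTU:
ping -c 3 -M do -s 1472 nfs-server
```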

Let me know if there is anything you want me to check in my setup to compare.

Carlos


On Mar 6, 2014, at 5:14 PM, Nishan Sanjeewa Gunasekara <ni...@gmail.com> wrote:

> Hi Shankar,
> 
> 10% overhead for virtual disk seems reasonable.
> My VM is PV and I've installed xenserver tools as well. But still getting
> 20Mbps where as on the host its 150Mbps.
> 
> Have you tuned up your VM or changed any parameters?
> 
> Regards,
> Nishan
> 
> 
> On Wed, Mar 5, 2014 at 4:15 PM, Shanker Balan
> <sh...@shapeblue.com>wrote:
> 
>> Comments inline.
>> 
>> On 05-Mar-2014, at 10:01 am, Nishan Sanjeewa Gunasekara <
>> nishan.sanjeewa@gmail.com> wrote:
>> 
>>> Hi,
>>> 
>>> I'm running CS 4.2.1 on XenServer 6.2 hosts.
>>> 
>>> My primary storage is a ZFS file system provided via NFS.
>>> 
>>> When I do a dd test directly from one of the hosts on the NFS mount I
>> get a
>>> write speed of about 150MBps
>>> 
>>> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
>>> 
>>> But when I do the same test on a Cloudstack VM running on the same host
>>> (root disk on the same nfs mount ofcourse) I get a very low write speed.
>>> 20MBps.
>>> 
>>> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
>>> 
>>> Any ideas how I can improve this ?
>> 
>> 
>> Is this a PV VM?
>> 
>> I just did a unscientific test in my lab on a PV VM. Results below:
>> 
>> On the XenServer:
>> 
>> [root@vxen1-1 ebb66062-d46f-7b3a-07be-b9ec583ec1a9]# dd if=/dev/zero
>> of=test.dmp bs=1M conv=fdatasync count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB) copied, 26.1396 seconds, 40.1 MB/s
>> 
>> On a VM:
>> 
>> [root@scan1 ~]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync
>> count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB) copied, 29.4688 s, 35.6 MB/s
>> 
>> 
>> I guess 10% is the virtual disk overhead.
>> 
>> 
>> 
>> 
>> --
>> @shankerbalan
>> 
>> 


Re: Disk IO on Cloudstack VMs

Posted by Nishan Sanjeewa Gunasekara <ni...@gmail.com>.
Hi Shankar,

A 10% overhead for the virtual disk seems reasonable.
My VM is PV and I've installed XenServer Tools as well, but I'm still getting
20 MB/s, whereas on the host it's 150 MB/s.

Have you tuned up your VM or changed any parameters?

Regards,
Nishan


On Wed, Mar 5, 2014 at 4:15 PM, Shanker Balan
<sh...@shapeblue.com>wrote:

> Comments inline.
>
> On 05-Mar-2014, at 10:01 am, Nishan Sanjeewa Gunasekara <
> nishan.sanjeewa@gmail.com> wrote:
>
> > Hi,
> >
> > I'm running CS 4.2.1 on XenServer 6.2 hosts.
> >
> > My primary storage is a ZFS file system provided via NFS.
> >
> > When I do a dd test directly from one of the hosts on the NFS mount I
> get a
> > write speed of about 150MBps
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> >
> > But when I do the same test on a Cloudstack VM running on the same host
> > (root disk on the same nfs mount ofcourse) I get a very low write speed.
> > 20MBps.
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> >
> > Any ideas how I can improve this ?
>
>
> Is this a PV VM?
>
> I just did a unscientific test in my lab on a PV VM. Results below:
>
> On the XenServer:
>
> [root@vxen1-1 ebb66062-d46f-7b3a-07be-b9ec583ec1a9]# dd if=/dev/zero
> of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 26.1396 seconds, 40.1 MB/s
>
> On a VM:
>
> [root@scan1 ~]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync
> count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 29.4688 s, 35.6 MB/s
>
>
> I guess 10% is the virtual disk overhead.
>
>
>
>
> --
> @shankerbalan
>
>

Re: Disk IO on Cloudstack VMs

Posted by Shanker Balan <sh...@shapeblue.com>.
Comments inline.

On 05-Mar-2014, at 10:01 am, Nishan Sanjeewa Gunasekara <ni...@gmail.com> wrote:

> Hi,
>
> I'm running CS 4.2.1 on XenServer 6.2 hosts.
>
> My primary storage is a ZFS file system provided via NFS.
>
> When I do a dd test directly from one of the hosts on the NFS mount I get a
> write speed of about 150MBps
>
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
>
> But when I do the same test on a Cloudstack VM running on the same host
> (root disk on the same nfs mount ofcourse) I get a very low write speed.
> 20MBps.
>
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
>
> Any ideas how I can improve this ?


Is this a PV VM?

I just did an unscientific test in my lab on a PV VM. Results below:

On the XenServer:

[root@vxen1-1 ebb66062-d46f-7b3a-07be-b9ec583ec1a9]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 26.1396 seconds, 40.1 MB/s

On a VM:

[root@scan1 ~]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 29.4688 s, 35.6 MB/s


I guess 10% is the virtual disk overhead.




--
@shankerbalan


Re: Disk IO on Cloudstack VMs

Posted by Shanker Balan <sh...@shapeblue.com>.
Comments inline.

On 05-Mar-2014, at 12:50 pm, Suresh Sadhu <Su...@citrix.com> wrote:

> HI Nishan,
>
> Its nfs mount and it has to pass through your network.
>
> I see for guest vms ,QOS limit set to 25600kbytes/s .I think increase this value will give better throughput.
>
> Also while you mount nfs  storage ,you can set the rsize and wsize  parameters for better  I/O.


Suresh, I think the OP is testing the ROOT disk and not a manual NFS mount.
The VR should not come into the picture in this case.

--
@shankerbalan


RE: Disk IO on Cloudstack VMs

Posted by Suresh Sadhu <Su...@citrix.com>.
HI Nishan,

It's an NFS mount, and it has to pass through your network.

I see the QoS limit for guest VMs is set to 25600 KB/s; I think increasing this value will give better throughput.

Also, when you mount the NFS storage, you can set the rsize and wsize parameters for better I/O.
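For reference, rsize and wsize are negotiated at mount time. A hypothetical example for a mount you control (server name, export path, and sizes are placeholders; the SR mounts that XenServer itself creates are managed by the host, not by /etc/fstab):

```shell
# One-off mount with explicit read/write transfer sizes (all placeholders):
mount -t nfs -o rw,hard,rsize=131072,wsize=131072 nfs-server:/export/primary /mnt/primary

# Verify what the client actually negotiated:
grep /mnt/primary /proc/mounts
```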

Regards
Sadhu


-----Original Message-----
From: Nishan Sanjeewa Gunasekara [mailto:nishan.sanjeewa@gmail.com] 
Sent: 05 March 2014 12:30
To: users
Subject: Re: Disk IO on Cloudstack VMs

How does it relate to the network rate of the VM, or the VR for that matter?
My VM is running on primary storage, which is an NFS mount on my host on a 10G link.
On 05/03/2014 4:38 PM, "Carlos Reategui" <cr...@gmail.com> wrote:

> Check with xencenter on the network tab for that vm and see what the 
> rate limit is there.
>
>
> > On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara <
> nishan.sanjeewa@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm running CS 4.2.1 on XenServer 6.2 hosts.
> >
> > My primary storage is a ZFS file system provided via NFS.
> >
> > When I do a dd test directly from one of the hosts on the NFS mount 
> > I
> get a
> > write speed of about 150MBps
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> >
> > But when I do the same test on a Cloudstack VM running on the same 
> > host (root disk on the same nfs mount ofcourse) I get a very low write speed.
> > 20MBps.
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> >
> > Any ideas how I can improve this ?
> >
> > Regards,
> > Nishan
>

Re: Disk IO on Cloudstack VMs

Posted by Nishan Sanjeewa Gunasekara <ni...@gmail.com>.
How does it relate to the network rate of the VM, or the VR for that matter?
My VM is running on primary storage, which is an NFS mount on my host on a
10G link.
On 05/03/2014 4:38 PM, "Carlos Reategui" <cr...@gmail.com> wrote:

> Check with xencenter on the network tab for that vm and see what the rate
> limit is there.
>
>
> > On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara <
> nishan.sanjeewa@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm running CS 4.2.1 on XenServer 6.2 hosts.
> >
> > My primary storage is a ZFS file system provided via NFS.
> >
> > When I do a dd test directly from one of the hosts on the NFS mount I
> get a
> > write speed of about 150MBps
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> >
> > But when I do the same test on a Cloudstack VM running on the same host
> > (root disk on the same nfs mount ofcourse) I get a very low write speed.
> > 20MBps.
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> >
> > Any ideas how I can improve this ?
> >
> > Regards,
> > Nishan
>

Re: Disk IO on Cloudstack VMs

Posted by Carlos Reategui <cr...@gmail.com>.
Check the Network tab in XenCenter for that VM and see what the rate limit is there.


> On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara <ni...@gmail.com> wrote:
> 
> Hi,
> 
> I'm running CS 4.2.1 on XenServer 6.2 hosts.
> 
> My primary storage is a ZFS file system provided via NFS.
> 
> When I do a dd test directly from one of the hosts on the NFS mount I get a
> write speed of about 150MBps
> 
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> 
> But when I do the same test on a Cloudstack VM running on the same host
> (root disk on the same nfs mount ofcourse) I get a very low write speed.
> 20MBps.
> 
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> 
> Any ideas how I can improve this ?
> 
> Regards,
> Nishan