Posted to users@cloudstack.apache.org by WXR <14...@qq.com> on 2013/08/06 15:48:13 UTC

Is NFS the CloudStack IO performance bottleneck?

I use KVM as the hypervisor and NFS (the NFS server runs on CentOS 6.4) for both primary and secondary storage.

I use server A as the host node and server B, with a single HDD, as primary storage. When I create 20 VMs, I find the disk I/O performance is very low.
At first I thought the bottleneck was the hard disk, since there were 20 VMs on a single HDD. So I attached another 4 HDDs to server B and increased the number of primary storage pools from 1 to 5. The 20 VMs are now spread evenly across the 5 primary storage pools (4 VMs per pool), but VM disk I/O performance is the same as before.

I think NFS may be the bottleneck, but I don't know whether that is true. Does anyone have an idea that could help me find the real reason?
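A quick way to put a number on "very low" is to time a sequential write from inside one of the VMs. The Python probe below is only a rough sketch (fio or ioping give far more reliable numbers); the test path is hypothetical and should point at the virtual disk under test:

    # Rough sequential-write probe; run inside a guest VM.
    # /root/io_probe.bin is a hypothetical path on the virtual disk.
    import os
    import time

    PATH = "/root/io_probe.bin"
    BLOCK = b"\0" * (1 << 20)   # 1 MiB per write
    COUNT = 256                 # 256 MiB total

    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())    # force the data out of the page cache
    elapsed = time.time() - start

    print(f"~{COUNT / elapsed:.1f} MiB/s sequential write")
    os.remove(PATH)

Comparing the figure from inside a VM against the same probe run directly on server B's disks helps separate guest and NFS overhead from raw disk speed.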

Re: Is NFS the CloudStack IO performance bottleneck?

Posted by WXR <14...@qq.com>.
I'm not using RAID, just a single HDD per primary storage pool.
Servers A and B are both Dell PowerEdge R720s; the NIC in each server is a Broadcom Gigabit Ethernet adapter.
When the performance is at its worst, the traffic on each NIC is just 600 Mb/s.

Could you please share your experience configuring CloudStack with NFS?
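That NIC reading may itself be the clue: 600 Mb/s is more than half of a gigabit link's capacity, and all VM disk traffic shares that link. A back-of-the-envelope sketch, assuming one shared gigabit link and an even split across the VMs:

    # Why adding HDDs did not help: the gigabit NIC is the likely ceiling.
    link_mbps = 1000          # nominal gigabit line rate
    observed_mbps = 600       # per-NIC traffic reported above
    vms = 20

    total_mbs = observed_mbps / 8                          # megabits/s -> megabytes/s
    print(f"{total_mbs:.0f} MB/s across the whole link")   # ~75 MB/s
    print(f"{total_mbs / vms:.2f} MB/s per VM")            # ~3.75 MB/s
    # A single 7200 rpm HDD can stream roughly 100+ MB/s sequentially,
    # so even one disk can saturate the link; five disks cannot raise
    # the ceiling.

If the link really is saturated, bonding multiple NICs or moving to 10 GbE would be the usual next step.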




------------------ Original ------------------
From:  "Dean Kamali"<de...@gmail.com>;
Date:  Tue, Aug 6, 2013 10:43 PM
To:  "users"<us...@cloudstack.apache.org>; 

Subject:  Re: Is NFS the CloudStack IO performance bottleneck?




Re: Is NFS the CloudStack IO performance bottleneck?

Posted by Dean Kamali <de...@gmail.com>.
I don't think NFS is to blame, but I could blame the hardware for being slow. Could you tell us your IOPS usage, your network setup, and the drive types and RAID level you are using?

Have you used tools like ioping?

90% of the time it is the hardware that performs poorly. I have been using NFS with CloudStack for over a year now and have never had an issue with performance.
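ioping measures per-request latency the way ping does for the network, and can be pointed directly at the NFS-backed mount on the KVM host. A minimal sketch, assuming the ioping package is installed and primary storage is mounted at the hypothetical path /mnt/primary:

    # Thin wrapper around the ioping CLI (ioping itself must be installed).
    import subprocess

    def ioping_summary(path: str, count: int = 10) -> str:
        """Issue `count` I/O requests against `path` and return ioping's output."""
        result = subprocess.run(
            ["ioping", "-c", str(count), path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(ioping_summary("/mnt/primary"))   # hypothetical NFS mount point

Latencies in the tens of milliseconds against the mount, but far lower against a local disk on server B, would point at the network or NFS layer rather than the drives.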





Re: Is NFS the CloudStack IO performance bottleneck?

Posted by Kirk Jantzer <ki...@gmail.com>.
I doubt NFS is the issue. What are the specs of the VMs? What is the network like? What are the disks? Etc. There are too many variables to say "NFS is a CloudStack bottleneck." I just implemented a 35 TB storage cluster that is capable of tens of thousands of IOPS and >1 GB/s (yes, gigabytes, not gigabits) of network bandwidth via NFS.


-- 
Regards,

Kirk Jantzer
c: (678) 561-5475
http://about.met/kirkjantzer