Posted to users@cloudstack.apache.org by Jeff Crystal <JC...@idsi4it.com> on 2014/08/18 17:10:19 UTC

Disk performance

Anyone have any suggestions for improving disk performance with CloudStack and KVM?  Using NFS is pretty craptastic, even with dedicated network adapters and switches for storage traffic.




Re: Disk performance

Posted by ilya musayev <il...@gmail.com>.
Good point Carlos, though for the sake of clarity (to confirm it's not a
bug), I'd check whether the port groups for storage have any network
throttle applied.

In VMware it's easy to check; I assume Xen would be similar.
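
On the KVM side, a quick check is whether libvirt is applying a
<bandwidth> cap to any of the guest's interfaces. A sketch, where
"myvm" and "vnet0" are hypothetical names:

    # Look for libvirt bandwidth limits in the domain XML:
    virsh dumpxml myvm | grep -A 3 '<bandwidth>'

    # Or query the tuning parameters for one vNIC:
    virsh domiftune myvm vnet0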


On 8/18/14, 3:29 PM, Carlos Reátegui wrote:
> Ilya,
> Isn’t the network throttling for the guest/public network?  How does it affect the primary storage network?


Re: Disk performance

Posted by Carlos Reátegui <cr...@gmail.com>.
Ilya,
Isn’t the network throttling for the guest/public network?  How does it affect the primary storage network?


On Aug 18, 2014, at 3:17 PM, ilya musayev <il...@gmail.com> wrote:

> As someone probably already mentioned, check if throttle enabled/set in global settings as well as under network offering.
> 
> Regards
> ilya


Re: Disk performance

Posted by ilya musayev <il...@gmail.com>.
As someone probably already mentioned, check whether throttling is
enabled/set in the global settings as well as under the network offering.
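
If it helps, those settings can be read straight from the API, e.g.
with CloudMonkey (a sketch; the setting names are from memory, so
verify them against your CloudStack version):

    # Global throttling defaults (values in Mbps):
    cloudmonkey list configurations name=network.throttling.rate
    cloudmonkey list configurations name=vm.network.throttling.rate

    # Each network offering also carries its own networkrate:
    cloudmonkey list networkofferings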

Regards
ilya

On 8/18/14, 1:01 PM, Carlos Reátegui wrote:
> You sure VR is traversed for nfs traffic?  In my setup the NAS subnet is completely separate from any that CS uses.  The hosts know about it but none of the system vms know about it.
>
> In my setup I am using shared network so the VR is not involved in network traffic.
>
> One of my setups:
> NAS (ubuntu nfs, HW raid10 with ssd cache) connected with 10Gbe on a subnet that CS does not know about other than the ip to the NFS server.
> XenServer Hosts: 4 x 1Gbe for primary storage, 4x1 Gbe for CloudStack (e.g. guest, management, secondary storage)
>
> Using bonnie++ I am seeing ~135Mbps read ~109Mbps write from an ubuntu 12.04 vm.


Re: Disk performance

Posted by Carlos Reátegui <cr...@gmail.com>.
Are you sure the VR is traversed for NFS traffic?  In my setup the NAS subnet is completely separate from any that CS uses.  The hosts know about it, but none of the system VMs do.

In my setup I am using a shared network, so the VR is not involved in the storage traffic.

One of my setups:
NAS (Ubuntu NFS, HW RAID10 with SSD cache) connected with 10GbE, on a subnet that CS knows nothing about other than the IP of the NFS server.
XenServer hosts: 4 x 1GbE for primary storage, 4 x 1GbE for CloudStack (e.g. guest, management, secondary storage)

Using bonnie++ I am seeing ~135 Mbps read and ~109 Mbps write from an Ubuntu 12.04 VM.
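
For anyone who wants to run a comparable test, a bonnie++ invocation
might look like this (a sketch; the mount point is a placeholder, and
-s should be well above the VM's RAM so the page cache doesn't inflate
the numbers):

    # Sequential throughput test; -s is the file size in MiB,
    # -n 0 skips the small-file phase, -u is the user when run as root.
    bonnie++ -d /mnt/primary -s 16384 -n 0 -u nobody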


On Aug 18, 2014, at 10:20 AM, Jeff Crystal <JC...@idsi4it.com> wrote:

> Management server: HP Proliant ML350 G5 18GB RAM dual quad-core 2.0Ghz server
> SAN: HP Proliant ML350 G6 28GB RAM dual quad-core 2.66Ghz server running Open-e DSS v7 lite
> Virtual Hosts (2 identical servers)
> HP Proliant ML 350 G5 24GB RAM 2.66Ghz dual Quad-core with (4) gigabit nics
> Public, Guest, Storage, and Management networks are all assigned dedicated nics (cloudbr0-3)
> 
> Using NFS I'm getting 6-7Mbps write and 45-50Mbps read speeds with this setup.
> 
> Using Microsoft software iSCSI from a Windows Vm running in this environment and attached to the same SAN, I get 13-14Mbps read/write speeds.  (Access to the SAN traverses the virtual router.  I'm not sure if this is affecting the speed or not.)


RE: Disk performance

Posted by Jeff Crystal <JC...@idsi4it.com>.
Management server: HP ProLiant ML350 G5, 18 GB RAM, dual quad-core 2.0 GHz
SAN: HP ProLiant ML350 G6, 28 GB RAM, dual quad-core 2.66 GHz, running Open-E DSS V7 Lite
Virtual hosts (2 identical servers): HP ProLiant ML350 G5, 24 GB RAM, dual quad-core 2.66 GHz, with (4) gigabit NICs
Public, Guest, Storage, and Management networks are all assigned dedicated NICs (cloudbr0-3).

Using NFS I'm getting 6-7 Mbps write and 45-50 Mbps read speeds with this setup.

Using Microsoft software iSCSI from a Windows VM running in this environment and attached to the same SAN, I get 13-14 Mbps read/write speeds.  (Access to the SAN traverses the virtual router; I'm not sure whether this is affecting the speed.)
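
Before swapping protocols it may be worth ruling out the usual
suspects for that write/read asymmetry; a sketch of the checks (the
paths and NAS address are placeholders):

    # Negotiated NFS mount options on the KVM host (rsize/wsize, vers):
    nfsstat -m

    # A "sync" export makes small writes very slow. On a plain Linux
    # NFS server that choice lives in /etc/exports; Open-E exposes the
    # equivalent in its UI.
    grep primary /etc/exports

    # Raw wire speed between host and SAN, to rule the network in or
    # out (requires iperf on both ends):
    iperf -c <nas-ip>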

From: Carlos Reátegui [mailto:creategui@gmail.com]
Sent: Monday, August 18, 2014 1:07 PM
To: users@cloudstack.apache.org
Subject: Re: Disk performance

What is your network setup?



Re: Disk performance

Posted by Carlos Reátegui <cr...@gmail.com>.
What is your network setup?  


On Aug 18, 2014, at 10:04 AM, Jeff Crystal <JC...@idsi4it.com> wrote:

> No, I need a shared storage solution.  I'm wondering what others are using in place of NFS.  OCFS2?  GFS2? GlusterFS?  I tried setting up CLVM, but it seems very problematic (server won't shut down without manual intervention to leave the cluster, server won't join the cluster on boot without manual commands.  Not very enterprisey!)  I'll have to give Ceph a look...


RE: Disk performance

Posted by Jeff Crystal <JC...@idsi4it.com>.
No, I need a shared storage solution.  I'm wondering what others are using in place of NFS.  OCFS2?  GFS2?  GlusterFS?  I tried setting up CLVM, but it seems very problematic (the server won't shut down without manual intervention to leave the cluster, and won't rejoin the cluster on boot without manual commands; not very enterprisey!).  I'll have to give Ceph a look...
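
Since Ceph keeps coming up: CloudStack's KVM support can use RBD as
primary storage (qemu talks to the cluster via librbd), which
sidesteps NFS and the clustered filesystems above entirely. A minimal
sketch of the Ceph side, assuming a working cluster and qemu built
with RBD support ("cloudstack" is a hypothetical pool name):

    # Create a pool and a restricted client key for CloudStack:
    ceph osd pool create cloudstack 128
    ceph auth get-or-create client.cloudstack \
        mon 'allow r' osd 'allow rwx pool=cloudstack'

    # Sanity-check that qemu on a host can reach the pool:
    qemu-img create -f raw rbd:cloudstack/testimage 1G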

-----Original Message-----
From: Ahmad Emneina [mailto:aemneina@gmail.com] 
Sent: Monday, August 18, 2014 11:52 AM
To: Cloudstack users mailing list
Subject: Re: Disk performance

local storage is probably your most performant storage type... you dont get the awesome of HA or easy volume recovery, but if all youre after is performance. Thats the one.



Re: Disk performance

Posted by Ahmad Emneina <ae...@gmail.com>.
Local storage is probably your most performant storage type... you don't
get the awesomeness of HA or easy volume recovery, but if all you're after
is performance, that's the one.
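
If you do go that route, local storage has to be enabled per zone and
the disk offerings marked as local. A sketch with CloudMonkey (the
zone ID is a placeholder, and the parameter names are worth
double-checking against the API docs):

    # Allow local storage for guest VMs in the zone:
    cloudmonkey update zone id=<zone-id> localstorageenabled=true

    # A disk offering backed by local storage (name/size are examples):
    cloudmonkey create diskoffering name=local-50G displaytext=local-50G \
        disksize=50 storagetype=local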


On Mon, Aug 18, 2014 at 8:47 AM, Randy Smith <rb...@adams.edu> wrote:

> Jeff,
>
> I'm a big fan of ceph for clustered storage for block devices.
>
> Beyond that, there are a bunch of crazy things you can do to tune NFS but
> it's rarely worth it.

Re: Disk performance

Posted by Randy Smith <rb...@adams.edu>.
Jeff,

I'm a big fan of Ceph for clustered block storage.

Beyond that, there are a bunch of crazy things you can do to tune NFS but
it's rarely worth it.
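
For completeness, the usual knobs look something like this (a sketch
with illustrative values; as noted, the gains are usually modest):

    # Large transfer sizes and less metadata chatter; the server and
    # paths are placeholders:
    mount -t nfs -o rsize=1048576,wsize=1048576,noatime,nodiratime \
        nas:/export/primary /mnt/primary

    # Server side: "async" in /etc/exports trades crash safety for
    # write speed; a "sync" export is a common cause of poor write
    # numbers.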


On Mon, Aug 18, 2014 at 9:10 AM, Jeff Crystal <JC...@idsi4it.com> wrote:

>  Anyone have any suggestions for improving disk performance with
> Cloudstack and KVM?  Using NFS is pretty craptastic, even with dedicated
> network adapters and switches for storage traffic.



-- 
Randall Smith
Computing Services
Adams State University
http://www.adams.edu/
719-587-7741