Posted to users@cloudstack.apache.org by Trevor Francis <tr...@tgrahamcapital.com> on 2012/10/29 03:03:27 UTC

NFS vs iSCSI

I know this has been discussed on other forums with limited success in explaining which is best for a production environment, but could you cloudstackers weigh in on which storage technology would be best for both primary and secondary storage for VMs running on XenServer? Both are pretty trivial to set up, with NFS being the easiest.

Thanks,

Trevor Francis



RE: NFS vs iSCSI

Posted by Clayton Weise <cw...@iswest.net>.
Blowing the dust off this topic as I was gone last week.  Outback Dingo, most people consider NFS easier to manage because it's very simple to think of things in terms of files and directories instead of IQNs, targets, initiators and the like.  And yes, NFS servers on Linux doing failover can sometimes create stale mounts on the client side and could require manual intervention or a "Repair SR" in the case of XenServer.
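
For reference, "Repair SR" mostly amounts to replugging the SR's PBDs, which you can also do by hand from the CLI; a rough sketch, where the SR name-label is made up:

  # find the SR and the per-host PBDs behind it
  xe sr-list name-label="nfs-primary" params=uuid
  xe pbd-list sr-uuid=<sr-uuid> params=uuid,currently-attached
  # for a stale mount, unplug and replug the affected PBD
  xe pbd-unplug uuid=<pbd-uuid>
  xe pbd-plug uuid=<pbd-uuid>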

That being said, in my limited experience NFS is easy for most people to digest and understand, it's quick to stand up, and it's simple enough to work with and configure.  In the case of XenServer, NFS is the only way you're going to be able to get oversubscription in a way that CloudStack will recognize.  If you have some method of thin provisioning on your back-end iSCSI storage you can do this too, but CloudStack just won't know, so you'll want to adjust your oversubscription ratio accordingly.
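
On the CloudStack side that ratio is the storage.overprovisioning.factor global setting (Global Settings in the UI, or the updateConfiguration API); the value below is only an example, and the change may need a management server restart to take effect:

  # e.g. present 2x the real capacity on thin-provisioned primary storage
  storage.overprovisioning.factor = 2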

So NFS certainly has a number of benefits over block-level protocols like iSCSI, FC, ATAoE and the like.  However, I've yet to see it perform on an equal footing under high-volume, small-block (32k or less) random I/O, which is what most database servers will be doing.  Any NFS tuners out there who have this kind of thing cranking away, by all means feel free to throw in your ideas, configurations, etc.  So, depending on your workload and your familiarity with the protocols and principles involved, pick your protocol accordingly.
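
If you want numbers instead of folklore, an fio run against a file on the mounted storage approximates that workload; the directory, size, and read/write mix here are only examples:

  fio --name=smallblock --directory=/mnt/primary-test --size=4g \
      --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
      --bs=8k --iodepth=32 --numjobs=4 --runtime=120 --time_based \
      --group_reporting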


Re: NFS vs iSCSI

Posted by Andreas Huser <ah...@7five-edv.de>.
Hi Trevor, 

Hint, check this: http://www.ebay.de/itm/Mellanox-Infiniband-4X-HCA-PCI-E-MHGA28-1TC-7104-HCA-128LPX-/290625782203 
Try NFS over RDMA, or low-latency IPoIB. Then NFS is fun :) 
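
The Linux client/server setup is only a few lines; the module names are the mainline NFS/RDMA ones, and the server name and export path are examples:

  # on the NFS server
  modprobe svcrdma
  echo rdma 20049 > /proc/fs/nfsd/portlist

  # on the client
  modprobe xprtrdma
  mount -t nfs -o rdma,port=20049 server:/export /mnt/primary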

so long.... 
Andreas 





Re: NFS vs iSCSI

Posted by Jason Davis <sc...@gmail.com>.
NFS failover is fine; I ran our cluster on Isilon storage, so we load
balanced and could fail over stupid easy. In my experience with XS/XCP I
found NFS much more pleasant to work with than the iSCSI I ran on our
EqualLogic array cluster.

In any event, try both and see which one you like best... in all honesty,
with 10Gb/s Ethernet it frankly doesn't matter which protocol you go with.

Re: NFS vs iSCSI

Posted by Outback Dingo <ou...@gmail.com>.
On Sun, Oct 28, 2012 at 11:50 PM, Jason Davis <sc...@gmail.com> wrote:
> Like I was mentioning, for the cut in theoretical performance, you get
> something much easier to administer. Plenty of really nice SSD/disk
> arrays do NFS and are blazing fast.

Not sure how you figure NFS is any easier to manage and
support than iSCSI.... once it's configured it just runs.
And iSCSI has the potential to do failover; NFS v3 can't, really.
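
That failover story is basically two portal logins plus dm-multipath on the initiator riding out a path failure; the IQN and portal IPs here are made up:

  iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
  iscsiadm -m node -T iqn.2012-10.example:lun0 -p 10.0.0.10:3260 --login
  iscsiadm -m node -T iqn.2012-10.example:lun0 -p 10.0.0.11:3260 --login
  multipath -ll   # both paths should appear under one multipath device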

>
> As for over provisioning, just like in KVM you can over provison the hell
> out of CPU, especially if the workload your end users will be doing is a
> known quantity. As for memory, I wouldn't even bother with memory
> ballooning and other provisioning tricks. Memory is so cheap that it's
> easier just to add a new hypervisor node once you need more RAM for the
> cluster. That and you get more CPU to boot. Good rule off thumb is to never
> over provision RAM... much happier end users :)
> On Oct 28, 2012 10:15 PM, "Outback Dingo" <ou...@gmail.com> wrote:
>
>> On Sun, Oct 28, 2012 at 11:11 PM, Trevor Francis
>> <tr...@tgrahamcapital.com> wrote:
>> > Good question. This is a private cloud for an application we have
>> developed. We will have no actual "public" users installing OS' of varying
>> ranges.
>> >
>> > That being said. Cent 6.3 64-bit, is the only guest OS being deployed.
>> It is also what I am intending to deploy my NFS using.
>> >
>>
>> Then ISCSI would be a good choice is you have speed at the disk layer,
>> no sense slowing it down with NFS.
>>
>> > Yes, I know that ZFS rocks and FreeBSD is the bees knees, but we know
>> Cent and everything on our platform is standardized around that (short of
>> XenServer hosts). Also, we don't need to take advantage of ZFS caching, as
>> all of our deployed storage for guests is SSD anyway.
>> >
>> > Thanks!
>> >
>> > TGF
>> >
>> >
>> >
>> >
>> > On Oct 28, 2012, at 9:56 PM, Jason Davis <sc...@gmail.com> wrote:
>> >
>> >> Decent read:
>> >> http://lass.cs.umass.edu/papers/pdf/FAST04.pdf
>> >>
>> >> As far as CS + XenServer, I prefer NFS. Easier to manage, thin
>> provisioning
>> >> works from the get go (which is super important as XenServer uses CoW
>> >> (linked clones) iterations from the template you use.) By default, XS
>> uses
>> >> LVM over iSCSI with iSCSI which can be confusing to administer. That
>> and it
>> >> doesn't thin provision... which sucks...
>> >>
>> >> In theory there are latency penalties with NFS (as mentioned in the
>> paper)
>> >> but in a live deployment, I never ran into this.
>> >> On Oct 28, 2012 9:03 PM, "Trevor Francis" <
>> trevor.francis@tgrahamcapital.com>
>> >> wrote:
>> >>
>> >>> I know this has been discussed on other forums with limited success in
>> >>> explaining which is best in for aproduction environment, but could you
>> >>> cloudstackers weigh in which storage technology would be best for both
>> >>> primary and secondary storage for VMs running on Xenserver? Both are
>> pretty
>> >>> trivial to setup with NFS being the easiest.
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Trevor Francis
>> >>>
>> >>>
>> >>>
>> >
>>

Re: NFS vs iSCSI

Posted by Jason Davis <sc...@gmail.com>.
Like I was mentioning, for the cut in theoretical performance, you get
something much easier to administer. Plenty of really nice SSD/disk
arrays do NFS and are blazing fast.

As for over-provisioning, just like in KVM you can over-provision the hell
out of CPU, especially if the workload your end users will be doing is a
known quantity. As for memory, I wouldn't even bother with memory
ballooning and other provisioning tricks. Memory is so cheap that it's
easier just to add a new hypervisor node once you need more RAM for the
cluster. That and you get more CPU to boot. Good rule of thumb is to never
over-provision RAM... much happier end users :)
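
CloudStack can encode exactly that policy in its global settings; the setting names below are per the docs, so check they exist in your version:

  cpu.overprovisioning.factor = 4    # CPU is safe to oversubscribe
  mem.overprovisioning.factor = 1    # never over-provision RAM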

Re: NFS vs iSCSI

Posted by Outback Dingo <ou...@gmail.com>.
On Sun, Oct 28, 2012 at 11:11 PM, Trevor Francis
<tr...@tgrahamcapital.com> wrote:
> Good question. This is a private cloud for an application we have developed. We will have no actual "public" users installing a varying range of OSes.
>
> That being said, CentOS 6.3 64-bit is the only guest OS being deployed. It is also what I intend to use to serve NFS.
>

Then iSCSI would be a good choice if you have speed at the disk layer;
no sense slowing it down with NFS.


Re: NFS vs iSCSI

Posted by Trevor Francis <tr...@tgrahamcapital.com>.
Good question. This is a private cloud for an application we have developed. We will have no actual "public" users installing a varying range of OSes. 

That being said, CentOS 6.3 64-bit is the only guest OS being deployed. It is also what I intend to use to serve NFS. 

Yes, I know that ZFS rocks and FreeBSD is the bee's knees, but we know CentOS and everything on our platform is standardized around that (short of the XenServer hosts). Also, we don't need to take advantage of ZFS caching, as all of our deployed storage for guests is SSD anyway. 

Thanks!

TGF





Re: NFS vs iSCSI

Posted by Geoff Arnold <ge...@geoffarnold.com>.
So what guest OSes are you planning to support? (And which filesystems in those guests?) That's kind of important...

Geoff



Re: NFS vs iSCSI

Posted by Jason Davis <sc...@gmail.com>.
Decent read:
http://lass.cs.umass.edu/papers/pdf/FAST04.pdf

As far as CS + XenServer, I prefer NFS. It's easier to manage, and thin
provisioning works from the get-go (which is super important, as XenServer
uses CoW (linked-clone) iterations of the template you use). For iSCSI, XS
defaults to LVM over iSCSI, which can be confusing to administer. That, and
it doesn't thin provision... which sucks...

In theory there are latency penalties with NFS (as mentioned in the paper)
but in a live deployment, I never ran into this.
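
To make the difference concrete, here's what the two SR flavors look like from the XenServer CLI; the addresses, labels, and IQN are placeholders:

  # NFS SR: VHD files on the filer, thin-provisioned from the get-go
  xe sr-create type=nfs name-label=nfs-primary shared=true \
     device-config:server=10.0.0.5 device-config:serverpath=/export/primary

  # the iSCSI default: LVM over iSCSI, thick-provisioned
  xe sr-create type=lvmoiscsi name-label=iscsi-primary shared=true \
     device-config:target=10.0.0.6 \
     device-config:targetIQN=iqn.2012-10.example:primary \
     device-config:SCSIid=<scsi-id>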