Posted to users@cloudstack.apache.org by Fariborz Navidan <md...@gmail.com> on 2020/04/07 15:36:46 UTC

I/O speed in local NFS vs local Ceph

Hello,

I have a single physical host running CloudStack. Primary storage is
currently mounted as an NFS share. The underlying filesystem is XFS
running on top of Linux software RAID-0, and the underlying hardware
consists of 2 NVMe SSD drives.

The question is: could I get faster I/O in the VMs if I used Ceph
instead, adding the 2 physical devices directly to the cluster and
exposing them via RBD? How much faster could the I/O be?

Thanks
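
One way to ground the comparison is to baseline what the current
NFS-on-RAID-0 mount actually delivers before changing anything. Below is
a minimal sketch that drives fio from Python; it assumes fio is
installed and that the primary storage is mounted at /mnt/primary (a
placeholder path - adjust it, and the job sizes, for your environment).

#!/usr/bin/env python3
"""Rough 4k random-I/O baseline for the NFS-mounted primary storage.

Sketch only: assumes fio is installed and the primary storage is
mounted at /mnt/primary (adjust MOUNT_POINT for your host).
"""
import json
import subprocess

MOUNT_POINT = "/mnt/primary"  # placeholder: your NFS mount point


def run_fio(rw: str) -> dict:
    """Run a short direct-I/O fio job and return its parsed JSON output."""
    cmd = [
        "fio",
        f"--name=baseline-{rw}",
        f"--directory={MOUNT_POINT}",
        f"--rw={rw}",              # randread or randwrite
        "--bs=4k",
        "--size=1G",
        "--ioengine=libaio",
        "--direct=1",              # bypass the page cache
        "--iodepth=32",
        "--numjobs=4",
        "--runtime=60",
        "--time_based",
        "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(out.stdout)


if __name__ == "__main__":
    for rw in ("randread", "randwrite"):
        job = run_fio(rw)["jobs"][0]
        side = "read" if rw == "randread" else "write"
        # fio reports bandwidth in KiB/s in its JSON output.
        print(f"{rw}: {job[side]['iops']:.0f} IOPS, "
              f"{job[side]['bw'] / 1024:.0f} MiB/s")

Running the same job later against a disk backed by an RBD volume would
give a like-for-like comparison instead of a guess.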

Re: I/O speed in local NFS vs local Ceph

Posted by Fariborz Navidan <md...@gmail.com>.
>> as good as a collective suicide in the long run

What do you mean?


Re: I/O speed in local NFS vs local Ceph

Posted by Andrija Panic <an...@gmail.com>.
Shared mount point is used to "mount", i.e. attach, a shared drive
(enclosure, LUN, etc.) to the same mount point on multiple KVM hosts and
then run a shared file system on top (GFS2, OCFS, etc. - all of which
are as good as a collective suicide in the long run...)

Stick to local - and you do understand the risk with RAID 0, right?
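
For anyone wondering what that looks like in practice: a shared mount
point only works if every KVM host really sees the same device, with a
cluster-aware filesystem on it, at the same path. A minimal sketch of
such a sanity check, assuming passwordless SSH and placeholder host
names:

#!/usr/bin/env python3
"""Sanity-check a SharedMountPoint path across KVM hosts.

Sketch only: HOSTS and MOUNT_POINT are placeholders and passwordless
SSH to each host is assumed. For a true shared mount point, every host
should report the same backing device and a cluster-aware filesystem
(GFS2, OCFS2, ...), not a plain local XFS/ext4.
"""
import subprocess

HOSTS = ["kvm01", "kvm02"]      # placeholder host names
MOUNT_POINT = "/mnt/primary"    # the common mount point

for host in HOSTS:
    # findmnt prints "SOURCE FSTYPE" for the mount point, or exits
    # non-zero if nothing is mounted there.
    result = subprocess.run(
        ["ssh", host, "findmnt", "-no", "SOURCE,FSTYPE", MOUNT_POINT],
        capture_output=True, text=True,
    )
    status = result.stdout.strip() if result.returncode == 0 else "NOT MOUNTED"
    print(f"{host}: {status}")

On the single-host setup in this thread it would simply report the
local XFS volume, which is the point: there is nothing shared to gain.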

-- 

Andrija Panić

Re: I/O speed in local NFS vs local Ceph

Posted by Fariborz Navidan <md...@gmail.com>.
I managed to use Shared Mount Point for local primary storage, but it
does not allow over-provisioning. I guess direct local storage also
does not allow over-provisioning.
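
For what it's worth, the knob involved here is the global setting
storage.overprovisioning.factor, and CloudStack has historically applied
it only to certain primary storage types (e.g. NFS-backed pools), which
would match what you are seeing with SharedMountPoint and plain local
storage. A minimal sketch for reading and raising the factor, assuming
the third-party 'cs' Python client (pip install cs) and placeholder
endpoint/credentials:

#!/usr/bin/env python3
"""Inspect and adjust CloudStack's storage over-provisioning factor.

Sketch only: uses the third-party 'cs' client library; the endpoint,
API key and secret below are placeholders. The factor is only honoured
for pool types where CloudStack supports over-provisioning.
"""
from cs import CloudStack

api = CloudStack(
    endpoint="http://mgmt-server:8080/client/api",  # placeholder
    key="YOUR_API_KEY",
    secret="YOUR_SECRET_KEY",
)

# Read the current global value.
conf = api.listConfigurations(name="storage.overprovisioning.factor")
for item in conf.get("configuration", []):
    print(item["name"], "=", item["value"])

# Raise it to 2.0 (only takes effect where over-provisioning is supported).
api.updateConfiguration(name="storage.overprovisioning.factor", value="2")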


Re: I/O speed in local NFS vs local Ceph

Posted by Simon Weller <sw...@ena.com.INVALID>.
Ceph uses data replicas, so even if you only use 2 replicas (3 is recommended), you'd basically get, at best, the I/O of a single drive. You also need a minimum of 3 monitor nodes for Ceph, so personally I'd stick with local storage if you're focused on speed.
You're also running quite a risk by using RAID-0: a single drive failure and you'll lose all of your data. Is there a reason you are using NFS and not just direct local storage?
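
To put a rough number on the replica point: with pool size 2 on a
2-OSD, single-host cluster, every write is committed to both NVMe
drives, so best-case write throughput is roughly that of one drive,
minus Ceph's own overhead (journaling, networking, CPU). A minimal
sketch for checking the replica count and monitor quorum, assuming a
configured 'ceph' CLI and a placeholder pool name:

#!/usr/bin/env python3
"""Check a Ceph pool's replica count and the monitor quorum.

Sketch only: assumes the 'ceph' CLI is installed and configured, and
that the RBD pool used for primary storage is named 'rbd' (substitute
your own pool name).
"""
import json
import subprocess

POOL = "rbd"  # placeholder pool name


def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout.strip()


# Replica count: with size=2 and only 2 OSDs, every write hits both drives.
# Changing it would be: ceph osd pool set <pool> size 3
print(ceph("osd", "pool", "get", POOL, "size"))        # e.g. "size: 2"

# Monitor quorum: production clusters want at least 3 monitors.
quorum = json.loads(ceph("quorum_status", "--format", "json"))
print("monitors in quorum:", quorum["quorum_names"])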

