Posted to users@cloudstack.apache.org by Vladimir Melnik <v....@uplink.ua> on 2014/11/04 08:07:11 UTC

CephFS vs. GFS2

Dear colleagues,

Did anyone compare CephFS and GFS2 or did anyone read any articles about
comparison of CephFS and GFS2 in regard to their performance? Alas, I
haven't found anything on this topic.

Right now I'm using GFS2 as a primary storage filesystem (it's
accessible by iSCSI+multipath), but, of course, I'd like to gain more
performance. Should I try CephFS as an alternative?

Thanks!

-- 
V.Melnik

RE: CephFS vs. GFS2

Posted by Vadim Kimlaychuk <Va...@Elion.ee>.
I am sure you can find a lot of different articles about Gluster/Ceph performance. Here is one of them:

http://iopscience.iop.org/1742-6596/513/4/042014/pdf/1742-6596_513_4_042014.pdf

I think it is not a good idea to use CephFS with CS, as it is FUSE-based and requires an additional layer to perform.  As Andrija mentioned, it is also not recommended because it is NOT production-ready.

Ceph can expose RBD devices, and CloudStack understands RBD storage. I see no need to use CephFS when it is possible to use RBD directly. If you want to create unified storage, you are probably interested in the S3/Swift interface that Ceph offers as well. So you can have primary storage connected to RBD and secondary storage connected to S3/Swift.
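
As a quick illustration of the RBD side, here is a minimal sketch using the python-rados and python-rbd bindings (the pool name "cloudstack" and the image name are placeholders picked for the example, not anything CloudStack creates for you):

    # Minimal sketch: create and list RBD images in a pool that could back
    # primary storage. Pool and image names below are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cloudstack')                 # placeholder pool
        try:
            rbd.RBD().create(ioctx, 'test-volume', 4 * 1024**3)  # 4 GiB image
            print(rbd.RBD().list(ioctx))                         # image names in the pool
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

KVM attaches such an image directly over the network, with no POSIX filesystem layer in between. For the secondary side, radosgw speaks the standard S3 API, so ordinary S3 clients work against it.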

Vadim.

-----Original Message-----
From: Andrija Panic [mailto:andrija.panic@gmail.com] 
Sent: Tuesday, November 04, 2014 10:10 AM
To: users@cloudstack.apache.org
Subject: Re: CephFS vs. GFS2

Officially, CephFS is NOT production-ready - that has been stated by Inktank
themselves - so you have your answer :)

On 4 November 2014 08:21, Stephan Seitz <s....@secretresearchfacility.com>
wrote:

> On Tuesday, 2014-11-04 at 09:07 +0200, Vladimir Melnik wrote:
> > Dear colleagues,
> >
> > Did anyone compare CephFS and GFS2 or did anyone read any articles 
> > about comparison of CephFS and GFS2 in regard to their performance? 
> > Alas, I haven't found anything on this topic.
> >
> > Right now I'm using GFS2 as a primary storage filesystem (it's 
> > accessible by iSCSI+multipath), but, of course, I'd like to gain 
> > more performance. Should I try CephFS as an alternative?
>
> Both technologies are quite different. In short, you shouldn't use
> CephFS; favor Ceph's native RBD instead.
>
> I'll try to follow up later today - currently in a hurry.
>
> >
> > Thanks!
> >
>
>


-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------

Re: CephFS vs. GFS2

Posted by Andrija Panic <an...@gmail.com>.
Officially, CephFS is NOT production-ready - that has been stated by Inktank
themselves - so you have your answer :)

On 4 November 2014 08:21, Stephan Seitz <s....@secretresearchfacility.com>
wrote:

> On Tuesday, 2014-11-04 at 09:07 +0200, Vladimir Melnik wrote:
> > Dear colleagues,
> >
> > Did anyone compare CephFS and GFS2 or did anyone read any articles about
> > comparison of CephFS and GFS2 in regard to their performance? Alas, I
> > haven't found anything on this topic.
> >
> > Right now I'm using GFS2 as a primary storage filesystem (it's
> > accessible by iSCSI+multipath), but, of course, I'd like to gain more
> > performance. Should I try CephFS as an alternative?
>
> Both technologies are quite different. In short, you shouldn't use CephFS;
> favor Ceph's native RBD instead.
>
> I'll try to follow up later today - currently in a hurry.
>
> >
> > Thanks!
> >
>
>


-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------

Re: CephFS vs. GFS2

Posted by Stephan Seitz <s....@secretresearchfacility.com>.
On Tuesday, 2014-11-04 at 09:07 +0200, Vladimir Melnik wrote:
> Dear colleagues,
> 
> Did anyone compare CephFS and GFS2 or did anyone read any articles about
> comparison of CephFS and GFS2 in regard to their performance? Alas, I
> haven't found anything on this topic.
> 
> Right now I'm using GFS2 as a primary storage filesystem (it's
> accessible by iSCSI+multipath), but, of course, I'd like to gain more
> performance. Should I try CephFS as an alternative?

Both technologies are quite different. In short, you shouldn't use CephFS;
favor Ceph's native RBD instead.

I'll try to follow up later today - currently in a hurry.
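
To make "native RBD" a bit more concrete: the KVM host attaches the image as a network disk straight from the Ceph cluster, with no cluster filesystem in the path. Roughly like this with the libvirt Python bindings (the guest name, monitor address, pool/image name, cephx user and secret UUID below are all placeholders):

    # Attach an RBD image to a running KVM guest as a virtio network disk.
    # Every name, address and UUID below is a placeholder.
    import libvirt

    DISK_XML = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='cloudstack/test-volume'>
        <host name='192.0.2.10' port='6789'/>
      </source>
      <auth username='cloudstack'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('i-2-10-VM')    # placeholder guest name
    dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()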

> 
> Thanks!
>