Posted to users@cloudstack.apache.org by Pratik Chandrakar <ch...@gmail.com> on 2020/04/24 12:05:02 UTC

slow performance of vm on gluster

Hello,
I am using Gluster 5.11 (3-node replication on RAID 5) as primary storage for CloudStack 4.11.2 on CentOS 7.7. The setup has been running stably for more than a year, but VM performance has degraded badly as the number of active VMs has grown; UI response and VM boot times are also slow. Another problem: any VM with more than 500 GB of storage, once stopped, will not start again without manually detaching and re-attaching its data volumes. I am therefore planning to migrate to Ceph RBD. Would that be the right choice, or should I consider something else? HA and deduplication are must-have requirements.
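For reference, the manual workaround looks roughly like the sketch below when scripted against the CloudStack API (using the third-party "cs" Python client, pip install cs; the endpoint, keys and VM UUID are placeholders, and a real script would poll queryAsyncJobResult after each call, since detach/attach/start are all async jobs):

from cs import CloudStack

# Placeholder endpoint and credentials -- substitute your own.
api = CloudStack(endpoint='https://cloud.example.com/client/api',
                 key='API_KEY', secret='SECRET_KEY')

vm_id = 'REPLACE-WITH-VM-UUID'   # the stopped VM that refuses to start

# List the VM's data disks (the ROOT disk stays attached throughout).
volumes = api.listVolumes(virtualmachineid=vm_id, type='DATADISK')
data_disks = volumes.get('volume', [])

# Detach every data disk while the VM is stopped.
for vol in data_disks:
    api.detachVolume(id=vol['id'])

# Start the VM with only its ROOT disk attached...
api.startVirtualMachine(id=vm_id)

# ...then re-attach the data disks to the now-running VM.
for vol in data_disks:
    api.attachVolume(id=vol['id'], virtualmachineid=vm_id)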
I know it's a naive question, but I couldn't find an answer on Google.

Re: slow performance of vm on gluster

Posted by Ivan Kudryavtsev <iv...@bw-sw.com>.
I use Gluster on SSD RAID 5 (two replicas + arbiter) with ACS for those who
need VM HA. It works fine, but I doubt it will work well on HDD RAID 5,
which is really only suited to linear workloads unless you have a BBU and
other tricks.
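
For anyone who wants to reproduce that layout, here is a rough sketch
driving the gluster CLI from Python (hostnames, brick paths and the
volume name are only examples; run it on one of the trusted-pool nodes):

import subprocess

def gluster(*args):
    # Run a gluster CLI command, raising if it fails.
    subprocess.run(['gluster', *args], check=True)

# Three nodes: two full data replicas plus one metadata-only arbiter.
bricks = ['node1:/data/brick1/vmstore',
          'node2:/data/brick1/vmstore',
          'node3:/data/brick1/vmstore']   # last brick becomes the arbiter

# "replica 3 arbiter 1" = 2 copies of the data + 1 arbiter brick.
gluster('volume', 'create', 'vmstore',
        'replica', '3', 'arbiter', '1', *bricks)

# Apply the option group tuned for VM image workloads.
gluster('volume', 'set', 'vmstore', 'group', 'virt')

gluster('volume', 'start', 'vmstore')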


Re: slow performance of vm on gluster

Posted by nu...@li.nux.ro.
Hi,

I would not use Gluster in production for VM workloads. It might do 
well as secondary storage, where the traffic is mostly sequential 
writes rather than lots of random I/O; it would be fast at that.
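
If you want to confirm the random I/O pattern first, gluster's built-in 
profiler reports per-brick latency and block-size distributions; a rough 
way to drive it (the volume name is only an example):

import subprocess
import time

VOL = 'vmstore'   # example volume name

# Start collecting per-brick latency / block-size statistics.
subprocess.run(['gluster', 'volume', 'profile', VOL, 'start'], check=True)

time.sleep(300)   # let it sample five minutes of real VM traffic

# Dump the stats; many small blocks under WRITE indicate random I/O.
out = subprocess.run(['gluster', 'volume', 'profile', VOL, 'info'],
                     check=True, capture_output=True, text=True)
print(out.stdout)

subprocess.run(['gluster', 'volume', 'profile', VOL, 'stop'], check=True)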

CEPH is a much better choice: its user base is an order of magnitude 
larger, so many more problems and corner cases have already been 
covered. CEPH is, however, also much more complex to deploy and 
maintain, so you should do a few trials before committing.
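
A trial can start as small as this smoke test with Ceph's official 
Python bindings (python3-rados / python3-rbd); the pool and image names 
are only examples, and it assumes a working /etc/ceph/ceph.conf and 
keyring on the client:

import rados
import rbd

# Connect to the cluster using the local ceph.conf and default keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('cloudstack')   # a pre-created RBD pool

try:
    # Create a 1 GiB image, write to it, read the data back, clean up.
    rbd.RBD().create(ioctx, 'smoke-test', 1024 ** 3)
    with rbd.Image(ioctx, 'smoke-test') as image:
        image.write(b'hello ceph', 0)
        assert image.read(0, 10) == b'hello ceph'
    rbd.RBD().remove(ioctx, 'smoke-test')
finally:
    ioctx.close()
    cluster.shutdown()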

You should be able to get an HA CEPH deployment up and running. I do 
not think it does deduplication, though, and neither does GlusterFS 
AFAIK. I would imagine any deduplication process would either cripple 
performance or require ludicrous amounts of extra resources.

/imho

Regards
