Posted to users@cloudstack.apache.org by Ivan Kudryavtsev <ku...@bw-sw.com> on 2018/12/26 20:07:31 UTC

CloudStack Ceph Storage experiments / help requested

Hello, colleagues. Merry Christmas to you. I have been trying out CloudStack's
Ceph block storage and stumbled upon a problem with deploying a VM to
Ceph RBD.

ACS 4.11.3

1. Created a zone-wide RBD primary storage pool with the 'rbd' storage tag; it shows as UP in CloudStack.
2. Created a service offering (SO) with the 'rbd' storage tag (roughly equivalent API calls are sketched below).
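
For reference, here is roughly what the setup corresponds to through the API,
as a sketch using the Python 'cs' CloudStack client. The endpoint, keys, IDs
and the Ceph monitor address are placeholders, not my real values:

# Sketch of the setup via the CloudStack API (Python 'cs' client).
# Endpoint, keys, zone ID and monitor address below are placeholders.
from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# 1. Zone-wide RBD primary storage tagged 'rbd' (no clusterid passed)
api.createStoragePool(
    zoneid='ZONE_ID',
    scope='zone',
    hypervisor='KVM',                          # required for zone-wide pools
    name='HA-CEPH-SSD-R5',
    url='rbd://cephuser@ceph-mon/cloudstack',  # rbd://<user>@<monitor>/<pool>
    tags='rbd')

# 2. Service offering whose storage tag matches the pool tag
api.createServiceOffering(
    name='rbd-offering',
    displaytext='RBD-backed offering',
    cpunumber=1, cpuspeed=1000, memory=1024,
    tags='rbd')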

Upon deployment I see the following logs:

2018-12-27 02:58:55,691 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) ZoneWideStoragePoolAllocator to find storage pool
2018-12-27 02:58:55,696 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Checking if storage pool is suitable, name: null ,poolId:
37
2018-12-27 02:58:55,699 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Destination pool id: 37
2018-12-27 02:58:55,711 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Pool ID for the volume with ID 21022 is null
2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Found storage pool HA-CEPH-SSD-R5 of type RBD with
overprovisioning factor 1
2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Total over provisioned capacity calculated is 1 *
1099511627776
2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Total capacity of the pool HA-CEPH-SSD-R5 with ID 37 is
1099511627776
2018-12-27 02:58:55,717 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) Checking pool: 37 for storage allocation , maxSize :
1099511627776, totalAllocatedSize : 10737418240, askingSize : 64424509440,
allocated disable threshold: 0.85
2018-12-27 02:58:55,718 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) List of pools in descending order of free capacity: []
2018-12-27 02:58:55,718 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
(logid:dd88c73e) No suitable pools found for volume:
Vol[21022|vm=5641|ROOT] under cluster: 1

So, it looks like the storage pool is found (HA-CEPH-SSD-R5) and its size and
utilization are determined correctly, yet the next step produces an empty set
as the result of the calculation.
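
For what it's worth, plugging the numbers from the "Checking pool: 37" line
into the allocation-threshold arithmetic suggests the pool easily passes the
capacity check, so the empty list must come from some other filter. A quick
sanity check (my reading of the formula implied by the log message):

# Values copied from the "Checking pool: 37 for storage allocation" log line.
max_size = 1099511627776        # 1 TiB total * overprovisioning factor 1
total_allocated = 10737418240   # 10 GiB already allocated
asking_size = 64424509440       # 60 GiB being requested
threshold = 0.85                # allocated disable threshold from the log

used_fraction = (total_allocated + asking_size) / max_size
print(f"used fraction after allocation: {used_fraction:.3f}")  # ~0.068
assert used_fraction <= threshold   # passes, so capacity is not the problem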

Any help is appreciated.

-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>

Re: CloudStack Ceph Storage experiments / help requested

Posted by Andrija Panic <an...@gmail.com>.
Hi Ivan,

I've been using zone-wide Ceph since 4.5, upgraded through 4.8 - so it should
be supported just fine. I didn't respond to your original email since I had no
idea why it would not work.

I'm even using it as we speak in a DEV 4.11.2 zone - so zone-wide support
itself is not the problem.

Cheers

On Wed, 26 Dec 2018 at 21:55, Ivan Kudryavtsev <ku...@bw-sw.com>
wrote:

> I removed the zone-wide Ceph storage and created a cluster-wide one. Now it
> works. I traced the problem to:
>
> https://github.com/apache/cloudstack/blob/68b4b8410138a1a16337ff15ac6260e0ecae9bc0/engine/storage/src/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java#L118
>
> It looks like zone-wide Ceph is not supported.


-- 

Andrija Panić

Re: CloudStack Ceph Storage experiments / help requested

Posted by Ivan Kudryavtsev <ku...@bw-sw.com>.
I removed the zone-wide Ceph storage and created a cluster-wide one. Now it
works. I traced the problem to:

https://github.com/apache/cloudstack/blob/68b4b8410138a1a16337ff15ac6260e0ecae9bc0/engine/storage/src/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java#L118

It looks like zone-wide Ceph is not supported.
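
To illustrate what I suspect is happening - and this is only a hypothetical
Python sketch of the kind of suitability check such an allocator might make,
not the actual Java code at that line - a cluster-scoped deployment plan
combined with a pool that carries no cluster ID could be rejected like this:

# Hypothetical illustration (NOT the real CloudStack code) of how a
# suitability filter could silently drop a zone-wide pool when the
# deployment plan is scoped to a cluster (the log said "under cluster: 1").
def filter_pool(pool, plan, avoid):
    if pool.id in avoid.pool_ids:
        return False                  # pool explicitly excluded earlier
    if plan.cluster_id is not None and pool.cluster_id is None:
        # Zone-wide pools have no cluster ID, so a check like this
        # would reject them whenever the plan targets a cluster.
        return False
    return True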

-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>
