Posted to users@cloudstack.apache.org by Mārtiņš Jakubovičs <ma...@vertigs.lv> on 2014/12/18 13:19:16 UTC

Volume scheduler

Hello,

I want to understand how the volume scheduler works in CloudStack.
We have two SRs:

First SR:
Disk Total    1.70 TB
Disk Allocated    1.31 TB

Second SR:
Disk Total    1.83 TB
Disk Allocated    57.09 GB

Why are there many more volumes on the first SR than on the second one?
When I create a new volume, it ends up on the first SR anyway. Why is the
volume not created on the SR that has much more free space?
How can I change the scheduling options?

Both SRs have the same "Storage Tags".

ACS 4.3.1
storage.overprovisioning.factor 2.0
XenServer 6.2


Best regards,
Martins
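
For reference, with storage.overprovisioning.factor set to 2.0 the allocator treats each SR as twice its raw size when it checks allocated capacity (the allocator log later in the thread confirms maxSize = totalSize x 2). A rough sketch of that arithmetic for the two SRs above, in illustrative Java rather than CloudStack code:

// Rough sketch of the allocated-capacity arithmetic with
// storage.overprovisioning.factor = 2.0; the figures are the SR sizes above.
public class OverprovisionSketch {
    public static void main(String[] args) {
        double factor = 2.0;                          // storage.overprovisioning.factor
        double sr1TotalTB = 1.70, sr1AllocTB = 1.31;
        double sr2TotalTB = 1.83, sr2AllocTB = 0.056; // 57.09 GB

        report("First SR",  sr1TotalTB, sr1AllocTB, factor);
        report("Second SR", sr2TotalTB, sr2AllocTB, factor);
        // The first SR has only ~39% of its overprovisioned capacity allocated,
        // still well under the default 0.85 disable threshold, so the allocator
        // keeps treating it as a suitable pool.
    }

    static void report(String name, double totalTB, double allocTB, double factor) {
        double effective = totalTB * factor;          // effective capacity after overprovisioning
        System.out.printf("%s: %.2f TB allocated of %.2f TB effective (%.0f%%)%n",
                name, allocTB, effective, 100 * allocTB / effective);
    }
}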

RE: Volume scheduler

Posted by Anshul Gangwar <an...@citrix.com>.
You can use tags on the disk offering/service offering to place a volume on a specific storage pool.

You can also specify the algorithm used for volume placement. For more information, refer to http://support.citrix.com/article/CTX135790.
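
To make the tag mechanism concrete, here is a simplified, illustrative sketch (class and field names are made up; this is not CloudStack's allocator code): the storage tag on the disk or service offering is matched against the tags on each primary storage pool, and only pools that carry all of the offering's tags remain candidates for placement.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative sketch of tag-based pool filtering; names are made up and do
// not correspond to CloudStack classes.
class PoolTagFilterSketch {
    record Pool(long id, Set<String> tags, long freeBytes) {}

    // Keep only pools that carry every tag requested by the offering.
    static List<Pool> filterByTags(List<Pool> pools, Set<String> offeringTags) {
        List<Pool> matching = new ArrayList<>();
        for (Pool p : pools) {
            if (p.tags().containsAll(offeringTags)) {
                matching.add(p);
            }
        }
        return matching;
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(
                new Pool(2, Set.of("SATA"), 950_000_000_000L),
                new Pool(3, Set.of("SATA"), 1_800_000_000_000L),
                new Pool(1, Set.of("SSD"), 500_000_000_000L));
        // With both SRs tagged "SATA" (as in this thread), both stay in the
        // candidate list; a placement algorithm then has to choose between them.
        System.out.println(filterByTags(pools, Set.of("SATA")));
    }
}

If each pool carries a distinct tag and there is one disk offering per tag, individual volumes can be steered to a specific SR, which is essentially the suggestion above.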

-----Original Message-----
From: Mārtiņš Jakubovičs [mailto:martins@vertigs.lv] 
Sent: Tuesday, December 30, 2014 5:13 PM
To: users@cloudstack.apache.org
Subject: Re: Volume scheduler

Hello,

I looked into this a little more deeply and found that the storage allocator does not check more than one SR. It checks the first suitable SR and creates the volume on it. I think it would be useful if ACS checked all suitable SRs and let a "scheduler" decide where to create the volume. The schedulers could differ, for example, deploy on the SR with the most free space, or try to fill one SR completely first, etc. Right now I cannot really control where volumes get created; I can remove the storage tag from an SR, but that is manual work, not the kind of automatic solution we all expect from CloudStack.

Also, is it possible to base the thresholds (pool.storage.allocated.capacity.disablethreshold, pool.storage.capacity.disablethreshold, etc.) on the actual used space in the SR?

2014-12-30 11:47:05,328 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) LocalStoragePoolAllocator trying to find storage pool to fit the vm
2014-12-30 11:47:05,329 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) ClusterScopeStoragePoolAllocator looking for storage pool
2014-12-30 11:47:05,329 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Looking for pools in dc: 1
pod:1  cluster:1 having tags:[SATA]
2014-12-30 11:47:05,332 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Found pools matching tags: 
[Pool[2|NetworkFilesystem], Pool[3|NetworkFilesystem]]
2014-12-30 11:47:05,336 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Adding pool Pool[1|NetworkFilesystem] to avoid set since it did not match tags
2014-12-30 11:47:05,337 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Removing pool Pool[2|NetworkFilesystem] from avoid set, must have been inserted when searching for another disk's tag
2014-12-30 11:47:05,337 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Removing pool Pool[3|NetworkFilesystem] from avoid set, must have been inserted when searching for another disk's tag
2014-12-30 11:47:05,339 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking if storage pool is suitable, name: null ,poolId: 2
2014-12-30 11:47:05,348 DEBUG [c.c.s.StorageManagerImpl]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking pool 2 for storage,
totalSize: 1869169819648, usedBytes: 919377674240, usedPct: 
0.49186417658569737, disable threshold: 0.85
2014-12-30 11:47:05,355 DEBUG [c.c.s.StorageManagerImpl]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking pool: 2 for volume allocation [Vol[435|vm=null|DATADISK]], maxSize : 3738339639296, totalAllocatedSize : 1551179448320, askingSize : 21474836480, allocated disable threshold: 0.85
2014-12-30 11:47:05,356 DEBUG
[o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) ClusterScopeStoragePoolAllocator returning 1 suitable storage pools
2014-12-30 11:47:05,358 DEBUG [o.a.c.e.o.VolumeOrchestrator]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Trying to create org.apache.cloudstack.storage.volume.VolumeObject@411a72ee on
org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@2f6f88a6
2014-12-30 11:47:06,798 DEBUG
[o.a.c.s.d.d.CloudStackPrimaryDataStoreDriverImpl]
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Creating volume: 
org.apache.cloudstack.storage.volume.VolumeObject@2b6b3912

On 2014.12.18. 14:19, Mārtiņš Jakubovičs wrote:
> Hello,
>
> I want to understand how the volume scheduler works in CloudStack.
> We have two SRs:
>
> First SR:
> Disk Total    1.70 TB
> Disk Allocated    1.31 TB
>
> Second SR:
> Disk Total    1.83 TB
> Disk Allocated    57.09 GB
>
> Why are there many more volumes on the first SR than on the second one?
> When I create a new volume, it ends up on the first SR anyway. Why is the
> volume not created on the SR that has much more free space?
> How can I change the scheduling options?
>
> Both SRs have the same "Storage Tags".
>
> ACS 4.3.1
> storage.overprovisioning.factor 2.0
> XenServer 6.2
>
>
> Best regards,
> Martins


Re: Volume scheduler

Posted by Mārtiņš Jakubovičs <ma...@vertigs.lv>.
Hello,

I looked into this a little more deeply and found that the storage allocator does not check more than one SR. It checks the first suitable SR and creates the volume on it. I think it would be useful if ACS checked all suitable SRs and let a "scheduler" decide where to create the volume. The schedulers could differ, for example, deploy on the SR with the most free space, or try to fill one SR completely first, etc. Right now I cannot really control where volumes get created; I can remove the storage tag from an SR, but that is manual work, not the kind of automatic solution we all expect from CloudStack.
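
A minimal sketch of the kind of pluggable strategy described above, with hypothetical names (this is not an actual CloudStack interface, only an illustration of the "most free space" idea): given the pools the allocator already found suitable, pick the one with the most remaining capacity instead of the first match.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical "most free space" strategy over the already-suitable pools;
// names are illustrative only, not a CloudStack extension point.
class MostFreeSpaceStrategySketch {
    record SuitablePool(long id, long capacityBytes, long allocatedBytes) {
        long freeBytes() { return capacityBytes - allocatedBytes; }
    }

    // Return the suitable pool with the largest remaining capacity, if any.
    static Optional<SuitablePool> choose(List<SuitablePool> suitable) {
        return suitable.stream()
                .max(Comparator.comparingLong(SuitablePool::freeBytes));
    }
}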

Also, is it possible to base the thresholds (pool.storage.allocated.capacity.disablethreshold, pool.storage.capacity.disablethreshold, etc.) on the actual used space in the SR?

2014-12-30 11:47:05,328 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) LocalStoragePoolAllocator 
trying to find storage pool to fit the vm
2014-12-30 11:47:05,329 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) 
ClusterScopeStoragePoolAllocator looking for storage pool
2014-12-30 11:47:05,329 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Looking for pools in dc: 1  
pod:1  cluster:1 having tags:[SATA]
2014-12-30 11:47:05,332 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Found pools matching tags: 
[Pool[2|NetworkFilesystem], Pool[3|NetworkFilesystem]]
2014-12-30 11:47:05,336 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Adding pool 
Pool[1|NetworkFilesystem] to avoid set since it did not match tags
2014-12-30 11:47:05,337 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Removing pool 
Pool[2|NetworkFilesystem] from avoid set, must have been inserted when 
searching for another disk's tag
2014-12-30 11:47:05,337 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Removing pool 
Pool[3|NetworkFilesystem] from avoid set, must have been inserted when 
searching for another disk's tag
2014-12-30 11:47:05,339 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking if storage pool is 
suitable, name: null ,poolId: 2
2014-12-30 11:47:05,348 DEBUG [c.c.s.StorageManagerImpl] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking pool 2 for storage, 
totalSize: 1869169819648, usedBytes: 919377674240, usedPct: 
0.49186417658569737, disable threshold: 0.85
2014-12-30 11:47:05,355 DEBUG [c.c.s.StorageManagerImpl] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Checking pool: 2 for volume 
allocation [Vol[435|vm=null|DATADISK]], maxSize : 3738339639296, 
totalAllocatedSize : 1551179448320, askingSize : 21474836480, allocated 
disable threshold: 0.85
2014-12-30 11:47:05,356 DEBUG 
[o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) 
ClusterScopeStoragePoolAllocator returning 1 suitable storage pools
2014-12-30 11:47:05,358 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Trying to create 
org.apache.cloudstack.storage.volume.VolumeObject@411a72ee on 
org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@2f6f88a6
2014-12-30 11:47:06,798 DEBUG 
[o.a.c.s.d.d.CloudStackPrimaryDataStoreDriverImpl] 
(Job-Executor-11:ctx-c97ee1d1 ctx-9293a9d0) Creating volume: 
org.apache.cloudstack.storage.volume.VolumeObject@2b6b3912
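
Reading the two StorageManagerImpl lines in the log above: the capacity check compares usedBytes/totalSize with pool.storage.capacity.disablethreshold, and the allocation check compares totalAllocatedSize plus askingSize against maxSize (totalSize times storage.overprovisioning.factor) scaled by pool.storage.allocated.capacity.disablethreshold. A rough re-computation of pool 2's numbers (a sketch of the arithmetic the logged values imply, not the exact CloudStack code):

// Re-computing the StorageManagerImpl checks from the log above for pool 2.
// The exact comparison in CloudStack may differ in detail; this sketches the
// arithmetic the logged values imply.
public class ThresholdCheckSketch {
    public static void main(String[] args) {
        long totalSize = 1_869_169_819_648L;   // pool 2 raw size from the log
        long usedBytes = 919_377_674_240L;
        double capacityThreshold = 0.85;       // pool.storage.capacity.disablethreshold

        double usedPct = (double) usedBytes / totalSize;
        System.out.printf("usedPct      = %.4f (pool skipped above %.2f)%n",
                usedPct, capacityThreshold);   // ~0.4919, matches the log

        long maxSize   = totalSize * 2;        // totalSize * storage.overprovisioning.factor (2.0)
        long allocated = 1_551_179_448_320L;   // totalAllocatedSize from the log
        long asking    = 21_474_836_480L;      // the 20 GB volume being placed
        double allocatedThreshold = 0.85;      // pool.storage.allocated.capacity.disablethreshold

        double allocatedPct = (double) (allocated + asking) / maxSize;
        System.out.printf("allocatedPct = %.4f (pool skipped above %.2f)%n",
                allocatedPct, allocatedThreshold);  // ~0.42

        // Both ratios are below 0.85, so pool 2 passes, which is consistent
        // with "returning 1 suitable storage pools" in the log.
    }
}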

On 2014.12.18. 14:19, Mārtiņš Jakubovičs wrote:
> Hello,
>
> I want to understand how the volume scheduler works in CloudStack.
> We have two SRs:
>
> First SR:
> Disk Total    1.70 TB
> Disk Allocated    1.31 TB
>
> Second SR:
> Disk Total    1.83 TB
> Disk Allocated    57.09 GB
>
> Why are there many more volumes on the first SR than on the second one?
> When I create a new volume, it ends up on the first SR anyway. Why is the
> volume not created on the SR that has much more free space?
> How can I change the scheduling options?
>
> Both SRs have the same "Storage Tags".
>
> ACS 4.3.1
> storage.overprovisioning.factor 2.0
> XenServer 6.2
>
>
> Best regards,
> Martins