Posted to users@cloudstack.apache.org by Indra Pramana <in...@sg.or.id> on 2013/07/15 05:41:31 UTC

Wrong storage capacity issue reported

Dear all,

I am using CloudStack 4.1.0, and I just managed to get it set up over the
weekend. The system VMs have been created and both (SSVM and CPVM) are running
fine. The default CentOS template has also been downloaded and is ready to use.

However, I am not able to launch my first VM instance because the storage
capacity checker is reporting incorrect usage information. I have a total of
13 TB of primary storage, but the capacity checker reports my usage as
36,510 TB (274879.22%), which clearly cannot be the case.

===
2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
(CapacityChecker:null) System Alert: Low Available Storage in cluster
Cluster-01 pod Pod-01 of availability zone 01
2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
(CapacityChecker:null) Available storage space is low, total: 13282342 MB,
used: 36510398411 MB (274879.22%)
===
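
A quick sanity check on those numbers (a rough Python sketch, not CloudStack's
own code) confirms that the reported percentage is simply the "used" figure
divided by the total, so it is the used figure that is wrong rather than the
total:

===
# Sketch only: reproduce the percentage from the alert above.
total_mb = 13_282_342          # total primary storage reported, in MB
used_mb = 36_510_398_411       # "used" figure reported, in MB

print(round(used_mb / total_mb * 100, 2))   # -> 274879.22, as in the alert
===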

As a result, VM instance creation fails since it is not able to find an
available storage pool.

===
2013-07-15 11:15:28,313 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-3:job-168) Checking pool: 208 for volume allocation
[Vol[227|vm=225|ROOT]], maxSize : 15828044742656, totalAllocatedSize :
1769538048, askingSize : 8589934592, allocated disable threshold: 0.85
2013-07-15 11:15:28,313 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Checking if storage pool is suitable, name: sc-image ,poolId: 209
2013-07-15 11:15:28,313 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Is localStorageAllocationNeeded? false
2013-07-15 11:15:28,313 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Is storage pool shared? true
2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
6013522722816, usedBytes: 38283921137336466, usedPct: 6366.305226067051,
disable threshold: 0.85
2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-3:job-168) Insufficient space on pool: 209 since its usage
percentage: 6366.305226067051 has crossed the
pool.storage.capacity.disablethreshold: 0.85
2013-07-15 11:15:28,317 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-3:job-168)
FirstFitStoragePoolAllocator returning 1 suitable storage pools
2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) Checking suitable pools for volume (Id, Type):
(228,DATADISK)
2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) We need to allocate new storagepool for this volume
2013-07-15 11:15:28,319 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) Calling StoragePoolAllocators to find suitable
pools
2013-07-15 11:15:28,319 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-3:job-168)
Looking for pools in dc: 6  pod:6  cluster:6 having tags:[rbd]
2013-07-15 11:15:28,322 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-3:job-168)
FirstFitStoragePoolAllocator has 1 pools to check for allocation
2013-07-15 11:15:28,322 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Checking if storage pool is suitable, name: sc-image ,poolId: 209
2013-07-15 11:15:28,322 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Is localStorageAllocationNeeded? false
2013-07-15 11:15:28,322 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-3:job-168)
Is storage pool shared? true
2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
6013522722816, usedBytes: 38283921137336466, usedPct: 6366.305226067051,
disable threshold: 0.85
2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-3:job-168) Insufficient space on pool: 209 since its usage
percentage: 6366.305226067051 has crossed the
pool.storage.capacity.disablethreshold: 0.85
2013-07-15 11:15:28,326 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-3:job-168)
FirstFitStoragePoolAllocator returning 0 suitable storage pools
2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) No suitable pools found for volume:
Vol[228|vm=225|DATADISK] under cluster: 6
2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) No suitable pools found
2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) No suitable storagePools found under this Cluster:
6
2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-3:job-168) Could not find suitable Deployment Destination for
this VM under any clusters, returning.
2013-07-15 11:15:28,332 DEBUG [cloud.vm.UserVmManagerImpl]
(Job-Executor-3:job-168) Destroying vm VM[User|Indra-Test-3] as it failed
to create on Host with Id:null
2013-07-15 11:15:28,498 DEBUG [cloud.capacity.CapacityManagerImpl]
(Job-Executor-3:job-168) VM state transitted from :Stopped to Error with
event: OperationFailedToErrorvm's original host id: null new host id: null
host id before state transition: null
2013-07-15 11:15:29,125 INFO  [user.vm.DeployVMCmd]
(Job-Executor-3:job-168)
com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[User|Indra-Test-3]Scope=interface
com.cloud.dc.DataCenter; id=6
===
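
The allocator's decision comes down to the ratio of usedBytes to totalSize
compared against pool.storage.capacity.disablethreshold. A minimal sketch
(not the actual StorageManagerImpl code) using the values logged for pool 209:

===
# Sketch only: the capacity check the log above describes for pool 209.
total_size = 6_013_522_722_816        # totalSize in bytes (~5.47 TiB)
used_bytes = 38_283_921_137_336_466   # usedBytes recorded for the pool
disable_threshold = 0.85              # pool.storage.capacity.disablethreshold

used_pct = used_bytes / total_size
print(used_pct)                       # -> ~6366.3, matching "usedPct" in the log

if used_pct > disable_threshold:
    print("Insufficient space on pool: usage has crossed the disable threshold")
===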

Can anyone advise on how to resolve this problem?

Looking forward to your reply, thank you.

Cheers.

RE: Wrong storage capacity issue reported

Posted by Koushik Das <ko...@citrix.com>.
This is the storage associated with a hypervisor host. When you enable the 'local storage' property at the zone, local storage also appears under the primary storage pools. The cloud service needs to be restarted for existing local pools (hosts that were already added before the property was enabled) to show up.

> -----Original Message-----
> From: Indra Pramana [mailto:indra@sg.or.id]
> Sent: Monday, July 15, 2013 3:22 PM
> To: users@cloudstack.apache.org
> Cc: guangjian@gmail.com; Wido den Hollander
> Subject: Re: Wrong storage capacity issue reported
> 
> hi Koushik,
> 
> Good day to you, and thank you for your e-mail.
> 
> May I know which local storage you are referring to?
> 
> Looking forward to your reply, thank you.
> 
> Cheers.
> 
> 
> 
> On Mon, Jul 15, 2013 at 4:28 PM, Koushik Das <ko...@citrix.com>
> wrote:
> 
> > There is an issue with local storage as well. Looks like there was
> > change in the storage_pool table where the available_bytes column was
> > changed to used_bytes but in the code available bytes was still passed
> > for used bytes for local storage.
> >
> > > -----Original Message-----
> > > From: Indra Pramana [mailto:indra@sg.or.id]
> > > Sent: Monday, July 15, 2013 10:29 AM
> > > To: users@cloudstack.apache.org
> > > Cc: guangjian@gmail.com; Wido den Hollander
> > > Subject: Re: Wrong storage capacity issue reported
> > >
> > > Hi Prasanna,
> > >
> > > Good day to you, and thank you for your e-mail.
> > >
> > > See my reply inline below.
> > >
> > >
> > > On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam
> > > <ts...@apache.org>
> > > wrote:
> > >
> > > > On Mon, Jul 15, 2013 at 11:58:47AM +0800, Indra Pramana wrote:
> > > > > Dear all,
> > > > >
> > > > > In addition to my previous e-mail, I just realised that the
> > > > > wrong
> > > > capacity
> > > > > usage information is only applicable to the Ceph RBD primary
> > > > > storage. I
> > > > did
> > > > > a check manually on the "storage_pool" table on the "cloud"
> > > > > MySQL
> > > > database:
> > > > >
> > > > >
> > > >
> > +-----+-----------------+--------------------------------------+-------------------+----
> > >
> > --+----------------+--------+------------+-------------------+----------------+---------
> --
> > >
> > ----------------+------------------------------------------------+------------------------
> +-
> > >
> > --------------------+---------------------+-------------+-------------+------------------
> --
> > > -+-------+
> > > > > | id  | name            | uuid                                 |
> > > > > pool_type         | port | data_center_id | pod_id | cluster_id |
> > > > > available_bytes   | capacity_bytes | host_address              |
> > > > > user_info                                      | path
> >     |
> > > > > created             | removed             | update_time | status
> >  |
> > > > > storage_provider_id | scope |
> > > > >
> > > >
> > +-----+-----------------+--------------------------------------+-------------------+----
> > >
> > --+----------------+--------+------------+-------------------+----------------+---------
> --
> > >
> > ----------------+------------------------------------------------+------------------------
> +-
> > >
> > --------------------+---------------------+-------------+-------------+------------------
> --
> > > -+-------+
> > > > > | 209 | sc-image        | bab81ce8-d53f-3a7d-b8f6-841702f65c89 |
> > > > > RBD               | 6789 |              6 |      6 |          6 |
> > > > > 38283921137336466 |  6013522722816 | ceph-mon.xxx.com |
> admin:xxx |
> > > > > sc1                    | 2013-07-13 08:58:27 | NULL                |
> > > > > NULL        | Up          |                NULL | NULL  |
> > > > >
> > > > > The "available_bytes" column is wrong.
> > > > >
> > > > > My issue is similar to the one reported by Guangjian Liu here,
> > > > > but it
> > > > seems
> > > > > that so far there's no solutions available?
> > > > >
> > > > >
> > > > http://mail-archives.apache.org/mod_mbox/cloudstack-
> > > dev/201304.mbox/%3
> > > >
> > >
> CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.
> > > com%3E
> > > >
> > > > It might be a bug on ceph but the thread you referenced was resolved.
> > > > See here: http://markmail.org/message/mkm2fqyawmwpufsc
> > > >
> > >
> > > Thanks for the email thread. However, the conversation doesn't
> > > mention on how was the problem resolved.
> > >
> > > I tried to execute the same command which have suggested by Wido on
> > > the email thread:
> > >
> > > ===
> > > root@hv-kvm-02:~# virsh pool-list
> > > Name                 State      Autostart
> > > -----------------------------------------
> > > 87ba6ca3-1b46-3f36-b138-2fd3bffb3d71 active     no
> > > bab81ce8-d53f-3a7d-b8f6-841702f65c89 active     no
> > > ff06ae2a-ff27-4ff6-87b3-c2f942cf76d6 active     no
> > >
> > > root@hv-kvm-02:~# virsh pool-info bab81ce8-d53f-3a7d-b8f6-
> 841702f65c89
> > > Name:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> > > UUID:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> > > State:          running
> > > Persistent:     no
> > > Autostart:      no
> > > Capacity:       5.47 TiB
> > > Allocation:     *34819.02 TiB* <-- wrong information
> > > Available:      5.47 TiB
> > > ===
> > >
> > > If it still persists, can you please file a bug on JIRA?
> > > >
> > >
> > > Any specific instructions on how to file the bug?
> > >
> > >
> > > >  > Is it safe for me to update the database record manually (using
> > > > the UPDATE
> > > > > MySQL command) to reflect the actual usage of the disk?
> > > >
> > > > Even if you do whatever is reporting the stats to CS about Ceph's
> > > > storage usage, will overwrite those values.
> > > >
> > >
> > > Noted, thanks. So it's an issue on the Ceph RBD side rather than on
> > > CloudStack side? Updating the Cloudstack database record manually
> > > will
> > not
> > > help?
> > >
> > > In any case, I tried to update the "available_bytes" and "capacity_bytes"
> > > record on the "storage_pool" table of "cloud" database manually to
> > reflect
> > > the actual size of the RBD image, which is 3 TB:
> > >
> > > ===
> > > indra@cs-mgmt-01:~$ rbd --image sc-image -p sc1 info rbd image
> > 'sc-image':
> > >         size 3072 GB in 786432 objects
> > >         order 22 (4096 KB objects)
> > >         block_name_prefix: rb.0.1825.238e1f29
> > >         format: 1
> > >
> > > mysql> UPDATE storage_pool SET available_bytes=3072000000,
> > > capacity_bytes=3072000000 WHERE id=209; Query OK, 1 row affected
> > > (0.03 sec) Rows matched: 1  Changed: 1  Warnings: 0 ===
> > >
> > > I tried to re-run the creation of the VM instance again, and it's
> > > still
> > failed even
> > > though the error message is a bit different. Not too sure where the
> > > "usedBytes" value is coming from?
> > >
> > > usedBytes: 38283921137336466
> > >
> > > ===
> > > 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> > > (Job-Executor-5:job-170) Checking pool 209 for storage, totalSize:
> > > 3072000000, usedBytes: 38283921137336466, usedPct:
> > > 1.246221391189338E7, disable threshold: 0.85
> > > 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> > > (Job-Executor-5:job-170) Insufficient space on pool: 209 since its
> > > usage
> > > percentage: 1.246221391189338E7 has crossed the
> > > pool.storage.capacity.disablethreshold: 0.85
> > > 2013-07-15 12:10:46,232 DEBUG
> > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > (Job-Executor-5:job-170) FirstFitStoragePoolAllocator returning 0
> > > suitable storage pools
> > > 2013-07-15 12:10:46,232 DEBUG [cloud.deploy.FirstFitPlanner]
> > > (Job-Executor-5:job-170) No suitable pools found for volume:
> > > Vol[231|vm=227|DATADISK] under cluster: 6
> > > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > > (Job-Executor-5:job-170) No suitable pools found
> > > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > > (Job-Executor-5:job-170) No suitable storagePools found under this
> > Cluster:
> > > 6
> > > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > > (Job-Executor-5:job-170) Could not find suitable Deployment
> > > Destination
> > for
> > > this VM under any clusters, returning.
> > > 2013-07-15 12:10:46,235 DEBUG [cloud.vm.UserVmManagerImpl]
> > > (Job-Executor-5:job-170) Destroying vm VM[User|Indra-Test-6] as it
> > failed to
> > > create on Host with Id:null
> > > 2013-07-15 12:10:46,358 DEBUG [cloud.capacity.CapacityManagerImpl]
> > > (Job-Executor-5:job-170) VM state transitted from :Stopped to Error
> > > with
> > > event: OperationFailedToErrorvm's original host id: null new host id:
> > null host
> > > id before state transition: null
> > > 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> > > (Job-Executor-5:job-170)
> > > com.cloud.exception.InsufficientServerCapacityException: Unable to
> > > create a deployment for VM[User|Indra-Test-6]Scope=interface
> > > com.cloud.dc.DataCenter; id=6
> > > 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> > > (Job-Executor-5:job-170) Unable to create a deployment for
> > > VM[User|Indra- Test-6] ===
> > >
> > >
> > >
> > > >
> > > > >
> > > > > Looking forward to your reply, thank you.
> > > > >
> > > > > Cheers.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Mon, Jul 15, 2013 at 11:41 AM, Indra Pramana <in...@sg.or.id>
> > wrote:
> > > > >
> > > > > > Dear all,
> > > > > >
> > > > > > I am using CloudStack 4.1.0. Just managed to get it setup over
> > > > > > the weekend. System VMs have been created and both (SSVM and
> > > > > > CPVM)
> > > are
> > > > running
> > > > > > fine. The default CentOS template has also been downloaded and
> > > > > > ready
> > > > to use.
> > > > > >
> > > > > > However, I am not able to launch my first VM instance because
> > > > > > the
> > > > storage
> > > > > > capacity checker is reporting wrong usage information. I have
> > > > > > a total
> > > > of 13
> > > > > > TB of primary storage and the capacity checker is reporting my
> > > > > > usage is
> > > > > > 36,510 TB (274879.22%), which is not supposed to be the case.
> > > > > >
> > > > > > ===
> > > > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > > > (CapacityChecker:null) System Alert: Low Available Storage in
> > > > > > cluster
> > > > > > Cluster-01 pod Pod-01 of availability zone 01
> > > > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > > > (CapacityChecker:null) Available storage space is low, total:
> > > > > > 13282342
> > > > MB,
> > > > > > used: 36510398411 MB (274879.22%) ===
> > > > > >
> > > > > > As a result, VM instance creation fails since it's not able to
> > > > > > find available storage pool.
> > > > > >
> > > > > > ===
> > > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > > [cloud.storage.StorageManagerImpl]
> > > > > > (Job-Executor-3:job-168) Checking pool: 208 for volume
> > > > > > allocation [Vol[227|vm=225|ROOT]], maxSize : 15828044742656,
> > > totalAllocatedSize :
> > > > > > 1769538048, askingSize : 8589934592, allocated disable threshold:
> > > > > > 0.85
> > > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Checking if storage pool is suitable, name: sc-image ,poolId:
> > > > > > 209
> > > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Is localStorageAllocationNeeded? false
> > > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Is storage pool shared? true
> > > > > > 2013-07-15 11:15:28,317 DEBUG
> > > > > > [cloud.storage.StorageManagerImpl]
> > > > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > > > 6366.305226067051,
> > > > > > disable threshold: 0.85
> > > > > > 2013-07-15 11:15:28,317 DEBUG
> > > > > > [cloud.storage.StorageManagerImpl]
> > > > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since
> > > > > > its
> > > > usage
> > > > > > percentage: 6366.305226067051 has crossed the
> > > > > > pool.storage.capacity.disablethreshold: 0.85
> > > > > > 2013-07-15 11:15:28,317 DEBUG
> > > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > FirstFitStoragePoolAllocator returning 1 suitable storage
> > > > > > pools
> > > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) Checking suitable pools for volume
> > > > > > (Id,
> > Type):
> > > > > > (228,DATADISK)
> > > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) We need to allocate new storagepool
> > > > > > for this
> > > > volume
> > > > > > 2013-07-15 11:15:28,319 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) Calling StoragePoolAllocators to find
> > > > > > suitable pools
> > > > > > 2013-07-15 11:15:28,319 DEBUG
> > > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Looking for pools in dc: 6  pod:6  cluster:6 having tags:[rbd]
> > > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > FirstFitStoragePoolAllocator has 1 pools to check for
> > > > > > allocation
> > > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Checking if storage pool is suitable, name: sc-image ,poolId:
> > > > > > 209
> > > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Is localStorageAllocationNeeded? false
> > > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > Is storage pool shared? true
> > > > > > 2013-07-15 11:15:28,326 DEBUG
> > > > > > [cloud.storage.StorageManagerImpl]
> > > > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > > > 6366.305226067051,
> > > > > > disable threshold: 0.85
> > > > > > 2013-07-15 11:15:28,326 DEBUG
> > > > > > [cloud.storage.StorageManagerImpl]
> > > > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since
> > > > > > its
> > > > usage
> > > > > > percentage: 6366.305226067051 has crossed the
> > > > > > pool.storage.capacity.disablethreshold: 0.85
> > > > > > 2013-07-15 11:15:28,326 DEBUG
> > > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > > (Job-Executor-3:job-168)
> > > > > > FirstFitStoragePoolAllocator returning 0 suitable storage
> > > > > > pools
> > > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) No suitable pools found for volume:
> > > > > > Vol[228|vm=225|DATADISK] under cluster: 6
> > > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) No suitable pools found
> > > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) No suitable storagePools found under
> > > > > > this
> > > > Cluster:
> > > > > > 6
> > > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > > (Job-Executor-3:job-168) Could not find suitable Deployment
> > > > Destination for
> > > > > > this VM under any clusters, returning.
> > > > > > 2013-07-15 11:15:28,332 DEBUG [cloud.vm.UserVmManagerImpl]
> > > > > > (Job-Executor-3:job-168) Destroying vm VM[User|Indra-Test-3]
> > > > > > as it
> > > > failed
> > > > > > to create on Host with Id:null
> > > > > > 2013-07-15 11:15:28,498 DEBUG
> > > > > > [cloud.capacity.CapacityManagerImpl]
> > > > > > (Job-Executor-3:job-168) VM state transitted from :Stopped to
> > > > > > Error
> > > > with
> > > > > > event: OperationFailedToErrorvm's original host id: null new
> > > > > > host
> > id:
> > > > null
> > > > > > host id before state transition: null
> > > > > > 2013-07-15 11:15:29,125 INFO  [user.vm.DeployVMCmd]
> > > > > > (Job-Executor-3:job-168)
> > > > > > com.cloud.exception.InsufficientServerCapacityException:
> > > > > > Unable to
> > > > create a
> > > > > > deployment for VM[User|Indra-Test-3]Scope=interface
> > > > > > com.cloud.dc.DataCenter; id=6
> > > > > > ===
> > > > > >
> > > > > > Anyone can advise on how to resolve this problem?
> > > > > >
> > > > > > Looking forward to your reply, thank you.
> > > > > >
> > > > > > Cheers.
> > > > > >
> > > >
> > > > --
> > > > Prasanna.,
> > > >
> > > > ------------------------
> > > > Powered by BigRock.com
> > > >
> > > >
> >

Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Hi Koushik,

Good day to you, and thank you for your e-mail.

May I know which local storage you are referring to?

Looking forward to your reply, thank you.

Cheers.



On Mon, Jul 15, 2013 at 4:28 PM, Koushik Das <ko...@citrix.com> wrote:

> There is an issue with local storage as well. Looks like there was change
> in the storage_pool table where the available_bytes column was changed to
> used_bytes but in the code available bytes was still passed for used bytes
> for local storage.
>
> > -----Original Message-----
> > From: Indra Pramana [mailto:indra@sg.or.id]
> > Sent: Monday, July 15, 2013 10:29 AM
> > To: users@cloudstack.apache.org
> > Cc: guangjian@gmail.com; Wido den Hollander
> > Subject: Re: Wrong storage capacity issue reported
> >
> > Hi Prasanna,
> >
> > Good day to you, and thank you for your e-mail.
> >
> > See my reply inline below.
> >
> >
> > On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam <ts...@apache.org>
> > wrote:
> >
> > > On Mon, Jul 15, 2013 at 11:58:47AM +0800, Indra Pramana wrote:
> > > > Dear all,
> > > >
> > > > In addition to my previous e-mail, I just realised that the wrong
> > > capacity
> > > > usage information is only applicable to the Ceph RBD primary
> > > > storage. I
> > > did
> > > > a check manually on the "storage_pool" table on the "cloud" MySQL
> > > database:
> > > >
> > > >
> > >
> +-----+-----------------+--------------------------------------+-------------------+----
> >
> --+----------------+--------+------------+-------------------+----------------+-----------
> >
> ----------------+------------------------------------------------+------------------------+-
> >
> --------------------+---------------------+-------------+-------------+--------------------
> > -+-------+
> > > > | id  | name            | uuid                                 |
> > > > pool_type         | port | data_center_id | pod_id | cluster_id |
> > > > available_bytes   | capacity_bytes | host_address              |
> > > > user_info                                      | path
>     |
> > > > created             | removed             | update_time | status
>  |
> > > > storage_provider_id | scope |
> > > >
> > >
> +-----+-----------------+--------------------------------------+-------------------+----
> >
> --+----------------+--------+------------+-------------------+----------------+-----------
> >
> ----------------+------------------------------------------------+------------------------+-
> >
> --------------------+---------------------+-------------+-------------+--------------------
> > -+-------+
> > > > | 209 | sc-image        | bab81ce8-d53f-3a7d-b8f6-841702f65c89 |
> > > > RBD               | 6789 |              6 |      6 |          6 |
> > > > 38283921137336466 |  6013522722816 | ceph-mon.xxx.com | admin:xxx |
> > > > sc1                    | 2013-07-13 08:58:27 | NULL                |
> > > > NULL        | Up          |                NULL | NULL  |
> > > >
> > > > The "available_bytes" column is wrong.
> > > >
> > > > My issue is similar to the one reported by Guangjian Liu here, but
> > > > it
> > > seems
> > > > that so far there's no solutions available?
> > > >
> > > >
> > > http://mail-archives.apache.org/mod_mbox/cloudstack-
> > dev/201304.mbox/%3
> > >
> > CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.
> > com%3E
> > >
> > > It might be a bug on ceph but the thread you referenced was resolved.
> > > See here: http://markmail.org/message/mkm2fqyawmwpufsc
> > >
> >
> > Thanks for the email thread. However, the conversation doesn't mention on
> > how was the problem resolved.
> >
> > I tried to execute the same command which have suggested by Wido on the
> > email thread:
> >
> > ===
> > root@hv-kvm-02:~# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
> > 87ba6ca3-1b46-3f36-b138-2fd3bffb3d71 active     no
> > bab81ce8-d53f-3a7d-b8f6-841702f65c89 active     no
> > ff06ae2a-ff27-4ff6-87b3-c2f942cf76d6 active     no
> >
> > root@hv-kvm-02:~# virsh pool-info bab81ce8-d53f-3a7d-b8f6-841702f65c89
> > Name:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> > UUID:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> > State:          running
> > Persistent:     no
> > Autostart:      no
> > Capacity:       5.47 TiB
> > Allocation:     *34819.02 TiB* <-- wrong information
> > Available:      5.47 TiB
> > ===
> >
> > If it still persists, can you please file a bug on JIRA?
> > >
> >
> > Any specific instructions on how to file the bug?
> >
> >
> > >  > Is it safe for me to update the database record manually (using the
> > > UPDATE
> > > > MySQL command) to reflect the actual usage of the disk?
> > >
> > > Even if you do whatever is reporting the stats to CS about Ceph's
> > > storage usage, will overwrite those values.
> > >
> >
> > Noted, thanks. So it's an issue on the Ceph RBD side rather than on
> > CloudStack side? Updating the Cloudstack database record manually will
> not
> > help?
> >
> > In any case, I tried to update the "available_bytes" and "capacity_bytes"
> > record on the "storage_pool" table of "cloud" database manually to
> reflect
> > the actual size of the RBD image, which is 3 TB:
> >
> > ===
> > indra@cs-mgmt-01:~$ rbd --image sc-image -p sc1 info rbd image
> 'sc-image':
> >         size 3072 GB in 786432 objects
> >         order 22 (4096 KB objects)
> >         block_name_prefix: rb.0.1825.238e1f29
> >         format: 1
> >
> > mysql> UPDATE storage_pool SET available_bytes=3072000000,
> > capacity_bytes=3072000000 WHERE id=209;
> > Query OK, 1 row affected (0.03 sec)
> > Rows matched: 1  Changed: 1  Warnings: 0 ===
> >
> > I tried to re-run the creation of the VM instance again, and it's still
> failed even
> > though the error message is a bit different. Not too sure where the
> > "usedBytes" value is coming from?
> >
> > usedBytes: 38283921137336466
> >
> > ===
> > 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> > (Job-Executor-5:job-170) Checking pool 209 for storage, totalSize:
> > 3072000000, usedBytes: 38283921137336466, usedPct: 1.246221391189338E7,
> > disable threshold: 0.85
> > 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> > (Job-Executor-5:job-170) Insufficient space on pool: 209 since its usage
> > percentage: 1.246221391189338E7 has crossed the
> > pool.storage.capacity.disablethreshold: 0.85
> > 2013-07-15 12:10:46,232 DEBUG
> > [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-5:job-170)
> > FirstFitStoragePoolAllocator returning 0 suitable storage pools
> > 2013-07-15 12:10:46,232 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-5:job-170) No suitable pools found for volume:
> > Vol[231|vm=227|DATADISK] under cluster: 6
> > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-5:job-170) No suitable pools found
> > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-5:job-170) No suitable storagePools found under this
> Cluster:
> > 6
> > 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-5:job-170) Could not find suitable Deployment Destination
> for
> > this VM under any clusters, returning.
> > 2013-07-15 12:10:46,235 DEBUG [cloud.vm.UserVmManagerImpl]
> > (Job-Executor-5:job-170) Destroying vm VM[User|Indra-Test-6] as it
> failed to
> > create on Host with Id:null
> > 2013-07-15 12:10:46,358 DEBUG [cloud.capacity.CapacityManagerImpl]
> > (Job-Executor-5:job-170) VM state transitted from :Stopped to Error with
> > event: OperationFailedToErrorvm's original host id: null new host id:
> null host
> > id before state transition: null
> > 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> > (Job-Executor-5:job-170)
> > com.cloud.exception.InsufficientServerCapacityException: Unable to create
> > a deployment for VM[User|Indra-Test-6]Scope=interface
> > com.cloud.dc.DataCenter; id=6
> > 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> > (Job-Executor-5:job-170) Unable to create a deployment for VM[User|Indra-
> > Test-6] ===
> >
> >
> >
> > >
> > > >
> > > > Looking forward to your reply, thank you.
> > > >
> > > > Cheers.
> > > >
> > > >
> > > >
> > > >
> > > > On Mon, Jul 15, 2013 at 11:41 AM, Indra Pramana <in...@sg.or.id>
> wrote:
> > > >
> > > > > Dear all,
> > > > >
> > > > > I am using CloudStack 4.1.0. Just managed to get it setup over the
> > > > > weekend. System VMs have been created and both (SSVM and CPVM)
> > are
> > > running
> > > > > fine. The default CentOS template has also been downloaded and
> > > > > ready
> > > to use.
> > > > >
> > > > > However, I am not able to launch my first VM instance because the
> > > storage
> > > > > capacity checker is reporting wrong usage information. I have a
> > > > > total
> > > of 13
> > > > > TB of primary storage and the capacity checker is reporting my
> > > > > usage is
> > > > > 36,510 TB (274879.22%), which is not supposed to be the case.
> > > > >
> > > > > ===
> > > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > > (CapacityChecker:null) System Alert: Low Available Storage in
> > > > > cluster
> > > > > Cluster-01 pod Pod-01 of availability zone 01
> > > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > > (CapacityChecker:null) Available storage space is low, total:
> > > > > 13282342
> > > MB,
> > > > > used: 36510398411 MB (274879.22%)
> > > > > ===
> > > > >
> > > > > As a result, VM instance creation fails since it's not able to
> > > > > find available storage pool.
> > > > >
> > > > > ===
> > > > > 2013-07-15 11:15:28,313 DEBUG [cloud.storage.StorageManagerImpl]
> > > > > (Job-Executor-3:job-168) Checking pool: 208 for volume allocation
> > > > > [Vol[227|vm=225|ROOT]], maxSize : 15828044742656,
> > totalAllocatedSize :
> > > > > 1769538048, askingSize : 8589934592, allocated disable threshold:
> > > > > 0.85
> > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Checking if storage pool is suitable, name: sc-image ,poolId: 209
> > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Is localStorageAllocationNeeded? false
> > > > > 2013-07-15 11:15:28,313 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Is storage pool shared? true
> > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
> > > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > > 6366.305226067051,
> > > > > disable threshold: 0.85
> > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
> > > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since its
> > > usage
> > > > > percentage: 6366.305226067051 has crossed the
> > > > > pool.storage.capacity.disablethreshold: 0.85
> > > > > 2013-07-15 11:15:28,317 DEBUG
> > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > FirstFitStoragePoolAllocator returning 1 suitable storage pools
> > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) Checking suitable pools for volume (Id,
> Type):
> > > > > (228,DATADISK)
> > > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) We need to allocate new storagepool for
> > > > > this
> > > volume
> > > > > 2013-07-15 11:15:28,319 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) Calling StoragePoolAllocators to find
> > > > > suitable pools
> > > > > 2013-07-15 11:15:28,319 DEBUG
> > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Looking for pools in dc: 6  pod:6  cluster:6 having tags:[rbd]
> > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > FirstFitStoragePoolAllocator has 1 pools to check for allocation
> > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Checking if storage pool is suitable, name: sc-image ,poolId: 209
> > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Is localStorageAllocationNeeded? false
> > > > > 2013-07-15 11:15:28,322 DEBUG
> > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > Is storage pool shared? true
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
> > > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > > 6366.305226067051,
> > > > > disable threshold: 0.85
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
> > > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since its
> > > usage
> > > > > percentage: 6366.305226067051 has crossed the
> > > > > pool.storage.capacity.disablethreshold: 0.85
> > > > > 2013-07-15 11:15:28,326 DEBUG
> > > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > > (Job-Executor-3:job-168)
> > > > > FirstFitStoragePoolAllocator returning 0 suitable storage pools
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) No suitable pools found for volume:
> > > > > Vol[228|vm=225|DATADISK] under cluster: 6
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) No suitable pools found
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) No suitable storagePools found under this
> > > Cluster:
> > > > > 6
> > > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > > (Job-Executor-3:job-168) Could not find suitable Deployment
> > > Destination for
> > > > > this VM under any clusters, returning.
> > > > > 2013-07-15 11:15:28,332 DEBUG [cloud.vm.UserVmManagerImpl]
> > > > > (Job-Executor-3:job-168) Destroying vm VM[User|Indra-Test-3] as it
> > > failed
> > > > > to create on Host with Id:null
> > > > > 2013-07-15 11:15:28,498 DEBUG [cloud.capacity.CapacityManagerImpl]
> > > > > (Job-Executor-3:job-168) VM state transitted from :Stopped to
> > > > > Error
> > > with
> > > > > event: OperationFailedToErrorvm's original host id: null new host
> id:
> > > null
> > > > > host id before state transition: null
> > > > > 2013-07-15 11:15:29,125 INFO  [user.vm.DeployVMCmd]
> > > > > (Job-Executor-3:job-168)
> > > > > com.cloud.exception.InsufficientServerCapacityException: Unable to
> > > create a
> > > > > deployment for VM[User|Indra-Test-3]Scope=interface
> > > > > com.cloud.dc.DataCenter; id=6
> > > > > ===
> > > > >
> > > > > Anyone can advise on how to resolve this problem?
> > > > >
> > > > > Looking forward to your reply, thank you.
> > > > >
> > > > > Cheers.
> > > > >
> > >
> > > --
> > > Prasanna.,
> > >
> > > ------------------------
> > > Powered by BigRock.com
> > >
> > >
>

RE: Wrong storage capacity issue reported

Posted by Koushik Das <ko...@citrix.com>.
There is an issue with local storage as well. It looks like there was a change in the storage_pool table where the available_bytes column was changed to used_bytes, but in the code the available bytes value was still being passed as used bytes for local storage.
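
As a rough illustration of that effect (a sketch only, with hypothetical pool
sizes, not the actual CloudStack code): for a mostly empty local pool, the
available bytes are close to the capacity, so passing that value where used
bytes is expected makes the pool look nearly full and trips the 0.85 disable
threshold.

===
# Sketch only: effect of passing available bytes where used bytes is expected.
capacity_bytes = 500_000_000_000     # hypothetical local pool, ~500 GB
true_used_bytes = 10_000_000_000     # ~10 GB actually in use
available_bytes = capacity_bytes - true_used_bytes

# Correct check: ~2% used, pool is suitable.
print(true_used_bytes / capacity_bytes)    # -> 0.02

# Buggy check: available bytes treated as used bytes, ~98% "used",
# which crosses pool.storage.capacity.disablethreshold (0.85).
print(available_bytes / capacity_bytes)    # -> 0.98
===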

> -----Original Message-----
> From: Indra Pramana [mailto:indra@sg.or.id]
> Sent: Monday, July 15, 2013 10:29 AM
> To: users@cloudstack.apache.org
> Cc: guangjian@gmail.com; Wido den Hollander
> Subject: Re: Wrong storage capacity issue reported
> 
> Hi Prasanna,
> 
> Good day to you, and thank you for your e-mail.
> 
> See my reply inline below.
> 
> 
> On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam <ts...@apache.org>
> wrote:
> 
> > On Mon, Jul 15, 2013 at 11:58:47AM +0800, Indra Pramana wrote:
> > > Dear all,
> > >
> > > In addition to my previous e-mail, I just realised that the wrong
> > capacity
> > > usage information is only applicable to the Ceph RBD primary
> > > storage. I
> > did
> > > a check manually on the "storage_pool" table on the "cloud" MySQL
> > database:
> > >
> > >
> > +-----+-----------------+--------------------------------------+-------------------+----
> --+----------------+--------+------------+-------------------+----------------+-----------
> ----------------+------------------------------------------------+------------------------+-
> --------------------+---------------------+-------------+-------------+--------------------
> -+-------+
> > > | id  | name            | uuid                                 |
> > > pool_type         | port | data_center_id | pod_id | cluster_id |
> > > available_bytes   | capacity_bytes | host_address              |
> > > user_info                                      | path                   |
> > > created             | removed             | update_time | status      |
> > > storage_provider_id | scope |
> > >
> > +-----+-----------------+--------------------------------------+-------------------+----
> --+----------------+--------+------------+-------------------+----------------+-----------
> ----------------+------------------------------------------------+------------------------+-
> --------------------+---------------------+-------------+-------------+--------------------
> -+-------+
> > > | 209 | sc-image        | bab81ce8-d53f-3a7d-b8f6-841702f65c89 |
> > > RBD               | 6789 |              6 |      6 |          6 |
> > > 38283921137336466 |  6013522722816 | ceph-mon.xxx.com | admin:xxx |
> > > sc1                    | 2013-07-13 08:58:27 | NULL                |
> > > NULL        | Up          |                NULL | NULL  |
> > >
> > > The "available_bytes" column is wrong.
> > >
> > > My issue is similar to the one reported by Guangjian Liu here, but
> > > it
> > seems
> > > that so far there's no solutions available?
> > >
> > >
> > http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201304.mbox/%3
> >
> CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.
> com%3E
> >
> > It might be a bug on ceph but the thread you referenced was resolved.
> > See here: http://markmail.org/message/mkm2fqyawmwpufsc
> >
> 
> Thanks for the email thread. However, the conversation doesn't mention on
> how was the problem resolved.
> 
> I tried to execute the same command which have suggested by Wido on the
> email thread:
> 
> ===
> root@hv-kvm-02:~# virsh pool-list
> Name                 State      Autostart
> -----------------------------------------
> 87ba6ca3-1b46-3f36-b138-2fd3bffb3d71 active     no
> bab81ce8-d53f-3a7d-b8f6-841702f65c89 active     no
> ff06ae2a-ff27-4ff6-87b3-c2f942cf76d6 active     no
> 
> root@hv-kvm-02:~# virsh pool-info bab81ce8-d53f-3a7d-b8f6-841702f65c89
> Name:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> UUID:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
> State:          running
> Persistent:     no
> Autostart:      no
> Capacity:       5.47 TiB
> Allocation:     *34819.02 TiB* <-- wrong information
> Available:      5.47 TiB
> ===
> 
> If it still persists, can you please file a bug on JIRA?
> >
> 
> Any specific instructions on how to file the bug?
> 
> 
> >  > Is it safe for me to update the database record manually (using the
> > UPDATE
> > > MySQL command) to reflect the actual usage of the disk?
> >
> > Even if you do whatever is reporting the stats to CS about Ceph's
> > storage usage, will overwrite those values.
> >
> 
> Noted, thanks. So it's an issue on the Ceph RBD side rather than on
> CloudStack side? Updating the Cloudstack database record manually will not
> help?
> 
> In any case, I tried to update the "available_bytes" and "capacity_bytes"
> record on the "storage_pool" table of "cloud" database manually to reflect
> the actual size of the RBD image, which is 3 TB:
> 
> ===
> indra@cs-mgmt-01:~$ rbd --image sc-image -p sc1 info rbd image 'sc-image':
>         size 3072 GB in 786432 objects
>         order 22 (4096 KB objects)
>         block_name_prefix: rb.0.1825.238e1f29
>         format: 1
> 
> mysql> UPDATE storage_pool SET available_bytes=3072000000,
> capacity_bytes=3072000000 WHERE id=209;
> Query OK, 1 row affected (0.03 sec)
> Rows matched: 1  Changed: 1  Warnings: 0 ===
> 
> I tried to re-run the creation of the VM instance again, and it's still failed even
> though the error message is a bit different. Not too sure where the
> "usedBytes" value is coming from?
> 
> usedBytes: 38283921137336466
> 
> ===
> 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> (Job-Executor-5:job-170) Checking pool 209 for storage, totalSize:
> 3072000000, usedBytes: 38283921137336466, usedPct: 1.246221391189338E7,
> disable threshold: 0.85
> 2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
> (Job-Executor-5:job-170) Insufficient space on pool: 209 since its usage
> percentage: 1.246221391189338E7 has crossed the
> pool.storage.capacity.disablethreshold: 0.85
> 2013-07-15 12:10:46,232 DEBUG
> [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-5:job-170)
> FirstFitStoragePoolAllocator returning 0 suitable storage pools
> 2013-07-15 12:10:46,232 DEBUG [cloud.deploy.FirstFitPlanner]
> (Job-Executor-5:job-170) No suitable pools found for volume:
> Vol[231|vm=227|DATADISK] under cluster: 6
> 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> (Job-Executor-5:job-170) No suitable pools found
> 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> (Job-Executor-5:job-170) No suitable storagePools found under this Cluster:
> 6
> 2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
> (Job-Executor-5:job-170) Could not find suitable Deployment Destination for
> this VM under any clusters, returning.
> 2013-07-15 12:10:46,235 DEBUG [cloud.vm.UserVmManagerImpl]
> (Job-Executor-5:job-170) Destroying vm VM[User|Indra-Test-6] as it failed to
> create on Host with Id:null
> 2013-07-15 12:10:46,358 DEBUG [cloud.capacity.CapacityManagerImpl]
> (Job-Executor-5:job-170) VM state transitted from :Stopped to Error with
> event: OperationFailedToErrorvm's original host id: null new host id: null host
> id before state transition: null
> 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> (Job-Executor-5:job-170)
> com.cloud.exception.InsufficientServerCapacityException: Unable to create
> a deployment for VM[User|Indra-Test-6]Scope=interface
> com.cloud.dc.DataCenter; id=6
> 2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
> (Job-Executor-5:job-170) Unable to create a deployment for VM[User|Indra-
> Test-6] ===
> 
> 
> 
> >
> > >
> > > Looking forward to your reply, thank you.
> > >
> > > Cheers.
> > >
> > >
> > >
> > >
> > > On Mon, Jul 15, 2013 at 11:41 AM, Indra Pramana <in...@sg.or.id> wrote:
> > >
> > > > Dear all,
> > > >
> > > > I am using CloudStack 4.1.0. Just managed to get it setup over the
> > > > weekend. System VMs have been created and both (SSVM and CPVM)
> are
> > running
> > > > fine. The default CentOS template has also been downloaded and
> > > > ready
> > to use.
> > > >
> > > > However, I am not able to launch my first VM instance because the
> > storage
> > > > capacity checker is reporting wrong usage information. I have a
> > > > total
> > of 13
> > > > TB of primary storage and the capacity checker is reporting my
> > > > usage is
> > > > 36,510 TB (274879.22%), which is not supposed to be the case.
> > > >
> > > > ===
> > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > (CapacityChecker:null) System Alert: Low Available Storage in
> > > > cluster
> > > > Cluster-01 pod Pod-01 of availability zone 01
> > > > 2013-07-15 11:27:31,632 DEBUG [cloud.alert.AlertManagerImpl]
> > > > (CapacityChecker:null) Available storage space is low, total:
> > > > 13282342
> > MB,
> > > > used: 36510398411 MB (274879.22%)
> > > > ===
> > > >
> > > > As a result, VM instance creation fails since it's not able to
> > > > find available storage pool.
> > > >
> > > > ===
> > > > 2013-07-15 11:15:28,313 DEBUG [cloud.storage.StorageManagerImpl]
> > > > (Job-Executor-3:job-168) Checking pool: 208 for volume allocation
> > > > [Vol[227|vm=225|ROOT]], maxSize : 15828044742656,
> totalAllocatedSize :
> > > > 1769538048, askingSize : 8589934592, allocated disable threshold:
> > > > 0.85
> > > > 2013-07-15 11:15:28,313 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Checking if storage pool is suitable, name: sc-image ,poolId: 209
> > > > 2013-07-15 11:15:28,313 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Is localStorageAllocationNeeded? false
> > > > 2013-07-15 11:15:28,313 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Is storage pool shared? true
> > > > 2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
> > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > 6366.305226067051,
> > > > disable threshold: 0.85
> > > > 2013-07-15 11:15:28,317 DEBUG [cloud.storage.StorageManagerImpl]
> > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since its
> > usage
> > > > percentage: 6366.305226067051 has crossed the
> > > > pool.storage.capacity.disablethreshold: 0.85
> > > > 2013-07-15 11:15:28,317 DEBUG
> > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > FirstFitStoragePoolAllocator returning 1 suitable storage pools
> > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) Checking suitable pools for volume (Id, Type):
> > > > (228,DATADISK)
> > > > 2013-07-15 11:15:28,317 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) We need to allocate new storagepool for
> > > > this
> > volume
> > > > 2013-07-15 11:15:28,319 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) Calling StoragePoolAllocators to find
> > > > suitable pools
> > > > 2013-07-15 11:15:28,319 DEBUG
> > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Looking for pools in dc: 6  pod:6  cluster:6 having tags:[rbd]
> > > > 2013-07-15 11:15:28,322 DEBUG
> > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > FirstFitStoragePoolAllocator has 1 pools to check for allocation
> > > > 2013-07-15 11:15:28,322 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Checking if storage pool is suitable, name: sc-image ,poolId: 209
> > > > 2013-07-15 11:15:28,322 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Is localStorageAllocationNeeded? false
> > > > 2013-07-15 11:15:28,322 DEBUG
> > > > [storage.allocator.AbstractStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > Is storage pool shared? true
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
> > > > (Job-Executor-3:job-168) Checking pool 209 for storage, totalSize:
> > > > 6013522722816, usedBytes: 38283921137336466, usedPct:
> > 6366.305226067051,
> > > > disable threshold: 0.85
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.storage.StorageManagerImpl]
> > > > (Job-Executor-3:job-168) Insufficient space on pool: 209 since its
> > usage
> > > > percentage: 6366.305226067051 has crossed the
> > > > pool.storage.capacity.disablethreshold: 0.85
> > > > 2013-07-15 11:15:28,326 DEBUG
> > > > [storage.allocator.FirstFitStoragePoolAllocator]
> > (Job-Executor-3:job-168)
> > > > FirstFitStoragePoolAllocator returning 0 suitable storage pools
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) No suitable pools found for volume:
> > > > Vol[228|vm=225|DATADISK] under cluster: 6
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) No suitable pools found
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) No suitable storagePools found under this
> > Cluster:
> > > > 6
> > > > 2013-07-15 11:15:28,326 DEBUG [cloud.deploy.FirstFitPlanner]
> > > > (Job-Executor-3:job-168) Could not find suitable Deployment
> > Destination for
> > > > this VM under any clusters, returning.
> > > > 2013-07-15 11:15:28,332 DEBUG [cloud.vm.UserVmManagerImpl]
> > > > (Job-Executor-3:job-168) Destroying vm VM[User|Indra-Test-3] as it
> > failed
> > > > to create on Host with Id:null
> > > > 2013-07-15 11:15:28,498 DEBUG [cloud.capacity.CapacityManagerImpl]
> > > > (Job-Executor-3:job-168) VM state transitted from :Stopped to
> > > > Error
> > with
> > > > event: OperationFailedToErrorvm's original host id: null new host id:
> > null
> > > > host id before state transition: null
> > > > 2013-07-15 11:15:29,125 INFO  [user.vm.DeployVMCmd]
> > > > (Job-Executor-3:job-168)
> > > > com.cloud.exception.InsufficientServerCapacityException: Unable to
> > create a
> > > > deployment for VM[User|Indra-Test-3]Scope=interface
> > > > com.cloud.dc.DataCenter; id=6
> > > > ===
> > > >
> > > > Anyone can advise on how to resolve this problem?
> > > >
> > > > Looking forward to your reply, thank you.
> > > >
> > > > Cheers.
> > > >
> >
> > --
> > Prasanna.,
> >
> > ------------------------
> > Powered by BigRock.com
> >
> >

Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Hi Prasanna,

Good day to you, and thank you for your e-mail.

See my reply inline below.


On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam <ts...@apache.org> wrote:

> On Mon, Jul 15, 2013 at 11:58:47AM +0800, Indra Pramana wrote:
> > Dear all,
> >
> > In addition to my previous e-mail, I just realised that the wrong
> capacity
> > usage information is only applicable to the Ceph RBD primary storage. I
> did
> > a check manually on the "storage_pool" table on the "cloud" MySQL
> database:
> >
> >
> +-----+-----------------+--------------------------------------+-------------------+------+----------------+--------+------------+-------------------+----------------+---------------------------+------------------------------------------------+------------------------+---------------------+---------------------+-------------+-------------+---------------------+-------+
> > | id  | name            | uuid                                 |
> > pool_type         | port | data_center_id | pod_id | cluster_id |
> > available_bytes   | capacity_bytes | host_address              |
> > user_info                                      | path                   |
> > created             | removed             | update_time | status      |
> > storage_provider_id | scope |
> >
> +-----+-----------------+--------------------------------------+-------------------+------+----------------+--------+------------+-------------------+----------------+---------------------------+------------------------------------------------+------------------------+---------------------+---------------------+-------------+-------------+---------------------+-------+
> > | 209 | sc-image        | bab81ce8-d53f-3a7d-b8f6-841702f65c89 |
> > RBD               | 6789 |              6 |      6 |          6 |
> > 38283921137336466 |  6013522722816 | ceph-mon.xxx.com | admin:xxx |
> > sc1                    | 2013-07-13 08:58:27 | NULL                |
> > NULL        | Up          |                NULL | NULL  |
> >
> > The "available_bytes" column is wrong.
> >
> > My issue is similar to the one reported by Guangjian Liu here, but it
> seems
> > that so far there's no solutions available?
> >
> >
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201304.mbox/%3CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.com%3E
>
> It might be a bug on ceph but the thread you referenced was resolved.
> See here: http://markmail.org/message/mkm2fqyawmwpufsc
>

Thanks for the email thread. However, the conversation doesn't mention how
the problem was resolved.

I tried to execute the same commands that Wido suggested on that email
thread:

===
root@hv-kvm-02:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
87ba6ca3-1b46-3f36-b138-2fd3bffb3d71 active     no
bab81ce8-d53f-3a7d-b8f6-841702f65c89 active     no
ff06ae2a-ff27-4ff6-87b3-c2f942cf76d6 active     no

root@hv-kvm-02:~# virsh pool-info bab81ce8-d53f-3a7d-b8f6-841702f65c89
Name:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
UUID:           bab81ce8-d53f-3a7d-b8f6-841702f65c89
State:          running
Persistent:     no
Autostart:      no
Capacity:       5.47 TiB
Allocation:     *34819.02 TiB* <-- wrong information
Available:      5.47 TiB
===
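
Converting the numbers from the storage_pool row above to TiB (a quick Python
sketch, assuming binary units) shows they match this virsh output exactly, so
CloudStack appears to be recording whatever libvirt reports for the pool:

===
# Sketch only: compare the storage_pool row with the virsh pool-info output.
TiB = 2 ** 40

available_bytes = 38_283_921_137_336_466   # storage_pool.available_bytes
capacity_bytes = 6_013_522_722_816         # storage_pool.capacity_bytes

print(round(available_bytes / TiB, 2))     # -> 34819.02, the bogus "Allocation"
print(round(capacity_bytes / TiB, 2))      # -> 5.47, the reported "Capacity"
===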

> If it still persists, can you please file a bug on JIRA?
>

Any specific instructions on how to file the bug?


>  > Is it safe for me to update the database record manually (using the
> UPDATE
> > MySQL command) to reflect the actual usage of the disk?
>
> Even if you do, whatever is reporting the stats to CS about Ceph's
> storage usage will overwrite those values.
>

Noted, thanks. So it's an issue on the Ceph RBD side rather than on the
CloudStack side? Updating the CloudStack database record manually will not
help?

In any case, I tried to update the "available_bytes" and "capacity_bytes"
values in the "storage_pool" table of the "cloud" database manually to reflect
the actual size of the RBD image, which is 3 TB:

===
indra@cs-mgmt-01:~$ rbd --image sc-image -p sc1 info
rbd image 'sc-image':
        size 3072 GB in 786432 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.1825.238e1f29
        format: 1

mysql> UPDATE storage_pool SET available_bytes=3072000000,
capacity_bytes=3072000000 WHERE id=209;
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0
===
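
To double-check what actually ended up in the row, a quick read-only query
like this can be run against the same database (credentials as per your
setup):

===
mysql -u cloud -p cloud -e \
  "SELECT id, name, pool_type, capacity_bytes, available_bytes
   FROM storage_pool WHERE id = 209\G"
===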

I tried to re-run the creation of the VM instance, and it still failed,
although the error message is a bit different. I'm not sure where the
"usedBytes" value is coming from:

usedBytes: 38283921137336466

===
2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-5:job-170) Checking pool 209 for storage, totalSize:
3072000000, usedBytes: 38283921137336466, usedPct: 1.246221391189338E7,
disable threshold: 0.85
2013-07-15 12:10:46,232 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-5:job-170) Insufficient space on pool: 209 since its usage
percentage: 1.246221391189338E7 has crossed the
pool.storage.capacity.disablethreshold: 0.85
2013-07-15 12:10:46,232 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-5:job-170)
FirstFitStoragePoolAllocator returning 0 suitable storage pools
2013-07-15 12:10:46,232 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-5:job-170) No suitable pools found for volume:
Vol[231|vm=227|DATADISK] under cluster: 6
2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-5:job-170) No suitable pools found
2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-5:job-170) No suitable storagePools found under this Cluster:
6
2013-07-15 12:10:46,233 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-5:job-170) Could not find suitable Deployment Destination for
this VM under any clusters, returning.
2013-07-15 12:10:46,235 DEBUG [cloud.vm.UserVmManagerImpl]
(Job-Executor-5:job-170) Destroying vm VM[User|Indra-Test-6] as it failed
to create on Host with Id:null
2013-07-15 12:10:46,358 DEBUG [cloud.capacity.CapacityManagerImpl]
(Job-Executor-5:job-170) VM state transitted from :Stopped to Error with
event: OperationFailedToErrorvm's original host id: null new host id: null
host id before state transition: null
2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
(Job-Executor-5:job-170)
com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[User|Indra-Test-6]Scope=interface
com.cloud.dc.DataCenter; id=6
2013-07-15 12:10:47,052 INFO  [user.vm.DeployVMCmd]
(Job-Executor-5:job-170) Unable to create a deployment for
VM[User|Indra-Test-6]
===
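
Doing the arithmetic on the figures in the log, the usedPct is simply
usedBytes divided by the new totalSize, which suggests usedBytes is still
being refreshed from the hypervisor rather than read from the row I edited:

===
awk 'BEGIN { printf "%.6e\n", 38283921137336466 / 3072000000 }'
# prints 1.246221e+07, i.e. the usedPct of 1.246221391189338E7 in the log above
===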




Re: Wrong storage capacity issue reported

Posted by Dean Kamali <de...@gmail.com>.
I think we should file another bug for NFS. I'm having the same issue here:
it's showing double the amount of storage. I have a 2 TB LUN for primary
storage, but the CS management console shows 4 TB.

CS 4.1
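
To narrow down whether the pool row itself holds a doubled figure or the
hosts are reporting it that way, comparing the database entry against the
export itself should help (adjust credentials and the mount point for your
setup):

===
mysql -u cloud -p cloud -e \
  "SELECT id, name, pool_type, host_address, path, capacity_bytes
   FROM storage_pool WHERE removed IS NULL"
df -h /path/to/the/nfs/primary/mount   # on a host that mounts the primary storage
===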



Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Hi Wido,

Good day to you, and glad to hear from you again. Thank you for your
e-mail. :)

On Thu, Jul 18, 2013 at 12:28 AM, Wido den Hollander <wi...@widodh.nl> wrote:

>
>> The only issue is that, I noted that only the DATADISK volume is created
>> on the RBD primary storage pool. The ROOT volume of that VM is still
>> created on the NFS primary storage pool.
>>
>> Is there a way to ensure that the ROOT volume will also be created on
>> the RBD primary storage pool instead of the NFS pool?
>>
>>
> Yes, make sure you use the right storage tags to force the ROOT disk on
> the RBD storage pool as well.
>

Can you share how to force the ROOT disk to use the RBD storage pool as
well? I used a disk offering with the "rbd" tag to force the DATADISK onto
the RBD storage pool, but what about the ROOT disk? Where can I set the
"rbd" tag for the ROOT disk?

Looking forward to your reply, thank you.

Cheers.

Re: Wrong storage capacity issue reported

Posted by Wido den Hollander <wi...@widodh.nl>.
Hi,

On 07/17/2013 11:32 AM, Indra Pramana wrote:
> Dear Wido and all,
>
> FYI, I managed to resolve the problem by installing the latest version of
> libvirt. Apparently, the libvirt version that I was using (1.0.2), which
> I got from here:
>
> http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
>

Odd, I haven't noticed that.

> was the culprit. I downloaded the latest libvirt version (version 1.1.0)
> from their FTP site:
>
> ftp://libvirt.org/libvirt/libvirt-1.1.0.tar.gz
>
> Install the latest librbd-dev package:
>
> apt-get install librbd-dev
>
> Then compile libvirt with storage RBD enabled:
>
> ./autogen.sh --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
> --with-storage-rbd
>
> Restart the KVM host after it's done.
>
> virsh pool-info is now showing the correct "Allocation" amount:
>
> root@hv-kvm-02:~# virsh pool-info d433809b-01ea-3947-ba0f-48077244e4d6
> Name:           d433809b-01ea-3947-ba0f-48077244e4d6
> UUID:           d433809b-01ea-3947-ba0f-48077244e4d6
> State:          running
> Persistent:     no
> Autostart:      no
> Capacity:       5.47 TiB
> Allocation:     328.00 B
> Available:      5.47 TiB
>
> Tried to create a VM instance, and now the VM can utilise the RBD
> storage pool. No more "insufficient space on pool" error message on the log.
>
> The only issue is that, I noted that only the DATADISK volume is created
> on the RBD primary storage pool. The ROOT volume of that VM is still
> created on the NFS primary storage pool.
>
> Is there a way to ensure that the ROOT volume will also be created on
> the RBD primary storage pool instead of the NFS pool?
>

Yes, make sure you use the right storage tags to force the ROOT disk on 
the RBD storage pool as well.
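
Concretely, the ROOT volume follows the storage tags on the compute/service
offering, so create an offering tagged "rbd", either in the UI (Add compute
offering, Storage Tags field) or via the API, roughly like this with
cloudmonkey if you use it (double-check the parameters against the
createServiceOffering docs for your version):

===
cloudmonkey create serviceoffering name=rbd-1vcpu-1gb \
  displaytext="1 vCPU, 1 GB RAM, ROOT on RBD" \
  cpunumber=1 cpuspeed=1000 memory=1024 tags=rbd
===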

Wido


Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Dear Wido and all,

FYI, I managed to resolve the problem by installing the latest version of
libvirt. Apparently, the libvirt version that I was using (1.0.2), which I
got from here:

http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/

was the culprit. I downloaded the latest libvirt version (version 1.1.0)
from their FTP site:

ftp://libvirt.org/libvirt/libvirt-1.1.0.tar.gz

Install the latest librbd-dev package:

apt-get install librbd-dev

Then compile libvirt with storage RBD enabled:

./autogen.sh --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
--with-storage-rbd

Restart the KVM host after it's done.
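
For anyone following along, the rough end-to-end sequence is something like
the following (Ubuntu 12.04 paths and the libvirt-bin service name assumed,
on top of the usual build dependencies; adjust for your distro):

===
apt-get install librbd-dev
wget ftp://libvirt.org/libvirt/libvirt-1.1.0.tar.gz
tar xzf libvirt-1.1.0.tar.gz && cd libvirt-1.1.0
./autogen.sh --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --with-storage-rbd
make && make install
service libvirt-bin restart        # or reboot the host, as I did
libvirtd --version                 # confirm the host is now running 1.1.0
===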

virsh pool-info is now showing the correct "Allocation" amount:

root@hv-kvm-02:~# virsh pool-info d433809b-01ea-3947-ba0f-48077244e4d6
Name:           d433809b-01ea-3947-ba0f-48077244e4d6
UUID:           d433809b-01ea-3947-ba0f-48077244e4d6
State:          running
Persistent:     no
Autostart:      no
Capacity:       5.47 TiB
Allocation:     328.00 B
Available:      5.47 TiB

Tried to create a VM instance, and now the VM can utilise the RBD storage
pool. No more "insufficient space on pool" error message on the log.

The only issue is that, I noted that only the DATADISK volume is created on
the RBD primary storage pool. The ROOT volume of that VM is still created
on the NFS primary storage pool.

Is there a way to ensure that the ROOT volume will also be created on the
RBD primary storage pool instead of the NFS pool?

Looking forward to your reply, thank you.

Cheers.




Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Dear Wido and all,

On Tue, Jul 16, 2013 at 10:49 AM, Indra Pramana <in...@sg.or.id> wrote:

> Hi Prasanna,
>
> On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam <ts...@apache.org>wrote:
>
>> It might be a bug on ceph but the thread you referenced was resolved.
>> See here: http://markmail.org/message/mkm2fqyawmwpufsc
>>
>> If it still persists, can you please file a bug on JIRA?
>>
>
> As suggested, I have filed a bug on JIRA.
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-3542
>
> Appreciate any advice on how to resolve the problem. Wido, any comments
> from your end? :)
>

Does anyone have any clues on how to resolve this problem? It is a
show-stopper for me; I can't proceed further without being able to get my
RBD primary storage to work.

Any advice is greatly appreciated.

Looking forward to your reply, thank you.

Cheers.

Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Hi Prasanna,

On Mon, Jul 15, 2013 at 12:38 PM, Prasanna Santhanam <ts...@apache.org> wrote:

> It might be a bug on ceph but the thread you referenced was resolved.
> See here: http://markmail.org/message/mkm2fqyawmwpufsc
>
> If it still persists, can you please file a bug on JIRA?
>

As suggested, I have filed a bug on JIRA.

https://issues.apache.org/jira/browse/CLOUDSTACK-3542

Appreciate any advice on how to resolve the problem. Wido, any comments
from your end? :)

Looking forward to your reply, thank you.

Cheers.

Re: Wrong storage capacity issue reported

Posted by Prasanna Santhanam <ts...@apache.org>.
On Mon, Jul 15, 2013 at 11:58:47AM +0800, Indra Pramana wrote:
> Dear all,
> 
> In addition to my previous e-mail, I just realised that the wrong capacity
> usage information only affects the Ceph RBD primary storage. I did a manual
> check on the "storage_pool" table in the "cloud" MySQL database:
> 
> id:                  209
> name:                sc-image
> uuid:                bab81ce8-d53f-3a7d-b8f6-841702f65c89
> pool_type:           RBD
> port:                6789
> data_center_id:      6
> pod_id:              6
> cluster_id:          6
> available_bytes:     38283921137336466
> capacity_bytes:      6013522722816
> host_address:        ceph-mon.xxx.com
> user_info:           admin:xxx
> path:                sc1
> created:             2013-07-13 08:58:27
> removed:             NULL
> update_time:         NULL
> status:              Up
> storage_provider_id: NULL
> scope:               NULL
> 
> The "available_bytes" column is wrong.
> 
> My issue is similar to the one reported by Guangjian Liu here, but it seems
> that so far there's no solution available?
> 
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201304.mbox/%3CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.com%3E

It might be a bug on ceph but the thread you referenced was resolved.
See here: http://markmail.org/message/mkm2fqyawmwpufsc

If it still persists, can you please file a bug on JIRA?


> 
> Is it safe for me to update the database record manually (using the UPDATE
> MySQL command) to reflect the actual usage of the disk?

Even if you do, whatever is reporting the stats to CS about Ceph's storage
usage will overwrite those values.


-- 
Prasanna.,

------------------------
Powered by BigRock.com


Re: Wrong storage capacity issue reported

Posted by Indra Pramana <in...@sg.or.id>.
Dear all,

In addition to my previous e-mail, I just realised that the wrong capacity
usage information only affects the Ceph RBD primary storage. I did a manual
check on the "storage_pool" table in the "cloud" MySQL database:

id:                  209
name:                sc-image
uuid:                bab81ce8-d53f-3a7d-b8f6-841702f65c89
pool_type:           RBD
port:                6789
data_center_id:      6
pod_id:              6
cluster_id:          6
available_bytes:     38283921137336466
capacity_bytes:      6013522722816
host_address:        ceph-mon.xxx.com
user_info:           admin:xxx
path:                sc1
created:             2013-07-13 08:58:27
removed:             NULL
update_time:         NULL
status:              Up
storage_provider_id: NULL
scope:               NULL

The "available_bytes" column is wrong.

My issue is similar to the one reported by Guangjian Liu here, but it seems
that so far there's no solution available?

http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201304.mbox/%3CCAKryD0b_QtjhtsFjk8McEQzDH6frOxe6EkJmFrcMDN54e5ZH9A@mail.gmail.com%3E

Is it safe for me to update the database record manually (using the UPDATE
MySQL command) to reflect the actual usage of the disk?

Looking forward to your reply, thank you.

Cheers.



