Posted to issues@cloudstack.apache.org by "Yuri Kogun (JIRA)" <ji...@apache.org> on 2015/01/26 16:47:34 UTC

[jira] [Commented] (CLOUDSTACK-2344) not enough free memory on one host of the cluster

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291972#comment-14291972 ] 

Yuri Kogun commented on CLOUDSTACK-2344:
----------------------------------------

We have the same issue in our installation on the Dev cluster: CloudStack 4.3.0.1 and XenServer 6.2, with 300 VMs in total across 12 hosts. Below is an extract from the management server log:
{code}
2015-01-26 01:45:20,005 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Deployment found  - P0=VM[User|i-3-160816-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(5)-Host(35)-Storage(Volume(158708|ROOT-->Pool(30))]
2015-01-26 01:45:20,124 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 35 host id before state transition: null
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Hosts's actual total CPU: 44688 and CPU after applying overprovisioning: 58094
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) We are allocating VM, increasing the used capacity of this host:35
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Current Used CPU: 52500 , Free CPU:5594 ,Requested CPU: 1500
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) Current Used RAM: 61220061184 , Free RAM:2060627968 ,Requested RAM: 1572864000
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) CPU STATS after allocation: for host: 35, old used: 52500, old reserved: 0, actual total: 44688, total with overprovisioning: 58094; new used:54000, reserved:0; requested cpu:1500,alloc_from_last:false
2015-01-26 01:45:20,138 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-8:ctx-c47b7a2f job-205846/job-205847 ctx-ea6dc373) RAM STATS after allocation: for host: 35, old used: 61220061184, old reserved: 0, total: 63280689152; new used: 62792925184, reserved: 0; requested mem: 1572864000,alloc_from_last:false
.....

2015-01-26 01:45:25,547 WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-364:ctx-578a4e5d) Task failed! Task record:                 uuid: ff81a41e-3340-e7d5-6f8c-c99d4a910bb0
           nameLabel: Async.VM.start_on
     nameDescription:
   allowedOperations: []
   currentOperations: {}
             created: Mon Jan 26 01:45:24 GMT 2015
            finished: Mon Jan 26 01:45:24 GMT 2015
              status: failure
          residentOn: com.xensource.xenapi.Host@fcaebca8
            progress: 1.0
                type: <none/>
              result:
           errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 1587544064, 1446559744]
         otherConfig: {}
           subtaskOf: com.xensource.xenapi.Task@aaf13f6f
            subtasks: []
{code}
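
Reading the two log extracts together, the numbers explain the failure: CapacityManagerImpl's capacity table still shows about 2.06 GB of free RAM on host 35, so the allocator admits the 1.5 GB request, but XenServer's HOST_NOT_ENOUGH_FREE_MEMORY error (its two values are, as far as I understand the XenAPI, the bytes needed and the bytes actually available) says the host really only has about 1.45 GB free. A minimal sketch of that arithmetic, with made-up class and variable names purely for illustration:

{code}
// Illustration only -- names are invented; this is not CloudStack or XenServer code.
public class CapacityMismatch {
    public static void main(String[] args) {
        // From the CapacityManagerImpl lines above (management server's capacity table)
        long freeRamPerCapacityTable = 2_060_627_968L; // "Free RAM"
        long requestedRam            = 1_572_864_000L; // "Requested RAM" (exactly 1500 MiB)

        // From the XenServer errorInfo: HOST_NOT_ENOUGH_FREE_MEMORY, needed, available
        long neededOnHost    = 1_587_544_064L; // requested RAM plus ~14 MiB, presumably VM overhead
        long availableOnHost = 1_446_559_744L;

        // The allocator's check passes ...
        System.out.println("Allocator sees enough RAM:  " + (freeRamPerCapacityTable >= requestedRam)); // true
        // ... but the hypervisor's check fails.
        System.out.println("Hypervisor sees enough RAM: " + (availableOnHost >= neededOnHost));          // false

        // How far apart the two views of "free memory" are
        System.out.println("Gap: " + (freeRamPerCapacityTable - availableOnHost) + " bytes"); // ~614 MB
    }
}
{code}

So the management server's view of free memory on this host is roughly 614 MB more optimistic than the hypervisor's own, and the start fails on the host CloudStack picked even though CloudStack's capacity check passed.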



> not enough free memory on one host of the cluster
> -------------------------------------------------
>
>                 Key: CLOUDSTACK-2344
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2344
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>          Components: XenServer
>    Affects Versions: 4.0.1, 4.1.0, 4.2.0
>            Reporter: Nikita Gubenko
>            Priority: Minor
>
> Hi
> I'm testing some overprovisioning cases. I have two XenServer hosts with lots of VMs on each.
> When I upgrade resources on a VM on a host that doesn't have enough free memory for it and then try to start it, I get
> Task failed! Task record: uuid: 0582f281-5a54-27cd-ba23-a4edf60312f4
>            nameLabel: Async.VM.start_on
>      nameDescription:
>    allowedOperations: []
>    currentOperations: {}
>              created: Thu Apr 25 00:29:34 MSK 2013
>             finished: Thu Apr 25 00:29:34 MSK 2013
>               status: failure
>           residentOn: com.xensource.xenapi.Host@234582ee
>             progress: 1.0
>                 type: <none/>
>               result:
>            errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 8659140608, 6721728512]
>          otherConfig: {}
>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>             subtasks: []
> This is normal behavior, but CS doesn't try to start this VM on the second host of the cluster, which has enough free memory. The OfferHA option is enabled.
> I think this should be fixed, because VM distribution within a cluster is not balanced, so this situation can occur.


