Posted to issues@cloudstack.apache.org by "Martin Emrich (JIRA)" <ji...@apache.org> on 2016/01/26 14:42:39 UTC
[jira] [Commented] (CLOUDSTACK-2344) not enough free memory on one host of the cluster
[ https://issues.apache.org/jira/browse/CLOUDSTACK-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117195#comment-15117195 ]
Martin Emrich commented on CLOUDSTACK-2344:
-------------------------------------------
Happened again today on another CloudStack installation here (4.4.4).
Using XenServer 6.2 SP1 with only local storage, thus no HA.
> not enough free memory on one host of the cluster
> -------------------------------------------------
>
> Key: CLOUDSTACK-2344
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2344
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: XenServer
> Affects Versions: 4.0.1, 4.1.0, 4.2.0
> Reporter: Nikita Gubenko
> Priority: Minor
>
> Hi
> I'm testing some overprovisioning cases. I have two XenServer hosts with lots of VMs on each.
> When I increase the resources of a VM on a host that does not have enough free memory for the new size and then try to start it, I get:
> Task failed! Task record: uuid: 0582f281-5a54-27cd-ba23-a4edf60312f4
> nameLabel: Async.VM.start_on
> nameDescription:
> allowedOperations: []
> currentOperations: {}
> created: Thu Apr 25 00:29:34 MSK 2013
> finished: Thu Apr 25 00:29:34 MSK 2013
> status: failure
> residentOn: com.xensource.xenapi.Host@234582ee
> progress: 1.0
> type: <none/>
> result:
> errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 8659140608, 6721728512]
> otherConfig: {}
> subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> This is normal behavior on that host, but CloudStack does not try to start the VM on the second host of the cluster, which has enough free memory, even though the OfferHA option is enabled.
> I think this should be fixed: VM distribution within a cluster is not balanced, so this situation can easily occur.
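For reference, the two numeric arguments on the errorInfo line appear to be the requested and available host memory in bytes (an assumption based on the values in the task record above, not confirmed against the XenAPI error catalog). A quick conversion sketch shows the shortfall:

```python
# Sketch: interpret the HOST_NOT_ENOUGH_FREE_MEMORY arguments from the
# task record above. Assumption: first number = memory the VM needs,
# second number = memory currently free on the host, both in bytes.
GIB = 1024 ** 3

error_info = ["HOST_NOT_ENOUGH_FREE_MEMORY", 8659140608, 6721728512]
needed, free = error_info[1], error_info[2]

print(f"needed:   {needed / GIB:.2f} GiB")          # 8.06 GiB
print(f"free:     {free / GIB:.2f} GiB")            # 6.26 GiB
print(f"short by: {(needed - free) / GIB:.2f} GiB") # 1.80 GiB
```

So the host is short by roughly 1.8 GiB, which is why XenServer rejects the start; the point of the report is that another host in the cluster could have absorbed the VM.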
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)