Posted to users@cloudstack.apache.org by Cristian Ciobanu <cr...@istream.today> on 2016/01/25 20:51:53 UTC

CloudStack and VMware VM deploy issue

Hello,

      


     I built a CloudStack environment using VMware 5.5 and CS 4.7 (with local storage enabled).

     Everything works until I try to build a new VM: after I start the deployment I see the status "Starting", but I don't see any events on the VMware side (and the VM never starts), and I'm also not able to delete it.

     The SSVM and Console Proxy VM are up and running and reachable on their private IPs, and I tested internet connectivity from inside the SSVM and it works.

     The VR is also up and running, but I don't see any IP on NIC 2. I'm not sure if this can be an issue:

Type:
Traffic Type: Control
Network Name:
Netmask: 0.0.0.0
IP Address: 0.0.0.0
ID: 1da1e7e7-51b4-4418-8fd9-bb5a5cf57ef1
Network ID: cb974c5d-dfc2-4f7a-a57d-fd1b6b7d183e
Isolation URI:
Broadcast URI:
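
For reference, the same NIC details can also be pulled with cloudmonkey (a rough sketch, assuming cloudmonkey is already pointed at this management server; the listRouters response includes the full NIC list):

list routers listall=true filter=name,state,linklocalip,guestipaddress,publicip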
================================================================================

This is what I found for the last test of a VM deploy from the default template, CentOS 5.3(64-bit) no GUI (vSphere):
 

[root@ns52XXX management]# grep -r job-33 --color
management-server.log:2016-01-25 14:32:45,588 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-9:ctx-0c67fce8 job-33) Add job-33 into job monitoring
management-server.log:2016-01-25 14:32:45,603 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (catalina-exec-24:ctx-94aab038 ctx-9869fd5f) submit async job-33, details: AsyncJobVO {id:33, userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 7, cmd: org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin, cmdInfo: {"serviceofferingid":"8c446bee-3419-4de0-81b0-a4405b4e95e4","keyboard":"us","cmdEventType":"VM.CREATE","ctxUserId":"2","zoneid":"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89","httpmethod":"GET","templateid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","response":"json","id":"7","ctxDetails":"{\"com.cloud.template.VirtualMachineTemplate\":\"f164ac64-c393-11e5-8b5b-0cc47a7d402a\",\"com.cloud.vm.VirtualMachine\":\"533c1d9b-807a-42c6-84e1-05c38d347c43\",\"com.cloud.offering.DiskOffering\":\"8e249745-34a1-4773-8d76-2836b042514b\",\"com.cloud.dc.DataCenter\":\"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89\",\"com.cloud.offering.ServiceOffering\":\"8c446bee-3419-4de0-81b0-a4405b4e95e4\"}","hypervisor":"VMware","name":"TEST11","_":"1453750363928","uuid":"533c1d9b-807a-42c6-84e1-05c38d347c43","ctxAccountId":"2","diskofferingid":"8e249745-34a1-4773-8d76-2836b042514b","ctxStartEventId":"47","displayname":"TEST11"}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
management-server.log:2016-01-25 14:32:45,607 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Executing AsyncJobVO {id:33, userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 7, cmd: org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin, cmdInfo: {"serviceofferingid":"8c446bee-3419-4de0-81b0-a4405b4e95e4","keyboard":"us","cmdEventType":"VM.CREATE","ctxUserId":"2","zoneid":"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89","httpmethod":"GET","templateid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","response":"json","id":"7","ctxDetails":"{\"com.cloud.template.VirtualMachineTemplate\":\"f164ac64-c393-11e5-8b5b-0cc47a7d402a\",\"com.cloud.vm.VirtualMachine\":\"533c1d9b-807a-42c6-84e1-05c38d347c43\",\"com.cloud.offering.DiskOffering\":\"8e249745-34a1-4773-8d76-2836b042514b\",\"com.cloud.dc.DataCenter\":\"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89\",\"com.cloud.offering.ServiceOffering\":\"8c446bee-3419-4de0-81b0-a4405b4e95e4\"}","hypervisor":"VMware","name":"TEST11","_":"1453750363928","uuid":"533c1d9b-807a-42c6-84e1-05c38d347c43","ctxAccountId":"2","diskofferingid":"8e249745-34a1-4773-8d76-2836b042514b","ctxStartEventId":"47","displayname":"TEST11"}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
management-server.log:2016-01-25 14:32:45,664 DEBUG [c.c.n.NetworkModelImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-25 14:32:45,670 DEBUG [c.c.n.NetworkModelImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-25 14:32:45,687 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Deploy avoids pods: [], clusters: [], hosts: []
management-server.log:2016-01-25 14:32:45,688 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@5a8681fe
management-server.log:2016-01-25 14:32:45,688 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Trying to allocate a host and storage pools from dc:1, pod:null,cluster:null, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-25 14:32:45,688 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-25 14:32:45,688 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Searching all possible resources under this Zone: 1
management-server.log:2016-01-25 14:32:45,689 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Listing clusters in order of aggregate capacity, that have (atleast one host with) enough CPU and RAM capacity under this Zone: 1
management-server.log:2016-01-25 14:32:45,693 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Removing from the clusterId list these clusters from avoid set: []
management-server.log:2016-01-25 14:32:45,701 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking resources in Cluster: 1 under Pod: 1
management-server.log:2016-01-25 14:32:45,702 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Looking for hosts in dc: 1  pod:1  cluster:1
management-server.log:2016-01-25 14:32:45,706 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) FirstFitAllocator has 1 hosts to check for allocation: [Host[-1-Routing]]
management-server.log:2016-01-25 14:32:45,709 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Found 1 hosts for allocation after prioritization: [Host[-1-Routing]]
management-server.log:2016-01-25 14:32:45,709 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Looking for speed=500Mhz, Ram=512
management-server.log:2016-01-25 14:32:45,717 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Host: 1 has cpu capability (cpu:8, speed:3699) to support requested CPU: 1 and requested speed: 500
management-server.log:2016-01-25 14:32:45,717 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Checking if host: 1 has enough capacity for requested CPU: 500 and requested RAM: 536870912 , cpuOverprovisioningFactor: 1.0
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Free CPU: 28092 , Requested CPU: 500
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Free RAM: 32368287744 , Requested RAM: 536870912
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Host has enough CPU and RAM available
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) STATS: Can alloc CPU from host: 1, used: 1500, reserved: 0, actual total: 29592, total with overprovisioning: 29592; requested cpu:500,alloc_from_last_host?:false ,considerReservedCapacity?: true
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) STATS: Can alloc MEM from host: 1, used: 1879048192, reserved: 0, total: 34247335936; requested mem: 536870912,alloc_from_last_host?:false ,considerReservedCapacity?: true
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Found a suitable host, adding to list: 1
management-server.log:2016-01-25 14:32:45,719 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68 FirstFitRoutingAllocator) Host Allocator returning 1 suitable hosts
management-server.log:2016-01-25 14:32:45,722 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking suitable pools for volume (Id, Type): (7,ROOT)
management-server.log:2016-01-25 14:32:45,722 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) We need to allocate new storagepool for this volume
management-server.log:2016-01-25 14:32:45,723 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Calling StoragePoolAllocators to find suitable pools
management-server.log:2016-01-25 14:32:45,723 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) LocalStoragePoolAllocator trying to find storage pool to fit the vm
management-server.log:2016-01-25 14:32:45,726 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking if storage pool is suitable, name: null ,poolId: 1
management-server.log:2016-01-25 14:32:45,728 INFO  [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
management-server.log:2016-01-25 14:32:45,729 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 7591690240, usedPct: 0.0019013883286271346, disable threshold: 0.85
management-server.log:2016-01-25 14:32:45,735 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Found storage pool xxxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 2
management-server.log:2016-01-25 14:32:45,735 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Total over provisioned capacity calculated is 2 * 3992708972544
management-server.log:2016-01-25 14:32:45,735 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Total capacity of the pool xxxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 7985417945088
management-server.log:2016-01-25 14:32:45,736 DEBUG [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking pool: 1 for volume allocation [Vol[7|vm=7|ROOT]], maxSize : 7985417945088, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
management-server.log:2016-01-25 14:32:45,737 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) LocalStoragePoolAllocator returning 1 suitable storage pools
management-server.log:2016-01-25 14:32:45,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Checking suitable pools for volume (Id, Type): (8,DATADISK)
management-server.log:2016-01-25 14:32:45,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) We need to allocate new storagepool for this volume
management-server.log:2016-01-25 14:32:45,738 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Calling StoragePoolAllocators to find suitable pools
management-server.log:2016-01-25 14:32:45,738 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) LocalStoragePoolAllocator trying to find storage pool to fit the vm
management-server.log:2016-01-25 14:32:45,738 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) ClusterScopeStoragePoolAllocator looking for storage pool
management-server.log:2016-01-25 14:32:45,738 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Looking for pools in dc: 1  pod:1  cluster:1
management-server.log:2016-01-25 14:32:45,739 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Found pools matching tags: []
management-server.log:2016-01-25 14:32:45,740 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No storage pools available for shared volume allocation, returning
management-server.log:2016-01-25 14:32:45,740 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) ZoneWideStoragePoolAllocator to find storage pool
management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable pools found
management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable storagePools found under this Cluster: 1
management-server.log:2016-01-25 14:32:45,745 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Could not find suitable Deployment Destination for this VM under any clusters, returning.
management-server.log:2016-01-25 14:32:45,745 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Searching all possible resources under this Zone: 1
management-server.log:2016-01-25 14:32:45,745 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Listing clusters in order of aggregate capacity, that have (atleast one host with) enough CPU and RAM capacity under this Zone: 1
management-server.log:2016-01-25 14:32:45,747 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Removing from the clusterId list these clusters from avoid set: [1]
management-server.log:2016-01-25 14:32:45,748 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No clusters found after removing disabled clusters and clusters in avoid list, returning.
management-server.log:2016-01-25 14:32:45,750 DEBUG [c.c.v.UserVmManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Destroying vm VM[User|i-2-7-VM] as it failed to create on Host with Id:null
management-server.log:2016-01-25 14:32:45,794 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) VM state transitted from :Stopped to Error with event: OperationFailedToErrorvm's original host id: null new host id: null host id before state transition: null
management-server.log:2016-01-25 14:32:45,869 DEBUG [c.c.u.AccountManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Access granted to Acct[655b8296-c394-11e5-8b5b-0cc47a7d402a-admin] to Domain:1/ by AffinityGroupAccessChecker
management-server.log:2016-01-25 14:32:45,971 DEBUG [c.c.u.AccountManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Access granted to Acct[655b8296-c394-11e5-8b5b-0cc47a7d402a-admin] to Domain:1/ by AffinityGroupAccessChecker
management-server.log:2016-01-25 14:32:45,995 WARN  [o.a.c.alerts] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68)  alertType:: 8 // dataCenterId:: 1 // podId:: null // clusterId:: null // message:: Failed to deploy Vm with Id: 7, on Host with Id: null
management-server.log:2016-01-25 14:32:46,103 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-7-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-25 14:32:46,104 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Unable to create a deployment for VM[User|i-2-7-VM]
management-server.log:2016-01-25 14:32:46,105 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Complete async job-33, jobStatus: FAILED, resultCode: 530, result: org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":533,"errortext":"Unable to create a deployment for VM[User|i-2-7-VM]"}
management-server.log:2016-01-25 14:32:46,106 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Publish async job-33 complete on message bus
management-server.log:2016-01-25 14:32:46,106 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Wake up jobs related to job-33
management-server.log:2016-01-25 14:32:46,106 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Update db status for job-33
management-server.log:2016-01-25 14:32:46,108 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Wake up jobs joined with job-33 and disjoin all subjobs created from job- 33
management-server.log:2016-01-25 14:32:46,128 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33) Done executing org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin for job-33
management-server.log:2016-01-25 14:32:46,128 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-9:ctx-0c67fce8 job-33) Remove job-33 from job monitoring
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-9:ctx-0c67fce8 job-33) Add job-33 into job monitoring
catalina.out:INFO  [c.c.s.StorageManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
catalina.out:WARN  [o.a.c.alerts] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68)  alertType:: 8 // dataCenterId:: 1 // podId:: null // clusterId:: null // message:: Failed to deploy Vm with Id: 7, on Host with Id: null
catalina.out:INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-7-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) Unable to create a deployment for VM[User|i-2-7-VM]
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-9:ctx-0c67fce8 job-33) Remove job-33 from job monitoring
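
In case it helps, the disk offering and the storage pools referenced in that job can be checked with cloudmonkey (a rough sketch; the offering ID is the one from the cmdInfo above, and cloudmonkey is assumed to be configured against this management server):

list diskofferings id=8e249745-34a1-4773-8d76-2836b042514b filter=name,storagetype
list storagepools filter=name,podid,clusterid,type,scope,state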


 Thanks in advance !!

Regards,
Cristian

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hello,

   I have the same issue again: vCenter stops being reachable from the SSVM when I try to create a template or deploy a VM from a template. Everything is OK when I deploy from an ISO, so in this case I think the issue was not related to the IP range.

   Also, I'm able to ping vCenter from the SSVM after restarting the SSVM.

CloudStack 4.5.2 and VMware 5.5
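
The check I run from inside the SSVM looks roughly like this (SSH into the SSVM on port 3922 with the system VM key first; <vcenter> here is just a placeholder for the vCenter address):

ping -c 3 <vcenter>
nc -zv <vcenter> 443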

Thank you.


Regards,
Cristian
On 28.01.2016 22:54:00, Cristian Ciobanu <cr...@istream.today> wrote:
Hi ilya,

        Thanks to all for the support; the issue is fixed (both creating a template from a ROOT volume and deploying a VM from the default template):

        I found 3 interfaces on the SSVM, 2 of them with private IPs from the same range. To fix this I added a separate IP range for the storage network at the pod level (everything is fine right now).
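
For anyone hitting the same thing, this is roughly what I looked at inside the SSVM (interface names can differ between setups):

ip addr show    # two of the NICs had private IPs from the same range
ip route show   # the overlap went away after adding a separate storage IP range at the pod level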

Regards,
Cristian
On 28.01.2016 22:31:08, ilya <il...@gmail.com> wrote:
See response in-line:

On 1/28/16 5:11 AM, Cristian Ciobanu wrote:
> Hello again,
>
> I found something strange.
>
> 1. I did the telnet test from the SSVM to vCenter: not able to ping, and not able to connect via telnet on port 80

From the SSVM log:

2016-01-27 11:23:07,117 ERROR [storage.resource.VmwareSecondaryStorageResourceHandler] (agentRequest-Handler-5:job-29/job-30, cmd: CopyCommand) Unexpected exception com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.ConnectException: Connection timed out
com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.ConnectException: Connection timed out
at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:117)



Try the following on SSVM:

# service iptables stop

try connecting to vCenter again via telnet or "nc -z"; the port is 443, not 80

If it does not work, check your routing table - something is funky with
your FW or routes.
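
Something along these lines (the vCenter hostname here is just a placeholder):

nc -z -v vcenter.example.com 443
ip route show    # or: netstat -rn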

Explain how your network is set up; are there firewalls in place?

Besides vCenter, I would test the connectivity to the actual ESXi host that
the VM resides on; try "nc -z esxhost 443".

There can also be an "extra NIC" issue that could cause routing and
connectivity problems. Try "ifdown eth3" on the SSVM to see if it helps.

All of this must work in order for CloudStack to function properly.

Regards
ilya


> 2. After restarting the SSVM, I was able to ping vCenter and connect to it via telnet.
> 3. Again, the ping test from SSVM -> vCenter works; at the same time I started a VM deploy, but I got the same error, and when I checked the SSVM the ping test was failing (no packets received from vCenter).
>
> I have no idea what can be the issue.
>
>
> Thanks
>
>
> Regards,
> Cristian
> On 28.01.2016 14:44:49, Cristian Ciobanu wrote:
> Hi Somesh,
>
>
> I did the requested test and it is not able to connect (I did the telnet test on ports 80, 8080 and 443); it is also not responding to ping.
>
> I also did the same test from the management console (CloudStack) and it works.
>
> I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.
>
> Thank you !
>
>
> Regards,
> Cristian
> On 28.01.2016 01:42:15, Somesh Naidu wrote:
> Can you check if SSVM is able to talk to the vCenter on port 80?
>
> Regards,
> Somesh
>
>
> -----Original Message-----
> From: Cristian Ciobanu [mailto:cristian.c@istream.today]
> Sent: Wednesday, January 27, 2016 4:32 PM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Hello again,
>
> Just want to add additional logs from SSVM : http://pastebin.com/1J5ejc6D
>
> Please help to solve this issue.
>
> Thanks All
>
>
> Regards,
> Cristian
> On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
> Hello to All,
>
> I did a rebuild of this configuration and I have the same issue, the first VM deploy issue; please see the full log: http://pastebin.com/ybEybCQJ (you may see another error in the first part of the log; that was related to the host IP configuration, which I forgot to change, but it is fixed now)
>
> Thank you!
>
>
> Regards,
> Cristian
> On 27.01.2016 03:42:35, ilya wrote:
> err..
>
> Looks like my message was trimmed, do
>
> cat /dev/null > /var/log/cloudstack/management/management-server.log
>
> or similar to start with minimal log.
>
> On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
>> I will come back in a few hours with more details; I will start again from scratch.
>>
>> Thank you!
>>
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:43:54, ilya wrote:
>> Can you post the full log to a pastebin-like site?
>>
>> Ideally, if you can trim the log before you simulate the error - it
>> would be great.
>>
>> do
>>
>>> /var/log/cloudstack/management/management-server.log
>>
>> Then simulate the issue and upload to http://pastebin.com
>>
>> Thanks
>> ilya
>>
>>
>>
>> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>>> Hi ilya
>>>
>>>
>>> Yes, before deploying I enabled local storage in "Global Settings" and also in the zone-level settings, and I removed all disk offerings with the shared option and added ones with local storage. (CS 4.5.2 ShapeBlue)
>>>
>>> I did this in CS 4.5.2; I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>>
>>>
>>>
>>>
>>> B. I get this error when I try the following task (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking):
>>>
>>> I installed Windows 2008 from an ISO file; after that I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>>> Message: null
>>>
>>>
>>>
>>>
>>> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking):
>>>
>>> root@ns52xxxx management]# grep -r job-79 --color
>>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcH
N

> y
>
>> A
>> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://down
l

>
>> o
>> ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>>
>>> ============================================================================================
>>>
>>> I just want to mention that I also tested CS 4.6 and 4.7, but there I had another issue with the Virtual Router: it never starts (I just see "Starting" for the VR status).
>>>
>>> Thank you
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 23:08:00, ilya wrote:
>>> Cristian
>>>
>>> Not entirely clear on the strange part.
>>>
>>> If you want to use local storage, you must have all components reference
>>> local storage.
>>>
>>> That includes system VMs: there is a zone-level setting that enables
>>> local storage, and the service and disk offerings must be set to local as well.
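>>>
>>> For example, something like this with CloudMonkey (a sketch; system.vm.use.local.storage is the usual name of that setting, but verify it under Global Settings, and note that a management server restart is needed after changing it):
>>>
>>> cloudmonkey update configuration name=system.vm.use.local.storage value=true
>>> cloudmonkey list configurations name=system.vm.use.local.storage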
>>>
>>> Regards
>>> ilya
>>>
>>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>>> Hi Timothy
>>>>
>>>> This is very strange. I configured everything for local storage, and I also removed all the disk offerings and re-created the same ones with local storage.
>>>>
>>>>
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Cristian
>>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>>> Hi Cristian,
>>>>
>>>> As per Ilya's mail, check that the DATA disk offering is set to use "Storage Type – Local"; the logs provided indicate that no shared storage can be found:
>>>>
>>>> “No storage pools available for shared volume allocation, returning”
>>>>
>>>> Change this under Service Offerings -> Disk Offerings.
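>>>>
>>>> If it helps, a rough CloudMonkey equivalent for re-creating the data disk offering as local (the name and size below are placeholders):
>>>>
>>>> cloudmonkey create diskoffering name=DataLocal displaytext=Data-disk-local disksize=20 storagetype=local
>>>> cloudmonkey list diskofferings keyword=DataLocal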
>>>>
>>>> Regards,
>>>>
>>>> Timothy Lothering
>>>> Solutions Architect
>>>> Managed Services
>>>> T: +27877415535
>>>> F: +27877415100
>>>> C: +27824904099
>>>> E: tlothering@datacentrix.co.za
>>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>>
>>>>
>>>> Serious about performance, passionate about value
>>>>
>>>>
>>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>>
>>>> Hi Cristian,
>>>>
>>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>>> and check that your disk offerings are all set to local storage.
>>>>
>>>> If you are just testing, you can also change your storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
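>>>>
>>>> Via CloudMonkey that would be something like this (a sketch; the global setting is storage.overprovisioning.factor):
>>>>
>>>> cloudmonkey update configuration name=storage.overprovisioning.factor value=4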
>>>>
>>>> Error
>>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>>
>>>> Thanks
>>>> Glenn
>>>>
>>>>
>>>>
>>>> Glenn Wagner
>>>> Senior Consultant, ShapeBlue
>>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>>>
>>>> -----Original Message-----
>>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>>
>>>> Cristian
>>>>
>>>> See response in-line
>>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>>> Cluster: 1
>>>>
>>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>>
>>>> If it works, make sure the disk offering you create is also type "local"
>>>> and not shared or clustered.
>>>>
>>>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
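>>>>
>>>> A quick sketch of both suggestions from the API side (the IDs are placeholders; leaving out diskofferingid on deployVirtualMachine means no data disk is requested):
>>>>
>>>> cloudmonkey deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid>
>>>> cloudmonkey update configuration name=vm.instancename.flag value=true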
>>>>
>>>> Regards
>>>> ilya
>>>>
>>>>
>>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi ilya,

        Thanks to all for the support; the issue is fixed (both creating a template from the ROOT volume and deploying a VM from the default template now work):

        I found 3 interfaces on the SSVM, 2 of them with private IPs from the same subnet. To fix this I added a separate IP range for the storage network at the pod level. (Everything is fine right now.)
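
        For reference, this is roughly the API equivalent of adding a dedicated storage network range on the pod via CloudMonkey (a sketch; the pod ID, VLAN and addresses are placeholders, not the ones used here):

        cloudmonkey create storagenetworkiprange podid=<pod-uuid> gateway=10.10.10.1 netmask=255.255.255.0 startip=10.10.10.100 endip=10.10.10.200 vlan=100
        cloudmonkey list storagenetworkiprange podid=<pod-uuid>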

Regards,
Cristian
On 28.01.2016 22:31:08, ilya <il...@gmail.com> wrote:
See response in-line:

On 1/28/16 5:11 AM, Cristian Ciobanu wrote:
> Hello again,
>
> I found something strange.
>
> 1. I did the telnet test from the SSVM to the vCenter: not able to ping, and not able to connect via telnet on port 80

From the SSVM log:

2016-01-27 11:23:07,117 ERROR
[storage.resource.VmwareSecondaryStorageResourceHandler]
(agentRequest-Handler-5:job-29/job-30, cmd: CopyCommand) Unexpected
exception com.sun.xml.internal.ws.client.ClientTransportException: HTTP
transport error: java.net.ConnectException: Connection timed out
com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport
error: java.net.ConnectException: Connection timed out
at
com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:117)



Try the following on SSVM:

# service iptables stop

try connecting to vCenter again via telnet or "nc -z"; the port is 443, not 80

If it does not work, check your routing table - something is funky with
your FW or routes.

Explain how your network is set up. Are there firewalls in place?

Besides vCenter, I would test the connectivity to the actual ESXi host that
the VM resides on: try "nc -z esxhost 443".

There can also be an "extra nic issue" that could cause routing and
connectivity issue. Try "ifdown eth3" on SSVM to see if it helps.

All of this must work in order for CloudStack to function properly.
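
To make this concrete, a quick check-list as a shell sketch (the SSH key path is the usual one on a 4.x management server, and the IPs/host names are placeholders for this environment):

# from the management server, SSH into the SSVM (VMware system VMs listen on port 3922)
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>

# inside the SSVM: can we reach vCenter and the ESXi host on 443?
nc -z -w 5 <vcenter-fqdn> 443 && echo "vCenter reachable"
nc -z -w 5 <esxi-fqdn> 443 && echo "ESXi reachable"

# rule out local firewall / routing / extra-NIC problems
iptables -L -n        # or flush for a quick test: iptables -F
ip route              # the default route should leave via the expected NIC
ifdown eth3           # per the suggestion above, drop the extra NIC and re-run the nc tests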

Regards
ilya


> 2. After restarting the SSVM, I repeated the ping and telnet tests to vCenter; I was able to ping and connect via telnet
> 3. Again, the ping test from the SSVM to vCenter works, but as soon as I started a VM deploy I got the same error, and when I checked the SSVM the ping test was failing (no packets received from vCenter)
>
> I have no idea what the issue can be.
>
>
> Thanks
>
>
> Regards,
> Cristian
> On 28.01.2016 14:44:49, Cristian Ciobanu wrote:
> Hi Somesh,
>
>
> I did the requested test, and it is not able to connect (I tried telnet on ports 80, 8080 and 443); it is also not responding to ping.
>
> I also did the same test from the CloudStack management server, and it works.
>
> I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.
>
> Thank you !
>
>
> Regards,
> Cristian
> On 28.01.2016 01:42:15, Somesh Naidu wrote:
> Can you check if SSVM is able to talk to the vCenter on port 80?
>
> Regards,
> Somesh
>
>
> -----Original Message-----
> From: Cristian Ciobanu [mailto:cristian.c@istream.today]
> Sent: Wednesday, January 27, 2016 4:32 PM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Hello again,
>
> Just want to add additional logs from SSVM : http://pastebin.com/1J5ejc6D
>
> Please help to solve this issue.
>
> Thanks All
>
>
> Regards,
> Cristian
> On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
> Hello to All,
>
> I rebuilt this configuration and I have the same issue on the first VM deploy; please see the full log: http://pastebin.com/ybEybCQJ (you may see another error in the first part of the log; that was related to the host IP configuration, which I forgot to change, but it is fixed now)
>
> Thank you!
>
>
> Regards,
> Cristian
> On 27.01.2016 03:42:35, ilya wrote:
> err..
>
> Looks like my message was trimmed, do
>
> cat /dev/null > /var/log/cloudstack/management/management-server.log
>
or similar to start with a minimal log.
>
> On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
>> I will come back in a few hours with more details; I will start again from scratch.
>>
>> Thank you!
>>
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:43:54, ilya wrote:
>> Can you post the full log to a pastebin-like site?
>>
>> Ideally, if you can trim the log before you reproduce the error, that
>> would be great.
>>
>> do
>>
>>> /var/log/cloudstack/management/management-server.log
>>
>> Then simulate the issue and upload to http://pastebin.com
>>
>> Thanks
>> ilya
>>
>>
>>
>> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>>> Hi ilya
>>>
>>>
>>> Yes, before the deploy I enabled local storage in "Global Settings" and in the zone-level settings, and I removed all disk offerings that used the shared option and added them again with local storage. (CS 4.5.2 ShapeBlue)
>>>
>>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>>
>>>
>>>
>>>
>>> B. I get this error when I try the following task (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, basic networking):
>>>
>>> I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>>> Message: null
>>>
>>>
>>>
>>>
>>> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, basic networking):
>>>
>>> [root@ns52xxxx management]# grep -r job-79 --color
>>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lb
>>> nQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH
>>> 2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>>
>>> ============================================================================================
>>>
>>> I just want to mention that I also tested CS 4.6 and 4.7, but there I hit a different issue with the Virtual Router: it never starts ( I only ever see "Starting" as the VR status ).
>>>
>>> Thank you
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 23:08:00, ilya wrote:
>>> Cristian
>>>
>>> Not entirely clear on the strange part.
>>>
>>> If you want to use local storage, you must have all components reference
>>> local storage.
>>>
>>> That includes the system VMs - there is a zone level setting that enables
>>> local storage, and the service and disk offerings must also be set to local.
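>>>
>>> (For reference, a rough sketch of the knobs involved, assuming the stock
>>> CloudStack names: the global setting system.vm.use.local.storage controls
>>> whether the system VMs are placed on local storage, and the zone itself
>>> must have local storage enabled. Via cloudmonkey that would look roughly
>>> like:
>>>
>>> update configuration name=system.vm.use.local.storage value=true
>>> update zone id=<zone-uuid> localstorageenabled=true
>>>
>>> with a management server restart afterwards for the global setting to
>>> take effect.)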
>>>
>>> Regards
>>> ilya
>>>
>>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>>> Hi Timothy
>>>>
>>>> This is very strange. I configured the zone for local storage, and I also removed all the disk offerings and recreated them with local storage.
>>>>
>>>>
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Cristian
>>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>>> Hi Cristian,
>>>>
>>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>>
>>>> “No storage pools available for shared volume allocation, returning”
>>>>
>>>> Change this under Service Offering -> Disk Offering.
>>>>
>>>> Regards,
>>>>
>>>> Timothy Lothering
>>>> Solutions Architect
>>>> Managed Services
>>>> T: +27877415535
>>>> F: +27877415100
>>>> C: +27824904099
>>>> E: tlothering@datacentrix.co.za
>>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>>
>>>>
>>>> Serious about performance, passionate about value
>>>>
>>>>
>>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>>
>>>> Hi Cristian,
>>>>
>>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>>> and check that your disk offerings are all set to local storage.
>>>>
>>>> If you are just testing, you can also change the storage overprovisioning factor in Global Settings from 2 to something like 4 or 6.
>>>>
>>>> Error
>>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
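>>>>
>>>> (For what it's worth, the overprovisioning knob is the global setting storage.overprovisioning.factor; a hypothetical cloudmonkey call to raise it would be something like:
>>>>
>>>> update configuration name=storage.overprovisioning.factor value=4
>>>>
>>>> Note this only helps when a local pool exists but looks too small for the request; it does not help if no local pool is being considered at all.)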
>>>>
>>>> Thanks
>>>> Glenn
>>>>
>>>>
>>>>
>>>> Glenn Wagner
>>>> Senior Consultant, ShapeBlue
>>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>>>
>>>> -----Original Message-----
>>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>>
>>>> Cristian
>>>>
>>>> See response in-line
>>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>>> Cluster: 1
>>>>
>>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>>
>>>> If it works, make sure the disk offering you create is also type "local"
>>>> and not shared or clustered.
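>>>>
>>>> (A minimal sketch via cloudmonkey, with made-up names and sizes:
>>>>
>>>> create diskoffering name=local-20g displaytext="Local 20GB" disksize=20 storagetype=local
>>>> create serviceoffering name=local-small displaytext="Local small" cpunumber=1 cpuspeed=500 memory=512 storagetype=local
>>>>
>>>> The storagetype=local part is what makes the planner consider local pools instead of shared ones.)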
>>>>
>>>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
>>>>
>>>> Regards
>>>> ilya
>>>> Find out more about ShapeBlue and our range of CloudStack related services:
>>>> IaaS Cloud Design & Build [http://shapeblue.com/iaas-cloud-design-and-build/] | CSForge – rapid IaaS deployment framework [http://shapeblue.com/csforge/]
>>>> CloudStack Consulting [http://shapeblue.com/cloudstack-consultancy/] | CloudStack Software Engineering [http://shapeblue.com/cloudstack-software-engineering/]
>>>> CloudStack Infrastructure Support [http://shapeblue.com/cloudstack-infrastructure-support/] | CloudStack Bootcamp Training Courses [http://shapeblue.com/cloudstack-training/]
>>>>
>>>>
>>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi ilya,

     Please see the test results from before starting the VM deploy ( after a fresh restart of the SSVM ):

        Test: http://imgur.com/iQvjC73     ( vscs001 = vCenter; the test on the last line is for the ESXi host )
        NIC: http://imgur.com/V6eWoEF
        Route: http://imgur.com/e6uYUTY

     After I tried to deploy a VM:

         Test: http://imgur.com/VXvjJMo ( vscs001 = vCenter; the test on the last line is for the ESXi host )

    Regarding the nc -z command, I don't see any output at all from the SSVM. I did the same test from the CloudStack management server and got this output:

           [root@ns52xxx management]# nc -z vscs001.xxxx.xxx 443
            Connection to vscs001.xxxx.xxx 443 port [tcp/https] succeeded!
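
     ( Side note: depending on the netcat variant, "nc -z" can print nothing at all while the connection attempt is still timing out; adding -v and a timeout makes the result explicit, e.g. something like:

            nc -zv -w 5 vscs001.xxxx.xxx 443

       so a hang and an immediate refusal are easier to tell apart. )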


     
    I have basic networking configured. "service iptables stop" does not work on the SSVM because it is Debian-based, so I flushed the rules instead, but nothing changed. I'm not sure why I have so many interfaces on the SSVM, because I deployed it with only the management and guest networks; please see the screenshot from vSphere:

           vSphere: http://imgur.com/43eCn0x   ( you will see 2 VMs created from ISO )
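
    ( For reference, on the Debian-based SSVM I just listed and flushed the rules directly, i.e. something like:

           iptables -L -n -v
           iptables -F

      which is what I mean by the flush above. )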
     

    This issue is very strange. I'm not sure what I did the one time I was able to create a template and deploy a VM; the only thing I changed was the network class of the storage interface, to 172.11.10.0/24. After one day the SSVM showed the status "Connecting" in the CS management UI, so I rebuilt the SSVM and have not been able to reproduce the working state again.


  Thank you.


Regards,
Cristian
On 28.01.2016 22:31:08, ilya <il...@gmail.com> wrote:
See response in-line:

On 1/28/16 5:11 AM, Cristian Ciobanu wrote:
> Hello again,
>
> I found something strange.
>
> 1. I did the telnet test from the SSVM to the vCenter: it is not able to ping, and not able to connect via telnet on port 80.

From the SSVM log:

2016-01-27 11:23:07,117 ERROR
[storage.resource.VmwareSecondaryStorageResourceHandler]
(agentRequest-Handler-5:job-29/job-30, cmd: CopyCommand) Unexpected
exception com.sun.xml.internal.ws.client.ClientTransportException: HTTP
transport error: java.net.ConnectException: Connection timed out
com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport
error: java.net.ConnectException: Connection timed out
at
com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:117)



Try the following on SSVM:

# service iptables stop

try connecting to vcenter again via telnet or "nc -z", port is 443 not 80

If it does not work, check your routing table - something is funky with
your FW or routes.

Explain how your network is set up. Are there firewalls in place?

Besides vCenter, I would test the connectivity to the actual ESXi host that
the VM resides on, try "nc -z esxhost 443".

There can also be an "extra nic" issue that could cause routing and
connectivity problems. Try "ifdown eth3" on the SSVM to see if it helps.

All of this must work in order for cloudstack to function properly.
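
For reference, the whole set of checks from inside the SSVM might look
something like this (hostnames are placeholders):

nc -zv -w 5 <vcenter-host> 443
nc -zv -w 5 <esxi-host> 443
ip route          # or: route -n
ifdown eth3       # only to test whether the extra nic is in the way

If those checks fail only while a deployment is in flight, that usually
points at the network path (firewall or routes) rather than at cloudstack
itself.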

Regards
ilya


> 2. After restarting the SSVM, PING and telnet to vCenter worked: I was able to ping and connect via telnet.
> 3. Again, the PING test from SSVM -> vCenter works; at the same time I started a VM deploy, but I get the same error, and when I checked the SSVM I saw the PING test failing ( no packets received from vCenter ).
>
> I have no idea what can be the issue.
>
>
> Thanks
>
>
> Regards,
> Cristian
> On 28.01.2016 14:44:49, Cristian Ciobanu wrote:
> Hi Somesh,
>
>
> I did the requested test, and it is not able to connect ( I did the telnet test on ports 80, 8080 and 443 ); it also does not respond to PING.
>
> I also did the same test from the management server ( CloudStack ) and it works.
>
> I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.
>
> Thank you !
>
>
> Regards,
> Cristian
> On 28.01.2016 01:42:15, Somesh Naidu wrote:
> Can you check if SSVM is able to talk to the vCenter on port 80?
>
> Regards,
> Somesh
>
>
> -----Original Message-----
> From: Cristian Ciobanu [mailto:cristian.c@istream.today]
> Sent: Wednesday, January 27, 2016 4:32 PM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Hello again,
>
> Just want to add additional logs from SSVM : http://pastebin.com/1J5ejc6D
>
> Please help to solve this issue.
>
> Thanks All
>
>
> Regards,
> Cristian
> On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
> Hello to All,
>
> I rebuilt this configuration and I have the same issue with the first VM deploy; please see the full log: http://pastebin.com/ybEybCQJ ( you may see another error in the first part of the log; that one was related to the HOST IP configuration, which I forgot to change, but it is fixed now ).
>
> Thank you!
>
>
> Regards,
> Cristian
> On 27.01.2016 03:42:35, ilya wrote:
> err..
>
> Looks like my message was trimmed, do
>
> cat /dev/null > /var/log/cloudstack/management/management-server.log
>
> or similar, to start with a minimal log.
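>
> ( Then, while reproducing, something like
> "tail -f /var/log/cloudstack/management/management-server.log" makes it
> easy to grab just the relevant part. )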
>
> On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
>> I will come back in a few hours with more details; I will start again from scratch.
>>
>> Thank you!
>>
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:43:54, ilya wrote:
>> Can you post full log to pastebin like site?
>>
>> Ideally, if you can trim the log before you simulate the error - it
>> would be great.
>>
>> do
>>
>>> /var/log/cloudstack/management/management-server.log
>>
>> Then simulate the issue and upload to http://pastebin.com
>>
>> Thanks
>> ilya
>>
>>
>>
>> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>>> Hi ilya
>>>
>>>
>>> Yes, before deploying I enabled local storage in "Global Settings" and in the zone level settings, and I removed all disk offerings with the shared option and added new ones with local storage. ( CS 4.5.2 ShapeBlue )
>>>
>>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>>
>>>
>>>
>>>
>>> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):
>>>
>>> I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>>> Message: null
>>>
>>>
>>>
>>>
>>> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):
>>>
>>> [root@ns52xxxx management]# grep -r job-79 --color
>>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcH
>>> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://down
>>> load.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgA
>>> KcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lb
>>> nQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH
>>> 2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>>
>>> ============================================================================================
>>>
>>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have another issue with the Virtual Router - it never starts ( I just see "Starting" for the VR status ).
>>>
>>> Thank you
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 23:08:00, ilya wrote:
>>> Cristian
>>>
>>> Not entirely clear on the strange part.
>>>
>>> If you want to use local storage, you must have all components reference
>>> local storage.
>>>
>>> That includes the system VMs - there is a zone-level setting that enables
>>> local storage, and the service and disk offerings must be set to local as well.
>>>
>>> Regards
>>> ilya
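
For anyone hitting the same thing: the checks ilya describes can also be done from CloudMonkey / the API. A minimal sketch, assuming a standard cloudmonkey setup against the management server; the offering names and sizes below are only examples, not values taken from this thread:

# is local storage enabled on the zone?
cloudmonkey list zones filter=id,name,localstorageenabled
# should the system VMs (SSVM/CPVM/VR) use local storage? (global setting; typically needs a
# management-server restart and recreation of the system VMs to take effect)
cloudmonkey update configuration name=system.vm.use.local.storage value=true
# compute and disk offerings must be created with storagetype=local
cloudmonkey create serviceoffering name=local-small displaytext="Local small" cpunumber=1 cpuspeed=500 memory=512 storagetype=local
cloudmonkey create diskoffering name=local-5gb displaytext="Local 5GB" disksize=5 storagetype=local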
>>>
>>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>>> Hi Timothy
>>>>
>>>> This is very strange. I configured for local storage, and I also removed all the disk offerings and recreated the same ones with local storage.
>>>>
>>>>
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Cristian
>>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>>> Hi Cristian,
>>>>
>>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>>
>>>> “No storage pools available for shared volume allocation, returning”
>>>>
>>>> Change this under the Service Offering -> Disk Offering.
>>>>
>>>> Regards,
>>>>
>>>> Timothy Lothering
>>>> Solutions Architect
>>>> Managed Services
>>>> T: +27877415535
>>>> F: +27877415100
>>>> C: +27824904099
>>>> E: tlothering@datacentrix.co.za
>>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>>
>>>>
>>>> Serious about performance, passionate about value
>>>>
>>>>
>>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>>
>>>> Hi Cristian,
>>>>
>>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>>> and check that your disk offerings are all set to local storage.
>>>>
>>>> If you are just testing, you can also change your storage overprovisioning factor in the Global Settings from 2 to something like 4 or 6.
>>>>
>>>> Error
>>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>>
>>>> Thanks
>>>> Glenn
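
As a side note, the global setting referred to above is storage.overprovisioning.factor; a minimal sketch of changing it via CloudMonkey, assuming the stock setting name (capacity figures may need a recalculation cycle or a management-server restart before the change shows up):

cloudmonkey list configurations name=storage.overprovisioning.factor
cloudmonkey update configuration name=storage.overprovisioning.factor value=4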
>>>>
>>>>
>>>>
>>>> Glenn Wagner
>>>> Senior Consultant, ShapeBlue
>>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>>>
>>>> -----Original Message-----
>>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>>
>>>> Cristian
>>>>
>>>> See response in-line
>>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>>> Cluster: 1
>>>>
>>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>>
>>>> If it works, make sure the disk offering you create is also type "local"
>>>> and not shared or clustered.
>>>>
>>>> PS: I would also change the global setting vm.instancename.flag to true. This makes vSphere show the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
>>>>
>>>> Regards
>>>> ilya
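
The vm.instancename.flag setting mentioned in the PS can be flipped from Global Settings in the UI or via CloudMonkey; a minimal sketch, assuming the stock setting name (a management-server restart is usually needed for a global setting to take effect):

cloudmonkey update configuration name=vm.instancename.flag value=true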
>>>>
>>>>
>>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by ilya <il...@gmail.com>.
See response in-line:

On 1/28/16 5:11 AM, Cristian Ciobanu wrote:
> Hello again,
> 
>      I found something strange.
> 
>      1. I did a telnet test from the SSVM to the vCenter: not able to ping, and not able to connect via telnet on port 80.

From the SSVM log:

2016-01-27 11:23:07,117 ERROR
[storage.resource.VmwareSecondaryStorageResourceHandler]
(agentRequest-Handler-5:job-29/job-30, cmd: CopyCommand) Unexpected
exception com.sun.xml.internal.ws.client.ClientTransportException: HTTP
transport error: java.net.ConnectException: Connection timed out
com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport
error: java.net.ConnectException: Connection timed out
    at
com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:117)



Try the following on SSVM:

# service iptables stop

Then try connecting to vCenter again via telnet or "nc -z"; the port is 443, not 80.

If it does not work, check your routing table - something is funky with
your FW or routes.

Explain how your network is set up - are there firewalls in the path?

Besides vCenter, I would test the connectivity to the actual ESXi host that
the VM resides on; try "nc -z esxhost 443".

There can also be an "extra nic" issue that causes routing and
connectivity problems. Try "ifdown eth3" on the SSVM to see if it helps.

All of this must work in order for CloudStack to function properly.

Regards
ilya
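
Putting the checks above together, a rough test sequence from inside the SSVM might look like this (a sketch only: <vcenter-host> and <esxi-host> are placeholders for the actual addresses, and the last line is the usual location of the built-in health-check script on stock system VMs):

# on the SSVM console
service iptables stop                      # rule out local firewall rules, as suggested above
nc -zv <vcenter-host> 443                  # vCenter is reached over 443, not 80
nc -zv <esxi-host> 443                     # also test the ESXi host the VM lands on
ip route                                   # check the default route and any extra-NIC routes
ifdown eth3                                # temporarily drop the extra NIC if routing looks suspicious
/usr/local/cloud/systemvm/ssvm-check.sh    # built-in SSVM health check, if present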


>      2. After restarting the SSVM, I pinged and telnetted to vCenter again and was able to ping and connect via telnet.
>      3. Again, the ping test from SSVM -> vCenter worked; at the same time I started a VM deploy, but I got the same error, and when I checked the SSVM again the ping test was failing ( no packets received from vCenter ).
> 
>      I have no idea what the issue can be.
> 
> 
> Thanks  
> 
> 
> Regards,
> Cristian
> On 28.01.2016 14:44:49, Cristian Ciobanu <cr...@istream.today> wrote:
> Hi Somesh,
> 
> 
>       I did the requested test, and it is not able to connect ( I tried telnet on ports 80, 8080 and 443 ); it is also not responding to ping.
> 
>       I also did the same test from the management server console ( CloudStack ) and it works.
> 
>       I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.
> 
> Thank you !
> 
> 
> Regards,
> Cristian
> On 28.01.2016 01:42:15, Somesh Naidu <so...@citrix.com> wrote:
> Can you check if SSVM is able to talk to the vCenter on port 80?
> 
> Regards,
> Somesh
> 
> 
> -----Original Message-----
> From: Cristian Ciobanu [mailto:cristian.c@istream.today]
> Sent: Wednesday, January 27, 2016 4:32 PM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
> 
> Hello again,
> 
>    Just want to add additional logs from SSVM : http://pastebin.com/1J5ejc6D
> 
>    Please help to solve this issue.
> 
> Thanks All
> 
> 
> Regards,
> Cristian
> On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
> Hello to All,
> 
>        I did a rebuild of this configuration and I have the same issue - the first VM deploy fails. Please see the full log: http://pastebin.com/ybEybCQJ   ( you may see another error in the first part of the log; that was related to the host IP configuration, which I forgot to change, but it is fixed now )
> 
>  Thank you!     
> 
> 
> Regards,
> Cristian
> On 27.01.2016 03:42:35, ilya wrote:
> err..
> 
> Looks like my message was trimmed, do
> 
> cat /dev/null > /var/log/cloudstack/management/management-server.log
> 
> or similar, to start with a minimal log.
> 
> On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
>> I will come back in a few hours with more details; I will start again from scratch.
>>
>> Thank you!
>>
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:43:54, ilya wrote:
>> Can you post the full log to a pastebin-like site?
>>
>> Ideally, if you can trim the log before you simulate the error - it
>> would be great.
>>
>> do
>>
>>> /var/log/cloudstack/management/management-server.log
>>
>> Then simulate the issue and upload to http://pastebin.com
>>
>> Thanks
>> ilya
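
A minimal way to do that, assuming the default packaging paths and using the async job id that the failed deploy reports in the UI/API:

# on the management server
> /var/log/cloudstack/management/management-server.log    # truncate before reproducing
# ... reproduce the failed VM deployment from the UI ...
grep -r 'job-NN' /var/log/cloudstack/management/ --color  # NN = the async job id of the deploy
# then upload the trimmed log (or the grep output) to http://pastebin.com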
>>
>>
>>
>> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>>> Hi ilya
>>>
>>>
>>> Yes, before deploying I enabled local storage in "Global Settings", I also set it in the zone-level settings, and I removed all disk offerings with the shared option and added them back with local storage. ( CS 4.5.2 ShapeBlue )
>>>
>>> I did this in CS 4.5.2; I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>>
>>>
>>>
>>>
>>> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>>
>>> I installed a Windows 2008 VM from an ISO file; after this I tried to create a template from the ROOT volume, but I got this error : Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>>> Message: null
>>>
>>>
>>>
>>>
>>> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>>
>>> [root@ns52xxxx management]# grep -r job-79 --color
>>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcH
>>> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://down
>>> load.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgA
>>> KcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lb
>>> nQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZ
>>> tV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0
>>> Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH
>> DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>>
>>> ============================================================================================
>>>
>>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts ( I just see "starting" as the VR status )
>>>
>>> Thank you
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 23:08:00, ilya wrote:
>>> Cristian
>>>
>>> Not entirely clear on the strange part.
>>>
>>> If you want to use local storage, you must have all components reference
>>> local storage.
>>>
>>> That includes SystemVMs - there is a zone-level setting that enables
>>> local storage, and the service and disk offerings must be set to local as well.
>>>
>>> Regards
>>> ilya
>>>
>>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>>> Hi Timothy
>>>>
>>>> This is very strange. I configured for local storage; I also removed all the disk offerings and created the same ones again with local storage.
>>>>
>>>>
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Cristian
>>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>>> Hi Cristian,
>>>>
>>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>>
>>>> “No storage pools available for shared volume allocation, returning”
>>>>
>>>> Change this under the Service Offering -> Disk Offering.
>>>>
>>>> Regards,
>>>>
>>>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>>>> Timothy Lothering
>>>> Solutions Architect
>>>> Managed Services
>>>> T: +27877415535
>>>> F: +27877415100
>>>> C: +27824904099
>>>> E: tlothering@datacentrix.co.za
>>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>>
>>>>
>>>> Serious about performance, passionate about value
>>>>
>>>>
>>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>>> To: users@cloudstack.apache.org
>>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>>
>>>> Hi Cristian,
>>>>
>>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>>> and check that your disk offerings are all set to local storage.
>>>>
>>>> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6
>>>>
>>>> Error
>>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>>
>>>> Thanks
>>>> Glenn
>>>>
>>>>
>>>>
>>>> [ShapeBlue] [http://www.shapeblue.com]
>>>> Glenn Wagner
>>>> Senior Consultant
>>>> ,
>>>> ShapeBlue
>>>> d:
>>>> | s: +27 21 527 0091 [tel:|%20s:%20+27%2021%20527%200091]
>>>> |
>>>> m:
>>>> +27 73 917 4111 [tel:+27%2073%20917%204111]
>>>> e:
>>>> glenn.wagner@shapeblue.com | t: [mailto:glenn.wagner@shapeblue.com%20|%20t:]
>>>> |
>>>> w:
>>>> www.shapeblue.com [http://www.shapeblue.com]
>>>> a:
>>>> 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West Cape Town 7130 South Africa
>>>>
>>>> -----Original Message-----
>>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>>
>>>> Cristian
>>>>
>>>> See response in-line
>>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable pools found
>>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>>> Cluster: 1
>>>>
>>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>>
>>>> If it works, make sure the disk offering you create is also type "local"
>>>> and not shared or clustered.
>>>>
>>>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
>>>>
>>>> Regards
>>>> ilya
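
As a side note, the vm.instancename.flag setting above (and the storage over-provisioning factor suggested earlier in the thread) can also be changed from the command line; a rough sketch with CloudMonkey, assuming a profile already pointed at the management server and the usual setting names, might be:

# hedged example - adjust endpoint/keys for your install
cloudmonkey update configuration name=vm.instancename.flag value=true
cloudmonkey update configuration name=storage.overprovisioning.factor value=4
# many global settings only take effect after restarting the management server
service cloudstack-management restart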
>>>>
>>>>
>>>
>>
> 

RE: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hello again,

     I found something strange.

     1. I did the telnet test from the SSVM towards vCenter: not able to ping, and not able to connect via telnet on port 80.
     2. After restarting the SSVM, I repeated the ping and telnet tests to vCenter and was able to ping and connect via telnet.
     3. Again, the ping test from the SSVM -> vCenter worked; at the same time I started a VM deploy, but I got the same error, and when I checked the SSVM the ping test was failing ( no packets received from vCenter ).

     I have no idea what the issue could be.


Thanks  


Regards,
Cristian
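
A small check loop run from inside the SSVM while a deploy is in progress can help pin down the intermittent drops described above; a minimal sketch (replace <vcenter-ip> with the vCenter address; the health-check script path assumes the standard system VM template):

# repeat the reachability test every few seconds while the deploy runs
for i in $(seq 1 60); do
    echo "--- $(date)"
    ping -c 2 -W 2 <vcenter-ip>
    sleep 5
done
# the system VM also ships a health-check script covering DNS, gateway,
# secondary storage and management server reachability
/usr/local/cloud/systemvm/ssvm-check.sh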
On 28.01.2016 14:44:49, Cristian Ciobanu <cr...@istream.today> wrote:
Hi Somesh,


      I did the requested test, and it is not able to connect ( I did the telnet test on ports 80, 8080 and 443 ); it is also not responding to ping.

      I also did the same test from the Management console ( CloudStack ) and it works.

      I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.

Thank you !


Regards,
Cristian
On 28.01.2016 01:42:15, Somesh Naidu <so...@citrix.com> wrote:
Can you check if SSVM is able to talk to the vCenter on port 80?

Regards,
Somesh
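
For reference, one way to run that check is to SSH into the SSVM from the management server and test the port directly; a sketch, assuming the standard system VM SSH setup (key under /var/lib/cloudstack/management/.ssh and SSH listening on port 3922):

# from the CloudStack management server
ssh -i /var/lib/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>
# then, inside the SSVM
ping -c 3 <vcenter-ip>
telnet <vcenter-ip> 80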


-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today]
Sent: Wednesday, January 27, 2016 4:32 PM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Hello again,

   I just want to add some more logs from the SSVM: http://pastebin.com/1J5ejc6D

   Please help to solve this issue.

Thanks All


Regards,
Cristian
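
When grabbing SSVM logs like the paste above, it also helps to keep the agent log open while the failure is being reproduced; a sketch, assuming the usual log location on the system VM template (the exact path can differ between templates):

# inside the SSVM, while the deploy is retried from the UI
tail -f /var/log/cloud.log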
On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
Hello to All,

       I did a rebuild of this configuration and I have the same issue with the first VM deploy; please see the full log: http://pastebin.com/ybEybCQJ   ( you may see another error in the first part of the log; that was related to the HOST IP configuration, which I forgot to change, but it is fixed now )

 Thank you!     


Regards,
Cristian
On 27.01.2016 03:42:35, ilya wrote:
err..

Looks like my message was trimmed, do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with minimal log.
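
A concrete way to do that and keep the paste small (job ids differ on every run; job-78/job-79 are just the ones from the run quoted further down):

cat /dev/null > /var/log/cloudstack/management/management-server.log
# reproduce the failed deploy from the UI, then pull out only the relevant job lines
grep -E "job-78|job-79" /var/log/cloudstack/management/management-server.log > /tmp/deploy-failure.log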

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
>
> Thank you!
>
>
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya wrote:
> Can you post the full log to a pastebin-like site?
>
> Ideally, if you can trim the log before you simulate the error - it
> would be great.
>
> do
>
>> /var/log/cloudstack/management/management-server.log
>
> Then simulate the issue and upload to http://pastebin.com
>
> Thanks
> ilya
>
>
>
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before the deploy I enabled local storage in "Global Settings", I also set it at the zone level, and I removed all disk offerings with the shared option and added them back with local storage. ( CS 4.5.2 ShapeBlue )
>>
>> I did this in CS 4.5.2; I also used this version for KVM with local storage and it works, but on VMware I have this issue.
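
(One quick way to double-check all three pieces - the zone flag, the compute offering and the disk offering - is via CloudMonkey; a sketch, assuming the usual list commands and response fields:)

cloudmonkey list zones filter=name,localstorageenabled
cloudmonkey list serviceofferings filter=name,storagetype
cloudmonkey list diskofferings filter=name,storagetype
# every offering used by the instance should report storagetype = local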
>>
>>
>>
>>
>> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> I installed a Windows 2008 VM from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> [root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHN
> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
> ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAK
> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbn
> Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZt
> ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z
> VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2
> DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts ( I just see "starting" as the VR status )
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes SystemVMs - there is a zone-level setting that enables
>> local storage, and the service and disk offerings must be set to local as well.
>>
>> Regards
>> ilya
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured everything for local storage, and I also removed all the disk offerings and recreated them with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under the Service Offering → Disk Offering.
>>>
>>> Regards,
>>>
>>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>>> Timothy Lothering
>>> Solutions Architect
>>> Managed Services
>>> T: +27877415535
>>> F: +27877415100
>>> C: +27824904099
>>> E: tlothering@datacentrix.co.za
>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>
>>>
>>> Serious about performance, passionate about value
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
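>>>
>>> For example, with cloudmonkey something along these lines should do it (just a sketch - the setting name assumed here is storage.overprovisioning.factor, the value is illustrative, and most global settings need a management-server restart before they take effect):
>>>
>>> update configuration name=storage.overprovisioning.factor value=4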
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>
>>> Thanks
>>> Glenn
>>>
>>>
>>>
>>> [ShapeBlue] [http://www.shapeblue.com]
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com [http://www.shapeblue.com]
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West Cape Town 7130 South Africa
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also type "local"
>>> and not shared or clustered.
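>>>
>>> As a rough sketch with cloudmonkey (the offering name and size below are only examples), a local data disk offering would look like:
>>>
>>> create diskoffering name=local-data displaytext="Local data disk" disksize=20 storagetype=local
>>>
>>> and deploying without any data disk is just deployVirtualMachine with zoneid, templateid and serviceofferingid and no diskofferingid - the same as leaving the data disk out of the wizard.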
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes the VMs in VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
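>>>
>>> If you prefer the API for that, it should be a single call (again a sketch; a management-server restart may be needed for it to apply):
>>>
>>> update configuration name=vm.instancename.flag value=true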
>>>
>>> Regards
>>> ilya
>>>
>>>
>>
>

RE: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi Somesh,


      I did the requested test, and it is not able to connect (I did a telnet test on ports 80, 8080 and 443); it also does not respond to PING.

      I also did the same test from the management console (CloudStack), and from there it works.

      I'm not sure why it is not working from the SSVM, because I'm able to ping Google from it.
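
      For reference, the checks from inside the SSVM were roughly the following (VCENTER_IP is just a placeholder for our vCenter address; the built-in health check script reports the same kind of reachability problems):

      ping -c 3 VCENTER_IP
      telnet VCENTER_IP 80
      telnet VCENTER_IP 8080
      telnet VCENTER_IP 443
      /usr/local/cloud/systemvm/ssvm-check.sh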

Thank you !


Regards,
Cristian
On 28.01.2016 01:42:15, Somesh Naidu <so...@citrix.com> wrote:
Can you check if SSVM is able to talk to the vCenter on port 80?

Regards,
Somesh


-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today]
Sent: Wednesday, January 27, 2016 4:32 PM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Hello again,

   Just want to add additional logs from SSVM : http://pastebin.com/1J5ejc6D

   Please help to solve this issue.

Thanks All


Regards,
Cristian
On 27.01.2016 17:52:32, Cristian Ciobanu wrote:
Hello to All,

       I did a rebuild of this configuration and I have the same issue: the first VM deploy fails. Please see the full log: http://pastebin.com/ybEybCQJ   (you may see another error in the first part of the log; that one was related to the HOST IP configuration, which I forgot to change, but it is fixed now)

 Thank you!     


Regards,
Cristian
On 27.01.2016 03:42:35, ilya wrote:
err..

Looks like my message was trimmed, do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with minimal log.
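
A rough sequence (the job id below is only an example - use whatever id shows up in your run):

cat /dev/null > /var/log/cloudstack/management/management-server.log
# reproduce the failed VM deploy from the UI, then pull out only that attempt
cd /var/log/cloudstack/management
grep -r job-79 --color > /tmp/deploy-attempt.txt
# upload /tmp/deploy-attempt.txt (or the whole trimmed management-server.log) to pastebin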

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
>
> Thank you!
>
>
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya wrote:
> Can you post the full log to a pastebin-like site?
>
> Ideally, if you can trim the log before you simulate the error - that
> would be great.
>
> do
>
>> /var/log/cloudstack/management/management-server.log
>
> Then simulate the issue and upload to http://pastebin.com
>
> Thanks
> ilya
>
>
>
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before the deploy I enabled local storage in "Global Settings" and in the zone-level settings, and I removed all disk offerings with the shared option and added new ones with local storage. (CS 4.5.2 ShapeBlue)
>>
>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>
>>
>>
>>
>> B. I have this error when I try the following task (CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking):
>>
>> I installed a Windows 2008 VM from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking):
>>
>> root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHN
y

> A
> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl

> o
> ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAK
c

> H
> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbn
Q

> g
> Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZt
V
2
> 9
> ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z
1
d
> H
> VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2
s

> H
> DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts (I just see "Starting" for the VR status).
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes the system VMs - there is a zone-level setting that enables
>> local storage for them, and the service and disk offerings must be set to local as well.
>>
>> Regards
>> ilya
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured everything for local storage, and I also removed all the disk offerings and recreated them with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under the Service Offering → Disk Offering.
>>>
>>> Regards,
>>>
>>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>>> Timothy Lothering
>>> Solutions Architect
>>> Managed Services
>>> T: +27877415535
>>> F: +27877415100
>>> C: +27824904099
>>> E: tlothering@datacentrix.co.za
>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>
>>>
>>> Serious about performance, passionate about value
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>
>>> Thanks
>>> Glenn
>>>
>>>
>>>
>>> [ShapeBlue] [http://www.shapeblue.com]
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com [http://www.shapeblue.com]
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West Cape Town 7130 South Africa
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also type "local"
>>> and not shared or clustered.
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes the VMs in VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
>>>
>>> Regards
>>> ilya
>>>
>>>
>>
>

RE: CloudStack and VMware VM deploy issue

Posted by Somesh Naidu <So...@citrix.com>.
Can you check if SSVM is able to talk to the vCenter on port 80?

Regards,
Somesh
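A quick way to check this, assuming the usual system VM key location on the management server for VMware deployments (the SSVM IP and vCenter address below are placeholders):

ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>
# inside the SSVM, see whether port 80 on the vCenter answers
wget -qO- --timeout=10 http://<vcenter-ip-or-fqdn>/ >/dev/null && echo "port 80 reachable"
# the SSVM agent log (typically /var/log/cloud.log) may also show the failed vCenter connection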


-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today] 
Sent: Wednesday, January 27, 2016 4:32 PM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Hello again,

   Just want to add additional logs from the SSVM: http://pastebin.com/1J5ejc6D

   Please help me solve this issue.

Thanks All


Regards,
Cristian
On 27.01.2016 17:52:32, Cristian Ciobanu <cr...@istream.today> wrote:
Hello to All,

       I rebuilt this configuration and I have the same issue with the first VM deploy. Please see the full log: http://pastebin.com/ybEybCQJ (you may also see another error in the first part of the log; that was related to the HOST IP configuration, which I forgot to change, but it is fixed now).

 Thank you!     


Regards,
Cristian
On 27.01.2016 03:42:35, ilya <il...@gmail.com> wrote:
err..

Looks like my message was trimmed. Do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with a minimal log.
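In other words, something along these lines (assuming the default log location; the job ID is only an example):

cat /dev/null > /var/log/cloudstack/management/management-server.log
# reproduce the failing VM deploy from the UI, then pull out the fresh job trace
grep -r "job-79" /var/log/cloudstack/management/management-server.log > /tmp/deploy-failure.log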

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
>
> Thank you!
>
>
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya wrote:
> Can you post the full log to a pastebin-like site?
>
> Ideally, if you can trim the log before you simulate the error - it
> would be great.
>
> do
>
>> /var/log/cloudstack/management/management-server.log
>
> Then simulate the issue and upload it to http://pastebin.com
>
> Thanks
> ilya
>
>
>
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before deploying I enabled local storage in "Global Settings" and at the zone level, and I removed all disk offerings with the shared option and added ones with local storage. (CS 4.5.2 ShapeBlue)
>>
>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>
>>
>>
>>
>> B. I have this error when I try the following task (CS 4.5.2 ShapeBlue, VMware 5.5, local storage enabled with Basic networking):
>>
>> I installed a Windows 2008 VM from an ISO file; after that I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue, VMware 5.5, local storage enabled with Basic networking):
>>
>> root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHN
y

> A
> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl

> o
> ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAK
c

> H
> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbn
Q

> g
> Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZt
V
2
> 9
> ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z
1
d
> H
> VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2
s

> H
> DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have another issue with the Virtual Router: it never starts (I just see "Starting" for the VR status).
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes system VMs: there is a zone-level setting that enables
>> local storage, and the service and disk offerings must be set to local.
>>
>> Regards
>> ilya
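A sketch of the zone-level and system-VM pieces via CloudMonkey (commands as typed at the cloudmonkey prompt; the zone UUID is a placeholder, and system.vm.use.local.storage typically requires recreating the system VMs to take effect):

list zones filter=id,name,localstorageenabled
update zone id=<zone-uuid> localstorageenabled=true
update configuration name=system.vm.use.local.storage value=true
list storagepools filter=name,scope,state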
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured everything for local storage; I also removed all the disk offerings and recreated the same ones with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under the Service Offering → Disk Offering.
>>>
>>> Regards,
>>>
>>> Timothy Lothering
>>> Datacentrix
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also change your storage overprovisioning factor in the Global Settings from 2 to, say, 4 or 6.
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>
>>> Thanks
>>> Glenn
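If you do bump the overprovisioning factor, the global setting involved appears to be storage.overprovisioning.factor; a hedged one-liner from the cloudmonkey prompt:

update configuration name=storage.overprovisioning.factor value=4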
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also type "local"
>>> and not shared or clustered.
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names, but it has nothing to do with your issue above.
>>>
>>> Regards
>>> ilya
>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hello again,

   Just want to add additional logs from the SSVM: http://pastebin.com/1J5ejc6D

   Please help me solve this issue.

Thanks All


Regards,
Cristian
On 27.01.2016 17:52:32, Cristian Ciobanu <cr...@istream.today> wrote:
Hello to All,

       I rebuilt this configuration and I have the same issue with the first VM deploy. Please see the full log: http://pastebin.com/ybEybCQJ (you may also see another error in the first part of the log; that was related to the HOST IP configuration, which I forgot to change, but it is fixed now).

 Thank you!     


Regards,
Cristian
On 27.01.2016 03:42:35, ilya <il...@gmail.com> wrote:
err..

Looks like my message was trimmed. Do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with a minimal log.

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
>
> Thank you!
>
>
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya wrote:
> Can you post the full log to a pastebin-like site?
>
> Ideally, if you can trim the log before you simulate the error - it
> would be great.
>
> do
>
>> /var/log/cloudstack/management/management-server.log
>
> Then simulate the issue and upload it to http://pastebin.com
>
> Thanks
> ilya
>
>
>
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before deploying I enabled local storage in "Global Settings" and at the zone level, and I removed all disk offerings with the shared option and added ones with local storage. (CS 4.5.2 ShapeBlue)
>>
>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>>
>>
>>
>>
>> B. I have this error when I try the following task (CS 4.5.2 ShapeBlue, VMware 5.5, local storage enabled with Basic networking):
>>
>> I installed a Windows 2008 VM from an ISO file; after that I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue, VMware 5.5, local storage enabled with Basic networking):
>>
>> root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHN
y

> A
> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl

> o
> ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAK
c

> H
> NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbn
Q

> g
> Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZt
V
2
> 9
> ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z
1
d
> H
> VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2
s

> H
> DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have another issue with the Virtual Router: it never starts ( I just see "Starting" for the VR status ).
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes SystemVMs - there is a zone-level setting that enables
>> local storage, and the service and disk offerings must be set to local as well.
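>>
>> On the builds I have used, that means the zone's local storage flag plus the system.vm.use.local.storage global setting. From the CloudMonkey prompt that is roughly the following (the zone id is a placeholder):
>>
>> update configuration name=system.vm.use.local.storage value=true
>> update zone id=<zone-uuid> localstorageenabled=true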
>>
>> Regards
>> ilya
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured local storage; I also removed all disk offerings and created the same ones again, but with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under Service Offering → Disk Offering.
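>>>
>>> If it helps, you can double-check the storage type of the current offerings from the CloudMonkey prompt with something along these lines (the field names may vary slightly by version):
>>>
>>> list diskofferings filter=id,name,storagetype
>>> list serviceofferings filter=id,name,storagetype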
>>>
>>> Regards,
>>>
>>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>>> Timothy Lothering
>>> Solutions Architect
>>> Managed Services
>>> T: +27877415535
>>> F: +27877415100
>>> C: +27824904099
>>> E: tlothering@datacentrix.co.za
>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>
>>>
>>> Serious about performance, passionate about value
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data disk when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also raise the storage over-provisioning factor in the Global Settings from 2 to, say, 4 or 6.
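>>>
>>> Something like this should do it from the CloudMonkey prompt, assuming the setting on your build is storage.overprovisioning.factor (you may need to restart the management server for it to apply):
>>>
>>> update configuration name=storage.overprovisioning.factor value=4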
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>>
>>> Thanks
>>> Glenn
>>>
>>>
>>>
>>> [ShapeBlue] [http://www.shapeblue.com]
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com [http://www.shapeblue.com]
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>> Shape Blue Ltd is a company incorporated in England & Wales. ShapeBlue Services India LLP is a company incorporated in India and is operated under license from Shape Blue Ltd. Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is operated under license from Shape Blue Ltd. ShapeBlue SA Pty Ltd is a company registered by The Republic of South Africa and is traded under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
>>> This email and any attachments to it may be confidential and are intended solely for the use of the individual to whom it is addressed. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Shape Blue Ltd or related companies. If you are not the intended recipient of this email, you must neither take any action based upon its contents, nor copy or show it to anyone. Please contact the sender if you believe you have received this email in error.
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also type "local"
>>> and not shared or clustered.
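>>>
>>> As a rough sketch, a local-storage disk offering can be created from the CloudMonkey prompt with something like the following (the name and size here are just examples):
>>>
>>> create diskoffering name=local-20g displaytext="Local 20 GB" disksize=20 storagetype=local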
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes vSphere show the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
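>>>
>>> That one can also be flipped from the CloudMonkey prompt, e.g.:
>>>
>>> update configuration name=vm.instancename.flag value=true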
>>>
>>> Regards
>>> ilya
>>> Find out more about ShapeBlue and our range of CloudStack related services:
>>> IaaS Cloud Design & Build [http://shapeblue.com/iaas-cloud-design-and-build/] | CSForge – rapid IaaS deployment framework [http://shapeblue.com/csforge/]
>>> CloudStack Consulting [http://shapeblue.com/cloudstack-consultancy/] | CloudStack Software Engineering [http://shapeblue.com/cloudstack-software-engineering/]
>>> CloudStack Infrastructure Support [http://shapeblue.com/cloudstack-infrastructure-support/] | CloudStack Bootcamp Training Courses [http://shapeblue.com/cloudstack-training/]
>>>
>>> Disclaimer
>>> Everything in this e-mail and any attachments relating to the official business of Datacentrix Holdings Ltd. and its subsidiaries ('Datacentrix') is proprietary to Datacentrix. It is confidential, legally privileged and protected by law. Datacentrix does not own and endorse any other content. Views and opinions are those of the sender unless clearly stated as being that of Datacentrix. The person addressed in the e-mail is the sole authorised recipient. Please notify the sender immediately if it has unintentionally reached you and do not read, disclose or use the content in any way. Datacentrix cannot assure that the integrity of this communication has been maintained nor that it is free of errors, virus, interception or interference.
>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hello to All,

       I rebuilt this configuration and I have the same issue: the first VM deploy fails. Please see the full log: http://pastebin.com/ybEybCQJ   ( you may see another error in the first part of the log; that was related to the HOST IP configuration, which I forgot to change, but it is fixed now )

 Thank you!     


Regards,
Cristian
On 27.01.2016 03:42:35, ilya <il...@gmail.com> wrote:
err..

Looks like my message was trimmed, do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with minimal log.
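
Something along these lines should give a compact view once you reproduce the failed deploy (the grep pattern is only a suggestion):

cat /dev/null > /var/log/cloudstack/management/management-server.log
# reproduce the VM deploy from the UI, then pull out the interesting lines:
grep -E 'ERROR|Exception' /var/log/cloudstack/management/management-server.log > /tmp/deploy-failure.log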

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
>
> Thank you!
>
>
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya wrote:
> Can you post full log to pastebin like site?
>
> Ideally, if you can trim the log before you simulate the error - it
> would be great.
>
> do
>
>> /var/log/cloudstack/management/management-server.log
>
> Then simulate the issue and upload to http://pastebin.com
>
> Thanks
> ilya
>
>
>
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before deploying I enabled local storage in "Global Settings" and in the zone-level settings, and I removed all disk offerings with the shared option and added them again with local storage. ( CS 4.5.2 ShapeBlue)
>>
>> I did this in CS 4.5.2; I also used this version with KVM and local storage and it works, but on VMware I have this issue.
>>
>>
>>
>>
>> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> I installed Windows 2008 from an ISO file, then tried to create a template from the ROOT volume, but I get this error : Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> [root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHN
y

> A
> BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
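The most telling line in the dump above is "Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details", so the next place to look is inside the SSVM itself. A minimal sketch of doing that, assuming the usual 4.x layout (the key path and port below are the common defaults and may differ on your install):

    # from the management server; on VMware the system VMs are reached over their private IP on port 3922
    ssh -i /var/lib/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>
    # inside the SSVM: run the built-in health check, then look at the agent log
    /usr/local/cloud/systemvm/ssvm-check.sh
    tail -n 200 /var/log/cloud/cloud.out

If the SSVM cannot reach vCenter over HTTPS, template copies to the VMFS datastore tend to fail with exactly this kind of "Vmware context" error.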
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts ( I just see "Starting" for the VR status ).
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes system VMs - there is a zone-level setting that enables
>> local storage, and the service and disk offerings must also be set to local.
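A minimal sketch of what that looks like from the API side, using CloudMonkey (the offering names and sizes are illustrative, and changing the global below typically needs a management server restart to take effect):

    cloudmonkey update configuration name=system.vm.use.local.storage value=true
    # compute offering for user VMs on local storage
    cloudmonkey create serviceoffering name=local-small displaytext=local-small cpunumber=1 cpuspeed=500 memory=512 storagetype=local
    # matching data disk offering on local storage
    cloudmonkey create diskoffering name=local-20g displaytext=local-20g disksize=20 storagetype=local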
>>
>> Regards
>> ilya
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured it for local storage, and I also removed all the disk offerings and recreated them with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under the Service Offering → Disk Offering.
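A quick way to verify the offerings from the command line (a sketch using CloudMonkey; filter= just trims the displayed fields):

    cloudmonkey list diskofferings filter=name,storagetype
    cloudmonkey list serviceofferings filter=name,storagetype

Anything still showing storagetype: shared will keep the planner looking for shared pools.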
>>>
>>> Regards,
>>>
>>> Timothy Lothering
>>> Solutions Architect
>>> Managed Services
>>> T: +27877415535
>>> F: +27877415100
>>> C: +27824904099
>>> E: tlothering@datacentrix.co.za
>>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>>
>>>
>>> Serious about performance, passionate about value
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also change the storage overprovisioning factor in the Global Settings from 2 to something like 4 or 6.
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
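For the overprovisioning suggestion above, the relevant global is storage.overprovisioning.factor. A hedged sketch (the value 4 is only an example, and in some releases this can also be overridden per primary storage):

    cloudmonkey update configuration name=storage.overprovisioning.factor value=4
    # non-dynamic globals need a management server restart
    service cloudstack-management restart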
>>>
>>> Thanks
>>> Glenn
>>>
>>>
>>>
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also of type "local"
>>> and not shared or clustered.
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names, but it has nothing to do with your issue above.
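For the PS above, the change is a one-liner; the setting name comes straight from ilya's mail (sketch, via CloudMonkey):

    cloudmonkey update configuration name=vm.instancename.flag value=true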
>>>
>>> Regards
>>> ilya
>>>
>>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by ilya <il...@gmail.com>.
err..

Looks like my message was trimmed, do

cat /dev/null > /var/log/cloudstack/management/management-server.log

or similar to start with a minimal log.
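Same idea as the command above, plus capturing just the reproduction window so the pastebin stays small (a sketch; the tee target is arbitrary):

    : > /var/log/cloudstack/management/management-server.log    # truncate, same effect as cat /dev/null
    tail -f /var/log/cloudstack/management/management-server.log | tee /tmp/deploy-attempt.log
    # reproduce the failed deploy in the UI, stop tail with Ctrl-C, then upload /tmp/deploy-attempt.log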

On 1/26/16 1:53 PM, Cristian Ciobanu wrote:
> I will come back in a few hours with more details; I will start again from scratch.
> 
> Thank you!
> 
> 
> Regards,
> Cristian
> On 26.01.2016 23:43:54, ilya <il...@gmail.com> wrote:
> Can you post the full log to a pastebin-like site?
> 
> Ideally, if you can trim the log before you simulate the error, that would be great.
> 
> do
> 
>> /var/log/cloudstack/management/management-server.log
> 
> Then simulate the issue and upload to http://pastebin.com
> 
> Thanks
> ilya
> 
> 
> 
> On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
>> Hi ilya
>>
>>
>> Yes, before deploying I set local storage in "Global Settings", I also set it at the zone level, and I removed all disk offerings that used the shared option and added them back with local storage. ( CS 4.5.2 ShapeBlue )
>>
>> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
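One thing worth verifying is that the ESXi local datastore really registered as a host-scoped pool (a sketch via CloudMonkey; field names depend on the version):

    cloudmonkey list storagepools filter=name,scope,type,state

For local storage on VMware you would expect scope: HOST, type: VMFS and state: Up, which is consistent with what the allocator reports for pool 1 in the log under A below.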
>>
>>
>>
>>
>> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
>> Message: null
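For reference, the usual API flow for building a template from a ROOT volume looks like the sketch below (names and the ostypeid lookup are illustrative, and the VM normally has to be stopped first so the volume is not in use):

    # find the ROOT volume of the stopped VM
    cloudmonkey list volumes virtualmachineid=<vm-uuid> type=ROOT filter=id,name,state
    # pick a matching OS type id
    cloudmonkey list ostypes keyword="Windows Server 2008" filter=id,description
    # create the template from that volume
    cloudmonkey create template volumeid=<root-volume-uuid> name=win2008-tmpl displaytext=win2008-tmpl ostypeid=<os-type-uuid>

If the same NullPointerException shows up through the API as well, the management server log around that request is the place to look.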
>>
>>
>>
>>
>> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>>
>> [root@ns52xxxx management]# grep -r job-79 --color
>> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
>> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
>> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
>> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
>> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
>> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
>> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
>> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
>> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
>> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
>> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
>> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
>> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
>> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
>> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
>> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
>> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
>> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
>> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
>> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
>> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
>> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
>> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
>> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
>> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
>> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
>> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
>> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
>> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
>> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
>> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
>> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
>> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
>> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
>> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
>> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
oad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
>> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
>> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
>> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
>> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
>> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
>> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
>> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
>> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
>> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
>> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
>> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
>> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
>> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
>> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
>> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
>> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
>> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAK
cHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbn
QgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZt
V29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z
1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
>> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
>> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
>> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
>> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
>> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
>> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
>> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
>> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
>> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
>> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
>> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2
sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
>> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>>
>> ============================================================================================
>>
>> I just want to mention that I also tested CS 4.6 and 4.7, but there I hit a different issue with the Virtual Router: it never starts ( I just see "Starting" as the VR status ).
>>
>> Thank you
>>
>> Regards,
>> Cristian
>> On 26.01.2016 23:08:00, ilya wrote:
>> Cristian
>>
>> Not entirely clear on the strange part.
>>
>> If you want to use local storage, you must have all components reference
>> local storage.
>>
>> That includes SystemVMs - there is a zone-level setting that enables
>> local storage, and the service and disk offerings must be set to local.
>>
>> Regards
>> ilya
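
For reference, ilya's checklist maps roughly to a few CloudMonkey calls. This is only a sketch: system.vm.use.local.storage is the standard global setting for running system VMs on local storage, and most global-setting changes only take effect after a management-server restart.

  cloudmonkey update configuration name=system.vm.use.local.storage value=true
  cloudmonkey list serviceofferings issystem=true filter=name,storagetype
  cloudmonkey list diskofferings filter=name,storagetype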
>>
>> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>>> Hi Timothy
>>>
>>> This is very strange. I configured for local storage, and I also removed all the disk offerings and recreated them with local storage.
>>>
>>>
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Cristian
>>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>>> Hi Cristian,
>>>
>>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>>
>>> “No storage pools available for shared volume allocation, returning”
>>>
>>> Change this under Service Offerings → Disk Offerings.
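
A hedged CloudMonkey example of creating such an offering (the name and size below are made up; the relevant part is storagetype=local):

  cloudmonkey create diskoffering name=local-20g displaytext="Local 20 GB" disksize=20 storagetype=local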
>>>
>>> Regards,
>>>
>>> Timothy Lothering
>>> Solutions Architect
>>> Managed Services
>>> T: +27877415535
>>> F: +27877415100
>>> C: +27824904099
>>> E: tlothering@datacentrix.co.za
>>> www.datacentrix.co.za
>>>
>>>
>>> Serious about performance, passionate about value
>>>
>>>
>>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>>> Sent: Tuesday, 26 January 2016 7:27 AM
>>> To: users@cloudstack.apache.org
>>> Subject: RE: CloudStack and VMware VM deploy issue
>>>
>>> Hi Cristian,
>>>
>>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>>> and check that your disk offerings are all set to local storage.
>>>
>>> If you are just testing, you can also change the storage overprovisioning factor in the Global Settings from 2 to something like 4 or 6.
>>>
>>> Error
>>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
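
If you try the overprovisioning route, it is a single global setting; a sketch (the value 4 is only an example, and a management-server restart may be needed before it applies):

  cloudmonkey update configuration name=storage.overprovisioning.factor value=4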
>>>
>>> Thanks
>>> Glenn
>>>
>>>
>>>
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wagner@shapeblue.com
>>> w: www.shapeblue.com
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>>
>>> -----Original Message-----
>>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>>> Sent: Tuesday, 26 January 2016 1:29 AM
>>> To: users@cloudstack.apache.org
>>> Subject: Re: CloudStack and VMware VM deploy issue
>>>
>>> Cristian
>>>
>>> See response in-line
>>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable pools found
>>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>>> Cluster: 1
>>>
>>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>>
>>> If it works, make sure the disk offering you create is also type "local"
>>> and not shared or clustered.
>>>
>>> PS: I would also change the global setting vm.instancename.flag to true. This makes vSphere show the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
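
Both suggestions can be expressed as CloudMonkey calls; a rough sketch with obviously illustrative UUID placeholders (omitting diskofferingid deploys the VM with only a ROOT volume, i.e. no data disk):

  cloudmonkey update configuration name=vm.instancename.flag value=true
  cloudmonkey deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<local-offering-uuid> name=TEST12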
>>>
>>> Regards
>>> ilya
>>>
>>>
>>
> 

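The repeated failure in the quoted logs above ("Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details") points at the SSVM rather than the router. A rough way to inspect it, assuming the usual 4.x key location on the management server and the standard system-VM layout (exact paths can differ by system-VM template version):

  # from the management server, SSH to the SSVM's private IP on port 3922
  ssh -i /var/lib/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>
  # inside the SSVM: run the built-in health check and watch the agent log
  /usr/local/cloud/systemvm/ssvm-check.sh
  tail -f /var/log/cloud/cloud.out
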
Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
I will come back in a few hours with more details; I will start again from scratch.

Thank you!


Regards,
Cristian
On 26.01.2016 23:43:54, ilya <il...@gmail.com> wrote:
Can you post the full log to a pastebin-like site?

Ideally, if you can trim the log before you simulate the error - it
would be great.

do (this truncates the log file):

> /var/log/cloudstack/management/management-server.log

Then simulate the issue and upload to http://pastebin.com

Thanks
ilya
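
A small shell sketch of that workflow on the management server (paths follow the default package layout; the grep at the end is optional but makes the failing job easier to spot before uploading):

  # truncate the current log
  > /var/log/cloudstack/management/management-server.log
  # reproduce the failing deployment from the UI or API, then grab the log
  cp /var/log/cloudstack/management/management-server.log /tmp/deploy-failure.log
  grep -n "Unable to create" /tmp/deploy-failure.log
  # upload /tmp/deploy-failure.log to http://pastebin.com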



On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
> Hi ilya
>
>
> Yes, before deploying I set local storage in "Global Settings" and at the zone level, and I removed all disk offerings with the shared option and added them again with local storage. ( CS 4.5.2 ShapeBlue )
>
> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>
>
>
>
> B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>
> I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
> Message: null
>
>
>
>
> A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
>
> [root@ns52xxxx management]# grep -r job-79 --color
> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNy
ABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
o
ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKc
H
NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQ
g
Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV2
9
ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1d
H
VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2s
H
DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>
> ============================================================================================
>
> I just want to mention that I also tested CS 4.6 and 4.7, but there I hit a different issue with the Virtual Router: it never starts (the VR status just stays "Starting").
>
> Thank you
>
> Regards,
> Cristian
> On 26.01.2016 23:08:00, ilya wrote:
> Cristian
>
> Not entirely clear on the strange part.
>
> If you want to use local storage, you must have all components reference
> local storage.
>
> That includes the SystemVMs - there is a zone-level setting that enables
> local storage, and the service and disk offerings must be set to local as well.
>
> Regards
> ilya
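>
> For example, via CloudMonkey (a rough sketch only - the zone ID, offering names and sizes below are placeholders, and it assumes CloudMonkey is configured with admin API keys):
>
> # allow local storage in the zone (the zone-level setting mentioned above)
> update zone id=<zone-uuid> localstorageenabled=true
> # let the built-in system VM offerings use local storage
> update configuration name=system.vm.use.local.storage value=true
> # compute and disk offerings for guests must be created with storagetype=local
> create serviceoffering name=Local-Small displaytext=Local-Small cpunumber=1 cpuspeed=500 memory=512 storagetype=local
> create diskoffering name=Local-20GB displaytext=Local-20GB disksize=20 storagetype=local
> # existing system VMs may need to be destroyed and recreated to pick up local storage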
>
> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>> Hi Timothy
>>
>> This is very strange. I configured the zone for local storage, and I also removed all the disk offerings and recreated the same ones with local storage.
>>
>>
>>
>> Thank you!
>>
>> Regards,
>> Cristian
>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>> Hi Cristian,
>>
>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>
>> “No storage pools available for shared volume allocation, returning”
>>
>> Change this under the Service Offering → Disk Offering.
>>
>> Regards,
>>
>> Timothy Lothering
>> Solutions Architect, Managed Services
>> Datacentrix | www.datacentrix.co.za
>>
>>
>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>> Sent: Tuesday, 26 January 2016 7:27 AM
>> To: users@cloudstack.apache.org
>> Subject: RE: CloudStack and VMware VM deploy issue
>>
>> Hi Cristian,
>>
>> Like ilya said, try creating a new VM without a data disk when using the wizard,
>> and check that your disk offerings are all set to local storage.
>>
>> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
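>>
>> For example, via CloudMonkey (a sketch - storage.overprovisioning.factor is the usual global setting name for this; a management server restart may be needed before it takes effect):
>>
>> update configuration name=storage.overprovisioning.factor value=4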
>>
>> Error
>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>
>> Thanks
>> Glenn
>>
>>
>>
>> Glenn Wagner
>> Senior Consultant, ShapeBlue
>> glenn.wagner@shapeblue.com | www.shapeblue.com
>>
>> -----Original Message-----
>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>> Sent: Tuesday, 26 January 2016 1:29 AM
>> To: users@cloudstack.apache.org
>> Subject: Re: CloudStack and VMware VM deploy issue
>>
>> Cristian
>>
>> See response in-line
>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>> Cluster: 1
>>
>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>
>> If it works, make sure the disk offering you create is also type "local"
>> and not shared or clustered.
>>
>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names, but it has nothing to do with your issue above.
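>>
>> A quick CloudMonkey sketch of that change (it usually takes effect after a management server restart):
>>
>> update configuration name=vm.instancename.flag value=true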
>>
>> Regards
>> ilya
>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Please see: http://pastebin.com/D7KXRiL6. This is what I sent to you.


Thanks

Regards,
ShapeHost
www.istream.today
www.shape.host
+40.733.955.922
On 27.01.2016 00:05:13, Cristian Ciobanu <cr...@istream.today> wrote:
Short question: I saw this repo: http://packages.shapeblue.com/cloudstack/custom/vmware-diskcontroller/centos/4.5/ - do I have to use this one? Because I installed from this one: http://packages.shapeblue.com/cloudstack/main/centos/4.5/

Thanks!


Regards,
Cristian
On 26.01.2016 23:43:54, ilya <il...@gmail.com> wrote:
Can you post the full log to a pastebin-like site?

Ideally, if you can trim the log before you simulate the error, that
would be great.

To do that, truncate it first:

> /var/log/cloudstack/management/management-server.log

Then simulate the issue and upload the log to http://pastebin.com
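
For reference, the whole cycle on the management server looks roughly like this (assuming the default log location):

# 1. empty the current log
> /var/log/cloudstack/management/management-server.log
# 2. reproduce the failing VM deploy from the UI or API
# 3. grab the fresh log and paste its contents to pastebin
cp /var/log/cloudstack/management/management-server.log /tmp/deploy-failure.log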

Thanks
ilya



On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
> Hi ilya
>
>
> Yes, before deploying I enabled local storage in "Global Settings", I also set it in the zone-level settings, and I removed all the disk offerings with the shared option and added them back with local storage. (CS 4.5.2 ShapeBlue)
>
> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
>
>
>
>
> B. I get this error with the following task (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, basic networking):
>
> I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
> Message: null
>
>
>
>
> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, basic networking):
>
> [root@ns52xxxx management]# grep -r job-79 --color
> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNy
A
BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
o
ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKc
H
NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQ
g
Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV2
9
ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1d
H
VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2s
H
DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>
> ============================================================================================
>
> I just want to mention that I also tested CS 4.6 and 4.7, but there I hit a different issue with the Virtual Router: it never starts (I just see "Starting" as the VR status).
>
> Thank you
>
> Regards,
> Cristian
> On 26.01.2016 23:08:00, ilya wrote:
> Cristian
>
> Not entirely clear on the strange part.
>
> If you want to use local storage, you must have all components reference
> local storage.
>
> That includes SystemVMs - there is a zone-level setting that enables
> local storage, and the service and disk offerings must be set to local as well.
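>
> (As a minimal sketch of those changes via CloudMonkey - assuming the stock setting and API names, and that your build's updateZone accepts localstorageenabled; the offering names and sizes are only examples:)
>
> update configuration name=system.vm.use.local.storage value=true
> update zone id=<zone-uuid> localstorageenabled=true
> create serviceoffering name=local-small displaytext="Local Small" cpunumber=1 cpuspeed=500 memory=512 storagetype=local
> create diskoffering name=local-data displaytext="Local Data 10GB" disksize=10 storagetype=local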
>
> Regards
> ilya
>
> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>> Hi Timothy
>>
>> This is very strange. I configured everything for local storage; I also removed all the disk offerings and recreated the same ones with local storage.
>>
>>
>>
>> Thank you!
>>
>> Regards,
>> Cristian
>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>> Hi Cristian,
>>
>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>
>> “No storage pools available for shared volume allocation, returning”
>>
>> Change this under Service Offering → Disk Offering.
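>>
>> (A quick way to verify the offerings, sketched with CloudMonkey and assuming its filter= display syntax:)
>>
>> list diskofferings filter=name,storagetype
>> list serviceofferings filter=name,storagetype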
>>
>> Regards,
>>
>> Timothy Lothering
>> Solutions Architect
>> Managed Services
>> T: +27877415535
>> F: +27877415100
>> C: +27824904099
>> E: tlothering@datacentrix.co.za
>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>
>>
>>
>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>> Sent: Tuesday, 26 January 2016 7:27 AM
>> To: users@cloudstack.apache.org
>> Subject: RE: CloudStack and VMware VM deploy issue
>>
>> Hi Cristian,
>>
>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>> and check that your disk offerings are all set to local storage.
>>
>> If you are just testing, you can also raise the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
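>>
>> (For example, via CloudMonkey, assuming the standard global setting name:)
>>
>> update configuration name=storage.overprovisioning.factor value=4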
>>
>> Error
>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>
>> Thanks
>> Glenn
>>
>>
>>
>> Glenn Wagner
>> Senior Consultant, ShapeBlue
>> s: +27 21 527 0091 | m: +27 73 917 4111
>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>
>> -----Original Message-----
>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>> Sent: Tuesday, 26 January 2016 1:29 AM
>> To: users@cloudstack.apache.org
>> Subject: Re: CloudStack and VMware VM deploy issue
>>
>> Cristian
>>
>> See response in-line
>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>> Cluster: 1
>>
>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
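>>
>> (A hypothetical CloudMonkey equivalent of such a data-disk-free deploy; the UUIDs are placeholders for your own zone, template and compute offering:)
>>
>> deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid> name=test-no-datadisk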
>>
>> If it works, make sure the disk offering you create is also type "local"
>> and not shared or clustered.
>>
>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
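>>
>> (As a one-liner via CloudMonkey, assuming the standard setting name:)
>>
>> update configuration name=vm.instancename.flag value=true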
>>
>> Regards
>> ilya
>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Short question: I saw this repo: http://packages.shapeblue.com/cloudstack/custom/vmware-diskcontroller/centos/4.5/ . Do I have to use that one? I installed from this one: http://packages.shapeblue.com/cloudstack/main/centos/4.5/
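
(To double-check which repo the installed packages actually came from, a small sketch on the management server - yum only shows "From repo" for packages it installed itself:)

grep -r baseurl /etc/yum.repos.d/
yum info cloudstack-management | grep -i "from repo"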

Thanks!


Regards,
Cristian
On 26.01.2016 23:43:54, ilya <il...@gmail.com> wrote:
Can you post the full log to a pastebin-like site?

Ideally, trim the log before you simulate the error - that would be great.

do

> /var/log/cloudstack/management/management-server.log

Then simulate the issue and upload it to http://pastebin.com

Thanks
ilya
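
(As a concrete sketch of that capture, assuming the default log path; truncating first keeps the paste small:)

> /var/log/cloudstack/management/management-server.log    # empty the log
# ...retry the failing VM deployment in the UI, then:
grep -iE "error|exception" /var/log/cloudstack/management/management-server.log > /tmp/deploy-failure.txt
# upload /tmp/deploy-failure.txt (or the whole trimmed log) to pastebin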



On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
> Hi ilya
>
>
> Yes, before deploying I enabled local storage in "Global Settings", set it at the zone level as well, and removed all disk offerings with the shared option and added new ones with local storage. (CS 4.5.2 ShapeBlue)
>
> I did this in CS 4.5.2; I also used this version with KVM and local storage and it works, but on VMware I have this issue.
>
>
>
>
> B. I get this error when I try the following task (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, Basic networking):
>
> I installed a Windows 2008 VM from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
> Message: null
>
>
>
>
> A. I see this error when I try to deploy a VM from the default template (CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled, Basic networking):
>
> [root@ns52xxxx management]# grep -r job-79 --color
> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
> management-server.log:2016-01-26 06:28:29,307 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNy
A
BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
> management-server.log:2016-01-26 06:28:29,373 INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args: -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
> management-server.log:2016-01-26 06:28:30,498 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
o
ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received: { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
> management-server.log:2016-01-26 06:29:33,819 INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
> management-server.log:2016-01-26 06:29:33,821 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing: { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,842 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
> management-server.log:2016-01-26 06:29:33,867 INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
> management-server.log:2016-01-26 06:29:33,896 INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received: { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,079 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKc
H
NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQ
g
Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV2
9
ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1d
H
VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
> management-server.log:2016-01-26 06:29:34,113 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> catalina.out:INFO [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> catalina.out:INFO [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> catalina.out:INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> catalina.out:INFO [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> catalina.out:INFO [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2s
H
DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> catalina.out:INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
>
> ============================================================================================
>
> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts ( I just see "starting" for the VR status )
>
> Thank you
>
> Regards,
> Cristian
> On 26.01.2016 23:08:00, ilya wrote:
> Cristian
>
> Not entirely clear on the strange part.
>
> If you want to use local storage, you must have all components reference
> local storage.
>
> That includes SystemVMs - there is a zone-level setting that enables
> local storage, and the service and disk offerings must also be set to local.
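>
> For example, roughly like this with cloudmonkey (an illustrative sketch only - it assumes cloudmonkey is installed and pointed at this management server, and that <zone-uuid> is replaced with your zone's id; check Global Settings if the setting name differs in your version):
>
> # enable local storage for the zone and let the system VMs use it
> cloudmonkey update zone id=<zone-uuid> localstorageenabled=true
> cloudmonkey update configuration name=system.vm.use.local.storage value=true
> # restart the management server so the global setting takes effect
> service cloudstack-management restart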
>
> Regards
> ilya
>
> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>> Hi Timothy
>>
>> This is very strange. I configured it for local storage; I also removed all the disk offerings and recreated them with local storage.
>>
>>
>>
>> Thank you!
>>
>> Regards,
>> Cristian
>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>> Hi Cristian,
>>
>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>
>> “No storage pools available for shared volume allocation, returning”
>>
>> Change this under the Service Offering -> Disk Offering.
>>
>> Regards,
>>
>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>> Timothy Lothering
>> Solutions Architect
>> Managed Services
>> T: +27877415535
>> F: +27877415100
>> C: +27824904099
>> E: tlothering@datacentrix.co.za
>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>
>>
>> Serious about performance, passionate about value
>>
>>
>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>> Sent: Tuesday, 26 January 2016 7:27 AM
>> To: users@cloudstack.apache.org
>> Subject: RE: CloudStack and VMware VM deploy issue
>>
>> Hi Cristian,
>>
>> Like ilya said, try creating a new VM without a data drive when using the wizard,
>> and check that your disk offerings are all set to local storage.
>>
>> If you are just testing, you can also change the storage overprovisioning factor in the Global Settings from 2 to something like 4 or 6.
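>>
>> For example, with cloudmonkey (illustrative only; storage.overprovisioning.factor is the usual global setting name - adjust the value to taste):
>>
>> cloudmonkey update configuration name=storage.overprovisioning.factor value=4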
>>
>> Error
>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>
>> Thanks
>> Glenn
>>
>>
>>
>> [ShapeBlue] [http://www.shapeblue.com]
>> Glenn Wagner
>> Senior Consultant, ShapeBlue
>> s: +27 21 527 0091 | m: +27 73 917 4111
>> e: glenn.wagner@shapeblue.com
>> w: www.shapeblue.com [http://www.shapeblue.com]
>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>
>> -----Original Message-----
>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>> Sent: Tuesday, 26 January 2016 1:29 AM
>> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
>> Subject: Re: CloudStack and VMware VM deploy issue
>>
>> Cristian
>>
>> See response in-line
>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>> Cluster: 1
>>
>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>
>> If it works, make sure the disk offering you create is also type "local"
>> and not shared or clustered.
>>
>> PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
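>>
>> For instance, roughly like this with cloudmonkey (illustrative only - the offering name and disk size below are just examples):
>>
>> # create a disk offering backed by local storage
>> cloudmonkey create diskoffering name=Local-5GB displaytext=Local-5GB disksize=5 storagetype=local
>> # use the CloudStack instance names in vCenter instead of i-xxx-xxxx
>> cloudmonkey update configuration name=vm.instancename.flag value=true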
>>
>> Regards
>> ilya
>>
>>
>>
>

Re: CloudStack and VMware VM deploy issue

Posted by ilya <il...@gmail.com>.
Can you post the full log to a pastebin-like site?

Ideally, if you can trim the log before you simulate the error, that
would be great.

do

> /var/log/cloudstack/management/management-server.log

Then simulate the issue and upload to http://pastebin.com
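
For example, something like this on the management server (a rough sketch, assuming the default log location):

# truncate the current log so the next run starts clean
> /var/log/cloudstack/management/management-server.log
# reproduce the failing VM deploy from the UI, then upload the whole file,
# or at least everything for the new job id, e.g.
grep --color 'job-' /var/log/cloudstack/management/management-server.log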

Thanks
ilya



On 1/26/16 1:33 PM, Cristian Ciobanu wrote:
> Hi ilya
> 
>  
> Yes, before deploying I set local storage in "Global Settings", I also set it in the zone-level settings, and I removed all disk offerings with the shared option and added them back with local storage. ( CS 4.5.2 ShapeBlue)
> 
> I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
> 
> 
>   
> 
>  B. I have this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
> 
>       I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
> Message: null
> 
>   
> 
> 
>   A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - Local Storage enabled with Basic networking ):
> 
> root@ns52xxxx management]# grep -r job-79 --color
> management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
> management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
> management-server.log:2016-01-26 06:28:29,307 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNy
 A
BFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
> management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
> management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
> management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
> management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
> management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
> management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
> management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
> management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
> management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
> management-server.log:2016-01-26 06:28:29,373 INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
> management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
> management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
> management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
> management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
> management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found  - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
> management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
> management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
> management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
> management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
> management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
> management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args:  -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
> management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
> management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
> management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
> management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
> management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
> management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
> management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
> management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
> management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
> management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
> management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
> management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
> management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
> management-server.log:2016-01-26 06:28:30,498 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
> management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending  { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://downl
 o
ad.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
> management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received:  { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
> management-server.log:2016-01-26 06:29:33,819 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
> management-server.log:2016-01-26 06:29:33,821 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
> management-server.log:2016-01-26 06:29:33,842 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
> management-server.log:2016-01-26 06:29:33,867 INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
> management-server.log:2016-01-26 06:29:33,896 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
> management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
> management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
> management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
> management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
> management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
> management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
> management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
> management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
> management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
> management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,079 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
> management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKc
 H
NyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQ
 g
Zm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV2
 9
ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1d
 H
VyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZXhwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
> management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
> management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
> management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
> management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
> management-server.log:2016-01-26 06:29:34,113 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
> catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
> catalina.out:INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
> catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
> catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
> catalina.out:INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
> catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
> catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
> catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
> catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2s
 H
DFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
> catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
> 
> ============================================================================================
> 
> I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts ( I just see "starting" for the VR status )
> 
> Thank you 
> 
> Regards,
> Cristian
> On 26.01.2016 23:08:00, ilya <il...@gmail.com> wrote:
> Cristian
> 
> Not entirely clear on the strange part.
> 
> If you want to use local storage, you must have all components reference
> local storage.
> 
> That includes SystemVMs - there is a zone-level setting that enables
> local storage, and the service and disk offerings must also be set to local.
> 
> Regards
> ilya
> 
> On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
>> Hi Timothy
>>
>> This is very strange. I configured it for local storage; I also removed all the disk offerings and recreated them with local storage.
>>
>>
>>
>> Thank you!
>>
>> Regards,
>> Cristian
>> On 26.01.2016 09:11:35, Timothy Lothering wrote:
>> Hi Cristian,
>>
>> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>>
>> “No storage pools available for shared volume allocation, returning”
>>
>> Change this under the Service Offering -> Disk Offering.
>>
>> Regards,
>>
>> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
>> Timothy Lothering
>> Solutions Architect
>> Managed Services
>> T: +27877415535
>> F: +27877415100
>> C: +27824904099
>> E: tlothering@datacentrix.co.za
>> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>>
>>
>> Serious about performance, passionate about value
>>
>>
>> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
>> Sent: Tuesday, 26 January 2016 7:27 AM
>> To: users@cloudstack.apache.org
>> Subject: RE: CloudStack and VMware VM deploy issue
>>
>> Hi Cristian,
>>
>> Like ilya said, try creating a new VM without a data drive when using the wizard, and check that your disk offerings are all set to local storage.
>>
>> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
>>
>> Error
>> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>>
>> Thanks
>> Glenn
>>
>>
>>
>> Glenn Wagner
>> Senior Consultant, ShapeBlue
>> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>>
>> -----Original Message-----
>> From: ilya [mailto:ilya.mailing.lists@gmail.com]
>> Sent: Tuesday, 26 January 2016 1:29 AM
>> To: users@cloudstack.apache.org
>> Subject: Re: CloudStack and VMware VM deploy issue
>>
>> Cristian
>>
>> See response in-line
>> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found for volume:
>>> Vol[8|vm=7|DATADISK] under cluster: 1
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable pools found
>>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>>> job-33 ctx-480f2e68) No suitable storagePools found under this
>>> Cluster: 1
>>
>> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>>
>> If it works, make sure the disk offering you create is also of type "local" and not shared or clustered.
>>
>> PS: I would also change the global setting vm.instancename.flag to true. This makes vCenter use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names, but it has nothing to do with your issue above.
>>
>> Regards
>> ilya
>>
>>
> 

RE: CloudStack and VMware VM deploy issue

Posted by Somesh Naidu <So...@citrix.com>.
The problem seems to be indicated by the following lines in the log:

2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
2016-01-26 06:29:33,821 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.

There may be some inconsistency in the download status of template ID 7, CentOS 5.3(64-bit) no GUI (vSphere), on primary storage ID 1 (datastore-159). For some reason the copy command has failed. You'll need to look at the SSVM logs (search for sequence 3-5184769071010284298) for more details on why it failed.
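
For reference, one way to dig those details out is to SSH into the SSVM from the management server and grep its agent log for that sequence number. The key path and log location below are the usual 4.x defaults, so adjust them if your install differs:

    # from the management server; <ssvm-private-ip> is the SSVM private IP shown in the UI
    ssh -i /var/lib/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>

    # on the SSVM: find the failed CopyCommand by its sequence number
    grep -i '3-5184769071010284298' /var/log/cloud/cloud.log*

    # the nearby lines usually show why the VMware context could not be created
    grep -iE 'vmware context|vcenter' /var/log/cloud/cloud.log | tail -n 50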

Regards,
Somesh


-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today] 
Sent: Tuesday, January 26, 2016 4:34 PM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Hi ilya

 
Yes, before deploying I enabled local storage in "Global Settings" and at the zone level, and I removed all disk offerings with the shared option and added new ones with local storage. ( CS 4.5.2 ShapeBlue)

I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.


  

 B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):

      I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
Message: null

  


  A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):

[root@ns52xxxx management]# grep -r job-79 --color
management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
management-server.log:2016-01-26 06:28:29,307 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
management-server.log:2016-01-26 06:28:29,373 INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found  - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args:  -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
management-server.log:2016-01-26 06:28:30,498 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending  { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received:  { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
management-server.log:2016-01-26 06:29:33,819 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
management-server.log:2016-01-26 06:29:33,821 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,842 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
management-server.log:2016-01-26 06:29:33,867 INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
management-server.log:2016-01-26 06:29:33,896 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,079 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZX
hwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
management-server.log:2016-01-26 06:29:34,113 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
catalina.out:INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
catalina.out:INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring

============================================================================================

I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts (the status stays at "Starting").

Thank you 

Regards,
Cristian
On 26.01.2016 23:08:00, ilya <il...@gmail.com> wrote:
Cristian

Not entirely clear on the strange part.

If you want to use local storage, you must have all components reference local storage.

That includes system VMs: there is a zone-level setting that enables local storage, and the service and disk offerings must be set to local as well.
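
For reference, a rough CloudMonkey sketch of those two pieces (the zone UUID is a placeholder, and a management server restart is usually needed after changing global settings):

    # allow local storage for the zone
    cloudmonkey update zone id=<zone-uuid> localstorageenabled=true

    # let the system VMs (SSVM/CPVM/VR) use local storage
    cloudmonkey update configuration name=system.vm.use.local.storage value=true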

Regards
ilya

On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
> Hi Timothy
>
> This is very strange. I configured for local storage, and I also removed all disk offerings and recreated them with local storage.
>
>
>
> Thank you!
>
> Regards,
> Cristian
> On 26.01.2016 09:11:35, Timothy Lothering wrote:
> Hi Cristian,
>
> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>
> “No storage pools available for shared volume allocation, returning”
>
> Change this under Service Offering -> Disk Offering.
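
If it helps, a local-storage data disk offering can also be created from the CLI; a minimal CloudMonkey sketch with a placeholder name and size:

    # data disk offering backed by local storage
    cloudmonkey create diskoffering name=local-20GB displaytext="Local 20 GB" disksize=20 storagetype=local

    # confirm the storage type on the new offering
    cloudmonkey list diskofferings name=local-20GB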
>
> Regards,
>
> Timothy Lothering
> Solutions Architect, Managed Services, Datacentrix
> E: tlothering@datacentrix.co.za
>
> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
> Sent: Tuesday, 26 January 2016 7:27 AM
> To: users@cloudstack.apache.org
> Subject: RE: CloudStack and VMware VM deploy issue
>
> Hi Cristian,
>
> Like ilya said, try creating a new VM without a data drive when using the wizard, and check that your disk offerings are all set to local storage.
>
> If you are just testing, you can also change the storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
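
For reference, that knob is the storage.overprovisioning.factor global setting; something like this should do it from CloudMonkey (some global settings only take effect after a management server restart):

    cloudmonkey update configuration name=storage.overprovisioning.factor value=4
    cloudmonkey list configurations name=storage.overprovisioning.factor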
>
> Error
> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>
> Thanks
> Glenn
>
>
>
> Glenn Wagner
> Senior Consultant, ShapeBlue
> e: glenn.wagner@shapeblue.com | w: www.shapeblue.com
>
> -----Original Message-----
> From: ilya [mailto:ilya.mailing.lists@gmail.com]
> Sent: Tuesday, 26 January 2016 1:29 AM
> To: users@cloudstack.apache.org
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Cristian
>
> See response in-line
> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found for volume:
>> Vol[8|vm=7|DATADISK] under cluster: 1
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable storagePools found under this
>> Cluster: 1
>
> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>
> If it works, make sure the disk offering you create is also of type "local" and not shared or clustered.
>
> PS: I would also change the global setting vm.instancename.flag to true. This makes vCenter use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names, but it has nothing to do with your issue above.
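
For reference, a minimal sketch of that change with CloudMonkey, followed by the usual management server restart so the new value is picked up:

    cloudmonkey update configuration name=vm.instancename.flag value=true
    service cloudstack-management restart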
>
> Regards
> ilya
>
>

Re: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi ilya

 
Yes, before deploying I enabled local storage in "Global Settings" and at the zone level, and I removed all disk offerings with the shared option and added new ones with local storage. ( CS 4.5.2 ShapeBlue)

I did this in CS 4.5.2. I also used this version for KVM with local storage and it works, but on VMware I have this issue.
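
In case it is useful, the relevant settings and the pool scope can be double-checked with CloudMonkey, for example:

    # system VM local storage setting and the per-zone flag
    cloudmonkey list configurations name=system.vm.use.local.storage
    cloudmonkey list zones filter=name,localstorageenabled

    # a local primary storage pool should show scope HOST
    cloudmonkey list storagepools filter=name,scope,state,type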


  

 B. I get this error when I try the following task ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):

      I installed Windows 2008 from an ISO file; after this I tried to create a template from the ROOT volume, but I get this error: Failed to create templatecreate template from volume exception: Exception: java.lang.NullPointerException
Message: null
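
As a data point, the same operation can be tried through the API/CloudMonkey (the VM must be stopped, and the IDs below are placeholders); it sometimes returns a more useful error than the UI:

    cloudmonkey create template volumeid=<root-volume-uuid> name=win2008-tmpl displaytext="Windows 2008 template" ostypeid=<os-type-uuid> ispublic=false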

  


  A. I see this error when I try to deploy a VM from the default template ( CS 4.5.2 ShapeBlue - VMware 5.5 - local storage enabled with basic networking ):

[root@ns52xxxx management]# grep -r job-79 --color
management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
management-server.log:2016-01-26 06:28:29,307 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
management-server.log:2016-01-26 06:28:29,373 INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found  - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args:  -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
management-server.log:2016-01-26 06:28:30,498 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending  { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova [http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova]","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received:  { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
management-server.log:2016-01-26 06:29:33,819 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
management-server.log:2016-01-26 06:29:33,821 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,842 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
management-server.log:2016-01-26 06:29:33,867 INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
management-server.log:2016-01-26 06:29:33,896 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,079 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZX
hwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
management-server.log:2016-01-26 06:29:34,113 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
catalina.out:INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
catalina.out:INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring

============================================================================================
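
The line that stands out above is "Unable to create Vol[15|vm=12|ROOT]: Failed to create a Vmware context, check the management server logs or the ssvm log for details", so it looks like the CopyCommand sent to the SSVM (s-4-VM) is what failed rather than a real capacity problem. A rough way to look at the SSVM side (the IP is a placeholder and the SSH key path can differ between installs):

ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>   # log in to the SSVM over the management network
# then, inside the SSVM:
/usr/local/cloud/systemvm/ssvm-check.sh    # tests DNS, the NFS secondary storage mount and management server reachability
tail -n 200 /var/log/cloud/cloud.out       # SSVM agent log; the failed CopyCommand / vCenter session error should show up here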

I just want to mention that I also tested CS 4.6 and 4.7, but there I have a different issue with the Virtual Router: it never starts (I just see "Starting" as the VR status).
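
A quick sketch of what can be checked for that case with cloudmonkey (general approach only, nothing here is taken from my logs; 4.6+ also needs the newer systemvm template registered for VMware before routers will come up):

cloudmonkey list systemvms                                          # SSVM / console proxy state
cloudmonkey list routers listall=true                               # router state, version and the host it was placed on
cloudmonkey list templates templatefilter=all keyword=systemvm      # confirm which systemvm template is registered for VMware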

Thank you 

Regards,
Cristian
On 26.01.2016 23:08:00, ilya <il...@gmail.com> wrote:
Cristian

Not entirely clear on the strange part.

If you want to use local storage, you must have all components reference
local storage.

That includes SystemVMs - there is a zone level setting that enables
local storage, and the service and disk offerings must be set to local as well.

Regards
ilya

On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
> Hi Timothy
>
> This is very strange. I configured for local storage, and I also removed all disk offerings and recreated them with local storage.
>
>
>
> Thank you!
>
> Regards,
> Cristian
> On 26.01.2016 09:11:35, Timothy Lothering wrote:
> Hi Cristian,
>
> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>
> “No storage pools available for shared volume allocation, returning”
>
> Change this under the Service Offering -> Disk Offering.
>
> Regards,
>
> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
> Timothy Lothering
> Solutions Architect
> Managed Services
> T: +27877415535
> F: +27877415100
> C: +27824904099
> E: tlothering@datacentrix.co.za
> www.datacentrix.co.za [http://www.datacentrix.co.za/]
>
>
> Serious about performance, passionate about value
>
>
> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
> Sent: Tuesday, 26 January 2016 7:27 AM
> To: users@cloudstack.apache.org
> Subject: RE: CloudStack and VMware VM deploy issue
>
> Hi Cristian,
>
> Like ilya said, try creating a new VM without a data drive when using the wizard,
> and check that your disk offerings are all set to local storage.
>
> If you are just testing, you can also change your storage overprovisioning factor in the Global settings from 2 to 4 or 6.
>
> Error
> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
>
> Thanks
> Glenn
>
>
>
> [ShapeBlue] [http://www.shapeblue.com]
> Glenn Wagner
> Senior Consultant, ShapeBlue
> s: +27 21 527 0091 | m: +27 73 917 4111
> e: glenn.wagner@shapeblue.com
> w: www.shapeblue.com [http://www.shapeblue.com]
> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
> Shape Blue Ltd is a company incorporated in England & Wales. ShapeBlue Services India LLP is a company incorporated in India and is operated under license from Shape Blue Ltd. Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is operated under license from Shape Blue Ltd. ShapeBlue SA Pty Ltd is a company registered by The Republic of South Africa and is traded under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
> This email and any attachments to it may be confidential and are intended solely for the use of the individual to whom it is addressed. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Shape Blue Ltd or related companies. If you are not the intended recipient of this email, you must neither take any action based upon its contents, nor copy or show it to anyone. Please contact the sender if you believe you have received this email in error.
>
> -----Original Message-----
> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
> Sent: Tuesday, 26 January 2016 1:29 AM
> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
> Subject: Re: CloudStack and VMware VM deploy issue
>
> Cristian
>
> See response in-line
> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found for volume:
>> Vol[8|vm=7|DATADISK] under cluster: 1
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable storagePools found under this
>> Cluster: 1
>
> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
>
> If it works, make sure the disk offering you create is also type "local"
> and not shared or clustered.
>
> PS: I would also change a value in global settings for vm.instancename.flag to true. This is going to use the names given in cloudstack in vmware, vs using the pre-generated "i-xxx-xxxx" name. But this has nothing to do with your issue above.
>
> Regards
> ilya
> Find out more about ShapeBlue and our range of CloudStack related services:
> IaaS Cloud Design & Build [http://shapeblue.com/iaas-cloud-design-and-build/] | CSForge – rapid IaaS deployment framework [http://shapeblue.com/csforge/]
> CloudStack Consulting [http://shapeblue.com/cloudstack-consultancy/] | CloudStack Software Engineering [http://shapeblue.com/cloudstack-software-engineering/]
> CloudStack Infrastructure Support [http://shapeblue.com/cloudstack-infrastructure-support/] | CloudStack Bootcamp Training Courses [http://shapeblue.com/cloudstack-training/]
>
> Disclaimer
> Everything in this e-mail and any attachments relating to the official business of Datacentrix Holdings Ltd. and its subsidiaries ('Datacentrix') is proprietary to Datacentrix. It is confidential, legally privileged and protected by law. Datacentrix does not own and endorse any other content. Views and opinions are those of the sender unless clearly stated as being that of Datacentrix. The person addressed in the e-mail is the sole authorised recipient. Please notify the sender immediately if it has unintentionally reached you and do not read, disclose or use the content in any way. Datacentrix cannot assure that the integrity of this communication has been maintained nor that it is free of errors, virus, interception or interference.
>
>

Re: CloudStack and VMware VM deploy issue

Posted by ilya <il...@gmail.com>.
Cristian

Not entirely clear on the strange part.

If you want to use local storage, you must have all components reference
local storage.

That includes SystemVMs - there is a zone level setting that enables
local storage, and the service and disk offerings must be set to local as well.

Regards
ilya

On 1/25/16 11:51 PM, Cristian Ciobanu wrote:
> Hi Timothy
> 
>     This is very strange. I configured for local storage, and I also removed all disk offerings and recreated them with local storage.
> 
> 
> 
> Thank you!
> 
> Regards,
> Cristian
> On 26.01.2016 09:11:35, Timothy Lothering <tl...@datacentrix.co.za> wrote:
> Hi Cristian,
>  
> As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
>  
> “No storage pools available for shared volume allocation, returning”
>  
> Change this under the Service Offering -> Disk Offering.
>  
> Regards,
>  
> [www.datacentrix.co.za] [http://www.datacentrix.co.za]
> Timothy Lothering
> Solutions Architect
> Managed Services
> T: +27877415535
> F: +27877415100
> C: +27824904099
> E: tlothering@datacentrix.co.za
> www.datacentrix.co.za [http://www.datacentrix.co.za/]
> 
>           
> Serious about performance, passionate about value
> 
> 
> From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
> Sent: Tuesday, 26 January 2016 7:27 AM
> To: users@cloudstack.apache.org
> Subject: RE: CloudStack and VMware VM deploy issue
>  
> Hi Cristian,
> 
> Like ilya said, try creating a new VM without a data drive when using the wizard,
> and check that your disk offerings are all set to local storage.
> 
> If you are just testing, you can also change your storage overprovisioning factor in the Global settings from 2 to 4 or 6.
> 
> Error
> No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
> 
> Thanks
> Glenn
> 
> 
>  
> [ShapeBlue] [http://www.shapeblue.com]
> Glenn Wagner
> Senior Consultant, ShapeBlue
> s: +27 21 527 0091 | m: +27 73 917 4111
> e: glenn.wagner@shapeblue.com
> w: www.shapeblue.com [http://www.shapeblue.com]
> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
> Shape Blue Ltd is a company incorporated in England & Wales. ShapeBlue Services India LLP is a company incorporated in India and is operated under license from Shape Blue Ltd. Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is operated under license from Shape Blue Ltd. ShapeBlue SA Pty Ltd is a company registered by The Republic of South Africa and is traded under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
> This email and any attachments to it may be confidential and are intended solely for the use of the individual to whom it is addressed. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Shape Blue Ltd or related companies. If you are not the intended recipient of this email, you must neither take any action based upon its contents, nor copy or show it to anyone. Please contact the sender if you believe you have received this email in error.
>  
> -----Original Message-----
> From: ilya [mailto:ilya.mailing.lists@gmail.com [mailto:ilya.mailing.lists@gmail.com]]
> Sent: Tuesday, 26 January 2016 1:29 AM
> To: users@cloudstack.apache.org [mailto:users@cloudstack.apache.org]
> Subject: Re: CloudStack and VMware VM deploy issue
> 
> Cristian
> 
> See response in-line
> On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found for volume:
>> Vol[8|vm=7|DATADISK] under cluster: 1
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable pools found
>> management-server.log:2016-01-25 14:32:45,743 DEBUG
>> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
>> job-33 ctx-480f2e68) No suitable storagePools found under this
>> Cluster: 1
> 
> Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.
> 
> If it works, make sure the disk offering you create is also type "local"
> and not shared or clustered.
> 
> PS: I would also change a value in global settings for vm.instancename.flag to true. This is going to use the names given in cloudstack in vmware, vs using the pre-generated "i-xxx-xxxx" name. But this has nothing to do with your issue above.
> 
> Regards
> ilya
> Find out more about ShapeBlue and our range of CloudStack related services:
> IaaS Cloud Design & Build [http://shapeblue.com/iaas-cloud-design-and-build/] | CSForge – rapid IaaS deployment framework [http://shapeblue.com/csforge/]
> CloudStack Consulting [http://shapeblue.com/cloudstack-consultancy/] | CloudStack Software Engineering [http://shapeblue.com/cloudstack-software-engineering/]
> CloudStack Infrastructure Support [http://shapeblue.com/cloudstack-infrastructure-support/] | CloudStack Bootcamp Training Courses [http://shapeblue.com/cloudstack-training/]
> 
> Disclaimer
> Everything in this e-mail and any attachments relating to the official business of Datacentrix Holdings Ltd. and its subsidiaries ('Datacentrix') is proprietary to Datacentrix. It is confidential, legally privileged and protected by law. Datacentrix does not own and endorse any other content. Views and opinions are those of the sender unless clearly stated as being that of Datacentrix. The person addressed in the e-mail is the sole authorised recipient. Please notify the sender immediately if it has unintentionally reached you and do not read, disclose or use the content in any way. Datacentrix cannot assure that the integrity of this communication has been maintained nor that it is free of errors, virus, interception or interference.
> 
> 

RE: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi again,


     I found something interesting; please see the line highlighted in red:


[root@ns52xxxx management]# grep -r job-79 --color
management-server.log:2016-01-26 06:28:27,608 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-14:ctx-107b2062 job-78 ctx-9e2c938d) Sync job-79 execution on object VmWorkJobQueue.12
management-server.log:2016-01-26 06:28:29,292 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-336c2f61) Schedule queued job-79
management-server.log:2016-01-26 06:28:29,307 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
management-server.log:2016-01-26 06:28:29,320 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Executing AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}
management-server.log:2016-01-26 06:28:29,321 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Run VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:28:29,323 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":1,"rawParams":{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":12,"handlerName":"VirtualMachineManagerImpl"}
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: null new host id: null host id before state transition: null
management-server.log:2016-01-26 06:28:29,334 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully transitioned to start state for VM[User|i-2-12-VM] reservation id = ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to deploy VM, vm has dcId: 1 and podId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) advanceStart: DeploymentPlan is provided, using dcId:1, podId: 1, clusterId: 1, hostId: 1, poolId: null
management-server.log:2016-01-26 06:28:29,350 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: null, clusters: null, hosts: null
management-server.log:2016-01-26 06:28:29,360 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: []
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:28:29,361 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:28:29,363 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Looking for suitable pools for this host under zone: 1, pod: 1, cluster: 1
management-server.log:2016-01-26 06:28:29,366 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking suitable pools for volume (Id, Type): (15,ROOT)
management-server.log:2016-01-26 06:28:29,367 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We need to allocate new storagepool for this volume
management-server.log:2016-01-26 06:28:29,368 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Calling StoragePoolAllocators to find suitable pools
management-server.log:2016-01-26 06:28:29,368 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator trying to find storage pool to fit the vm
management-server.log:2016-01-26 06:28:29,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if storage pool is suitable, name: null ,poolId: 1
management-server.log:2016-01-26 06:28:29,373 INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
management-server.log:2016-01-26 06:28:29,374 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool 1 for storage, totalSize: 3992708972544, usedBytes: 10096738304, usedPct: 0.002528793952534624, disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found storage pool ns52xxx.ip-1xx-xx-2xx.net Local Storage of type VMFS with overprovisioning factor 1
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total over provisioned capacity calculated is 1 * 3992708972544
management-server.log:2016-01-26 06:28:29,380 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Total capacity of the pool ns52xxx.ip-1xx-xx-2xx.net Local Storage id: 1 is 3992708972544
management-server.log:2016-01-26 06:28:29,381 DEBUG [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking pool: 1 for volume allocation [Vol[15|vm=12|ROOT]], maxSize : 3992708972544, totalAllocatedSize : 3145728000, askingSize : 2147483648, allocated disable threshold: 0.85
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found suitable local storage pool 1, adding to list
management-server.log:2016-01-26 06:28:29,382 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) LocalStoragePoolAllocator returning 1 suitable storage pools
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
management-server.log:2016-01-26 06:28:29,382 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if host: 1 can access any suitable storage pool for volume: ROOT
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Host: 1 can access pool: 1
management-server.log:2016-01-26 06:28:29,383 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found a potential host id: 1 name: ns52xxx.ip-1xx-xx-2xx.net and associated storage pools for this VM
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,384 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deployment found  - P0=VM[User|i-2-12-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,408 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: null new host id: 1 host id before state transition: null
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) We are allocating VM, increasing the used capacity of this host:1
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used CPU: 3000 , Free CPU:26592 ,Requested CPU: 500
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Current Used RAM: 3758096384 , Free RAM:30489239552 ,Requested RAM: 536870912
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) CPU STATS after allocation: for host: 1, old used: 3000, old reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used:3500, reserved:0; requested cpu:500,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,415 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) RAM STATS after allocation: for host: 1, old used: 3758096384, old reserved: 0, total: 34247335936; new used: 4294967296, reserved: 0; requested mem: 536870912,alloc_from_last:false
management-server.log:2016-01-26 06:28:29,432 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM is being created in podId: 1
management-server.log:2016-01-26 06:28:29,440 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Network id=204 is already implemented
management-server.log:2016-01-26 06:28:29,466 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:29,491 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on 1
management-server.log:2016-01-26 06:28:29,510 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to prepare for Nic[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235]
management-server.log:2016-01-26 06:28:29,521 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is acquired for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,524 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Lock is released for network id 204 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(15|ROOT-->Pool(1))]
management-server.log:2016-01-26 06:28:29,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying dhcp entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,564 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.DhcpEntryCommand":{"vmMac":"06:80:4c:00:00:3b","vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","defaultRouter":"1xx.xx.xx.238","defaultDns":"1xx.xx.xx.230","duid":"00:03:00:01:06:80:4c:00:00:3b","isDefault":true,"executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.guest.ip":"1xx.xx.xx.230","router.ip":"0.0.0.0","router.name":"r-11-VM"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,565 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,566 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) Run command on VR: 1xx.xx.xx.230, script: edithosts.sh with args:  -m 06:80:4c:00:00:3b -4 1xx.xx.xx.235 -h TST11 -d 1xx.xx.xx.238 -n 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,902 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-37:ctx-233e6307 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: DhcpEntryCommand) edithosts.sh execution result: true
management-server.log:2016-01-26 06:28:29,904 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347953: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer } }
management-server.log:2016-01-26 06:28:29,917 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Applying userdata and password entry in network Ntwk[204|Guest|7]
management-server.log:2016-01-26 06:28:29,940 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,943 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"1xx.xx.xx.235","vmName":"TST11","executeInSequence":false,"accessDetails":{"zone.network.type":"Basic","router.name":"r-11-VM","router.ip":"0.0.0.0","router.guest.ip":"1xx.xx.xx.230"},"wait":0}}] }
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:29,944 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) Run command on VR: 1xx.xx.xx.230, script: savepassword.sh with args: -v 1xx.xx.xx.235 -p saved_password
management-server.log:2016-01-26 06:28:30,220 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: SavePasswordCommand) savepassword.sh execution result: true
management-server.log:2016-01-26 06:28:30,222 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) In Basic zone mode, use router's guest IP for SSH control. guest IP : 1xx.xx.xx.230
management-server.log:2016-01-26 06:28:30,223 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) Run command on VR: 1xx.xx.xx.230, script: vmdata.py with args: -d eyIxNTguNjkuMTcuMjM1IjpbWyJ1c2VyZGF0YSIsInVzZXItZGF0YSIsbnVsbF0sWyJtZXRhZGF0YSIsInNlcnZpY2Utb2ZmZXJpbmciLCJTbWFsbCBJbnN0YW5jZSJdLFsibWV0YWRhdGEiLCJhdmFpbGFiaWxpdHktem9uZSIsIlBPRDAwMSJdLFsibWV0YWRhdGEiLCJsb2NhbC1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJsb2NhbC1ob3N0bmFtZSIsIlRTVDExIl0sWyJtZXRhZGF0YSIsInB1YmxpYy1pcHY0IiwiMTU4LjY5LjE3LjIzNSJdLFsibWV0YWRhdGEiLCJwdWJsaWMtaG9zdG5hbWUiLCJUU1QxMSJdLFsibWV0YWRhdGEiLCJpbnN0YW5jZS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJ2bS1pZCIsIjhkMmI5NzVlLTYyNWItNDI0Mi04YTg2LTNjMTA0MzkzYmIyMCJdLFsibWV0YWRhdGEiLCJwdWJsaWMta2V5cyIsbnVsbF0sWyJtZXRhZGF0YSIsImNsb3VkLWlkZW50aWZpZXIiLCJDbG91ZFN0YWNrLXtiNWE2M2FlMi1jOWU0LTQ2YmMtODA3ZS03NjkwMWM0ZTNlOGV9Il1dfQ==
management-server.log:2016-01-26 06:28:30,447 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-293:ctx-7bf50e13 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: VmDataCommand) vmdata.py execution result: true
management-server.log:2016-01-26 06:28:30,449 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347954: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 10, { Answer, Answer } }
management-server.log:2016-01-26 06:28:30,450 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:28:30,453 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Checking if we need to prepare 1 volumes for VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:28:30,465 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Image
management-server.log:2016-01-26 06:28:30,469 DEBUG [o.a.c.s.d.PrimaryDataStoreImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Not found (templateId:7poolId:1) in template_spool_ref, persisting it
management-server.log:2016-01-26 06:28:30,493 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) template 7 is already in store:1, type:Primary
management-server.log:2016-01-26 06:28:30,495 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Found template centos53-x64 in storage pool 1 with VMTemplateStoragePool id: 15
management-server.log:2016-01-26 06:28:30,496 DEBUG [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Acquire lock on VMTemplateStoragePool 15 with timeout 3600 seconds
management-server.log:2016-01-26 06:28:30,498 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:28:30,517 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
management-server.log:2016-01-26 06:28:30,557 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Sending  { Cmd , MgmtId: 37591044139110, via: 3(s-4-VM), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/1/7/34da6edf-8871-3509-9fad-9001d7f9575a.ova","origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://172.11.0.2/secondary","_role":"Image"}},"name":"centos53-x64","hypervisorType":"VMware"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://download.cloud.com/releases/2.2.0/CentOS5.3-x86_64.ova","uuid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","id":7,"format":"OVA","accountId":1,"checksum":"f6f881b7f2292948d8494db837fe0f47","hvm":false,"displayText":"CentOS 5.3(64-bit) no GUI (vSphere)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"39f5d196-1569-41a7-996c-06abb1325d23","id":1,"poolType":"VMFS","host":"VMFS datastore: datastore-159","path":"datastore-159","port":0,"url":"VMFS://VMFS datastore: datastore-159/datastore-159/?ROLE=Primary&STOREUUID=39f5d196-1569-41a7-996c-06abb1325d23"}},"name":"centos53-x64","hypervisorType":"VMware"}},"executeInSequence":true,"options":{},"wait":10800}}] }
management-server.log:2016-01-26 06:29:33,767 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 3-5184769071010284298: Received:  { Ans: , MgmtId: 37591044139110, via: 3, Ver: v1, Flags: 110, { CopyCmdAnswer } }
management-server.log:2016-01-26 06:29:33,819 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
management-server.log:2016-01-26 06:29:33,821 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to create Vol[15|vm=12|ROOT]:Failed to create a Vmware context, check the management server logs or the ssvm log for details
management-server.log:2016-01-26 06:29:33,821 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
management-server.log:2016-01-26 06:29:33,836 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cleaning up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,841 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns52xxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}}] }
management-server.log:2016-01-26 06:29:33,842 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) find VM i-2-12-VM on host
management-server.log:2016-01-26 06:29:33,867 INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
management-server.log:2016-01-26 06:29:33,867 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) load VM cache on host
management-server.log:2016-01-26 06:29:33,896 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
management-server.log:2016-01-26 06:29:33,897 DEBUG [c.c.a.t.Request] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Seq 1-1380071810812347957: Received:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, { StopAnswer } }
management-server.log:2016-01-26 06:29:33,908 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Service SecurityGroup is not supported in the network id=204
management-server.log:2016-01-26 06:29:33,911 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Changing active number of nics for network id=204 on -1
management-server.log:2016-01-26 06:29:33,945 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Asking VirtualRouter to release NicProfile[26-12-ae2f6b6d-693a-4dc6-8e80-28f1ca7e9777-1xx.xx.xx.235-vlan://untagged
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully released network resources for the vm VM[User|i-2-12-VM]
management-server.log:2016-01-26 06:29:33,946 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Successfully cleanued up resources for the vm VM[User|i-2-12-VM] in Starting state
management-server.log:2016-01-26 06:29:33,956 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Deploy avoids pods: [], clusters: [], hosts: [1]
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlanner allocation algorithm: com.cloud.deploy.FirstFitPlanner@46c88784
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 500, requested ram: 536870912
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Is ROOT volume READY (pool already allocated)?: No
management-server.log:2016-01-26 06:29:33,957 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 1
management-server.log:2016-01-26 06:29:33,958 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) The specified host is in avoid set
management-server.log:2016-01-26 06:29:33,959 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Cannnot deploy to specified host, returning.
management-server.log:2016-01-26 06:29:34,012 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: 1
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total CPU: 29592 and CPU after applying overprovisioning: 29592
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Hosts's actual total RAM: 34247335936 and RAM after applying overprovisioning: 34247335936
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 29592, total with overprovisioning: 29592; new used: 3000,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,020 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) release mem from host: 1, old used: 4294967296,reserved: 0, total: 34247335936; new used: 3758096384,reserved:0; movedfromreserved: false,moveToReserveredfalse
management-server.log:2016-01-26 06:29:34,079 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,079 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 06:29:34,080 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 12, job origin: 78
management-server.log:2016-01-26 06:29:34,080 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
management-server.log:2016-01-26 06:29:34,083 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Complete async job-79, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyADdjb20uY2xvdWQuZXhjZXB0aW9uLkluc3VmZmljaWVudFNlcnZlckNhcGFjaXR5RXhjZXB0aW9uAAAAAFZNT3ACAAFaABVhZmZpbml0eUdyb3Vwc0FwcGxpZWR4cgAxY29tLmNsb3VkLmV4Y2VwdGlvbi5JbnN1ZmZpY2llbnRDYXBhY2l0eUV4Y2VwdGlvbgAAAABWTU91AgACTAACaWR0ABBMamF2YS9sYW5nL0xvbmc7TAAFc2NvcGV0ABFMamF2YS9sYW5nL0NsYXNzO3hyACJjb20uY2xvdWQuZXhjZXB0aW9uLkNsb3VkRXhjZXB0aW9ueeiOG_HkRMcCAAJMAAtjc0Vycm9yQ29kZXQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAAZpZExpc3R0ABVMamF2YS91dGlsL0FycmF5TGlzdDt4cgATamF2YS5sYW5nLkV4Y2VwdGlvbtD9Hz4aOxzEAgAAeHIAE2phdmEubGFuZy5UaHJvd2FibGXVxjUnOXe4ywMABEwABWNhdXNldAAVTGphdmEvbGFuZy9UaHJvd2FibGU7TAANZGV0YWlsTWVzc2FnZXQAEkxqYXZhL2xhbmcvU3RyaW5nO1sACnN0YWNrVHJhY2V0AB5bTGphdmEvbGFuZy9TdGFja1RyYWNlRWxlbWVudDtMABRzdXBwcmVzc2VkRXhjZXB0aW9uc3QAEExqYXZhL3V0aWwvTGlzdDt4cHEAfgANdAA0VW5hYmxlIHRvIGNyZWF0ZSBhIGRlcGxveW1lbnQgZm9yIFZNW1VzZXJ8aS0yLTEyLVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAAFXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAKTAAIZmlsZU5hbWVxAH4ACkwACm1ldGhvZE5hbWVxAH4ACnhwAAADs3QAJmNvbS5jbG91ZC52bS5WaXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbC5qYXZhdAAQb3JjaGVzdHJhdGVTdGFydHNxAH4AEQAAEYFxAH4AE3EAfgAUcQB-ABVzcQB-ABH____-dAAkc3VuLnJlZmxlY3QuTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsdAAdTmF0aXZlTWV0aG9kQWNjZXNzb3JJbXBsLmphdmF0AAdpbnZva2Uwc3EAfgARAAAAOXEAfgAYcQB-ABl0AAZpbnZva2VzcQB-ABEAAAArdAAoc3VuLnJlZmxlY3QuRGVsZWdhdGluZ01ldGhvZEFjY2Vzc29ySW1wbHQAIURlbGVnYXRpbmdNZXRob2RBY2Nlc3NvckltcGwuamF2YXEAfgAcc3EAfgARAAACXnQAGGphdmEubGFuZy5yZWZsZWN0Lk1ldGhvZHQAC01ldGhvZC5qYXZhcQB-ABxzcQB-ABEAAABrdAAiY29tLmNsb3VkLnZtLlZtV29ya0pvYkhhbmRsZXJQcm94eXQAGlZtV29ya0pvYkhhbmRsZXJQcm94eS5qYXZhdAAPaGFuZGxlVm1Xb3JrSm9ic3EAfgARAAASHXEAfgATcQB-ABRxAH4AJnNxAH4AEQAAAGd0ACBjb20uY2xvdWQudm0uVm1Xb3JrSm9iRGlzcGF0Y2hlcnQAGFZtV29ya0pvYkRpc3BhdGNoZXIuamF2YXQABnJ1bkpvYnNxAH4AEQAAAhl0AD9vcmcuYXBhY2hlLmNsb3Vkc3RhY2suZnJhbWV3b3JrLmpvYnMuaW1wbC5Bc3luY0pvYk1hbmFnZXJJbXBsJDV0ABhBc3luY0pvYk1hbmFnZXJJbXBsLmphdmF0AAxydW5JbkNvbnRleHRzcQB-ABEAAAAxdAA-b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlJDF0ABtNYW5hZ2VkQ29udGV4dFJ1bm5hYmxlLmphdmF0AANydW5zcQB-ABEAAAA4dABCb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dCQxdAAaRGVmYXVsdE1hbmFnZWRDb250ZXh0LmphdmF0AARjYWxsc3EAfgARAAAAZ3QAQG9yZy5hcGFjaGUuY2xvdWRzdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHRxAH4ANnQAD2NhbGxXaXRoQ29udGV4dHNxAH4AEQAAADVxAH4AOXEAfgA2dAAOcnVuV2l0aENvbnRleHRzcQB-ABEAAAAudAA8b3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5NYW5hZ2VkQ29udGV4dFJ1bm5hYmxlcQB-ADJxAH4AM3NxAH4AEQAAAe5xAH4ALXEAfgAucQB-ADNzcQB-ABEAAAHXdAAuamF2YS51dGlsLmNvbmN1cnJlbnQuRXhlY3V0b3JzJFJ1bm5hYmxlQWRhcHRlcnQADkV4ZWN1dG9ycy5qYXZhcQB-ADdzcQB-ABEAAAEGdAAfamF2YS51dGlsLmNvbmN1cnJlbnQuRnV0dXJlVGFza3QAD0Z1dHVyZVRhc2suamF2YXEAfgAzc3EAfgARAAAEeXQAJ2phdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvcnQAF1RocmVhZFBvb2xFeGVjdXRvci5qYXZhdAAJcnVuV29ya2Vyc3EAfgARAAACZ3QALmphdmEudXRpbC5jb25jdXJyZW50LlRocmVhZFBvb2xFeGVjdXRvciRXb3JrZXJxAH4ASHEAfgAzc3EAfgARAAAC6XQAEGphdmEubGFuZy5UaHJlYWR0AAtUaHJlYWQuamF2YXEAfgAzc3IAJmphdmEudXRpbC5Db2xsZWN0aW9ucyRVbm1vZGlmaWFibGVMaXN0_A8lMbXsjhACAAFMAARsaXN0cQB-AAx4cgAsamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUNvbGxlY3Rpb24ZQgCAy173HgIAAUwAAWN0ABZMamF2YS91dGlsL0NvbGxlY3Rpb247eHBzcgATamF2YS51dGlsLkFycmF5TGlzdHiB0h2Zx2GdAwABSQAEc2l6ZX
hwAAAAAHcEAAAAAHhxAH4AVHhzcgARamF2YS5sYW5nLkludGVnZXIS4qCk94GHOAIAAUkABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAQ73NxAH4AUwAAAAB3BAAAAAB4c3IADmphdmEubGFuZy5Mb25nO4vkkMyPI98CAAFKAAV2YWx1ZXhxAH4AVgAAAAAAAAABdnIAF2NvbS5jbG91ZC5kYy5EYXRhQ2VudGVysvEK66fneL4CAAB4cAA
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Publish async job-79 complete on message bus
management-server.log:2016-01-26 06:29:34,084 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs related to job-79
management-server.log:2016-01-26 06:29:34,085 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Update db status for job-79
management-server.log:2016-01-26 06:29:34,087 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Wake up jobs joined with job-79 and disjoin all subjobs created from job- 79
management-server.log:2016-01-26 06:29:34,111 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Done executing com.cloud.vm.VmWorkStart for job-79
management-server.log:2016-01-26 06:29:34,113 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Add job-79 into job monitoring
catalina.out:INFO  [c.c.s.StorageManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Storage pool null (1) does not supply IOPS capacity, assuming enough capacity
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) lock is acquired for VMTemplateStoragePool 15
catalina.out:INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) releasing lock for VMTemplateStoragePool 15
catalina.out:INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Unable to contact resource.
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-12-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-74:ctx-e2f53743 ns52xxx.ip-1xx-xx-2xx.net, job-78/job-79, cmd: StopCommand) VM i-2-12-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79 ctx-67168166) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-12-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Unable to complete AsyncJobVO {id:79, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAADHQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 37591044139110, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 26 06:28:27 EST 2016}, job origin:78
catalina.out:INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-25:ctx-7bfca42f job-78/job-79) Remove job-79 from job monitoring
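
The failure in the trace above appears to be "Failed to create a Vmware context" while copying the template to primary storage, which is what then leads to the host being added to the avoid set and the InsufficientServerCapacityException. If it helps, this is roughly where I would look next (a sketch only; the log paths and SSH key location assume a standard package install and may differ on your setup):

    # on the management server: pull the stack trace around the VMware context error
    grep -B 5 -A 30 "Failed to create a Vmware context" /var/log/cloudstack/management/management-server.log

    # on the SSVM (s-4-VM), reached from the management server over its private IP on port 3922;
    # the key is typically under /var/cloudstack/management/.ssh/ or /var/lib/cloudstack/management/.ssh/
    ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip>
    tail -n 200 /var/log/cloud/cloud.out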


Thank you!

Regards,
Cristian
On 26.01.2016 13:17:56, Cristian Ciobanu <cr...@istream.today> wrote:
Hi Timothy,

     No, no tags were set.


    I ran the test again; please see the output below (maybe it shows the problem more clearly).

apilog.log:2016-01-26 05:03:43,469 INFO  [a.c.c.a.ApiServer] (catalina-exec-10:ctx-f88627c5 ctx-683cc439) (userId=2 accountId=2 sessionId=97066990E92A5A302ED321063ED998C7) 7x.11x.7x.7 -- GET command=listVirtualMachines&response=json&listAll=true&page=1&pagesize=20&_=1453802621405 200 {"listvirtualmachinesresponse":{"count":1,"virtualmachine":[{"id":"16a33873-cefd-45ac-8436-9198c72d56fa","name":"TST3","displayname":"TST3","account":"admin","domainid":"f14ff256-c393-11e5-8b5b-0cc47a7d402a","domain":"ROOT","created":"2016-01-26T04:06:05-0500","state":"Error","haenable":false,"zoneid":"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89","zonename":"POD001","templateid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","templatename":"CentOS 5.3(64-bit) no GUI (vSphere)","templatedisplaytext":"CentOS 5.3(64-bit) no GUI (vSphere)","passwordenabled":false,"serviceofferingid":"8c446bee-3419-4de0-81b0-a4405b4e95e4","serviceofferingname":"Small Instance","diskofferingid":"e9d92dfc-fdfa-4171-b2c6-ba276a383ee0","diskofferingname":"Small","cpunumber":1,"cpuspeed":500,"memory":512,"guestosid":"f1a6c90a-c393-11e5-8b5b-0cc47a7d402a","rootdeviceid":0,"rootdevicetype":"ROOT","securitygroup":[],"nic":[{"id":"81b59b98-4103-4315-af46-e598a53fc1fb","networkid":"6b1833ff-951f-49f8-ad2b-7c15b9ebbfa1","networkname":"defaultGuestNetwork","netmask":"255.255.255.240","gateway":"1xx.6x.1x.x38","ipaddress":"1xx.6x.1x.x38","broadcasturi":"vlan://untagged","traffictype":"Guest","type":"Shared","isdefault":true,"macaddress":"06:84:f8:00:00:3b"}],"hypervisor":"VMware","instancename":"i-2-10-VM","tags":[],"affinitygroup":[],"displayvm":true,"isdynamicallyscalable":false,"ostypeid":12}]}}
management-server.log:2016-01-26 04:06:05,376 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating entries for VM: VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,377 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating nics for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,378 DEBUG [o.a.c.e.o.NetworkOrchestrator] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating nic for vm VM[User|i-2-10-VM] in network Ntwk[204|Guest|7] with requested profile NicProfile[0-0-null-null-null
management-server.log:2016-01-26 04:06:05,390 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating disks for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,419 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocation completed for VM: VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,419 DEBUG [c.c.v.UserVmManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Successfully allocated DB entry for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:07,285 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully transitioned to start state for VM[User|i-2-10-VM] reservation id = 1957af30-1cc8-4a60-b4d7-99f83920913b
management-server.log:2016-01-26 04:06:07,367 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Deployment found  - P0=VM[User|i-2-10-V ], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(13|DATADISK-->Pool(1))]
management-server.log:2016-01-26 04:08:15,968 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Checking if we need to prepare 2 volumes for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:09:19,333 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Cleaning up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:09:19,338 DEBUG [c.c.a.t.Request] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Seq 1-1380071810812347665: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:09:19,338 DEBUG [c.c.a.t.Request] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Seq 1-1380071810812347665: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:09:19,339 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
management-server.log:2016-01-26 04:09:19,382 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) find VM i-2-10-VM on host
management-server.log:2016-01-26 04:09:19,382 INFO  [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM not found in host cache
management-server.log:2016-01-26 04:09:19,411 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
management-server.log:2016-01-26 04:09:19,412 DEBUG [c.c.a.t.Request] (DirectAgent-100:ctx-43e47ff0) Seq 1-1380071810812347665: Processing:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, [{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM i-2-10-VM is no longer in vSphere","wait":0}}] }
management-server.log:2016-01-26 04:09:19,465 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully released network resources for the vm VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:09:19,465 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully cleanued up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:09:19,582 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:09:19,582 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:09:21,280 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully transitioned to start state for VM[User|i-2-10-VM] reservation id = 257fa2e6-8774-41be-80dc-cd5f7121aa62
management-server.log:2016-01-26 04:09:21,380 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Deployment found  - P0=VM[User|i-2-10-V ], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(13|DATADISK-->Pool(1))]
management-server.log:2016-01-26 04:09:22,424 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Checking if we need to prepare 2 volumes for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:10:25,770 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Cleaning up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:10:25,775 DEBUG [c.c.a.t.Request] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Seq 1-1380071810812347670: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:10:25,775 DEBUG [c.c.a.t.Request] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Seq 1-1380071810812347670: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:10:25,776 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
management-server.log:2016-01-26 04:10:25,814 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) find VM i-2-10-VM on host
management-server.log:2016-01-26 04:10:25,814 INFO  [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM not found in host cache
management-server.log:2016-01-26 04:10:25,843 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
management-server.log:2016-01-26 04:10:25,845 DEBUG [c.c.a.t.Request] (DirectAgent-401:ctx-5ee3c027) Seq 1-1380071810812347670: Processing:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, [{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM i-2-10-VM is no longer in vSphere","wait":0}}] }
management-server.log:2016-01-26 04:10:25,884 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully released network resources for the vm VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:10:25,884 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully cleanued up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:10:26,002 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:10:26,003 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:10:26,072 DEBUG [c.c.v.UserVmManagerImpl] (API-Job-Executor-11:ctx-aee174b3 job-67 ctx-1576cc94) Destroying vm VM[User|i-2-10-VM] as it failed to create on Host with Id:null
management-server.log:Caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:Caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1




Regards,
Cristian
On 26.01.2016 12:03:23, Timothy Lothering <tl...@datacentrix.co.za> wrote:
Hi Cristian,

Thanks for your reply. Have you set any "Storage Tags" on the Disk Offering?
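
If it helps, a quick way to check is with CloudMonkey from the management server (just a sketch; look at the "storagetype" and "tags" fields in the output):

    cloudmonkey list diskofferings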

-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today]
Sent: Tuesday, 26 January 2016 9:52 AM
To: users@cloudstack.apache.org
Subject: RE: CloudStack and VMware VM deploy issue

Hi Timothy

    This is very strange. I configured for local storage, and I also removed all the disk offerings and recreated the same ones, but with local storage.



Thank you!

Regards,
Cristian
On 26.01.2016 09:11:35, Timothy Lothering wrote:
Hi Cristian,
 
As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
 
“No storage pools available for shared volume allocation, returning”
 
Change this under the Service Offering -> Disk Offering.
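
If you prefer the CLI, a local-storage disk offering can be created with something along these lines (a sketch only; the offering name and size below are just examples):

    cloudmonkey create diskoffering name=Small-Local displaytext=Small-Local disksize=5 storagetype=local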
 
Regards,
 
Timothy Lothering | Solutions Architect, Managed Services
T: +27877415535
F: +27877415100
C: +27824904099
E: tlothering@datacentrix.co.za
www.datacentrix.co.za [http://www.datacentrix.co.za/]

          
Serious about performance, passionate about value


From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
Sent: Tuesday, 26 January 2016 7:27 AM
To: users@cloudstack.apache.org
Subject: RE: CloudStack and VMware VM deploy issue
 
Hi Cristian,

As ilya said, try creating a new VM without a data drive when using the wizard, and check that your disk offerings are all set to local storage.

If you are just testing, you can also change your storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.

Error
No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
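
For the over-provisioning change, the global setting should be storage.overprovisioning.factor; from CloudMonkey it would look roughly like this (a sketch; global settings usually need a management server restart to take effect):

    cloudmonkey update configuration name=storage.overprovisioning.factor value=4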

Thanks
Glenn


 
Glenn Wagner
Senior Consultant, ShapeBlue
s: +27 21 527 0091 | m: +27 73 917 4111
e: glenn.wagner@shapeblue.com
w: www.shapeblue.com
a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
 
-----Original Message-----
From: ilya [mailto:ilya.mailing.lists@gmail.com]
Sent: Tuesday, 26 January 2016 1:29 AM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Cristian

See response in-line
On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable pools found for volume:
> Vol[8|vm=7|DATADISK] under cluster: 1
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable pools found
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable storagePools found under this
> Cluster: 1

Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.

If it works, make sure the disk offering you create is also type "local"
and not shared or clustered.

PS: I would also change the global setting vm.instancename.flag to true. This makes vSphere show the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.
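
For example, roughly (a sketch; run from the management server with CloudMonkey configured, then restart the management server for the setting to take effect):

    cloudmonkey update configuration name=vm.instancename.flag value=true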

Regards
ilya
Disclaimer
Everything in this e-mail and any attachments relating to the official business of Datacentrix Holdings Ltd. and its subsidiaries ('Datacentrix') is proprietary to Datacentrix. It is confidential, legally privileged and protected by law. Datacentrix does not own and endorse any other content. Views and opinions are those of the sender unless clearly stated as being that of Datacentrix. The person addressed in the e-mail is the sole authorised recipient. Please notify the sender immediately if it has unintentionally reached you and do not read, disclose or use the content in any way. Datacentrix cannot assure that the integrity of this communication has been maintained nor that it is free of errors, virus, interception or interference.


RE: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi Timothy,

     No, no tags were set.


    I did the test again; please see below (maybe this shows it more clearly):

apilog.log:2016-01-26 05:03:43,469 INFO  [a.c.c.a.ApiServer] (catalina-exec-10:ctx-f88627c5 ctx-683cc439) (userId=2 accountId=2 sessionId=97066990E92A5A302ED321063ED998C7) 7x.11x.7x.7 -- GET command=listVirtualMachines&response=json&listAll=true&page=1&pagesize=20&_=1453802621405 200 {"listvirtualmachinesresponse":{"count":1,"virtualmachine":[{"id":"16a33873-cefd-45ac-8436-9198c72d56fa","name":"TST3","displayname":"TST3","account":"admin","domainid":"f14ff256-c393-11e5-8b5b-0cc47a7d402a","domain":"ROOT","created":"2016-01-26T04:06:05-0500","state":"Error","haenable":false,"zoneid":"9bb80e96-9d6b-4a11-b92f-9bfc754ebc89","zonename":"POD001","templateid":"f164ac64-c393-11e5-8b5b-0cc47a7d402a","templatename":"CentOS 5.3(64-bit) no GUI (vSphere)","templatedisplaytext":"CentOS 5.3(64-bit) no GUI (vSphere)","passwordenabled":false,"serviceofferingid":"8c446bee-3419-4de0-81b0-a4405b4e95e4","serviceofferingname":"Small Instance","diskofferingid":"e9d92dfc-fdfa-4171-b2c6-ba276a383ee0","diskofferingname":"Small","cpunumber":1,"cpuspeed":500,"memory":512,"guestosid":"f1a6c90a-c393-11e5-8b5b-0cc47a7d402a","rootdeviceid":0,"rootdevicetype":"ROOT","securitygroup":[],"nic":[{"id":"81b59b98-4103-4315-af46-e598a53fc1fb","networkid":"6b1833ff-951f-49f8-ad2b-7c15b9ebbfa1","networkname":"defaultGuestNetwork","netmask":"255.255.255.240","gateway":"1xx.6x.1x.x38","ipaddress":"1xx.6x.1x.x38","broadcasturi":"vlan://untagged","traffictype":"Guest","type":"Shared","isdefault":true,"macaddress":"06:84:f8:00:00:3b"}],"hypervisor":"VMware","instancename":"i-2-10-VM","tags":[],"affinitygroup":[],"displayvm":true,"isdynamicallyscalable":false,"ostypeid":12}]}}
management-server.log:2016-01-26 04:06:05,376 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating entries for VM: VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,377 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating nics for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,378 DEBUG [o.a.c.e.o.NetworkOrchestrator] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating nic for vm VM[User|i-2-10-VM] in network Ntwk[204|Guest|7] with requested profile NicProfile[0-0-null-null-null
management-server.log:2016-01-26 04:06:05,390 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocating disks for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,419 DEBUG [c.c.v.VirtualMachineManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Allocation completed for VM: VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:05,419 DEBUG [c.c.v.UserVmManagerImpl] (catalina-exec-2:ctx-6c7ee388 ctx-b9b54f99) Successfully allocated DB entry for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:06:07,285 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully transitioned to start state for VM[User|i-2-10-VM] reservation id = 1957af30-1cc8-4a60-b4d7-99f83920913b
management-server.log:2016-01-26 04:06:07,367 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Deployment found  - P0=VM[User|i-2-10-V ], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(13|DATADISK-->Pool(1))]
management-server.log:2016-01-26 04:08:15,968 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Checking if we need to prepare 2 volumes for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:09:19,333 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Cleaning up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:09:19,338 DEBUG [c.c.a.t.Request] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Seq 1-1380071810812347665: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:09:19,338 DEBUG [c.c.a.t.Request] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Seq 1-1380071810812347665: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:09:19,339 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
management-server.log:2016-01-26 04:09:19,382 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) find VM i-2-10-VM on host
management-server.log:2016-01-26 04:09:19,382 INFO  [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM not found in host cache
management-server.log:2016-01-26 04:09:19,411 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
management-server.log:2016-01-26 04:09:19,412 DEBUG [c.c.a.t.Request] (DirectAgent-100:ctx-43e47ff0) Seq 1-1380071810812347665: Processing:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, [{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM i-2-10-VM is no longer in vSphere","wait":0}}] }
management-server.log:2016-01-26 04:09:19,465 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully released network resources for the vm VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:09:19,465 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Successfully cleanued up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:09:19,582 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:09:19,582 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:09:21,280 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully transitioned to start state for VM[User|i-2-10-VM] reservation id = 257fa2e6-8774-41be-80dc-cd5f7121aa62
management-server.log:2016-01-26 04:09:21,380 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Deployment found  - P0=VM[User|i-2-10-V ], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(12|ROOT-->Pool(1)Volume(12|ROOT-->Pool(1), Volume(13|DATADISK-->Pool(1))]
management-server.log:2016-01-26 04:09:22,424 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Checking if we need to prepare 2 volumes for VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:10:25,770 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Cleaning up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:10:25,775 DEBUG [c.c.a.t.Request] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Seq 1-1380071810812347670: Sending  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:10:25,775 DEBUG [c.c.a.t.Request] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Seq 1-1380071810812347670: Executing:  { Cmd , MgmtId: 37591044139110, via: 1(ns5xxxx.ip-1xx-xx-2xx.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}}] }
management-server.log:2016-01-26 04:10:25,776 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
management-server.log:2016-01-26 04:10:25,814 DEBUG [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) find VM i-2-10-VM on host
management-server.log:2016-01-26 04:10:25,814 INFO  [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM not found in host cache
management-server.log:2016-01-26 04:10:25,843 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
management-server.log:2016-01-26 04:10:25,845 DEBUG [c.c.a.t.Request] (DirectAgent-401:ctx-5ee3c027) Seq 1-1380071810812347670: Processing:  { Ans: , MgmtId: 37591044139110, via: 1, Ver: v1, Flags: 110, [{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM i-2-10-VM is no longer in vSphere","wait":0}}] }
management-server.log:2016-01-26 04:10:25,884 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully released network resources for the vm VM[User|i-2-10-VM]
management-server.log:2016-01-26 04:10:25,884 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Successfully cleanued up resources for the vm VM[User|i-2-10-VM] in Starting state
management-server.log:2016-01-26 04:10:26,002 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:10:26,003 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
management-server.log:2016-01-26 04:10:26,072 DEBUG [c.c.v.UserVmManagerImpl] (API-Job-Executor-11:ctx-aee174b3 job-67 ctx-1576cc94) Destroying vm VM[User|i-2-10-VM] as it failed to create on Host with Id:null
management-server.log:Caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-100:ctx-43e47ff0 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-68, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-19:ctx-661ff1c1 job-67/job-68 ctx-804188db) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) Executing resource StopCommand: {"isProxy":false,"executeInSequence":true,"checkBeforeCleanup":false,"vmName":"i-2-10-VM","wait":0}
catalina.out:INFO  [c.c.h.v.m.HostMO] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM not found in host cache
catalina.out:INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-401:ctx-5ee3c027 ns5xxxx.ip-1xx-xx-2xx.net, job-67/job-70, cmd: StopCommand) VM i-2-10-VM is no longer in vSphere
catalina.out:ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-20:ctx-01700109 job-67/job-70 ctx-7018c496) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
catalina.out:Caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-10-VM]Scope=interface com.cloud.dc.DataCenter; id=1
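
One way to dig further into an InsufficientServerCapacityException like the ones above is to pull the allocator decisions for the failing job out of the management server log and compare them with the state and allocation of the local primary storage pool. A rough sketch, assuming CloudMonkey is configured against this management server and the default packaged log location (the job id is just the one from the logs above):

   # allocator decisions for the failing deploy job
   grep "job-67" /var/log/cloudstack/management/management-server.log | grep -i suitable

   # state, scope and allocated space of the primary storage pools
   cloudmonkey list storagepools

If the local pool is in any state other than Up, or its disksizeallocated is close to disksizetotal (after the over-provisioning factor), that would explain why the planner gives up even though the pool exists.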




Regards,
Cristian
On 26.01.2016 12:03:23, Timothy Lothering <tl...@datacentrix.co.za> wrote:
Hi Cristian,

Thanks for your reply. Have you set any "Storage Tags" on the Disk Offering?



RE: CloudStack and VMware VM deploy issue

Posted by Timothy Lothering <tl...@datacentrix.co.za>.
Hi Cristian,

Thanks for your reply. Have you set any "Storage Tags" on the Disk Offering?
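
A quick way to rule out a tag mismatch from the CLI is to compare the tags on the disk offering with the tags on the primary storage pools, roughly like this (a sketch via CloudMonkey; the offering name is a placeholder):

   cloudmonkey list diskofferings name=Small   # look at the storagetype and tags fields
   cloudmonkey list storagepools               # look at the tags on each pool

If the offering carries a storage tag that none of the pools have, the planner will log exactly the kind of "No suitable pools found" messages seen earlier in the thread.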

-----Original Message-----
From: Cristian Ciobanu [mailto:cristian.c@istream.today] 
Sent: Tuesday, 26 January 2016 9:52 AM
To: users@cloudstack.apache.org
Subject: RE: CloudStack and VMware VM deploy issue

Hi Timothy

    This is very strange. I configured for local storage, and I also removed all the disk offerings and recreated them with local storage.



Thank you!

Regards,
Cristian


RE: CloudStack and VMware VM deploy issue

Posted by Cristian Ciobanu <cr...@istream.today>.
Hi Timothy

    This is very strange. I configured for local storage, and I also removed all the disk offerings and recreated them with local storage.



Thank you!

Regards,
Cristian
On 26.01.2016 09:11:35, Timothy Lothering <tl...@datacentrix.co.za> wrote:
Hi Cristian,
 
As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:
 
“No storage pools available for shared volume allocation, returning”
 
Change this under the Service Offering --> Disk Offering.
 
Regards,
 


RE: CloudStack and VMware VM deploy issue

Posted by Timothy Lothering <tl...@datacentrix.co.za>.
Hi Cristian,

As per Ilya’s mail, check that the DATA disk offering is set to use “Storage Type – Local”; the logs provided indicate that no shared storage can be found:

“No storage pools available for shared volume allocation, returning”

Change this under the Service Offering --> Disk Offering.
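
As far as I know the storage type of an existing disk offering cannot be changed after the fact, so the usual route is to create a new data disk offering that is explicitly local, roughly like this (a sketch via CloudMonkey; the name and size are only examples):

   cloudmonkey create diskoffering name="Data-Local" displaytext="5 GB data disk on local storage" disksize=5 storagetype=local

and then select that offering for the data disk in the deploy wizard.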

Regards,

From: Glenn Wagner [mailto:glenn.wagner@shapeblue.com]
Sent: Tuesday, 26 January 2016 7:27 AM
To: users@cloudstack.apache.org
Subject: RE: CloudStack and VMware VM deploy issue

Hi Cristian,

Like ilya said, try to create a new VM without a data drive when using the wizard,
and check that your disk offerings are all set to local storage.

If you are just testing, you can also change your storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.

Error
No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1

Thanks
Glenn


Timothy Lothering
Solutions Architect
Managed Services

T: +27877415535
F: +27877415100
C: +27824904099
E: tlothering@datacentrix.co.za



RE: CloudStack and VMware VM deploy issue

Posted by Glenn Wagner <gl...@shapeblue.com>.
Hi Cristian,

Like ilya said, try to create a new VM without a data drive when using the wizard,
and check that your disk offerings are all set to local storage.

If you are just testing, you can also change your storage over-provisioning factor in the Global Settings from 2 to something like 4 or 6.
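
The setting in question should be storage.overprovisioning.factor; it can be changed in the UI under Global Settings, or roughly like this with CloudMonkey (a sketch; the value is just an example, and depending on the version it may also be overridable per primary storage or need a management server restart to take effect):

   cloudmonkey update configuration name=storage.overprovisioning.factor value=4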

Error
No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1

Thanks
Glenn



[ShapeBlue]<http://www.shapeblue.com>
Glenn Wagner
Senior Consultant, ShapeBlue
s: +27 21 527 0091 | m: +27 73 917 4111
e: glenn.wagner@shapeblue.com | w: www.shapeblue.com<http://www.shapeblue.com>
a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa






-----Original Message-----
From: ilya [mailto:ilya.mailing.lists@gmail.com]
Sent: Tuesday, 26 January 2016 1:29 AM
To: users@cloudstack.apache.org
Subject: Re: CloudStack and VMware VM deploy issue

Cristian

See response in-line
On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable pools found for volume:
> Vol[8|vm=7|DATADISK] under cluster: 1
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable pools found
> management-server.log:2016-01-25 14:32:45,743 DEBUG
> [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8
> job-33 ctx-480f2e68) No suitable storagePools found under this
> Cluster: 1

Can you deploy without a data disk? That is, when going through the wizard, don't select any additional disks.

If it works, make sure the disk offering you create is also type "local"
and not shared or clustered.

PS: I would also change the global setting vm.instancename.flag to true. This makes VMware use the names given in CloudStack instead of the pre-generated "i-xxx-xxxx" names. But this has nothing to do with your issue above.

Regards
ilya

Re: CloudStack and VMware VM deploy issue

Posted by ilya <il...@gmail.com>.
Cristian

See response in-line
On 1/25/16 11:51 AM, Cristian Ciobanu wrote:
> management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable pools found for volume: Vol[8|vm=7|DATADISK] under cluster: 1
> management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable pools found
> management-server.log:2016-01-25 14:32:45,743 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-9:ctx-0c67fce8 job-33 ctx-480f2e68) No suitable storagePools found under this Cluster: 1

Can you deploy without a data disk? That is, when going through the wizard,
don't select any additional disks.
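
For reference, deploying without a data disk through the API just means leaving out the diskofferingid parameter, roughly like this via CloudMonkey (a sketch; the UUIDs are placeholders for your zone, template and compute offering):

   # no diskofferingid given, so no data disk is created
   cloudmonkey deploy virtualmachine zoneid=<zone-uuid> templateid=<template-uuid> serviceofferingid=<offering-uuid> name=TEST12 displayname=TEST12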

If it works, make sure the disk offering you create is also type "local"
and not shared or clustered.

PS: I would also change the global setting vm.instancename.flag to true. This
makes VMware use the names given in CloudStack instead of the pre-generated
"i-xxx-xxxx" names. But this has nothing to do with your issue above.
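
For completeness, that flag is just another global configuration value, so it can be flipped roughly like this with CloudMonkey (a sketch; as far as I know it only affects instances created after the change, and a management server restart may be needed before it takes effect):

   cloudmonkey update configuration name=vm.instancename.flag value=true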

Regards
ilya