Posted to commits@cloudstack.apache.org by GitBox <gi...@apache.org> on 2020/01/28 22:24:47 UTC

[GitHub] [cloudstack] kovukono opened a new issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

URL: https://github.com/apache/cloudstack/issues/3849
 
 
   <!--
   Verify first that your issue/request is not already reported on GitHub.
   Also test if the latest release and master branch are affected too.
   Always add information AFTER these HTML comments, but there is no need to delete the comments.
   -->
   
   ##### ISSUE TYPE
   <!-- Pick one below and delete the rest -->
    * Bug Report
   
   ##### COMPONENT NAME
   <!--
   Categorize the issue, e.g. API, VR, VPN, UI, etc.
   -->
   ~~~
   VR
   ~~~
   
   ##### CLOUDSTACK VERSION
   <!--
   New line separated list of affected versions, commit ID for issues on master branch.
   -->
   
   ~~~
   4.12
   ~~~
   
   ##### CONFIGURATION
   <!--
   Information about the configuration if relevant, e.g. basic network, advanced networking, etc.  N/A otherwise
   -->
   A site-to-site or remote access VPN must be enabled on the VR.
   
   ##### OS / ENVIRONMENT
   <!--
   Information about the environment if relevant, N/A otherwise
   -->
   Xenserver hypervisor
   
   ##### SUMMARY
   <!-- Explain the problem/feature briefly -->
   When creating the VR, CloudStack provides two metadata sets to the hypervisor: one with only the eth0 device, and then one with both eth0 and eth1. When a VR with a VPN is restarted, CloudStack does not appear to send that second set and never attempts to create the eth1 device. As a result, the iptables rules are misconfigured and drop ICMP traffic entirely, which causes the VR's GetDomRVersionCmd check to fail. Because the check fails, CloudStack immediately stops the router. If the router is set as HA, CloudStack will try indefinitely to restart this VR and fail each time. Recreating the VPC with cleanup appears to be the only available fix.
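
   To confirm the broken state, one can log into the VR over its link-local control IP from the XenServer host and look for the missing eth1 device and the ICMP handling in iptables. The snippet below is only a minimal sketch, assuming the usual system-VM SSH key on dom0 (/root/.ssh/id_rsa.cloud, port 3922) and the control IP shown in the logs further down; adjust both for your environment.
   ~~~
   # Sketch only: run from the XenServer host (dom0) carrying the VR.
   import subprocess

   VR_CONTROL_IP = "169.254.0.27"   # control IP taken from the boot args in the log below
   SSH = ["ssh", "-i", "/root/.ssh/id_rsa.cloud", "-p", "3922",
          "-o", "StrictHostKeyChecking=no", f"root@{VR_CONTROL_IP}"]

   def run_in_vr(cmd):
       """Run a shell command inside the VR and return its output."""
       return subprocess.run(SSH + [cmd], capture_output=True, text=True).stdout

   # After the reboot, eth1 is never plugged on the affected VR.
   print(run_in_vr("ip link show eth1 || echo 'eth1 missing'"))

   # The INPUT chain shows the ICMP handling that makes the version check fail.
   print(run_in_vr("iptables -S INPUT | grep -i icmp"))
   ~~~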
   
   ##### STEPS TO REPRODUCE
   <!--
   For bugs, show exactly how to reproduce the problem, using a minimal test-case. Use Screenshots if accurate.
   
   For new features, show how the feature would be used.
   -->
   
   <!-- Paste example playbooks or commands between quotes below -->
   ~~~
   Create a VPC, and then add a remote access VPN to that VPC. Restart the VR.
   ~~~
   
   <!-- You can also paste gist.github.com links for larger files -->
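
   For reference, the same reproduction can be scripted against the CloudStack API. This is a minimal sketch only, assuming the third-party `cs` Python client and placeholder endpoint, credentials, zone and offering IDs (none of which come from this report); in a real run you would also wait for each async job (queryAsyncJobResult) before the next step.
   ~~~
   # Sketch only: create a VPC, enable a remote access VPN on it, then reboot its VR.
   from cs import CloudStack

   api = CloudStack(endpoint="https://mgmt.example.com/client/api",   # placeholder
                    key="API_KEY", secret="SECRET_KEY")               # placeholders

   vpc = api.createVPC(name="vpn-reboot-repro", displaytext="vpn-reboot-repro",
                       cidr="10.23.0.0/22", vpcofferingid="VPC_OFFERING_ID",
                       zoneid="ZONE_ID")

   # Acquire a public IP for the VPC and enable the remote access VPN on it.
   ip = api.associateIpAddress(vpcid=vpc["id"], zoneid="ZONE_ID")
   api.createRemoteAccessVpn(publicipid=ip["id"])

   # Reboot the VPC's virtual router; on 4.12 + XenServer this is where it fails.
   router = api.listRouters(vpcid=vpc["id"])["router"][0]
   api.rebootRouter(id=router["id"])
   ~~~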
   
   ##### EXPECTED RESULTS
   <!-- What did you expect to happen when running the steps above? -->
   
   ~~~
   The VR restarts with no issues.
   ~~~
   
   ##### ACTUAL RESULTS
   <!-- What actually happened? -->
   
   <!-- Paste verbatim command output between quotes below -->
   CloudStack management server log:
   ~~~
   2020-01-28 16:39:05,520 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558) (logid:498aca8c) Executing AsyncJobVO {id:558, userId: 2, accountId: 2, instanceType: DomainRouter, instanceId: 70, cmd: org.apache.cloudstack.api.command.admin.router.RebootRouterCmd, cmdInfo: {"response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"949","id":"9e98f52d-056e-406d-89e9-022a2a8d2368","ctxDetails":"{\"interface com.cloud.vm.VirtualMachine\":\"9e98f52d-056e-406d-89e9-022a2a8d2368\"}","projectid":"3ecd7da2-c1fb-489e-b72f-4deee0bd254f","ctxAccountId":"2","uuid":"9e98f52d-056e-406d-89e9-022a2a8d2368","cmdEventType":"ROUTER.REBOOT","_":"1580247428117"}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 117524791259211, completeMsid: null, lastUpdated: null, lastPolled: null, created: null, removed: null}
   2020-01-28 16:39:05,526 WARN  [c.c.a.d.ParamGenericValidationWorker] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Received unknown parameters for command rebootRouter. Unknown parameters : projectid
   2020-01-28 16:39:05,549 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Stopping and starting router VM[DomainRouter|r-70-VM] as a part of router reboot
   2020-01-28 16:39:05,550 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Stopping router VM[DomainRouter|r-70-VM]
   2020-01-28 16:39:05,557 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Sync job-559 execution on object VmWorkJobQueue.70
   2020-01-28 16:39:06,847 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559) (logid:498aca8c) Executing AsyncJobVO {id:559, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStop, cmdInfo: rO0ABXNyABdjb20uY2xvdWQudm0uVm1Xb3JrU3RvcALQ4GymiWjjAgABWgAHY2xlYW51cHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAARnQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 117524791259211, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 28 16:39:05 EST 2020, removed: null}
   2020-01-28 16:39:06,847 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559) (logid:498aca8c) Run VM work job: com.cloud.vm.VmWorkStop for VM 70, job origin: 558
   2020-01-28 16:39:06,848 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Execute VM work job: com.cloud.vm.VmWorkStop{"cleanup":false,"userId":2,"accountId":2,"vmId":70,"handlerName":"VirtualMachineManagerImpl"}
   2020-01-28 16:39:06,868 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) VM state transitted from :Running to Stopping with event: StopRequestedvm's original host id: 1 new host id: 1 host id before state transition: 1
   2020-01-28 16:39:07,366 DEBUG [c.c.a.t.Request] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Seq 1-8214847195300501429: Received:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 10, { NetworkUsageAnswer } }
   2020-01-28 16:39:07,366 DEBUG [c.c.a.m.AgentManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Details from executing class com.cloud.agent.api.NetworkUsageCommand: success
   2020-01-28 16:39:07,367 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Recieved and Sent bytes are both 0. Not updating user_statistics
   2020-01-28 16:39:07,372 DEBUG [c.c.a.t.Request] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Seq 1-8214847195300501430: Sending  { Cmd , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":false,"controlIp":"169.254.1.177","forceStop":false,"volumesToDisconnect":[],"vmName":"r-70-VM","executeInSequence":false,"wait":0}}] }
   2020-01-28 16:39:07,372 DEBUG [c.c.a.t.Request] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Seq 1-8214847195300501430: Executing:  { Cmd , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":false,"controlIp":"169.254.1.177","forceStop":false,"volumesToDisconnect":[],"vmName":"r-70-VM","executeInSequence":false,"wait":0}}] }
   2020-01-28 16:39:07,399 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper] (DirectAgent-83:ctx-763eb2b7) (logid:498aca8c) 9. The VM r-70-VM is in Stopping state
   2020-01-28 16:39:11,470 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper] (DirectAgent-83:ctx-763eb2b7) (logid:498aca8c) 10. The VM r-70-VM is in Stopped state
   2020-01-28 16:39:11,470 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-83:ctx-763eb2b7) (logid:498aca8c) Seq 1-8214847195300501430: Response Received: 
   2020-01-28 16:39:11,470 DEBUG [c.c.a.t.Request] (DirectAgent-83:ctx-763eb2b7) (logid:498aca8c) Seq 1-8214847195300501430: Processing:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 10, [{"com.cloud.agent.api.StopAnswer":{"platform":"apic:true;viridian:true;pae:true;cores-per-socket:1;acpi:1;hpet:true;nx:true","result":true,"details":"Stop VM r-70-VM Succeed","wait":0}}] }
   2020-01-28 16:39:11,470 DEBUG [c.c.a.t.Request] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Seq 1-8214847195300501430: Received:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 10, { StopAnswer } }
   2020-01-28 16:39:11,480 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) VM[DomainRouter|r-70-VM] is stopped on the host.  Proceeding to release resource held.
   2020-01-28 16:39:11,484 DEBUG [c.c.n.g.ControlNetworkGuru] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Released nic: NicProfile[141-70-null-null-null
   2020-01-28 16:39:11,495 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Successfully released network resources for the vm VM[DomainRouter|r-70-VM]
   2020-01-28 16:39:11,495 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Successfully released storage resources for the vm VM[DomainRouter|r-70-VM]
   2020-01-28 16:39:11,509 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) VM state transitted from :Stopping to Stopped with event: OperationSucceededvm's original host id: 1 new host id: null host id before state transition: 1
   2020-01-28 16:39:11,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Hosts's actual total CPU: 112000 and CPU after applying overprovisioning: 448000
   2020-01-28 16:39:11,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Hosts's actual total RAM: 393741362432 and RAM after applying overprovisioning: 393741369344
   2020-01-28 16:39:11,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) release cpu from host: 1, old used: 3500,reserved: 0, actual total: 112000, total with overprovisioning: 448000; new used: 2500,reserved:1000; movedfromreserved: false,moveToReserveredtrue
   2020-01-28 16:39:11,515 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) release mem from host: 1, old used: 2684354560,reserved: 0, total: 393741369344; new used: 2147483648,reserved:536870912; movedfromreserved: false,moveToReserveredtrue
   2020-01-28 16:39:11,529 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Done executing VM work job: com.cloud.vm.VmWorkStop{"cleanup":false,"userId":2,"accountId":2,"vmId":70,"handlerName":"VirtualMachineManagerImpl"}
   2020-01-28 16:39:11,529 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Complete async job-559, jobStatus: SUCCEEDED, resultCode: 0, result: null
   2020-01-28 16:39:11,530 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Publish async job-559 complete on message bus
   2020-01-28 16:39:11,530 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Wake up jobs related to job-559
   2020-01-28 16:39:11,530 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Update db status for job-559
   2020-01-28 16:39:11,531 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559 ctx-a033a44d) (logid:498aca8c) Wake up jobs joined with job-559 and disjoin all subjobs created from job- 559
   2020-01-28 16:39:11,539 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559) (logid:498aca8c) Done with run of VM work job: com.cloud.vm.VmWorkStop for VM 70, job origin: 558
   2020-01-28 16:39:11,539 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559) (logid:498aca8c) Done executing com.cloud.vm.VmWorkStop for job-559
   2020-01-28 16:39:11,541 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-53:ctx-26aa0984 job-558/job-559) (logid:498aca8c) Remove job-559 from job monitoring
   2020-01-28 16:39:11,557 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Starting network Ntwk[202|Control|3]...
   2020-01-28 16:39:11,559 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Network id=202 is already implemented
   2020-01-28 16:39:11,560 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Starting network Ntwk[200|Public|1]...
   2020-01-28 16:39:11,562 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Network id=200 is already implemented
   2020-01-28 16:39:11,564 DEBUG [c.c.n.r.NetworkHelperImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Starting router VM[DomainRouter|r-70-VM]
   2020-01-28 16:39:11,572 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-29:ctx-e32b0b84 job-558 ctx-b0f0fb94) (logid:498aca8c) Sync job-560 execution on object VmWorkJobQueue.70
   2020-01-28 16:39:12,845 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560) (logid:498aca8c) Executing AsyncJobVO {id:560, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAARnQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwc3IAEWphdmEudXRpbC5IYXNoTWFwBQfawcMWYNEDAAJGAApsb2FkRmFjdG9ySQAJdGhyZXNob2xkeHA_QAAAAAAADHcIAAAAEAAAAAF0AA5SZXN0YXJ0TmV0d29ya3QAP3JPMEFCWE55QUJGcVlYWmhMbXhoYm1jdVFtOXZiR1ZoYnMwZ2NvRFZuUHJ1QWdBQldnQUZkbUZzZFdWNGNBRXhw, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 117524791259211, completeMsid: null, lastUpdated: null, lastPolled: null, created: Tue Jan 28 16:39:11 EST 2020, removed: null}
   2020-01-28 16:39:12,845 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560) (logid:498aca8c) Run VM work job: com.cloud.vm.VmWorkStart for VM 70, job origin: 558
   2020-01-28 16:39:12,846 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Execute VM work job: com.cloud.vm.VmWorkStart{"dcId":0,"rawParams":{"RestartNetwork":"rO0ABXNyABFqYXZhLmxhbmcuQm9vbGVhbs0gcoDVnPruAgABWgAFdmFsdWV4cAE"},"userId":2,"accountId":2,"vmId":70,"handlerName":"VirtualMachineManagerImpl"}
   2020-01-28 16:39:12,848 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) [OrchestrateStart] Start
   2020-01-28 16:39:12,853 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-0c0caddf) (logid:0944a835) End cleanup expired async-jobs
   2020-01-28 16:39:12,862 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: 1 new host id: null host id before state transition: null
   2020-01-28 16:39:12,869 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Successfully transitioned to start state for VM[DomainRouter|r-70-VM] reservation id = cbc53b54-2990-4f59-88ca-a2ab52fc9038
   2020-01-28 16:39:12,871 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Deploy avoids pods: null, clusters: null, hosts: null
   2020-01-28 16:39:12,875 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Root volume is ready, need to place VM in volume's cluster
   2020-01-28 16:39:12,875 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Vol[65|vm=70|ROOT] is READY, changing deployment plan to use this pool's dcId: 1 , podId: 1 , and clusterId: 1
   2020-01-28 16:39:12,888 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) DeploymentPlanner allocation algorithm: null
   2020-01-28 16:39:12,888 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 1000, requested ram: 536870912
   2020-01-28 16:39:12,888 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Is ROOT volume READY (pool already allocated)?: Yes
   2020-01-28 16:39:12,889 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Deploy avoids pods: [], clusters: [], hosts: []
   2020-01-28 16:39:12,890 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) This VM has last host_id specified, trying to choose the same host: 1
   2020-01-28 16:39:12,897 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Host: 1 has cpu capability (cpu:56, speed:2000) to support requested CPU: 1 and requested speed: 1000
   2020-01-28 16:39:12,897 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Checking if host: 1 has enough capacity for requested CPU: 1000 and requested RAM: 536870912 , cpuOverprovisioningFactor: 4.0
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Hosts's actual total CPU: 112000 and CPU after applying overprovisioning: 448000
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) We need to allocate to the last host again, so checking if there is enough reserved capacity
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Reserved CPU: 1000 , Requested CPU: 1000
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Reserved RAM: 536870912 , Requested RAM: 536870912
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Host has enough CPU and RAM available
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) STATS: Can alloc CPU from host: 1, used: 2500, reserved: 1000, actual total: 112000, total with overprovisioning: 448000; requested cpu:1000,alloc_from_last_host?:true ,considerReservedCapacity?: true
   2020-01-28 16:39:12,899 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) STATS: Can alloc MEM from host: 1, used: 2147483648, reserved: 536870912, total: 393741369344; requested mem: 536870912,alloc_from_last_host?:true ,considerReservedCapacity?: true
   2020-01-28 16:39:12,899 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) The last host of this VM is UP and has enough capacity
   2020-01-28 16:39:12,899 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Now checking for suitable pools under zone: 1, pod: 1, cluster: 1
   2020-01-28 16:39:12,902 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Checking suitable pools for volume (Id, Type): (65,ROOT)
   2020-01-28 16:39:12,903 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Volume has pool already allocated, checking if pool can be reused, poolId: 3
   2020-01-28 16:39:12,904 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Planner need not allocate a pool for this volume since its READY
   2020-01-28 16:39:12,904 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Trying to find a potenial host and associated storage pools from the suitable host/pool lists for this VM
   2020-01-28 16:39:12,904 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Checking if host: 1 can access any suitable storage pool for volume: ROOT
   2020-01-28 16:39:12,905 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Host: 1 can access pool: 3
   2020-01-28 16:39:12,906 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Found a potential host id: 1 name: cca-r1-xen23 and associated storage pools for this VM
   2020-01-28 16:39:12,906 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage()]
   2020-01-28 16:39:12,906 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Deployment found  - P0=VM[DomainRouter|r-70-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage()]
   2020-01-28 16:39:12,944 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) VM state transitted from :Starting to Starting with event: OperationRetryvm's original host id: 1 new host id: 1 host id before state transition: null
   2020-01-28 16:39:12,944 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) VM starting again on the last host it was stopped on
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Hosts's actual total CPU: 112000 and CPU after applying overprovisioning: 448000
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) We are allocating VM, increasing the used capacity of this host:1
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Current Used CPU: 2500 , Free CPU:444500 ,Requested CPU: 1000
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Current Used RAM: 2147483648 , Free RAM:391057014784 ,Requested RAM: 536870912
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) We are allocating VM to the last host again, so adjusting the reserved capacity if it is not less than required
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Reserved CPU: 1000 , Requested CPU: 1000
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Reserved RAM: 536870912 , Requested RAM: 536870912
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) CPU STATS after allocation: for host: 1, old used: 2500, old reserved: 1000, actual total: 112000, total with overprovisioning: 448000; new used:3500, reserved:0; requested cpu:1000,alloc_from_last:true
   2020-01-28 16:39:12,950 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) RAM STATS after allocation: for host: 1, old used: 2147483648, old reserved: 536870912, total: 393741369344; new used: 2684354560, reserved: 0; requested mem: 536870912,alloc_from_last:true
   2020-01-28 16:39:12,962 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) [OrchestrateStart] got a plan 114
   2020-01-28 16:39:13,009 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) No need to recreate the volume: Vol[65|vm=70|ROOT], since it already has a pool assigned: 3, adding disk to VM
   2020-01-28 16:39:13,024 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) [OrchestrateStart] volume prepare 19
   2020-01-28 16:39:13,026 DEBUG [c.c.n.r.VpcVirtualNetworkApplianceManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Removing nic NicProfile[142-70-null-45.72.188.244-vlan://12 of type Public from the nics passed on vm start. The nic will be plugged later
   2020-01-28 16:39:13,038 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Boot Args for VM[DomainRouter|r-70-VM]:  vpccidr=10.23.0.0/22 domain=cs4cloud.internal dns1=8.8.8.8 dns2= privategateway=None template=domP name=r-70-VM eth0ip=169.254.0.27 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true baremetalnotificationsecuritykey=jbpRvxGfs_EP5a0XnldApNTotdMTfygtSiWKsWSsCHgqWED3YtM-LLo6ML56svtrNfUEh14KzUaoTb-ex8znFQ baremetalnotificationapikey=hPw1qOrcTFhfeSLRMwHpf2ldeqQt-mEs9IaZ0En35KQZi5lPJsJElD017QviWnp6XgVwekI4QmF2CT7lxO2yhg host=172.27.2.119 port=8080
   2020-01-28 16:39:13,101 DEBUG [c.c.n.r.VpcVirtualNetworkApplianceManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Found 0 static routes to apply as a part of vpc route VM[DomainRouter|r-70-VM] start
   2020-01-28 16:39:13,125 DEBUG [c.c.a.t.Request] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Seq 1-8214847195300501431: Executing:  { Cmd , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StartCommand":{"vm":{"id":70,"name":"r-70-VM","bootloader":"PyGrub","state":"Starting","type":"DomainRouter","cpus":1,"minSpeed":250,"maxSpeed":1000,"minRam":536870912,"maxRam":536870912,"arch":"x86_64","os":"Debian GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0 (64-bit)","bootArgs":" vpccidr=10.23.0.0/22 domain=cs4cloud.internal dns1=8.8.8.8 dns2= privategateway=None template=domP name=r-70-VM eth0ip=169.254.0.27 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true baremetalnotificationsecuritykey=jbpRvxGfs_EP5a0XnldApNTotdMTfygtSiWKsWSsCHgqWED3YtM-LLo6ML56svtrNfUEh14KzUaoTb-ex8znFQ baremetalnotificationapikey=hPw1qOrcTFhfeSLRMwHpf2ldeqQt-mEs9IaZ0En35KQZi5lPJsJElD017QviWnp6XgVwekI4QmF2CT7lxO2yhg host=172.27.2.119 port=8080","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"esEbRksQzKneKtb5a5q_YA","params":{"cpuOvercommitRatio":"4.0","memoryOvercommitRatio":"1.0"},"uuid":"9e98f52d-056e-406d-89e9-022a2a8d2368","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"58afbe87-4f2e-476d-ad3b-e7cdca832750","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"d484cb52-e2ff-3907-a0b4-2ff5a4d53fc7","id":3,"poolType":"IscsiLUN","host":"172.27.21.142","path":"/iqn.2013-05.com.daterainc:tc:01:sn:2bd12b32935ceab7/0","port":3260,"url":"IscsiLUN://172.27.21.142/iqn.2013-05.com.daterainc:tc:01:sn:2bd12b32935ceab7/0/?ROLE=Primary&STOREUUID=d484cb52-e2ff-3907-a0b4-2ff5a4d53fc7","isManaged":false}},"name":"ROOT-70","size":3145728000,"path":"a3a76572-0181-4f7b-b84e-badeb8c2b1c4","volumeId":65,"vmName":"r-70-VM","accountId":4,"format":"VHD","provisioningType":"THIN","id":65,"deviceId":0,"bytesReadRate":0,"bytesWriteRate":0,"iopsReadRate":0,"iopsWriteRate":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"a3a76572-0181-4f7b-b84e-badeb8c2b1c4","type":"ROOT","_details":{"storageHost":"172.27.21.142","managed":"false","storagePort":"3260","volumeSize":"3145728000"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"18c9c03b-63e7-402d-a1ba-636b3ac32b36","details":{"PromiscuousMode":"false","MacAddressChanges":"true","ForgedTransmits":"true"},"dpdkDisabled":false,"uuid":"0283f2dc-197f-44ea-b5a5-566547650ebe","ip":"169.254.0.27","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:1b","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}],"guestOsDetails":{"xenserver.dynamicMin":"536870912","xenserver.dynamicMax":"137438953472"},"extraConfig":{}},"hostIp":"172.27.20.63","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.0.27","port":3922,"interval":6,"retries":100,"name":"r-70-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"router.name":"r-70-VM","router.ip":"169.254.0.27"},"wait":0}},{"com.cloud.agent.api.PlugNicCommand":{"nic":{"deviceId":1,"networkRateMbps":-1,"defaultNic":true,"pxeDisable":true,"nicUuid":"53815fd3-3f8a-44a5-b456-210f691b1874","dpdkDisabled":false,"uuid":"e01b2cac-1906-4101-920e-d69b283d76a4","ip":"45.72.188.244","netmask":"255.255.255.0","gateway":"45.72.188.1","mac":"1e:00:30:00:00:0f","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://12","isolat
ionUri":"vlan://12","isSecurityGroupEnabled":false,"name":"data"},"instanceName":"r-70-VM","vmType":"DomainRouter","details":{},"wait":0}},{"com.cloud.agent.api.routing.IpAssocVpcCommand":{"ipAddresses":[{"accountId":4,"publicIp":"45.72.188.244","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":true,"broadcastUri":"vlan://12","vlanGateway":"45.72.188.1","vlanNetmask":"255.255.255.0","vifMacAddress":"1e:00:30:00:00:0f","networkRate":-1,"trafficType":"Public","networkName":"data","newNic":false}],"accessDetails":{"router.name":"r-70-VM","router.guest.ip":"45.72.188.244","router.ip":"169.254.0.27","zone.network.type":"Advanced"},"wait":0}},{"com.cloud.agent.api.routing.SetSourceNatCommand":{"ipAddress":{"accountId":4,"publicIp":"45.72.188.244","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":true,"broadcastUri":"vlan://12","vlanGateway":"45.72.188.1","vlanNetmask":"255.255.255.0","vifMacAddress":"1e:00:30:00:00:0f","networkRate":-1,"trafficType":"Public","networkName":"data","newNic":false},"add":true,"accessDetails":{"router.name":"r-70-VM","router.ip":"169.254.0.27","zone.network.type":"Advanced"},"wait":0}},{"com.cloud.agent.api.routing.VpnUsersCfgCommand":{"userpwds":[],"vpnType":"ikev2","accessDetails":{"router.name":"r-70-VM","account.id":"4","router.ip":"169.254.0.27","zone.network.type":"Advanced"},"wait":0}},{"com.cloud.agent.api.routing.RemoteAccessVpnCfgCommand":{"create":true,"vpcEnabled":true,"vpnServerIp":"45.72.188.244","ipRange":"192.168.168.2-192.168.168.254","localIp":"192.168.168.1","localCidr":"10.23.0.0/22","publicInterface":"eth1","vpnType":"ikev2","caCert":"-----BEGIN CERTIFICATE-----(removed)-----END CERTIFICATE-----","serverKey":"-----BEGIN RSA PRIVATE KEY-----(removed)-----END RSA PRIVATE KEY-----","accessDetails":{"router.name":"r-70-VM","router.ip":"169.254.0.27","zone.network.type":"Advanced"},"wait":0}},{}] }
   2020-01-28 16:39:13,140 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) 1. The VM r-70-VM is in Starting state.
   2020-01-28 16:39:13,155 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Created VM d3bb0e0d-a52e-413b-1792-31aad8a06854 for r-70-VM
   2020-01-28 16:39:13,159 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) PV args are -- quiet console=hvc0%vpccidr=10.23.0.0/22%domain=cs4cloud.internal%dns1=8.8.8.8%dns2=%privategateway=None%template=domP%name=r-70-VM%eth0ip=169.254.0.27%eth0mask=255.255.0.0%type=vpcrouter%disable_rp_filter=true%baremetalnotificationsecuritykey=jbpRvxGfs_EP5a0XnldApNTotdMTfygtSiWKsWSsCHgqWED3YtM-LLo6ML56svtrNfUEh14KzUaoTb-ex8znFQ%baremetalnotificationapikey=hPw1qOrcTFhfeSLRMwHpf2ldeqQt-mEs9IaZ0En35KQZi5lPJsJElD017QviWnp6XgVwekI4QmF2CT7lxO2yhg%host=172.27.2.119%port=8080
   2020-01-28 16:39:13,161 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) HVM args are  vpccidr=10.23.0.0/22 domain=cs4cloud.internal dns1=8.8.8.8 dns2= privategateway=None template=domP name=r-70-VM eth0ip=169.254.0.27 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true baremetalnotificationsecuritykey=jbpRvxGfs_EP5a0XnldApNTotdMTfygtSiWKsWSsCHgqWED3YtM-LLo6ML56svtrNfUEh14KzUaoTb-ex8znFQ baremetalnotificationapikey=hPw1qOrcTFhfeSLRMwHpf2ldeqQt-mEs9IaZ0En35KQZi5lPJsJElD017QviWnp6XgVwekI4QmF2CT7lxO2yhg host=172.27.2.119 port=8080
   2020-01-28 16:39:13,192 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) VBD 50a2dab8-e391-ae4f-4987-c09e2dd7b3c0 created for com.cloud.agent.api.to.DiskTO@144f7c0
   2020-01-28 16:39:13,192 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Creating VIF for r-70-VM on nic [Nic:Control-169.254.0.27-null]
   2020-01-28 16:39:13,209 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) already have a vif on dom0 for link local network
   2020-01-28 16:39:13,367 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Created a vif 0298f73d-2aa0-27a9-daa7-b927866b02d9 on 0
   2020-01-28 16:39:18,388 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) 2. The VM r-70-VM is in Running state.
   2020-01-28 16:39:18,391 DEBUG [c.c.h.x.r.w.x.CitrixCheckSshCommandWrapper] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Ping command port, 169.254.0.27:3922
   2020-01-28 16:39:18,392 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Trying to connect to 169.254.0.27 attempt 0 of 100
   2020-01-28 16:39:31,498 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Trying to connect to 169.254.0.27 attempt 1 of 100
   2020-01-28 16:39:41,614 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Trying to connect to 169.254.0.27 attempt 2 of 100
   2020-01-28 16:39:41,715 DEBUG [c.c.h.x.r.w.x.CitrixCheckSshCommandWrapper] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Ping command port succeeded for vm r-70-VM
   2020-01-28 16:39:41,715 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Executing command in VR: /opt/cloud/bin/router_proxy.sh get_template_version.sh 169.254.0.27 null
   2020-01-28 16:40:02,911 ERROR [c.c.u.s.SshHelper] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) SSH execution of command /opt/cloud/bin/router_proxy.sh get_template_version.sh 169.254.0.27 null has an error status code in return. Result output: 
   2020-01-28 16:40:02,912 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Seq 1-8214847195300501431: Cancelling because one of the answers is false and it is stop on error.
   2020-01-28 16:40:02,912 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Seq 1-8214847195300501431: Response Received: 
   2020-01-28 16:40:02,913 DEBUG [c.c.a.t.Request] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Seq 1-8214847195300501431: Processing:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 110, [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":70,"name":"r-70-VM","bootloader":"PyGrub","state":"Starting","type":"DomainRouter","cpus":1,"minSpeed":250,"maxSpeed":1000,"minRam":536870912,"maxRam":536870912,"arch":"x86_64","os":"Debian GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0 (64-bit)","bootArgs":" vpccidr=10.23.0.0/22 domain=cs4cloud.internal dns1=8.8.8.8 dns2= privategateway=None template=domP name=r-70-VM eth0ip=169.254.0.27 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true baremetalnotificationsecuritykey=jbpRvxGfs_EP5a0XnldApNTotdMTfygtSiWKsWSsCHgqWED3YtM-LLo6ML56svtrNfUEh14KzUaoTb-ex8znFQ baremetalnotificationapikey=hPw1qOrcTFhfeSLRMwHpf2ldeqQt-mEs9IaZ0En35KQZi5lPJsJElD017QviWnp6XgVwekI4QmF2CT7lxO2yhg host=172.27.2.119 port=8080","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"esEbRksQzKneKtb5a5q_YA","params":{"cpuOvercommitRatio":"4.0","memoryOvercommitRatio":"1.0"},"uuid":"9e98f52d-056e-406d-89e9-022a2a8d2368","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"58afbe87-4f2e-476d-ad3b-e7cdca832750","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"d484cb52-e2ff-3907-a0b4-2ff5a4d53fc7","id":3,"poolType":"IscsiLUN","host":"172.27.21.142","path":"/iqn.2013-05.com.daterainc:tc:01:sn:2bd12b32935ceab7/0","port":3260,"url":"IscsiLUN://172.27.21.142/iqn.2013-05.com.daterainc:tc:01:sn:2bd12b32935ceab7/0/?ROLE=Primary&STOREUUID=d484cb52-e2ff-3907-a0b4-2ff5a4d53fc7","isManaged":false}},"name":"ROOT-70","size":3145728000,"path":"a3a76572-0181-4f7b-b84e-badeb8c2b1c4","volumeId":65,"vmName":"r-70-VM","accountId":4,"format":"VHD","provisioningType":"THIN","id":65,"deviceId":0,"bytesReadRate":0,"bytesWriteRate":0,"iopsReadRate":0,"iopsWriteRate":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"a3a76572-0181-4f7b-b84e-badeb8c2b1c4","type":"ROOT","_details":{"storageHost":"172.27.21.142","managed":"false","storagePort":"3260","volumeSize":"3145728000"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"18c9c03b-63e7-402d-a1ba-636b3ac32b36","details":{"PromiscuousMode":"false","MacAddressChanges":"true","ForgedTransmits":"true"},"dpdkDisabled":false,"uuid":"0283f2dc-197f-44ea-b5a5-566547650ebe","ip":"169.254.0.27","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:1b","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}],"guestOsDetails":{"xenserver.dynamicMin":"536870912","xenserver.dynamicMax":"137438953472"},"extraConfig":{}},"_iqnToData":{},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"result":false,"details":"GetDomRVersionCmd failed","wait":0}}] }
   2020-01-28 16:40:02,913 DEBUG [c.c.a.m.AgentAttache] (DirectAgent-331:ctx-714b1162) (logid:498aca8c) Seq 1-8214847195300501431: No more commands found
   2020-01-28 16:40:02,918 DEBUG [c.c.a.t.Request] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Seq 1-8214847195300501431: Received:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 110, { StartAnswer, CheckSshAnswer, GetDomRVersionAnswer } }
   2020-01-28 16:40:02,922 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) [OrchestrateStart] hypervisor prepare 49884
   2020-01-28 16:40:02,934 WARN  [c.c.a.AlertManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) AlertType:: 9 | dataCenterId:: 1 | podId:: 1 | clusterId:: null | message:: Command: com.cloud.agent.api.GetDomRVersionCommand failed while starting virtual router
   2020-01-28 16:40:02,940 ERROR [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) GetDomRVersionCmd failed
   2020-01-28 16:40:02,940 WARN  [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Command: com.cloud.agent.api.GetDomRVersionCommand failed while starting virtual router
   2020-01-28 16:40:02,940 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) The guru did not like the answers so stopping VM[DomainRouter|r-70-VM]
   2020-01-28 16:40:02,944 DEBUG [c.c.a.t.Request] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Seq 1-8214847195300501435: Sending  { Cmd , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":false,"controlIp":"169.254.1.177","forceStop":false,"volumesToDisconnect":[],"vmName":"r-70-VM","executeInSequence":false,"wait":0}}] }
   2020-01-28 16:40:02,944 DEBUG [c.c.a.t.Request] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Seq 1-8214847195300501435: Executing:  { Cmd , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":false,"controlIp":"169.254.1.177","forceStop":false,"volumesToDisconnect":[],"vmName":"r-70-VM","executeInSequence":false,"wait":0}}] }
   2020-01-28 16:40:02,944 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-22:ctx-4d92e213) (logid:af77d502) Seq 1-8214847195300501435: Executing request
   2020-01-28 16:40:02,966 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper] (DirectAgent-22:ctx-4d92e213) (logid:498aca8c) 9. The VM r-70-VM is in Stopping state
   2020-01-28 16:40:06,014 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper] (DirectAgent-22:ctx-4d92e213) (logid:498aca8c) 10. The VM r-70-VM is in Stopped state
   2020-01-28 16:40:06,014 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-22:ctx-4d92e213) (logid:498aca8c) Seq 1-8214847195300501435: Response Received: 
   2020-01-28 16:40:06,014 DEBUG [c.c.a.t.Request] (DirectAgent-22:ctx-4d92e213) (logid:498aca8c) Seq 1-8214847195300501435: Processing:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 10, [{"com.cloud.agent.api.StopAnswer":{"platform":"apic:true;viridian:true;pae:true;cores-per-socket:1;acpi:1;hpet:true;nx:true","result":true,"details":"Stop VM r-70-VM Succeed","wait":0}}] }
   2020-01-28 16:40:06,014 DEBUG [c.c.a.t.Request] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Seq 1-8214847195300501435: Received:  { Ans: , MgmtId: 117524791259211, via: 1(cca-r1-xen23), Ver: v1, Flags: 10, { StopAnswer } }
   2020-01-28 16:40:06,014 DEBUG [c.c.a.m.AgentManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Details from executing class com.cloud.agent.api.StopCommand: Stop VM r-70-VM Succeed
   2020-01-28 16:40:06,014 ERROR [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Failed to start instance VM[DomainRouter|r-70-VM]com.cloud.utils.exception.ExecutionException: Unable to start  VM:9e98f52d-056e-406d-89e9-022a2a8d2368 due to error in finalizeStart, not retrying
           at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1229)
           at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5172)
           at sun.reflect.GeneratedMethodAccessor440.invoke(Unknown Source)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
           at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5335)
           at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
           at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:583)
           at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
           at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
           at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
           at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
           at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
           at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:531)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   2020-01-28 16:40:06,025 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-54:ctx-6f38cf78 job-558/job-560 ctx-4ba35566) (logid:498aca8c) Cleaning up resources for the vm VM[DomainRouter|r-70-VM] in Starting state
   ~~~
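
   The failing step above is the call /opt/cloud/bin/router_proxy.sh get_template_version.sh 169.254.0.27 null, which (judging by the SshHelper error) the management server runs on the XenServer host over SSH and which returns an error status with empty output. It can be re-run by hand from dom0 to confirm the failure without going through the management server; a minimal sketch, with the arguments copied straight from the log:
   ~~~
   # Sketch only: re-run the exact GetDomRVersion check from the XenServer host (dom0).
   import subprocess

   result = subprocess.run(
       ["/opt/cloud/bin/router_proxy.sh", "get_template_version.sh",
        "169.254.0.27", "null"],          # arguments as seen in the log above
       capture_output=True, text=True)

   # A non-zero return code with empty output matches the SshHelper error above.
   print(result.returncode, repr(result.stdout), repr(result.stderr))
   ~~~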
   


[GitHub] [cloudstack] kovukono closed issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/cloudstack/issues/3849
 
 
   


[GitHub] [cloudstack] weizhouapache commented on issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/cloudstack/issues/3849#issuecomment-580414303
 
 
   @kovukono ok. glad to hear it.


[GitHub] [cloudstack] kovukono commented on issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/cloudstack/issues/3849#issuecomment-580399523
 
 
   @weizhouapache We've tested this on 4.13 and 4.12, and it worked successfully on both community releases. This appears to be something in our own codebase, so I'm going to close this ticket.


[GitHub] [cloudstack] weizhouapache commented on issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/cloudstack/issues/3849#issuecomment-579835951
 
 
   @kovukono does this issue happen each time you reboot the VR?
   I cannot reproduce it in 4.13 (KVM, not Xen).


[GitHub] [cloudstack] kovukono commented on issue #3849: Cloudstack 4.12 VRs with VPNs fail to reboot

Posted by GitBox <gi...@apache.org>.
URL: https://github.com/apache/cloudstack/issues/3849#issuecomment-579916944
 
 
   @weizhouapache This happens every time you reboot a VR that has a VPN. We spun up an instance of 4.13 in our lab and were able to determine that it indeed doesn't appear to be happening in 4.13.
