Posted to users@cloudstack.apache.org by Sonali Jadhav <so...@servercentralen.se> on 2015/07/09 15:01:12 UTC
bug? failed to migrate instance when assigned compute offering is deleted
Hi,
I am unsure whether this is a bug or my mistake.
I am trying to put one host into maintenance mode. There are 3 VMs on it, and the compute offerings assigned to those VMs have been deleted. I suspect an NPE is being logged, and that is why those instances are not getting migrated to another host in the cluster.
Here is a link to the logs: http://pastebin.com/ScyBCX9v
I am running ACS 4.5.1. Is there any workaround?
/Sonali
RE: bug? failed to migrate instance when assigned compute offering is deleted
Posted by Sonali Jadhav <so...@servercentralen.se>.
Hi,
Thanks for the response. I made the changes you suggested, but I am still seeing this in the logs:
2015-07-09 19:38:53,463 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-9c2c01de) ===START=== 49.200.117.9 -- GET command=queryAsyncJobResult&jobId=21297b96-e905-4b7b-9394-dcb6d2bdec9f&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463534760
2015-07-09 19:38:53,539 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-9c2c01de ctx-75788600) ===END=== 49.200.117.9 -- GET command=queryAsyncJobResult&jobId=21297b96-e905-4b7b-9394-dcb6d2bdec9f&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463534760
2015-07-09 19:38:53,775 DEBUG [c.c.a.ApiServlet] (http-6443-exec-8:ctx-4ac31ee2) ===START=== 49.200.117.9 -- GET command=listZones&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&id=1baf17c9-8325-4fa6-bffc-e502a33b578b&_=1436463535088
2015-07-09 19:38:53,799 DEBUG [c.c.a.ApiServlet] (http-6443-exec-8:ctx-4ac31ee2 ctx-143db9c0) ===END=== 49.200.117.9 -- GET command=listZones&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&id=1baf17c9-8325-4fa6-bffc-e502a33b578b&_=1436463535088
2015-07-09 19:38:54,053 DEBUG [c.c.a.ApiServlet] (http-6443-exec-1:ctx-6169ccc9) ===START=== 49.200.117.9 -- GET command=listHosts&id=c3c78959-6387-4cc9-8f59-23d44d2257a8&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535365
2015-07-09 19:38:54,064 DEBUG [c.c.a.q.QueryManagerImpl] (http-6443-exec-1:ctx-6169ccc9 ctx-28f551a1) >>>Searching for hosts>>>
2015-07-09 19:38:54,074 DEBUG [c.c.a.q.QueryManagerImpl] (http-6443-exec-1:ctx-6169ccc9 ctx-28f551a1) >>>Generating Response>>>
2015-07-09 19:38:54,083 DEBUG [c.c.a.ApiServlet] (http-6443-exec-1:ctx-6169ccc9 ctx-28f551a1) ===END=== 49.200.117.9 -- GET command=listHosts&id=c3c78959-6387-4cc9-8f59-23d44d2257a8&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535365
2015-07-09 19:38:54,208 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 11-91657: Processing Seq 11-91657: { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
2015-07-09 19:38:54,270 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 11-91657: Sending Seq 11-91657: { Ans: , MgmtId: 59778234354585, via: 11, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
2015-07-09 19:38:54,341 DEBUG [c.c.a.ApiServlet] (http-6443-exec-7:ctx-2e934bc0) ===START=== 49.200.117.9 -- GET command=listDedicatedHosts&hostid=c3c78959-6387-4cc9-8f59-23d44d2257a8&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535649
2015-07-09 19:38:54,355 DEBUG [c.c.a.ApiServlet] (http-6443-exec-7:ctx-2e934bc0 ctx-9b8f5e9b) ===END=== 49.200.117.9 -- GET command=listDedicatedHosts&hostid=c3c78959-6387-4cc9-8f59-23d44d2257a8&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535649
2015-07-09 19:38:54,361 WARN [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-1:ctx-5d67d187 work-78) Encountered unhandled exception during HA process, reschedule retry
java.lang.NullPointerException
at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
2015-07-09 19:38:54,362 INFO [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-1:ctx-5d67d187 work-78) Rescheduling HAWork[78-Migration-31-Running-Migrating] to try again at Thu Jul 09 19:49:08 CEST 2015
2015-07-09 19:38:54,403 WARN [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-2fd92e98 work-79) Encountered unhandled exception during HA process, reschedule retry
java.lang.NullPointerException
at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
2015-07-09 19:38:54,404 INFO [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-2:ctx-2fd92e98 work-79) Rescheduling HAWork[79-Migration-32-Running-Migrating] to try again at Thu Jul 09 19:49:08 CEST 2015
2015-07-09 19:38:54,542 WARN [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-48e55f91 work-80) Encountered unhandled exception during HA process, reschedule retry
java.lang.NullPointerException
at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
2015-07-09 19:38:54,543 INFO [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-48e55f91 work-80) Rescheduling HAWork[80-Migration-34-Running-Migrating] to try again at Thu Jul 09 19:49:08 CEST 2015
2015-07-09 19:38:54,621 DEBUG [c.c.a.ApiServlet] (http-6443-exec-12:ctx-94bf5f81) ===START=== 49.200.117.9 -- GET command=listConfigurations&name=ha.tag&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535879
2015-07-09 19:38:54,638 DEBUG [c.c.a.ApiServlet] (http-6443-exec-12:ctx-94bf5f81 ctx-22d06326) ===END=== 49.200.117.9 -- GET command=listConfigurations&name=ha.tag&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463535879
2015-07-09 19:38:54,916 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-f9448fa4) ===START=== 49.200.117.9 -- GET command=listOsCategories&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463536178
2015-07-09 19:38:54,932 DEBUG [c.c.a.ApiServlet] (http-6443-exec-6:ctx-f9448fa4 ctx-365c54c8) ===END=== 49.200.117.9 -- GET command=listOsCategories&response=json&sessionkey=Sx7fH6%2FYwuAsE3dPCITcZKF8cIo%3D&_=1436463536178
2015-07-09 19:38:57,557 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Resetting hosts suitable for reconnect
2015-07-09 19:38:57,559 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Completed resetting hosts suitable for reconnect
2015-07-09 19:38:57,559 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Acquiring hosts for clusters already owned by this management server
2015-07-09 19:38:57,561 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Completed acquiring hosts for clusters already owned by this management server
2015-07-09 19:38:57,561 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Acquiring hosts for clusters not owned by any management server
2015-07-09 19:38:57,562 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-61496fe8) Completed acquiring hosts for clusters not owned by any management server
2015-07-09 19:38:59,212 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgentCronJob-309:ctx-251dafcc) Ping from 5(SeSolXS03)
2015-07-09 19:38:59,212 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) Process host VM state report from ping process. host: 5
2015-07-09 19:38:59,223 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) Process VM state report. host: 5, number of records in report: 3
2015-07-09 19:38:59,223 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM state report. host: 5, vm id: 34, power state: PowerOn
2015-07-09 19:38:59,227 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM power state does not change, skip DB writing. vm id: 34
2015-07-09 19:38:59,227 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM state report. host: 5, vm id: 32, power state: PowerOn
2015-07-09 19:38:59,231 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM power state does not change, skip DB writing. vm id: 32
2015-07-09 19:38:59,231 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM state report. host: 5, vm id: 31, power state: PowerOn
2015-07-09 19:38:59,235 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) VM power state does not change, skip DB writing. vm id: 31
2015-07-09 19:38:59,240 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] (DirectAgentCronJob-309:ctx-251dafcc) Done with process of VM state report. host: 5
2015-07-09 19:39:00,059 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-e670519b) AutoScaling Monitor is running...
2015-07-09 19:39:00,720 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-3dd8e202) Begin cleanup expired async-jobs
2015-07-09 19:39:00,730 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-3dd8e202) End cleanup expired async-jobs
2015-07-09 19:39:03,685 DEBUG [c.c.s.StatsCollector] (StatsCollector-4:ctx-4638497f) VmStatsCollector is running...
2015-07-09 19:39:03,718 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-478:ctx-46753ff4) Seq 1-1244400872037812592: Executing request
2015-07-09 19:39:03,789 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-478:ctx-46753ff4) Vm cpu utilization 0.12184210526315789
2015-07-09 19:39:03,789 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-478:ctx-46753ff4) Vm cpu utilization 0.8310526315789473
2015-07-09 19:39:03,789 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-478:ctx-46753ff4) Vm cpu utilization 0.10763157894736838
2015-07-09 19:39:03,789 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-478:ctx-46753ff4) Vm cpu utilization 0.14907894736842106
2015-07-09 19:39:03,789 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-478:ctx-46753ff4) Seq 1-1244400872037812592: Response Received:
2015-07-09 19:39:03,789 DEBUG [c.c.a.t.Request] (StatsCollector-4:ctx-4638497f) Seq 1-1244400872037812592: Received: { Ans: , MgmtId: 59778234354585, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
2015-07-09 19:39:04,209 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-10:null) SeqA 11-91658: Processing Seq 11-91658: { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
2015-07-09 19:39:04,256 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-10:null) SeqA 11-91658: Sending Seq 11-91658: { Ans: , MgmtId: 59778234354585, via: 11, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
2015-07-09 19:39:06,487 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-1037a8a0) Zone 1 is ready to launch secondary storage VM
2015-07-09 19:39:06,711 DEBUG [c.c.c.ConsoleProxyManagerImpl] (consoleproxy-1:ctx-c5d3d9ec) Zone 1 is ready to launch console proxy
2015-07-09 19:39:10,719 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-547e56d7) Begin cleanup expired async-jobs
2015-07-09 19:39:10,730 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-547e56d7) End cleanup expired async-jobs
2015-07-09 19:39:14,210 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-4:null) SeqA 11-91659: Processing Seq 11-91659: { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
2015-07-09 19:39:14,258 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-4:null) SeqA 11-91659: Sending Seq 11-91659: { Ans: , MgmtId: 59778234354585, via: 11, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
So it is not the compute offering, but something else?
/Sonali
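For context on the stack trace above: a NullPointerException thrown from the deployment planner is the pattern you would expect if a lookup of the VM's (deleted) offering returned null and was then dereferenced. The following is a hypothetical, self-contained illustration of that failure mode, not actual CloudStack source; the class, map, and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the failure mode, NOT CloudStack source.
// A DAO-style lookup that only returns "Active" rows yields null for a
// soft-deleted offering; dereferencing that null throws the NPE seen above.
public class DeletedOfferingSketch {
    // Stands in for the offerings table; only active offerings are present.
    static final Map<Integer, String> activeOfferings = new HashMap<>();

    // Returns null when the offering was deleted (soft-removed).
    static String findActiveOffering(int id) {
        return activeOfferings.get(id);
    }

    // A planner that guards against the null instead of crashing.
    static int planDeployment(int offeringId) {
        String offering = findActiveOffering(offeringId);
        if (offering == null) {
            // Without this check, offering.length() below would throw
            // NullPointerException, as in the stack trace.
            throw new IllegalStateException(
                "offering " + offeringId + " was deleted");
        }
        return offering.length();
    }

    public static void main(String[] args) {
        activeOfferings.put(1, "small");
        System.out.println(planDeployment(1));
        try {
            planDeployment(2); // simulates a VM whose offering was deleted
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point of the sketch is only that the planner dereferences a lookup result without checking for null, which would explain why the NPE recurs on every HA retry until the offering row is restored.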
-----Original Message-----
From: Andrija Panic [mailto:andrija.panic@gmail.com]
Sent: Thursday, July 9, 2015 7:16 PM
To: users@cloudstack.apache.org
Subject: Re: bug? failed to migrate instance when assigned compute offering is deleted
Not sure if this will help, but try to "undelete" the offering.
In the cloud.disk_offering table, find the offering that was deleted and change the STATE column from INACTIVE to ACTIVE; that will effectively restore the deleted offering.
Let us know if it helps.
Best
On 9 July 2015 at 15:01, Sonali Jadhav <so...@servercentralen.se> wrote:
> Hi,
>
> I am unsure whether this is a bug or my mistake.
>
> I am trying to put one host into maintenance mode. There are 3 VMs on it,
> and the compute offerings assigned to those VMs have been deleted. I suspect
> an NPE is being logged, and that is why those instances are not getting migrated to another host in the cluster.
>
> Here is a link to the logs: http://pastebin.com/ScyBCX9v
>
> I am running ACS 4.5.1. Is there any workaround?
>
> /Sonali
>
--
Andrija Panić
Re: bug? failed to migrate instance when assigned compute offering is deleted
Posted by Andrija Panic <an...@gmail.com>.
Not sure if this will help, but try to "undelete" the offering.
In the cloud.disk_offering table, find the offering that was deleted and change the STATE column from INACTIVE to ACTIVE; that will effectively restore the deleted offering.
Let us know if it helps.
Best
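For reference, the table edit suggested above could look like the following, assuming a MySQL backend. The id value is a placeholder you must look up yourself, and whether any other columns (such as a `removed` timestamp) also need resetting is not covered by the suggestion; back up the database before editing it by hand:

```sql
-- Hypothetical sketch of the "undelete" workaround; verify against your schema.
-- Replace 1234 with the actual id of the deleted offering.
UPDATE cloud.disk_offering
SET state = 'Active'
WHERE id = 1234
  AND state = 'Inactive';
```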
On 9 July 2015 at 15:01, Sonali Jadhav <so...@servercentralen.se> wrote:
> Hi,
>
> I am unsure whether this is a bug or my mistake.
>
> I am trying to put one host into maintenance mode. There are 3 VMs on it,
> and the compute offerings assigned to those VMs have been deleted. I suspect
> an NPE is being logged, and that is why those instances are not getting migrated to another host in the cluster.
>
> Here is a link to the logs: http://pastebin.com/ScyBCX9v
>
> I am running ACS 4.5.1. Is there any workaround?
>
> /Sonali
>
--
Andrija Panić