Posted to users@cloudstack.apache.org by Martin Emrich <ma...@empolis.com> on 2017/02/22 11:26:52 UTC

XenServer VM no longer starts

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports "Insufficient Capacity" (but there is plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool assigned: 29, adding disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] (DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] (DirectAgent-469:ctx-d6e5768e) Catch Exception: class com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid
The device name is invalid
        at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
        at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
        at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
        at com.xensource.xenapi.VBD.create(VBD.java:322)
        at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
        at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
        at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
        at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
        at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
        at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


It seems to be a problem with the VM's volumes. I don't see any difference in the ACS database compared to other VMs' volumes. What could be wrong here?
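
For reference, this is roughly what I compared (just a sketch, assuming the standard cloud database schema; the instance id 2998 is taken from the log above):

# list the VM's volumes; compare device_id, state and pool against a healthy VM
mysql -u cloud -p cloud -e "SELECT id, name, device_id, state, pool_id FROM volumes WHERE instance_id = 2998 AND removed IS NULL;"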

Thanks,

Martin

RE: XenServer VM no longer starts

Posted by Martin Emrich <ma...@empolis.com>.
Yes, that worked. After detaching one volume the VM starts (although it's unusable, as the volume is part of a larger LVM volume).

I'm trying to get my build environment up for Abhinandan's patch, but without success. I cloned the 4.9.2.0 branch and ran (cd packaging ; ./package.sh -p oss -d centos63).
This used to work with 4.6.0, but now I get

+ cp 'tools/marvin/dist/Marvin-*.tar.gz' /opt/csbuild/cs/cloudstack/dist/rpmbuild/BUILDROOT/cloudstack-4.9.2.0-1.el6.x86_64/usr/share/cloudstack-marvin/
cp: cannot stat `tools/marvin/dist/Marvin-*.tar.gz': No such file or directory
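
My guess is that the Marvin tarball simply isn't being built before the packaging step; perhaps building it by hand first would help (an untested guess, assuming Marvin is still a plain setuptools package):

# untested: produce the tools/marvin/dist/Marvin-*.tar.gz that package.sh tries to copy
(cd tools/marvin && python setup.py sdist)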

Any quick idea? Or should I start a new thread for that?

Thanks,

Martin

RE: XenServer VM no longer starts

Posted by "S. Brüseke - proIO GmbH" <s....@proio.com>.
Hi Martin,

As Abhinandan pointed out in a previous mail, it looks like you hit a bug. Take a look at the link he provided in his mail.
Please detach all data disks and try to start the VM. Does that work?
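
For example with CloudMonkey (assuming you have it set up; the uuids below are placeholders):

detach volume id=<volume-uuid>      # repeat for each data disk
start virtualmachine id=<vm-uuid>   # then retry the start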

Mit freundlichen Grüßen / With kind regards,

Swen


RE: XenServer VM no longer starts

Posted by Martin Emrich <ma...@empolis.com>.
Hi!

How can I check that? 

I tried starting the VM; not a single line appeared in SMlog during that attempt.

Thanks,

Martin

RE: XenServer VM no longer starts

Posted by "S. Brüseke - proIO GmbH" <s....@proio.com>.
Hi Martin,

Does the volume still exist on primary storage? You can also take a look at SMlog on the XenServer host.
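
For example, on the pool master (just a sketch; <vdi-uuid> would be the volume's path value from the cloud database):

# check whether the VDI is still known to the storage repository
xe vdi-list uuid=<vdi-uuid> params=uuid,name-label,sr-uuid
# watch the storage manager log while you retry the VM start
tail -f /var/log/SMlog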

Mit freundlichen Grüßen / With kind regards,

Swen


Re: XenServer VM no longer starts

Posted by Abhinandan Prateek <ab...@shapeblue.com>.
Hi Martin,

Looks like you have hit a bug; you can patch it using this PR: https://github.com/apache/cloudstack/pull/1829
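
To see which device slots the hypervisor actually assigned, something like this on the XenServer host may help (a sketch from my side, not taken from the PR):

# list the VM's virtual block devices and their device numbers
xe vbd-list vm-name-label=i-18-2998-VM params=userdevice,device,vdi-uuid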




abhinandan.prateek@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue