Posted to dev@cloudstack.apache.org by li jerry <di...@hotmail.com> on 2019/06/19 15:23:48 UTC

Create VM prompt SR backend failure

Hi, All

I have a CloudStack + XenServer + iSCSI cluster with five nodes: node01, node02, node03, node04, and node05.

CloudStack version is 4.11.2
XenServer version is 7.1.2
Primary storage is iSCSI



When a newly created VM is assigned to node05, the following errors appear in the management server log:
-------------------------------------------------------------------------------------------------------------------
2019-06-19 21:57:06,826 DEBUG [c.c.a.t.Request] (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03) (logid:687a6387) Seq 3-3381640370201
826942: Executing:  { Cmd , MgmtId: 240661250348494, via: 3(node05), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"sr
cTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"593bfe3a-a161-4b14-a316-c4f6e17094d8","uuid":"7700a9fc-3906-4b24-859c-3ffa4d9b7860","i
d":352,"format":"VHD","accountId":2,"hvm":true,"displayText":"yumos 20190301","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStor
eTO":{"uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P0
1CL01_MIX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","size":53687091200,"hypervisorTy
pe":"XenServer","bootable":false,"uniqueName":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","directDownload":false}},"destTO":{"org.apache.cloudstack.storage.to
.VolumeObjectTO":{"uuid":"2cc60f23-f6f5-447d-8120-f9de61bae6f2","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"
uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P01CL01_M
IX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"ROOT-2216","size":53687091200,"volumeId":5333,"vmName":"i-2-2216-VM","account
Id":2,"format":"VHD","provisioningType":"THIN","id":5333,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"options2":{},
"wait":0}}] }
2019-06-19 21:57:25,836 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Unable to create volume; Pool=volumeT
O[uuid=2cc60f23-f6f5-447d-8120-f9de61bae6f2|path=null|datastore=PrimaryDataStoreTO[uuid=P01CL01_MIX34|name=null|id=61|pooltype=PreSetup]]; Disk:
2019-06-19 21:57:25,836 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942: Response Received:
2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request] (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942: Processing:  { Ans: , MgmtId:
240661250348494, via: 3(node05), Ver: v1, Flags: 110, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"There was an S
R backend failure.","wait":0}}] }
2019-06-19 21:57:25,836 DEBUG [c.c.a.m.AgentAttache] (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942: No more commands found
2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request] (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03) (logid:687a6387) Seq 3-3381640370201
826942: Received:  { Ans: , MgmtId: 240661250348494, via: 3(node05), Ver: v1, Flags: 110, { CopyCmdAnswer } }
2019-06-19 21:57:25,847 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl] (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03) (logid:687a63
87) Unsupported data object (VOLUME, org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@5d4df229), no need to delete from object in store ref t
able
2019-06-19 21:57:25,847 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03) (logid:687a6387) Unable
to create Vol[5333|vm=2216|ROOT]:There was an SR backend failure.
----------------------------------------------------------------------------------------------------------------------
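All lines in this excerpt share logid 687a6387, so the complete failed job can be pulled out of the management server log in one pass; a minimal sketch, assuming the default log path (adjust it if your installation differs):

#!/usr/bin/env python
# Minimal sketch: print every management-server log line that belongs to
# one logid. The path below is the usual default on a CloudStack
# management server; adjust it if your installation differs.
import sys

LOG_FILE = "/var/log/cloudstack/management/management-server.log"
LOG_ID = sys.argv[1] if len(sys.argv) > 1 else "687a6387"

with open(LOG_FILE) as f:
    for line in f:
        if "logid:" + LOG_ID in line:
            sys.stdout.write(line)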


CloudStack then retries and creates the VM on one of the other four nodes.
VMs that are already running normally can be live-migrated to node05 through CloudStack without any problem.
Because of this problem, new VMs are only ever created on node01-node04.
Do you have any suggestions or experience to share?
Thank you

Re: Create VM prompt SR backend failure

Posted by li jerry <di...@hotmail.com>.
Hi, Andrija Panic

Thank you for the reminder. We analyzed SMlog and found a faulty VDI under the SR, which is what caused new VDI creation to fail (a rough script for checking the remaining VDIs is sketched after the log excerpt below).



Jun 20 18:35:07 node05 SM: [26622] ['/usr/bin/vhd-util', 'query', '--debug', '-s', '-n', '/dev/VG_XenStorage-155f0774-2448-4db8-7036-090b7e68981f/VHD-6ad186bd-f0bb-46bf-b51e-8acdf87e70f0']
Jun 20 18:35:07 node05 SM: [26622] FAILED in util.pread: (rc 22) stdout: 'failed to get physical size: -22
Jun 20 18:35:07 node05 SM: [26622] ', stderr: ''
Jun 20 18:35:07 node05 SM: [26622] lock: released /var/lock/sm/155f0774-2448-4db8-7036-090b7e68981f/sr
Jun 20 18:35:07 node05 SM: [26622] ***** LVHD over iSCSI: EXCEPTION <class 'util.CommandException'>, Invalid argument
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/SRCommand.py", line 351, in run
Jun 20 18:35:07 node05 SM: [26622]     sr = driver(cmd, cmd.sr_uuid)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/SR.py", line 147, in __init__
Jun 20 18:35:07 node05 SM: [26622]     self.load(sr_uuid)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/LVMoISCSISR", line 320, in load
Jun 20 18:35:07 node05 SM: [26622]     LVHDSR.LVHDSR.load(self, sr_uuid)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/LVHDSR.py", line 199, in load
Jun 20 18:35:07 node05 SM: [26622]     self._undoAllJournals()
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/LVHDSR.py", line 1134, in _undoAllJournals
Jun 20 18:35:07 node05 SM: [26622]     self._handleInterruptedCoalesceLeaf()
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/LVHDSR.py", line 890, in _handleInterruptedCoalesceLeaf
Jun 20 18:35:07 node05 SM: [26622]     cleanup.gc_force(self.session, self.uuid)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 2741, in gc_force
Jun 20 18:35:07 node05 SM: [26622]     sr.scanLocked(force)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 1419, in scanLocked
Jun 20 18:35:07 node05 SM: [26622]     self.scan(force)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 2262, in scan
Jun 20 18:35:07 node05 SM: [26622]     self._handleInterruptedCoalesceLeaf()
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 2340, in _handleInterruptedCoalesceLeaf
Jun 20 18:35:07 node05 SM: [26622]     self._undoInterruptedCoalesceLeaf(uuid, parentUuid)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 2383, in _undoInterruptedCoalesceLeaf
Jun 20 18:35:07 node05 SM: [26622]     parent.deflate()
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 1065, in deflate
Jun 20 18:35:07 node05 SM: [26622]     lvhdutil.deflate(self.sr.lvmCache, self.fileName, self.getSizeVHD())
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/cleanup.py", line 1122, in getSizeVHD
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/util.py", line 292, in ioretry
Jun 20 18:35:07 node05 SM: [26622]     return f()
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/vhdutil.py", line 101, in <lambda>
Jun 20 18:35:07 node05 SM: [26622]     return util.ioretry(lambda: util.pread2(cmd),
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/util.py", line 189, in pread2
Jun 20 18:35:07 node05 SM: [26622]     return pread(cmdlist, quiet = quiet)
Jun 20 18:35:07 node05 SM: [26622]   File "/opt/xensource/sm/util.py", line 182, in pread
Jun 20 18:35:07 node05 SM: [26622]     raise CommandException(rc, str(cmdlist), stderr.strip())
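From the trace above, the SR load aborts while undoing an interrupted leaf coalesce because vhd-util cannot read the physical size of VHD-6ad186bd-f0bb-46bf-b51e-8acdf87e70f0. To see whether any other VDIs on the same SR are in a similar state, one option is to repeat that read-only vhd-util query against every VHD logical volume in the SR's volume group. A rough sketch, not an official XenServer tool (the VG name and vhd-util path are taken from the SMlog above; LVs that are not active on this host are skipped and nothing is modified):

#!/usr/bin/env python
# Rough sketch: repeat the read-only vhd-util query from the SMlog above
# against every VHD logical volume in the SR's volume group, to spot any
# other damaged VDIs. Inactive LVs are skipped; nothing is modified.
import os
import subprocess

VG = "VG_XenStorage-155f0774-2448-4db8-7036-090b7e68981f"

def vhd_lvs(vg):
    # List logical volumes in the VG and keep only the VHD-<vdi-uuid> ones.
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name", vg]).decode()
    return [n.strip() for n in out.splitlines()
            if n.strip().startswith("VHD-")]

bad = []
for lv in vhd_lvs(VG):
    dev = "/dev/%s/%s" % (VG, lv)
    if not os.path.exists(dev):   # LV not active on this host, skip it
        continue
    rc = subprocess.call(["/usr/bin/vhd-util", "query", "-s", "-n", dev])
    if rc != 0:
        bad.append(lv)

print("VHDs that failed the query: %s" % (bad if bad else "none"))

Any VDI that fails the query here is a candidate for the same failure seen above.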



________________________________
From: Andrija Panic <an...@gmail.com>
Sent: Wednesday, June 19, 2019 11:59:36 PM
To: users
Cc: dev@cloudstack.apache.org
Subject: Re: Create VM prompt SR backend failure

Li,

please check the logs on XenServer; they should contain the error message
explaining why volume creation failed.

Best
Andrija

On Wed, 19 Jun 2019 at 17:24, li jerry <di...@hotmail.com> wrote:

> Hi, All
>
> I have a CloudStack + XenServer + iSCSI cluster with five nodes: node01,
> node02, node03, node04, and node05.
>
> CloudStack version is 4.11.2
> XenServer version is 7.1.2
> Primary storage is iSCSI
>
>
>
> When a newly created VM is assigned to node05, the following errors appear
> in the management server log:
>
> -------------------------------------------------------------------------------------------------------------------
> 2019-06-19 21:57:06,826 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Seq 3-3381640370201
> 826942: Executing:  { Cmd , MgmtId: 240661250348494, via: 3(node05), Ver:
> v1, Flags: 100111,
> [{"org.apache.cloudstack.storage.command.CopyCommand":{"sr
>
> cTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"593bfe3a-a161-4b14-a316-c4f6e17094d8","uuid":"7700a9fc-3906-4b24-859c-3ffa4d9b7860","i
> d":352,"format":"VHD","accountId":2,"hvm":true,"displayText":"yumos
> 20190301","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStor
>
> eTO":{"uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P0
>
> 1CL01_MIX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","size":53687091200,"hypervisorTy
>
> pe":"XenServer","bootable":false,"uniqueName":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","directDownload":false}},"destTO":{"
> org.apache.cloudstack.storage.to
>
> .VolumeObjectTO":{"uuid":"2cc60f23-f6f5-447d-8120-f9de61bae6f2","volumeType":"ROOT","dataStore":{"org.
> apache.cloudstack.storage.to.PrimaryDataStoreTO":{"
>
> uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P01CL01_M
>
> IX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"ROOT-2216","size":53687091200,"volumeId":5333,"vmName":"i-2-2216-VM","account
>
> Id":2,"format":"VHD","provisioningType":"THIN","id":5333,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"options2":{},
> "wait":0}}] }
> 2019-06-19 21:57:25,836 WARN  [c.c.h.x.r.XenServerStorageProcessor]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Unable to create volume;
> Pool=volumeT
> O[uuid=2cc60f23-f6f5-447d-8120-f9de61bae6f2|path=null|datastore=PrimaryDataStoreTO[uuid=P01CL01_MIX34|name=null|id=61|pooltype=PreSetup]];
> Disk:
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> Response Received:
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> Processing:  { Ans: , MgmtId:
> 240661250348494, via: 3(node05), Ver: v1, Flags: 110,
> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"There
> was an S
> R backend failure.","wait":0}}] }
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.m.AgentAttache]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> No more commands found
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Seq 3-3381640370201
> 826942: Received:  { Ans: , MgmtId: 240661250348494, via: 3(node05), Ver:
> v1, Flags: 110, { CopyCmdAnswer } }
> 2019-06-19 21:57:25,847 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a63
> 87) Unsupported data object (VOLUME,
> org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@5d4df229),
> no need to delete from object in store ref t
> able
> 2019-06-19 21:57:25,847 DEBUG [o.a.c.e.o.VolumeOrchestrator]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Unable
> to create Vol[5333|vm=2216|ROOT]:There was an SR backend failure.
>
> ----------------------------------------------------------------------------------------------------------------------
>
>
> CloudStack then retries and creates the VM on one of the other four nodes.
> VMs that are already running normally can be live-migrated to node05
> through CloudStack without any problem.
> Because of this problem, new VMs are only ever created on node01-node04.
> Do you have any suggestions or experience to share?
> Thank you
>


--

Andrija Panić


Re: Create VM prompt SR backend failure

Posted by Andrija Panic <an...@gmail.com>.
Li,

please check the logs on XenServer; they should contain the error message
explaining why volume creation failed.
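
On XenServer 7.x the storage manager writes its detail to /var/log/SMlog on the host that executed the command (XAPI-level errors end up in /var/log/xensource.log); the block around the first EXCEPTION near the time of the failure usually names the real cause. A small helper, just as a sketch:

#!/usr/bin/env python
# Sketch only: print every EXCEPTION block from the XenServer storage
# manager log with a little surrounding context. /var/log/SMlog is the
# usual location on the host that executed the storage command.
LOG = "/var/log/SMlog"
BEFORE, AFTER = 3, 15

with open(LOG) as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if "EXCEPTION" in line:
        print("".join(lines[max(0, i - BEFORE):i + AFTER]))
        print("-" * 60)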

Best
Andrija

On Wed, 19 Jun 2019 at 17:24, li jerry <di...@hotmail.com> wrote:

> Hi, All
>
> I have a CloudStack + XenServer + iSCSI cluster with five nodes: node01,
> node02, node03, node04, and node05.
>
> CloudStack version is 4.11.2
> XenServer version is 7.1.2
> Primary storage is iSCSI
>
>
>
> When a newly created VM is assigned to node05, the following errors appear
> in the management server log:
>
> -------------------------------------------------------------------------------------------------------------------
> 2019-06-19 21:57:06,826 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Seq 3-3381640370201
> 826942: Executing:  { Cmd , MgmtId: 240661250348494, via: 3(node05), Ver:
> v1, Flags: 100111,
> [{"org.apache.cloudstack.storage.command.CopyCommand":{"sr
>
> cTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"593bfe3a-a161-4b14-a316-c4f6e17094d8","uuid":"7700a9fc-3906-4b24-859c-3ffa4d9b7860","i
> d":352,"format":"VHD","accountId":2,"hvm":true,"displayText":"yumos
> 20190301","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStor
>
> eTO":{"uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P0
>
> 1CL01_MIX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","size":53687091200,"hypervisorTy
>
> pe":"XenServer","bootable":false,"uniqueName":"1e48f683-ad70-4fe7-bb1e-d5731afd4001","directDownload":false}},"destTO":{"
> org.apache.cloudstack.storage.to
>
> .VolumeObjectTO":{"uuid":"2cc60f23-f6f5-447d-8120-f9de61bae6f2","volumeType":"ROOT","dataStore":{"org.
> apache.cloudstack.storage.to.PrimaryDataStoreTO":{"
>
> uuid":"P01CL01_MIX34","id":61,"poolType":"PreSetup","host":"localhost","path":"/P01CL01_MIX34","port":0,"url":"PreSetup://localhost/P01CL01_M
>
> IX34/?ROLE=Primary&STOREUUID=P01CL01_MIX34","isManaged":false}},"name":"ROOT-2216","size":53687091200,"volumeId":5333,"vmName":"i-2-2216-VM","account
>
> Id":2,"format":"VHD","provisioningType":"THIN","id":5333,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"options2":{},
> "wait":0}}] }
> 2019-06-19 21:57:25,836 WARN  [c.c.h.x.r.XenServerStorageProcessor]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Unable to create volume;
> Pool=volumeT
> O[uuid=2cc60f23-f6f5-447d-8120-f9de61bae6f2|path=null|datastore=PrimaryDataStoreTO[uuid=P01CL01_MIX34|name=null|id=61|pooltype=PreSetup]];
> Disk:
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> Response Received:
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> Processing:  { Ans: , MgmtId:
> 240661250348494, via: 3(node05), Ver: v1, Flags: 110,
> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"There
> was an S
> R backend failure.","wait":0}}] }
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.m.AgentAttache]
> (DirectAgent-133:ctx-5bfa8d07) (logid:687a6387) Seq 3-3381640370201826942:
> No more commands found
> 2019-06-19 21:57:25,836 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Seq 3-3381640370201
> 826942: Received:  { Ans: , MgmtId: 240661250348494, via: 3(node05), Ver:
> v1, Flags: 110, { CopyCmdAnswer } }
> 2019-06-19 21:57:25,847 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a63
> 87) Unsupported data object (VOLUME,
> org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@5d4df229),
> no need to delete from object in store ref t
> able
> 2019-06-19 21:57:25,847 DEBUG [o.a.c.e.o.VolumeOrchestrator]
> (Work-Job-Executor-28:ctx-f83290e0 job-75093/job-75095 ctx-23f17f03)
> (logid:687a6387) Unable
> to create Vol[5333|vm=2216|ROOT]:There was an SR backend failure.
>
> ----------------------------------------------------------------------------------------------------------------------
>
>
> CloudStack then retries and creates the VM on one of the other four nodes.
> VMs that are already running normally can be live-migrated to node05
> through CloudStack without any problem.
> Because of this problem, new VMs are only ever created on node01-node04.
> Do you have any suggestions or experience to share?
> Thank you
>


-- 

Andrija Panić
