Posted to users@cloudstack.apache.org by Jeremy Hansen <je...@skidrow.la.INVALID> on 2023/02/21 10:35:58 UTC

Stuck in Preparing for maintenance on primary storage

I tried to put one of my primary storage definitions into maintenance mode. Now it’s stuck in Preparing for maintenance and I’m not sure how to remedy this situation:

Cancel maintenance mode
(NFS Primary) Resource [StoragePool:1] is unreachable: Primary storage with id 1 is not ready to complete migration, as the status is:PrepareForMaintenance

Restarted the management server, agents, and libvirtd. My secondary storage VM can’t start…

This is CloudStack 4.17.2.0, using NFS for primary and secondary storage. I was attempting to migrate to a new volume. All volumes were moved to the new storage; I was simply trying to delete the old storage definition.

Thanks
-jeremy
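
A pool stuck in PrepareForMaintenance can usually be inspected from the management server before anything drastic. The sketch below is only illustrative: it assumes CloudMonkey (cmk) is configured, the pool UUID is a placeholder, and the direct DB update at the end is a last-resort workaround, so verify the table and column names against your own schema and back up the database first.

  # Check the pool state as CloudStack sees it (the API wants the pool UUID)
  cmk list storagepools name="NFS Primary" filter=id,name,state
  cmk cancel storagemaintenance id=<pool-uuid>

  # Last resort: reset the pool row in the DB (numeric id 1 from the error above),
  # then restart the management server
  mysql -u cloud -p cloud -e "SELECT id,name,status FROM storage_pool WHERE id=1"
  mysql -u cloud -p cloud -e "UPDATE storage_pool SET status='Up' WHERE id=1"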


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
The VM filesystem is putting itself in read-only mode, so something odd
appears to be going on.
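
A quick way to confirm that from inside the system VM is to check whether the root filesystem has been remounted read-only. A minimal sketch, run from the VM's console:

  # "ro" in the mount options here means EXT4 hit an I/O error and flipped
  # the root filesystem to read-only (errors=remount-ro in the dmesg output)
  grep ' / ' /proc/mounts
  dmesg | grep -iE 'remount|i/o error' | tail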


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
No issue with writes:

192.168.210.23:/exports/cloudstorage/primary 49T 57G 47T 1% /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020
tmpfs 6.3G 0 6.3G 0% /run/user/0
192.168.210.23:/exports/cloudstorage/secondary 49T 57G 47T 1% /var/cloudstack/mnt/161333239336.2b9f6261

[root@droid 11cd19d0-f207-3d01-880f-8d01d4b15020]# touch /var/cloudstack/mnt/161333239336.2b9f6261/file
[root@droid 11cd19d0-f207-3d01-880f-8d01d4b15020]# ls -lad /var/cloudstack/mnt/161333239336.2b9f6261/file
-rw-r--r-- 1 root root 0 Feb 22 17:30 /var/cloudstack/mnt/161333239336.2b9f6261/file

[root@droid ~]# touch /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/file
[root@droid ~]# ls -ald /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/file
-rw-r--r-- 1 root root 0 Feb 22 17:31 /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/file

-jeremy
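
One caveat with the touch test: it runs as root on the hypervisor, while qemu inside the VMs typically writes as an unprivileged user, so root_squash or ownership on the export can still cause exactly these I/O errors. A minimal sketch of the extra checks, assuming a Linux NFS server and a host user named qemu (both assumptions, adjust to your setup):

  # On the NFS server: confirm the export options (rw, and whether root is squashed)
  exportfs -v | grep cloudstorage

  # On the KVM host: check ownership of the pool mount and try a write as
  # the user the VM processes actually run as
  ls -ld /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020
  sudo -u qemu touch /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/qemu-write-test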


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
Jeremy,

Any chance you have a write permission problem on your new NFS server?
Those errors indicate an underlying storage issue.

-Si


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Oh, and the system VMs continue to stay in Starting state.

-jeremy


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
The VMs finally stopped and restarted. This is what I’m seeing in dmesg on the secondary storage VM:

root@s-60-VM:~# dmesg | grep -i error
[ 3.861852] blk_update_request: I/O error, dev vda, sector 6787872 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 3.865833] blk_update_request: I/O error, dev vda, sector 6787872 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 3.869553] systemd[1]: Failed to read configured hostname: Input/output error
[ 4.560419] EXT4-fs (vda6): re-mounted. Opts: errors=remount-ro
[ 4.646460] blk_update_request: I/O error, dev vda, sector 6787160 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4.650710] blk_update_request: I/O error, dev vda, sector 6787160 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4.975915] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4.980318] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.018828] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 5.022976] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.026750] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.460315] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 10.415215] print_req_error: 16 callbacks suppressed
[ 10.415219] blk_update_request: I/O error, dev vda, sector 6787864 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.362595] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.388990] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.787276] blk_update_request: I/O error, dev vda, sector 6399408 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 13.791575] blk_update_request: I/O error, dev vda, sector 6399408 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 14.632299] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 14.658283] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0

-jeremy
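
Those read errors on vda point at the system VM's root disk image on primary storage being unreadable from the host, rather than a problem inside the guest. A minimal sketch of checking the image from the KVM host; the volume filename is a placeholder and has to be looked up, for example from the domain XML:

  # Find the disk file the domain points at
  virsh dumpxml s-60-VM | grep -A 2 'disk type'

  # Then verify the image on the primary storage mount
  qemu-img info /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/<volume-uuid>
  qemu-img check /mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/<volume-uuid>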


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
The node CloudStack claims the system VMs are starting on shows no signs of any VMs running. virsh list is blank.

Thanks
-jeremy
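
Worth noting that plain virsh list only shows running domains. A minimal sketch of cross-checking what libvirt and the CloudStack agent actually see on that host:

  virsh list --all                                  # include defined but stopped domains
  virsh list --all | grep -E 's-|v-'                # system VMs are normally named s-NN-VM / v-NN-VM
  tail -n 100 /var/log/cloudstack/agent/agent.log   # what the agent last reported to management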


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Also, just to note, I’m not sure how much made it into the logs. The system VMs are stuck in Starting state, and trying to kill them through the interface doesn’t seem to do anything.

-jeremy


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Is there something else I can use to submit logs? Too much for pastebin.

Thanks
-jeremy


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
Can you pull some management server logs and also put the CloudStack KVM
agent into debug mode before destroying the ssvm and share the logs?

https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=30147350#content/view/30147350
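
A minimal sketch of doing that on a 4.17 KVM host, assuming the default log4j config path (it can differ by distro and version):

  # On the KVM host: switch the agent logger from INFO to DEBUG and restart it
  sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
  systemctl restart cloudstack-agent
  tail -f /var/log/cloudstack/agent/agent.log

  # On the management server: the main log worth capturing
  tail -f /var/log/cloudstack/management/management-server.log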


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Yes. It’s just a different partition on the same NFS server.


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
The new and old primary storage is in the same zone, correct?
Did you also change out the secondary storage?


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Yes, on KVM. I’ve been trying to destroy them from the interface and it just keeps churning. I did a destroy with virsh, but the status never changed in the interface. Also, the newly created ones don’t seem to bring up their agent and never fully start.
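
For reference, the virsh side was roughly this on the host (the domain name
is just an example):

virsh list --all        # find the system VM domains, e.g. s-60-VM
virsh destroy s-60-VM   # hard power-off; the interface still showed the old state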

Thanks


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
Just destroy the old system VMs and they will be recreated on available
storage.

Are you on KVM?
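
If the UI keeps spinning, CloudMonkey should be able to do it as well; roughly
something like this (the UUID is a placeholder, take it from the list output):

cmk list systemvms                          # note the ids of the SSVM and console proxy
cmk destroy systemvm id=<system-vm-uuid>    # it should come back on available storage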




Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
How do I completely recreate the system VMs?

I was able to get the old storage into full maintenance and deleted it, so maybe the system VMs are still using the old storage? Is there a way to tell the system VMs to use the new storage? A DB change?
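
Something like this against the cloud DB is what I had in mind, though the
table and column names are just my guess, so treat it as a sketch:

mysql -u cloud -p cloud -e "SELECT vm.name, vm.state, v.name AS volume, v.pool_id
  FROM vm_instance vm JOIN volumes v ON v.instance_id = vm.id
  WHERE vm.type IN ('SecondaryStorageVm','ConsoleProxy') AND v.removed IS NULL;"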

Thanks!


Re: Stuck in Preparing for maintenance on primary storage

Posted by Simon Weller <si...@gmail.com>.
Hey Jeremy,

Is there anything in the management logs that indicates why it's not
completing the maintenance action?
Usually, this state is triggered by some stuck VMs that haven't migrated
off of the primary storage.

You mentioned the system VMs. Are they still on the old storage? Could this
be due to some storage tags?
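
Something like this should show what the management server is complaining
about and whether anything is still parked on the old pool (pool id 1 is
taken from your error message; the log path is the default one):

grep -i "StoragePool:1\|PrepareForMaintenance" /var/log/cloudstack/management/management-server.log | tail -50
cmk list volumes storageid=<old-pool-uuid> listall=true   # anything still on the old pool?
cmk list storagepools id=<old-pool-uuid>                  # compare tags on the old and new pool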

-Si


Re: Stuck in Preparing for maintenance on primary storage

Posted by Jeremy Hansen <je...@skidrow.la.INVALID>.
Any ideas on this? I’m completely stuck. Can’t bring up my system VMs and I can’t remove the old primary storage.

-jeremy
