Posted to users@cloudstack.apache.org by Yordan Kostov <Yo...@worldsupport.info> on 2018/10/18 09:32:45 UTC

CloudStack database and orphaned entities

Dear all,

While testing different setups and then deleting them, I noticed that orphaned system VMs or hosts quite often remain in the database.

For example, during removal of a zone (system VMs, secondary storages, hosts, cluster, pod, networks and then the zone itself), if the steps are not executed in the correct order, system VMs are left orphaned in the DB (not seen in the GUI), which then prevents deletion of the pod. The error (quoted from memory) says "there are existing hosts so operation cannot proceed". Other times the orphaned VMs lock public IPs, preventing deletion of the zone networks.

What I did to get around the issue is go inside the DB and tweak the rows in the cloud.vm_instance and cloud.host tables for the particular system instance so that they mimic those of already removed instances (changing the state to removed, setting the modification date, etc.).
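
Roughly, the statements looked like this (a sketch from memory against the 4.11.2 schema; the id is a placeholder and the exact state/column values are best copied from a row that CloudStack itself removed cleanly, so please double-check before trying anything like this):

    USE cloud;

    -- First compare with an instance that was removed cleanly
    SELECT id, instance_name, state, removed
      FROM vm_instance
     WHERE removed IS NOT NULL
     LIMIT 5;

    -- 42 is a placeholder id for the orphaned system VM
    UPDATE vm_instance
       SET state = 'Expunging', removed = NOW()
     WHERE id = 42;

    -- ...and the matching row in the host table (it has its own id;
    -- 42 is again just a placeholder)
    UPDATE host
       SET removed = NOW()
     WHERE id = 42;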

What is the best way to approach such an issue in production?
Also, what is the reasoning behind system VMs being present in both the vm_instance and host tables at the same time? It feels counter-intuitive to look for/insert VMs in the host table.
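
For context, this is roughly how they show up when querying the host table (the type values below are what I believe 4.11.2 uses; treat this as an illustration):

    -- System VMs appear alongside the hypervisor hosts, with their own types
    SELECT id, name, type, status
      FROM cloud.host
     WHERE type IN ('SecondaryStorageVM', 'ConsoleProxy');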

Best regards,
Jordan Kostov


Re: CloudStack database and orphaned entities

Posted by Boris Stoyanov <bo...@shapeblue.com>.
Well, with Xen I think you have a console where you can see and manage all running VMs on the host, and you can remove one there if it's orphaned. Further, I think a new CloudStack zone/installation might even sync this up and recognize the orphan as a guest VM, so you would probably be able to see it in the CloudStack UI.

*At least that's what happens with VMware.

Bobby.





RE: CloudStack database and orphaned entities

Posted by Yordan Kostov <Yo...@worldsupport.info>.
Thank you for pointing this out, Bobby,
I will document the steps and open the ticket there.

Btw, I forgot to mention - this is ACS 4.11.2 + XenServer.

Best regards,
Jordan



Re: CloudStack database and orphaned entities

Posted by Boris Stoyanov <bo...@shapeblue.com>.
Hi Yordan,

I think the best approach would be to submit an issue on the ACS GitHub page and document the exact steps to reproduce it. From there someone could pick it up and fix it for the next LTS release. Honestly, I don't think this would be a common production problem, since someone who wants to remove a zone would most likely be keen on building it up as a fresh installation on a separate DB/server.

The issue with VMs left behind on hosts could be problematic if there's no proper visibility into what's running on the host (KVM mostly), since CloudStack only interacts with its own resources; if there's any other instance running on the KVM host, CloudStack will not touch/report/interact with it.

Good find. Please add the issue on GitHub with detailed steps so it can get triaged.

Thanks,
Bobby. 


