Posted to users@cloudstack.apache.org by Alessandro Caviglione <c....@gmail.com> on 2016/01/01 23:23:14 UTC

A Story of a Failed XenServer Upgrade

Hi guys,
I want to share my XenServer upgrade adventure to understand if I did
something wrong.
I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs had
been upgraded I started the upgrade of my XenServer hosts from 6.2 to
6.5.
I do not have Pool HA enabled, so I followed this article:
http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/

The cluster consists of 3 XenServer hosts.

First of all I added manage.xenserver.pool.master=false
to environment.properties file and restarted cloudstack-management service.
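That property change is a one-liner; a minimal sketch of the edit, staged here against a local copy of the file (the real path is an assumption and varies by packaging, e.g. under the management server's conf directory on an RPM install):

```shell
# Sketch: stage the flag on a local copy of environment.properties.
# On a real management server, edit the actual file under the install's
# conf directory instead.
PROPS=environment.properties
touch "$PROPS"   # stand-in for the real file

# Append the flag only if it is not already present
grep -q '^manage.xenserver.pool.master=' "$PROPS" \
  || echo 'manage.xenserver.pool.master=false' >> "$PROPS"

# Then, on the real server:
# service cloudstack-management restart
```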

After that I put the Pool Master host in Maintenance Mode and, after all VMs
had been migrated, I Unmanaged the cluster.
At this point all hosts appeared as "Disconnected" in the CS interface, which
should be expected.
Then I inserted the XenServer 6.5 CD in the host in Maintenance Mode and
started an in-place upgrade.
After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
At this point I expected that, after clicking Manage Cluster in CS, all the
hosts would come back "UP" and I could go ahead upgrading the other hosts....

But instead, all the hosts still appeared as "Disconnected"; I
tried a couple of cloudstack-management service restarts without success.

So I opened XenCenter and connected to the Pool Master I had upgraded to 6.5;
it appeared in Maintenance Mode, and when I tried to Exit Maintenance Mode I
got the error: "The server is still booting"

After some investigation, I ran the command "xe task-list", and this is the
result:

uuid ( RO)             : 72f48a56-1d24-1ca3-aade-091f1830e2f1
name-label ( RO)       : VM.set_memory_dynamic_range
name-description ( RO) :
status ( RO)           : pending
progress ( RO)         : 0.000
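For anyone hitting the same stuck task: the `xe` CLI can cancel a pending task, and a toolstack restart usually clears one that refuses to die. A hedged sketch of that recovery attempt (the UUID is the one from the output above; this is not a step from the original thread):

```shell
# On the affected XenServer host: list pending tasks, then cancel the
# stuck one (UUID taken from the task-list output above).
xe task-list params=uuid,name-label,status
xe task-cancel uuid=72f48a56-1d24-1ca3-aade-091f1830e2f1

# If the task survives the cancel, restarting the toolstack (which does
# not touch running VMs) often clears it:
xe-toolstack-restart
```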

I tried a couple of reboots but nothing changed.... so I decided to shut
down the server, force-promote a slave host to master using emergency mode,
remove the old server from CS and restart CS.

After that, I saw my cluster up and running again, so I installed XS 6.2 SP1
on the "upgraded" host and added it back to the cluster....

So after an entire day of work, I'm in the same situation! :D

Can anyone tell me if I did something wrong??

Thank you very much!

R: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
Hi Yiping. I'll be doing this soon. If you can share this document I'd
appreciate it. Thanks.



Sent from my Samsung device


-------- Original message --------
From: Yiping Zhang <yz...@marketo.com>
Date: 08/01/2016 02:32 (GMT+01:00)
To: users@cloudstack.apache.org, aemneina@gmail.com
Subject: Re: A Story of a Failed XenServer Upgrade

Hi, Alessandro

Late to the thread.  Is this still an issue for you ?

I went thru this process before and I have a step by step document that I can share if you still need it.

Yiping




On 1/2/16, 4:43 PM, "Ahmad Emneina" <ae...@gmail.com> wrote:

>Hi Alessandro,
>Without seeing the logs, or DB, it will be hard to diagnose the issue. I've
>seen something similar in the past, where the XenServer host version isn't
>getting updated in the DB as part of the XS upgrade process. That caused
>CloudStack to use the wrong hypervisor resource to try connecting back to
>the XenServers... ending up in failure. If you could share sanitized
>versions of your log and db, someone here might be able to give you the
>necessary steps to get your cluster back under CloudStack control.
>
>On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
>c.alessandro@gmail.com> wrote:
>
>> No guys, as the article wrote, my first action was to put the Pool Master
>> in Maintenance Mode INSIDE CS: "It is vital that you upgrade the XenServer
>> Pool Master first before any of the Slaves.  To do so you need to empty the
>> Pool Master of all CloudStack VMs, and you do this by putting the Host into
>> Maintenance Mode within CloudStack to trigger a live migration of all VMs
>> to alternate Hosts"
>>
>> This is exactly what I've done, and after the XS upgrade no host was able
>> to communicate with CS, nor with the upgraded host.
>>
>> Does putting a host in Maintenance Mode within CS also trigger Maintenance
>> Mode on the XenServer host, or does it just move the VMs to other hosts?
>>
>> And again.... what's the best practice to upgrade a XS cluster?
>>
>> On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <RB...@schubergphilis.com>
>> wrote:
>>
>> > CloudStack should always do the migration of VMs, not the Hypervisor.
>> >
>> > That's not true. You can safely migrate outside of CloudStack, as the
>> > power report will tell CloudStack where the VMs live and the db gets
>> > updated accordingly. I do this a lot while patching and that works fine
>> > on 6.2 and 6.5. I use both CloudStack 4.4.4 and 4.7.0.
>> >
>> > Regards, Remi
>> >
>> >
>> > Sent from my iPhone
>> >
>> > On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeterson@acentek.net> wrote:
>> >
>> > I don't use XenServer maintenance mode until after CloudStack has put the
>> > Host in maintenance mode.
>> >
>> > When you initiate maintenance mode from the host rather than CloudStack,
>> > the db does not know where the VMs are and your UUIDs get jacked.
>> >
>> > CS is your brains, not the hypervisor.
>> >
>> > Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.
>> > Upgrade.  Reboot.  Join Pool.  Remove Maintenance starting at the
>> > hypervisor if needed, then CS, and move on to the next Host.
>> >
>> > CloudStack should always do the migration of VM's not the Hypervisor.
>> >
>> > Jeremy
>> >
>> >
>> > -----Original Message-----
>> > From: Davide Pala [mailto:davide.pala@gesca.it]
>> > Sent: Friday, January 1, 2016 5:18 PM
>> > To: users@cloudstack.apache.org
>> > Subject: R: A Story of a Failed XenServer Upgrade
>> >
>> > Hi Alessandro. If you put the master in maintenance mode you force the
>> > election of a new pool master. Now, when you see the upgraded host as
>> > disconnected, you are connected to the new pool master, and the host (as a
>> > pool member) cannot communicate with a pool master of an earlier version.
>> > The solution? Launch the upgrade on the pool master without entering
>> > maintenance mode. And remember a consistent backup!!!

Re: A Story of a Failed XenServer Upgrade

Posted by Pierre-Luc Dion <pd...@cloudops.com>.
Hi Alessandro,

Sorry to step in late; did you follow the current upgrade instructions [1]? I
think we still have to re-copy 4 scripts from the cloudstack-management
server to a recently upgraded XenServer:

/opt/xensource/sm/NFSSR.py
/opt/xensource/bin/setupxenserver.sh
/opt/xensource/bin/make_migratable.sh
/opt/xensource/bin/cloud-clean-vlan.sh
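For reference, something like the following would do that copy. The host name and source paths are assumptions (on a typical package install the CloudStack XenServer scripts live under /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver, but check the upgrade guide for your exact version):

```shell
# Hypothetical host name and source layout -- verify against your install.
XS_HOST=xenserver1.example.com
SRC=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver

# Copy the four scripts to their destinations listed above
scp "$SRC/NFSSR.py"             root@$XS_HOST:/opt/xensource/sm/
scp "$SRC/setupxenserver.sh"    root@$XS_HOST:/opt/xensource/bin/
scp "$SRC/make_migratable.sh"   root@$XS_HOST:/opt/xensource/bin/
scp "$SRC/cloud-clean-vlan.sh"  root@$XS_HOST:/opt/xensource/bin/

# Make the shell scripts executable on the host
ssh root@$XS_HOST 'chmod +x /opt/xensource/bin/*.sh'
```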

I don't see any wrong steps in your process, except the copy of the 4
files. Since you upgraded from 6.2 to 6.5, I'm wondering if the iptables
rules in dom0 changed and CloudStack lost connectivity to the freshly
upgraded XenServer?

Also, once the pool master was upgraded, did you perform a "Force Reconnect" in
CloudStack and look into the management-server.log to see what went wrong?
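A quick way to do that check (the log path is the default for a package install; the grep pattern is just a starting point, and CitrixResourceBase is the class that usually appears in XenServer connect errors):

```shell
# After clicking "Force Reconnect" in the UI, inspect the management log
# for the host connect attempt and any stack trace that follows.
LOG=/var/log/cloudstack/management/management-server.log
tail -n 500 "$LOG" | grep -iE 'xenserver|citrixresourcebase|connect'
```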


I agree with you, Davide: placing a node in maintenance mode in CloudStack
must not also place the host in maintenance mode in the XenServer pool,
because that could trigger a pool-master change, which is not wanted during
maintenance such as applying hotfixes.

[1]
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.6/hypervisor/xenserver.html#upgrading-xenserver-versions


Regards,


On Fri, Jan 8, 2016 at 5:20 AM, Alessandro Caviglione <
c.alessandro@gmail.com> wrote:

> Hi Yiping,
> yes, thank you very much!!
> Please share the doc so I can try the upgrade process again and see if it
> was only an "unfortunate coincidence of events" or a wrong upgrade process.
>
> Thanks!
>
> On Fri, Jan 8, 2016 at 10:20 AM, Nux! <nu...@li.nux.ro> wrote:
>
> > Yiping,
> >
> > Why not make a blog post about it so everyone can benefit? :)
> >
> > Lucian
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > ----- Original Message -----
> > > From: "Yiping Zhang" <yz...@marketo.com>
> > > To: users@cloudstack.apache.org, aemneina@gmail.com
> > > Sent: Friday, 8 January, 2016 01:31:21
> > > Subject: Re: A Story of a Failed XenServer Upgrade
> >
> > > Hi, Alessandro
> > >
> > > Late to the thread.  Is this still an issue for you ?
> > >
> > > I went thru this process before and I have a step by step document that
> > I can
> > > share if you still need it.
> > >
> > > Yiping
> > >
> > >
> > >
> > >
> > > On 1/2/16, 4:43 PM, "Ahmad Emneina" <ae...@gmail.com> wrote:
> > >
> > >>Hi Alessandro,
> > >>Without seeing the logs, or DB, it will be hard to diagnose the issue.
> > I've
> > >>seen something similar in the past, where the XenServer host version
> isnt
> > >>getting updated in the DB, as part of the XS upgrade process. That
> caused
> > >>CloudStack to use the wrong hypervisor resource to try connecting back
> to
> > >>the XenServers... ending up in failure. If you could share sanitized
> > >>versions of your log and db, someone here might be able to give you the
> > >>necessary steps to get your cluster back under CloudStack control.
> > >>
> > >>On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
> > >>c.alessandro@gmail.com> wrote:
> > >>
> > >>> No guys,as the article wrote, my first action was to put in
> Maintenance
> > >>> Mode the Pool Master INSIDE CS; "It is vital that you upgrade the
> > XenServer
> > >>> Pool Master first before any of the Slaves.  To do so you need to
> > empty the
> > >>> Pool Master of all CloudStack VMs, and you do this by putting the
> Host
> > into
> > >>> Maintenance Mode within CloudStack to trigger a live migration of all
> > VMs
> > >>> to alternate Hosts"
> > >>>
> > >>> This is exactly what I've done and after the XS upgrade, no hosts was
> > able
> > >>> to communicate with CS and also with the upgraded host.
> > >>>
> > >>> Putting an host in Maint Mode within CS will trigger MM also on
> > XenServer
> > >>> host or just will move the VMs to other hosts?
> > >>>
> > >>> And again.... what's the best practices to upgrade a XS cluster?
> > >>>
> > >>> On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <
> > RBergsma@schubergphilis.com>
> > >>> wrote:
> > >>>
> > >>> > CloudStack should always do the migration of VM's not the
> Hypervisor.
> > >>> >
> > >>> > That's not true. You can safely migrate outside of CloudStack as
> the
> > >>> power
> > >>> > report will tell CloudStack where the vms live and the db gets
> > updated
> > >>> > accordingly. I do this a lot while patching and that works fine on
> > 6.2
> > >>> and
> > >>> > 6.5. I use both CloudStack 4.4.4 and 4.7.0.
> > >>> >
> > >>> > Regards, Remi
> > >>> >
> > >>> >
> > >>> > Sent from my iPhone
> > >>> >
> > >>> > On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeterson@acentek.net
> > <mailto:
> > >>> > jpeterson@acentek.net>> wrote:
> > >>> >
> > >>> > I don't use XenServer maintenance mode until after CloudStack has
> > put the
> > >>> > Host in maintenance mode.
> > >>> >
> > >>> > When you initiate maintenance mode from the host rather than
> > CloudStack
> > >>> > the db does not know where the VM's are and your UUID's get jacked.
> > >>> >
> > >>> > CS is your brains not the hypervisor.
> > >>> >
> > >>> > Maintenance in CS.  All VM's will migrate.  Maintenance in
> XenCenter.
> > >>> > Upgrade.  Reboot.  Join Pool.  Remove Maintenance starting at
> > hypervisor
> > >>> if
> > >>> > needed and then CS and move on to the next Host.
> > >>> >
> > >>> > CloudStack should always do the migration of VM's not the
> Hypervisor.
> > >>> >
> > >>> > Jeremy
> > >>> >
> > >>> >
> > >>> > -----Original Message-----
> > >>> > From: Davide Pala [mailto:davide.pala@gesca.it]
> > >>> > Sent: Friday, January 1, 2016 5:18 PM
> > >>> > To: users@cloudstack.apache.org<mailto:users@cloudstack.apache.org
> >
> > >>> > Subject: R: A Story of a Failed XenServer Upgrade
> > >>> >
> > >>> > Hi alessandro. If u put in maintenance mode the master you force
> the
> > >>> > election of a new pool master. Now when you have see the upgraded
> > host as
> > >>> > disconnected you are connected to the new pool master and the host
> > (as a
> > >>> > pool member) cannot comunicate with a pool master of an earliest
> > version.
> > >>> > The solution? Launche the upgrade on the pool master without enter
> in
> > >>> > maintenance mode. And remember a consistent backup!!!
> > >>> >
> > >>> >
> > >>> >
> > >>> > Inviato dal mio dispositivo Samsung
> > >>> >
> > >>> >
> > >>> > -------- Messaggio originale --------
> > >>> > Da: Alessandro Caviglione <c.alessandro@gmail.com<mailto:
> > >>> > c.alessandro@gmail.com>>
> > >>> > Data: 01/01/2016 23:23 (GMT+01:00)
> > >>> > A: users@cloudstack.apache.org<ma...@cloudstack.apache.org>
> > >>> > Oggetto: A Story of a Failed XenServer Upgrade
> > >>> >
> > >>> > Hi guys,
> > >>> > I want to share my XenServer Upgrade adventure to understand if I
> did
> > >>> > domething wrong.
> > >>> > I upgraded CS from 4.4.4 to 4.5.2 without any issues, after all the
> > VRs
> > >>> > has been upgraded I start the upgrade process of my XenServer hosts
> > from
> > >>> > 6.2 to 6.5.
> > >>> > I do not already have PoolHA enabled so I followed this article:
> > >>> >
> > >>> >
> > >>>
> >
> http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
> > >>> >
> > >>> > The cluster consists of n° 3 XenServer hosts.
> > >>> >
> > >>> > First of all I added manage.xenserver.pool.master=false
> > >>> > to environment.properties file and restarted cloudstack-management
> > >>> service.
> > >>> >
> > >>> > After that I put in Maintenance Mode Pool Master host and, after
> all
> > VMs
> > >>> > has been migrated, I Unmanaged the cluster.
> > >>> > At this point all host appears as "Disconnected" from CS interface
> > and
> > >>> > this should be right.
> > >>> > Now I put XenServer 6.5 CD in the host in Maintenance Mode and
> start
> > a
> > >>> > in-place upgrade.
> > >>> > After XS6.5 has been installed, I istalled the 6.5SP1 and reboot
> > again.
> > >>> > At this point I expected that, after click on Manage Cluster on CS,
> > all
> > >>> > the hosts come back to "UP" and I could go ahead upgrading the
> other
> > >>> > hosts....
> > >>> >
> > >>> > But, instead of that, all the hosts still appears as
> "Disconnected",
> > I
> > >>> > tried a couple of cloudstack-management service restart without
> > success.
> > >>> >
> > >>> > So I opened XenCenter and connect to Pool Master I upgraded to 6.5
> > and it
> > >>> > appear in Maintenance Mode, so I tried to Exit from Maint Mode but
> I
> > got
> > >>> > the error: The server is still booting
> > >>> >
> > >>> > After some investigation, I run the command "xe task-list" and this
> > is
> > >>> the
> > >>> > result:
> > >>> >
> > >>> > uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> > >>> > name-label ( RO): VM.set_memory_dynamic_range name-description (
> RO):
> > >>> > status ( RO): pending
> > >>> > progress ( RO): 0.000
> > >>> >
> > >>> > I tried a couple of reboot but nothing changes.... so I decided to
> > shut
> > >>> > down the server, force raise a slave host to master with emergency
> > mode,
> > >>> > remove old server from CS and reboot CS.
> > >>> >
> > >>> > After that, I see my cluster up and running again, so I installed
> XS
> > >>> > 6.2SP1 on the "upgraded" host and added again to the cluster....
> > >>> >
> > >>> > So after an entire day of work, I'm in the same situation! :D
> > >>> >
> > >>> > Anyone can tell me if I made something wrong??
> > >>> >
> > >>> > Thank you very much!
> > >>> >
> >
>

R: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
I think you forgot the attachment...



Sent from my Samsung device


-------- Original message --------
From: Yiping Zhang <yz...@marketo.com>
Date: 08/01/2016 18:44 (GMT+01:00)
To: users@cloudstack.apache.org
Subject: Re: A Story of a Failed XenServer Upgrade


See attached pdf document. This is the final procedure we adopted after upgrading seven XenServer pools.

Yiping






Re: A Story of a Failed XenServer Upgrade

Posted by Yiping Zhang <yz...@marketo.com>.
See attached pdf document. This is the final procedure we adopted after upgrading seven XenServer pools.

Yiping





On 1/8/16, 2:20 AM, "Alessandro Caviglione" <c....@gmail.com> wrote:

>Hi Yiping,
>yes, thank you very much!!
>Please share the doc so I can try again the upgrade process and see if it
>was only a "unfortunate coincidence of events" or a wrong upgrade process.
>
>Thanks!
>
>On Fri, Jan 8, 2016 at 10:20 AM, Nux! <nu...@li.nux.ro> wrote:
>
>> Yiping,
>>
>> Why not make a blog post about it so everyone can benefit? :)
>>
>> Lucian
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> ----- Original Message -----
>> > From: "Yiping Zhang" <yz...@marketo.com>
>> > To: users@cloudstack.apache.org, aemneina@gmail.com
>> > Sent: Friday, 8 January, 2016 01:31:21
>> > Subject: Re: A Story of a Failed XenServer Upgrade
>>
>> > Hi, Alessandro
>> >
>> > Late to the thread.  Is this still an issue for you ?
>> >
>> > I went thru this process before and I have a step by step document that
>> I can
>> > share if you still need it.
>> >
>> > Yiping
>> >
>> >
>> >
>> >
>> > On 1/2/16, 4:43 PM, "Ahmad Emneina" <ae...@gmail.com> wrote:
>> >
>> >>Hi Alessandro,
>> >>Without seeing the logs, or DB, it will be hard to diagnose the issue.
>> I've
>> >>seen something similar in the past, where the XenServer host version isnt
>> >>getting updated in the DB, as part of the XS upgrade process. That caused
>> >>CloudStack to use the wrong hypervisor resource to try connecting back to
>> >>the XenServers... ending up in failure. If you could share sanitized
>> >>versions of your log and db, someone here might be able to give you the
>> >>necessary steps to get your cluster back under CloudStack control.
>> >>
>> >>On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
>> >>c.alessandro@gmail.com> wrote:
>> >>
>> >>> No guys,as the article wrote, my first action was to put in Maintenance
>> >>> Mode the Pool Master INSIDE CS; "It is vital that you upgrade the
>> XenServer
>> >>> Pool Master first before any of the Slaves.  To do so you need to
>> empty the
>> >>> Pool Master of all CloudStack VMs, and you do this by putting the Host
>> into
>> >>> Maintenance Mode within CloudStack to trigger a live migration of all
>> VMs
>> >>> to alternate Hosts"
>> >>>
>> >>> This is exactly what I've done and after the XS upgrade, no hosts was
>> able
>> >>> to communicate with CS and also with the upgraded host.
>> >>>
>> >>> Putting an host in Maint Mode within CS will trigger MM also on
>> XenServer
>> >>> host or just will move the VMs to other hosts?
>> >>>
>> >>> And again.... what's the best practices to upgrade a XS cluster?
>> >>>
>> >>> On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <
>> RBergsma@schubergphilis.com>
>> >>> wrote:
>> >>>
>> >>> > CloudStack should always do the migration of VM's not the Hypervisor.
>> >>> >
>> >>> > That's not true. You can safely migrate outside of CloudStack as the
>> >>> power
>> >>> > report will tell CloudStack where the vms live and the db gets
>> updated
>> >>> > accordingly. I do this a lot while patching and that works fine on
>> 6.2
>> >>> and
>> >>> > 6.5. I use both CloudStack 4.4.4 and 4.7.0.
>> >>> >
>> >>> > Regards, Remi

Re: A Story of a Failed XenServer Upgrade

Posted by Alessandro Caviglione <c....@gmail.com>.
Hi Yiping,
yes, thank you very much!!
Please share the doc so I can try the upgrade process again and see whether it
was only an "unfortunate coincidence of events" or a wrong upgrade process.

Thanks!

On Fri, Jan 8, 2016 at 10:20 AM, Nux! <nu...@li.nux.ro> wrote:

> Yiping,
>
> Why not make a blog post about it so everyone can benefit? :)
>
> Lucian

Re: A Story of a Failed XenServer Upgrade

Posted by Nux! <nu...@li.nux.ro>.
Yiping,

Why not make a blog post about it so everyone can benefit? :)

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

----- Original Message -----
> From: "Yiping Zhang" <yz...@marketo.com>
> To: users@cloudstack.apache.org, aemneina@gmail.com
> Sent: Friday, 8 January, 2016 01:31:21
> Subject: Re: A Story of a Failed XenServer Upgrade

> Hi, Alessandro
> 
> Late to the thread.  Is this still an issue for you ?
> 
> I went thru this process before and I have a step by step document that I can
> share if you still need it.
> 
> Yiping

Re: A Story of a Failed XenServer Upgrade

Posted by Yiping Zhang <yz...@marketo.com>.
Hi, Alessandro

Late to the thread.  Is this still an issue for you?

I went through this process before and I have a step-by-step document that I can share if you still need it.

Yiping




On 1/2/16, 4:43 PM, "Ahmad Emneina" <ae...@gmail.com> wrote:

>Hi Alessandro,
>Without seeing the logs, or DB, it will be hard to diagnose the issue. I've
>seen something similar in the past, where the XenServer host version isnt
>getting updated in the DB, as part of the XS upgrade process. That caused
>CloudStack to use the wrong hypervisor resource to try connecting back to
>the XenServers... ending up in failure. If you could share sanitized
>versions of your log and db, someone here might be able to give you the
>necessary steps to get your cluster back under CloudStack control.

Re: A Story of a Failed XenServer Upgrade

Posted by Ahmad Emneina <ae...@gmail.com>.
Hi Alessandro,
Without seeing the logs or DB, it will be hard to diagnose the issue. I've
seen something similar in the past, where the XenServer host version isn't
getting updated in the DB as part of the XS upgrade process. That caused
CloudStack to use the wrong hypervisor resource when trying to connect back to
the XenServers... ending in failure. If you could share sanitized
versions of your log and DB, someone here might be able to give you the
necessary steps to get your cluster back under CloudStack control.
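
One way to check for that condition (assuming direct MySQL access to the
management server; table and column names as in the 4.x "cloud" schema) is to
compare the hypervisor version CloudStack has stored against what each host
actually runs:

    -- hypothetical sanity check against the CloudStack database
    SELECT id, name, status, hypervisor_type, hypervisor_version
    FROM cloud.host
    WHERE type = 'Routing';

If hypervisor_version still reports 6.2 for an upgraded host, CloudStack may
load the wrong hypervisor resource when it tries to reconnect.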

On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
c.alessandro@gmail.com> wrote:

> No guys,as the article wrote, my first action was to put in Maintenance
> Mode the Pool Master INSIDE CS; "It is vital that you upgrade the XenServer
> Pool Master first before any of the Slaves.  To do so you need to empty the
> Pool Master of all CloudStack VMs, and you do this by putting the Host into
> Maintenance Mode within CloudStack to trigger a live migration of all VMs
> to alternate Hosts"
>
> This is exactly what I've done, and after the XS upgrade no host was able
> to communicate with CS, nor with the upgraded host.
>
> Does putting a host in Maint Mode within CS also trigger MM on the XenServer
> host, or does it just move the VMs to other hosts?
>
> And again... what's the best practice for upgrading a XS cluster?

Re: A Story of a Failed XenServer Upgrade

Posted by Rafael Weingärtner <ra...@gmail.com>.
Until you run "xe pool-designate-new-master", the master of the pool will
not change. The exception is if you have configured the HA features in your
XenServer pool/cluster; in that case a reboot of the master may trigger the
election of a new master.

Bottom line: today, the code does not change the master server of a
XenServer pool. The maintenance process of ACS does not change the master
of the cluster/XenServer pool.
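
For context, switching the pool master is an explicit operation on the
XenServer side. A minimal sketch of the relevant xe commands (the host UUID is
a placeholder; run on a pool member):

    # list hosts in the pool and note the UUID of the intended new master
    xe host-list params=uuid,name-label

    # promote the chosen host; pool metadata is updated on all members
    xe pool-designate-new-master host-uuid=<new-master-uuid>

    # emergency path, when the old master is down and unreachable:
    # run on a slave to make it claim mastership, then repoint the others
    xe pool-emergency-transition-to-master
    xe pool-recover-slaves

The emergency variant is essentially what Alessandro ended up doing; under
normal conditions only pool-designate-new-master should be needed.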

On Sat, Jan 2, 2016 at 9:36 PM, Davide Pala <da...@gesca.it> wrote:

> No. The upgrade must be done with a cold reboot, without maintenance mode.
> The XS pool master must be the master again when it boots.
>
>
>
> Sent from my Samsung device
>
>
> -------- Original message --------
> From: Rafael Weingärtner <ra...@gmail.com>
> Date: 03/01/2016 00:25 (GMT+01:00)
> To: users@cloudstack.apache.org
> Subject: Re: A Story of a Failed XenServer Upgrade
>
> That is true; it is not putting the host in maintenance mode on the
> XenServer side. Not just the master, but any XenServer host.
>
> The question is: should it? If so, we should open a Jira ticket.
>
> On Sat, Jan 2, 2016 at 9:18 PM, Davide Pala <da...@gesca.it> wrote:
>
> > So I don't know what CloudStack does on XenServer, and for this reason I
> > think CloudStack does not put the XenServer pool master into maintenance mode.
> >
> >
> >
> > Sent from my Samsung device
> >
> >
> > -------- Original message --------
> > From: Rafael Weingärtner <ra...@gmail.com>
> > Date: 02/01/2016 22:48 (GMT+01:00)
> > To: users@cloudstack.apache.org
> > Subject: Re: A Story of a Failed XenServer Upgrade
> >
> > Hi all,
> >
> > There is nothing better than looking at the source code.
> >
> > After the VM migration (or restart, for LXC?!), when the host is put in
> > maintenance mode, for XenServer it will remove a tag called “cloud”.
> >
> > On Sat, Jan 2, 2016 at 7:38 PM, Davide Pala <da...@gesca.it>
> wrote:
> >
> > > I don't know what maintenance mode does in CS, but if it also puts the
> > > pool master in maintenance, this article is wrong!
> > >
> > >
> > > Davide Pala
> > > Infrastructure Specialist
> > > Gesca srl
> > >
> > > Via degli Olmetti, 18
> > > 00060 Formello (Roma)
> > > Office:  +39 06 9040661
> > > Fax:     +39 06 90406666
> > > E-mail: davide.pala@gesca.it
> > > Web:    www.gesca.it<http://www.gesca.it>
> > >
> > >
> > >
> > >
> > > ________________________________________
> > > From: Alessandro Caviglione [c.alessandro@gmail.com]
> > > Sent: Saturday, 2 January 2016 22:27
> > > To: users@cloudstack.apache.org
> > > Subject: Re: A Story of a Failed XenServer Upgrade
> > >
> > > No guys, as the article says, my first action was to put the Pool
> > > Master in Maintenance Mode INSIDE CS: "It is vital that you upgrade the
> > > XenServer Pool Master first before any of the Slaves.  To do so you
> > > need to empty the Pool Master of all CloudStack VMs, and you do this by
> > > putting the Host into Maintenance Mode within CloudStack to trigger a
> > > live migration of all VMs to alternate Hosts"
> > >
> > > This is exactly what I've done, and after the XS upgrade no host was
> > > able to communicate with CS, nor with the upgraded host.
> > >
> > > Does putting a host in Maint Mode within CS also trigger MM on the
> > > XenServer host, or does it just move the VMs to other hosts?
> > >
> > > And again.... what are the best practices to upgrade a XS cluster?
> > >
> > > On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <
> > RBergsma@schubergphilis.com>
> > > wrote:
> > >
> > > > CloudStack should always do the migration of VM's not the Hypervisor.
> > > >
> > > > That's not true. You can safely migrate outside of CloudStack, as the
> > > > power report will tell CloudStack where the VMs live and the DB gets
> > > > updated accordingly. I do this a lot while patching, and that works
> > > > fine on 6.2 and 6.5. I use both CloudStack 4.4.4 and 4.7.0.
> > > >
> > > > Regards, Remi
> > > >
> > > >
> > > > Sent from my iPhone
> > > >
> > > > On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeterson@acentek.net> wrote:
> > > >
> > > > I don't use XenServer maintenance mode until after CloudStack has
> > > > put the Host in maintenance mode.
> > > >
> > > > When you initiate maintenance mode from the host rather than
> > > > CloudStack, the db does not know where the VMs are and your UUIDs
> > > > get jacked.
> > > >
> > > > CS is your brain, not the hypervisor.
> > > >
> > > > Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.
> > > > Upgrade.  Reboot.  Join Pool.  Remove Maintenance starting at the
> > > > hypervisor if needed, and then CS, and move on to the next Host.
> > > >
> > > > CloudStack should always do the migration of VMs, not the Hypervisor.
> > > >
> > > > Jeremy
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Davide Pala [mailto:davide.pala@gesca.it]
> > > > Sent: Friday, January 1, 2016 5:18 PM
> > > > To: users@cloudstack.apache.org
> > > > Subject: R: A Story of a Failed XenServer Upgrade
> > > >
> > > > Hi Alessandro. If you put the master in maintenance mode, you force
> > > > the election of a new pool master. So when you saw the upgraded host
> > > > as disconnected, you were connected to the new pool master, and the
> > > > host (as a pool member) cannot communicate with a pool master of an
> > > > earlier version. The solution? Launch the upgrade on the pool master
> > > > without entering maintenance mode. And remember a consistent backup!!!
> > > >
> > > >
> > > >
> > > > Sent from my Samsung device
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 
Rafael Weingärtner

R: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
No. The upgrade must be done with a cold reboot, without maintenance mode. The XS pool master must be the master again when it boots.



Sent from my Samsung device



Re: A Story of a Failed XenServer Upgrade

Posted by Rafael Weingärtner <ra...@gmail.com>.
That is true: it is not putting the host in maintenance. Not just the
master, but any XenServer host.

The question is, should it? If so, we should open a Jira ticket.




-- 
Rafael Weingärtner

R: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
So I don't know anything about cloud on XenServer, and for this reason I think CloudStack does not put the XenServer pool master in maintenance.



Sent from my Samsung device



Re: A Story of a Failed XenServer Upgrade

Posted by Rafael Weingärtner <ra...@gmail.com>.
Hi all,

There is nothing better than looking at the source code.

After the VM migration (or restart, for LXC?!), when the host is put in
maintenance mode, for XenServer it will remove a tag called “cloud”.
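[For reference, the tag mentioned above can be inspected from dom0 with the
xe CLI. This is a hedged sketch, not from the thread: "xs-host-1" is a
placeholder name-label.]

```shell
# Hypothetical sketch: check whether a host still carries the "cloud" tag
# that CloudStack uses to mark hosts it manages.

HOST_UUID="$(xe host-list name-label=xs-host-1 --minimal)"

# List the host's tags; entering CS maintenance mode removes "cloud".
xe host-param-get uuid="${HOST_UUID}" param-name=tags
```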

> > in-place upgrade.
> > After XS6.5 has been installed, I istalled the 6.5SP1 and reboot again.
> > At this point I expected that, after click on Manage Cluster on CS, all
> > the hosts come back to "UP" and I could go ahead upgrading the other
> > hosts....
> >
> > But, instead of that, all the hosts still appears as "Disconnected", I
> > tried a couple of cloudstack-management service restart without success.
> >
> > So I opened XenCenter and connect to Pool Master I upgraded to 6.5 and it
> > appear in Maintenance Mode, so I tried to Exit from Maint Mode but I got
> > the error: The server is still booting
> >
> > After some investigation, I run the command "xe task-list" and this is
> the
> > result:
> >
> > uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> > name-label ( RO): VM.set_memory_dynamic_range name-description ( RO):
> > status ( RO): pending
> > progress ( RO): 0.000
> >
> > I tried a couple of reboot but nothing changes.... so I decided to shut
> > down the server, force raise a slave host to master with emergency mode,
> > remove old server from CS and reboot CS.
> >
> > After that, I see my cluster up and running again, so I installed XS
> > 6.2SP1 on the "upgraded" host and added again to the cluster....
> >
> > So after an entire day of work, I'm in the same situation! :D
> >
> > Anyone can tell me if I made something wrong??
> >
> > Thank you very much!
> >
>



-- 
Rafael Weingärtner

RE: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
I don't know exactly what maintenance mode in CS does, but if it also puts the pool master into maintenance mode on XenServer, then this article is wrong!


Davide Pala
Infrastructure Specialist
Gesca srl

Via degli Olmetti, 18
00060 Formello (Roma)
Office:  +39 06 9040661
Fax:     +39 06 90406666
E-mail: davide.pala@gesca.it
Web:    www.gesca.it





Re: A Story of a Failed XenServer Upgrade

Posted by Alessandro Caviglione <c....@gmail.com>.
No guys, as the article says, my first action was to put the Pool Master in
Maintenance Mode INSIDE CS: "It is vital that you upgrade the XenServer
Pool Master first before any of the Slaves.  To do so you need to empty the
Pool Master of all CloudStack VMs, and you do this by putting the Host into
Maintenance Mode within CloudStack to trigger a live migration of all VMs
to alternate Hosts"

This is exactly what I did, and after the XS upgrade no host was able to
communicate with CS, nor with the upgraded host.

Does putting a host in Maintenance Mode within CS also trigger Maintenance
Mode on the XenServer host, or does it just move the VMs to other hosts?

And again.... what is the best practice for upgrading a XS cluster?


Re: A Story of a Failed XenServer Upgrade

Posted by Remi Bergsma <RB...@schubergphilis.com>.
> CloudStack should always do the migration of VMs, not the Hypervisor.

That's not true. You can safely migrate outside of CloudStack: the power report will tell CloudStack where the VMs live, and the DB gets updated accordingly. I do this a lot while patching, and it works fine on 6.2 and 6.5. I use both CloudStack 4.4.4 and 4.7.0.

Regards, Remi


Sent from my iPhone


RE: A Story of a Failed XenServer Upgrade

Posted by Jeremy Peterson <jp...@acentek.net>.
I don't use XenServer maintenance mode until after CloudStack has put the Host in maintenance mode.

When you initiate maintenance mode from the host rather than from CloudStack, the DB does not know where the VMs are and your UUIDs get jacked.

CS is your brains, not the hypervisor.

Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.  Upgrade.  Reboot.  Join Pool.  Remove Maintenance, starting at the hypervisor if needed and then CS, and move on to the next Host.

CloudStack should always do the migration of VMs, not the Hypervisor.
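That per-host sequence can be sketched as a dry-run shell loop. prepareHostForMaintenance and cancelHostMaintenance are real CloudStack API calls, but the CloudMonkey verb spelling and the host UUIDs below are assumptions to check against your own setup; drop the leading echo to actually execute.

```shell
# Dry-run sketch of the per-host upgrade loop (host UUIDs are placeholders).
for h in host-uuid-1 host-uuid-2 host-uuid-3; do
  echo "cloudmonkey prepare hostformaintenance id=$h"  # CS live-migrates the VMs off
  # ...then in XenCenter: maintenance mode, upgrade, reboot, rejoin the pool...
  echo "cloudmonkey cancel hostmaintenance id=$h"      # hand the host back to CS
done
```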

Jeremy



R: A Story of a Failed XenServer Upgrade

Posted by Davide Pala <da...@gesca.it>.
Hi Alessandro. If you put the master in maintenance mode you force the election of a new pool master. So when you saw the upgraded host as disconnected, you were connected to the new pool master, and the upgraded host (as a pool member) cannot communicate with a pool master of an earlier version. The solution? Launch the upgrade on the pool master without entering maintenance mode. And remember a consistent backup!!!
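If the pool really does end up headless (old master down, remaining hosts orphaned), XenServer's emergency xe commands can promote a surviving slave. A dry-run sketch; run the first command on the slave you want to promote, and remove the echoes to execute:

```shell
# Dry run: the echoes print the commands instead of running them.
echo "xe pool-emergency-transition-to-master"  # run ON the slave being promoted
echo "xe pool-recover-slaves"                  # then re-point the other slaves at it
```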



Sent from my Samsung device


-------- Original message --------
From: Alessandro Caviglione <c....@gmail.com>
Date: 01/01/2016 23:23 (GMT+01:00)
To: users@cloudstack.apache.org
Subject: A Story of a Failed XenServer Upgrade

Hi guys,
I want to share my XenServer upgrade adventure, to understand if I did
something wrong.
I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs had
been upgraded, I started the upgrade process of my XenServer hosts from 6.2 to
6.5.
I do not have Pool HA enabled, so I followed this article:
http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/

The cluster consists of 3 XenServer hosts.

First of all I added manage.xenserver.pool.master=false
to the environment.properties file and restarted the cloudstack-management service.
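That edit can be scripted idempotently. Here is a sketch run against a scratch copy; on a real management server, point PROPS at the actual environment.properties (the path varies by version and packaging, so check your install):

```shell
# Demo on a scratch file standing in for environment.properties.
PROPS=$(mktemp)
printf 'some.other.key=value\n' > "$PROPS"
if grep -q '^manage\.xenserver\.pool\.master=' "$PROPS"; then
  # Key already present: rewrite it in place.
  sed -i 's/^manage\.xenserver\.pool\.master=.*/manage.xenserver.pool.master=false/' "$PROPS"
else
  # Key absent: append it.
  echo 'manage.xenserver.pool.master=false' >> "$PROPS"
fi
grep '^manage\.xenserver\.pool\.master' "$PROPS"
# afterwards, on the real server: service cloudstack-management restart
```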

After that I put the Pool Master host into Maintenance Mode and, after all VMs
had been migrated, I unmanaged the cluster.
At this point all hosts appeared as "Disconnected" in the CS interface, which
should be right.
Then I put the XenServer 6.5 CD in the host in Maintenance Mode and started an
in-place upgrade.
After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
At this point I expected that, after clicking Manage Cluster in CS, all the
hosts would come back "Up" and I could go ahead upgrading the other hosts....
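The unmanage/manage toggle can also be driven through the API rather than the UI. A dry-run sketch, assuming the CloudMonkey CLI and the real updateCluster call (the cluster UUID is a placeholder):

```shell
cluster_id="cluster-uuid-here"   # placeholder: your cluster's UUID
echo "cloudmonkey update cluster id=$cluster_id managedstate=Unmanaged"
# ...upgrade the hosts while CloudStack leaves the cluster alone...
echo "cloudmonkey update cluster id=$cluster_id managedstate=Managed"
```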

But instead, all the hosts still appeared as "Disconnected"; I tried a couple
of cloudstack-management service restarts without success.

So I opened XenCenter and connected to the Pool Master I had upgraded to 6.5;
it appeared in Maintenance Mode, so I tried to Exit Maintenance Mode but got
the error: "The server is still booting".

After some investigation, I ran the command "xe task-list" and this is the
result:

uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
name-label ( RO): VM.set_memory_dynamic_range
name-description ( RO):
status ( RO): pending
progress ( RO): 0.000
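A stuck pending task like this can usually be cancelled with xe task-cancel. A dry-run sketch; on a live host the UUID list would come from `xe task-list status=pending params=uuid --minimal` instead of the hard-coded sample:

```shell
# Sample stands in for: xe task-list status=pending params=uuid --minimal
pending="72f48a56-1d24-1ca3-aade-091f1830e2f1"
for uuid in $(echo "$pending" | tr ',' ' '); do
  echo "xe task-cancel uuid=$uuid"   # drop the echo to actually cancel
done
```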

I tried a couple of reboots but nothing changed.... so I decided to shut
down the server, force-promote a slave host to master with emergency mode,
remove the old server from CS, and reboot CS.

After that, I saw my cluster up and running again, so I installed XS 6.2 SP1
on the "upgraded" host and added it back to the cluster....

So after an entire day of work, I'm in the same situation! :D

Can anyone tell me if I did something wrong??

Thank you very much!