Posted to users@cloudstack.apache.org by Greg Goodrich <GG...@ippathways.com> on 2019/12/12 22:34:26 UTC

Troubleshooting virtual router failing to start

We’ve been doing test runs in our staging environment of upgrading from 4.9.3 to 4.11, leveraging KVM on our 2 hosts. Recently, we downgraded back to 4.9.3 to re-run the process: we restored the database, the software on all machines, etc. However, the virtual routers are now failing to start, and I’m having difficulty tracking down exactly what is going wrong. I’ll admit to not being adept at working with CloudStack yet. I’ve tried restarting the network, both with and without cleanup, and I’ve also tried starting the routers directly via the Infrastructure -> Routers area. After a decent amount of time, the status is updated to Stopped. The router VMs never actually come online; checking virsh on both of our hosts shows only the console proxy and secondary storage VMs running. I also tried restarting the agents on both hosts, as well as the management server and libvirt.
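
For reference, the checks and restarts above were roughly the following (service names are from a typical systemd-based KVM install, so adjust for your environment):

  # on each KVM host
  virsh list --all                   # only the console proxy and SSVM show as running
  systemctl restart libvirtd
  systemctl restart cloudstack-agent

  # on the management server
  systemctl restart cloudstack-management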

Here are some log snippets that look concerning to me and may or may not help (they are not in order; one is from restarting the network, the other from starting a VR manually):

2019-12-12 18:00:34,557 DEBUG [c.c.a.t.Request] (AgentManager-Handler-10:null) (logid:) Seq 96-6863767307089348712: Processing:  { Ans: , MgmtId: 345051498372, via: 96, Ver: v1, Flags: 110, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: com.cloud.utils.exception.CloudRuntimeException: Can't find volume:cb1b9a18-6fdb-4613-8153-9595c2f70f58","wait":0}}] }

2019-12-12 15:44:26,623 DEBUG [c.c.a.t.Request] (AgentManager-Handler-4:null) (logid:) Seq 96-6863767307089348508: Processing:  { Ans: , MgmtId: 345051498372, via: 96, Ver: v1, Flags: 10, [{"com.cloud.agent.api.UnsupportedAnswer":{"result":false,"details":"Unsupported command issued: com.cloud.agent.api.StartCommand.  Are you sure you got the right type of server?","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}}] }
2019-12-12 15:44:26,623 DEBUG [c.c.a.t.Request] (Work-Job-Executor-20:ctx-a8bb9961 job-31381/job-31386 ctx-951bce3f) (logid:010e3619) Seq 96-6863767307089348508: Received:  { Ans: , MgmtId: 345051498372, via: 96(labcloudkvm01.ipplab.corp), Ver: v1, Flags: 10, { UnsupportedAnswer, Answer, Answer, Answer, Answer, Answer, Answer, Answer, Answer } }

Any hints on which path I should go down next would be greatly appreciated!

--
Greg Goodrich | IP Pathways
Senior Developer
3600 109th Street | Urbandale, IA 50322
p. 515.422.9346 | e. ggoodrich@ippathways.com


Re: Troubleshooting virtual router failing to start

Posted by Greg Goodrich <GG...@ippathways.com>.
Okay, this issue has been resolved. I’m not sure whether the upgrade itself caused it or a step we performed during the upgrade. It turns out that the system VM template for the VR was missing from primary storage, while the database still had an entry for it and believed it was still there. I think that disconnect between the database and primary storage is what caused the VR start to fail. We updated the 4.6 system VM template via the UI, and then everything worked again. Thanks for the help on this!
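
For anyone who hits the same thing, here is roughly how we could have spotted the mismatch (the table and column names below are from memory and may differ between versions, so treat this as a sketch rather than an exact recipe):

  # what the database thinks is on primary storage for system templates
  mysql -u cloud -p cloud -e "SELECT t.id, t.name, r.pool_id, r.install_path, r.download_state
    FROM vm_template t JOIN template_spool_ref r ON r.template_id = t.id
    WHERE t.type = 'SYSTEM';"

  # then verify the install_path actually exists on the primary storage pool
  ls -l /mnt/<primary-pool-mount>/<install_path>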

--
Greg Goodrich | IP Pathways
Senior Developer
3600 109th Street | Urbandale, IA 50322
p. 515.422.9346 | e. ggoodrich@ippathways.com



Re: Troubleshooting virtual router failing to start

Posted by Greg Goodrich <GG...@ippathways.com>.
Thanks Thomas, I will check on that.

Andrija, my supervisor and I had previously ’tested’ the upgrade to 4.11, and we decided we wanted to do it again, so we restored the 4.9 DB and software. I don’t recall whether we tore down all resources prior to the downgrade or not. Your point about downgrades is well taken. We likely need a better way to do this, maybe snapshots of all machines prior to attempting the upgrade.
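
Something like the following is probably the minimum we should capture before the next attempt (just a sketch, not a tested procedure):

  # on the management server, before touching any packages
  mysqldump -u root -p cloud > cloud-pre-upgrade.sql
  mysqldump -u root -p cloud_usage > cloud_usage-pre-upgrade.sql

  # plus snapshots of the management server VM itself, and a copy of each host's
  # agent configuration (/etc/cloudstack/agent/agent.properties)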

Thanks again!

--
Greg Goodrich | IP Pathways
Senior Developer
3600 109th Street | Urbandale, IA 50322
p. 515.422.9346 | e. ggoodrich@ippathways.com



Re: Troubleshooting virtual router failing to start

Posted by Andrija Panic <an...@gmail.com>.
Greg,

What do you mean by "***Recently***, we downgraded back to 4.9.3 to re-run
the processes"? Did you create a lot of new resources with the upgraded
env (4.11 code) and then restore the DB to 4.9.3? If so, you need to
delete ALL (I mean all) resources you might have created - any VMs, VRs,
volumes, networks (i.e. bridges or similar), etc. Otherwise you will hit all
kinds of issues when ACS 4.9.3 tries to create volumes/VMs with the same names,
create bridges/vnets with the same names, etc. (which already exist because they
were created by 4.11).
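
A quick way to spot leftovers on the KVM hosts (just a sketch, adjust to your naming):

  virsh list --all              # any i-*-VM / r-*-VM domains that 4.9.3 did not create
  virsh net-list --all
  ip link show type bridge      # stray cloudbr*/breth* bridges
  ip link | grep vnet           # stray vnet interfaces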

Rolling back after running the new code for a while and creating new resources is
a challenge to handle - and NOT recommended by any means.

Best,
Andrija


-- 

Andrija Panić

Re: Troubleshooting virtual router failing to start

Posted by Thomas Joseph <th...@gmail.com>.
Hello Greg,

Check whether any of your primary or secondary storage is in maintenance mode -
in particular the storage that should hold the volume UUID mentioned in the error
message: Can't find volume:cb1b9a18-6fdb-4613-8153-9595c2f70f58
Also, do you have the system VM template matching this version in place, and have
you updated the template version in the global settings?
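
You can check both roughly via cloudmonkey (a sketch; parameter names may differ between versions):

  list storagepools filter=name,state              # any pool in Maintenance?
  list imagestores filter=name,protocol
  list configurations name=router.template.kvm     # template name the VR is expected to use
  list templates templatefilter=all name=<that template name>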

With regards
Thomas
