Posted to dev@cloudstack.apache.org by Giri Prasad <g_...@yahoo.com> on 2014/04/19 14:12:31 UTC

unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Hello All,


 I am trying to set up CloudStack 4.3 on CentOS 6.5. After completing all the installation steps, pressing the <Launch> button in the CloudStack GUI causes the following error to appear repeatedly. Any comments on why this is happening?

Thanks in advance.

Regards,
Giri

tail -f /var/log/cloudstack/agent/agent.log

2014-04-19 14:46:43,735 WARN  [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-4:null) Timed out: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.X%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.1.66%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .  Output is: 
2014-04-19 14:46:49,775 WARN  [kvm.resource.LibvirtComputingResource] (Script-3:null) Interrupting script.
2014-04-19 14:46:49,776 WARN  [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-3:null) Timed out: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n s-2-VM -p %template=domP%type=secstorage%host=192.XXX.XX.X%port=8250%name=s-2-VM%zone=1%pod=1%guid=s-2-VM%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.XXX.XX.171%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.2.215%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.165%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%private.network.device=eth1%eth3ip=192.XXX.XX.161%eth3mask=255.255.255.0%storageip=192.XXX.XX.161%storagenetmask=255.255.255.0%storagegateway=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .  Output is: 
2014-04-19 14:47:04,740 WARN  [kvm.resource.LibvirtComputingResource] (Script-4:null) Interrupting script.
2014-04-19 14:47:04,741 WARN  [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-4:null) Timed out: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.X%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.1.66%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .  Output is: 
2014-04-19 14:47:10,782 WARN  [kvm.resource.LibvirtComputingResource] (Script-7:null) Interrupting script.
2014-04-19 14:47:10,782 WARN  [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-3:null) Timed out: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n s-2-VM -p %template=domP%type=secstorage%host=192.XXX.XX.X%port=8250%name=s-2-VM%zone=1%pod=1%guid=s-2-VM%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.XXX.XX.171%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.2.215%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.165%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%private.network.device=eth1%eth3ip=192.XXX.XX.161%eth3mask=255.255.255.0%storageip=192.XXX.XX.161%storagenetmask=255.255.255.0%storagegateway=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .  Output is: 



tail -f /var/log/cloudstack/management/management-server.log 

2014-04-19 14:48:04,970 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-e0e30b67) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:04,970 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-e0e30b67) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:04,971 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-e0e30b67) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:06,547 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-b48bc2bd) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:06,547 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-b48bc2bd) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:06,548 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-b48bc2bd) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:06,642 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-7776d406) Found 0 routers to update status. 
2014-04-19 14:48:06,644 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-7776d406) Found 0 networks to update RvR status. 
2014-04-19 14:48:07,933 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-bcbccc64) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:07,933 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-bcbccc64) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:07,933 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-bcbccc64) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:09,435 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-45236a49) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:09,435 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-45236a49) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:09,436 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-45236a49) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:09,729 DEBUG [c.c.a.ApiServlet] (catalina-exec-1:ctx-5750245e) ===START===  192.XXX.XX.X -- GET  command=listSystemVms&response=json&sessionkey=o4rxuG%2BgVnMwrfFXkNjHfnaNGe4%3D&_=1397899089726
2014-04-19 14:48:09,898 DEBUG [c.c.a.ApiServlet] (catalina-exec-1:ctx-5750245e ctx-6bcad27e) ===END===  192.XXX.XX.X -- GET  command=listSystemVms&response=json&sessionkey=o4rxuG%2BgVnMwrfFXkNjHfnaNGe4%3D&_=1397899089726
2014-04-19 14:48:10,920 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-c76d5680) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:10,920 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-c76d5680) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:10,920 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-c76d5680) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:11,363 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Resetting hosts suitable for reconnect
2014-04-19 14:48:11,365 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Completed resetting hosts suitable for reconnect
2014-04-19 14:48:11,366 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Acquiring hosts for clusters already owned by this management server
2014-04-19 14:48:11,366 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Completed acquiring hosts for clusters already owned by this management server
2014-04-19 14:48:11,367 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Acquiring hosts for clusters not owned by any management server
2014-04-19 14:48:11,367 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-b3be8305) Completed acquiring hosts for clusters not owned by any management server
2014-04-19 14:48:12,448 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-a25c3541) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:12,449 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-a25c3541) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:12,449 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-a25c3541) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:13,931 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-7cdc85ae) Detected management node left, id:2, nodeIP:192.XXX.XX.X
2014-04-19 14:48:13,931 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-7cdc85ae) Trying to connect to 192.XXX.XX.X
2014-04-19 14:48:13,931 INFO  [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-7cdc85ae) Management node 2 is detected inactive by timestamp but is pingable
2014-04-19 14:48:14,728 DEBUG [c.c.a.ApiServlet] (catalina-exec-10:ctx-5e99529e) ===START===  192.XXX.XX.X -- GET  command=listSystemVms&response=json&sessionkey=o4rxuG%2BgVnMwrfFXkNjHfnaNGe4%3D&_=1397899094725
2014-04-19 14:48:14,895 DEBUG [c.c.a.ApiServlet] (catalina-exec-10:ctx-5e99529e ctx-3d105d7f) ===END===  192.XXX.XX.X -- GET  command=listSystemVms&response=json&sessionkey=o4rxuG%2BgVnMwrfFXkNjHfnaNGe4%3D&_=1397899094725
2014-04-19 14:48:15,432 DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-76b37f47) Detected management node left, id:2, nodeIP:192.XXX.XX.X


# /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .
ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent - Connection refused


# virsh list --all
 Id    Name                           State
----------------------------------------------------
 15    v-1-VM                         running
 16    s-5-VM                         running

/etc/libvirt/libvirtd.conf
listen_tls=0
listen_tcp=1


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Marcus <sh...@gmail.com>.
Sorry, actually I see the 'connection refused' is just from your own
test after the fact. By that time the VM may already be shut down, so a
connection refused would make sense.

What happens if you do this:

'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' while it is running
stop the cloudstack agent
'virsh destroy v-1-VM'
'virsh create /tmp/v-1-VM.xml'
Then try connecting to that VM via VNC to watch it boot up, or run
that command manually, repeatedly. Does it time out?

In the end this may not mean much, because on CentOS 6.x that command
is retried over and over while the system VM is coming up anyway (in
other words, some failures are expected). It could be related, but it
could also be that the system VM is failing to come up for some other
reason, and this is just the symptom you noticed.

On Sun, Apr 20, 2014 at 11:25 PM, Marcus <sh...@gmail.com> wrote:
> You may want to look in the qemu log of the vm to see if there's
> something deeper going on, perhaps the qemu process is not fully
> starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
> something like that.
>
> On Sun, Apr 20, 2014 at 11:22 PM, Marcus <sh...@gmail.com> wrote:
>> No, it has nothing to do with ssh or libvirt daemon. It's the literal
>> unix socket that is created for virtio-serial communication when the
>> qemu process starts. The question is why the system is refusing access
>> to the socket. I assume this is being attempted as root.
>>
>> On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
>>> On 19.04.2014 15:24, Giri Prasad wrote:
>>>
>>>>
>>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>>> listen_tls=0
>>>> listen_tcp=1
>>>> #listen_addr = "192.XX.XX.X"
>>>> listen_addr = "0.0.0.0"
>>>>
>>>> #
>>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>>> -n v-1-VM -p
>>>>
>>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>>> .
>>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>>> Connection refused
>>>
>>>
>>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>>
>>> (kind of stabbing in the dark)
>>>
>>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro

Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Samuel Winchenbach <sw...@gmail.com>.
I seem to be having a similar issue with 4.3 on CentOS 6.5.

[root@cloudstack agent]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     s-1-VM                         running
 4     v-2-VM                         running


/var/log/cloudstack/agent/agent.log:

2014-04-25 09:59:05,648 WARN  [kvm.resource.LibvirtComputingResource]
(Script-5:null) Interrupting script.
2014-04-25 09:59:05,649 WARN  [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-5:null) Timed out:
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n
s-1-VM -p
%template=domP%type=secstorage%host=192.168.0.2%port=8250%name=s-1-VM%zone=1%pod=1%guid=s-1-VM%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.168.0.37%eth2mask=255.255.255.0%gateway=192.168.0.1%eth0ip=169.254.1.73%eth0mask=255.255.0.0%eth1ip=192.168.0.9%eth1mask=255.255.255.0%mgmtcidr=
192.168.0.0/24%localgw=192.168.0.1%private.network.device=eth1%eth3ip=192.168.0.7%eth3mask=255.255.255.0%storageip=192.168.0.7%storagenetmask=255.255.255.0%storagegateway=192.168.0.1%internaldns1=8.8.8.8%internaldns2=8.8.4.4%dns1=8.8.8.8%dns2=8.8.4.4.
 Output is:
.
.
.
2014-04-25 10:26:39,752 WARN  [kvm.resource.LibvirtComputingResource]
(Script-10:null) Interrupting script.
2014-04-25 10:26:39,752 WARN  [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-1:null) Timed out:
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n
v-2-VM -p
%template=domP%type=consoleproxy%host=192.168.0.2%port=8250%name=v-2-VM%zone=1%pod=1%guid=Proxy.2%proxy_vm=2%disable_rp_filter=true%eth2ip=192.168.0.119%eth2mask=255.255.255.0%gateway=192.168.0.1%eth0ip=169.254.0.16%eth0mask=255.255.0.0%eth1ip=192.168.0.5%eth1mask=255.255.255.0%mgmtcidr=
192.168.0.0/24%localgw=192.168.0.1%internaldns1=8.8.8.8%internaldns2=8.8.4.4%dns1=8.8.8.8%dns2=8.8.4.4.
 Output is:


[root@cloudstack libvirt]# cat libvirtd.log
2014-04-25 13:56:20.205+0000: 3435: info : libvirt version: 0.10.2,
package: 29.el6_5.7 (CentOS BuildSystem <http://bugs.centos.org>,
2014-04-07-07:42:04, c6b9.bsys.dev.centos.org)
2014-04-25 13:56:20.205+0000: 3435: warning : virSecurityManagerNew:148 :
Configured security driver "none" disables default policy to create
confined guests
2014-04-25 13:58:11.251+0000: 3425: warning : qemuSetupCgroup:381 : Could
not autoset a RSS limit for domain s-1-VM
2014-04-25 13:58:11.300+0000: 3425: warning : qemuDomainObjTaint:1377 :
Domain id=1 name='s-1-VM' uuid=245fc4ef-3735-4ba7-bb9b-27cbe28ae103 is
tainted: high-privileges
2014-04-25 13:58:14.022+0000: 3428: warning : qemuSetupCgroup:381 : Could
not autoset a RSS limit for domain v-2-VM
2014-04-25 13:58:14.065+0000: 3428: warning : qemuDomainObjTaint:1377 :
Domain id=2 name='v-2-VM' uuid=e36968ea-2ef2-4046-a1a8-59947e929d08 is
tainted: high-privileges
2014-04-25 14:16:32.456+0000: 3424: error : qemuMonitorIO:614 : internal
error End of file from monitor
2014-04-25 14:16:35.264+0000: 3424: error : qemuMonitorIO:614 : internal
error End of file from monitor
2014-04-25 14:16:38.312+0000: 3425: warning : qemuSetupCgroup:381 : Could
not autoset a RSS limit for domain s-1-VM
2014-04-25 14:16:38.362+0000: 3425: warning : qemuDomainObjTaint:1377 :
Domain id=3 name='s-1-VM' uuid=245fc4ef-3735-4ba7-bb9b-27cbe28ae103 is
tainted: high-privileges
2014-04-25 14:16:39.813+0000: 3428: warning : qemuSetupCgroup:381 : Could
not autoset a RSS limit for domain v-2-VM
2014-04-25 14:16:39.858+0000: 3428: warning : qemuDomainObjTaint:1377 :
Domain id=4 name='v-2-VM' uuid=e36968ea-2ef2-4046-a1a8-59947e929d08 is
tainted: high-privileges

[root@cloudstack libvirt]# cat qemu/s-1-VM.log
2014-04-25 13:58:11.299+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/libexec/qemu-kvm -name s-1-VM -S -M rhel6.5.0 -enable-kvm -m 512
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
245fc4ef-3735-4ba7-bb9b-27cbe28ae103 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/s-1-VM.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
file=/mnt/8964d2eb-e9c8-3d55-b0b7-30c20078cf5e/1a9eff0c-51fa-46af-aa34-7a22e018ffb6,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-drive
file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:01:49,bus=pci.0,addr=0x3
-netdev tap,fd=27,id=hostnet1,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=06:6b:04:00:00:07,bus=pci.0,addr=0x4
-netdev tap,fd=29,id=hostnet2,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet2,id=net2,mac=06:0f:de:00:00:23,bus=pci.0,addr=0x5
-netdev tap,fd=31,id=hostnet3,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet3,id=net3,mac=06:2f:62:00:00:05,bus=pci.0,addr=0x6
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/s-1-VM.agent,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=s-1-VM.vport
-device usb-tablet,id=input0 -vnc 192.168.0.2:0,password -vga cirrus
Domain id=1 is tainted: high-privileges
char device redirected to /dev/pts/1
qemu: terminating on signal 15 from pid 3424
2014-04-25 14:16:32.456+0000: shutting down
2014-04-25 14:16:38.362+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/libexec/qemu-kvm -name s-1-VM -S -M rhel6.5.0 -enable-kvm -m 512
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
245fc4ef-3735-4ba7-bb9b-27cbe28ae103 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/s-1-VM.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
file=/mnt/8964d2eb-e9c8-3d55-b0b7-30c20078cf5e/1a9eff0c-51fa-46af-aa34-7a22e018ffb6,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-drive
file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:01:a6,bus=pci.0,addr=0x3
-netdev tap,fd=30,id=hostnet1,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=06:d7:98:00:00:05,bus=pci.0,addr=0x4
-netdev tap,fd=32,id=hostnet2,vhost=on,vhostfd=33 -device
virtio-net-pci,netdev=hostnet2,id=net2,mac=06:0f:de:00:00:23,bus=pci.0,addr=0x5
-netdev tap,fd=34,id=hostnet3,vhost=on,vhostfd=35 -device
virtio-net-pci,netdev=hostnet3,id=net3,mac=06:fe:4a:00:00:07,bus=pci.0,addr=0x6
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/s-1-VM.agent,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=s-1-VM.vport
-device usb-tablet,id=input0 -vnc 192.168.0.2:0,password -vga cirrus
Domain id=3 is tainted: high-privileges
char device redirected to /dev/pts/1


[root@cloudstack libvirt]# cat qemu/v-2-VM.log
2014-04-25 13:58:14.065+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/libexec/qemu-kvm -name v-2-VM -S -M rhel6.5.0 -enable-kvm -m 1024
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
e36968ea-2ef2-4046-a1a8-59947e929d08 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-2-VM.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
file=/mnt/8964d2eb-e9c8-3d55-b0b7-30c20078cf5e/779f4f62-0853-4a4c-9579-73097a6f635b,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-drive
file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:01:9e,bus=pci.0,addr=0x3
-netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=06:62:32:00:00:02,bus=pci.0,addr=0x4
-netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:e8:00:00:75,bus=pci.0,addr=0x5
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-2-VM.agent,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-2-VM.vport
-device usb-tablet,id=input0 -vnc 192.168.0.2:1,password -vga cirrus
Domain id=2 is tainted: high-privileges
char device redirected to /dev/pts/2
qemu: terminating on signal 15 from pid 3424
2014-04-25 14:16:35.264+0000: shutting down
2014-04-25 14:16:39.858+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/libexec/qemu-kvm -name v-2-VM -S -M rhel6.5.0 -enable-kvm -m 1024
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
e36968ea-2ef2-4046-a1a8-59947e929d08 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-2-VM.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
file=/mnt/8964d2eb-e9c8-3d55-b0b7-30c20078cf5e/779f4f62-0853-4a4c-9579-73097a6f635b,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-drive
file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:00:10,bus=pci.0,addr=0x3
-netdev tap,fd=31,id=hostnet1,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=06:d8:dc:00:00:03,bus=pci.0,addr=0x4
-netdev tap,fd=33,id=hostnet2,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:e8:00:00:75,bus=pci.0,addr=0x5
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-2-VM.agent,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-2-VM.vport
-device usb-tablet,id=input0 -vnc 192.168.0.2:1,password -vga cirrus
Domain id=4 is tainted: high-privileges
char device redirected to /dev/pts/2


On Mon, Apr 21, 2014 at 4:24 AM, Giri Prasad <g_...@yahoo.com> wrote:

>
>
> Thanks for the comment. The details are as:
>
> virsh -c qemu+tcp://192.XXX.XX.X/system
> Welcome to virsh, the virtualization interactive terminal.
>
> Type:  'help' for help with commands
>        'quit' to quit
>
> virsh # list --all
>  Id    Name                           State
> ----------------------------------------------------
>  1     v-1-VM                         running
>  2     s-185-VM                       running
>
> virsh # exit
>
>
> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p
> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
> .
> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
> Connection refused
>
>
> /var/log/libvirt/qemu/v-1-VM.log
>
> 2014-04-21 07:43:50.503+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
> /usr/libexec/qemu-kvm -name v-1-VM -S -M rhel6.5.0 -enable-kvm -m 1024
> -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
> 50cc182f-f4d3-4658-bdd0-ecfc688711d9 -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-1-VM.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
> file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/6733480c-13f6-4967-98ad-79503a98c376,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
> -drive
> file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
> -device
>  ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:01:ae,bus=pci.0,addr=0x3
> -netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet1,id=net1,mac=06:3a:4c:00:00:01,bus=pci.0,addr=0x4
> -netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:ac:00:00:0e,bus=pci.0,addr=0x5
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-1-VM.agent,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-1-VM.vport
> -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:0,password -vga cirrus
> Domain id=1 is tainted: high-privileges
> char device redirected to /dev/pts/2
> qemu: terminating on signal 15 from pid 2893
> 2014-04-21 08:02:12.453+0000: shutting down
> 2014-04-21 08:02:18.195+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
> /usr/libexec/qemu-kvm -name v-1-VM -S -M rhel6.5.0 -enable-kvm -m 1024
> -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
> 50cc182f-f4d3-4658-bdd0-ecfc688711d9 -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-1-VM.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
> file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/6733480c-13f6-4967-98ad-79503a98c376,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
> -drive
> file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
> -device
>  ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:02:a1,bus=pci.0,addr=0x3
> -netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet1,id=net1,mac=06:bb:aa:00:00:01,bus=pci.0,addr=0x4
> -netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:ac:00:00:0e,bus=pci.0,addr=0x5
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-1-VM.agent,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-1-VM.vport
> -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:0,password -vga cirrus
> Domain id=3 is tainted: high-privileges
> char device redirected to /dev/pts/2
>
>
> /var/log/libvirt/qemu/s-185-VM.log
>
> Domain id=10 is tainted: high-privileges
> char device redirected to /dev/pts/2
> 2014-04-21 07:43:50.937+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
> /usr/libexec/qemu-kvm -name s-185-VM -S -M rhel6.5.0 -enable-kvm -m 512
> -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
> d7eaa38a-b967-49fa-a356-07918f9d70d3 -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/s-185-VM.monitor,server,nowait

Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Giri Prasad <g_...@yahoo.com>.

Thanks for the comment. The details are as follows:

virsh -c qemu+tcp://192.XXX.XX.X/system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 1     v-1-VM                         running
 2     s-185-VM                       running

virsh # exit


/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .
ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent - Connection refused
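
The -p argument above packs the patch data as %-separated key=value pairs. As a hedged illustration of how such a payload decodes (a hypothetical helper, not the actual CloudStack script):

```python
def parse_patch_payload(payload):
    """Split a %-delimited key=value payload, as passed to
    patchviasocket.pl via -p, into a dict.
    Hypothetical helper for illustration only."""
    return dict(kv.split("=", 1) for kv in payload.strip("%").split("%") if kv)
```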


/var/log/libvirt/qemu/v-1-VM.log

2014-04-21 07:43:50.503+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name v-1-VM -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 50cc182f-f4d3-4658-bdd0-ecfc688711d9 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-1-VM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/6733480c-13f6-4967-98ad-79503a98c376,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:01:ae,bus=pci.0,addr=0x3 -netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=06:3a:4c:00:00:01,bus=pci.0,addr=0x4 -netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:ac:00:00:0e,bus=pci.0,addr=0x5 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-1-VM.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-1-VM.vport -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:0,password -vga cirrus
Domain id=1 is tainted: high-privileges
char device redirected to /dev/pts/2
qemu: terminating on signal 15 from pid 2893
2014-04-21 08:02:12.453+0000: shutting down
2014-04-21 08:02:18.195+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name v-1-VM -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 50cc182f-f4d3-4658-bdd0-ecfc688711d9 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-1-VM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/6733480c-13f6-4967-98ad-79503a98c376,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:02:a1,bus=pci.0,addr=0x3 -netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=06:bb:aa:00:00:01,bus=pci.0,addr=0x4 -netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:ac:00:00:0e,bus=pci.0,addr=0x5 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-1-VM.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-1-VM.vport -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:0,password -vga cirrus
Domain id=3 is tainted: high-privileges
char device redirected to /dev/pts/2


/var/log/libvirt/qemu/s-185-VM.log

Domain id=10 is tainted: high-privileges
char device redirected to /dev/pts/2
2014-04-21 07:43:50.937+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name s-185-VM -S -M rhel6.5.0 -enable-kvm -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid d7eaa38a-b967-49fa-a356-07918f9d70d3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/s-185-VM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/ec01b3b6-a537-48ea-9a19-8a8ed8dd41a6,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:02:c8,bus=pci.0,addr=0x3 -netdev tap,fd=31,id=hostnet1,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=06:00:e4:00:00:02,bus=pci.0,addr=0x4 -netdev tap,fd=33,id=hostnet2,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=06:ac:b4:00:00:0c,bus=pci.0,addr=0x5 -netdev tap,fd=35,id=hostnet3,vhost=on,vhostfd=36 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=06:92:fe:00:00:04,bus=pci.0,addr=0x6 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/s-185-VM.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=s-185-VM.vport -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:1,password -vga cirrus
Domain id=2 is tainted: high-privileges
char device redirected to /dev/pts/3
qemu: terminating on signal 15 from pid 2893
2014-04-21 08:02:11.809+0000: shutting down


/var/log/messages
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #17 vnet6, fe80::fc92:feff:fe00:4#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #16 vnet4, fe80::fc00:e4ff:fe00:2#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #15 vnet5, fe80::fcac:b4ff:fe00:c#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #14 vnet3, fe80::fc00:a9ff:fefe:2c8#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #13 vnet0, fe80::fc00:a9ff:fefe:1ae#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #12 vnet2, fe80::fc6f:acff:fe00:e#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #11 vnet1, fe80::fc3a:4cff:fe00:1#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:18 servpc kernel: device vnet0 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloud0: port 1(vnet0) entering forwarding state
Apr 21 13:32:18 servpc kernel: device vnet1 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloudbr0: port 2(vnet1) entering forwarding state
Apr 21 13:32:18 servpc kernel: device vnet2 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloudbr0: port 3(vnet2) entering forwarding state
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet0: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet2: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet1: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 18 vnet2 fe80::fc6f:acff:fe00:e UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 19 vnet1 fe80::fcbb:aaff:fe00:1 UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 20 vnet0 fe80::fc00:a9ff:fefe:2a1 UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:23 servpc kernel: device vnet3 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloud0: port 2(vnet3) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet4 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 4(vnet4) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet5 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 5(vnet5) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet6 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 6(vnet6) entering forwarding state
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet3: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet5: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet4: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet6: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 21 vnet4 fe80::fcb8:b8ff:fe00:2 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 22 vnet5 fe80::fc46:5cff:fe00:c UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 23 vnet3 fe80::fc00:a9ff:fefe:391 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 24 vnet6 fe80::fc98:82ff:fe00:7 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:33 servpc kernel: cloud0: port 1(vnet0) entering forwarding state
Apr 21 13:32:33 servpc kernel: cloudbr0: port 2(vnet1) entering forwarding state
Apr 21 13:32:33 servpc kernel: cloudbr0: port 3(vnet2) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloud0: port 2(vnet3) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 4(vnet4) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 5(vnet5) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 6(vnet6) entering forwarding state


/var/log/cloudstack/management/management-server.log
2014-04-21 13:44:35,624 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-1:null) Ping from 1
2014-04-21 13:44:42,166 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-9ae38372) VmStatsCollector is running...
2014-04-21 13:44:44,624 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-7e2d9fd6) StorageCollector is running...
2014-04-21 13:44:44,630 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-7e2d9fd6) There is no secondary storage VM for secondary storage host nfs://192.XXX.XX.5/export/secondary
2014-04-21 13:44:44,704 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-7e2d9fd6) Seq 1-261488720: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
2014-04-21 13:44:57,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-3b69a7f3) Found 0 routers to update status. 
2014-04-21 13:44:57,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-3b69a7f3) Found 0 networks to update RvR status. 
2014-04-21 13:45:01,906 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-1ed2eef8) HostStatsCollector is running...
2014-04-21 13:45:02,503 DEBUG [c.c.a.t.Request] (StatsCollector-1:ctx-1ed2eef8) Seq 1-261488721: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
2014-04-21 13:45:27,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0350d1df) Found 0 routers to update status. 
2014-04-21 13:45:27,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0350d1df) Found 0 networks to update RvR status. 
2014-04-21 13:45:32,113 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Resetting hosts suitable for reconnect
2014-04-21 13:45:32,115 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed resetting hosts suitable for reconnect
2014-04-21 13:45:32,115 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Acquiring hosts for clusters already owned by this management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed acquiring hosts for clusters already owned by this management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Acquiring hosts for clusters not owned by any management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed acquiring hosts for clusters not owned by any management server
2014-04-21 13:45:35,638 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-4:null) Ping from 1
2014-04-21 13:45:42,171 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-af582c81) VmStatsCollector is running...
2014-04-21 13:45:44,704 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-4afb93cc) StorageCollector is running...
2014-04-21 13:45:44,709 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-4afb93cc) There is no secondary storage VM for secondary storage host nfs://192.XXX.XX.5/export/secondary
2014-04-21 13:45:44,786 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-4afb93cc) Seq 1-261488722: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
2014-04-21 13:45:57,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-f43718c1) Found 0 routers to update status. 
2014-04-21 13:45:57,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-f43718c1) Found 0 networks to update RvR status. 
2014-04-21 13:46:02,503 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-a4e05182) HostStatsCollector is running...
2014-04-21 13:46:03,106 DEBUG [c.c.a.t.Request] (StatsCollector-1:ctx-a4e05182) Seq 1-261488723: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
2014-04-21 13:46:27,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-b7ff11b3) Found 0 routers to update status. 
2014-04-21 13:46:27,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-b7ff11b3) Found 0 networks to update RvR status. 


>From: Marcus <sh...@gmail.com>
>You may want to look in the qemu log of the vm to see if there's
>something deeper going on, perhaps the qemu process is not fully
>starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
>something like that.


On Sun, Apr 20, 2014 at 11:22 PM, Marcus <sh...@gmail.com> wrote:
> No, it has nothing to do with ssh or libvirt daemon. It's the literal
> unix socket that is created for virtio-serial communication when the
> qemu process starts. The question is why the system is refusing access
> to the socket. I assume this is being attempted as root.
>
> On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
>> On 19.04.2014 15:24, Giri Prasad wrote:
>>
>>>
>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>> listen_tls=0
>>> listen_tcp=1
>>> #listen_addr = "192.XX.XX.X"
>>> listen_addr = "0.0.0.0"
>>>
>>> #
>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>> -n v-1-VM -p
>>>
>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>> .
>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>> Connection refused
>>
>>
>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>
>> (kind of stabbing in the dark)
>>
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro

Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Marcus <sh...@gmail.com>.
Sorry, actually I see the 'connection refused' is just from your own test
after the fact. By that time the VM may already be shut down, so connection
refused would make sense.

What happens if you do this:

1. 'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' (while the VM is running)
2. Stop the cloudstack agent
3. 'virsh destroy v-1-VM'
4. 'virsh create /tmp/v-1-VM.xml'

Then try connecting to that VM via VNC to watch it boot up, or run that
command manually, repeatedly. Does it time out?

In the end this may not mean much, because on CentOS 6.x that command
is retried over and over while the system VM is coming up anyway (in
other words, some failures are expected). It could be related, but it
could also be that the system VM is failing to come up for some other
reason, and this is just the thing you noticed.
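
For context, what patchviasocket.pl does boils down to a plain connect-and-write on the VM's .agent unix socket. A minimal Python sketch of that step (hypothetical illustration, not the actual script; the socket path is whatever qemu was started with, e.g. /var/lib/libvirt/qemu/v-1-VM.agent):

```python
import socket

def send_patch_payload(socket_path, payload):
    """Connect to the qemu virtio-serial unix socket and write the
    %-delimited payload. Raises ConnectionRefusedError when no qemu
    process is listening on the socket -- the symptom seen above."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(socket_path)
        s.sendall(payload.encode())
    finally:
        s.close()
```

If this raises "Connection refused" while `virsh list` still shows the domain running, the socket file is likely stale or the qemu process is gone.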

On Sun, Apr 20, 2014 at 11:25 PM, Marcus <sh...@gmail.com> wrote:
> You may want to look in the qemu log of the vm to see if there's
> something deeper going on, perhaps the qemu process is not fully
> starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
> something like that.
>
> On Sun, Apr 20, 2014 at 11:22 PM, Marcus <sh...@gmail.com> wrote:
>> No, it has nothing to do with ssh or libvirt daemon. It's the literal
>> unix socket that is created for virtio-serial communication when the
>> qemu process starts. The question is why the system is refusing access
>> to the socket. I assume this is being attempted as root.
>>
>> On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
>>> On 19.04.2014 15:24, Giri Prasad wrote:
>>>
>>>>
>>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>>> listen_tls=0
>>>> listen_tcp=1
>>>> #listen_addr = "192.XX.XX.X"
>>>> listen_addr = "0.0.0.0"
>>>>
>>>> #
>>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>>> -n v-1-VM -p
>>>>
>>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>>> .
>>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>>> Connection refused
>>>
>>>
>>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>>
>>> (kind of stabbing in the dark)
>>>
>>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro

2014-04-21 08:02:12.453+0000: shutting down
2014-04-21 08:02:18.195+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name v-1-VM -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 50cc182f-f4d3-4658-bdd0-ecfc688711d9 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/v-1-VM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/6733480c-13f6-4967-98ad-79503a98c376,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:02:a1,bus=pci.0,addr=0x3 -netdev tap,fd=28,id=hostnet1,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=06:bb:aa:00:00:01,bus=pci.0,addr=0x4 -netdev tap,fd=30,id=hostnet2,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=06:6f:ac:00:00:0e,bus=pci.0,addr=0x5 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/v-1-VM.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=v-1-VM.vport -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:0,password -vga cirrus
Domain id=3 is tainted: high-privileges
char device redirected to /dev/pts/2


/var/log/libvirt/qemu/s-185-VM.log

Domain id=10 is tainted: high-privileges
char device redirected to /dev/pts/2
2014-04-21 07:43:50.937+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name s-185-VM -S -M rhel6.5.0 -enable-kvm -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid d7eaa38a-b967-49fa-a356-07918f9d70d3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/s-185-VM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive file=/mnt/b30c0f37-8876-3003-a552-61bcce73ea4c/ec01b3b6-a537-48ea-9a19-8a8ed8dd41a6,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/cloudstack-common/vms/systemvm.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0e:00:a9:fe:02:c8,bus=pci.0,addr=0x3 -netdev tap,fd=31,id=hostnet1,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=06:00:e4:00:00:02,bus=pci.0,addr=0x4 -netdev tap,fd=33,id=hostnet2,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=06:ac:b4:00:00:0c,bus=pci.0,addr=0x5 -netdev tap,fd=35,id=hostnet3,vhost=on,vhostfd=36 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=06:92:fe:00:00:04,bus=pci.0,addr=0x6 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/s-185-VM.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=s-185-VM.vport -device usb-tablet,id=input0 -vnc 192.XXX.XX.5:1,password -vga cirrus
Domain id=2 is tainted: high-privileges
char device redirected to /dev/pts/3
qemu: terminating on signal 15 from pid 2893
2014-04-21 08:02:11.809+0000: shutting down


/var/log/messages
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #17 vnet6, fe80::fc92:feff:fe00:4#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #16 vnet4, fe80::fc00:e4ff:fe00:2#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #15 vnet5, fe80::fcac:b4ff:fe00:c#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #14 vnet3, fe80::fc00:a9ff:fefe:2c8#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #13 vnet0, fe80::fc00:a9ff:fefe:1ae#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #12 vnet2, fe80::fc6f:acff:fe00:e#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: Deleting interface #11 vnet1, fe80::fc3a:4cff:fe00:1#123, interface stats: received=0, sent=0, dropped=0, active_time=1100 secs
Apr 21 13:32:14 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:18 servpc kernel: device vnet0 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloud0: port 1(vnet0) entering forwarding state
Apr 21 13:32:18 servpc kernel: device vnet1 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloudbr0: port 2(vnet1) entering forwarding state
Apr 21 13:32:18 servpc kernel: device vnet2 entered promiscuous mode
Apr 21 13:32:18 servpc kernel: cloudbr0: port 3(vnet2) entering forwarding state
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet0: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet2: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet1: couldn't determine device driver; ignoring...
Apr 21 13:32:18 servpc qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 18 vnet2 fe80::fc6f:acff:fe00:e UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 19 vnet1 fe80::fcbb:aaff:fe00:1 UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: Listen normally on 20 vnet0 fe80::fc00:a9ff:fefe:2a1 UDP 123
Apr 21 13:32:22 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:23 servpc kernel: device vnet3 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloud0: port 2(vnet3) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet4 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 4(vnet4) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet5 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 5(vnet5) entering forwarding state
Apr 21 13:32:23 servpc kernel: device vnet6 entered promiscuous mode
Apr 21 13:32:23 servpc kernel: cloudbr0: port 6(vnet6) entering forwarding state
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet3: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet5: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet4: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc NetworkManager[2163]: <warn> /sys/devices/virtual/net/vnet6: couldn't determine device driver; ignoring...
Apr 21 13:32:23 servpc qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 21 vnet4 fe80::fcb8:b8ff:fe00:2 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 22 vnet5 fe80::fc46:5cff:fe00:c UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 23 vnet3 fe80::fc00:a9ff:fefe:391 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: Listen normally on 24 vnet6 fe80::fc98:82ff:fe00:7 UDP 123
Apr 21 13:32:27 servpc ntpd[2465]: peers refreshed
Apr 21 13:32:33 servpc kernel: cloud0: port 1(vnet0) entering forwarding state
Apr 21 13:32:33 servpc kernel: cloudbr0: port 2(vnet1) entering forwarding state
Apr 21 13:32:33 servpc kernel: cloudbr0: port 3(vnet2) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloud0: port 2(vnet3) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 4(vnet4) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 5(vnet5) entering forwarding state
Apr 21 13:32:38 servpc kernel: cloudbr0: port 6(vnet6) entering forwarding state


/var/log/cloudstack/management/management-server.log
2014-04-21 13:44:35,624 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-1:null) Ping from 1
2014-04-21 13:44:42,166 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-9ae38372) VmStatsCollector is running...
2014-04-21 13:44:44,624 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-7e2d9fd6) StorageCollector is running...
2014-04-21 13:44:44,630 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-7e2d9fd6) There is no secondary storage VM for secondary storage host nfs://192.XXX.XX.5/export/secondary
2014-04-21 13:44:44,704 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-7e2d9fd6) Seq 1-261488720: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
2014-04-21 13:44:57,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-3b69a7f3) Found 0 routers to update status. 
2014-04-21 13:44:57,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-3b69a7f3) Found 0 networks to update RvR status. 
2014-04-21 13:45:01,906 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-1ed2eef8) HostStatsCollector is running...
2014-04-21 13:45:02,503 DEBUG [c.c.a.t.Request] (StatsCollector-1:ctx-1ed2eef8) Seq 1-261488721: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
2014-04-21 13:45:27,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0350d1df) Found 0 routers to update status. 
2014-04-21 13:45:27,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-0350d1df) Found 0 networks to update RvR status. 
2014-04-21 13:45:32,113 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Resetting hosts suitable for reconnect
2014-04-21 13:45:32,115 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed resetting hosts suitable for reconnect
2014-04-21 13:45:32,115 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Acquiring hosts for clusters already owned by this management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed acquiring hosts for clusters already owned by this management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Acquiring hosts for clusters not owned by any management server
2014-04-21 13:45:32,116 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:ctx-2bb3cb37) Completed acquiring hosts for clusters not owned by any management server
2014-04-21 13:45:35,638 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-4:null) Ping from 1
2014-04-21 13:45:42,171 DEBUG [c.c.s.StatsCollector] (StatsCollector-2:ctx-af582c81) VmStatsCollector is running...
2014-04-21 13:45:44,704 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-4afb93cc) StorageCollector is running...
2014-04-21 13:45:44,709 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-4afb93cc) There is no secondary storage VM for secondary storage host nfs://192.XXX.XX.5/export/secondary
2014-04-21 13:45:44,786 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-4afb93cc) Seq 1-261488722: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
2014-04-21 13:45:57,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-f43718c1) Found 0 routers to update status. 
2014-04-21 13:45:57,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-f43718c1) Found 0 networks to update RvR status. 
2014-04-21 13:46:02,503 DEBUG [c.c.s.StatsCollector] (StatsCollector-1:ctx-a4e05182) HostStatsCollector is running...
2014-04-21 13:46:03,106 DEBUG [c.c.a.t.Request] (StatsCollector-1:ctx-a4e05182) Seq 1-261488723: Received:  { Ans: , MgmtId: 266790436644351, via: 1, Ver: v1, Flags: 10, { GetHostStatsAnswer } }
2014-04-21 13:46:27,121 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-b7ff11b3) Found 0 routers to update status. 
2014-04-21 13:46:27,123 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:ctx-b7ff11b3) Found 0 networks to update RvR status. 


>From: Marcus <sh...@gmail.com>
>You may want to look in the qemu log of the vm to see if there's
>something deeper going on, perhaps the qemu process is not fully
>starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
>something like that.


On Sun, Apr 20, 2014 at 11:22 PM, Marcus <sh...@gmail.com> wrote:
> No, it has nothing to do with ssh or libvirt daemon. It's the literal
> unix socket that is created for virtio-serial communication when the
> qemu process starts. The question is why the system is refusing access
> to the socket. I assume this is being attempted as root.
>
> On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
>> On 19.04.2014 15:24, Giri Prasad wrote:
>>
>>>
>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>> listen_tls=0
>>> listen_tcp=1
>>> #listen_addr = "192.XX.XX.X"
>>> listen_addr = "0.0.0.0"
>>>
>>> #
>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>> -n v-1-VM -p
>>>
>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>> .
>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>> Connection refused
>>
>>
>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>
>> (kind of stabbing in the dark)
>>
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro

Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Marcus <sh...@gmail.com>.
You may want to look in the qemu log of the vm to see if there's
something deeper going on, perhaps the qemu process is not fully
starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
something like that.

On Sun, Apr 20, 2014 at 11:22 PM, Marcus <sh...@gmail.com> wrote:
> No, it has nothing to do with ssh or libvirt daemon. It's the literal
> unix socket that is created for virtio-serial communication when the
> qemu process starts. The question is why the system is refusing access
> to the socket. I assume this is being attempted as root.
>
> On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
>> On 19.04.2014 15:24, Giri Prasad wrote:
>>
>>>
>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>> listen_tls=0
>>> listen_tcp=1
>>> #listen_addr = "192.XX.XX.X"
>>> listen_addr = "0.0.0.0"
>>>
>>> #
>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>> -n v-1-VM -p
>>>
>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>> .
>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>> Connection refused
>>
>>
>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>
>> (kind of stabbing in the dark)
>>
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Marcus <sh...@gmail.com>.
No, it has nothing to do with ssh or libvirt daemon. It's the literal
unix socket that is created for virtio-serial communication when the
qemu process starts. The question is why the system is refusing access
to the socket. I assume this is being attempted as root.
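
The probe that fails above can be sketched in a few lines of Python. This is a hypothetical stand-in for what patchviasocket.pl does, not the CloudStack script itself: open the per-VM UNIX socket that qemu binds for the virtio-serial channel and write the %-delimited boot arguments. If the qemu process never created or bound that socket, connect() fails with exactly the "Connection refused" reported in this thread.

```python
import socket

def patch_via_socket(sock_path, payload, timeout=5.0):
    """Send a boot-args payload to a qemu virtio-serial UNIX socket.

    Hypothetical helper mirroring patchviasocket.pl: raises
    ConnectionRefusedError when nothing is listening on sock_path,
    which is the symptom discussed in this thread.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(sock_path)        # fails if qemu never bound the socket
        s.sendall(payload.encode())
    finally:
        s.close()
```

Running something like this as root against /var/lib/libvirt/qemu/v-1-VM.agent would reproduce the error whenever the socket has no listener.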

On Sat, Apr 19, 2014 at 9:58 AM, Nux! <nu...@li.nux.ro> wrote:
> On 19.04.2014 15:24, Giri Prasad wrote:
>
>>
>> # grep listen_ /etc/libvirt/libvirtd.conf
>> listen_tls=0
>> listen_tcp=1
>> #listen_addr = "192.XX.XX.X"
>> listen_addr = "0.0.0.0"
>>
>> #
>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>> -n v-1-VM -p
>>
>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>> .
>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>> Connection refused
>
>
> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>
> (kind of stabbing in the dark)
>
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Nux! <nu...@li.nux.ro>.
On 19.04.2014 15:24, Giri Prasad wrote:

> 
> # grep listen_ /etc/libvirt/libvirtd.conf
> listen_tls=0
> listen_tcp=1
> #listen_addr = "192.XX.XX.X"
> listen_addr = "0.0.0.0"
> 
> #
> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
> -n v-1-VM -p
> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
> .
> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
> Connection refused

Do you have "-l" or "--listen" as LIBVIRTD_ARGS in 
/etc/sysconfig/libvirtd?

(kind of stabbing in the dark)

-- 
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
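
For completeness, the check Nux! suggests can be automated; a throwaway sketch (hypothetical function, assuming the stock KEY="value" sysconfig format):

```python
import re

def libvirtd_has_listen(sysconfig_text):
    """Return True if a /etc/sysconfig/libvirtd snippet enables
    --listen (or -l) via LIBVIRTD_ARGS; commented lines are ignored."""
    for line in sysconfig_text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue
        m = re.match(r'LIBVIRTD_ARGS\s*=\s*"?([^"]*)"?', line)
        if m and re.search(r'(^|\s)(--listen|-l)(\s|$)', m.group(1)):
            return True
    return False
```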


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Giri Prasad <g_...@yahoo.com>.
>>>   I am trying to set up cloudstack 4.3 on centos 6.5. After


> 
>>> all the installation steps are over, and upon pressing the <Launch>
>>> button in the cloudstack gui, the following error comes out
>>> repeatedly. Any comments as to why is this happening?
> 
>> Hi,
> 
>> Is SELinux off or in permissive mode?
> 
>> Also, what does "virsh dumpxml v-1-VM | grep agent" say?
> 
> Hi,
> 
>  Thanks for your comment.
> 
> virsh dumpxml v-1-VM | grep agent
>       <source mode='bind' path='/var/lib/libvirt/qemu/v-1-VM.agent'/>
> SELINUX=permissive

>One thing that caught me by surprise - though the error does not
>indicate anything like this - is a missing openssh-clients package.
>Can you make sure you have it installed?

# rpm -qa | grep openssh
openssh-clients-5.3p1-94.el6.x86_64
openssh-5.3p1-94.el6.x86_64
openssh-askpass-5.3p1-94.el6.x86_64
openssh-server-5.3p1-94.el6.x86_64

# grep listen_ /etc/libvirt/libvirtd.conf
listen_tls=0
listen_tcp=1
#listen_addr = "192.XX.XX.X"
listen_addr = "0.0.0.0"

# /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n v-1-VM -p %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1 .
ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent - Connection refused


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Nux! <nu...@li.nux.ro>.
On 19.04.2014 14:40, Giri Prasad wrote:
>>>   I am trying to set up cloudstack 4.3 on centos 6.5. After
> 
>>> all the installation steps are over, and upon pressing the <Launch>
>>> button in the cloudstack gui, the following error comes out
>>> repeatedly. Any comments as to why is this happening?
> 
>> Hi,
> 
>> Is SELinux off or in permissive mode?
> 
>> Also, what does "virsh dumpxml v-1-VM | grep agent" say?
> 
> Hi,
> 
>  Thanks for your comment.
> 
> virsh dumpxml v-1-VM | grep agent
>       <source mode='bind' path='/var/lib/libvirt/qemu/v-1-VM.agent'/>
> SELINUX=permissive

One thing that caught me by surprise - though the error does not
indicate anything like this - is a missing openssh-clients package.
Can you make sure you have it installed?

Lucian

-- 
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Giri Prasad <g_...@yahoo.com>.
>>  I am trying to set up cloudstack 4.3 on centos 6.5. After

>> all the installation steps are over, and upon pressing the <Launch>
>> button in the cloudstack gui, the following error comes out
>> repeatedly. Any comments as to why is this happening?

> Hi,

> Is SELinux off or in permissive mode?

> Also, what does "virsh dumpxml v-1-VM | grep agent" say?

Hi,

 Thanks for your comment.

virsh dumpxml v-1-VM | grep agent
      <source mode='bind' path='/var/lib/libvirt/qemu/v-1-VM.agent'/>
SELINUX=permissive


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

Posted by Nux! <nu...@li.nux.ro>.
On 19.04.2014 13:12, Giri Prasad wrote:
> Hello All,
> 
> 
>  I am trying to set up cloudstack 4.3 on centos 6.5. After
> all the installation steps are over, and upon pressing the <Launch>
> button in the cloudstack gui, the following error comes out
> repeatedly. Any comments as to why is this happening?

Hi,

Is SELinux off or in permissive mode?

Also, what does "virsh dumpxml v-1-VM | grep agent" say?



-- 
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
