Posted to users@cloudstack.apache.org by Umair Azam <ua...@i2cinc.com> on 2014/03/11 00:25:38 UTC

Re: NFS machine not responding, timed out issue

Hi Carlos,

I see the following management server logs. What could be the possible 
reasons for these exceptions?

2014-03-11 08:05:47,985 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: 
/etc/xapi.d/plugins on 10.11.17.32 but trying anyway
2014-03-11 08:05:48,092 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: /opt/cloud/bin 
on 10.11.17.32 but trying anyway
2014-03-11 08:05:48,338 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: 
/etc/xapi.d/plugins on 10.11.17.32 but trying anyway
2014-03-11 08:05:48,675 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: /opt/cloud/bin 
on 10.11.17.32 but trying anyway
2014-03-11 08:05:48,759 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: /opt/cloud/bin 
on 10.11.17.32 but trying anyway
2014-03-11 08:05:48,843 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: 
/opt/xensource/packages/iso on 10.11.17.32 but trying anyway
2014-03-11 08:05:52,854 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: /opt/cloud/bin 
on 10.11.17.32 but trying anyway
2014-03-11 08:05:52,939 DEBUG [xen.resource.CitrixResourceBase] 
(DirectAgent-1:null) Unable to create destination path: 
/etc/xapi.d/plugins on 10.11.17.32 but trying anyway
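
(To rule out SSH or permission problems when the management server pushes its 
scripts to the host, I can run a quick check from the management server — a 
rough sketch, assuming root SSH access to 10.11.17.32 as in the logs above:)

  # Can the management server create the destination paths on the host?
  ssh root@10.11.17.32 "mkdir -p /etc/xapi.d/plugins /opt/cloud/bin && echo OK"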

Umair Azam

On 2/13/2014 9:22 AM, Carlos Reategui wrote:
> If you had it on the management server, then it would have been copied over to the hosts when you added them, and there is no need for you to do it now.
>
>> On Feb 12, 2014, at 5:46 PM, Umair Azam <ua...@i2cinc.com> wrote:
>>
>> Hi,
>>
>> Carlos, thanks for the follow-up, I really appreciate it. I have vhd-util placed at /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util on the management server but missed placing it on the XS host in /opt/xensource/bin. I probably missed, or couldn't find, the part of the CloudStack installation guide about placing vhd-util on Xen hosts at /opt/xensource/bin.
>>
>> I will do that right away and see if it works or not. Thanks again
>>
>> Umair Azam
>>
>>> On 2/13/2014 6:36 AM, Carlos Reategui wrote:
>>> Hi Umair (sorry missed the 'i' last time)
>>>
>>> Haven't been able to digest your logs but one common problem with XS is
>>> forgetting to get the specific vhd-util mentioned in the docs:
>>> section 4.5.3.3 of
>>> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html
>>> If you missed this, you can manually copy that vhd-util to the XS hosts
>>> to /opt/xensource/bin/
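>>>
>>> A rough sketch of that copy (using the host IP from your logs; repeat for
>>> each XS host):
>>>
>>>   # vhd-util is already on the management server per section 4.5.3.3:
>>>   scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util \
>>>       root@10.11.17.32:/opt/xensource/bin/
>>>   ssh root@10.11.17.32 "chmod +x /opt/xensource/bin/vhd-util"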
>>>
>>> Also, if you are using basic networking, don't forget to switch the XS hosts to
>>> bridge (xe-switch-network-backend bridge).
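>>>
>>> For example (a sketch; the change takes effect after a host reboot):
>>>
>>>   cat /etc/xensource/network.conf   # shows openvswitch or bridge
>>>   xe-switch-network-backend bridge
>>>   reboot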
>>>
>>> Finally one thing that has helped me is to disable iptables on the XS hosts
>>> (service iptables stop; chkconfig iptables off)
>>>
>>> regards,
>>> Carlos
>>>
>>>
>>> On Feb 12, 2014, at 5:13 PM, Umair Azam <ua...@i2cinc.com> wrote:
>>>
>>> Hi carlos,
>>>
>>> I have changed switch ports, cables, etc., and I am successfully able to mount
>>> and copy data. The logs below in the thread are the latest after this fix.
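>>>
>>> The test was roughly the following (using the storage IP and path from this
>>> setup):
>>>
>>>   mount -t nfs 10.11.17.31:/export/primary /mnt
>>>   dd if=/dev/zero of=/mnt/testfile bs=1M count=100   # push some data across
>>>   umount /mnt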
>>>
>>> Umair Azam
>>>
>>> On 2/13/2014 6:03 AM, Carlos Reategui wrote:
>>>
>>> Hi Umar,
>>> Did you verify your network per Shanker's suggestion?  You need to make
>>> sure the NIC flapping is fixed first before you can expect anything to run
>>> stably.  When you say you are able to mount it manually, did you also try
>>> copying data over that mount?  I had a similar problem once and it ended up
>>> being a bad cable.
>>> Regards,
>>> -Carlos
>>>
>>> On Feb 12, 2014, at 4:09 PM, Umair Azam <ua...@i2cinc.com> wrote:
>>>
>>> Thanks for the follow-up, Shanker. Please go through the details below and help
>>> me dig out the issue.
>>>
>>> I am using CloudStack 4.2.1 installed through yum and XenServer 6.2. I am
>>> getting very strange errors and I have no idea what's going on.
>>>
>>> Management server IP: 10.11.17.30
>>> Primary/secondary storage IP: 10.11.17.31
>>> Pod IP range: 10.11.17.100 to 10.11.17.112 (for system VMs)
>>> Guest IP range: 10.11.17.113 to 10.11.17.120
>>> Primary storage path: /export/primary
>>> Secondary storage path: /export/secondary
>>>
>>>
>>> *Logs from the management server* (I know I have posted lengthy logs, but it
>>> would be of great help if you could help me understand the issue):
>>>
>>> 2014-02-13 09:38:35,497 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-13:null) *Trying to connect to 169.254.1.149*
>>> 2014-02-13 09:38:35,542 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-21:null) ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248334219
>>> 2014-02-13 09:38:35,611 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-21:null) ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248334219
>>> 2014-02-13 09:38:38,931 DEBUG [cloud.server.StatsCollector]
>>> (StatsCollector-1:null) StorageCollector is running...
>>> 2014-02-13 09:38:38,934 INFO [storage.endpoint.DefaultEndPointSelector]
>>> (StatsCollector-1:null) *No running ssvm is found, so command will be sent
>>> to LocalHostEndPoint*
>>> 2014-02-13 09:38:38,939 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-37:null) Seq 1-535822363: Executing request
>>> 2014-02-13 09:38:39,184 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-37:null) Seq 1-535822363: Response Received:
>>> 2014-02-13 09:38:39,184 DEBUG [agent.transport.Request]
>>> (StatsCollector-1:null) Seq 1-535822363: Received:  { Ans: , MgmtId:
>>> 130189987379, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
>>> 2014-02-13 09:38:40,555 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-25:null) ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248339219
>>> 2014-02-13 09:38:40,624 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-25:null) ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248339219
>>> 2014-02-13 09:38:45,542 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-24:null) ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248344219
>>> 2014-02-13 09:38:45,609 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-24:null) ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248344219
>>> 2014-02-13 09:38:48,646 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-13:null) *Trying to connect to 169.254.1.149*
>>> 2014-02-13 09:38:50,542 DEBUG [cloud.api.ApiServlet] (catalina-exec-1:null)
>>> ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248349219
>>> 2014-02-13 09:38:50,609 DEBUG [cloud.api.ApiServlet] (catalina-exec-1:null)
>>> ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248349219
>>> 2014-02-13 09:38:55,542 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-23:null) ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248354219
>>> 2014-02-13 09:38:55,608 DEBUG [cloud.api.ApiServlet]
>>> (catalina-exec-23:null) ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248354219
>>> 2014-02-13 09:39:00,542 DEBUG [cloud.api.ApiServlet] (catalina-exec-3:null)
>>> ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248359215
>>> 2014-02-13 09:39:00,609 DEBUG [cloud.api.ApiServlet] (catalina-exec-3:null)
>>> ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248359215
>>> 2014-02-13 09:39:01,875 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-13:null) *Trying to connect to 169.254.1.149*
>>> 2014-02-13 09:39:03,284 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 routers to update status.
>>> 2014-02-13 09:39:03,285 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 networks to update RvR status.
>>> 2014-02-13 09:39:05,553 DEBUG [cloud.api.ApiServlet] (catalina-exec-2:null)
>>> ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248364215
>>> 2014-02-13 09:39:05,620 DEBUG [cloud.api.ApiServlet] (catalina-exec-2:null)
>>> ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248364215
>>> 2014-02-13 09:39:08,461 DEBUG [host.dao.HostDaoImpl] (ClusteredAgentManager
>>> Timer:null) Resetting hosts suitable for reconnect
>>>
>>>
>>> 2014-02-13 09:39:09,173 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-30:null) *Ping from 1*
>>> 2014-02-13 09:39:09,737 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-37:null) Seq 1-535822340: Executing request
>>> 2014-02-13 09:39:10,079 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-37:null) Seq 1-535822340: Response Received:
>>> 2014-02-13 09:39:10,079 DEBUG [agent.transport.Request]
>>> (DirectAgent-37:null) Seq 1-535822340: Processing:  { Ans: , MgmtId:
>>> 130189987379, via: 1, Ver: v1, Flags: 10,
>>> [{"com.cloud.agent.api.ClusterSyncAnswer":{"_clusterId":1,"_newStates":{},"_isExecuted":false,"result":true,"wait":0}}]
>>> }
>>> 2014-02-13 09:39:10,541 DEBUG [cloud.api.ApiServlet] (catalina-exec-5:null)
>>> ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248369214
>>> 2014-02-13 09:39:10,608 DEBUG [cloud.api.ApiServlet] (catalina-exec-5:null)
>>> ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248369214
>>> 2014-02-13 09:39:15,432 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-13:null) *Trying to connect to 169.254.1.149*
>>> 2014-02-13 09:39:15,542 DEBUG [cloud.api.ApiServlet] (catalina-exec-4:null)
>>> ===START===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248374215
>>> 2014-02-13 09:39:15,609 DEBUG [cloud.api.ApiServlet] (catalina-exec-4:null)
>>> ===END===  10.11.17.22 --
>>> GET command=listSystemVms&response=json&sessionkey=XDoj6Eke%2Brel612mP%2FBXLiaLwk8%3D&_=1392248374215
>>> 2014-02-13 09:39:18,368 DEBUG [cloud.server.StatsCollector]
>>> (StatsCollector-1:null) VmStatsCollector is running...
>>>
>>>
>>> 2014-02-13 09:39:52,595 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-13:null) *Ping command port succeeded for vm v-2-VM*
>>> 2014-02-13 09:39:52,595 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-13:null) Seq 1-535822360: Response Received:
>>> 2014-02-13 09:39:52,616 DEBUG [agent.transport.Request]
>>> (DirectAgent-13:null) Seq 1-535822360: Processing:  { Ans: , MgmtId:
>>> 130189987379, via: 1, Ver: v1, Flags: 110,
>>> [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":2,"name":"v-2-VM","bootloader":"PyGrub","type":"ConsoleProxy","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":1073741824,"maxRam":1073741824,"arch":"i686","os":"Debian
>>> GNU/Linux 7(32-bit)","bootArgs":" template=domP type=consoleproxy
>>> host=10.11.17.30 port=8250 name=v-2-VM premium=true zone=1 pod=1
>>> guid=Proxy.2 proxy_vm=2 disable_rp_filter=true eth2ip=10.11.17.114
>>> eth2mask=255.255.255.0 gateway=10.11.17.1 eth0ip=169.254.1.149
>>> eth0mask=255.255.0.0 eth1ip=10.11.17.107 eth1mask=255.255.255.0 mgmtcidr=
>>> 10.11.17.0/24 localgw=10.11.17.1 internaldns1=192.168.150.50
>>> internaldns2=192.168.150.51 dns1=8.8.8.8
>>> dns2=8.8.4.4","rebootOnCrash":false,"enableHA":false,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"12708af7c4b48136"
>>>
>>> 2014-02-13 09:39:52,616 DEBUG [agent.manager.AgentAttache]
>>> (DirectAgent-13:null) Seq 1-535822361: *Sending now.  is current sequence.*
>>> 2014-02-13 09:39:52,620 DEBUG [agent.transport.Request]
>>> (DirectAgent-13:null) Seq 1-535822361: Executing:  { Cmd , MgmtId:
>>> 130189987379, via: 1
>>>
>>> 2014-02-13 09:39:52,621 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-45:null) Seq 1-535822361: Executing request
>>> 2014-02-13 09:39:52,694 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) 1. *The VM s-1-VM is in Starting state*.
>>> 2014-02-13 09:39:52,723 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) Created VM a0ae11c8-af19-1acb-4998-a7de1457a886 for
>>> s-1-VM
>>>
>>> 2014-02-13 09:39:52,761 DEBUG [cloud.vm.VirtualMachineManagerImpl]
>>> (consoleproxy-1:null) *Start completed for VM VM[ConsoleProxy|v-2-VM]*
>>> 2014-02-13 09:39:52,761 INFO [cloud.consoleproxy.ConsoleProxyManagerImpl]
>>> (consoleproxy-1:null) *Console proxy v-2-VM is started*
>>> 2014-02-13 09:39:52,764 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) *Creating VIF for s-1-VM on nic
>>> [Nic:Guest-10.11.17.113-vlan://untagged]*
>>> 2014-02-13 09:39:52,777 DEBUG [cloud.consoleproxy.ConsoleProxyManagerImpl]
>>> (consoleproxy-1:null) Zone 1 is ready to launch console proxy
>>> 2014-02-13 09:39:52,782 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) Created a vif f61262e1-f848-25e2-6c74-338deb0efe59 on
>>> 2
>>> 2014-02-13 09:39:52,782 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) *Creating VIF for s-1-VM on nic
>>> [Nic:Control-169.254.1.44-null]*
>>>
>>> 2014-02-13 09:40:01,053 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) 2. *The VM s-1-VM is in Running state.*
>>> 2014-02-13 09:40:01,275 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) *Ping command port, 169.254.1.44:3922*
>>> 2014-02-13 09:40:01,365 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) *Trying to connect to 169.254.1.44*
>>> 2014-02-13 09:40:01,745 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-7:null) SeqA 2-2: Processing Seq 2-2:  { Cmd ,
>>> MgmtId: -1, via: 2, Ver: v1, Flags:
>>> 11,
>>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n
>>> \"connections\": []\n}","wait":0}}] }
>>>
>>>
>>> 2014-02-13 09:40:32,208 DEBUG [cloud.consoleproxy.ConsoleProxyManagerImpl]
>>> (consoleproxy-1:null) *Zone 1 is ready to launch console proxy*
>>> 2014-02-13 09:40:33,284 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 routers to update status.
>>> 2014-02-13 09:40:33,285 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 networks to update RvR status
>>>
>>> 2014-02-13 09:40:41,613 DEBUG [xen.resource.CitrixResourceBase]
>>> (DirectAgent-45:null) *Trying to connect to 169.254.1.44*
>>> 2014-02-13 09:40:41,746 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-14:null) SeqA 2-9: Processing Seq 2-9:  { Cmd ,
>>> MgmtId: -1, via: 2, Ver: v1, Flags:
>>> 11,
>>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n
>>> \"connections\": []\n}","wait":0}}] }
>>>
>>> 2014-02-13 09:41:02,208 DEBUG [cloud.consoleproxy.ConsoleProxyManagerImpl]
>>> (consoleproxy-1:null) Zone 1 is ready to launch console proxy
>>> 2014-02-13 09:41:03,284 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 routers to update status.
>>> 2014-02-13 09:41:03,285 DEBUG
>>> [network.router.VirtualNetworkApplianceManagerImpl]
>>> (RouterStatusMonitor-1:null) Found 0 networks to update RvR status.
>>>
>>> 2014-02-13 09:41:14,743 DEBUG [xen.resource.XenServerConnectionPool]
>>> (DirectAgent-15:null) *connect through IP(10.11.17.32) for
>>> pool(fabd7eda-e96a-8cbf-70d7-aeb6e6551e73) is broken* due
>>> to org.apache.xmlrpc.XmlRpcException: Failed to read server's response:
>>> connect timed out
>>> 2014-02-13 09:41:14,743 DEBUG [xen.resource.XenServerConnectionPool]
>>> (DirectAgent-15:null) *Remove master connection through 10.11.17.32 for
>>> pool(null)*
>>> 2014-02-13 09:41:14,743 DEBUG [xen.resource.XenServerConnectionPool]
>>> (DirectAgent-15:null) Logging on as the slave to 10.11.17.32
>>>
>>> kHealthCommand":{"wait":50}}] }
>>> 2014-02-13 09:41:19,626 DEBUG [agent.manager.DirectAgentAttache]
>>> (DirectAgent-50:null) Seq 1-535822368: Executing request
>>> 2014-02-13 09:41:19,749 DEBUG [xen.resource.XenServerConnectionPool]
>>> (DirectAgent-15:null) *Unable to create slave connection to
>>> host(77c0a8c2-101e-4398-ab5d-7dcc46decdf8)* due
>>> to org.apache.xmlrpc.XmlRpcException: Failed to read server's response:
>>> connect timed out
>>>
>>> 2014-02-13 09:41:45,882 WARN  [agent.manager.DirectAgentAttache]
>>> (DirectAgent-1:null) Seq 1-535822370: Exception Caught while executing
>>> command
>>> com.cloud.utils.exception.CloudRuntimeException: *Unable to create slave
>>> connection to host(77c0a8c2-101e-4398-ab5d-7dcc46decdf8) due to
>>> org.apache.xmlrpc.XmlRpcException: Failed to read server's response: No
>>> route to host*
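>>>
>>> A basic reachability check from the management server when these timeouts
>>> and "No route to host" errors appear (a rough sketch):
>>>
>>>   ping -c 3 10.11.17.32
>>>   # xapi answers on ports 80/443; confirm the management server can reach it:
>>>   curl -sk -o /dev/null -w '%{http_code}\n' https://10.11.17.32/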
>>>
>>>
>>> *Hypervisor logs:*
>>>
>>> Feb 13 04:36:17 xenserver-Host1 ovs-vswitchd: 00179|netdev|WARN|failed to
>>> get flags for network device vif1.1: No such device
>>> Feb 13 04:36:21 xenserver-Host1 xenstored:  A7           getdomain 1
>>> Feb 13 04:36:21 xenserver-Host1 ovs-vswitchd:
>>> 00180|netdev_linux|WARN|ioctl(SIOCGIFINDEX) *on vif1.1 device failed: No
>>> such device*
>>> Feb 13 04:36:21 xenserver-Host1 ovs-vswitchd:
>>> 00181|netdev_linux|WARN|vif1.1: linux-sys get stats failed 19
>>> Feb 13 04:36:21 xenserver-Host1 ovs-vswitchd:
>>> 00182|netdev_linux|WARN|*ethtool command ETHTOOL_GDRVINFO on network device
>>> vif1.1 failed: No such device*
>>> Feb 13 04:36:21 xenserver-Host1 ovs-vswitchd:
>>> 00183|netdev_linux|WARN|*ethtool command ETHTOOL_GSET on network device
>>> vif1.1 failed: No such device*
>>>
>>> Feb 13 04:36:23 xenserver-Host1 xapi: [ info|xenserver-Host1|4275 INET
>>> 0.0.0.0:80|SR.scan R:9926ae7c7a39|storage_impl] SR.scan
>>> dbg:OpaqueRef:9926ae7c-7a39-cd67-e976-a4921ba0aea2
>>> sr:261b670c-3050-d2e8-e318-a5b0460dc4c7
>>> Feb 13 04:36:23 xenserver-Host1 xapi: [ info|xenserver-Host1|4275 INET
>>> 0.0.0.0:80|sm_exec D:40bbf6e26998|xapi] Session.create
>>> trackid=950b4175555ea2317194796c52dab3b7 pool=false
>>> uname= is_local_superuser=true auth_user_sid=
>>> parent=trackid=9834f5af41c964e225f24279aefe4e49
>>> Feb 13 04:36:23 xenserver-Host1 xapi: [ info|xenserver-Host1|4279
>>> UNIX /var/xapi/xapi|session.login_with_password D:77fbf51ebd09|xapi]
>>> Session.create trackid=6c4640a281c134f5b59e6f54bdde8775 pool=false
>>> uname=root is_local_superuser=true
>>> auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>>> Feb 13 04:36:23 xenserver-Host1 xapi: [ info|xenserver-Host1|4275 INET
>>> 0.0.0.0:80|sm_exec D:40bbf6e26998|xapi] Session.destroy
>>> trackid=950b4175555ea2317194796c52dab3b7
>>> Feb 13 04:36:23 xenserver-Host1 xapi: [ info|xenserver-Host1|4303 UNIX
>>> /var/xapi/xapi|session.logout D:19cec38867cb|xapi] Session.destroy
>>> trackid=6c4640a281c134f5b59e6f54bdde8775
>>> Feb 13 04:36:26 xenserver-Host1 ovs-vswitchd: 00223|netdev|WARN|*Dropped 56
>>> log messages in last 9 seconds* (most recently, 1 seconds ago) due to
>>> excessive rate
>>> Feb 13 04:36:26 xenserver-Host1 ovs-vswitchd: 00224|netdev|WARN|*failed to
>>> get flags for network device vif1.1: No such device*
>>> Feb 13 04:36:26 xenserver-Host1 ovs-vswitchd:
>>> 00225|netdev_linux|WARN|Dropped 8 log messages in last 5 seconds (most
>>> recently, 5 seconds ago) due to excessive rate
>>> Feb 13 04:36:26 xenserver-Host1 ovs-vswitchd:
>>> 00226|netdev_linux|WARN|ioctl(SIOCGIFINDEX) *on vif1.1 device failed: No
>>> such device*
>>>
>>>
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.0
>>> external-ids:"xs-vm-uuid"="3ee2bf12-9ba0-fbc1-4789-c7fd9fc01194"
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.0
>>> external-ids:"xs-vif-uuid"="9779edb4-0ca0-7337-71ad-480ea5fcef9d"
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.0
>>> external-ids:"xs-network-uuid"="33831c9a-b2e2-4478-21b7-697da4f9cc4e"
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.0
>>> external-ids:"attached-mac"="0e:00:a9:fe:01:95"
>>>
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: Called as "online vif" domid:2
>>> devid:1 mode:openvswitch
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: Setting vif2.1 MTU 1500
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: Adding vif2.1 to xenbr0 with
>>> address fe:ff:ff:ff:ff:ff
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: *Failed to ip link set vif2.1
>>> address fe:ff:ff:ff:ff:ff*
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.1
>>> external-ids:"xs-vm-uuid"="3ee2bf12-9ba0-fbc1-4789-c7fd9fc01194"
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.1
>>> external-ids:"xs-vif-uuid"="5b9f0df6-2756-1aaf-690e-249df678a240"
>>> Feb 13 04:36:38 xenserver-Host1 scripts-vif: vif2.1
>>> external-ids:"xs-network-uuid"="a67b6e0f-6452-7474-c850-f1ef215d3941"
>>>
>>>
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: Called as "add vif" domid:2
>>> devid:2 mode:openvswitch
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: Called as "online vif" domid:2
>>> devid:2 mode:openvswitch
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: Setting vif2.2 MTU 1500
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: Adding vif2.2 to xenbr0 with
>>> address fe:ff:ff:ff:ff:ff
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: *Failed to ip link set vif2.2
>>> address fe:ff:ff:ff:ff:ff*
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: vif2.2
>>> external-ids:"xs-vm-uuid"="3ee2bf12-9ba0-fbc1-4789-c7fd9fc01194"
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: vif2.2
>>> external-ids:"xs-vif-uuid"="7c4e02c7-658c-4f1f-ea00-d585f052b9c9"
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: vif2.2
>>> external-ids:"xs-network-uuid"="a67b6e0f-6452-7474-c850-f1ef215d3941"
>>> Feb 13 04:36:39 xenserver-Host1 scripts-vif: vif2.2
>>> external-ids:"attached-mac"="06:2b:90:00:00:0f"
>>>
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: Adding vif3.2 to xenbr0 with
>>> address fe:ff:ff:ff:ff:ff
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: Failed to ip link set vif3.2
>>> address fe:ff:ff:ff:ff:ff
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: vif3.2
>>> external-ids:"xs-vm-uuid"="a0ae11c8-af19-1acb-4998-a7de1457a886"
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: vif3.2
>>> external-ids:"xs-vif-uuid"="f61262e1-f848-25e2-6c74-338deb0efe59"
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: vif3.2
>>> external-ids:"xs-network-uuid"="a67b6e0f-6452-7474-c850-f1ef215d3941"
>>> Feb 13 04:37:43 xenserver-Host1 scripts-vif: vif3.2
>>> external-ids:"attached-mac"="06:e8:4c:00:00:0e"
>>>
>>> Feb 13 04:38:24 xenserver-Host1 xapi: [ info|xenserver-Host1|4856
>>> UNIX /var/xapi/xapi|session.login_with_password D:29f302c62711|xapi]
>>> Session.create trackid=7351c4bbf9b985953add83ed4fb5dfe3 pool=false
>>> uname=root is_local_superuser=true
>>> auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>>> Feb 13 04:38:24 xenserver-Host1 xapi: [ info|xenserver-Host1|4841 INET
>>> 0.0.0.0:80|sm_exec D:e58e39494b07|xapi] Session.destroy
>>> trackid=a4921a1edef9d2a3f8442836d620e900
>>> Feb 13 04:38:24 xenserver-Host1 xapi: [ info|xenserver-Host1|4874 UNIX
>>> /var/xapi/xapi|session.logout D:6f611c2117d0|xapi] Session.destroy
>>> trackid=7351c4bbf9b985953add83ed4fb5dfe3
>>>
>>> After this the hypervisor reboots (the reboots might be due to HA being enabled).
>>>
>>> *Logs from the primary/secondary storage:*
>>>
>>> Feb 13 04:08:30 secondary-storage rpc.idmapd[1767]: nss_getpwnam: name '0'
>>> does not map into domain 'localdomain'
>>> Feb 13 04:30:29 secondary-storage rpc.mountd[1729]: authenticated mount
>>> request from 10.11.17.32:679 for /export/primary (/export)
>>> Feb 13 04:30:29 secondary-storage rpc.mountd[1729]: authenticated unmount
>>> request from 10.11.17.32:686 for /export/primary (/export)
>>> Feb 13 04:30:29 secondary-storage rpc.mountd[1729]: authenticated mount
>>> request from 10.11.17.32:710 for /export/primary (/export)
>>> Feb 13 04:30:49 secondary-storage rpc.mountd[1729]: authenticated mount
>>> request from 10.11.17.32:1007 for /export/secondary/template/tmpl/1/
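>>>
>>> The rpc.idmapd "nss_getpwnam" line suggests the NFSv4 ID-mapping domain may
>>> not match between the storage server and its clients; a common check on the
>>> storage server (a sketch, assuming the stock CentOS config path):
>>>
>>>   grep -i '^Domain' /etc/idmapd.conf   # should match the clients' domain
>>>   service rpcidmapd restart            # restart after editing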
>>>
>>> Umair Azam
>>> Associate Systems Administrator
>>> Network Operations Center
>>> i2c Incorporated
>>> 1300 Island Drive, Suite 105
>>> Redwood City, CA 94065-5170
>>> Desk: +1 650.480.5291
>>> PBX: +1 650.593.5400 x 4244
>>> 24x7 NOC: +1 650.480.5291
>>> Fax: +1 650.593.5402
>>> URL: www.i2cinc.com
>>>
>>> On 2/12/2014 10:28 AM, Shanker Balan wrote:
>>>
>>> Comments inline.
>>>
>>> On 12-Feb-2014, at 9:09 am, Umair Azam <ua...@i2cinc.com> wrote:
>>>
>>> I am also getting the following in the hypervisor logs. Do you think it's an issue
>>> with the NIC on the primary storage? I can mount that filesystem manually.
>>>
>>> /var/log/messages:Feb 11 08:16:40 xenserver-Host1 kernel: [ 3103.098811]
>>> e1000e: eth0 NIC Link is Down
>>> /var/log/messages:Feb 11 08:21:31 xenserver-Host1 kernel: [ 3393.829579]
>>> e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
>>> /var/log/messages:Feb 11 08:21:31 xenserver-Host1 kernel: [ 3393.829687]
>>> e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
>>> /var/log/messages:Feb 11 08:23:50 xenserver-Host1 kernel: [ 3533.528756]
>>> e1000e: eth0 NIC Link is Down
>>> /var/log/messages:Feb 11 08:23:20 xenserver-Host1 kernel: [ 23.231541]
>>> e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:4f:fb:81:16
>>> /var/log/messages:Feb 11 08:23:20 xenserver-Host1 kernel: [ 23.231544]
>>> e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
>>> /var/log/messages:Feb 11 08:23:20 xenserver-Host1 kernel: [ 23.231562]
>>> e1000e 0000:00:19.0: eth0: MAC: 7, PHY: 6, PBA No: 1051FF-0FF
>>>
>>>
>>> Your NIC link is flapping way too often, and that's possibly what's causing
>>> NFS to freak out. Try to narrow down the cause of the link flaps:
>>>
>>> 1) Force link speed settings (see the example after this list)
>>> 2) Change patch cords
>>> 3) Replace physical NIC
>>> 4) Try patching to another switch port
>>> 5) Check switch port speed settings
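>>>
>>> For (1), a rough example matching the 100 Mbps link seen in your logs:
>>>
>>>   ethtool eth0                                       # current link state
>>>   ethtool -s eth0 speed 100 duplex full autoneg off  # pin speed/duplex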
>>>
>>>
>>> Hth.
>>>
>>>
>>> --
>>> @shankerbalan
>>>
>>> M: +91 98860 60539 | O: +91 (80) 67935867
>>> shanker.balan@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
>>> ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre,
>>> Bangalore - 560 055
>>>