Posted to dev@cloudstack.apache.org by Matthew Hartmann <mh...@tls.net> on 2012/05/22 18:02:32 UTC

XenServer inode exhaustion

Hello all!

I have quite an odd issue with my XenServer 6.0.2 machines that are in a 
Pool managed by CloudStack 3.0.2.

In my Pool, I have a Pool Master and two Pool Members. On the Pool 
Master, there are 12 VMs running, and in /tmp there are only 18 
"stream-unix.####.#" files. I thought this was odd, so I checked my Pool 
Members.

On Pool Member "B" there are only 2 VMs running and roughly 63,700 
"stream-unix.####.#" stale socket files.

On Pool Member "C" there are only 8 VMs running and roughly 63,600 
"stream-unix.####.#" stale socket files.

The last time this happened, the stale socket files exhausted the 
available inodes on the file system. Once the inodes were gone, the file 
system reported itself as full. This resulted in a corrupt XenServer 
Pool and led to rebuilding the Pool (not to mention countless hours 
of scrubbing the CloudStack database).
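
For reference, here is roughly how I have been checking each host (run 
from dom0; the /tmp location and "stream-unix" name pattern are just 
what I see on my hosts):

    # count the stale stream-unix sockets in /tmp
    find /tmp -maxdepth 1 -name 'stream-unix.*' | wc -l

    # watch inode usage on the root file system; IUse% at 100% is exhaustion
    df -i /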

All thoughts and/or suggestions are welcome!

Cheers,

Matthew

RE: XenServer inode exhaustion

Posted by Anthony Xu <Xu...@citrix.com>.
Hi,
Try removing /etc/udev/rules.d/xen-ovs-vif-flows.rules and the stale socket files under /tmp on each XenServer host.
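
Something like this should do it (an untested sketch; back up the rule 
file rather than deleting it outright, and double-check the pattern only 
matches the stale sockets):

    # move the udev rule out of the way
    mv /etc/udev/rules.d/xen-ovs-vif-flows.rules /root/xen-ovs-vif-flows.rules.bak

    # remove the stale sockets; find -delete avoids the argument-list
    # limit that rm would hit with tens of thousands of files
    find /tmp -maxdepth 1 -name 'stream-unix.*' -type s -delete

If you want to keep the rule for some reason, running the find 
periodically from cron should at least keep inode usage in check.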


Anthony

> -----Original Message-----
> From: Clayton Weise [mailto:cweise@iswest.net]
> Sent: Monday, August 20, 2012 11:31 AM
> To: 'CloudStack Devs'; 'CloudStack user/admin discussions'
> Subject: RE: XenServer inode exhaustion
> 
> This just happened to me, only it happened to the pool master and one
> of the slaves.  Does anybody have any insight/ideas on this?  Matthew,
> how were you able to fix this?
> 
> -----Original Message-----
> From: Matthew Hartmann [mailto:mhartmann@tls.net]
> Sent: Tuesday, May 22, 2012 9:03 AM
> To: CloudStack user/admin discussions; CloudStack Devs
> Subject: XenServer inode exhaustion
> 
> Hello all!
> 
> I have quite an odd issue with my XenServer 6.0.2 machines that are in
> a Pool managed by CloudStack 3.0.2.
> 
> In my Pool, I have a Pool Master and two Pool Members. On the Pool
> Master, there are 12 VMs running, and in /tmp there are only 18
> "stream-unix.####.#" files. I thought this was odd, so I checked my Pool
> Members.
> 
> On Pool Member "B" there are only 2 VMs running and roughly 63,700
> "stream-unix.####.#" stale socket files.
> 
> On Pool Member "C" there are only 8 VMs running and roughly 63,600
> "stream-unix.####.#" stale socket files.
> 
> The last time this happened, the stale socket files exhausted the
> available inodes on the file system. Once the inodes were gone, the
> file system reported itself as full. This resulted in a corrupt
> XenServer Pool and led to rebuilding the Pool (not to mention countless
> hours of scrubbing the CloudStack database).
> 
> All thoughts and/or suggestions are welcome!
> 
> Cheers,
> 
> Matthew

RE: XenServer inode exhaustion

Posted by Clayton Weise <cw...@iswest.net>.
This just happened to me, only it happened to the pool master and one of the slaves.  Does anybody have any insight/ideas on this?  Matthew, how were you able to fix this?

-----Original Message-----
From: Matthew Hartmann [mailto:mhartmann@tls.net] 
Sent: Tuesday, May 22, 2012 9:03 AM
To: CloudStack user/admin discussions; CloudStack Devs
Subject: XenServer inode exhaustion

Hello all!

I have quite an odd issue with my XenServer 6.0.2 machines that are in a 
Pool managed by CloudStack 3.0.2.

In my Pool, I have a Pool Master and two Pool Members. On the Pool 
Master, there are 12 VMs running, and in /tmp there are only 18 
"stream-unix.####.#" files. I thought this was odd, so I checked my Pool 
Members.

On Pool Member "B" there are only 2 VMs running and roughly 63,700 
"stream-unix.####.#" stale socket files.

On Pool Member "C" there are only 8 VMs running and roughly 63,600 
"stream-unix.####.#" stale socket files.

The last time this happened, the stale socket files exhausted the 
available inodes on the file system. Once the inodes were gone, the file 
system reported itself as full. This resulted in a corrupt XenServer 
Pool and led to rebuilding the Pool (not to mention countless hours 
of scrubbing the CloudStack database).

All thoughts and/or suggestions are welcome!

Cheers,

Matthew
