Posted to dev@cloudstack.apache.org by Marcus Sorensen <sh...@gmail.com> on 2013/02/21 02:51:53 UTC
[DISCUSS] Management Server Memory Requirements
When Javelin was merged, there was an email sent out stating that devs
should set their MAVEN_OPTS to use 2g of heap and 512M of permanent
generation (PermGen) memory. Since then there have been several
e-mails and issues where devs have echoed this recommendation, and
presumably it fixed their problems. I've seen the MS run out of memory
myself and applied those recommendations.
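For reference, a rough sketch of that dev-environment recommendation
as it is usually applied (the flag names assume a pre-Java-8 JVM,
where the permanent generation still exists; adjust values to taste):

    # give Maven-launched runs a 2g heap and 512M of PermGen
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512m"
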
Is this what we want to provide in the tomcat config for a
package-based install as well? It's effectively saying that the
minimum requirement for the management server is something like 3 or
4 GB of RAM (to be safe for other running tasks), right?
There is currently a bug filed that may or may not have to do with
this, CLOUDSTACK-1339. Users report mgmt server slowness, going
unresponsive for minutes at a time, while the logs seem to show
business as usual. One user reports that java is taking 75% of RAM;
depending on what else is going on, they may be swapping. The settings
in the code for an install are currently at 2g/512M. I've been running
this on a 4GB server for a while now and java is at 900M, but I
haven't been pounding it with requests or anything.
This bug might not have anything to do with the memory settings, but I
figured it would be good to nail down what our minimum requirements
are for 4.1.
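As a concrete illustration of the 2g/512M packaged-install settings
mentioned above, the JAVA_OPTS line in the management server's
tomcat6.conf would look roughly like this; the headless and heap-dump
flags are taken from the configuration quoted later in this thread,
and the exact file shipped in 4.1 may differ:

    JAVA_OPTS="-Djava.awt.headless=true -Xms2g -Xmx2g -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/cloudstack/management/"
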
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Yeah, I can't get the management server to even function for long
with those memory settings; it throws out-of-memory exceptions. I ran
into that with my devcloud Xen last Friday, as I run the management
server inside of it and dom0 only has 1GB of RAM. Increasing to 1.5GB
was enough to get by.
Unless anyone has some Java tricks or insight into the Spring
framework that can improve memory use (I'm assuming this is due to
Spring, based on prior discussions), it seems like we should probably
set a minimum of 2GB for the management server, with 4GB recommended,
since many people will run their MySQL on the same host.
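Before settling on those numbers, one way to sanity-check how much of
the configured heap the running management server actually uses is
with the standard JDK tools (a sketch; <pid> here stands for the
management server's java process id):

    # heap and PermGen occupancy, sampled every 5 seconds, 12 samples
    jstat -gcutil <pid> 5000 12
    # resident set size as the kernel sees it
    ps -o pid,rss,vsz,cmd -p <pid>
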
On Wed, Feb 20, 2013 at 9:46 PM, Parth Jagirdar
<Pa...@citrix.com> wrote:
> JAVA_OPTS="-Djava.awt.headless=true
> -Dcom.sun.management.jmxremote.port=45219
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
> -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>
> Which did not help.
>
> --------------
>
> [root@localhost management]# cat /proc/meminfo
> MemTotal: 1016656 kB
> MemFree: 68400 kB
> Buffers: 9108 kB
> Cached: 20984 kB
> SwapCached: 17492 kB
> Active: 424152 kB
> Inactive: 433152 kB
> Active(anon): 409812 kB
> Inactive(anon): 417412 kB
> Active(file): 14340 kB
> Inactive(file): 15740 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 2031608 kB
> SwapFree: 1840900 kB
> Dirty: 80 kB
> Writeback: 0 kB
> AnonPages: 815460 kB
> Mapped: 11408 kB
> Shmem: 4 kB
> Slab: 60120 kB
> SReclaimable: 10368 kB
> SUnreclaim: 49752 kB
> KernelStack: 5216 kB
> PageTables: 6800 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 2539936 kB
> Committed_AS: 1596896 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 7724 kB
> VmallocChunk: 34359718200 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 503808 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> DirectMap4k: 6144 kB
> DirectMap2M: 1038336 kB
> [root@localhost management]#
> -----------------------------
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>
>
> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>
>
> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>
>
> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>
>
>
>
>
> On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>
>>Hi,
>>
>>Cloudstack Java process statistics when it stops responding are
>>given below:
>>
>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>>0.0%st
>>Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>
>>Thanks,
>>Sailaja.M
>>
>>-----Original Message-----
>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>Sent: Thursday, February 21, 2013 9:35 AM
>>To: cloudstack-dev@incubator.apache.org
>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>
>>Yes, these are great data points, but so far nobody has responded on that
>>ticket with the information required to know if the slowness is related
>>to memory settings or swapping. That was just a hunch on my part from
>>being a system admin.
>>
>>How much memory do these systems have that experience issues? What does
>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>memory settings make a difference (see ticket comments)? How much memory
>>do the java processes list as resident in top?
>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>>wrote:
>>
>>> +1 Performance degradation is dramatic and I too have observed this
>>>issue.
>>>
>>> I have logged my comments into 1339.
>>>
>>>
>>> ...Parth
>>>
>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>> <sr...@citrix.com> wrote:
>>>
>>> >To add to what Marcus mentioned,
>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>> >5-10 min of starting management server and there has been a lot of
>>> >API requests through automated tests. It is observed that Management
>>> >server not only slows down but also goes down after a while.
>>> >
>>> >~Talluri
>>> >
>>> >-----Original Message-----
>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>> >Sent: Thursday, February 21, 2013 7:22
>>> >To: cloudstack-dev@incubator.apache.org
>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>> >
>>> >When Javelin was merged, there was an email sent out stating that
>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>> >permanent memory. Subsequently, there have also been several e-mails
>>> >and issues where devs have echoed this recommendation, and presumably
>>> >it fixed issues. I've seen the MS run out of memory myself and
>>> >applied those recommendations.
>>> >
>>> >Is this what we want to provide in the tomcat config for a package
>>> >based install as well? It's effectively saying that the minimum
>>> >requirements for the management server are something like 3 or 4 GB
>>> >(to be safe for other running tasks) of RAM, right?
>>> >
>>> >There is currently a bug filed that may or may not have to do with
>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>>> >unresponsive for minutes at a time, but the logs seem to show
>>> >business as usual. User reports that java is taking 75% of RAM,
>>> >depending on what else is going on they may be swapping. Settings in
>>> >the code for an install are currently at 2g/512M, I've been running
>>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>>> >been pounding it with requests or anything.
>>> >
>>> >This bug might not have anything to do with the memory settings, but
>>> >I figured it would be good to nail down what our minimum requirements
>>> >are for 4.1
>>>
>>>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
As mentioned, it seemed to coincide with the Javelin merge and the
Spring framework. Maybe there is some tuning to do there, but it has
been brought up before with no response, in the threads discussing the
MAVEN_OPTS memory increases.
On Wed, Feb 20, 2013 at 10:03 PM, Sudha Ponnaganti
<su...@citrix.com> wrote:
> I think we need to investigate why we need more memory before increasing requirements. Would below data points provide that kind of info??
>
>
> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: Wednesday, February 20, 2013 8:41 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: [DISCUSS] Management Server Memory Requirements
>
> Thanks. Looks like test servers are 1GB. And there is swapping. Can you run "vmstat 1" and give us 30 seconds of output?
>
> So we need to decide as a dev team if we need to raise minimum requirements and/or lower java process memory according to what new 4.1 code can get away with.
> On Feb 20, 2013 9:27 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>
>> Hi,
>>
>> Cloudstack Java process statistics when it stops responding are
>> given below:
>>
>> top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
>> Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>> 0.0%st
>> Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>> Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>
>> Thanks,
>> Sailaja.M
>>
>> -----Original Message-----
>> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> Sent: Thursday, February 21, 2013 9:35 AM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: Re: [DISCUSS] Management Server Memory Requirements
>>
>> Yes, these are great data points, but so far nobody has responded on
>> that ticket with the information required to know if the slowness is
>> related to memory settings or swapping. That was just a hunch on my
>> part from being a system admin.
>>
>> How much memory do these systems have that experience issues? What
>> does /proc/meminfo say during the issues? Does adjusting the
>> tomcat6.conf memory settings make a difference (see ticket comments)?
>> How much memory do the java processes list as resident in top?
>> On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>> wrote:
>>
>> > +1 Performance degradation is dramatic and I too have observed this
>> issue.
>> >
>> > I have logged my comments into 1339.
>> >
>> >
>> > ...Parth
>> >
>> > On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>> > <sr...@citrix.com> wrote:
>> >
>> > >To add to what Marcus mentioned,
>> > >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>> > >5-10 min of starting management server and there has been a lot of
>> > >API requests through automated tests. It is observed that
>> > >Management server not only slows down but also goes down after a while.
>> > >
>> > >~Talluri
>> > >
>> > >-----Original Message-----
>> > >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> > >Sent: Thursday, February 21, 2013 7:22
>> > >To: cloudstack-dev@incubator.apache.org
>> > >Subject: [DISCUSS] Management Server Memory Requirements
>> > >
>> > >When Javelin was merged, there was an email sent out stating that
>> > >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>> > >permanent memory. Subsequently, there have also been several
>> > >e-mails and issues where devs have echoed this recommendation, and
>> > >presumably it fixed issues. I've seen the MS run out of memory
>> > >myself and applied those recommendations.
>> > >
>> > >Is this what we want to provide in the tomcat config for a package
>> > >based install as well? It's effectively saying that the minimum
>> > >requirements for the management server are something like 3 or 4 GB
>> > >(to be safe for other running tasks) of RAM, right?
>> > >
>> > >There is currently a bug filed that may or may not have to do with
>> > >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>> > >unresponsive for minutes at a time, but the logs seem to show
>> > >business as usual. User reports that java is taking 75% of RAM,
>> > >depending on what else is going on they may be swapping. Settings
>> > >in the code for an install are currently at 2g/512M, I've been
>> > >running this on a 4GB server for awhile now, java is at 900M, but I
>> > >haven't been pounding it with requests or anything.
>> > >
>> > >This bug might not have anything to do with the memory settings,
>> > >but I figured it would be good to nail down what our minimum
>> > >requirements are for 4.1
>> >
>> >
>>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, this could be bad for people who might want to upgrade to 4.1.
On Wed, Feb 20, 2013 at 10:03 PM, Sudha Ponnaganti
<su...@citrix.com> wrote:
> I think we need to investigate why we need more memory before increasing requirements. Would below data points provide that kind of info??
>
>
> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: Wednesday, February 20, 2013 8:41 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: [DISCUSS] Management Server Memory Requirements
>
> Thanks. Looks like test servers are 1GB. And there is swapping. Can you run "vmstat 1" and give us 30 seconds of output?
>
> So we need to decide as a dev team if we need to raise minimum requirements and/or lower java process memory according to what new 4.1 code can get away with.
> On Feb 20, 2013 9:27 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>
>> Hi,
>>
>> Cloudstack Java process statistics when it stops responding are
>> given below:
>>
>> top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
>> Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>> 0.0%st
>> Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>> Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>
>> Thanks,
>> Sailaja.M
>>
>> -----Original Message-----
>> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> Sent: Thursday, February 21, 2013 9:35 AM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: Re: [DISCUSS] Management Server Memory Requirements
>>
>> Yes, these are great data points, but so far nobody has responded on
>> that ticket with the information required to know if the slowness is
>> related to memory settings or swapping. That was just a hunch on my
>> part from being a system admin.
>>
>> How much memory do these systems have that experience issues? What
>> does /proc/meminfo say during the issues? Does adjusting the
>> tomcat6.conf memory settings make a difference (see ticket comments)?
>> How much memory do the java processes list as resident in top?
>> On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>> wrote:
>>
>> > +1 Performance degradation is dramatic and I too have observed this
>> issue.
>> >
>> > I have logged my comments into 1339.
>> >
>> >
>> > ...Parth
>> >
>> > On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>> > <sr...@citrix.com> wrote:
>> >
>> > >To add to what Marcus mentioned,
>> > >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>> > >5-10 min of starting management server and there has been a lot of
>> > >API requests through automated tests. It is observed that
>> > >Management server not only slows down but also goes down after a while.
>> > >
>> > >~Talluri
>> > >
>> > >-----Original Message-----
>> > >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> > >Sent: Thursday, February 21, 2013 7:22
>> > >To: cloudstack-dev@incubator.apache.org
>> > >Subject: [DISCUSS] Management Server Memory Requirements
>> > >
>> > >When Javelin was merged, there was an email sent out stating that
>> > >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>> > >permanent memory. Subsequently, there have also been several
>> > >e-mails and issues where devs have echoed this recommendation, and
>> > >presumably it fixed issues. I've seen the MS run out of memory
>> > >myself and applied those recommendations.
>> > >
>> > >Is this what we want to provide in the tomcat config for a package
>> > >based install as well? It's effectively saying that the minimum
>> > >requirements for the management server are something like 3 or 4 GB
>> > >(to be safe for other running tasks) of RAM, right?
>> > >
>> > >There is currently a bug filed that may or may not have to do with
>> > >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>> > >unresponsive for minutes at a time, but the logs seem to show
>> > >business as usual. User reports that java is taking 75% of RAM,
>> > >depending on what else is going on they may be swapping. Settings
>> > >in the code for an install are currently at 2g/512M, I've been
>> > >running this on a 4GB server for awhile now, java is at 900M, but I
>> > >haven't been pounding it with requests or anything.
>> > >
>> > >This bug might not have anything to do with the memory settings,
>> > >but I figured it would be good to nail down what our minimum
>> > >requirements are for 4.1
>> >
>> >
>>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Sudha Ponnaganti <su...@citrix.com>.
I think we need to investigate why we need more memory before increasing the requirements. Would the data points below provide that kind of info?
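If someone wants to dig into where the memory is actually going rather
than just raising the limits, a rough starting point with standard JDK
tooling (nothing CloudStack-specific; <pid> is the management server's
java process id):

    # histogram of live objects -- a quick view of what dominates the heap
    jmap -histo:live <pid> | head -n 30
    # full heap dump for offline analysis, e.g. with Eclipse MAT
    jmap -dump:live,format=b,file=/tmp/ms-heap.hprof <pid>
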
-----Original Message-----
From: Marcus Sorensen [mailto:shadowsor@gmail.com]
Sent: Wednesday, February 20, 2013 8:41 PM
To: cloudstack-dev@incubator.apache.org
Subject: RE: [DISCUSS] Management Server Memory Requirements
Thanks. Looks like test servers are 1GB. And there is swapping. Can you run "vmstat 1" and give us 30 seconds of output?
So we need to decide as a dev team if we need to raise minimum requirements and/or lower java process memory according to what new 4.1 code can get away with.
On Feb 20, 2013 9:27 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
> Hi,
>
> Cloudstack Java process statistics when it stops responding are
> given below:
>
> top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
> Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
> Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
> 0.0%st
> Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
> Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>
> Thanks,
> Sailaja.M
>
> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: Thursday, February 21, 2013 9:35 AM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Management Server Memory Requirements
>
> Yes, these are great data points, but so far nobody has responded on
> that ticket with the information required to know if the slowness is
> related to memory settings or swapping. That was just a hunch on my
> part from being a system admin.
>
> How much memory do these systems have that experience issues? What
> does /proc/meminfo say during the issues? Does adjusting the
> tomcat6.conf memory settings make a difference (see ticket comments)?
> How much memory do the java processes list as resident in top?
> On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
> wrote:
>
> > +1 Performance degradation is dramatic and I too have observed this
> issue.
> >
> > I have logged my comments into 1339.
> >
> >
> > ...Parth
> >
> > On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
> > <sr...@citrix.com> wrote:
> >
> > >To add to what Marcus mentioned,
> > >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
> > >5-10 min of starting management server and there has been a lot of
> > >API requests through automated tests. It is observed that
> > >Management server not only slows down but also goes down after a while.
> > >
> > >~Talluri
> > >
> > >-----Original Message-----
> > >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> > >Sent: Thursday, February 21, 2013 7:22
> > >To: cloudstack-dev@incubator.apache.org
> > >Subject: [DISCUSS] Management Server Memory Requirements
> > >
> > >When Javelin was merged, there was an email sent out stating that
> > >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
> > >permanent memory. Subsequently, there have also been several
> > >e-mails and issues where devs have echoed this recommendation, and
> > >presumably it fixed issues. I've seen the MS run out of memory
> > >myself and applied those recommendations.
> > >
> > >Is this what we want to provide in the tomcat config for a package
> > >based install as well? It's effectively saying that the minimum
> > >requirements for the management server are something like 3 or 4 GB
> > >(to be safe for other running tasks) of RAM, right?
> > >
> > >There is currently a bug filed that may or may not have to do with
> > >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
> > >unresponsive for minutes at a time, but the logs seem to show
> > >business as usual. User reports that java is taking 75% of RAM,
> > >depending on what else is going on they may be swapping. Settings
> > >in the code for an install are currently at 2g/512M, I've been
> > >running this on a 4GB server for awhile now, java is at 900M, but I
> > >haven't been pounding it with requests or anything.
> > >
> > >This bug might not have anything to do with the memory settings,
> > >but I figured it would be good to nail down what our minimum
> > >requirements are for 4.1
> >
> >
>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Thanks. Looks like the test servers are 1GB, and there is swapping.
Can you run "vmstat 1" and give us 30 seconds of output?
So we need to decide as a dev team whether to raise the minimum
requirements and/or lower the java process memory, according to what
the new 4.1 code can get away with.
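Something along these lines captures exactly that (30 one-second
samples, written to a file so it can be attached to the ticket; the
file name is just an example):

    vmstat 1 30 | tee vmstat-$(hostname)-$(date +%Y%m%d-%H%M).log
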
On Feb 20, 2013 9:27 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
> Hi,
>
> Cloudstack Java process statistics when it stops responding are
> given below:
>
> top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
> Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
> Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
> 0.0%st
> Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
> Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>
> Thanks,
> Sailaja.M
>
> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: Thursday, February 21, 2013 9:35 AM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Management Server Memory Requirements
>
> Yes, these are great data points, but so far nobody has responded on that
> ticket with the information required to know if the slowness is related to
> memory settings or swapping. That was just a hunch on my part from being a
> system admin.
>
> How much memory do these systems have that experience issues? What does
> /proc/meminfo say during the issues? Does adjusting the tomcat6.conf memory
> settings make a difference (see ticket comments)? How much memory do the
> java processes list as resident in top?
> On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
> wrote:
>
> > +1 Performance degradation is dramatic and I too have observed this
> issue.
> >
> > I have logged my comments into 1339.
> >
> >
> > ...Parth
> >
> > On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
> > <sr...@citrix.com> wrote:
> >
> > >To add to what Marcus mentioned,
> > >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
> > >5-10 min of starting management server and there has been a lot of
> > >API requests through automated tests. It is observed that Management
> > >server not only slows down but also goes down after a while.
> > >
> > >~Talluri
> > >
> > >-----Original Message-----
> > >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> > >Sent: Thursday, February 21, 2013 7:22
> > >To: cloudstack-dev@incubator.apache.org
> > >Subject: [DISCUSS] Management Server Memory Requirements
> > >
> > >When Javelin was merged, there was an email sent out stating that
> > >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
> > >permanent memory. Subsequently, there have also been several e-mails
> > >and issues where devs have echoed this recommendation, and presumably
> > >it fixed issues. I've seen the MS run out of memory myself and
> > >applied those recommendations.
> > >
> > >Is this what we want to provide in the tomcat config for a package
> > >based install as well? It's effectively saying that the minimum
> > >requirements for the management server are something like 3 or 4 GB
> > >(to be safe for other running tasks) of RAM, right?
> > >
> > >There is currently a bug filed that may or may not have to do with
> > >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
> > >unresponsive for minutes at a time, but the logs seem to show
> > >business as usual. User reports that java is taking 75% of RAM,
> > >depending on what else is going on they may be swapping. Settings in
> > >the code for an install are currently at 2g/512M, I've been running
> > >this on a 4GB server for awhile now, java is at 900M, but I haven't
> > >been pounding it with requests or anything.
> > >
> > >This bug might not have anything to do with the memory settings, but
> > >I figured it would be good to nail down what our minimum requirements
> > >are for 4.1
> >
> >
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, there we see some swap thrashing.
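The tell-tale there is the si/so pair (memory swapped in from and out
to disk per second). If sysstat is installed on the box, something
like this isolates just the swap activity (interval and count are
arbitrary):

    sar -W 1 30
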
On Wed, Feb 20, 2013 at 10:04 PM, Parth Jagirdar
<Pa...@citrix.com> wrote:
> Marcus,
>
>
> I attempted login into UI while running the log.
>
> [root@localhost management]# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system--
> -----cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id
> wa st
> 0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0 89
> 10 0
> 0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
> 100 0 0
> 0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0 91
> 8 0
> 0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1 88
> 11 0
> 0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0 75
> 24 0
> 0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0 77
> 22 0
> 0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1 90
> 8 0
> 0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
> 100 0 0
> 0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0 94
> 6 0
> 0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0 91
> 8 0
> 0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
> 100 0 0
> 0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2 5
> 84 0
> 1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2 21
> 69 0
> 1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3 0
> 69 0
> 0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3 0
> 78 0
> 1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3 0
> 82 0
> 0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1 0
> 93 0
> 0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0 0
> 99 0
> 0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2 0
> 97 0
> 0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2 89
> 9 0
> 0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0 65
> 33 0
> 0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0 85
> 14 0
> 0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1 96
> 3 0
> 0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0 94
> 6 0
> 0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0 92
> 7 0
> 0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1 66
> 26 0
> 0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1 10
> 79 0
> 0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2 40
> 33 0
> 0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2 57
> 18 0
> 1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0 0
> 8 0
> 0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2 0
> 40 0
> 0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0 63
> 36 0
> 0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0 92
> 6 0
> 0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1 90
> 9 0
> 0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
> 100 0 0
> 0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0 87
> 13 0
> 0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1 66
> 29 0
> 0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
> 100 0 0
> 0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0 92
> 7 0
> 0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1 91
> 8 0
> 0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
> 100 0 0
> 0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0 92
> 7 0
> 0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0 93
> 7 0
> 0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
> 100 0 0
> 0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0 91
> 8 0
> 0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0 94
> 6 0
> 0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0 76
> 24 0
>
>
> ...Parth
>
>
> On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>
>>Well, it doesn't seem to be actively swapping at this point, but I
>>think it's got active memory swapped out and being used as
>>occasionally wait% goes up significantly. At any rate this system is
>>severely memory limited.
>>
>>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
>><Pa...@citrix.com> wrote:
>>> Marcus,
>>>
>>> vmstat 1 output
>>>
>>>
>>> [root@localhost management]# vmstat 1
>>> procs -----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
>>>89
>>> 10 0
>>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
>>>88
>>> 12 0
>>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
>>>99
>>> 0 0
>>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
>>>88
>>> 10 0
>>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
>>>88
>>> 12 0
>>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
>>> 100 0 0
>>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
>>>88
>>> 11 0
>>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
>>> 100 0 0
>>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
>>>80
>>> 19 0
>>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
>>>74
>>> 25 0
>>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
>>>93
>>> 6 0
>>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
>>>90
>>> 9 0
>>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
>>> 100 0 0
>>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
>>>85
>>> 15 0
>>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
>>> 100 0 0
>>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
>>>94
>>> 5 0
>>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
>>> 100 0 0
>>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
>>>68
>>> 30 0
>>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
>>>92
>>> 8 0
>>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
>>> 100 0 0
>>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
>>>83
>>> 16 0
>>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
>>>69
>>> 31 0
>>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
>>>90
>>> 10 0
>>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
>>>98
>>> 0 0
>>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
>>>67
>>> 32 0
>>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
>>>67
>>> 33 0
>>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
>>>90
>>> 7 0
>>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
>>>93
>>> 7 0
>>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
>>>89
>>> 11 0
>>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
>>>90
>>> 9 0
>>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
>>>91
>>> 8 0
>>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
>>>80
>>> 19 0
>>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
>>> 100 0 0
>>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
>>>91
>>> 7 0
>>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
>>> 100 0 0
>>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
>>>90
>>> 9 0
>>> procs -----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
>>>91
>>> 8 0
>>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
>>>69
>>> 30 0
>>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
>>>86
>>> 12 0
>>>
>>> ...Parth
>>>
>>>
>>>
>>>
>>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>>>
>>>>JAVA_OPTS="-Djava.awt.headless=true
>>>>-Dcom.sun.management.jmxremote.port=45219
>>>>-Dcom.sun.management.jmxremote.authenticate=false
>>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>>>-XX:+HeapDumpOnOutOfMemoryError
>>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>>>
>>>>Which did not help.
>>>>
>>>>--------------
>>>>
>>>>[root@localhost management]# cat /proc/meminfo
>>>>MemTotal: 1016656 kB
>>>>MemFree: 68400 kB
>>>>Buffers: 9108 kB
>>>>Cached: 20984 kB
>>>>SwapCached: 17492 kB
>>>>Active: 424152 kB
>>>>Inactive: 433152 kB
>>>>Active(anon): 409812 kB
>>>>Inactive(anon): 417412 kB
>>>>Active(file): 14340 kB
>>>>Inactive(file): 15740 kB
>>>>Unevictable: 0 kB
>>>>Mlocked: 0 kB
>>>>SwapTotal: 2031608 kB
>>>>SwapFree: 1840900 kB
>>>>Dirty: 80 kB
>>>>Writeback: 0 kB
>>>>AnonPages: 815460 kB
>>>>Mapped: 11408 kB
>>>>Shmem: 4 kB
>>>>Slab: 60120 kB
>>>>SReclaimable: 10368 kB
>>>>SUnreclaim: 49752 kB
>>>>KernelStack: 5216 kB
>>>>PageTables: 6800 kB
>>>>NFS_Unstable: 0 kB
>>>>Bounce: 0 kB
>>>>WritebackTmp: 0 kB
>>>>CommitLimit: 2539936 kB
>>>>Committed_AS: 1596896 kB
>>>>VmallocTotal: 34359738367 kB
>>>>VmallocUsed: 7724 kB
>>>>VmallocChunk: 34359718200 kB
>>>>HardwareCorrupted: 0 kB
>>>>AnonHugePages: 503808 kB
>>>>HugePages_Total: 0
>>>>HugePages_Free: 0
>>>>HugePages_Rsvd: 0
>>>>HugePages_Surp: 0
>>>>Hugepagesize: 2048 kB
>>>>DirectMap4k: 6144 kB
>>>>DirectMap2M: 1038336 kB
>>>>[root@localhost management]#
>>>>-----------------------------
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>
>>>>
>>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>>>
>>>>
>>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>>>
>>>>
>>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>>>
>>>>>Hi,
>>>>>
>>>>>Cloudstack Java process statistics when it stops responding are
>>>>>given below:
>>>>>
>>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05,
>>>>>0.02
>>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>>>>>0.0%st
>>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
>>>>>buffers
>>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>>>
>>>>>Thanks,
>>>>>Sailaja.M
>>>>>
>>>>>-----Original Message-----
>>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>>>To: cloudstack-dev@incubator.apache.org
>>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>>>
>>>>>Yes, these are great data points, but so far nobody has responded on
>>>>>that
>>>>>ticket with the information required to know if the slowness is related
>>>>>to memory settings or swapping. That was just a hunch on my part from
>>>>>being a system admin.
>>>>>
>>>>>How much memory do these systems have that experience issues? What does
>>>>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>>>>memory settings make a difference (see ticket comments)? How much
>>>>>memory
>>>>>do the java processes list as resident in top?
>>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>>>>>wrote:
>>>>>
>>>>>> +1 Performance degradation is dramatic and I too have observed this
>>>>>>issue.
>>>>>>
>>>>>> I have logged my comments into 1339.
>>>>>>
>>>>>>
>>>>>> ...Parth
>>>>>>
>>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>>>> <sr...@citrix.com> wrote:
>>>>>>
>>>>>> >To add to what Marcus mentioned,
>>>>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>>>>> >5-10 min of starting management server and there has been a lot of
>>>>>> >API requests through automated tests. It is observed that Management
>>>>>> >server not only slows down but also goes down after a while.
>>>>>> >
>>>>>> >~Talluri
>>>>>> >
>>>>>> >-----Original Message-----
>>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>> >Sent: Thursday, February 21, 2013 7:22
>>>>>> >To: cloudstack-dev@incubator.apache.org
>>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>>>> >
>>>>>> >When Javelin was merged, there was an email sent out stating that
>>>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>>>> >permanent memory. Subsequently, there have also been several
>>>>>>e-mails
>>>>>> >and issues where devs have echoed this recommendation, and
>>>>>>presumably
>>>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>>>> >applied those recommendations.
>>>>>> >
>>>>>> >Is this what we want to provide in the tomcat config for a package
>>>>>> >based install as well? It's effectively saying that the minimum
>>>>>> >requirements for the management server are something like 3 or 4 GB
>>>>>> >(to be safe for other running tasks) of RAM, right?
>>>>>> >
>>>>>> >There is currently a bug filed that may or may not have to do with
>>>>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>>>>>> >unresponsive for minutes at a time, but the logs seem to show
>>>>>> >business as usual. User reports that java is taking 75% of RAM,
>>>>>> >depending on what else is going on they may be swapping. Settings in
>>>>>> >the code for an install are currently at 2g/512M, I've been running
>>>>>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>>>>>> >been pounding it with requests or anything.
>>>>>> >
>>>>>> >This bug might not have anything to do with the memory settings, but
>>>>>> >I figured it would be good to nail down what our minimum
>>>>>>requirements
>>>>>> >are for 4.1
>>>>>>
>>>>>>
>>>>
>>>
>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Yeah, it will work for a little while until it starts swapping. Up until
this point most of the bugs filed have been about starting the
management server or initial zone setup.
On Feb 20, 2013 11:09 PM, "Sudha Ponnaganti" <su...@citrix.com>
wrote:
> I just had a call with some of the QA Team members and starting yesterday,
> everyone seem to be running into this issue. I haven't heard anyone
> complaining about it earlier, however only little testing has happened
> before.
>
>
> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: Wednesday, February 20, 2013 10:05 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Management Server Memory Requirements
>
> We began seeing this at the point of the 4.1 cut. Our xen devclouds
> stopped working until we increased the dom0 RAM to 1.5GB. Nobody else was
> complaining, and most people don't run the mgmt server in devcloud, so I
> was just waiting to see where the conversation about memory went on the
> MAVEN_OPTS e-mail thread.
>
> On Wed, Feb 20, 2013 at 10:26 PM, Sudha Ponnaganti <
> sudha.ponnaganti@citrix.com> wrote:
> > Parth / Sailaja,
> >
> > Can you update ticket with data points and see if this can be assigned
> to Alex to start investigation with Javelin merge unless this can be
> associated with a specific check-in.
> >
> > Talluri - would you be able to narrow down to the check-in or build that
> we started to see this??
> >
> > Thanks
> > /sudha
> >
> > -----Original Message-----
> > From: Parth Jagirdar [mailto:Parth.Jagirdar@citrix.com]
> > Sent: Wednesday, February 20, 2013 9:05 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: Re: [DISCUSS] Management Server Memory Requirements
> >
> > Marcus,
> >
> >
> > I attempted login into UI while running the log.
> >
> > [root@localhost management]# vmstat 1
> > procs -----------memory---------- ---swap-- -----io---- --system--
> > -----cpu-----
> > r b swpd free buff cache si so bi bo in cs us sy
> id
> > wa st
> > 0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0
> 89
> > 10 0
> > 0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
> > 100 0 0
> > 0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0
> 91
> > 8 0
> > 0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1
> 88
> > 11 0
> > 0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0
> 75
> > 24 0
> > 0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0
> 77
> > 22 0
> > 0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1
> 90
> > 8 0
> > 0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
> > 100 0 0
> > 0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0
> 94
> > 6 0
> > 0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0
> 91
> > 8 0
> > 0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
> > 100 0 0
> > 0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2
> 5
> > 84 0
> > 1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2
> 21
> > 69 0
> > 1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3
> 0
> > 69 0
> > 0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3
> 0
> > 78 0
> > 1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3
> 0
> > 82 0
> > 0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1
> 0
> > 93 0
> > 0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0
> 0
> > 99 0
> > 0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2
> 0
> > 97 0
> > 0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2
> 89
> > 9 0
> > 0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0
> 65
> > 33 0
> > 0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0
> 85
> > 14 0
> > 0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1
> 96
> > 3 0
> > 0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0
> 94
> > 6 0
> > 0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0
> 92
> > 7 0
> > 0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1
> 66
> > 26 0
> > 0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1
> 10
> > 79 0
> > 0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2
> 40
> > 33 0
> > 0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2
> 57
> > 18 0
> > 1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0
> 0
> > 8 0
> > 0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2
> 0
> > 40 0
> > 0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0
> 63
> > 36 0
> > 0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0
> 92
> > 6 0
> > 0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1
> 90
> > 9 0
> > 0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
> > 100 0 0
> > 0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0
> 87
> > 13 0
> > 0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1
> 66
> > 29 0
> > 0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
> > 100 0 0
> > 0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0
> 92
> > 7 0
> > 0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1
> 91
> > 8 0
> > 0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
> > 100 0 0
> > 0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0
> 92
> > 7 0
> > 0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0
> 93
> > 7 0
> > 0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
> > 100 0 0
> > 0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0
> 91
> > 8 0
> > 0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0
> 94
> > 6 0
> > 0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0
> 76
> > 24 0
> >
> >
> > ...Parth
> >
> >
> > On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
> >
> >>Well, it doesn't seem to be actively swapping at this point, but I
> >>think it's got active memory swapped out and being used as
> >>occasionally wait% goes up significantly. At any rate this system is
> >>severely memory limited.
> >>
> >>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
> >><Pa...@citrix.com> wrote:
> >>> Marcus,
> >>>
> >>> vmstat 1 output
> >>>
> >>>
> >>> [root@localhost management]# vmstat 1 procs
> >>>-----------memory---------- ---swap-- -----io---- --system--
> >>> -----cpu-----
> >>> r b swpd free buff cache si so bi bo in cs us sy
> >>>id
> >>> wa st
> >>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
> >>>89
> >>> 10 0
> >>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
> >>>88
> >>> 12 0
> >>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
> >>>99
> >>> 0 0
> >>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
> >>>88
> >>> 10 0
> >>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
> >>>88
> >>> 12 0
> >>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
> >>> 100 0 0
> >>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
> >>>91
> >>> 7 0
> >>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
> >>>88
> >>> 11 0
> >>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
> >>> 100 0 0
> >>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
> >>>91
> >>> 7 0
> >>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
> >>>80
> >>> 19 0
> >>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
> >>>74
> >>> 25 0
> >>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
> >>>93
> >>> 6 0
> >>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
> >>>90
> >>> 9 0
> >>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
> >>> 100 0 0
> >>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
> >>>85
> >>> 15 0
> >>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
> >>>93
> >>> 7 0
> >>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
> >>> 100 0 0
> >>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
> >>>94
> >>> 5 0
> >>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
> >>>93
> >>> 7 0
> >>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
> >>> 100 0 0
> >>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
> >>>68
> >>> 30 0
> >>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
> >>>92
> >>> 8 0
> >>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
> >>> 100 0 0
> >>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
> >>>83
> >>> 16 0
> >>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
> >>>69
> >>> 31 0
> >>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
> >>>99
> >>> 0 0
> >>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
> >>>87
> >>> 13 0
> >>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
> >>>90
> >>> 10 0
> >>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
> >>>98
> >>> 0 0
> >>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
> >>>67
> >>> 32 0
> >>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
> >>>67
> >>> 33 0
> >>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
> >>>99
> >>> 0 0
> >>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
> >>>90
> >>> 7 0
> >>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
> >>>93
> >>> 7 0
> >>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
> >>> 100 0 0
> >>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
> >>>92
> >>> 7 0
> >>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
> >>>89
> >>> 11 0
> >>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
> >>> 100 0 0
> >>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
> >>>90
> >>> 9 0
> >>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
> >>>91
> >>> 8 0
> >>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
> >>>87
> >>> 13 0
> >>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
> >>>80
> >>> 19 0
> >>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
> >>>92
> >>> 7 0
> >>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
> >>> 100 0 0
> >>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
> >>>87
> >>> 13 0
> >>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
> >>>91
> >>> 7 0
> >>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
> >>> 100 0 0
> >>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
> >>>90
> >>> 9 0
> >>> procs -----------memory---------- ---swap-- -----io---- --system--
> >>> -----cpu-----
> >>> r b swpd free buff cache si so bi bo in cs us sy
> >>>id
> >>> wa st
> >>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
> >>>92
> >>> 7 0
> >>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
> >>> 100 0 0
> >>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
> >>>91
> >>> 8 0
> >>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
> >>>69
> >>> 30 0
> >>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
> >>> 100 0 0
> >>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
> >>>86
> >>> 12 0
> >>>
> >>> ...Parth
> >>>
> >>>
> >>>
> >>>
> >>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com>
> wrote:
> >>>
> >>>>JAVA_OPTS="-Djava.awt.headless=true
> >>>>-Dcom.sun.management.jmxremote.port=45219
> >>>>-Dcom.sun.management.jmxremote.authenticate=false
> >>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
> >>>>-XX:+HeapDumpOnOutOfMemoryError
> >>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
> >>>>
> >>>>Which did not help.
> >>>>
> >>>>--------------
> >>>>
> >>>>[root@localhost management]# cat /proc/meminfo
> >>>>MemTotal: 1016656 kB
> >>>>MemFree: 68400 kB
> >>>>Buffers: 9108 kB
> >>>>Cached: 20984 kB
> >>>>SwapCached: 17492 kB
> >>>>Active: 424152 kB
> >>>>Inactive: 433152 kB
> >>>>Active(anon): 409812 kB
> >>>>Inactive(anon): 417412 kB
> >>>>Active(file): 14340 kB
> >>>>Inactive(file): 15740 kB
> >>>>Unevictable: 0 kB
> >>>>Mlocked: 0 kB
> >>>>SwapTotal: 2031608 kB
> >>>>SwapFree: 1840900 kB
> >>>>Dirty: 80 kB
> >>>>Writeback: 0 kB
> >>>>AnonPages: 815460 kB
> >>>>Mapped: 11408 kB
> >>>>Shmem: 4 kB
> >>>>Slab: 60120 kB
> >>>>SReclaimable: 10368 kB
> >>>>SUnreclaim: 49752 kB
> >>>>KernelStack: 5216 kB
> >>>>PageTables: 6800 kB
> >>>>NFS_Unstable: 0 kB
> >>>>Bounce: 0 kB
> >>>>WritebackTmp: 0 kB
> >>>>CommitLimit: 2539936 kB
> >>>>Committed_AS: 1596896 kB
> >>>>VmallocTotal: 34359738367 kB
> >>>>VmallocUsed: 7724 kB
> >>>>VmallocChunk: 34359718200 kB
> >>>>HardwareCorrupted: 0 kB
> >>>>AnonHugePages: 503808 kB
> >>>>HugePages_Total: 0
> >>>>HugePages_Free: 0
> >>>>HugePages_Rsvd: 0
> >>>>HugePages_Surp: 0
> >>>>Hugepagesize: 2048 kB
> >>>>DirectMap4k: 6144 kB
> >>>>DirectMap2M: 1038336 kB
> >>>>[root@localhost management]#
> >>>>-----------------------------
> >>>>
> >>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >>>>
> >>>>
> >>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
> >>>>
> >>>>
> >>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
> >>>>
> >>>>
> >>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
> >>>>
> >>>>>Hi,
> >>>>>
> >>>>>Cloudstack Java process statistics when it stops responding are
> >>>>>given below:
> >>>>>
> >>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06,
> >>>>>0.05,
> >>>>>0.02
> >>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
> >>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi,
> >>>>>0.0%si, 0.0%st
> >>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
> >>>>>buffers
> >>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k
> cached
> >>>>>
> >>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
> >>>>>
> >>>>>Thanks,
> >>>>>Sailaja.M
> >>>>>
> >>>>>-----Original Message-----
> >>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> >>>>>Sent: Thursday, February 21, 2013 9:35 AM
> >>>>>To: cloudstack-dev@incubator.apache.org
> >>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
> >>>>>
> >>>>>Yes, these are great data points, but so far nobody has responded
> >>>>>on that ticket with the information required to know if the
> >>>>>slowness is related to memory settings or swapping. That was just a
> >>>>>hunch on my part from being a system admin.
> >>>>>
> >>>>>How much memory do these systems have that experience issues? What
> >>>>>does /proc/meminfo say during the issues? Does adjusting the
> >>>>>tomcat6.conf memory settings make a difference (see ticket
> >>>>>comments)? How much memory do the java processes list as resident
> >>>>>in top?
> >>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar"
> >>>>><Pa...@citrix.com>
> >>>>>wrote:
> >>>>>
> >>>>>> +1 Performance degradation is dramatic and I too have observed
> >>>>>> +this
> >>>>>>issue.
> >>>>>>
> >>>>>> I have logged my comments into 1339.
> >>>>>>
> >>>>>>
> >>>>>> ...Parth
> >>>>>>
> >>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
> >>>>>> <sr...@citrix.com> wrote:
> >>>>>>
> >>>>>> >To add to what Marcus mentioned, Regarding bug CLOUDSTACK-1339 :
> >>>>>> >I have observed this issue within
> >>>>>> >5-10 min of starting management server and there has been a lot
> >>>>>> >of API requests through automated tests. It is observed that
> >>>>>> >Management server not only slows down but also goes down after a
> while.
> >>>>>> >
> >>>>>> >~Talluri
> >>>>>> >
> >>>>>> >-----Original Message-----
> >>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> >>>>>> >Sent: Thursday, February 21, 2013 7:22
> >>>>>> >To: cloudstack-dev@incubator.apache.org
> >>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
> >>>>>> >
> >>>>>> >When Javelin was merged, there was an email sent out stating
> >>>>>> >that devs should set their MAVEN_OPTS to use 2g of heap, and
> >>>>>> >512M of permanent memory. Subsequently, there have also been
> >>>>>> >several
> >>>>>>e-mails
> >>>>>> >and issues where devs have echoed this recommendation, and
> >>>>>>presumably
> >>>>>> >it fixed issues. I've seen the MS run out of memory myself and
> >>>>>> >applied those recommendations.
> >>>>>> >
> >>>>>> >Is this what we want to provide in the tomcat config for a
> >>>>>> >package based install as well? It's effectively saying that the
> >>>>>> >minimum requirements for the management server are something
> >>>>>> >like
> >>>>>> >3 or 4 GB (to be safe for other running tasks) of RAM, right?
> >>>>>> >
> >>>>>> >There is currently a bug filed that may or may not have to do
> >>>>>> >with this, CLOUDSTACK-1339. Users report mgmt server slowness,
> >>>>>> >going unresponsive for minutes at a time, but the logs seem to
> >>>>>> >show business as usual. User reports that java is taking 75% of
> >>>>>> >RAM, depending on what else is going on they may be swapping.
> >>>>>> >Settings in the code for an install are currently at 2g/512M,
> >>>>>> >I've been running this on a 4GB server for awhile now, java is
> >>>>>> >at 900M, but I haven't been pounding it with requests or anything.
> >>>>>> >
> >>>>>> >This bug might not have anything to do with the memory settings,
> >>>>>> >but I figured it would be good to nail down what our minimum
> >>>>>>requirements
> >>>>>> >are for 4.1
> >>>>>>
> >>>>>>
> >>>>
> >>>
> >
>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Sudha Ponnaganti <su...@citrix.com>.
I just had a call with some of the QA team members, and starting yesterday everyone seems to be running into this issue. I hadn't heard anyone complaining about it earlier, but only a little testing had happened before then.
-----Original Message-----
From: Marcus Sorensen [mailto:shadowsor@gmail.com]
Sent: Wednesday, February 20, 2013 10:05 PM
To: cloudstack-dev@incubator.apache.org
Subject: Re: [DISCUSS] Management Server Memory Requirements
We began seeing this at the point of the 4.1 cut. Our xen devclouds stopped working until we increased the dom0 RAM to 1.5GB. Nobody else was complaining, and most people don't run the mgmt server in devcloud, so I was just waiting to see where the conversation about memory went on the MAVEN_OPTS e-mail thread.
On Wed, Feb 20, 2013 at 10:26 PM, Sudha Ponnaganti <su...@citrix.com> wrote:
> Parth / Sailaja,
>
> Can you update ticket with data points and see if this can be assigned to Alex to start investigation with Javelin merge unless this can be associated with a specific check-in.
>
> Talluri - would you be able to narrow down to the check-in or build that we started to see this??
>
> Thanks
> /sudha
>
> -----Original Message-----
> From: Parth Jagirdar [mailto:Parth.Jagirdar@citrix.com]
> Sent: Wednesday, February 20, 2013 9:05 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Management Server Memory Requirements
>
> Marcus,
>
>
> I attempted login into UI while running the log.
>
> [root@localhost management]# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system--
> -----cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id
> wa st
> 0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0 89
> 10 0
> 0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
> 100 0 0
> 0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0 91
> 8 0
> 0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1 88
> 11 0
> 0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0 75
> 24 0
> 0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0 77
> 22 0
> 0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1 90
> 8 0
> 0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
> 100 0 0
> 0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0 94
> 6 0
> 0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0 91
> 8 0
> 0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
> 100 0 0
> 0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2 5
> 84 0
> 1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2 21
> 69 0
> 1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3 0
> 69 0
> 0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3 0
> 78 0
> 1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3 0
> 82 0
> 0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1 0
> 93 0
> 0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0 0
> 99 0
> 0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2 0
> 97 0
> 0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2 89
> 9 0
> 0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0 65
> 33 0
> 0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0 85
> 14 0
> 0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1 96
> 3 0
> 0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0 94
> 6 0
> 0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0 92
> 7 0
> 0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1 66
> 26 0
> 0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1 10
> 79 0
> 0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2 40
> 33 0
> 0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2 57
> 18 0
> 1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0 0
> 8 0
> 0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2 0
> 40 0
> 0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0 63
> 36 0
> 0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0 92
> 6 0
> 0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1 90
> 9 0
> 0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
> 100 0 0
> 0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0 87
> 13 0
> 0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1 66
> 29 0
> 0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
> 100 0 0
> 0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0 92
> 7 0
> 0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1 91
> 8 0
> 0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
> 100 0 0
> 0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0 92
> 7 0
> 0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0 93
> 7 0
> 0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
> 100 0 0
> 0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0 91
> 8 0
> 0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0 94
> 6 0
> 0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0 76
> 24 0
>
>
> ...Parth
>
>
> On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>
>>Well, it doesn't seem to be actively swapping at this point, but I
>>think it's got active memory swapped out and being used as
>>occasionally wait% goes up significantly. At any rate this system is
>>severely memory limited.
>>
>>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
>><Pa...@citrix.com> wrote:
>>> Marcus,
>>>
>>> vmstat 1 output
>>>
>>>
>>> [root@localhost management]# vmstat 1 procs
>>>-----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
>>>89
>>> 10 0
>>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
>>>88
>>> 12 0
>>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
>>>99
>>> 0 0
>>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
>>>88
>>> 10 0
>>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
>>>88
>>> 12 0
>>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
>>> 100 0 0
>>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
>>>88
>>> 11 0
>>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
>>> 100 0 0
>>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
>>>80
>>> 19 0
>>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
>>>74
>>> 25 0
>>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
>>>93
>>> 6 0
>>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
>>>90
>>> 9 0
>>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
>>> 100 0 0
>>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
>>>85
>>> 15 0
>>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
>>> 100 0 0
>>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
>>>94
>>> 5 0
>>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
>>> 100 0 0
>>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
>>>68
>>> 30 0
>>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
>>>92
>>> 8 0
>>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
>>> 100 0 0
>>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
>>>83
>>> 16 0
>>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
>>>69
>>> 31 0
>>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
>>>90
>>> 10 0
>>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
>>>98
>>> 0 0
>>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
>>>67
>>> 32 0
>>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
>>>67
>>> 33 0
>>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
>>>90
>>> 7 0
>>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
>>>93
>>> 7 0
>>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
>>>89
>>> 11 0
>>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
>>>90
>>> 9 0
>>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
>>>91
>>> 8 0
>>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
>>>80
>>> 19 0
>>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
>>> 100 0 0
>>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
>>>91
>>> 7 0
>>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
>>> 100 0 0
>>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
>>>90
>>> 9 0
>>> procs -----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
>>>91
>>> 8 0
>>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
>>>69
>>> 30 0
>>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
>>>86
>>> 12 0
>>>
>>> ...Parth
>>>
>>>
>>>
>>>
>>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>>>
>>>>JAVA_OPTS="-Djava.awt.headless=true
>>>>-Dcom.sun.management.jmxremote.port=45219
>>>>-Dcom.sun.management.jmxremote.authenticate=false
>>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>>>-XX:+HeapDumpOnOutOfMemoryError
>>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>>>
>>>>Which did not help.
>>>>
>>>>--------------
>>>>
>>>>[root@localhost management]# cat /proc/meminfo
>>>>MemTotal: 1016656 kB
>>>>MemFree: 68400 kB
>>>>Buffers: 9108 kB
>>>>Cached: 20984 kB
>>>>SwapCached: 17492 kB
>>>>Active: 424152 kB
>>>>Inactive: 433152 kB
>>>>Active(anon): 409812 kB
>>>>Inactive(anon): 417412 kB
>>>>Active(file): 14340 kB
>>>>Inactive(file): 15740 kB
>>>>Unevictable: 0 kB
>>>>Mlocked: 0 kB
>>>>SwapTotal: 2031608 kB
>>>>SwapFree: 1840900 kB
>>>>Dirty: 80 kB
>>>>Writeback: 0 kB
>>>>AnonPages: 815460 kB
>>>>Mapped: 11408 kB
>>>>Shmem: 4 kB
>>>>Slab: 60120 kB
>>>>SReclaimable: 10368 kB
>>>>SUnreclaim: 49752 kB
>>>>KernelStack: 5216 kB
>>>>PageTables: 6800 kB
>>>>NFS_Unstable: 0 kB
>>>>Bounce: 0 kB
>>>>WritebackTmp: 0 kB
>>>>CommitLimit: 2539936 kB
>>>>Committed_AS: 1596896 kB
>>>>VmallocTotal: 34359738367 kB
>>>>VmallocUsed: 7724 kB
>>>>VmallocChunk: 34359718200 kB
>>>>HardwareCorrupted: 0 kB
>>>>AnonHugePages: 503808 kB
>>>>HugePages_Total: 0
>>>>HugePages_Free: 0
>>>>HugePages_Rsvd: 0
>>>>HugePages_Surp: 0
>>>>Hugepagesize: 2048 kB
>>>>DirectMap4k: 6144 kB
>>>>DirectMap2M: 1038336 kB
>>>>[root@localhost management]#
>>>>-----------------------------
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>
>>>>
>>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>>>
>>>>
>>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>>>
>>>>
>>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>>>
>>>>>Hi,
>>>>>
>>>>>Cloudstack Java process statistics when it stops responding are given below:
>>>>>
>>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06,
>>>>>0.05,
>>>>>0.02
>>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi,
>>>>>0.0%si, 0.0%st
>>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
>>>>>buffers
>>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>>>
>>>>>Thanks,
>>>>>Sailaja.M
>>>>>
>>>>>-----Original Message-----
>>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>>>To: cloudstack-dev@incubator.apache.org
>>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>>>
>>>>>Yes, these are great data points, but so far nobody has responded
>>>>>on that ticket with the information required to know if the
>>>>>slowness is related to memory settings or swapping. That was just a
>>>>>hunch on my part from being a system admin.
>>>>>
>>>>>How much memory do these systems have that experience issues? What
>>>>>does /proc/meminfo say during the issues? Does adjusting the
>>>>>tomcat6.conf memory settings make a difference (see ticket
>>>>>comments)? How much memory do the java processes list as resident
>>>>>in top?
>>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar"
>>>>><Pa...@citrix.com>
>>>>>wrote:
>>>>>
>>>>>> +1 Performance degradation is dramatic and I too have observed
>>>>>> +this
>>>>>>issue.
>>>>>>
>>>>>> I have logged my comments into 1339.
>>>>>>
>>>>>>
>>>>>> ...Parth
>>>>>>
>>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>>>> <sr...@citrix.com> wrote:
>>>>>>
>>>>>> >To add to what Marcus mentioned, Regarding bug CLOUDSTACK-1339 :
>>>>>> >I have observed this issue within
>>>>>> >5-10 min of starting management server and there has been a lot
>>>>>> >of API requests through automated tests. It is observed that
>>>>>> >Management server not only slows down but also goes down after a while.
>>>>>> >
>>>>>> >~Talluri
>>>>>> >
>>>>>> >-----Original Message-----
>>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>> >Sent: Thursday, February 21, 2013 7:22
>>>>>> >To: cloudstack-dev@incubator.apache.org
>>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>>>> >
>>>>>> >When Javelin was merged, there was an email sent out stating
>>>>>> >that devs should set their MAVEN_OPTS to use 2g of heap, and
>>>>>> >512M of permanent memory. Subsequently, there have also been
>>>>>> >several
>>>>>>e-mails
>>>>>> >and issues where devs have echoed this recommendation, and
>>>>>>presumably
>>>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>>>> >applied those recommendations.
>>>>>> >
>>>>>> >Is this what we want to provide in the tomcat config for a
>>>>>> >package based install as well? It's effectively saying that the
>>>>>> >minimum requirements for the management server are something
>>>>>> >like
>>>>>> >3 or 4 GB (to be safe for other running tasks) of RAM, right?
>>>>>> >
>>>>>> >There is currently a bug filed that may or may not have to do
>>>>>> >with this, CLOUDSTACK-1339. Users report mgmt server slowness,
>>>>>> >going unresponsive for minutes at a time, but the logs seem to
>>>>>> >show business as usual. User reports that java is taking 75% of
>>>>>> >RAM, depending on what else is going on they may be swapping.
>>>>>> >Settings in the code for an install are currently at 2g/512M,
>>>>>> >I've been running this on a 4GB server for awhile now, java is
>>>>>> >at 900M, but I haven't been pounding it with requests or anything.
>>>>>> >
>>>>>> >This bug might not have anything to do with the memory settings,
>>>>>> >but I figured it would be good to nail down what our minimum
>>>>>>requirements
>>>>>> >are for 4.1
>>>>>>
>>>>>>
>>>>
>>>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Rohit Yadav <bh...@apache.org>.
On Fri, Feb 22, 2013 at 4:14 AM, Kelven Yang <ke...@citrix.com> wrote:
> Rohit,
>
> I don't think the memory issue is related to auto-scanning, I'm
> investigating the heap dump now and will update the thread once I've
> nailed down the root cause.
Hi Kelven,
You're correct; in my case there was not much difference (a few MBs)
between autoscanning + annotations and no autoscanning + no annotations.
The only flexibility the XML way offers is that we can enforce AOP etc. on
it and avoid making Spring a build-time dependency. If you think we should
go the XML way, let me know; I already have the fix for that.
The problem, as Prasanna hints, is likely in applicationContext.xml, where
we're adding proxies for all classes and methods (captureAnyMethod); see:
<aop:config proxy-target-class="true">
<aop:aspect id="dbContextBuilder" ref="transactionContextBuilder">
<aop:pointcut id="captureAnyMethod"
expression="execution(* *(..))"
/>
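For comparison, a narrower pointcut would advise only the methods that
actually need a transaction context instead of every method of every
proxied bean. The sketch below is hypothetical, not the actual fix; the
pointcut id and the package of the @DB annotation are assumptions:
<aop:config proxy-target-class="true">
<aop:aspect id="dbContextBuilder" ref="transactionContextBuilder">
<aop:pointcut id="captureDbAnnotatedMethod"
expression="@annotation(com.cloud.utils.db.DB)"
/>
</aop:aspect>
</aop:config>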
Thanks and regards.
>
> Kelven
>
> On 2/21/13 7:20 AM, "Rohit Yadav" <bh...@apache.org> wrote:
>
>>I've a fix that works so far so good, let's hear from Kelven.
>>
>>Regards.
>>
>>On Thu, Feb 21, 2013 at 7:50 PM, Chip Childers
>><ch...@sungard.com> wrote:
>>> On Thu, Feb 21, 2013 at 02:03:25PM +0530, Prasanna Santhanam wrote:
>>>> On Thu, Feb 21, 2013 at 12:57:07PM +0530, Prasanna Santhanam wrote:
>>>> > On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
>>>> > > We began seeing this at the point of the 4.1 cut. Our xen devclouds
>>>> > > stopped working until we increased the dom0 RAM to 1.5GB. Nobody
>>>>else
>>>> > > was complaining, and most people don't run the mgmt server in
>>>> > > devcloud, so I was just waiting to see where the conversation about
>>>> > > memory went on the MAVEN_OPTS e-mail thread.
>>>> >
>>>> > actually the devcloud-ci job broke soon as javelin came to master and
>>>> > the 2g recommendation wasn't useful for a machine that ran several
>>>> > devcloud workers. There was no hope for that machine that has since
>>>> > been recommissioned, the job deleted and now repurposed to build
>>>> > systemvms instead. If the memory overhead is fixed I can bring the ci
>>>> > back up.
>>>> >
>>>>
>>>> Alex posted CLOUDSTACK-1276 and quoted 4.1 would be difficult to ship
>>>> with autoscanning turned on. A sudden increase in memory requirements
>>>> post upgrade to 4.1 is going to be a surprise in a production
>>>> deployment (which should have enough memory) but I'd rather not depend
>>>> on that assumption.
>>>>
>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1276
>>>>
>>>> --
>>>> Prasanna.,
>>>>
>>>
>>> Good call to bump that to blocker.
>>>
>>> Kelven - adding you to the CC for this thread as a heads up, in case you
>>> didn't notice this thread or CLOUDSTACK-1276 being bumped to blocker.
>>>
>>> -chip
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Prasanna Santhanam <ts...@apache.org>.
On Fri, Feb 22, 2013 at 04:14:12AM +0530, Kelven Yang wrote:
> Rohit,
>
> I don't think the memory issue is related to auto-scanning, I'm
> investigating the heap dump now and will update the thread once I've
> nailed down the root cause.
>
> Kelven
True - but the AOP pointcut expression for the dbContext seems too wide and
is probably introducing a lot of transaction contexts. I'm no Spring
expert, but is the target of that expression a method or a class? Does it
apply to all the DAOs? Or to every @DB-annotated context?
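For reference, a minimal sketch of the difference (not from the actual
config; the DAO package pattern is an assumption): execution() pointcuts
match method executions, while within() scopes them by declaring type, so
the current expression catches every method of every Spring-managed bean
rather than a particular class.
<!-- current: any return type, any class, any arguments -->
<aop:pointcut id="captureAnyMethod" expression="execution(* *(..))" />
<!-- hypothetical: same method match, restricted to DAO classes -->
<aop:pointcut id="captureDaoMethod"
expression="execution(* *(..)) and within(com.cloud..*Dao*)"
/>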
--
Prasanna.,
Re: [DISCUSS] Management Server Memory Requirements
Posted by Kelven Yang <ke...@citrix.com>.
Rohit,
I don't think the memory issue is related to auto-scanning; I'm
investigating the heap dump now and will update the thread once I've
nailed down the root cause.
Kelven
On 2/21/13 7:20 AM, "Rohit Yadav" <bh...@apache.org> wrote:
>I've a fix that works so far so good, let's hear from Kelven.
>
>Regards.
>
>On Thu, Feb 21, 2013 at 7:50 PM, Chip Childers
><ch...@sungard.com> wrote:
>> On Thu, Feb 21, 2013 at 02:03:25PM +0530, Prasanna Santhanam wrote:
>>> On Thu, Feb 21, 2013 at 12:57:07PM +0530, Prasanna Santhanam wrote:
>>> > On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
>>> > > We began seeing this at the point of the 4.1 cut. Our xen devclouds
>>> > > stopped working until we increased the dom0 RAM to 1.5GB. Nobody
>>>else
>>> > > was complaining, and most people don't run the mgmt server in
>>> > > devcloud, so I was just waiting to see where the conversation about
>>> > > memory went on the MAVEN_OPTS e-mail thread.
>>> >
>>> > actually the devcloud-ci job broke soon as javelin came to master and
>>> > the 2g recommendation wasn't useful for a machine that ran several
>>> > devcloud workers. There was no hope for that machine that has since
>>> > been recommissioned, the job deleted and now repurposed to build
>>> > systemvms instead. If the memory overhead is fixed I can bring the ci
>>> > back up.
>>> >
>>>
>>> Alex posted CLOUDSTACK-1276 and quoted 4.1 would be difficult to ship
>>> with autoscanning turned on. A sudden increase in memory requirements
>>> post upgrade to 4.1 is going to be a surprise in a production
>>> deployment (which should have enough memory) but I'd rather not depend
>>> on that assumption.
>>>
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1276
>>>
>>> --
>>> Prasanna.,
>>>
>>
>> Good call to bump that to blocker.
>>
>> Kelven - adding you to the CC for this thread as a heads up, in case you
>> didn't notice this thread or CLOUDSTACK-1276 being bumped to blocker.
>>
>> -chip
Re: [DISCUSS] Management Server Memory Requirements
Posted by Rohit Yadav <bh...@apache.org>.
I have a fix that works; so far so good. Let's hear from Kelven.
Regards.
On Thu, Feb 21, 2013 at 7:50 PM, Chip Childers
<ch...@sungard.com> wrote:
> On Thu, Feb 21, 2013 at 02:03:25PM +0530, Prasanna Santhanam wrote:
>> On Thu, Feb 21, 2013 at 12:57:07PM +0530, Prasanna Santhanam wrote:
>> > On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
>> > > We began seeing this at the point of the 4.1 cut. Our xen devclouds
>> > > stopped working until we increased the dom0 RAM to 1.5GB. Nobody else
>> > > was complaining, and most people don't run the mgmt server in
>> > > devcloud, so I was just waiting to see where the conversation about
>> > > memory went on the MAVEN_OPTS e-mail thread.
>> >
>> > actually the devcloud-ci job broke soon as javelin came to master and
>> > the 2g recommendation wasn't useful for a machine that ran several
>> > devcloud workers. There was no hope for that machine that has since
>> > been recommissioned, the job deleted and now repurposed to build
>> > systemvms instead. If the memory overhead is fixed I can bring the ci
>> > back up.
>> >
>>
>> Alex posted CLOUDSTACK-1276 and quoted 4.1 would be difficult to ship
>> with autoscanning turned on. A sudden increase in memory requirements
>> post upgrade to 4.1 is going to be a surprise in a production
>> deployment (which should have enough memory) but I'd rather not depend
>> on that assumption.
>>
>> https://issues.apache.org/jira/browse/CLOUDSTACK-1276
>>
>> --
>> Prasanna.,
>>
>
> Good call to bump that to blocker.
>
> Kelven - adding you to the CC for this thread as a heads up, in case you
> didn't notice this thread or CLOUDSTACK-1276 being bumped to blocker.
>
> -chip
Re: [DISCUSS] Management Server Memory Requirements
Posted by Chip Childers <ch...@sungard.com>.
On Thu, Feb 21, 2013 at 02:03:25PM +0530, Prasanna Santhanam wrote:
> On Thu, Feb 21, 2013 at 12:57:07PM +0530, Prasanna Santhanam wrote:
> > On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
> > > We began seeing this at the point of the 4.1 cut. Our xen devclouds
> > > stopped working until we increased the dom0 RAM to 1.5GB. Nobody else
> > > was complaining, and most people don't run the mgmt server in
> > > devcloud, so I was just waiting to see where the conversation about
> > > memory went on the MAVEN_OPTS e-mail thread.
> >
> > actually the devcloud-ci job broke soon as javelin came to master and
> > the 2g recommendation wasn't useful for a machine that ran several
> > devcloud workers. There was no hope for that machine that has since
> > been recommissioned, the job deleted and now repurposed to build
> > systemvms instead. If the memory overhead is fixed I can bring the ci
> > back up.
> >
>
> Alex posted CLOUDSTACK-1276 and quoted 4.1 would be difficult to ship
> with autoscanning turned on. A sudden increase in memory requirements
> post upgrade to 4.1 is going to be a surprise in a production
> deployment (which should have enough memory) but I'd rather not depend
> on that assumption.
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-1276
>
> --
> Prasanna.,
>
Good call to bump that to blocker.
Kelven - adding you to the CC for this thread as a heads up, in case you
didn't notice this thread or CLOUDSTACK-1276 being bumped to blocker.
-chip
Re: [DISCUSS] Management Server Memory Requirements
Posted by Prasanna Santhanam <ts...@apache.org>.
On Thu, Feb 21, 2013 at 12:57:07PM +0530, Prasanna Santhanam wrote:
> On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
> > We began seeing this at the point of the 4.1 cut. Our xen devclouds
> > stopped working until we increased the dom0 RAM to 1.5GB. Nobody else
> > was complaining, and most people don't run the mgmt server in
> > devcloud, so I was just waiting to see where the conversation about
> > memory went on the MAVEN_OPTS e-mail thread.
>
> actually the devcloud-ci job broke soon as javelin came to master and
> the 2g recommendation wasn't useful for a machine that ran several
> devcloud workers. There was no hope for that machine that has since
> been recommissioned, the job deleted and now repurposed to build
> systemvms instead. If the memory overhead is fixed I can bring the ci
> back up.
>
Alex posted CLOUDSTACK-1276 and noted that 4.1 would be difficult to ship
with autoscanning turned on. A sudden increase in memory requirements
after upgrading to 4.1 is going to be a surprise in a production
deployment (which should have enough memory), but I'd rather not depend
on that assumption.
https://issues.apache.org/jira/browse/CLOUDSTACK-1276
--
Prasanna.,
Re: [DISCUSS] Management Server Memory Requirements
Posted by Prasanna Santhanam <ts...@apache.org>.
On Thu, Feb 21, 2013 at 11:35:15AM +0530, Marcus Sorensen wrote:
> We began seeing this at the point of the 4.1 cut. Our xen devclouds
> stopped working until we increased the dom0 RAM to 1.5GB. Nobody else
> was complaining, and most people don't run the mgmt server in
> devcloud, so I was just waiting to see where the conversation about
> memory went on the MAVEN_OPTS e-mail thread.
Actually, the devcloud-ci job broke as soon as Javelin came to master, and
the 2g recommendation wasn't useful for a machine that ran several
devcloud workers. There was no hope for that machine; it has since been
recommissioned, the job deleted, and the machine repurposed to build
systemvms instead. If the memory overhead is fixed I can bring the CI
back up.
--
Prasanna.,
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
We began seeing this at the point of the 4.1 cut. Our xen devclouds
stopped working until we increased the dom0 RAM to 1.5GB. Nobody else
was complaining, and most people don't run the mgmt server in
devcloud, so I was just waiting to see where the conversation about
memory went on the MAVEN_OPTS e-mail thread.
On Wed, Feb 20, 2013 at 10:26 PM, Sudha Ponnaganti
<su...@citrix.com> wrote:
> Parth / Sailaja,
>
> Can you update ticket with data points and see if this can be assigned to Alex to start investigation with Javelin merge unless this can be associated with a specific check-in.
>
> Talluri - would you be able to narrow down to the check-in or build that we started to see this??
>
> Thanks
> /sudha
>
> -----Original Message-----
> From: Parth Jagirdar [mailto:Parth.Jagirdar@citrix.com]
> Sent: Wednesday, February 20, 2013 9:05 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Management Server Memory Requirements
>
> Marcus,
>
>
> I attempted login into UI while running the log.
>
> [root@localhost management]# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system--
> -----cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id
> wa st
> 0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0 89
> 10 0
> 0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
> 100 0 0
> 0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0 91
> 8 0
> 0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1 88
> 11 0
> 0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0 75
> 24 0
> 0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0 77
> 22 0
> 0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1 90
> 8 0
> 0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
> 100 0 0
> 0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0 94
> 6 0
> 0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0 91
> 8 0
> 0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
> 100 0 0
> 0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2 5
> 84 0
> 1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2 21
> 69 0
> 1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3 0
> 69 0
> 0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3 0
> 78 0
> 1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3 0
> 82 0
> 0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1 0
> 93 0
> 0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0 0
> 99 0
> 0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2 0
> 97 0
> 0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2 89
> 9 0
> 0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0 65
> 33 0
> 0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0 85
> 14 0
> 0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1 96
> 3 0
> 0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0 94
> 6 0
> 0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0 92
> 7 0
> 0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1 66
> 26 0
> 0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1 10
> 79 0
> 0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2 40
> 33 0
> 0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2 57
> 18 0
> 1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0 0
> 8 0
> 0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2 0
> 40 0
> 0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0 63
> 36 0
> 0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0 92
> 6 0
> 0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1 90
> 9 0
> 0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
> 100 0 0
> 0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0 87
> 13 0
> 0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1 66
> 29 0
> 0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
> 100 0 0
> 0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0 92
> 7 0
> 0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1 91
> 8 0
> 0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
> 100 0 0
> 0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0 92
> 7 0
> 0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0 93
> 7 0
> 0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
> 100 0 0
> 0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0 91
> 8 0
> 0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0 94
> 6 0
> 0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0 76
> 24 0
>
>
> ...Parth
>
>
> On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>
>>Well, it doesn't seem to be actively swapping at this point, but I
>>think it's got active memory swapped out and being used as occasionally
>>wait% goes up significantly. At any rate this system is severely memory
>>limited.
>>
>>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
>><Pa...@citrix.com> wrote:
>>> Marcus,
>>>
>>> vmstat 1 output
>>>
>>>
>>> [root@localhost management]# vmstat 1 procs
>>>-----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
>>>89
>>> 10 0
>>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
>>>88
>>> 12 0
>>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
>>>99
>>> 0 0
>>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
>>>88
>>> 10 0
>>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
>>>88
>>> 12 0
>>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
>>> 100 0 0
>>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
>>>88
>>> 11 0
>>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
>>> 100 0 0
>>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
>>>91
>>> 7 0
>>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
>>>80
>>> 19 0
>>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
>>>74
>>> 25 0
>>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
>>>93
>>> 6 0
>>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
>>>90
>>> 9 0
>>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
>>> 100 0 0
>>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
>>>85
>>> 15 0
>>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
>>> 100 0 0
>>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
>>>94
>>> 5 0
>>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
>>>93
>>> 7 0
>>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
>>> 100 0 0
>>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
>>>68
>>> 30 0
>>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
>>>92
>>> 8 0
>>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
>>> 100 0 0
>>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
>>>83
>>> 16 0
>>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
>>>69
>>> 31 0
>>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
>>>90
>>> 10 0
>>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
>>>98
>>> 0 0
>>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
>>>67
>>> 32 0
>>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
>>>67
>>> 33 0
>>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
>>>99
>>> 0 0
>>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
>>>90
>>> 7 0
>>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
>>>93
>>> 7 0
>>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
>>>89
>>> 11 0
>>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
>>> 100 0 0
>>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
>>>90
>>> 9 0
>>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
>>>91
>>> 8 0
>>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
>>>80
>>> 19 0
>>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
>>> 100 0 0
>>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
>>>87
>>> 13 0
>>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
>>>91
>>> 7 0
>>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
>>> 100 0 0
>>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
>>>90
>>> 9 0
>>> procs -----------memory---------- ---swap-- -----io---- --system--
>>> -----cpu-----
>>> r b swpd free buff cache si so bi bo in cs us sy
>>>id
>>> wa st
>>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
>>>92
>>> 7 0
>>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
>>>91
>>> 8 0
>>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
>>>69
>>> 30 0
>>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
>>> 100 0 0
>>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
>>>86
>>> 12 0
>>>
>>> ...Parth
>>>
>>>
>>>
>>>
>>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>>>
>>>>JAVA_OPTS="-Djava.awt.headless=true
>>>>-Dcom.sun.management.jmxremote.port=45219
>>>>-Dcom.sun.management.jmxremote.authenticate=false
>>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>>>-XX:+HeapDumpOnOutOfMemoryError
>>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>>>
>>>>Which did not help.
>>>>
>>>>--------------
>>>>
>>>>[root@localhost management]# cat /proc/meminfo
>>>>MemTotal: 1016656 kB
>>>>MemFree: 68400 kB
>>>>Buffers: 9108 kB
>>>>Cached: 20984 kB
>>>>SwapCached: 17492 kB
>>>>Active: 424152 kB
>>>>Inactive: 433152 kB
>>>>Active(anon): 409812 kB
>>>>Inactive(anon): 417412 kB
>>>>Active(file): 14340 kB
>>>>Inactive(file): 15740 kB
>>>>Unevictable: 0 kB
>>>>Mlocked: 0 kB
>>>>SwapTotal: 2031608 kB
>>>>SwapFree: 1840900 kB
>>>>Dirty: 80 kB
>>>>Writeback: 0 kB
>>>>AnonPages: 815460 kB
>>>>Mapped: 11408 kB
>>>>Shmem: 4 kB
>>>>Slab: 60120 kB
>>>>SReclaimable: 10368 kB
>>>>SUnreclaim: 49752 kB
>>>>KernelStack: 5216 kB
>>>>PageTables: 6800 kB
>>>>NFS_Unstable: 0 kB
>>>>Bounce: 0 kB
>>>>WritebackTmp: 0 kB
>>>>CommitLimit: 2539936 kB
>>>>Committed_AS: 1596896 kB
>>>>VmallocTotal: 34359738367 kB
>>>>VmallocUsed: 7724 kB
>>>>VmallocChunk: 34359718200 kB
>>>>HardwareCorrupted: 0 kB
>>>>AnonHugePages: 503808 kB
>>>>HugePages_Total: 0
>>>>HugePages_Free: 0
>>>>HugePages_Rsvd: 0
>>>>HugePages_Surp: 0
>>>>Hugepagesize: 2048 kB
>>>>DirectMap4k: 6144 kB
>>>>DirectMap2M: 1038336 kB
>>>>[root@localhost management]#
>>>>-----------------------------
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>
>>>>
>>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>>>
>>>>
>>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>>>
>>>>
>>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>>>
>>>>>Hi,
>>>>>
>>>>>Cloudstack Java process statistics when it stops responding are given below:
>>>>>
>>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06,
>>>>>0.05,
>>>>>0.02
>>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi,
>>>>>0.0%si, 0.0%st
>>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
>>>>>buffers
>>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>>>
>>>>>Thanks,
>>>>>Sailaja.M
>>>>>
>>>>>-----Original Message-----
>>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>>>To: cloudstack-dev@incubator.apache.org
>>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>>>
>>>>>Yes, these are great data points, but so far nobody has responded on
>>>>>that ticket with the information required to know if the slowness is
>>>>>related to memory settings or swapping. That was just a hunch on my
>>>>>part from being a system admin.
>>>>>
>>>>>How much memory do these systems have that experience issues? What
>>>>>does /proc/meminfo say during the issues? Does adjusting the
>>>>>tomcat6.conf memory settings make a difference (see ticket
>>>>>comments)? How much memory do the java processes list as resident in
>>>>>top?
>>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar"
>>>>><Pa...@citrix.com>
>>>>>wrote:
>>>>>
>>>>>> +1 Performance degradation is dramatic and I too have observed
>>>>>> +this
>>>>>>issue.
>>>>>>
>>>>>> I have logged my comments into 1339.
>>>>>>
>>>>>>
>>>>>> ...Parth
>>>>>>
>>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>>>> <sr...@citrix.com> wrote:
>>>>>>
>>>>>> >To add to what Marcus mentioned,
>>>>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>>>>> >5-10 min of starting management server and there has been a lot
>>>>>> >of API requests through automated tests. It is observed that
>>>>>> >Management server not only slows down but also goes down after a while.
>>>>>> >
>>>>>> >~Talluri
>>>>>> >
>>>>>> >-----Original Message-----
>>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>>> >Sent: Thursday, February 21, 2013 7:22
>>>>>> >To: cloudstack-dev@incubator.apache.org
>>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>>>> >
>>>>>> >When Javelin was merged, there was an email sent out stating that
>>>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>>>> >permanent memory. Subsequently, there have also been several
>>>>>>e-mails
>>>>>> >and issues where devs have echoed this recommendation, and
>>>>>>presumably
>>>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>>>> >applied those recommendations.
>>>>>> >
>>>>>> >Is this what we want to provide in the tomcat config for a
>>>>>> >package based install as well? It's effectively saying that the
>>>>>> >minimum requirements for the management server are something like
>>>>>> >3 or 4 GB (to be safe for other running tasks) of RAM, right?
>>>>>> >
>>>>>> >There is currently a bug filed that may or may not have to do
>>>>>> >with this, CLOUDSTACK-1339. Users report mgmt server slowness,
>>>>>> >going unresponsive for minutes at a time, but the logs seem to
>>>>>> >show business as usual. User reports that java is taking 75% of
>>>>>> >RAM, depending on what else is going on they may be swapping.
>>>>>> >Settings in the code for an install are currently at 2g/512M,
>>>>>> >I've been running this on a 4GB server for awhile now, java is at
>>>>>> >900M, but I haven't been pounding it with requests or anything.
>>>>>> >
>>>>>> >This bug might not have anything to do with the memory settings,
>>>>>> >but I figured it would be good to nail down what our minimum
>>>>>>requirements
>>>>>> >are for 4.1
>>>>>>
>>>>>>
>>>>
>>>
>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Sudha Ponnaganti <su...@citrix.com>.
Parth / Sailaja,
Can you update the ticket with data points, and see whether this can be assigned to Alex to start investigating the Javelin merge, unless it can be associated with a specific check-in?
Talluri - would you be able to narrow this down to the check-in or build where we started seeing it?
Thanks
/sudha
-----Original Message-----
From: Parth Jagirdar [mailto:Parth.Jagirdar@citrix.com]
Sent: Wednesday, February 20, 2013 9:05 PM
To: cloudstack-dev@incubator.apache.org
Subject: Re: [DISCUSS] Management Server Memory Requirements
Marcus,
I attempted login into UI while running the log.
[root@localhost management]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system--
-----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id
wa st
0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0 89
10 0
0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
100 0 0
0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0 91
8 0
0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1 88
11 0
0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0 75
24 0
0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0 77
22 0
0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1 90
8 0
0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
100 0 0
0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0 94
6 0
0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0 91
8 0
0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
100 0 0
0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2 5
84 0
1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2 21
69 0
1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3 0
69 0
0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3 0
78 0
1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3 0
82 0
0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1 0
93 0
0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0 0
99 0
0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2 0
97 0
0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2 89
9 0
0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0 65
33 0
0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0 85
14 0
0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1 96
3 0
0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0 94
6 0
0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0 92
7 0
0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1 66
26 0
0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1 10
79 0
0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2 40
33 0
0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2 57
18 0
1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0 0
8 0
0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2 0
40 0
0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0 63
36 0
0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0 92
6 0
0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1 90
9 0
0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
100 0 0
0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0 87
13 0
0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1 66
29 0
0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
100 0 0
0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0 92
7 0
0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1 91
8 0
0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
100 0 0
0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0 92
7 0
0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0 93
7 0
0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
100 0 0
0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0 91
8 0
0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0 94
6 0
0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0 76
24 0
...Parth
On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>Well, it doesn't seem to be actively swapping at this point, but I
>think it's got active memory swapped out and being used as occasionally
>wait% goes up significantly. At any rate this system is severely memory
>limited.
>
>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
><Pa...@citrix.com> wrote:
>> Marcus,
>>
>> vmstat 1 output
>>
>>
>> [root@localhost management]# vmstat 1 procs
>>-----------memory---------- ---swap-- -----io---- --system--
>> -----cpu-----
>> r b swpd free buff cache si so bi bo in cs us sy
>>id
>> wa st
>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
>>89
>> 10 0
>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
>>88
>> 12 0
>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
>>99
>> 0 0
>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
>>88
>> 10 0
>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
>>88
>> 12 0
>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
>> 100 0 0
>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
>>91
>> 7 0
>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
>>88
>> 11 0
>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
>> 100 0 0
>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
>>91
>> 7 0
>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
>>80
>> 19 0
>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
>>74
>> 25 0
>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
>>93
>> 6 0
>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
>>90
>> 9 0
>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
>> 100 0 0
>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
>>85
>> 15 0
>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
>>93
>> 7 0
>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
>> 100 0 0
>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
>>94
>> 5 0
>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
>>93
>> 7 0
>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
>> 100 0 0
>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
>>68
>> 30 0
>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
>>92
>> 8 0
>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
>> 100 0 0
>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
>>83
>> 16 0
>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
>>69
>> 31 0
>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
>>99
>> 0 0
>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
>>87
>> 13 0
>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
>>90
>> 10 0
>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
>>98
>> 0 0
>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
>>67
>> 32 0
>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
>>67
>> 33 0
>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
>>99
>> 0 0
>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
>>90
>> 7 0
>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
>>93
>> 7 0
>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
>> 100 0 0
>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
>>92
>> 7 0
>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
>>89
>> 11 0
>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
>> 100 0 0
>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
>>90
>> 9 0
>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
>>91
>> 8 0
>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
>>87
>> 13 0
>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
>>80
>> 19 0
>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
>>92
>> 7 0
>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
>> 100 0 0
>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
>>87
>> 13 0
>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
>>91
>> 7 0
>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
>> 100 0 0
>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
>>90
>> 9 0
>> procs -----------memory---------- ---swap-- -----io---- --system--
>> -----cpu-----
>> r b swpd free buff cache si so bi bo in cs us sy
>>id
>> wa st
>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
>>92
>> 7 0
>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
>> 100 0 0
>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
>>91
>> 8 0
>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
>>69
>> 30 0
>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
>> 100 0 0
>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
>>86
>> 12 0
>>
>> ...Parth
>>
>>
>>
>>
>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>>
>>>JAVA_OPTS="-Djava.awt.headless=true
>>>-Dcom.sun.management.jmxremote.port=45219
>>>-Dcom.sun.management.jmxremote.authenticate=false
>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>>-XX:+HeapDumpOnOutOfMemoryError
>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>>
>>>Which did not help.
>>>
>>>--------------
>>>
>>>[root@localhost management]# cat /proc/meminfo
>>>MemTotal: 1016656 kB
>>>MemFree: 68400 kB
>>>Buffers: 9108 kB
>>>Cached: 20984 kB
>>>SwapCached: 17492 kB
>>>Active: 424152 kB
>>>Inactive: 433152 kB
>>>Active(anon): 409812 kB
>>>Inactive(anon): 417412 kB
>>>Active(file): 14340 kB
>>>Inactive(file): 15740 kB
>>>Unevictable: 0 kB
>>>Mlocked: 0 kB
>>>SwapTotal: 2031608 kB
>>>SwapFree: 1840900 kB
>>>Dirty: 80 kB
>>>Writeback: 0 kB
>>>AnonPages: 815460 kB
>>>Mapped: 11408 kB
>>>Shmem: 4 kB
>>>Slab: 60120 kB
>>>SReclaimable: 10368 kB
>>>SUnreclaim: 49752 kB
>>>KernelStack: 5216 kB
>>>PageTables: 6800 kB
>>>NFS_Unstable: 0 kB
>>>Bounce: 0 kB
>>>WritebackTmp: 0 kB
>>>CommitLimit: 2539936 kB
>>>Committed_AS: 1596896 kB
>>>VmallocTotal: 34359738367 kB
>>>VmallocUsed: 7724 kB
>>>VmallocChunk: 34359718200 kB
>>>HardwareCorrupted: 0 kB
>>>AnonHugePages: 503808 kB
>>>HugePages_Total: 0
>>>HugePages_Free: 0
>>>HugePages_Rsvd: 0
>>>HugePages_Surp: 0
>>>Hugepagesize: 2048 kB
>>>DirectMap4k: 6144 kB
>>>DirectMap2M: 1038336 kB
>>>[root@localhost management]#
>>>-----------------------------
>>>
>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>
>>>
>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>>
>>>
>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>>
>>>
>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>>
>>>
>>>
>>>
>>>
>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>>
>>>>Hi,
>>>>
>>>>Cloudstack Java process statistics when it stops responding are given below:
>>>>
>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06,
>>>>0.05,
>>>>0.02
>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi,
>>>>0.0%si, 0.0%st
>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
>>>>buffers
>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>>
>>>>Thanks,
>>>>Sailaja.M
>>>>
>>>>-----Original Message-----
>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>>To: cloudstack-dev@incubator.apache.org
>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>>
>>>>Yes, these are great data points, but so far nobody has responded on
>>>>that ticket with the information required to know if the slowness is
>>>>related to memory settings or swapping. That was just a hunch on my
>>>>part from being a system admin.
>>>>
>>>>How much memory do these systems have that experience issues? What
>>>>does /proc/meminfo say during the issues? Does adjusting the
>>>>tomcat6.conf memory settings make a difference (see ticket
>>>>comments)? How much memory do the java processes list as resident in
>>>>top?
>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar"
>>>><Pa...@citrix.com>
>>>>wrote:
>>>>
>>>>> +1 Performance degradation is dramatic and I too have observed
>>>>> +this
>>>>>issue.
>>>>>
>>>>> I have logged my comments into 1339.
>>>>>
>>>>>
>>>>> ...Parth
>>>>>
>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>>> <sr...@citrix.com> wrote:
>>>>>
>>>>> >To add to what Marcus mentioned,
>>>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>>>> >5-10 min of starting management server and there has been a lot
>>>>> >of API requests through automated tests. It is observed that
>>>>> >Management server not only slows down but also goes down after a while.
>>>>> >
>>>>> >~Talluri
>>>>> >
>>>>> >-----Original Message-----
>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>> >Sent: Thursday, February 21, 2013 7:22
>>>>> >To: cloudstack-dev@incubator.apache.org
>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>>> >
>>>>> >When Javelin was merged, there was an email sent out stating that
>>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>>> >permanent memory. Subsequently, there have also been several
>>>>>e-mails
>>>>> >and issues where devs have echoed this recommendation, and
>>>>>presumably
>>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>>> >applied those recommendations.
>>>>> >
>>>>> >Is this what we want to provide in the tomcat config for a
>>>>> >package based install as well? It's effectively saying that the
>>>>> >minimum requirements for the management server are something like
>>>>> >3 or 4 GB (to be safe for other running tasks) of RAM, right?
>>>>> >
>>>>> >There is currently a bug filed that may or may not have to do
>>>>> >with this, CLOUDSTACK-1339. Users report mgmt server slowness,
>>>>> >going unresponsive for minutes at a time, but the logs seem to
>>>>> >show business as usual. User reports that java is taking 75% of
>>>>> >RAM, depending on what else is going on they may be swapping.
>>>>> >Settings in the code for an install are currently at 2g/512M,
>>>>> >I've been running this on a 4GB server for awhile now, java is at
>>>>> >900M, but I haven't been pounding it with requests or anything.
>>>>> >
>>>>> >This bug might not have anything to do with the memory settings,
>>>>> >but I figured it would be good to nail down what our minimum
>>>>>requirements
>>>>> >are for 4.1
>>>>>
>>>>>
>>>
>>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Parth Jagirdar <Pa...@citrix.com>.
Marcus,
I attempted to log in to the UI while the vmstat capture below was running.
[root@localhost management]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system--
-----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id
wa st
0 0 191132 70904 10340 16192 36 28 40 91 9 1 0 0 89
10 0
0 0 191132 70904 10340 16204 0 0 0 0 46 75 0 0
100 0 0
0 0 191132 70904 10356 16204 0 0 0 36 72 221 1 0 91
8 0
0 0 191132 70904 10372 16188 0 0 0 44 73 130 0 1 88
11 0
0 0 191132 70780 10420 16208 0 0 4 276 83 191 1 0 75
24 0
0 0 191132 70780 10452 16192 0 0 0 120 106 309 1 0 77
22 0
0 0 191132 70780 10468 16200 0 0 0 40 91 183 1 1 90
8 0
0 0 191132 70780 10468 16216 0 0 0 0 47 128 0 0
100 0 0
0 0 191132 70780 10484 16216 0 0 0 36 70 136 0 0 94
6 0
0 0 191132 70656 10500 16200 0 0 0 40 66 116 1 0 91
8 0
0 0 191132 70656 10500 16216 0 0 0 0 47 94 0 0
100 0 0
0 1 189504 66216 10400 17940 2192 100 3928 172 404 579 9 2 5
84 0
1 1 188772 60220 10552 21992 1000 0 5220 68 412 741 7 2 21
69 0
1 2 187352 49316 7832 30344 1660 32 10052 32 833 1015 28 3 0
69 0
0 4 188816 52420 1392 25716 1488 2872 3168 3240 663 870 19 3 0
78 0
1 1 187388 51808 1372 25040 2476 1104 3084 1260 675 813 15 3 0
82 0
0 1 187360 54040 1500 24980 32 0 1048 0 447 379 6 1 0
93 0
0 1 187360 53916 1516 25004 0 0 924 52 309 283 1 0 0
99 0
0 1 195476 64076 1272 20624 0 8116 32 8156 312 308 1 2 0
97 0
0 0 203084 71920 1264 19412 0 7608 0 7608 256 173 0 2 89
9 0
0 0 203076 71324 1376 20132 64 0 868 40 192 232 2 0 65
33 0
0 0 203076 71328 1392 20108 0 0 0 68 75 144 1 0 85
14 0
0 0 203076 71084 1392 20392 0 0 268 0 66 132 0 1 96
3 0
0 0 203076 71084 1408 20392 0 0 0 36 60 122 0 0 94
6 0
0 0 203076 71076 1424 20376 0 0 0 36 77 148 1 0 92
7 0
0 1 203072 70696 1472 20460 96 0 168 280 196 1080 7 1 66
26 0
0 0 202900 68704 1512 21236 656 0 1432 104 338 760 10 1 10
79 0
0 0 201804 65728 1540 21984 1184 0 1904 64 547 1117 26 2 40
33 0
0 2 161904 122500 1540 22640 68 0 776 0 407 477 23 2 57
18 0
1 0 161384 122132 1556 22748 36 0 92 60 970 200 92 0 0
8 0
0 1 160840 119512 1836 23228 676 0 1432 76 772 866 58 2 0
40 0
0 0 160776 119636 1836 23516 196 0 500 0 104 199 1 0 63
36 0
0 0 160776 119636 1852 23520 0 0 0 44 83 251 2 0 92
6 0
0 0 160776 119644 1868 23504 0 0 0 40 64 117 0 1 90
9 0
0 0 160776 119644 1868 23520 0 0 0 0 46 91 0 0
100 0 0
0 1 160764 119456 1888 23556 28 0 32 164 71 121 0 0 87
13 0
0 0 160764 119208 1952 23572 0 0 4 288 392 1083 4 1 66
29 0
0 0 160764 119192 1952 23596 0 0 0 0 42 69 0 0
100 0 0
0 0 160764 119192 1968 23596 0 0 0 40 60 127 1 0 92
7 0
0 0 160764 119192 1984 23584 0 0 4 36 71 135 0 1 91
8 0
0 0 160764 119192 1984 23600 0 0 0 0 46 89 0 0
100 0 0
0 0 160764 119192 2000 23600 0 0 0 36 59 121 1 0 92
7 0
0 0 160764 119192 2016 23584 0 0 0 36 82 196 0 0 93
7 0
0 0 160764 119192 2016 23600 0 0 0 0 38 69 0 0
100 0 0
0 0 160764 119192 2032 23600 0 0 0 36 63 130 1 0 91
8 0
0 0 160764 119192 2048 23584 0 0 0 36 67 132 0 0 94
6 0
0 0 160764 119192 2096 23584 0 0 0 272 89 193 0 0 76
24 0
...Parth
On 2/20/13 8:59 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>Well, it doesn't seem to be actively swapping at this point, but I
>think it's got active memory swapped out and being used as
>occasionally wait% goes up significantly. At any rate this system is
>severely memory limited.
>
>On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
><Pa...@citrix.com> wrote:
>> Marcus,
>>
>> vmstat 1 output
>>
>>
>> [root@localhost management]# vmstat 1
>> procs -----------memory---------- ---swap-- -----io---- --system--
>> -----cpu-----
>> r b swpd free buff cache si so bi bo in cs us sy
>>id
>> wa st
>> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0
>>89
>> 10 0
>> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0
>>88
>> 12 0
>> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0
>>99
>> 0 0
>> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0
>>88
>> 10 0
>> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0
>>88
>> 12 0
>> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
>> 100 0 0
>> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1
>>91
>> 7 0
>> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1
>>88
>> 11 0
>> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
>> 100 0 0
>> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1
>>91
>> 7 0
>> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0
>>80
>> 19 0
>> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1
>>74
>> 25 0
>> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1
>>93
>> 6 0
>> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0
>>90
>> 9 0
>> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
>> 100 0 0
>> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0
>>85
>> 15 0
>> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0
>>93
>> 7 0
>> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
>> 100 0 0
>> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0
>>94
>> 5 0
>> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0
>>93
>> 7 0
>> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
>> 100 0 0
>> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1
>>68
>> 30 0
>> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0
>>92
>> 8 0
>> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
>> 100 0 0
>> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1
>>83
>> 16 0
>> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0
>>69
>> 31 0
>> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0
>>99
>> 0 0
>> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0
>>87
>> 13 0
>> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0
>>90
>> 10 0
>> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1
>>98
>> 0 0
>> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0
>>67
>> 32 0
>> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0
>>67
>> 33 0
>> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0
>>99
>> 0 0
>> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1
>>90
>> 7 0
>> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0
>>93
>> 7 0
>> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
>> 100 0 0
>> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0
>>92
>> 7 0
>> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0
>>89
>> 11 0
>> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
>> 100 0 0
>> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1
>>90
>> 9 0
>> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0
>>91
>> 8 0
>> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0
>>87
>> 13 0
>> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0
>>80
>> 19 0
>> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0
>>92
>> 7 0
>> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
>> 100 0 0
>> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0
>>87
>> 13 0
>> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1
>>91
>> 7 0
>> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
>> 100 0 0
>> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0
>>90
>> 9 0
>> procs -----------memory---------- ---swap-- -----io---- --system--
>> -----cpu-----
>> r b swpd free buff cache si so bi bo in cs us sy
>>id
>> wa st
>> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0
>>92
>> 7 0
>> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
>> 100 0 0
>> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1
>>91
>> 8 0
>> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0
>>69
>> 30 0
>> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
>> 100 0 0
>> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1
>>86
>> 12 0
>>
>> ...Parth
>>
>>
>>
>>
>> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>>
>>>JAVA_OPTS="-Djava.awt.headless=true
>>>-Dcom.sun.management.jmxremote.port=45219
>>>-Dcom.sun.management.jmxremote.authenticate=false
>>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>>-XX:+HeapDumpOnOutOfMemoryError
>>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>>
>>>Which did not help.
>>>
>>>--------------
>>>
>>>[root@localhost management]# cat /proc/meminfo
>>>MemTotal: 1016656 kB
>>>MemFree: 68400 kB
>>>Buffers: 9108 kB
>>>Cached: 20984 kB
>>>SwapCached: 17492 kB
>>>Active: 424152 kB
>>>Inactive: 433152 kB
>>>Active(anon): 409812 kB
>>>Inactive(anon): 417412 kB
>>>Active(file): 14340 kB
>>>Inactive(file): 15740 kB
>>>Unevictable: 0 kB
>>>Mlocked: 0 kB
>>>SwapTotal: 2031608 kB
>>>SwapFree: 1840900 kB
>>>Dirty: 80 kB
>>>Writeback: 0 kB
>>>AnonPages: 815460 kB
>>>Mapped: 11408 kB
>>>Shmem: 4 kB
>>>Slab: 60120 kB
>>>SReclaimable: 10368 kB
>>>SUnreclaim: 49752 kB
>>>KernelStack: 5216 kB
>>>PageTables: 6800 kB
>>>NFS_Unstable: 0 kB
>>>Bounce: 0 kB
>>>WritebackTmp: 0 kB
>>>CommitLimit: 2539936 kB
>>>Committed_AS: 1596896 kB
>>>VmallocTotal: 34359738367 kB
>>>VmallocUsed: 7724 kB
>>>VmallocChunk: 34359718200 kB
>>>HardwareCorrupted: 0 kB
>>>AnonHugePages: 503808 kB
>>>HugePages_Total: 0
>>>HugePages_Free: 0
>>>HugePages_Rsvd: 0
>>>HugePages_Surp: 0
>>>Hugepagesize: 2048 kB
>>>DirectMap4k: 6144 kB
>>>DirectMap2M: 1038336 kB
>>>[root@localhost management]#
>>>-----------------------------
>>>
>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>
>>>
>>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>>
>>>
>>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>>
>>>
>>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>>
>>>
>>>
>>>
>>>
>>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>>
>>>>Hi,
>>>>
>>>>Cloudstack Java process statistics are given below when it stops
>>>>responding are given below :
>>>>
>>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05,
>>>>0.02
>>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>>>>0.0%st
>>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k
>>>>buffers
>>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>>
>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>>
>>>>Thanks,
>>>>Sailaja.M
>>>>
>>>>-----Original Message-----
>>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>>To: cloudstack-dev@incubator.apache.org
>>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>>
>>>>Yes, these are great data points, but so far nobody has responded on
>>>>that
>>>>ticket with the information required to know if the slowness is related
>>>>to memory settings or swapping. That was just a hunch on my part from
>>>>being a system admin.
>>>>
>>>>How much memory do these systems have that experience issues? What does
>>>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>>>memory settings make a difference (see ticket comments)? How much
>>>>memory
>>>>do the java processes list as resident in top?
>>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>>>>wrote:
>>>>
>>>>> +1 Performance degradation is dramatic and I too have observed this
>>>>>issue.
>>>>>
>>>>> I have logged my comments into 1339.
>>>>>
>>>>>
>>>>> ...Parth
>>>>>
>>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>>> <sr...@citrix.com> wrote:
>>>>>
>>>>> >To add to what Marcus mentioned,
>>>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>>>> >5-10 min of starting management server and there has been a lot of
>>>>> >API requests through automated tests. It is observed that Management
>>>>> >server not only slows down but also goes down after a while.
>>>>> >
>>>>> >~Talluri
>>>>> >
>>>>> >-----Original Message-----
>>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>> >Sent: Thursday, February 21, 2013 7:22
>>>>> >To: cloudstack-dev@incubator.apache.org
>>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>>> >
>>>>> >When Javelin was merged, there was an email sent out stating that
>>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>>> >permanent memory. Subsequently, there have also been several
>>>>>e-mails
>>>>> >and issues where devs have echoed this recommendation, and
>>>>>presumably
>>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>>> >applied those recommendations.
>>>>> >
>>>>> >Is this what we want to provide in the tomcat config for a package
>>>>> >based install as well? It's effectively saying that the minimum
>>>>> >requirements for the management server are something like 3 or 4 GB
>>>>> >(to be safe for other running tasks) of RAM, right?
>>>>> >
>>>>> >There is currently a bug filed that may or may not have to do with
>>>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>>>>> >unresponsive for minutes at a time, but the logs seem to show
>>>>> >business as usual. User reports that java is taking 75% of RAM,
>>>>> >depending on what else is going on they may be swapping. Settings in
>>>>> >the code for an install are currently at 2g/512M, I've been running
>>>>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>>>>> >been pounding it with requests or anything.
>>>>> >
>>>>> >This bug might not have anything to do with the memory settings, but
>>>>> >I figured it would be good to nail down what our minimum
>>>>>requirements
>>>>> >are for 4.1
>>>>>
>>>>>
>>>
>>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Well, it doesn't seem to be actively swapping at this point, but I
think it has active memory swapped out that is still being touched,
since wait% occasionally goes up significantly. At any rate, this
system is severely memory limited.
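If anyone wants to double-check that on their own box, a rough way to
see whether swapped-out pages are still being touched (plain
CentOS/RHEL tools, nothing CloudStack-specific assumed):

# non-zero si/so columns mean pages are actively moving in and out of swap
vmstat 1 10
# overall memory and swap usage, in MB
free -m
# resident size (RES) of the management server JVM versus total RAM
top -b -n 1 | grep -E 'Mem:|Swap:|java'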
On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
<Pa...@citrix.com> wrote:
> Marcus,
>
> vmstat 1 output
>
>
> [root@localhost management]# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system--
> -----cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id
> wa st
> 0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0 89
> 10 0
> 0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0 88
> 12 0
> 1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0 99
> 0 0
> 0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0 88
> 10 0
> 0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0 88
> 12 0
> 0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0
> 100 0 0
> 0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1 91
> 7 0
> 0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1 88
> 11 0
> 0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0
> 100 0 0
> 0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1 91
> 7 0
> 0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0 80
> 19 0
> 0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1 74
> 25 0
> 0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1 93
> 6 0
> 0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0 90
> 9 0
> 0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0
> 100 0 0
> 0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0 85
> 15 0
> 0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0 93
> 7 0
> 0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0
> 100 0 0
> 0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0 94
> 5 0
> 0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0 93
> 7 0
> 0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0
> 100 0 0
> 1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1 68
> 30 0
> 0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0 92
> 8 0
> 0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0
> 100 0 0
> 0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1 83
> 16 0
> 0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0 69
> 31 0
> 0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0 99
> 0 0
> 0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0 87
> 13 0
> 0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0 90
> 10 0
> 0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1 98
> 0 0
> 1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0 67
> 32 0
> 0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0 67
> 33 0
> 0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0 99
> 0 0
> 0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1 90
> 7 0
> 0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0 93
> 7 0
> 0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0
> 100 0 0
> 0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0 92
> 7 0
> 0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0 89
> 11 0
> 0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0
> 100 0 0
> 0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1 90
> 9 0
> 0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0 91
> 8 0
> 0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0 87
> 13 0
> 0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0 80
> 19 0
> 0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0 92
> 7 0
> 0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0
> 100 0 0
> 0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0 87
> 13 0
> 0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1 91
> 7 0
> 0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0
> 100 0 0
> 0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0 90
> 9 0
> procs -----------memory---------- ---swap-- -----io---- --system--
> -----cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id
> wa st
> 0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0 92
> 7 0
> 0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0
> 100 0 0
> 0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1 91
> 8 0
> 0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0 69
> 30 0
> 0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0
> 100 0 0
> 0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1 86
> 12 0
>
> ...Parth
>
>
>
>
> On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>
>>JAVA_OPTS="-Djava.awt.headless=true
>>-Dcom.sun.management.jmxremote.port=45219
>>-Dcom.sun.management.jmxremote.authenticate=false
>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>-XX:+HeapDumpOnOutOfMemoryError
>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>
>>Which did not help.
>>
>>--------------
>>
>>[root@localhost management]# cat /proc/meminfo
>>MemTotal: 1016656 kB
>>MemFree: 68400 kB
>>Buffers: 9108 kB
>>Cached: 20984 kB
>>SwapCached: 17492 kB
>>Active: 424152 kB
>>Inactive: 433152 kB
>>Active(anon): 409812 kB
>>Inactive(anon): 417412 kB
>>Active(file): 14340 kB
>>Inactive(file): 15740 kB
>>Unevictable: 0 kB
>>Mlocked: 0 kB
>>SwapTotal: 2031608 kB
>>SwapFree: 1840900 kB
>>Dirty: 80 kB
>>Writeback: 0 kB
>>AnonPages: 815460 kB
>>Mapped: 11408 kB
>>Shmem: 4 kB
>>Slab: 60120 kB
>>SReclaimable: 10368 kB
>>SUnreclaim: 49752 kB
>>KernelStack: 5216 kB
>>PageTables: 6800 kB
>>NFS_Unstable: 0 kB
>>Bounce: 0 kB
>>WritebackTmp: 0 kB
>>CommitLimit: 2539936 kB
>>Committed_AS: 1596896 kB
>>VmallocTotal: 34359738367 kB
>>VmallocUsed: 7724 kB
>>VmallocChunk: 34359718200 kB
>>HardwareCorrupted: 0 kB
>>AnonHugePages: 503808 kB
>>HugePages_Total: 0
>>HugePages_Free: 0
>>HugePages_Rsvd: 0
>>HugePages_Surp: 0
>>Hugepagesize: 2048 kB
>>DirectMap4k: 6144 kB
>>DirectMap2M: 1038336 kB
>>[root@localhost management]#
>>-----------------------------
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>
>>
>> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>>
>>
>> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>>
>>
>> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>>
>>
>>
>>
>>
>>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>>
>>>Hi,
>>>
>>>Cloudstack Java process statistics are given below when it stops
>>>responding are given below :
>>>
>>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05,
>>>0.02
>>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>>>0.0%st
>>>Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>>
>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>>
>>>Thanks,
>>>Sailaja.M
>>>
>>>-----Original Message-----
>>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>To: cloudstack-dev@incubator.apache.org
>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>
>>>Yes, these are great data points, but so far nobody has responded on that
>>>ticket with the information required to know if the slowness is related
>>>to memory settings or swapping. That was just a hunch on my part from
>>>being a system admin.
>>>
>>>How much memory do these systems have that experience issues? What does
>>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>>memory settings make a difference (see ticket comments)? How much memory
>>>do the java processes list as resident in top?
>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>>>wrote:
>>>
>>>> +1 Performance degradation is dramatic and I too have observed this
>>>>issue.
>>>>
>>>> I have logged my comments into 1339.
>>>>
>>>>
>>>> ...Parth
>>>>
>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>> <sr...@citrix.com> wrote:
>>>>
>>>> >To add to what Marcus mentioned,
>>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>>> >5-10 min of starting management server and there has been a lot of
>>>> >API requests through automated tests. It is observed that Management
>>>> >server not only slows down but also goes down after a while.
>>>> >
>>>> >~Talluri
>>>> >
>>>> >-----Original Message-----
>>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>> >Sent: Thursday, February 21, 2013 7:22
>>>> >To: cloudstack-dev@incubator.apache.org
>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>> >
>>>> >When Javelin was merged, there was an email sent out stating that
>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>> >permanent memory. Subsequently, there have also been several e-mails
>>>> >and issues where devs have echoed this recommendation, and presumably
>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>> >applied those recommendations.
>>>> >
>>>> >Is this what we want to provide in the tomcat config for a package
>>>> >based install as well? It's effectively saying that the minimum
>>>> >requirements for the management server are something like 3 or 4 GB
>>>> >(to be safe for other running tasks) of RAM, right?
>>>> >
>>>> >There is currently a bug filed that may or may not have to do with
>>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>>>> >unresponsive for minutes at a time, but the logs seem to show
>>>> >business as usual. User reports that java is taking 75% of RAM,
>>>> >depending on what else is going on they may be swapping. Settings in
>>>> >the code for an install are currently at 2g/512M, I've been running
>>>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>>>> >been pounding it with requests or anything.
>>>> >
>>>> >This bug might not have anything to do with the memory settings, but
>>>> >I figured it would be good to nail down what our minimum requirements
>>>> >are for 4.1
>>>>
>>>>
>>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Parth Jagirdar <Pa...@citrix.com>.
Marcus,
vmstat 1 output
[root@localhost management]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 190820 72380 10904 15852 36 28 40 92 9 1 0 0 89 10 0
0 0 190820 72256 10932 15828 0 0 0 56 63 130 0 0 88 12 0
1 0 190820 72256 10932 15844 0 0 0 0 53 153 1 0 99 0 0
0 0 190820 72256 10948 15844 0 0 0 44 89 253 2 0 88 10 0
0 0 190820 72256 10964 15828 0 0 0 72 64 135 0 0 88 12 0
0 0 190820 72256 10964 15844 0 0 0 0 43 76 0 0 100 0 0
0 0 190820 72256 10980 15844 0 0 0 36 86 244 1 1 91 7 0
0 0 190820 72256 10996 15828 0 0 0 44 57 112 0 1 88 11 0
0 0 190820 72256 10996 15844 0 0 0 0 45 88 0 0 100 0 0
0 0 190820 72256 11012 15844 0 0 0 36 100 264 1 1 91 7 0
0 0 190820 72132 11044 15824 0 0 4 96 106 211 1 0 80 19 0
0 0 190820 72132 11092 15856 0 0 0 368 81 223 0 1 74 25 0
0 0 190820 72132 11108 15856 0 0 0 36 78 145 0 1 93 6 0
0 0 190820 72132 11124 15840 0 0 0 40 55 106 1 0 90 9 0
0 0 190820 72132 11124 15856 0 0 0 0 47 96 0 0 100 0 0
0 0 190820 72132 11140 15856 0 0 0 36 61 113 0 0 85 15 0
0 0 190820 72008 11156 15840 0 0 0 36 61 158 0 0 93 7 0
0 0 190820 72008 11156 15856 0 0 0 0 41 82 0 0 100 0 0
0 0 190820 72008 11172 15856 0 0 0 36 74 149 1 0 94 5 0
0 0 190820 72008 11188 15840 0 0 0 36 60 117 0 0 93 7 0
0 0 190820 72008 11188 15856 0 0 0 0 43 91 0 0 100 0 0
1 0 190820 72008 11252 15860 0 0 4 312 108 243 1 1 68 30 0
0 0 190820 72008 11268 15844 0 0 0 36 60 128 0 0 92 8 0
0 0 190820 72008 11268 15860 0 0 0 0 36 67 0 0 100 0 0
0 0 190820 71884 11284 15860 0 0 0 104 84 139 0 1 83 16 0
0 0 190820 71884 11300 15844 0 0 0 60 55 111 0 0 69 31 0
0 0 190820 71884 11300 15860 0 0 0 0 53 121 1 0 99 0 0
0 0 190820 71884 11316 15860 0 0 0 40 67 130 0 0 87 13 0
0 0 190820 71884 11332 15844 0 0 0 40 58 130 0 0 90 10 0
0 0 190820 71884 11332 15864 0 0 0 0 59 824 1 1 98 0 0
1 0 190820 71884 11348 15864 0 0 0 40 113 185 1 0 67 32 0
0 0 190820 71744 11412 15852 0 0 4 540 100 238 0 0 67 33 0
0 0 190820 71744 11412 15868 0 0 0 0 55 159 1 0 99 0 0
0 0 190820 71744 11428 15868 0 0 0 40 89 246 2 1 90 7 0
0 0 190820 71620 11444 15852 0 0 0 72 65 135 0 0 93 7 0
0 0 190820 71620 11444 15868 0 0 0 0 40 74 0 0 100 0 0
0 0 190820 71620 11460 15868 0 0 0 52 75 216 1 0 92 7 0
0 0 190820 71620 11476 15852 0 0 0 44 53 109 0 0 89 11 0
0 0 190820 71620 11476 15868 0 0 0 0 43 87 0 0 100 0 0
0 0 190820 71620 11496 15868 0 0 4 36 83 143 0 1 90 9 0
0 0 190820 71620 11512 15852 0 0 0 40 78 869 1 0 91 8 0
0 1 190820 71628 11524 15856 0 0 0 188 94 145 0 0 87 13 0
0 0 190820 71496 11576 15872 0 0 4 132 96 214 1 0 80 19 0
0 0 190820 71496 11592 15856 0 0 0 36 94 128 1 0 92 7 0
0 0 190820 71496 11592 15872 0 0 0 0 115 164 0 0 100 0 0
0 0 190820 71496 11608 15876 0 0 0 36 130 200 0 0 87 13 0
0 0 190820 71496 11624 15860 0 0 0 36 141 218 1 1 91 7 0
0 0 190820 71504 11624 15876 0 0 0 0 105 119 0 0 100 0 0
0 0 190820 71504 11640 15876 0 0 0 36 140 218 1 0 90 9 0
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 190820 71504 11656 15860 0 0 0 36 131 169 1 0 92 7 0
0 0 190820 71504 11656 15876 0 0 0 0 115 146 0 0 100 0 0
0 0 190820 71380 11672 15876 0 0 0 36 128 173 0 1 91 8 0
0 0 190820 71380 11736 15860 0 0 0 308 146 279 1 0 69 30 0
0 0 190820 71380 11736 15876 0 0 0 0 59 82 0 0 100 0 0
0 0 190820 71380 11760 15876 0 0 4 64 90 174 1 1 86 12 0
...Parth
On 2/20/13 8:46 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
>JAVA_OPTS="-Djava.awt.headless=true
>-Dcom.sun.management.jmxremote.port=45219
>-Dcom.sun.management.jmxremote.authenticate=false
>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>-XX:+HeapDumpOnOutOfMemoryError
>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>
>Which did not help.
>
>--------------
>
>[root@localhost management]# cat /proc/meminfo
>MemTotal: 1016656 kB
>MemFree: 68400 kB
>Buffers: 9108 kB
>Cached: 20984 kB
>SwapCached: 17492 kB
>Active: 424152 kB
>Inactive: 433152 kB
>Active(anon): 409812 kB
>Inactive(anon): 417412 kB
>Active(file): 14340 kB
>Inactive(file): 15740 kB
>Unevictable: 0 kB
>Mlocked: 0 kB
>SwapTotal: 2031608 kB
>SwapFree: 1840900 kB
>Dirty: 80 kB
>Writeback: 0 kB
>AnonPages: 815460 kB
>Mapped: 11408 kB
>Shmem: 4 kB
>Slab: 60120 kB
>SReclaimable: 10368 kB
>SUnreclaim: 49752 kB
>KernelStack: 5216 kB
>PageTables: 6800 kB
>NFS_Unstable: 0 kB
>Bounce: 0 kB
>WritebackTmp: 0 kB
>CommitLimit: 2539936 kB
>Committed_AS: 1596896 kB
>VmallocTotal: 34359738367 kB
>VmallocUsed: 7724 kB
>VmallocChunk: 34359718200 kB
>HardwareCorrupted: 0 kB
>AnonHugePages: 503808 kB
>HugePages_Total: 0
>HugePages_Free: 0
>HugePages_Rsvd: 0
>HugePages_Surp: 0
>Hugepagesize: 2048 kB
>DirectMap4k: 6144 kB
>DirectMap2M: 1038336 kB
>[root@localhost management]#
>-----------------------------
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>
>
> 9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
>
>
> 1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
>
>
> 1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
>
>
>
>
>
>On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>
>>Hi,
>>
>>Cloudstack Java process statistics are given below when it stops
>>responding are given below :
>>
>>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05,
>>0.02
>>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>>0.0%st
>>Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>>
>>Thanks,
>>Sailaja.M
>>
>>-----Original Message-----
>>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>Sent: Thursday, February 21, 2013 9:35 AM
>>To: cloudstack-dev@incubator.apache.org
>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>
>>Yes, these are great data points, but so far nobody has responded on that
>>ticket with the information required to know if the slowness is related
>>to memory settings or swapping. That was just a hunch on my part from
>>being a system admin.
>>
>>How much memory do these systems have that experience issues? What does
>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>memory settings make a difference (see ticket comments)? How much memory
>>do the java processes list as resident in top?
>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>>wrote:
>>
>>> +1 Performance degradation is dramatic and I too have observed this
>>>issue.
>>>
>>> I have logged my comments into 1339.
>>>
>>>
>>> ...Parth
>>>
>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>> <sr...@citrix.com> wrote:
>>>
>>> >To add to what Marcus mentioned,
>>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>>> >5-10 min of starting management server and there has been a lot of
>>> >API requests through automated tests. It is observed that Management
>>> >server not only slows down but also goes down after a while.
>>> >
>>> >~Talluri
>>> >
>>> >-----Original Message-----
>>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>> >Sent: Thursday, February 21, 2013 7:22
>>> >To: cloudstack-dev@incubator.apache.org
>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>> >
>>> >When Javelin was merged, there was an email sent out stating that
>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>> >permanent memory. Subsequently, there have also been several e-mails
>>> >and issues where devs have echoed this recommendation, and presumably
>>> >it fixed issues. I've seen the MS run out of memory myself and
>>> >applied those recommendations.
>>> >
>>> >Is this what we want to provide in the tomcat config for a package
>>> >based install as well? It's effectively saying that the minimum
>>> >requirements for the management server are something like 3 or 4 GB
>>> >(to be safe for other running tasks) of RAM, right?
>>> >
>>> >There is currently a bug filed that may or may not have to do with
>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>>> >unresponsive for minutes at a time, but the logs seem to show
>>> >business as usual. User reports that java is taking 75% of RAM,
>>> >depending on what else is going on they may be swapping. Settings in
>>> >the code for an install are currently at 2g/512M, I've been running
>>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>>> >been pounding it with requests or anything.
>>> >
>>> >This bug might not have anything to do with the memory settings, but
>>> >I figured it would be good to nail down what our minimum requirements
>>> >are for 4.1
>>>
>>>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Parth Jagirdar <Pa...@citrix.com>.
JAVA_OPTS="-Djava.awt.headless=true
-Dcom.sun.management.jmxremote.port=45219
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
Which did not help.
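One note on the line above: it sets -XX:PermSize (the initial
permanent-generation size) but leaves -Xmx at 512m and never raises
-XX:MaxPermSize, so the JVM is still capped well below the 2g/512M
values discussed earlier in the thread. On a box with enough RAM, the
next thing to try would presumably be something along these lines in
tomcat6.conf (file path and exact flag set assumed, not checked
against the packaging):

# e.g. /etc/cloudstack/management/tomcat6.conf (path assumed)
JAVA_OPTS="-Djava.awt.headless=true \
  -Xms2g -Xmx2g -XX:MaxPermSize=512m \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/cloudstack/management/"

followed by a restart of the management server so the new limits take
effect. On a 1GB box like this one that obviously will not fit, which
is part of the point of the thread.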
--------------
[root@localhost management]# cat /proc/meminfo
MemTotal: 1016656 kB
MemFree: 68400 kB
Buffers: 9108 kB
Cached: 20984 kB
SwapCached: 17492 kB
Active: 424152 kB
Inactive: 433152 kB
Active(anon): 409812 kB
Inactive(anon): 417412 kB
Active(file): 14340 kB
Inactive(file): 15740 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2031608 kB
SwapFree: 1840900 kB
Dirty: 80 kB
Writeback: 0 kB
AnonPages: 815460 kB
Mapped: 11408 kB
Shmem: 4 kB
Slab: 60120 kB
SReclaimable: 10368 kB
SUnreclaim: 49752 kB
KernelStack: 5216 kB
PageTables: 6800 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 2539936 kB
Committed_AS: 1596896 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7724 kB
VmallocChunk: 34359718200 kB
HardwareCorrupted: 0 kB
AnonHugePages: 503808 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6144 kB
DirectMap2M: 1038336 kB
[root@localhost management]#
-----------------------------
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9809 cloud 20 0 2215m 785m 4672 S 0.7 79.1 1:59.40 java
1497 mysql 20 0 700m 15m 3188 S 0.3 1.5 23:04.58 mysqld
1 root 20 0 19348 300 296 S 0.0 0.0 0:00.73 init
On 2/20/13 8:26 PM, "Sailaja Mada" <sa...@citrix.com> wrote:
>Hi,
>
>Cloudstack Java process statistics are given below when it stops
>responding are given below :
>
>top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
>Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
>Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si,
>0.0%st
>Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
>Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
>
>Thanks,
>Sailaja.M
>
>-----Original Message-----
>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>Sent: Thursday, February 21, 2013 9:35 AM
>To: cloudstack-dev@incubator.apache.org
>Subject: Re: [DISCUSS] Management Server Memory Requirements
>
>Yes, these are great data points, but so far nobody has responded on that
>ticket with the information required to know if the slowness is related
>to memory settings or swapping. That was just a hunch on my part from
>being a system admin.
>
>How much memory do these systems have that experience issues? What does
>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>memory settings make a difference (see ticket comments)? How much memory
>do the java processes list as resident in top?
>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com>
>wrote:
>
>> +1 Performance degradation is dramatic and I too have observed this
>>issue.
>>
>> I have logged my comments into 1339.
>>
>>
>> ...Parth
>>
>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>> <sr...@citrix.com> wrote:
>>
>> >To add to what Marcus mentioned,
>> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
>> >5-10 min of starting management server and there has been a lot of
>> >API requests through automated tests. It is observed that Management
>> >server not only slows down but also goes down after a while.
>> >
>> >~Talluri
>> >
>> >-----Original Message-----
>> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> >Sent: Thursday, February 21, 2013 7:22
>> >To: cloudstack-dev@incubator.apache.org
>> >Subject: [DISCUSS] Management Server Memory Requirements
>> >
>> >When Javelin was merged, there was an email sent out stating that
>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>> >permanent memory. Subsequently, there have also been several e-mails
>> >and issues where devs have echoed this recommendation, and presumably
>> >it fixed issues. I've seen the MS run out of memory myself and
>> >applied those recommendations.
>> >
>> >Is this what we want to provide in the tomcat config for a package
>> >based install as well? It's effectively saying that the minimum
>> >requirements for the management server are something like 3 or 4 GB
>> >(to be safe for other running tasks) of RAM, right?
>> >
>> >There is currently a bug filed that may or may not have to do with
>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
>> >unresponsive for minutes at a time, but the logs seem to show
>> >business as usual. User reports that java is taking 75% of RAM,
>> >depending on what else is going on they may be swapping. Settings in
>> >the code for an install are currently at 2g/512M, I've been running
>> >this on a 4GB server for awhile now, java is at 900M, but I haven't
>> >been pounding it with requests or anything.
>> >
>> >This bug might not have anything to do with the memory settings, but
>> >I figured it would be good to nail down what our minimum requirements
>> >are for 4.1
>>
>>
RE: [DISCUSS] Management Server Memory Requirements
Posted by Sailaja Mada <sa...@citrix.com>.
Hi,
CloudStack Java process statistics when it stops responding are given below:
top - 09:52:03 up 4 days, 21:43, 2 users, load average: 0.06, 0.05, 0.02
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.7%us, 0.7%sy, 0.0%ni, 97.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1014860k total, 947632k used, 67228k free, 5868k buffers
Swap: 2031608k total, 832320k used, 1199288k free, 26764k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12559 cloud 20 0 3159m 744m 4440 S 2.3 75.1 6:38.39 java
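If it helps to catch the unresponsive window as it happens, the same
numbers can be captured on an interval with top in batch mode; a rough
example (hypothetical, adjust interval and count as needed):

top -b -d 10 -n 60 | grep -E 'Mem:|Swap:|java' >> /tmp/ms-top.log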
Thanks,
Sailaja.M
-----Original Message-----
From: Marcus Sorensen [mailto:shadowsor@gmail.com]
Sent: Thursday, February 21, 2013 9:35 AM
To: cloudstack-dev@incubator.apache.org
Subject: Re: [DISCUSS] Management Server Memory Requirements
Yes, these are great data points, but so far nobody has responded on that ticket with the information required to know if the slowness is related to memory settings or swapping. That was just a hunch on my part from being a system admin.
How much memory do these systems have that experience issues? What does /proc/meminfo say during the issues? Does adjusting the tomcat6.conf memory settings make a difference (see ticket comments)? How much memory do the java processes list as resident in top?
On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
> +1 Performance degradation is dramatic and I too have observed this issue.
>
> I have logged my comments into 1339.
>
>
> ...Parth
>
> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
> <sr...@citrix.com> wrote:
>
> >To add to what Marcus mentioned,
> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within
> >5-10 min of starting management server and there has been a lot of
> >API requests through automated tests. It is observed that Management
> >server not only slows down but also goes down after a while.
> >
> >~Talluri
> >
> >-----Original Message-----
> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> >Sent: Thursday, February 21, 2013 7:22
> >To: cloudstack-dev@incubator.apache.org
> >Subject: [DISCUSS] Management Server Memory Requirements
> >
> >When Javelin was merged, there was an email sent out stating that
> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
> >permanent memory. Subsequently, there have also been several e-mails
> >and issues where devs have echoed this recommendation, and presumably
> >it fixed issues. I've seen the MS run out of memory myself and
> >applied those recommendations.
> >
> >Is this what we want to provide in the tomcat config for a package
> >based install as well? It's effectively saying that the minimum
> >requirements for the management server are something like 3 or 4 GB
> >(to be safe for other running tasks) of RAM, right?
> >
> >There is currently a bug filed that may or may not have to do with
> >this, CLOUDSTACK-1339. Users report mgmt server slowness, going
> >unresponsive for minutes at a time, but the logs seem to show
> >business as usual. User reports that java is taking 75% of RAM,
> >depending on what else is going on they may be swapping. Settings in
> >the code for an install are currently at 2g/512M, I've been running
> >this on a 4GB server for awhile now, java is at 900M, but I haven't
> >been pounding it with requests or anything.
> >
> >This bug might not have anything to do with the memory settings, but
> >I figured it would be good to nail down what our minimum requirements
> >are for 4.1
>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, these are great data points, but so far nobody has responded on that
ticket with the information required to know if the slowness is related to
memory settings or swapping. That was just a hunch on my part from being a
system admin.
How much memory do these systems have that experience issues? What does
/proc/meminfo say during the issues? Does adjusting the tomcat6.conf memory
settings make a difference (see ticket comments)? How much memory do the
java processes list as resident in top?
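For anyone hitting this, a rough copy/paste that collects all of the
above in one go (hypothetical, adjust filters as needed):

# total RAM, free memory and swap on the box
grep -E 'MemTotal|MemFree|SwapTotal|SwapFree' /proc/meminfo
free -m
# resident size of the java process(es) as reported by top
top -b -n 1 | grep -E 'PID USER|java'
# heap/permgen flags the management server was actually started with
ps -o args= -C java | tr ' ' '\n' | grep -E '^-X'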
On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <Pa...@citrix.com> wrote:
> +1 Performance degradation is dramatic and I too have observed this issue.
>
> I have logged my comments into 1339.
>
>
> ...Parth
>
> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
> <sr...@citrix.com> wrote:
>
> >To add to what Marcus mentioned,
> >Regarding bug CLOUDSTACK-1339 : I have observed this issue within 5-10
> >min of starting management server and there has been a lot of API
> >requests through automated tests. It is observed that Management server
> >not only slows down but also goes down after a while.
> >
> >~Talluri
> >
> >-----Original Message-----
> >From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> >Sent: Thursday, February 21, 2013 7:22
> >To: cloudstack-dev@incubator.apache.org
> >Subject: [DISCUSS] Management Server Memory Requirements
> >
> >When Javelin was merged, there was an email sent out stating that devs
> >should set their MAVEN_OPTS to use 2g of heap, and 512M of permanent
> >memory. Subsequently, there have also been several e-mails and issues
> >where devs have echoed this recommendation, and presumably it fixed
> >issues. I've seen the MS run out of memory myself and applied those
> >recommendations.
> >
> >Is this what we want to provide in the tomcat config for a package based
> >install as well? It's effectively saying that the minimum requirements
> >for the management server are something like 3 or 4 GB (to be safe for
> >other running tasks) of RAM, right?
> >
> >There is currently a bug filed that may or may not have to do with this,
> >CLOUDSTACK-1339. Users report mgmt server slowness, going unresponsive
> >for minutes at a time, but the logs seem to show business as usual. User
> >reports that java is taking 75% of RAM, depending on what else is going
> >on they may be swapping. Settings in the code for an install are
> >currently at 2g/512M, I've been running this on a 4GB server for awhile
> >now, java is at 900M, but I haven't been pounding it with requests or
> >anything.
> >
> >This bug might not have anything to do with the memory settings, but I
> >figured it would be good to nail down what our minimum requirements are
> >for 4.1
>
>
Re: [DISCUSS] Management Server Memory Requirements
Posted by Parth Jagirdar <Pa...@citrix.com>.
+1. The performance degradation is dramatic, and I too have observed this issue.
I have logged my comments into 1339.
...Parth
On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
<sr...@citrix.com> wrote:
>To add to what Marcus mentioned,
>Regarding bug CLOUDSTACK-1339 : I have observed this issue within 5-10
>min of starting management server and there has been a lot of API
>requests through automated tests. It is observed that Management server
>not only slows down but also goes down after a while.
>
>~Talluri
>
>-----Original Message-----
>From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>Sent: Thursday, February 21, 2013 7:22
>To: cloudstack-dev@incubator.apache.org
>Subject: [DISCUSS] Management Server Memory Requirements
>
>When Javelin was merged, there was an email sent out stating that devs
>should set their MAVEN_OPTS to use 2g of heap, and 512M of permanent
>memory. Subsequently, there have also been several e-mails and issues
>where devs have echoed this recommendation, and presumably it fixed
>issues. I've seen the MS run out of memory myself and applied those
>recommendations.
>
>Is this what we want to provide in the tomcat config for a package based
>install as well? It's effectively saying that the minimum requirements
>for the management server are something like 3 or 4 GB (to be safe for
>other running tasks) of RAM, right?
>
>There is currently a bug filed that may or may not have to do with this,
>CLOUDSTACK-1339. Users report mgmt server slowness, going unresponsive
>for minutes at a time, but the logs seem to show business as usual. User
>reports that java is taking 75% of RAM, depending on what else is going
>on they may be swapping. Settings in the code for an install are
>currently at 2g/512M, I've been running this on a 4GB server for awhile
>now, java is at 900M, but I haven't been pounding it with requests or
>anything.
>
>This bug might not have anything to do with the memory settings, but I
>figured it would be good to nail down what our minimum requirements are
>for 4.1
RE: [DISCUSS] Management Server Memory Requirements
Posted by Srikanteswararao Talluri <sr...@citrix.com>.
To add to what Marcus mentioned,
Regarding bug CLOUDSTACK-1339: I have observed this issue within 5-10 minutes of starting the management server, while a lot of API requests were coming in through automated tests. The management server not only slows down but also goes down after a while.
~Talluri
-----Original Message-----
From: Marcus Sorensen [mailto:shadowsor@gmail.com]
Sent: Thursday, February 21, 2013 7:22
To: cloudstack-dev@incubator.apache.org
Subject: [DISCUSS] Management Server Memory Requirements
When Javelin was merged, there was an email sent out stating that devs should set their MAVEN_OPTS to use 2g of heap, and 512M of permanent memory. Subsequently, there have also been several e-mails and issues where devs have echoed this recommendation, and presumably it fixed issues. I've seen the MS run out of memory myself and applied those recommendations.
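(For reference, that recommendation presumably amounts to something like the following in a developer's shell profile; the exact wording of the original announcement is not re-checked here:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512m"

i.e. 2g of heap and 512M of permanent generation when running the management server through maven.)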
Is this what we want to provide in the tomcat config for a package based install as well? It's effectively saying that the minimum requirements for the management server are something like 3 or 4 GB (to be safe for other running tasks) of RAM, right?
There is currently a bug filed that may or may not have to do with this, CLOUDSTACK-1339. Users report mgmt server slowness, going unresponsive for minutes at a time, but the logs seem to show business as usual. User reports that java is taking 75% of RAM, depending on what else is going on they may be swapping. Settings in the code for an install are currently at 2g/512M, I've been running this on a 4GB server for awhile now, java is at 900M, but I haven't been pounding it with requests or anything.
This bug might not have anything to do with the memory settings, but I figured it would be good to nail down what our minimum requirements are for 4.1