Posted to user@ofbiz.apache.org by Jacques Le Roux <ja...@les7arts.com> on 2010/12/10 00:18:58 UTC
Re: demo server performance
I have spent a lot of time (I mean of my free time) these last days trying to understand the problems.
It appears that removing the help was a great relief for our demo server.
For a few hours now we have been running with
trunk: -Xms128M -Xmx768M -XX:MaxPermSize=192m (so the max seems to be 768+192=960MB, but actually it's more)
branch9: -Xms128M -Xmx512M
For instance now we have
Mem: 2573924k total, 2159888k used, 414036k free, 53672k buffers
Swap: 1502036k total, 50676k used, 1451360k free, 438000k cached
PID USER PR NI VIRT RES SHR %CPU %MEM
trunk 14896 ofbiz 20 0 1377m 753m 7956 0.3 30.0
branch9 18147 ofbiz 20 0 918m 670m 13m 0.7 26.7
As you can see, at some stage we reach more than 960MB for the trunk (1377m max, which is approximate, but anyway).
The main points:
* We still have around 400MB free, but I suppose it will be less just before the 24h reload
* We no longer have the CPU running near 100% all the time; for instance, right now:
PID USER PR NI VIRT RES SHR %CPU %MEM TIME+ COMMAND
14896 ofbiz 20 0 1377m 757m 7968 29.7 30.2 19:57.63 java -Xms128M -Xmx768M -XX:MaxPermSize=192m
18147 ofbiz 20 0 918m 671m 13m 22.4 26.7 14:23.55 java -Xms128M -Xmx512M
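The VIRT figures above exceeding -Xmx + -XX:MaxPermSize are expected: thread stacks, the JIT code cache and other native allocations are mapped on top of the heap and permgen. A minimal sketch of a quick footprint check (the PID below uses the current shell as a stand-in; on the demo server substitute the JVM pid, e.g. 14896 for the trunk instance):

```shell
# Rough per-process footprint check; thread stacks, the JIT code cache and
# native buffers explain VIRT being larger than -Xmx + -XX:MaxPermSize.
PID=$$   # stand-in; use the JVM pid (e.g. 14896) on the demo server
ps -o vsz=,rss= -p "$PID" | awk '{printf "virt=%.0fMB rss=%.0fMB\n", $1/1024, $2/1024}'
# For the JVM's own view of heap/permgen capacities (run against a live JVM):
# jstat -gccapacity "$PID"
```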
I will wait a few days and, if things continue to go well, will give more memory back to our 2 processes. But I know there are other
problems...
Like David and Scott said, if people use the Artifact Info or other gluttonous features (Birt?) we will be in trouble with our
memory quota. So if such things come back in the future, I will suggest preventing users from using them on the demo server...
For the real problems, I think we should focus on fixing the online Help feature. It seems that this issue is relatively new,
and a bisect should help (I use this word because it's convenient; on my side I simply use dichotomic tests with svn, but I have
bigger fish to fry for now, which is why I have deactivated it). I think it's not more than a few days (weeks?) old; help appreciated...
Thanks
Jacques
From: "BJ Freeman" <bj...@free-man.net>
> there is a thread on the user ML about the demo being slow.
> I would think that would be a high priority for all those who commit and make changes to OFBiz;
> after all, what good is all this stuff if it can't be used?
> I brought down the demo trunk by accessing it with separate requests at one time, as I stated on the user ML.
>
> Let's focus on real problems.
>
>
>
Re: demo server performance
Posted by Jacques Le Roux <ja...@les7arts.com>.
I have just tried to use top and jstack to get more information.
top gives me (using Shift-H to show threads and c to see which thread belongs to which process):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20059 ofbiz 20 0 1398m 898m 16m R 28.4 35.7 86:07.44
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
20057 ofbiz 20 0 1398m 898m 16m R 27.8 35.7 80:37.55
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
22410 ofbiz 20 0 1398m 898m 16m R 27.8 35.7 78:17.94
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
19476 ofbiz 20 0 1398m 898m 16m S 11.4 35.7 35:27.61
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
20369 ofbiz 20 0 1398m 898m 16m R 4.2 35.7 0:19.19
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
19491 ofbiz 20 0 1398m 898m 16m S 0.3 35.7 0:20.30
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
19592 ofbiz 20 0 1398m 898m 16m S 0.3 35.7 0:01.08
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
19826 ofbiz 20 0 1398m 898m 16m R 0.3 35.7 0:05.84
java -Xms128M -Xmx768M -XX:MaxPermSize=192m -XX:+HeapDumpOnOutOfMemoryError -Dofbiz.admin.port=10523 -Dofbiz.admin.key=so3du5kasd5dn
-jar ofbiz.ja
These are all trunk threads, but jstack gives me tons of this for each Java thread (more than 100 threads per PID) of 20059, 20057
and 22410:
$ jstack -F 20057
Attaching to process ID 20057, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 17.1-b03
Deadlock Detection:
No deadlocks found.
Thread 17347: (state = BLOCKED)
Error occurred during stack walking:
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.debugger.DebuggerException: get_thread_regs failed for a lwp
at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.execute(LinuxDebuggerLocal.java:152)
at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.getThreadIntegerRegisterSet(LinuxDebuggerLocal.java:466)
at sun.jvm.hotspot.debugger.linux.LinuxThread.getContext(LinuxThread.java:65)
at
sun.jvm.hotspot.runtime.linux_amd64.LinuxAMD64JavaThreadPDAccess.getCurrentFrameGuess(LinuxAMD64JavaThreadPDAccess.java:92)
at sun.jvm.hotspot.runtime.JavaThread.getCurrentFrameGuess(JavaThread.java:256)
at sun.jvm.hotspot.runtime.JavaThread.getLastJavaVFrameDbg(JavaThread.java:218)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:76)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:45)
at sun.jvm.hotspot.tools.JStack.run(JStack.java:60)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:221)
at sun.jvm.hotspot.tools.JStack.main(JStack.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jstack.JStack.runJStackTool(JStack.java:118)
at sun.tools.jstack.JStack.main(JStack.java:84)
So, as I can't do anything with this, I tried to kill the 3 PIDs, but that kills the whole process.
I reloaded and gave up; for now it's clean.
Jacques
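For the next time the CPU spins like this, a standard way to tie a hot thread from `top -H` to a Java stack is to convert the thread id to hex and search a plain (non-forced) `jstack` dump for it: jstack prints native thread ids as `nid=0x...`. A sketch using the PIDs from the output above as placeholders, with the live command commented out:

```shell
PID=20057    # JVM process id (placeholder, from the top output above)
TID=20059    # busy thread id reported by `top -H -p $PID`
NID=$(printf 'nid=0x%x' "$TID")   # jstack prints native thread ids in hex
echo "search the jstack dump for: $NID"
# jstack "$PID" | grep -B 2 -A 20 "$NID"   # run against the live JVM
```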
From: "Jacques Le Roux" <ja...@les7arts.com>
> Thanks BJ,
>
> I will remember. Unfortunately, for the (less annoying) problem at hand we rather need a detailed thread report, and we have
> nothing ready.
>
> Jacques
Re: demo server performance
Posted by Jacques Le Roux <ja...@les7arts.com>.
Thanks BJ,
I will remember. Unfortunately, for the (less annoying) problem at hand we rather need a detailed thread report, and we have nothing
ready.
Jacques
BJ Freeman wrote:
> that works for me. Count me in.
>
> =========================
> BJ Freeman
> Strategic Power Office with Supplier Automation <http://www.businessesnetwork.com/automation/viewforum.php?f=52>
> Specialtymarket.com <http://www.specialtymarket.com/>
> Systems Integrator-- Glad to Assist
>
> Chat Y! messenger: bjfr33man
>
>
Re: demo server performance
Posted by BJ Freeman <bj...@free-man.net>.
that works for me. Count me in.
=========================
BJ Freeman
Strategic Power Office with Supplier Automation <http://www.businessesnetwork.com/automation/viewforum.php?f=52>
Specialtymarket.com <http://www.specialtymarket.com/>
Systems Integrator-- Glad to Assist
Chat Y! messenger: bjfr33man
Jacques Le Roux sent the following on 12/9/2010 3:42 PM:
Re: demo server performance
Posted by Jacques Le Roux <ja...@les7arts.com>.
From: "Adam Heath" <do...@brainfood.com>
> On 12/09/2010 05:18 PM, Jacques Le Roux wrote:
>
> Hate to disappoint, but all those memory stats you posted are completely useless for actually tracking down what Java is doing.
>
> You need to become friends with jmap and jhat (both standard JDK tools), and IBM's heap analyzer. Plus, sending the QUIT signal to
> the Java process.
Yes, I know; this is only to give a general idea of what's going on on the server.
As I have already written, I'm actually using MAT http://www.eclipse.org/mat/ behind the scenes.
I'm also using -XX:+HeapDumpOnOutOfMemoryError, but most of the time I rather get out-of-swap issues when crashing, which are hard to trace...
One way would be mod_log_forensic... if someone wants to help...
> In all honesty, I'm going to go out on a limb here and say the higher memory requirements of newer OFBiz are due to converting tons
> of code to Groovy. As a simple-method or bsh, both would end up using heap, as they are interpreted. Java or Groovy
> get compiled to bytecode, which ends up being allocated in the permgen area and might also get JIT compiled. So, permgen needs
> to increase.
It does not seem that we have permgen issues. It's not yet clear, but for those interested I could move the hprof files from the demo
roots to the bigfiles dir...
Thanks
Jacques
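Rather than waiting for -XX:+HeapDumpOnOutOfMemoryError to fire (which the out-of-swap crashes apparently preempt), a heap snapshot can be taken on demand and opened in MAT or jhat. A sketch with a placeholder pid and a hypothetical output path; the live commands are commented out:

```shell
PID=20057                    # the OFBiz JVM pid (placeholder)
DUMP=/tmp/ofbiz-demo.hprof   # hypothetical output path
# jmap -dump:live,format=b,file="$DUMP" "$PID"   # snapshot only live objects
# jhat -port 7000 "$DUMP"                        # or open $DUMP in Eclipse MAT
echo "would dump pid $PID to $DUMP"
```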
Re: demo server performance
Posted by Adam Heath <do...@brainfood.com>.
On 12/09/2010 05:18 PM, Jacques Le Roux wrote:
Hate to disappoint, but all those memory stats you posted are
completely useless for actually tracking down what Java is doing.
You need to become friends with jmap and jhat (both standard JDK tools),
and IBM's heap analyzer. Plus, sending the QUIT signal to the Java
process.
In all honesty, I'm going to go out on a limb here and say the higher
memory requirements of newer OFBiz are due to converting tons of code
to Groovy. As a simple-method or bsh, both would end up
using heap, as they are interpreted. Java or Groovy get compiled to
bytecode, which ends up being allocated in the permgen area and
might also get JIT compiled. So, permgen needs to increase.
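The QUIT signal mentioned above makes a running JVM print a full thread dump to its stdout (the same output `jstack <pid>` produces) without stopping the process, which is useful when `jstack -F` fails as in the earlier post. A sketch with a placeholder pid; the signal itself is commented out:

```shell
PID=20057             # OFBiz JVM pid (placeholder)
# kill -QUIT "$PID"   # JVM writes a thread dump to its stdout/console log
# equivalently, against a live JVM: jstack "$PID" > /tmp/threads.txt
echo "thread dump requested for pid $PID"
```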