Posted to derby-dev@db.apache.org by "Myrna van Lunteren (JIRA)" <ji...@apache.org> on 2014/06/18 20:54:26 UTC

[jira] [Commented] (DERBY-6622) Derby server process hitting OutOfMemoryErrors and taking up 100% cpu

    [ https://issues.apache.org/jira/browse/DERBY-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14036157#comment-14036157 ] 

Myrna van Lunteren commented on DERBY-6622:
-------------------------------------------

If I'm reading the server output log correctly, this database has been in use since July 2012; the first occurrence of the OOM has a timestamp of April 22, 2014. The derby.log file is from May, so it is of little use in figuring out the original problem.

This is the original OOM stack trace (the frames are from the c3p0 connection pool's idle-connection culling task, not from Derby itself):
-------------------
        at java.lang.Object.clone(Native Method)
        at java.util.LinkedList.clone(LinkedList.java:461)
        at com.mchange.v2.resourcepool.BasicResourcePool.cloneOfUnused(BasicResourcePool.java:1661)
        at com.mchange.v2.resourcepool.BasicResourcePool.cullExpired(BasicResourcePool.java:1450)
        at com.mchange.v2.resourcepool.BasicResourcePool.access$1900(BasicResourcePool.java:32)
        at com.mchange.v2.resourcepool.BasicResourcePool$CullTask.run(BasicResourcePool.java:1937)
        at java.util.Timer$TimerImpl.run(Timer.java:296)
Exception in thread "Thread-51" java.lang.OutOfMemoryError: Java heap space
        at java.lang.Object.clone(Native Method)
        at java.util.LinkedList.clone(LinkedList.java:461)
        at com.mchange.v2.resourcepool.BasicResourcePool.cloneOfUnused(BasicResourcePool.java:1661)
        at com.mchange.v2.resourcepool.BasicResourcePool.cullExpired(BasicResourcePool.java:1450)
        at com.mchange.v2.resourcepool.BasicResourcePool.access$1900(BasicResourcePool.java:32)
        at com.mchange.v2.resourcepool.BasicResourcePool$CullTask.run(BasicResourcePool.java:1937)
        at java.util.Timer$TimerImpl.run(Timer.java:296)
-------------------

But it does look like Derby went down after that: there were attempts to create JVM core dump files, and further connections were impossible until the server was restarted on May 2.

So, first I wonder whether anything of interest happened on April 22. It might be too long ago for traces to remain on the system, but perhaps some additional products were installed on the machine around that time?

We put together a page with information about analyzing OOMs: 
https://wiki.apache.org/db-derby/DebuggingDerbyMemoryIssues
The wiki page uses Derby 10.10 in its examples, but I believe that is incidental; the tool it describes should also work with your Derby 10.8.
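
If you can capture a heap dump at the time of the OOM, that is the most useful input for that kind of analysis. On HotSpot-based JVMs the -XX:+HeapDumpOnOutOfMemoryError startup flag produces one automatically, and I believe IBM JVMs write their own .phd heap dumps on OOM by default. A dump can also be requested programmatically; below is a minimal sketch, assuming a HotSpot JVM and that you can run it inside the server process (the class name DumpHeap and the file name derby-heap.hprof are just placeholders):
-------------------
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        // This MXBean is HotSpot-specific; it is not available on all JVMs.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a GC first)
        bean.dumpHeap("derby-heap.hprof", true);
    }
}
-------------------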

Do you run with any specific -Xmx or -Xms settings?
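
If you are not sure what is actually in effect, the JVM's own Runtime values will tell you. A minimal sketch (the class name HeapSettings is just for illustration) that could be run with the same JVM options as the server:
-------------------
public class HeapSettings {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        // maxMemory() reflects the effective -Xmx value.
        System.out.println("max heap (-Xmx):   " + (rt.maxMemory() / mb) + " MB");
        System.out.println("current heap size: " + (rt.totalMemory() / mb) + " MB");
        System.out.println("free in heap:      " + (rt.freeMemory() / mb) + " MB");
    }
}
-------------------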

Also, I would suggest running the consistency checker on the database; see: http://wiki.apache.org/db-derby/DatabaseConsistencyCheck
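
For reference, the check boils down to calling SYSCS_UTIL.SYSCS_CHECK_TABLE for each user table. A minimal JDBC sketch along the lines of that wiki page (the connection URL is a placeholder, and the try-with-resources syntax assumes Java 7):
-------------------
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CheckAllTables {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:derby://localhost:1527/yourDb"; // placeholder URL
        // SYSCS_CHECK_TABLE returns 1 for a consistent table and raises an
        // SQLException when it finds an inconsistency.
        String sql =
            "SELECT s.schemaname, t.tablename, "
          + "       SYSCS_UTIL.SYSCS_CHECK_TABLE(s.schemaname, t.tablename) "
          + "FROM sys.sysschemas s JOIN sys.systables t "
          + "  ON s.schemaid = t.schemaid "
          + "WHERE t.tabletype = 'T'";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "." + rs.getString(2)
                        + " -> " + rs.getInt(3));
            }
        }
    }
}
-------------------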


> Derby server process hitting OutOfMemoryErrors and taking up 100% cpu 
> ----------------------------------------------------------------------
>
>                 Key: DERBY-6622
>                 URL: https://issues.apache.org/jira/browse/DERBY-6622
>             Project: Derby
>          Issue Type: Bug
>          Components: Network Server
>    Affects Versions: 10.8.2.2
>         Environment: Linux
>            Reporter: Vamsavardhana Reddy
>         Attachments: derby.log, derbyserver.all.out.zip
>
>
> We are using Derby Network Server in a DataPower appliance.  The underlying OS is Linux-based.  The Derby server is accessed only by Java processes running on the same appliance.  We are noticing that the Derby server process is running into OutOfMemoryErrors.  After the OOM errors, connection requests to the server fail and CPU usage reaches 100%.  We are also noticing deadlocks reported in the logs.  Please help identify the cause and how the issue can be resolved.


