Posted to dev@jackrabbit.apache.org by "aasoj (JIRA)" <ji...@apache.org> on 2010/02/03 11:45:28 UTC
[jira] Created: (JCR-2483) Out of memory error while adding a new host due to large number of revisions
Out of memory error while adding a new host due to large number of revisions
----------------------------------------------------------------------------
Key: JCR-2483
URL: https://issues.apache.org/jira/browse/JCR-2483
Project: Jackrabbit Content Repository
Issue Type: Improvement
Components: clustering
Environment: MySQL DB. 512 MB memory allocated to java app.
Reporter: aasoj
In a cluster deployment, revisions are saved in the journal table in the DB. Over time a huge number of revisions can accumulate (around 70,000 in our test). When a new host is added to the cluster, it tries to read all the revisions in a single query, which leads to the following error:
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2931)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2871)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3414)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:910)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1405)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2816)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:467)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2510)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1746)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2135)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:995)
at org.apache.jackrabbit.core.journal.DatabaseJournal.getRecords(DatabaseJournal.java:460)
at org.apache.jackrabbit.core.journal.AbstractJournal.doSync(AbstractJournal.java:201)
at org.apache.jackrabbit.core.journal.AbstractJournal.sync(AbstractJournal.java:188)
at org.apache.jackrabbit.core.cluster.ClusterNode.sync(ClusterNode.java:329)
at org.apache.jackrabbit.core.cluster.ClusterNode.start(ClusterNode.java:270)
This can also happen to an existing host in the cluster when the number of revisions returned is very high.
Possible solutions:
1. Clean old revisions using the janitor thread: this may be good for new hosts, but it will fail when the sync delay is high (a few hours) and the number of updates from existing hosts in the cluster is high.
2. Increase the memory allocated to the Java process: this is not always a feasible option.
3. Limit the number of updates read from the DB in any one sync cycle.
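Solution 3 amounts to replacing the single unbounded read with a keyset-paginated loop (e.g. "WHERE REVISION_ID > ? ORDER BY REVISION_ID LIMIT ?"), so that only one batch of rows is ever held in memory. A minimal sketch of that idea, simulated against an in-memory list instead of a real JDBC result set; the class and method names here are illustrative, not Jackrabbit's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of solution 3: read journal revisions in bounded
// batches instead of one unbounded query. fetchBatch() stands in for a
// "SELECT ... WHERE REVISION_ID > ? ORDER BY REVISION_ID LIMIT ?" query.
public class BatchedRevisionReader {

    static List<Long> fetchBatch(List<Long> journal, long after, int limit) {
        List<Long> batch = new ArrayList<>();
        for (long rev : journal) {          // journal is assumed ordered by revision id
            if (rev > after) {
                batch.add(rev);
                if (batch.size() == limit) break;
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        // Stand-in for a journal table that has grown to ~70,000 revisions.
        List<Long> journal = new ArrayList<>();
        for (long i = 1; i <= 70_000; i++) journal.add(i);

        long lastProcessed = 0;             // the host's local revision counter
        List<Long> batch;
        while (!(batch = fetchBatch(journal, lastProcessed, 1024)).isEmpty()) {
            // Apply each revision, then advance the counter; at most
            // 1024 rows are ever held in memory at once.
            lastProcessed = batch.get(batch.size() - 1);
        }
        System.out.println("processed up to revision " + lastProcessed);
    }
}
```

With a real MySQL driver, the LIMIT clause (or a streaming result set) keeps the driver from buffering the entire result, which is where the OutOfMemoryError in the stack trace above originates.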
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
Re: [jira] Created: (JCR-2483) Out of memory error while adding a new host due to large number of revisions
Posted by rabbeet <wi...@trivantisdev.com>.
Hi, this fix is only available as a patch. I am hoping it will be included in a
future version. The current implementation does not work for us: we have
over 1 million journal records, and the current code tries to load all of them
into memory when we switch off clustering and rebuild the index.
--
Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.
[jira] Updated: (JCR-2483) Out of memory error while adding a new host due to large number of revisions
Posted by "aasoj (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/JCR-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
aasoj updated JCR-2483:
-----------------------
Attachment: patch
[jira] Updated: (JCR-2483) Out of memory error while adding a new host due to large number of revisions
Posted by "aasoj (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/JCR-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
aasoj updated JCR-2483:
-----------------------
Fix Version/s: 1.6.0
Status: Patch Available (was: Open)
The patch adds a new configuration option to DatabaseJournal, syncLimit, whose value caps the number of revisions returned per query. AbstractJournal's doSync() is modified to loop until all revisions have been consumed.
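The doSync() loop described above can be sketched as follows: each pass asks getRecords() for at most syncLimit records, applies them, and stops once a pass returns fewer records than the cap, meaning the journal is exhausted. The names mirror the issue text, but this is an illustrative sketch of the contract, not the actual patch:

```java
import java.util.List;

// Hypothetical sketch of the patched sync contract: getRecords() is assumed
// to honour a syncLimit cap, so doSync() must loop until a pass returns
// fewer records than the cap (i.e. no revisions remain to be read).
public class SyncLoopSketch {

    interface Journal {
        // Returns at most 'syncLimit' records created after 'startRevision'.
        List<Long> getRecords(long startRevision, int syncLimit);
    }

    static long doSync(Journal journal, long localRevision, int syncLimit) {
        while (true) {
            List<Long> records = journal.getRecords(localRevision, syncLimit);
            for (long rev : records) {
                localRevision = rev;   // consume record, advance local revision
            }
            if (records.size() < syncLimit) {
                return localRevision;  // journal exhausted for this cycle
            }
        }
    }

    public static void main(String[] args) {
        // Journal stub holding 2,500 revisions; syncLimit caps each read at 1,000,
        // so the sync completes in three bounded passes (1000 + 1000 + 500).
        Journal stub = (start, limit) -> {
            List<Long> out = new java.util.ArrayList<>();
            for (long r = start + 1; r <= 2500 && out.size() < limit; r++) out.add(r);
            return out;
        };
        System.out.println("synced to revision " + doSync(stub, 0, 1000));
    }
}
```

The termination condition (records.size() < syncLimit) is what makes the loop safe: a full batch may mean more records remain, while a short batch guarantees the host has caught up.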
[jira] Updated: (JCR-2483) Out of memory error while adding a new host due to large number of revisions
Posted by "Jukka Zitting (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/JCR-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jukka Zitting updated JCR-2483:
-------------------------------
Affects Version/s: 1.6.0
Fix Version/s: (was: 1.6.0)