Posted to dev@jackrabbit.apache.org by Vishal Shukla <vi...@gmail.com> on 2011/12/26 16:05:15 UTC

Long Blocks In Jackrabbit Clustering

Hi all,


We are currently using the database journal in our Jackrabbit cluster. We have
a high volume of document uploads going through all the nodes, including
concurrent users. We are facing long blocks on the AXDNA_DMS_JRNL_GLOBAL_REVISION
table. These blocks typically lasted from 10 to 45 minutes, and documents
could not be uploaded during that time. Short blocks of a few seconds were
also observed very frequently.


Blocks are observed for the queries below:

- update XXXX_DMS_JRNL_GLOBAL_REVISION set REVISION_ID = REVISION_ID + 1

- select REVISION_ID, JOURNAL_ID, PRODUCER_ID, REVISION_DATA from
XXXX_DMS_JRNL_JOURNAL where REVISION_ID > 8806 order by REVISION_ID


Using the NOLOCK hint with the above SELECT query did not help reduce the
number of blocks or their duration.
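
To be precise, the hint was applied roughly along these lines (the table
prefix and the revision value are only the placeholders from the query
quoted above):

- select REVISION_ID, JOURNAL_ID, PRODUCER_ID, REVISION_DATA from
XXXX_DMS_JRNL_JOURNAL with (NOLOCK) where REVISION_ID > 8806
order by REVISION_ID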


We also tried FileJournal-based clustering. It reduced the DB blocks
mentioned above, but we still faced blocks (possibly file locks), i.e.
documents could not be uploaded for around 10-15 minutes. On average, 1-2
such blocks were observed per day.
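
For context, the cluster section of our repository.xml follows the usual
FileJournal pattern; a rough sketch (with a placeholder node id and
placeholder paths, not our real values) looks like this:

<Cluster id="node1" syncDelay="2000">
	<Journal class="org.apache.jackrabbit.core.journal.FileJournal">
		<!-- local revision counter of this cluster node (placeholder path) -->
		<param name="revision" value="${rep.home}/revision.log"/>
		<!-- journal directory shared by all cluster nodes (placeholder path) -->
		<param name="directory" value="/shared/journal"/>
	</Journal>
</Cluster>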


It would be great if someone could suggest a workaround to get rid of
this issue or at least reduce the block durations.

Thanks & Regards,
-- 
Vishal Shukla
SCJP, SCWCD, SCBCD
Cybage Softwares Pvt. Ltd.

Re: Long Blocks In Jackrabbit Clustering

Posted by vishal1shukla2 <vi...@gmail.com>.
Hi Jukka,

Thanks for the prompt response.

We are using the file data store to store binaries. Please find our
repository.xml configuration below.


<?xml version="1.0" encoding="ISO-8859-1"?>

<Repository>

	<DataSources>
		<DataSource name="xxDmsPool">
			<!-- data source parameters omitted -->
		</DataSource>
	</DataSources>

	<Cluster id="cluster1">
		<Journal class="org.apache.jackrabbit.core.journal.FileJournal">
			<!-- journal parameters omitted -->
		</Journal>
	</Cluster>

	<DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
		<!-- data store parameters omitted -->
	</DataStore>

	<FileSystem class="org.apache.jackrabbit.core.fs.db.MSSqlFileSystem">
		<!-- file system parameters omitted -->
	</FileSystem>

	<Security appName="com.xx.dms">
		<AccessManager
			class="org.apache.jackrabbit.core.security.simple.SimpleAccessManager" />
	</Security>

	<Workspaces rootPath="${rep.home}/workspaces"
		defaultWorkspace="axdms" />

	<Workspace name="${wsp.name}">
		<FileSystem class="org.apache.jackrabbit.core.fs.db.MSSqlFileSystem">
			<!-- file system parameters omitted -->
		</FileSystem>
		<PersistenceManager
			class="org.apache.jackrabbit.core.persistence.pool.MSSqlPersistenceManager">
			<!-- persistence manager parameters omitted -->
		</PersistenceManager>
		<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
			<!-- search index parameters omitted -->
		</SearchIndex>
	</Workspace>

	<Versioning rootPath="${rep.home}/versions">
		<FileSystem class="org.apache.jackrabbit.core.fs.db.MSSqlFileSystem">
			<!-- file system parameters omitted -->
		</FileSystem>
		<PersistenceManager
			class="org.apache.jackrabbit.core.persistence.pool.MSSqlPersistenceManager">
			<!-- persistence manager parameters omitted -->
		</PersistenceManager>
	</Versioning>
</Repository>



In our deployment, Jackrabbit is bundled inside the application EAR file
itself, and we are running on the JBoss 4.0 application server.

Please let me know if you need any other details from my side.

Thanks & Regards,
Vishal Shukla

--
View this message in context: http://jackrabbit.510166.n4.nabble.com/Long-Blocks-In-Jackrabbit-Clustering-tp4240133p4242018.html
Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.

Re: Long Blocks In Jackrabbit Clustering

Posted by Jukka Zitting <ju...@gmail.com>.
Hi,

On Mon, Dec 26, 2011 at 5:05 PM, Vishal Shukla <vi...@gmail.com> wrote:
> We are currently using database journal in the jackrabbit cluster. We have
> high amount document upload through all the nodes, which includes concurrent
> users.

Jackrabbit currently only supports a single concurrent write operation
even in a clustered environment, which is most likely the cause of the
delays you're seeing.

If you're uploading large binaries, you should be able to
significantly reduce this bottleneck by using the data store feature
[1], which moves all binary uploads outside the big cluster write lock.

[1] http://wiki.apache.org/jackrabbit/DataStore
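
For example, a file-based data store is typically enabled with a snippet
along these lines in repository.xml (the path and size below are only
placeholder values, not a recommendation):

<DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
	<!-- placeholder directory where binaries are stored outside the persistence manager -->
	<param name="path" value="${rep.home}/datastore"/>
	<!-- binaries smaller than this many bytes are kept inline (placeholder value) -->
	<param name="minRecordLength" value="100"/>
</DataStore>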

BR,

Jukka Zitting