Posted to users@jackrabbit.apache.org by Nilshan <ni...@indralok.com> on 2009/03/30 17:04:37 UTC

JackRabbit_Clustering

Hello All,

I have implemented the Jackrabbit clustering approach to create two clusters
for two different processes: one for the creation of rules (Cluster 1) and
another for their execution (Cluster 2). Both clusters share a database (DB2)
where the rules are stored.


The problem I face during execution is that the execution process does not
receive the latest updates from the rule creation process: if I change a rule's
status, the change is not reflected immediately in the execution process.
Only after a redeploy (server restart) does the system execute as expected,
with the latest changes applied by Cluster 1.

My configuration is as follows:
1. Shared DB (persistence manager backend): IBM DB2
2. Drools version: 5.x
3. Jackrabbit version: 1.4.x

My clustering configuration for both processes is shown below.

Configuration for Cluster 1 -----------START-------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<Repository>

	<Cluster id="Rule Creation">
	
  	    <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    		
    		
  		</Journal>
	</Cluster>

	<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
        
    </FileSystem>
        
	<Security appName="Jackrabbit">
		<AccessManager
class="org.apache.jackrabbit.core.security.SimpleAccessManager" />
			<LoginModule
class="org.apache.jackrabbit.core.security.SimpleLoginModule">
				
			</LoginModule>
	</Security>
	
	<Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="default" />	
	<Workspace name="${wsp.name}">
		<PersistenceManager
class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
			
	 		
			
			
	 		 				
	 		
			
		</PersistenceManager>
		
		<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
            
        </FileSystem>
        
        <!--<FileSystem
class="org.apache.jackrabbit.core.fs.db.DB2FileSystem">
            
	 		
			
			
	 		 				
	 		
        </FileSystem>-->   
        
		<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        
        
                
        
    </SearchIndex>            	
	</Workspace>
	
	<Versioning rootPath="${rep.home}/version">
	
		<PersistenceManager
class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
			
	 		
			
			
	 		 				
	 		
						
		</PersistenceManager>
		
		<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
            
        </FileSystem>
        
        <!--<FileSystem
class="org.apache.jackrabbit.core.fs.db.DB2FileSystem">
            
	 		
			
			
	 		 				
	 		
        </FileSystem>-->
        
		
	</Versioning>    
	<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        
        
                
        
    </SearchIndex>
            
</Repository>
Configuration for Cluster 1 -----------END-------------------------------

Configuration for Cluster 2 -----------START-------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<Repository>

	<Cluster id="Rule Execution">
	
  	    <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    		
    		
  		</Journal>
	</Cluster>

	<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
        
    </FileSystem>
        
	<Security appName="Jackrabbit">
		<AccessManager
class="org.apache.jackrabbit.core.security.SimpleAccessManager" />
			<LoginModule
class="org.apache.jackrabbit.core.security.SimpleLoginModule">
				
			</LoginModule>
	</Security>
	
	<Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="default" />	
	<Workspace name="${wsp.name}">
		<PersistenceManager
class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
			
	 		
			
			
	 		 				
	 		
			
		</PersistenceManager>
		
		<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
            
        </FileSystem>
        
        <!--<FileSystem
class="org.apache.jackrabbit.core.fs.db.DB2FileSystem">
            
	 		
			
			
	 		 				
	 		
        </FileSystem>-->   
        
		<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        
        
                
        
    </SearchIndex>            	
	</Workspace>
	
	<Versioning rootPath="${rep.home}/version">
	
		<PersistenceManager
class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
			
	 		
			
			
	 		 				
	 		
						
		</PersistenceManager>
		
		<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
            
        </FileSystem>
        
        <!--<FileSystem
class="org.apache.jackrabbit.core.fs.db.DB2FileSystem">
            
	 		
			
			
	 		 				
	 		
        </FileSystem>-->
        
		
	</Versioning>    
	<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        
        
                
        
    </SearchIndex>
            
</Repository>
Configuration for  Cluster 2 -----------END-------------------------------

Both processes are running on the same machine, in different JVMs.

Is there a specific Journal for the DB2 database, as there is one for Oracle?
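
For reference: as far as I know there is no DB2-specific journal class in
Jackrabbit 1.4; the generic org.apache.jackrabbit.core.journal.DatabaseJournal
is normally pointed at DB2 via its JDBC driver, which is essentially what is
attempted later in this thread. A minimal sketch of such a cluster section,
where the cluster id, connection details and prefix are placeholders rather
than values from this thread:

    <Cluster id="node1" syncDelay="2000">
        <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
            <param name="revision" value="${rep.home}/revision.log" />
            <param name="driver" value="com.ibm.db2.jcc.DB2Driver" />
            <param name="url" value="jdbc:db2://dbhost:50000/MYDB" />
            <param name="user" value="dbuser" />
            <param name="password" value="dbpassword" />
            <param name="schema" value="db2" />
            <param name="schemaObjectPrefix" value="JOURNAL_" />
        </Journal>
    </Cluster>

Every node that should see the others' changes must point at the same journal
table and use a distinct cluster id; the revision file stays local to each node.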

Please suggest a feasible solution to the above problem.

Your help would be highly appreciated.

Thanks in advance.

Nilshan and Arpan.




Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Ian,


Thank You.

I agree with you.

My clustering configuration works fine with the FileJournal for rule creation:
when I create a new rule in the Rule Creation Cluster, that rule appears in the
Rule Execution Cluster (Cluster 2) and the execution works fine. The problem is
that when I update a rule, the change is not reflected in the Execution Cluster.
For example, when I set a rule's status to InActive, the change is not reflected
in the Execution Cluster and that rule keeps getting executed even though it is
InActive in the Rule Creation Cluster. I feel that both clusters should be
synchronized at any instant, even for minor changes (please correct me if I am
wrong).

Also, I have noticed that entries are written to the revision log whenever I
create or update a rule, so I fail to understand why the two clusters still
diverge when the changes are clearly logged.

Because of this problem we tried to move to the DatabaseJournal, but
unfortunately we ran into the connection error discussed in the previous
message.

Please suggest a good way out of this.

Thanks........
Nilshan And Arpan









Re: JackRabbit_Clustering

Posted by Ian Boston <ia...@googlemail.com>.
The setup looks OK. The key things are:
1. revision is a local file that stores the revision that the current
node is at; I have a feeling this is a binary integer, so you can
use od (octal dump) to look at it and see where an app server node is at.
2. In the DB journal the table is shared; in the file journal the
location is shared, and this is the communication mechanism between app
server nodes within the cluster.

In the log files you should see ClusterNode revisions flowing past
when each item is saved (or session.save(), which does a
session.getRootNode().save()).
You should also see those ClusterNode revisions being replayed on
other nodes. (Are you doing item.save(), session.save(), etc.? Dumb
question.)
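
For example, in JCR a change only reaches the persistence manager and the
cluster journal once it is saved. A minimal sketch, assuming "repository" is an
existing javax.jcr.Repository; the credentials, node path and property name are
made up for illustration:

    import javax.jcr.Node;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    Session session = repository.login(
            new SimpleCredentials("admin", "admin".toCharArray()));
    try {
        Node rule = session.getRootNode().getNode("rules/myRule"); // hypothetical path
        rule.setProperty("status", "InActive");
        session.save(); // persists the change and appends a record to the cluster journal
    } finally {
        session.logout();
    }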

Things that you might try:
FileJournal
monitor the journal file on both nodes with tail (but be prepared to  
see junk and a broken terminal), you should see it being appended to  
on both nodes.

DBJournal
do a select count(*) from the journal table on both nodes to see that  
records are being appended on each node.

Also for the DB journal you could turn on sql logging or use the  
log4jdbc driver to log statements.

HTH
Ian




On 31 Mar 2009, at 05:56, Nilshan wrote:

>
> Hello Ian ,
>
> Thanks for your quick reply...
>
> Maybe there is some problem with the text editor of this forum, because
> some tags are not displayed by it.
>
> I think it dropped the <param> tags.
>
> For Cluster 1
> <Cluster id="Rule Creation">
>     <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
>         <param name="revision" value="c:/myjournal/revision.log" />
>         <param name="directory" value="c:/myjournal" />
>     </Journal>
> </Cluster>
>
> For Cluster 2
> <Cluster id="Rule Execution.">
>     <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
>         <param name="revision" value="c:/myjournal/revision_New.log" />
>         <param name="directory" value="c:/myjournal" />
>     </Journal>
> </Cluster>
>
> I tried a shared journal (as it is a must and clearly suggested on the
> clustering wiki), but it didn't work.
>
> I used the DatabaseJournal; unfortunately it gave me an "Unable to create
> connection" error.
>
> The DatabaseJournal configuration I used is as below.
>
> <Cluster id="Rule Creation">
>     <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
>         <param name="revision" value="${rep.home}/revision.log" />
>         <param name="driver" value="com.ibm.db2.jcc.DB2Driver" />
>         <param name="url" value="jdbc:db2://192.168.1.36:50000/Database" />
>         <param name="user" value="username" />
>         <param name="password" value="password" />
>         <param name="schema" value="db2" />
>         <param name="schemaObjectPrefix" value="RuleMgmt_Ver_" />
>     </Journal>
> </Cluster>
>
> Please look at the above configuration and let me know if I have made any
> mistake.
>
> The following is the Error log while using DatabaseJournal..
>
> **********************************************************
> javax.jcr.RepositoryException: Unable to create connection.: Unable to create connection.
>     at org.apache.jackrabbit.core.RepositoryImpl.createClusterNode(RepositoryImpl.java:650)
>     at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:288)
>     at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:557)
>     at org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:245)
>     at org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:265)
>     at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:333)
>     at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:363)
>     at com.amicas.rulemanagement.RuleAssetManager.initializeRepository(RuleAssetManager.java:555)
>     at com.amicas.rulemanagement.RuleAssetManager.getSession(RuleAssetManager.java:517)
>     at com.amicas.rulemanagement.RuleAssetManager.getServiceImpl(RuleAssetManager.java:494)
>     at com.amicas.rulemanagement.RuleAssetManager.categoryExists(RuleAssetManager.java:420)
>     at com.amicas.rulemanagement.RuleAssetManager.initializeRuleRepoAndMetaConfiguration(RuleAssetManager.java:1718)
>     at com.amicas.rulemanagement.RuleAssetManager.getRuleAssetManager(RuleAssetManager.java:167)
>     at com.amicas.gwt.rulemanagement.servlet.RuleServiceServlet.getAllRuleByCategory(RuleServiceServlet.java:256)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:528)
>     at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:265)
>     at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:187)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
>     at org.springframework.orm.hibernate.support.OpenSessionInViewFilter.doFilterInternal(OpenSessionInViewFilter.java:172)
>     at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
> **********************************************************
>
> I found specific database journals for MySQL and Oracle. Is there a
> DB2-specific journal?
>
>
> Thanks,
> Nilshan and Arpan.
>
> *******************************************************
>
> Ian Boston-3 wrote:
>>
>> A few observations:
>>
>> You appear to be using the File Journal,
>> but I can't see where the shared directory for the file journal is
>> defined.
>>
>> I think, (please correct if wrong) it would be normal to use a DB
>> Journal if using a DB persistence manager.
>>
>> Ian
>>
> http://www.nabble.com/file/p22798352/Cluster_1.xml Cluster_1.xml


Re: How to implement LIKE statements with Xpath?

Posted by Marcel Reutegger <ma...@gmx.net>.
On Tue, Mar 31, 2009 at 16:01, Kurz Wolfgang <wo...@gwvs.de> wrote:
> I want the result to return everything that has test in it like test1, test2, test3
>
> I thought contains would do that but apparently it doesn’t:-)

If you want contains() to match prefixes, you need to tell it to do so
by using a wildcard:

contains(., "test*")

regards
 marcel
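
A minimal sketch of running such a query through the JCR API; the property
name "title" and the existing "session" variable are placeholders, not taken
from this thread:

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.query.Query;
    import javax.jcr.query.QueryManager;
    import javax.jcr.query.QueryResult;

    QueryManager qm = session.getWorkspace().getQueryManager();
    Query q = qm.createQuery("//*[jcr:contains(@title, 'test*')]", Query.XPATH);
    // or, for a SQL-LIKE style prefix match on the property value:
    // Query q = qm.createQuery("//*[jcr:like(@title, 'test%')]", Query.XPATH);
    QueryResult result = q.execute();
    for (NodeIterator it = result.getNodes(); it.hasNext();) {
        Node n = it.nextNode();
        System.out.println(n.getPath());
    }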

Re: How to implement LIKE statements with Xpath?

Posted by Dennis van der Laan <d....@rug.nl>.
Hi Kurz,

This should work using jcr:like instead of jcr:contains. So using 
jcr:like(., "test%") should return everything starting with 'test'.

Best regards,
Dennis
> Hello everyone,
>
> i am implementing a full textual search with xpath.
>
> So far I have been using contains(.,"SearchString") but it doesn’t really get me the right results.
>
> What I would like to have is a LIKE statement like in SQL
>
> So for example if I am searching for "test"
>
> I want the result to return everything that has test in it like test1, test2, test3
>
> I thought contains would do that but apparently it doesn’t:-)
>
> Anyone have an idea for me how I could get this to work?
>
> Thx a lot in advance!
>
> Wolfgang
>
>   


-- 
Dennis van der Laan


How to implement LIKE statements with Xpath?

Posted by Kurz Wolfgang <wo...@gwvs.de>.
Hello everyone,

I am implementing a full-text search with XPath.

So far I have been using contains(.,"SearchString") but it doesn’t really get me the right results.

What I would like to have is a LIKE statement like in SQL

So for example if I am searching for "test"

I want the result to return everything that has test in it like test1, test2, test3

I thought contains would do that but apparently it doesn’t:-)

Anyone have an idea for me how I could get this to work?

Thx a lot in advance!

Wolfgang


Re: JackRabbit_Clustering

Posted by Ian Boston <ia...@googlemail.com>.
Two observations/questions:
When you say Rule Creation Cluster and Rule Execution Cluster, are
these two separate sets of JVMs running Jackrabbit with the same DB
backend, or does each *set* of JVMs have its own DB?

You can have 1 DB, 1 DataStore, 1..n JVMs configured for one purpose
and 1..n JVMs configured for another purpose as a *single*
Jackrabbit cluster.

You cannot have 1 DB, 1 DataStore, 1..n JVMs as one cluster configured
for one purpose and another 1 DB, 1 DataStore and 1..n JVMs as another
cluster configured for another purpose *and* share the journal between
both clusters.
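
In other words, a sketch of the single-cluster setup (the cluster ids below are
placeholders; the journal paths are the ones used elsewhere in this thread):
both JVMs point at the same shared journal directory, but each keeps its own
local revision file and its own unique cluster id.

    <!-- repository.xml of the rule-creation JVM -->
    <Cluster id="creation-node">
        <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
            <param name="revision" value="${rep.home}/revision.log" />
            <param name="directory" value="//Indralok1/Repo_Journal" />
        </Journal>
    </Cluster>

    <!-- repository.xml of the rule-execution JVM: same shared directory, its own revision file -->
    <Cluster id="execution-node">
        <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
            <param name="revision" value="${rep.home}/revision.log" />
            <param name="directory" value="//Indralok1/Repo_Journal" />
        </Journal>
    </Cluster>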

--------------------

Second observation: the Rule Execution Jackrabbit instance looks like
it hasn't had all the node types and possibly namespaces registered,
so it doesn't understand revisions from the Rule Creation Cluster
(which contain references to revisions and namespaces).

Ian


Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Ian,

We tried using the shared network drive approach:

<Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <param name="revision" value="${rep.home}/revision.log" />
    <param name="directory" value="//Indralok1/Repo_Journal" />
</Journal>

The revision file and journal.log appear in the shared drive, and as we
update or create any new rule from the Rule Creation Cluster (Cluster 1),
journal.log gets updated. This seems to be working fine on Cluster 1.

Now, when the Rule Execution Cluster (Cluster 2) tries to execute the
rules, it throws an error saying "Unable to read revision '2237'".

Below is the error log:-

******************************************************
2009-04-01 11:07:38,389 ERROR [org.apache.jackrabbit.core.cluster.ClusterNode] (CheckResources) Unable to read revision '2237'.
org.apache.jackrabbit.core.journal.JournalException: Parse error while reading node type definition.
    at org.apache.jackrabbit.core.journal.AbstractRecord.readNodeTypeDef(AbstractRecord.java:256)
    at org.apache.jackrabbit.core.cluster.ClusterNode.consume(ClusterNode.java:1026)
    at org.apache.jackrabbit.core.journal.AbstractJournal.doSync(AbstractJournal.java:198)
    at org.apache.jackrabbit.core.journal.AbstractJournal.sync(AbstractJournal.java:173)
    at org.apache.jackrabbit.core.cluster.ClusterNode.sync(ClusterNode.java:303)
    at org.apache.jackrabbit.core.cluster.ClusterNode.start(ClusterNode.java:249)
    at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:319)
    at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:557)
    at org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:245)
    at org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:265)
    at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:333)
    at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:363)

******************************************************

Are we missing something? If so, please let us know.

We assume that Cluster 2 is not able to read the revision that
Cluster 1 stores.

Can you please tell us what the problem is?

Thanks........

Nilshan and Arpan


Hello Ian and Thomas,
 Thank you for your quick reply. 
We will definitely try the suggested option to share the drive for the
clusters..
We will get in touch  soon..
Thanks a ton.

:rules:

Nilshan and Arpan.




Ian Boston wrote:
> 
> I see Thomas has picked up the thread, and he knows far more about  
> this than I do, so listen to him more than me.
> 
> 1. I see that the journal directory is on the c: drive; does that mean
> that both Jackrabbit instances are running on the same physical
> machine?
> 2. It would be normal to put the local revision file with the local  
> repository storage eg
> 
> 
> and the journal in a shared space.
> 
> 
> 
> You may be getting name clashes between nodes as a result of using the  
> same directory for both the private and the public shared locations.
> 
> ---
> 
> The failure on the DB Journal looks like the DB connection is being  
> refused by the db server (url, username, password etc ).
> 
> "Unable to create connection.: Unable to
> create connection." is probably from the JDBC driver.
> 
> Ian
> 
> 
> On 31 Mar 2009, at 10:41, Nilshan wrote:
> 
>>
>> Hello Thomas,
>>
>> Thanks for the reply...
>>
>> Using FileJournal, we are facing the problem that is mentioned in the
>> previous message.....
>>
>> Using DatabaseJournal, we are facing the following error:-
>>
>>
>> The following is the Error log while using DatabaseJournal..
>>
>> **********************************************************
>> javax.jcr.RepositoryException: Unable to create connection.: Unable to create connection.
>> [same stack trace as quoted earlier in this thread]
>> **********************************************************
>>
>> My configuration for the DatabaseJournal is also mentioned in the  
>> previous
>> message....
>>
>> I hope that I have provided sufficient information to you.
>> If we are missing anything, let us know....
>>
>>
>> Thank You,
>> Nilshan and Arpan.
>>
>>
>>
>>
>>
>> Thomas Müller-2 wrote:
>>>
>>> Hi,
>>>
>>>> I tried shared Journal (as it is must and clearly suggested on  
>>>> clustering
>>>> wiki) but it didn't work.
>>>
>>> Could you tell us what the problem was?
>>>
>>> Regards,
>>> Thomas
>>>
>>>
>>
>>
> 
> 
> 
> 



Re: JackRabbit_Clustering

Posted by Thomas Müller <th...@day.com>.
Hi,

> Thomas may be able to give more insight, but this is what my gut is saying
> is wrong.

I'm sorry, but I don't have more insight about this.

Regards,
Thomas

Re: JackRabbit_Clustering

Posted by Ian Boston <ie...@tfd.co.uk>.
AFAIK, CND or node types will replicate over a cluster, provided
a) all the namespaces referenced in the CND files are present on all of the
cluster nodes, and
b) the node types on each node don't conflict.

I would set the default logging level to debug in both JVMs and start
up a clean cluster with the JVMs in different orders (e.g. A first,
then B, on a clean repo; then B first, then A), looking for exceptions around
the repository initialization.

I would suspect that you will get some NodeType or namespace related
errors at startup that are preventing one cluster from understanding
the events from the other.
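
For reference, a sketch of registering node types from a CND file with the
Jackrabbit 1.x API; the class name, node type name and resource path are
illustrative only, and the Drools repository configurator quoted below
presumably does something similar internally:

    import java.io.InputStream;
    import javax.jcr.Workspace;
    import org.apache.jackrabbit.api.JackrabbitNodeTypeManager;

    Workspace ws = session.getWorkspace();
    JackrabbitNodeTypeManager ntm =
            (JackrabbitNodeTypeManager) ws.getNodeTypeManager();
    InputStream cnd = SomeClass.class.getResourceAsStream(
            "/node_type_definitions/rule_node_type.cnd");
    if (!ntm.hasNodeType("drools:rule")) { // node type name is a guess, for illustration only
        ntm.registerNodeTypes(cnd, JackrabbitNodeTypeManager.TEXT_X_JCR_CND);
    }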

Thomas may be able to give more insight, but this is what my gut is  
saying is wrong.
HTH
Ian

On 27 Apr 2009, at 14:15, Nilshan wrote:

>
> Hello Ian and Thomas ,
>
> I am really sorry that I am replying after a long time, actually I  
> was busy
> with some other component.
>
> When I tried clustering without Drools 5.x it worked absolutely fine
> (without a shared network drive),
> e.g. <param name="directory" value="c:/Repo_Journal" />. And this is working
> fine.
>
> Both the clusters are on the same machine but different JVMs.
>
> Now, when I introduced Drools into the clustering setup it started failing
> ("Unable to read revision").
>
> I think this is because...
>
> Accessing Repository using ...
>
> 1. JCRRepositoryConfigurator config = new JackrabbitRepositoryConfigurator();
> 2. RuleAssetManager.repository = config.getJCRRepository(REPOSITORY_HOME_DIR);
> 3. repoSession = repository.login();
> 4. config.setupRulesRepository(repoSession);
>
> At line 4, setupRulesRepository() is called, which initially (the very first
> time) registers the different CND files,
>
> i.e
>
> // Note: the order in which they are registered actually does matter!
> this.registerNodeTypesFromCndFile("/node_type_definitions/tag_node_type.cnd", ws);
> this.registerNodeTypesFromCndFile("/node_type_definitions/state_node_type.cnd", ws);
> this.registerNodeTypesFromCndFile("/node_type_definitions/versionable_node_type.cnd", ws);
> this.registerNodeTypesFromCndFile("/node_type_definitions/versionable_asset_folder_node_type.cnd", ws);
> this.registerNodeTypesFromCndFile("/node_type_definitions/rule_node_type.cnd", ws);
> this.registerNodeTypesFromCndFile("/node_type_definitions/rulepackage_node_type.cnd", ws);
>
> These CND files contain special characters (like *) introduced by one
> cluster which are not properly interpreted (understood) by the second
> cluster. I feel that this is the reason the clusters are not working in
> harmony.
>
>
> Thanks,
> Nilshan.
>
>


Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Ian and Thomas ,

I am really sorry that I am replying after a long time; I was busy with some
other component.

When I tried clustering without Drools 5.x, it worked absolutely fine (without
a shared network drive), e.g. with <param name="directory" value="c:/Repo_Journal" />.

Both the clusters are on the same machine but in different JVMs.

Now, when I introduced Drools into the clustered setup, it started failing
("Unable to read revision").

I think this is because...

Accessing Repository using ...

1.JCRRepositoryConfigurator config = new JackrabbitRepositoryConfigurator();
2.RuleAssetManager.repository =
config.getJCRRepository(REPOSITORY_HOME_DIR);
3.repoSession = repository.login();
4.config.setupRulesRepository(repoSession);	

At line 4, setupRulesRepository() registers the different CND files (only at
the very first startup), i.e.:

//Note: the order in which they are registered actually does matter!
this.registerNodeTypesFromCndFile("/node_type_definitions/tag_node_type.cnd", ws);
this.registerNodeTypesFromCndFile("/node_type_definitions/state_node_type.cnd", ws);
this.registerNodeTypesFromCndFile("/node_type_definitions/versionable_node_type.cnd", ws);
this.registerNodeTypesFromCndFile("/node_type_definitions/versionable_asset_folder_node_type.cnd", ws);
this.registerNodeTypesFromCndFile("/node_type_definitions/rule_node_type.cnd", ws);
this.registerNodeTypesFromCndFile("/node_type_definitions/rulepackage_node_type.cnd", ws);

These CND files contain special characters (like '*') introduced by one
cluster which are not properly interpreted by the second cluster. I feel this
is the reason the clusters are not working in harmony.


Thanks,
Nilshan.


Hello Ian and Thomas,
 Thank you for your quick reply. 
We will definitely try the suggested option to share the drive for the
clusters..
We will get in touch  soon..
Thanks a ton.

:rules:

Nilshan and Arpan.






Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Ian and Thomas,
 Thank you for your quick reply. 
We will definitely try the suggested option to share the drive for the
clusters..
We will get in touch  soon..
Thanks a ton.

:rules:

Nilshan and Arpan.






Re: JackRabbit_Clustering

Posted by Ian Boston <ie...@tfd.co.uk>.
I see Thomas has picked up the thread, and he knows far more about  
this than I do, so listen to him more than me.

1. I see that the journal directory is on the c: drive; does that mean
that both Jackrabbit instances are running on the same physical
machine?
2. It would be normal to put the local revision file with the local
repository storage, e.g.
<param name="revision"
       value="${rep.home}/revision.log" />

and the journal in a shared space.

<param name="directory" value="z:/sharedsmbdrive/sharedjournal/" />

You may be getting name clashes between nodes as a result of using the  
same directory for both the private and the public shared locations.
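
For illustration, a Cluster element along those lines might look like the
sketch below (the id and paths are only placeholders; each node gets its own
id and its own revision file, while "directory" points to the same shared
location on every node):

<Cluster id="node1">
  <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <!-- local revision counter, private to this node -->
    <param name="revision" value="${rep.home}/revision.log" />
    <!-- journal directory, shared by every node in the cluster -->
    <param name="directory" value="z:/sharedsmbdrive/sharedjournal/" />
  </Journal>
</Cluster>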

---

The failure on the DB Journal looks like the DB connection is being  
refused by the db server (url, username, password etc ).

"Unable to create connection.: Unable to
create connection." is probably from the JDBC driver.

Ian




Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Thomas,

Thanks for the reply...

Using FileJournal, we are facing the problem that is mentioned in the
previous message.....

Using DatabaseJournal, we are facing the following error:- 


The following is the Error log while using DatabaseJournal.. 

********************************************************** 
javax.jcr.RepositoryException: Unable to create connection.: Unable to create connection.
 at org.apache.jackrabbit.core.RepositoryImpl.createClusterNode(RepositoryImpl.java:650)
 at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:288)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:557)
 at org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:245)
 at org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:265)
 at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:333)
 at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:363)
 at com.amicas.rulemanagement.RuleAssetManager.initializeRepository(RuleAssetManager.java:555)
 at com.amicas.rulemanagement.RuleAssetManager.getSession(RuleAssetManager.java:517)
 at com.amicas.rulemanagement.RuleAssetManager.getServiceImpl(RuleAssetManager.java:494)
 at com.amicas.rulemanagement.RuleAssetManager.categoryExists(RuleAssetManager.java:420)
 at com.amicas.rulemanagement.RuleAssetManager.initializeRuleRepoAndMetaConfiguration(RuleAssetManager.java:1718)
 at com.amicas.rulemanagement.RuleAssetManager.getRuleAssetManager(RuleAssetManager.java:167)
 at com.amicas.gwt.rulemanagement.servlet.RuleServiceServlet.getAllRuleByCategory(RuleServiceServlet.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:528)
 at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:265)
 at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:187)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
 at org.springframework.orm.hibernate.support.OpenSessionInViewFilter.doFilterInternal(OpenSessionInViewFilter.java:172)
 at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
********************************************************** 

My configuration for the DatabaseJournal is also mentioned in the previous
message....

I hope that I have provided sufficient information to you.
If we are missing anything, let us know....


Thank You,
Nilshan and Arpan.





Thomas Müller-2 wrote:
> 
> Hi,
> 
>> I tried shared Journal (as it is must and clearly suggested on clustering
>> wiki) but it didn't work.
> 
> Could you tell us what the problem was?
> 
> Regards,
> Thomas
> 
> 



Re: JackRabbit_Clustering

Posted by Thomas Müller <th...@day.com>.
Hi,

> I tried shared Journal (as it is must and clearly suggested on clustering
> wiki) but it didn't work.

Could you tell us what the problem was?

Regards,
Thomas

Re: JackRabbit_Clustering

Posted by Nilshan <ni...@indralok.com>.
Hello Ian ,

Thanks for your quick reply... 

Maybe there is some problem with the text editor of this forum, because some
tags are not displayed by it.

I think it dropped the <param> tags.

For Cluster 1
<Cluster id="Rule Creation">
    <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
        <param name="revision" value="c:/myjournal/revision.log" />
        <param name="directory" value="c:/myjournal" />
    </Journal>
</Cluster>

For Cluster 2
<Cluster id="Rule Execution.">
    <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
        <param name="revision" value="c:/myjournal/revision_New.log" />
        <param name="directory" value="c:/myjournal" />
    </Journal>
</Cluster>

I tried shared Journal (as it is must and clearly suggested on clustering
wiki) but it didn't work. 

I used DatabaseJournal, but unfortunately it was giving me an "Unable to
create connection" error.

Configuration using DatabaseJournal I used is as Below.

<Cluster id="Rule Creation">
	
  	    <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    		param name="revision" value="${rep.home}/revision.log" /
    		param name="driver" value="com.ibm.db2.jcc.DB2Driver"/
	 		param name="url" value="jdbc:db2://192.168.1.36:50000/Database" /
			param name="user" value="username" /			
                                       param name="password"
value="password" /	 		                          param name="schema"
value="db2"/ 				
	 		param name="schemaObjectPrefix" value="RuleMgmt_Ver_" /  		</Journal>
</Cluster>

Please look at the above configuration and let me know if I have made any
mistake.

The following is the Error log while using DatabaseJournal..

**********************************************************
javax.jcr.RepositoryException: Unable to create connection.: Unable to create connection.
 at org.apache.jackrabbit.core.RepositoryImpl.createClusterNode(RepositoryImpl.java:650)
 at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:288)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:557)
 at org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:245)
 at org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:265)
 at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:333)
 at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:363)
 at com.amicas.rulemanagement.RuleAssetManager.initializeRepository(RuleAssetManager.java:555)
 at com.amicas.rulemanagement.RuleAssetManager.getSession(RuleAssetManager.java:517)
 at com.amicas.rulemanagement.RuleAssetManager.getServiceImpl(RuleAssetManager.java:494)
 at com.amicas.rulemanagement.RuleAssetManager.categoryExists(RuleAssetManager.java:420)
 at com.amicas.rulemanagement.RuleAssetManager.initializeRuleRepoAndMetaConfiguration(RuleAssetManager.java:1718)
 at com.amicas.rulemanagement.RuleAssetManager.getRuleAssetManager(RuleAssetManager.java:167)
 at com.amicas.gwt.rulemanagement.servlet.RuleServiceServlet.getAllRuleByCategory(RuleServiceServlet.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:528)
 at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:265)
 at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:187)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
 at org.springframework.orm.hibernate.support.OpenSessionInViewFilter.doFilterInternal(OpenSessionInViewFilter.java:172)
 at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
**********************************************************

I found specific database journal support for MySQL and Oracle. Do we have
any DB2-specific journal?


Thanks,
Nilshan and Arpan.

*******************************************************

Ian Boston-3 wrote:
> 
> A few observations:
> 
> You appear to be using the File Journal,
> but I cant see where the shared directory for the file journal is  
> defined.
> 
> I think, (please correct if wrong) it would be normal to use a DB  
> Journal if using a DB persistence manager.
> 
> Ian
> 
http://www.nabble.com/file/p22798352/Cluster_1.xml Cluster_1.xml


Re: JackRabbit_Clustering

Posted by Ian Boston <ia...@googlemail.com>.
A few observations:

You appear to be using the File Journal,
but I can't see where the shared directory for the file journal is
defined.

I think (please correct me if I'm wrong) it would be normal to use a DB
Journal if you are using a DB persistence manager.
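
A DatabaseJournal configuration would look roughly like the sketch below
(driver, URL, credentials and prefix are placeholders for your DB2 setup;
the same element, with a distinct id and revision file per node, would go
into each node's repository.xml):

<Cluster id="node1">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <!-- local revision counter for this node -->
    <param name="revision" value="${rep.home}/revision.log" />
    <!-- shared journal tables, typically in the same database as the persistence manager -->
    <param name="driver" value="com.ibm.db2.jcc.DB2Driver" />
    <param name="url" value="jdbc:db2://dbhost:50000/mydb" />
    <param name="user" value="username" />
    <param name="password" value="password" />
    <param name="schema" value="db2" />
    <param name="schemaObjectPrefix" value="journal_" />
  </Journal>
</Cluster>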

Ian
