Posted to users@jackrabbit.apache.org by Ard Schrijvers <a....@hippo.nl> on 2007/11/08 18:14:58 UTC

RE: Jackrabbit XPath performance

Hello Kisu,

I have crossposted to the users list, because I think the place to
discuss it is over there. See my comments below:

> Kisu San wrote:
> 		QueryResult results = getQueryResults(query);
>
> 		NodeIterator it = results.getNodes();
> 		log.debug("Size is " + it.getSize());
> 		BulletinDTO dto = null;
> 		//while (it.hasNext()) {
> 		for (int i = 0; i < it.getSize(); i++) {
> 			Node n = it.nextNode(); // <== this is where fetching
> 			// the first record takes most of the time;
> 			// after that the loop runs fast
> 
> Is there anything wrong with NodeIterator?

Not that I am aware of, but I think you are hitting a problem that has
already been solved. Which version of JR are you using?
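
As an aside, the quoted loop bounds itself with it.getSize() rather than
the usual hasNext() check. Per the JCR 1.0 javadoc, RangeIterator.getSize()
(which NodeIterator inherits) is allowed to return -1 when the size is
unknown, so a hasNext()-driven loop is safer. A minimal sketch of that
pattern against a plain java.util.Iterator (NodeIterator follows the same
hasNext()/next() contract; the helper name is made up for illustration):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IterateDemo {
    // Walk an iterator using only hasNext()/next(), the same contract
    // NodeIterator offers via hasNext()/nextNode(). No dependence on
    // getSize(), which JCR permits to report -1 for lazy results.
    static int countWithHasNext(Iterator<String> it) {
        int count = 0;
        while (it.hasNext()) {
            it.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("a", "b", "c");
        System.out.println(countWithHasNext(nodes.iterator())); // 3
    }
}
```

With a real NodeIterator the loop body would read `Node n = it.nextNode();`
instead of `it.next();`, but the shape is the same.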

Off the top of my head, it had something to do with lazy loading of
nodes. As you indicate, you do not use setLimit, and that was part of
the performance improvement, certainly for large result sets.
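
The cost of eagerly resolving a lazy result can be illustrated without a
repository. The sketch below is stdlib-only and all names are hypothetical:
it simulates a result set whose items are expensive to fetch, and shows
that asking for the full size up front pays for every item, while reading
only the first few lazily does not. (In Jackrabbit, limiting the result,
e.g. via the implementation-specific QueryImpl.setLimit, serves a similar
purpose; that method is not part of the JCR 1.0 Query interface.)

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for a lazily loaded query result: each item is
// "fetched" (expensive) only when first accessed.
class LazyResults implements Iterable<String> {
    final int total;
    int fetches = 0;            // how many expensive fetches we paid for

    LazyResults(int total) { this.total = total; }

    String fetch(int i) {       // simulate an expensive per-node load
        fetches++;
        return "node-" + i;
    }

    // Like computing the exact size on an eager result: forces every fetch.
    long sizeEager() {
        List<String> all = new ArrayList<String>();
        for (int i = 0; i < total; i++) all.add(fetch(i));
        return all.size();
    }

    public Iterator<String> iterator() {
        return new Iterator<String>() {
            int next = 0;
            public boolean hasNext() { return next < total; }
            public String next() { return fetch(next++); }
        };
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        // Reading only the first 10 of 10000 results lazily costs 10 fetches.
        LazyResults lazy = new LazyResults(10000);
        Iterator<String> it = lazy.iterator();
        for (int i = 0; i < 10 && it.hasNext(); i++) it.next();
        System.out.println("lazy fetches: " + lazy.fetches);    // 10

        // Materializing everything to learn the size pays for all of it.
        LazyResults eager = new LazyResults(10000);
        eager.sizeEager();
        System.out.println("eager fetches: " + eager.fetches);  // 10000
    }
}
```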

Also, I do not know whether you have implemented your own access manager,
which might slow down fetching results as well, depending on the
implementation.

Anyway, if you are using an older version of JR, can you try it with
trunk or a recently released version? Currently, getting results for
simple queries is very fast, so in your case there must be something
really different from current JR trunk.

Regards Ard

RE: Jackrabbit XPath performance

Posted by Kisu San <Ki...@gmail.com>.
Hi Ard,

>>crossposted to the users list,

I thought I replied to all (the users list).

Anyway, I am using Jackrabbit 1.3.3, which comes with jcr-1.0.jar. I
tried to find out whether there is any later version, but could not find
one.

Is this an old version? If yes, what is the latest version?



