Posted to users@jackrabbit.apache.org by Marcel Reutegger <ma...@gmx.net> on 2007/06/02 11:31:37 UTC

Re: Paging results

Ronaldo Florence wrote:
> I'm trying to page the results of an XPath query, but I'm not sure how to
> do this. I used the skip method on the NodeIterator class, but I can't
> bring every node into memory; I need to page the results in the query. I
> have a large amount of data, so it's imperative to do so.
>  
> I tried the following query:
>  
> //site/Dados/element(*, mtx:content)[position()=1 or position()=2 or
> position()=3]

jackrabbit only has limited support for the position() function, mainly to 
address same-name siblings.

the skip method is exactly what you should use. the returned NodeIterator 
loads nodes lazily, which means jackrabbit will only load the nodes you 
actually access and none of the skipped ones.

regards
  marcel

Re: RES: Paging results

Posted by Nicolas Dufour <nr...@gmail.com>.
I have used a simple way to do pagination.

I first define how many items make up a page, so let's say 50.

Then I run my query with no special parameters.

Then I use this wonderful method of NodeIterator: skip!

skip(page * nb_element_per_page)

and then retrieve the elements that follow, limit them to 50, and you're set.
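This recipe can be sketched in plain Java. A real NodeIterator needs a live JCR repository, so the sketch below uses an ordinary Iterator to mirror the same skip-then-collect arithmetic; the `page` and `perPage` names (and the helper itself) are illustrative, not part of any API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Paging {

    // Skip past the previous pages, then collect at most perPage items.
    // With a JCR NodeIterator this would be: results.skip(page * perPage)
    // followed by a bounded loop over next().
    static <T> List<T> page(Iterator<T> it, int page, int perPage) {
        long toSkip = (long) page * perPage;
        for (long i = 0; i < toSkip && it.hasNext(); i++) {
            it.next(); // NodeIterator.skip(toSkip) does this in one call
        }
        List<T> result = new ArrayList<>();
        while (result.size() < perPage && it.hasNext()) {
            result.add(it.next());
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 120; i++) data.add(i);
        // Page 2 with 50 items per page: elements 100..119 remain
        List<Integer> p = page(data.iterator(), 2, 50);
        System.out.println(p.size() + " " + p.get(0)); // prints "20 100"
    }
}
```

Because the iterator is consumed lazily, only the nodes in the requested page are ever materialized, which is the whole point of paging this way.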


Nicolas


On 6/11/07, Marcel Reutegger <ma...@gmx.net> wrote:
>
> Ronaldo Florence wrote:
> > As far as I know the RowIterator uses the NodeIterator internally, so it
> > really doesn't make much of a difference, does it?
>
> correct.
>
> > Since the problem really was too many nodes brought into memory at
> > once, leading to an OutOfMemoryError...
> >
> > Am I right? Or would RowIterator really help too?
>
> no, that doesn't help. as you mentioned already, the underlying
> implementation is the same for RowIterator as well as for NodeIterator.
>
> regards
>   marcel
>

Re: RES: Paging results

Posted by Marcel Reutegger <ma...@gmx.net>.
Ronaldo Florence wrote:
> As far as I know the RowIterator uses the NodeIterator internally, so it
> really doesn't make much of a difference, does it?

correct.

> Since the problem really was too many nodes brought into memory at once,
> leading to an OutOfMemoryError...
> 
> Am I right? Or would RowIterator really help too?

no, that doesn't help. as you mentioned already, the underlying implementation 
is the same for RowIterator as well as for NodeIterator.

regards
  marcel

RES: Paging results

Posted by Ronaldo Florence <rf...@nexen.com.br>.
But,

As far as I know the RowIterator uses the NodeIterator internally, so it
really doesn't make much of a difference, does it?

Since the problem really was too many nodes brought into memory at once,
leading to an OutOfMemoryError...

Am I right? Or would RowIterator really help too?

Thanks

Ronaldo

-----Original Message-----
From: Paco Avila [mailto:pavila@git.es] 
Sent: Monday, June 4, 2007 05:18
To: users@jackrabbit.apache.org
Subject: Re: Paging results

On Sat, 2007-06-02 at 11:31 +0200, Marcel Reutegger wrote:
> Ronaldo Florence wrote:
> > I'm trying to page the results of an XPath query, but I'm not sure 
> > how to do this. I used the skip method on the NodeIterator class, 
> > but I can't bring every node into memory; I need to page the results 
> > in the query. I have a large amount of data, so it's imperative to do so.
> >  
> > I tried the following query:
> >  
> > //site/Dados/element(*, mtx:content)[position()=1 or position()=2 or 
> > position()=3]
> 
> jackrabbit only has limited support for the position() function, 
> mainly to address same-name siblings.
> 
> the skip method is exactly what you should use. the returned 
> NodeIterator loads nodes lazily, which means jackrabbit will only load 
> the nodes you actually access and none of the skipped ones.

You can also use the RowIterator.


Re: Paging results

Posted by Paco Avila <pa...@git.es>.
On Sat, 2007-06-02 at 11:31 +0200, Marcel Reutegger wrote:
> Ronaldo Florence wrote:
> > I'm trying to page the results of an XPath query, but I'm not sure how to
> > do this. I used the skip method on the NodeIterator class, but I can't
> > bring every node into memory; I need to page the results in the query. I
> > have a large amount of data, so it's imperative to do so.
> >  
> > I tried the following query:
> >  
> > //site/Dados/element(*, mtx:content)[position()=1 or position()=2 or
> > position()=3]
> 
> jackrabbit only has limited support for the position() function, mainly to 
> address same-name siblings.
> 
> the skip method is exactly what you should use. the returned NodeIterator 
> loads nodes lazily, which means jackrabbit will only load the nodes you 
> actually access and none of the skipped ones.

You can also use the RowIterator.


RES: Paging results

Posted by Ronaldo Florence <rf...@nexen.com.br>.
Ok Marcel,

But I didn't know that we had to set resultFetchSize so that it didn't
consume all the server memory... But setting it did the trick...

Thanks again... 
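For readers following along: resultFetchSize is a parameter of Jackrabbit's Lucene SearchIndex, set in the workspace's configuration (workspace.xml). A minimal sketch, with an illustrative value of 50 and the usual index path placeholder:

```xml
<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
  <param name="path" value="${wsp.home}/index"/>
  <!-- fetch query results in batches of 50 instead of all at once -->
  <param name="resultFetchSize" value="50"/>
</SearchIndex>
```

The right value depends on your page size; setting it near the number of results you actually read per request keeps memory use bounded.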

-----Original Message-----
From: Marcel Reutegger [mailto:marcel.reutegger@gmx.net] 
Sent: Saturday, June 2, 2007 06:32
To: users@jackrabbit.apache.org
Subject: Re: Paging results

Ronaldo Florence wrote:
> I'm trying to page the results of an XPath query, but I'm not sure how 
> to do this. I used the skip method on the NodeIterator class, but I 
> can't bring every node into memory; I need to page the results in the 
> query. I have a large amount of data, so it's imperative to do so.
>  
> I tried the following query:
>  
> //site/Dados/element(*, mtx:content)[position()=1 or position()=2 or 
> position()=3]

jackrabbit only has limited support for the position() function, mainly to
address same-name siblings.

the skip method is exactly what you should use. the returned NodeIterator
loads nodes lazily, which means jackrabbit will only load the nodes you
actually access and none of the skipped ones.

regards
  marcel