Posted to users@jackrabbit.apache.org by Viraf Bankwalla <vi...@yahoo.com> on 2008/03/28 19:25:04 UTC

Jackrabbit scalability

Hi, I am new to this forum.  I have a simple application for a content repository.  I have digitized assets arriving from a number of sources that need to be stored (with some meta-data).  Initially these will be images coming from high-speed scanners.  I also need to be able to retrieve these images for viewing.

The usage patterns include both single-document release / retrieve and batch modes where we may receive 300K images that need to be loaded or extracted as a batch.

I am looking for a solution that allows me to:
- Store digitized assets and meta-data in a content repository (a rough sketch of how I picture this case follows the list)
- Retrieve digitized assets and meta-data from a content repository
- Batch store digitized assets and meta-data in a content repository
- Batch retrieve digitized assets and meta-data from a content repository
- Provide high availability on the content repository (multiple machines need to serve and store content)
- Back up the content repository
- Restore the content repository
- Specify retention policies
- Archive / purge content from the content repository
- Support high volumes (> 100 M nodes)
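For context, this is roughly the kind of code I have in mind for the single-document store / retrieve case, written against the plain JCR API with an embedded TransientRepository.  The credentials, node names and meta-data properties (source, batchId) are just placeholders for illustration, not a working deployment:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Calendar;

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.core.TransientRepository;

public class StoreScanExample {

    public static void main(String[] args) throws Exception {
        // Embedded repository just for illustration; a production setup
        // would be configured differently (clustering, persistence, etc.)
        Repository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            Node root = session.getRootNode();

            // One node per asset; nt:unstructured so arbitrary
            // meta-data properties can be attached
            Node asset = root.addNode("asset-0001", "nt:unstructured");
            asset.setProperty("source", "scanner-42");        // placeholder meta-data
            asset.setProperty("batchId", "2008-03-28-001");   // placeholder meta-data

            // The image itself as a standard nt:file / nt:resource pair
            Node file = asset.addNode("image", "nt:file");
            Node resource = file.addNode("jcr:content", "nt:resource");
            resource.setProperty("jcr:mimeType", "image/tiff");
            resource.setProperty("jcr:lastModified", Calendar.getInstance());
            InputStream data = new FileInputStream("scan-0001.tif");
            try {
                resource.setProperty("jcr:data", data);
            } finally {
                data.close();
            }
            session.save();

            // Retrieval: stream the binary back out for viewing
            Node stored = root.getNode("asset-0001/image/jcr:content");
            InputStream in = stored.getProperty("jcr:data").getStream();
            // ... hand the stream to the viewer ...
            in.close();
        } finally {
            session.logout();
        }
    }
}

For the batch cases I would expect to run something like the above in a loop with periodic session.save() calls, but I do not know how well that scales.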
I would like to hear about your experiences with Jackrabbit, specifically if you are using it for the above use cases.

Thanks

- viraf



