Posted to users@sling.apache.org by "Robert A. Decker" <de...@robdecker.com> on 2012/12/04 17:34:58 UTC

data cleanup?

Hello,

We were recently transferring a lot of data via WebDAV in Sling. We were processing a file on a Windows machine, shared from Sling via WebDAV, and writing the results back into the WebDAV directory as they were processed. The filesystem on the Sling server quickly hit 100% full: roughly 3 GB on disk for only about 60 MB of data actually being processed.

For example, under /tmp there are gigabytes of temporary cache files, and our sling folder quickly grew to 9 GB, filling the disk.

We've since changed the way we process the data so that we don't exchange nearly as much via WebDAV.

Is it possible to clean up the Jackrabbit folder somehow? It looks like the majority of the data is in the datastore folder, but I think it must just be some sort of WebDAV caching.


Rob

Re: data cleanup?

Posted by "Robert A. Decker" <de...@robdecker.com>.
Can anyone tell me how to get the path to the repository? I have access to the javax.jcr.Repository, but I'm really having a problem figuring out how to get at the Jackrabbit-specific information, like the path to the repo.

I know it needs to be abstracted, but there must be a way to get to it programmatically.
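
The closest I've gotten is casting down to the Jackrabbit implementation classes, along these lines (an untested sketch; it assumes the Repository really is Jackrabbit's RepositoryImpl, which may not hold behind Sling's repository wrapper):

import javax.jcr.Repository;
import org.apache.jackrabbit.core.RepositoryImpl;

public class RepoHome {
    // Returns the repository home directory, or null if the repository
    // isn't a plain Jackrabbit RepositoryImpl (e.g. it's wrapped by Sling).
    public static String getHomeDir(Repository repository) {
        if (repository instanceof RepositoryImpl) {
            // RepositoryConfig knows the filesystem home the repository
            // (including its datastore/ folder) was started with.
            return ((RepositoryImpl) repository).getConfig().getHomeDir();
        }
        return null;
    }
}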

Rob


Re: data cleanup?

Posted by "Robert A. Decker" <de...@robdecker.com>.
Ok, I found this:
http://wiki.apache.org/jackrabbit/DataStore#Data_Store_Garbage_Collection
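
If I'm reading that page right, the core of it is something like the following (just a sketch; it assumes an admin Session from a Jackrabbit core repository, so the cast to SessionImpl may fail under other setups):

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import org.apache.jackrabbit.api.management.DataStoreGarbageCollector;
import org.apache.jackrabbit.core.SessionImpl;

public class DataStoreCleanup {
    // Marks all binaries still referenced in the repository, then sweeps
    // (deletes) the unreferenced ones from the datastore folder.
    public static void runGc(Session session) throws RepositoryException {
        DataStoreGarbageCollector gc =
                ((SessionImpl) session).createDataStoreGarbageCollector();
        try {
            gc.mark();  // scan and mark referenced binaries
            gc.sweep(); // remove everything not marked above
        } finally {
            gc.close();
        }
    }
}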

Rob
