Posted to users@jackrabbit.apache.org by "philipp.thiemann" <p....@headframe-it.de> on 2009/10/30 11:51:13 UTC

Storing large blobs in mysql

Hello everybody,

I am using Jackrabbit 1.5.5 (Core) for a project that is storing and
processing large blob files (~100MB).
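
(For reference, the storing code itself is nothing exotic; it is more or less the
textbook JCR nt:file pattern, sketched below with exception handling omitted. This is
a sketch, not the actual project code. The binary is handed to JCR as a stream, so
Jackrabbit and the persistence layer decide how the ~100MB gets spooled.)

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Calendar;
import javax.jcr.Node;
import javax.jcr.Session;

public class BlobUpload {

    // Stores one file below the root node as nt:file/nt:resource (plain JCR 1.0 API).
    public static void store(Session session, File file) throws Exception {
        Node fileNode = session.getRootNode().addNode(file.getName(), "nt:file");
        Node content = fileNode.addNode("jcr:content", "nt:resource");

        InputStream in = new FileInputStream(file);
        try {
            content.setProperty("jcr:mimeType", "application/octet-stream");
            content.setProperty("jcr:lastModified", Calendar.getInstance());
            content.setProperty("jcr:data", in);  // setProperty(String, InputStream)
            session.save();                       // the binary is persisted here
        } finally {
            in.close();
        }
    }
}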

My local environment consists of Windows XP, Apache Tomcat 6.0.20, MySQL 5.1.38 and
MySQL Connector/J 5.1.8.

When storing blobs larger than roughly 10MB I get a CommunicationsException from
the database, leaving the blob unstored in Jackrabbit (of course).
The MySQL parameter "max_allowed_packet" is already increased to 128MB, so that is
no longer my problem ;-) the error message is different now!
I have also disabled the firewall and anti-virus software, with no effect.
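
(For anyone searching later: the packet size was raised on the server side roughly
like this; the name and location of the option file differ per installation, so treat
it as a sketch.)

[mysqld]
# allow client/server packets, and thus single-row blob writes, up to 128MB
max_allowed_packet = 128M

Setting it at runtime with SET GLOBAL max_allowed_packet = 134217728; also works, but
only until the next server restart and only for connections opened afterwards.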

The only workaround I got to work was using a FileDataStore for JR (see my repo
config below).
But this has the disadvantage of needing a shared SAN when used in a clustered
environment (the JR cluster journal is stored in the database!), which is what we
plan to do in the next few weeks.
As we plan to host the cluster nodes on different servers in different networks,
the SAN requirement might turn out to be a show-stopper.
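
(For completeness: Jackrabbit also ships a database-backed data store,
org.apache.jackrabbit.core.data.db.DbDataStore, which would keep the binaries in MySQL
and avoid the shared filesystem. Below is a sketch of what that could look like; the
driver/url/user/password values are placeholders, it is untested here, and it is
presumably subject to the same MySQL BLOB handling discussed in this thread.)

  <DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
    <!-- placeholder connection settings; adjust to your environment -->
    <param name="driver" value="com.mysql.jdbc.Driver"/>
    <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>
    <param name="user" value="jcr"/>
    <param name="password" value="secret"/>
    <!-- db type, not a schema name -->
    <param name="databaseType" value="mysql"/>
    <!-- binaries smaller than this many bytes are kept inline instead of in the store -->
    <param name="minRecordLength" value="1024"/>
    <param name="maxConnections" value="3"/>
    <!-- copy to a temp file while reading so the db connection is released early -->
    <param name="copyWhenReading" value="true"/>
  </DataStore>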

Here are my questions:
 - Has anyone experienced a similar problem with large blobs on MySQL?
 - Are there any other MySQL parameters that would be useful here?
 - Do the same effects occur with other databases as well?
 - What kind of database system would you propose for managing large blobs
   (with regard to performance)?
 - Could this just be a "free memory" issue on my local machine?

Regards,
Philipp

------------

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Repository PUBLIC "-//The Apache Software Foundation//DTD Jackrabbit 1.4//EN" "http://jackrabbit.apache.org/dtd/repository-1.4.dtd">
<Repository>
  <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
  </FileSystem>
  <Security appName="Jackrabbit">
    <AccessManager class="org.apache.jackrabbit.core.security.SimpleAccessManager"></AccessManager>
    <LoginModule class="org.apache.jackrabbit.core.security.SimpleLoginModule">
    </LoginModule>
  </Security>
  <Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="default" />
  <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
  </DataStore>
  <Workspace name="default">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.MySqlPersistenceManager">
      <!-- warning, this is not the schema name, it's the db type -->
    </PersistenceManager>
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
    </SearchIndex>
  </Workspace>
  <Versioning rootPath="${rep.home}/version">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.MySqlPersistenceManager">
      <!-- warning, this is not the schema name, it's the db type -->
    </PersistenceManager>
  </Versioning>
  <!-- 
    !!!Attention!!!: the absolute installation directory of the instance is used as the node id.
    When distributing across several servers, make sure that all applications live in globally
    unique directories (e.g. .../shonx1/, .../shonx2/, .../shonx3/, ... , .../shonx8/).

    For general notes on JR clustering see: http://wiki.apache.org/jackrabbit/Clustering 
  -->
  <Cluster id="cluster_${rep.home}" syncDelay="2000">
    <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
      <!-- warning, this is not the schema name, it's the db type -->
    </Journal>
  </Cluster>
</Repository>
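
(The <param> values are omitted in the configuration above. For reference, the
MySQL-backed elements usually take parameters roughly like the sketch below; the
driver/url/user/password values are placeholders, adjust them to your environment.)

    <PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.MySqlPersistenceManager">
      <param name="driver" value="com.mysql.jdbc.Driver"/>
      <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>
      <param name="user" value="jcr"/>
      <param name="password" value="secret"/>
      <!-- warning, this is not the schema name, it's the db type -->
      <param name="schema" value="mysql"/>
      <param name="schemaObjectPrefix" value="${wsp.name}_"/>
    </PersistenceManager>

    <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
      <param name="revision" value="${rep.home}/revision.log"/>
      <param name="driver" value="com.mysql.jdbc.Driver"/>
      <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>
      <param name="user" value="jcr"/>
      <param name="password" value="secret"/>
      <!-- again the db type, not a schema name -->
      <param name="schema" value="mysql"/>
      <param name="schemaObjectPrefix" value="journal_"/>
    </Journal>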
 


Re: Storing large blobs in mysql

Posted by "philipp.thiemann" <p....@headframe-it.de>.

Hi Greg,

thanks for your reply.
My error message was quite similar to the MySQL issues on the JR wiki page,
but both of those pages point to the 'max_allowed_packet' issue I had already solved
several days beforehand.
Moreover, everything worked as expected after a reboot, without any configuration
changes.

I found a resource on the web saying that MySQL needs three times the memory of the
blob for such an operation:
http://stackoverflow.com/questions/945471/handling-of-huge-blobs-in-mysql
So if my memory was already exhausted at that time, this could explain the behaviour.
I am just wondering why MySQL gives me such an unspecific error as a
CommunicationsException that doesn't point to exhausted memory... (provided my
assumption is correct)

Regards,
Philipp


Greg Klebus-3 wrote:
> 
> Hi Philipp
> 
> This might be related to the known limitation in MySQL regarding
> storing of BLOBs - please see the note [1] on the DataStore wiki page
> in Jackrabbit.
> 
> [1] http://wiki.apache.org/jackrabbit/DataStore#Limitations
> 
> Regards
> Greg
> 
> [earlier messages quoted in full snipped; they appear below in this thread]



Re: Storing large blobs in mysql

Posted by Greg Klebus <gk...@day.com>.
Hi Philipp

This might be related to the known limitation in MySQL regarding
storing of BLOBs - please see the note [1] on the DataStore wiki page
in Jackrabbit.

[1] http://wiki.apache.org/jackrabbit/DataStore#Limitations

Regards
Greg

On Mon, Nov 2, 2009 at 9:58 PM, philipp.thiemann
<p....@headframe-it.de> wrote:
> [Philipp's earlier message quoted in full snipped; it appears below in this thread]

Re: Storing large blobs in mysql

Posted by "philipp.thiemann" <p....@headframe-it.de>.
Hello Stefan,

first of all thanks for your quick reply.

After a reboot of my machine I wasn't able to reproduce the problem anymore today.
I remember my memory usage was above my physical memory size,
so I guess it was a memory issue after several standbys.
(Thanks to my rolling log appender I still have a stack trace of the error,
attached here: http://old.nabble.com/file/p26157829/log.txt )

If anybody else has the same problem storing large blobs although the MySQL
parameter "max_allowed_packet" is correctly set, here is my advice:
check your memory allocation and, if possible, try again after a reboot.

Bye,
Philipp


Stefan Guggisberg wrote:
> 
> On Fri, Oct 30, 2009 at 11:51 AM, philipp.thiemann
> <p....@headframe-it.de> wrote:
>>
>> Hello everybody,
>>
>> I am using Jackrabbit 1.5.5 (Core) for a project that is storing and
>> processing large blob files (~100MB).
>>
>> My local environment consists of a Windows XP, Apache Tomcat 6.0.20 ,
>> MySQL
>> 5.1.38 and MySQL Connector 5.1.8.
>>
>> When storing blobs with a size > ~10MB I get a CommunicationsException
>> from
>> the database, leaving the blob file unstored in jackrabbit (of course).
>> The MySQL parameter "max_allowed_packet" is already increased to 128MB
>> (this
>> is not my problem anymore;-) The error message is different now!).
> 
> stack trace?
> 
> cheers
> stefan
> 
>> [rest of the quoted original message, including the repository configuration, snipped]



Re: Storing large blobs in mysql

Posted by Stefan Guggisberg <st...@gmail.com>.
On Fri, Oct 30, 2009 at 11:51 AM, philipp.thiemann
<p....@headframe-it.de> wrote:
>
> Hello everybody,
>
> I am using Jackrabbit 1.5.5 (Core) for a project that is storing and
> processing large blob files (~100MB).
>
> My local environment consists of a Windows XP, Apache Tomcat 6.0.20 , MySQL
> 5.1.38 and MySQL Connector 5.1.8.
>
> When storing blobs with a size > ~10MB I get a CommunicationsException from
> the database, leaving the blob file unstored in jackrabbit (of course).
> The MySQL parameter "max_allowed_packet" is already increased to 128MB (this
> is not my problem anymore;-) The error message is different now!).

stack trace?

cheers
stefan

> [rest of the original message, including the repository configuration, snipped]