Posted to users@jackrabbit.apache.org by Francisco Carriedo Scher <fc...@gmail.com> on 2011/11/02 22:49:12 UTC

Exception when operating through load balancer

Hi there,

I have a clustered Jackrabbit environment that I operate from the Java side
through WebDAV. I have deployed three servers that run as a cluster, and
everything went fine until I tried to add load balancing / fault tolerance
to the design. I am using Nginx as the load balancer with its webdav module
and, although the Repository object is obtained correctly, some operations
fail with the following exception:

javax.jcr.RepositoryException: Request Entity Too Large
    at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:113)
    at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:49)
    at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.start(RepositoryServiceImpl.java:457)
    at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.access$200(RepositoryServiceImpl.java:399)
    at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl.submit(RepositoryServiceImpl.java:304)
    at org.apache.jackrabbit.jcr2spi.WorkspaceManager$OperationVisitorImpl.execute(WorkspaceManager.java:830)
    at org.apache.jackrabbit.jcr2spi.WorkspaceManager$OperationVisitorImpl.access$500(WorkspaceManager.java:797)
    at org.apache.jackrabbit.jcr2spi.WorkspaceManager.execute(WorkspaceManager.java:594)
    at org.apache.jackrabbit.jcr2spi.state.SessionItemStateManager.save(SessionItemStateManager.java:139)
    at org.apache.jackrabbit.jcr2spi.ItemImpl.save(ItemImpl.java:246)
    at org.apache.jackrabbit.jcr2spi.SessionImpl.save(SessionImpl.java:328)
    at com.solaiemes.filerepository.management.EmbeddableFileManager.deleteItem(EmbeddableFileManager.java:248)
    at com.solaiemes.filerepository.management.EmbeddableFileManager.importFile(EmbeddableFileManager.java:469)
    at com.solaiemes.filerepository.management.EmbeddableFileManager.saveFile(EmbeddableFileManager.java:80)
    at com.solaiemes.filerepository.management.RepoShell.main(RepoShell.java:87)
Caused by: org.apache.jackrabbit.webdav.DavException: Request Entity Too Large
    at org.apache.jackrabbit.webdav.client.methods.DavMethodBase.getResponseException(DavMethodBase.java:172)
    at org.apache.jackrabbit.webdav.client.methods.DavMethodBase.checkSuccess(DavMethodBase.java:181)
    at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.start(RepositoryServiceImpl.java:453)
    ... 12 more
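
For context, the client code that triggers this is plain JCR over davex, roughly like the sketch below (the URL, credentials and node names are placeholders, not the real setup):

    import java.io.ByteArrayInputStream;
    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.commons.JcrUtils;

    public class DavexUploadSketch {
        public static void main(String[] args) throws Exception {
            // Connect over davex; with jackrabbit-jcr2dav on the classpath this
            // returns the same spi2davex-backed Repository seen in the trace.
            Repository repo = JcrUtils.getRepository("http://repo1.example.com:8080/server");
            Session session = repo.login(new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                // Standard nt:file / nt:resource structure; jcr:data is the
                // mandatory binary property of the jcr:content node.
                Node file = session.getRootNode().addNode("pequeno", "nt:file");
                Node content = file.addNode("jcr:content", "nt:resource");
                content.setProperty("jcr:mimeType", "application/octet-stream");
                content.setProperty("jcr:data", session.getValueFactory()
                        .createBinary(new ByteArrayInputStream(new byte[] {1, 2, 3})));
                // save() sends the whole pending batch to the server; this is
                // the call that fails with the 413 above.
                session.save();
            } finally {
                session.logout();
            }
        }
    }
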
The file upload operation fails. My initial guess was that uploading a file
through WebDAV takes several HTTP requests, and that the load balancer
forwards each of them to a different repository server. With small files (a
few bytes) and with read-only operations everything seems to work correctly.
I added session affinity (sticky session) support to Nginx and recompiled
it, but the same error persists.
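
For reference, a rough sketch of the kind of Nginx configuration involved (hostnames, ports and sizes are placeholders): ip_hash is one simple way to pin each client to a single backend, and client_max_body_size is relevant because Nginx itself answers 413 "Request Entity Too Large" when a request body exceeds that limit (the default is only 1m):

    upstream jackrabbit {
        ip_hash;                          # pin each client IP to one backend node
        server repo1.example.com:8080;
        server repo2.example.com:8080;
        server repo3.example.com:8080;
    }

    server {
        listen 80;

        # Nginx rejects bodies larger than this with 413; the default is 1m,
        # which only very small uploads stay under.
        client_max_body_size 100m;

        location / {
            proxy_pass http://jackrabbit;
            proxy_set_header Host $host;
        }
    }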

Can somebody tell me whether my guess is correct and the error is indeed
caused by the behaviour I described?

Thanks in advance for your attention!

Re: Exception when operating through load balancer

Posted by Jukka Zitting <ju...@gmail.com>.
Hi Francisco,

There's a small delay before content changes made on one cluster node
become properly visible on the other nodes, which is why I'd recommend
using a Jackrabbit cluster only in a setup that supports session
affinity. Otherwise you can easily end up seeing partially
inconsistent results for a short while after updates. And it will
definitely confuse the server if you're trying to save content and the
save() call ends up sending its requests to separate cluster nodes.
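
(For context, the length of that propagation delay is the syncDelay setting, in milliseconds, on the Cluster element of each node's repository.xml; a rough sketch with placeholder journal settings:)

    <Cluster id="node1" syncDelay="2000">
      <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
        <!-- shared journal database; connection details are placeholders -->
        <param name="driver" value="org.postgresql.Driver"/>
        <param name="url" value="jdbc:postgresql://dbhost/journal"/>
        <param name="databaseType" value="postgresql"/>
      </Journal>
    </Cluster>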

Anyway, such a failed save() should not be able to cause repository
inconsistencies. Do you still see the problems when accessing the
repository locally?

To check for consistency issues you can add the consistencyCheck
parameter [1] to the PersistenceManager entry in the workspace.xml
configuration file.

    <param name="consistencyCheck" value="true"/>

At the next restart the repository will run a full consistency check of
that workspace and log warnings about any problems it finds.
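
For example, the complete entry in workspace.xml would look roughly like this (keep your existing persistence manager class and connection parameters; only the extra param is new):

    <PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager">
      <!-- existing connection parameters stay as they are -->
      <param name="consistencyCheck" value="true"/>
    </PersistenceManager>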

[1] http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/core/persistence/pool/BundleDbPersistenceManager.html#setConsistencyCheck(java.lang.String)

BR,

Jukka Zitting

Re: Exception when operating through load balancer

Posted by Francisco Carriedo Scher <fc...@gmail.com>.
In addition to the exceptions that arose when I introduced load balancing
with Nginx (described in my original message above), strange behaviour is
now taking place: existing nodes are not found, errors occur while creating
new nodes, and new exceptions show up even when connecting directly to the
repository through its IP address:

2011-11-03 11:57:40.506 WARN  [http-192.168.0.188-8080-1] JcrRemotingServlet.java:337 /pequeno/jcr:content: mandatory property {http://www.jcp.org/jcr/1.0}data does not exist
2011-11-03 12:02:05.009 ERROR [http-192.168.0.188-8080-2] ExportContextImpl.java:193 ClientAbortException: java.net.SocketException: Broken pipe

My guess is that the repository has become inconsistent. Is there a better
interpretation?

Thanks in advance for your attention!

