Posted to dev@knox.apache.org by "Jeffrey E Rodriguez (JIRA)" <ji...@apache.org> on 2015/09/03 18:28:45 UTC

[jira] [Updated] (KNOX-595) When Kerberos is enabled, encountering "Hit replay buffer" even if we increase the replay buffer

     [ https://issues.apache.org/jira/browse/KNOX-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeffrey E  Rodriguez updated KNOX-595:
--------------------------------------
    Attachment: gateway.log

Uploaded the gateway.log generated while uploading a file into the IBM added-value Text Analytics (TA) application.
The file uploaded to TA was a zip file of about 2.3 MB.

> When Kerberos is enabled, encountering "Hit replay buffer" even if we increase the replay buffer
> ------------------------------------------------------------------------------------------------
>
>                 Key: KNOX-595
>                 URL: https://issues.apache.org/jira/browse/KNOX-595
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>    Affects Versions: 0.5.0
>         Environment: All Linux environments using the latest HDP release.
>            Reporter: Jeffrey E  Rodriguez
>            Priority: Critical
>             Fix For: 0.7.0
>
>         Attachments: gateway.log
>
>
> When Kerberos is enabled, we go through ExecuteKerberosDispatch even if the application doesn't support SPNEGO. On this path Knox uses a CappedBufferHttpEntity, presumably so that entities can be replayed once the SPNEGO negotiation determines whether the application supports SPNEGO or not (a minimal sketch of this capped-buffer behavior is included after the stack trace below).
> We are encountering the following exception when uploading large files:
> 2015-09-03 08:52:16,819 WARN  hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(129)) - Connection exception dispatching request: http://bdavm016.svl.ibm.com:32000/TextAnalyticsWeb/controller/g2t/docset/7c75e043-25d5-454f-aa30-15286c7d9fce/content?doAs=sam java.io.IOException: Hit replay buffer max limit
> java.io.IOException: Hit replay buffer max limit
> 	at org.apache.hadoop.gateway.dispatch.CappedBufferHttpEntity$ReplayStream.read(CappedBufferHttpEntity.java:143)
> 	at java.io.InputStream.read(InputStream.java:101)
> 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
> 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
> 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
> 	at org.apache.hadoop.gateway.dispatch.CappedBufferHttpEntity.writeTo(CappedBufferHttpEntity.java:93)
> 	at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:155)
> 	at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:149)
> 	at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:236)
> 	at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
> 	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:254)
> 	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
> 	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
> 	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
> 	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
> 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
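>
> To illustrate why simply increasing the configured buffer size does not help with arbitrarily large uploads, here is a minimal sketch of a capped replay stream. The class and field names below are hypothetical and do not mirror the real CappedBufferHttpEntity internals; the point is only that any read past a fixed cap fails, so a payload larger than the cap fails regardless.
>
>     import java.io.ByteArrayInputStream;
>     import java.io.IOException;
>     import java.io.InputStream;
>
>     // Minimal sketch (not the Knox source): bytes read from the wrapped stream
>     // are copied into a bounded buffer so the entity can be replayed later;
>     // once the cap is exceeded the read fails, which is the behavior behind
>     // "Hit replay buffer max limit".
>     class CappedReplayStream extends InputStream {
>         private final InputStream source;
>         private final byte[] buffer;   // hard cap on how much can be replayed
>         private int count;             // bytes captured so far
>
>         CappedReplayStream(InputStream source, int maxBytes) {
>             this.source = source;
>             this.buffer = new byte[maxBytes];
>         }
>
>         @Override
>         public int read() throws IOException {
>             int b = source.read();
>             if (b < 0) {
>                 return b;              // end of the wrapped stream
>             }
>             if (count >= buffer.length) {
>                 throw new IOException("Hit replay buffer max limit");
>             }
>             buffer[count++] = (byte) b;
>             return b;
>         }
>
>         // Replay everything captured so far, from the beginning.
>         InputStream replay() {
>             return new ByteArrayInputStream(buffer, 0, count);
>         }
>     }
>
> With a stream like this, raising the cap only moves the failure point; a streamed upload larger than the cap will still hit the limit, which matches what we see with the 2.3 MB zip file.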



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)