Posted to user@hadoop.apache.org by Roman Savchenko <gm...@gmail.com> on 2022/01/10 15:26:00 UTC

Large Negotiation token

Dear Hadoop Developers,

I'm seeing an issue with the HttpFS server (a Cloudera cluster with Kerberos and
HttpFS enabled) when I try to connect to it (via curl) using Kerberos
authentication and a large negotiation token (~10 KB) that is generated
because of a large number of Windows security groups and Windows SSPI.
The server just replies with 400 (Bad Request). Is it possible to
handle this?
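
For reference, this is roughly how I reproduce it (hostname, port, and path
here are placeholders, not my real cluster; it needs a valid Kerberos ticket
from kinit and a curl build with SPNEGO/GSS support):

```shell
# Obtain a ticket first: kinit user@EXAMPLE.COM
# -v shows the request headers, including the large
# "Authorization: Negotiate ..." header that triggers the 400.
curl --negotiate -u : -v \
  "http://httpfs.example.com:14000/webhdfs/v1/?op=LISTSTATUS"
```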

Thanks for helping with it,
Roman.

Re: Large Negotiation token

Posted by Roman Savchenko <gm...@gmail.com>.
Hey Chris,

Thanks for getting back to me and for explaining the history of changes. I
found the settings at this link:
https://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/httpfs-default.html
I had thought the default was 64k.
Roman.

Tue, Jan 11, 2022 at 01:53, Chris Nauroth <cn...@apache.org>:

> Hello Roman,
>
> Do you know if the reply from the server indicates that it received a
> request header that was too large? If so, then you could try tuning
> configuration of the maximum request header size. The mechanisms for this
> tuning are different depending on which specific Hadoop version you are
> running.
>
> In Hadoop 2.x, HTTPFS uses Tomcat as the web server. The environment
> variable HTTPFS_MAX_HTTP_HEADER_SIZE will pass through to override the
> Tomcat default. [1]
>
> In Hadoop 3.x, HTTPFS switched to using Jetty instead of Tomcat. [2] The
> configuration property "hadoop.http.max.request.header.size" in
> core-site.xml sets the equivalent Jetty configuration.
>
> This originally used the Tomcat default of 8 KB, which would be too small
> for your ~10 KB SPNEGO token. That default was increased in HDFS-10423. [3]
> If you are running a version that predates this, then I'm not sure you'll
> have any option for tuning this. You might find that you need some kind of
> backport of that patch.
>
> I hope this helps.
>
> [1]
> https://github.com/apache/hadoop/blob/rel/release-2.10.1/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh#L207
> [2] https://issues.apache.org/jira/browse/HDFS-10860
> [3] https://issues.apache.org/jira/browse/HDFS-10423
>
> Chris Nauroth
>
>

Re: Large Negotiation token

Posted by Chris Nauroth <cn...@apache.org>.
Hello Roman,

Do you know if the reply from the server indicates that it received a
request header that was too large? If so, then you could try tuning
configuration of the maximum request header size. The mechanisms for this
tuning are different depending on which specific Hadoop version you are
running.

In Hadoop 2.x, HTTPFS uses Tomcat as the web server. The environment
variable HTTPFS_MAX_HTTP_HEADER_SIZE will pass through to override the
Tomcat default. [1]
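
For example, in httpfs-env.sh (the value is in bytes; 16384 is an arbitrary
choice that leaves headroom for a ~10 KB token):

```shell
# Raise the Tomcat maxHttpHeaderSize used by the HttpFS server (Hadoop 2.x).
export HTTPFS_MAX_HTTP_HEADER_SIZE=16384
```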

In Hadoop 3.x, HTTPFS switched to using Jetty instead of Tomcat. [2] The
configuration property "hadoop.http.max.request.header.size" in
core-site.xml sets the equivalent Jetty configuration.
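
For example, a core-site.xml entry on the HttpFS server might look like this
(again, the 16 KB value is just an illustration sized for a ~10 KB token):

```xml
<property>
  <name>hadoop.http.max.request.header.size</name>
  <value>16384</value>
  <description>Maximum HTTP request header size, in bytes.</description>
</property>
```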

This originally used the Tomcat default of 8 KB, which would be too small
for your ~10 KB SPNEGO token. That default was increased in HDFS-10423. [3]
If you are running a version that predates this, then I'm not sure you'll
have any option for tuning this. You might find that you need some kind of
backport of that patch.

I hope this helps.

[1]
https://github.com/apache/hadoop/blob/rel/release-2.10.1/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh#L207
[2] https://issues.apache.org/jira/browse/HDFS-10860
[3] https://issues.apache.org/jira/browse/HDFS-10423

Chris Nauroth

