Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/02/05 21:39:13 UTC

[GitHub] [hadoop-ozone] smengcl commented on issue #508: HDDS-2950. Upgrade jetty to the latest 9.4 release

smengcl commented on issue #508: HDDS-2950. Upgrade jetty to the latest 9.4 release
URL: https://github.com/apache/hadoop-ozone/pull/508#issuecomment-582625425
 
 
   > Thank you very much @smengcl
   > 
   > As background: as of now we use Hadoop 3.2. The safest approach is to import exactly the same `HttpServer2.java` that we have in the latest Hadoop 3.2 release.
   > 
   > I tried to use `HttpServer2` from the Hadoop **trunk** but found that multiple changes related to authorization broke the S3 gateway.
   > 
   > After that I decided to keep the same `HttpServer2` which is used now (with the Jetty patch included).
   > 
   > Therefore (as a rule of thumb) I prefer to import just minimal changes here and bring in newer patches in a separate jira (where we can carefully test s3g).
   > 
   > I checked the suggested patches (thanks for the suggestions):
   > 
   > * HADOOP-16727: It's a small null check; it would be good to add it as soon as possible. I added it to the patch.
   > * HADOOP-16398: Prometheus support was backported from Ozone to Hadoop. We don't need the patch as we already have the original solution. (Later we can simplify our initialization to make it more similar to the other default servlets, but that's a bigger refactor.)
   > * HADOOP-16718: I tried to understand why this is required and didn't find any information. If you have something, let me know. (It might be required for webhdfs, which shouldn't be supported on our side.) To me it seems optional, and s3g and the other Ozone components work well based on the tests. Unless we have a strong reason, I would prefer to keep the default Jetty behavior (and use one less config).
   
   Thanks for digging into this. I'm good with using the same `HttpServer2.java` as Hadoop 3.2.
   
   The two new commits lgtm.
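   
   On the HADOOP-16398 point, agreed that keeping the existing Ozone solution makes sense. Just to illustrate what "more similar to the other default servlets" could look like later, here is a minimal sketch; the servlet class name and the `/prom` path are placeholders, not the actual Ozone classes:
   
   ```java
   import java.io.IOException;
   
   import javax.servlet.http.HttpServlet;
   import javax.servlet.http.HttpServletRequest;
   import javax.servlet.http.HttpServletResponse;
   
   // Placeholder servlet: in the real code the Prometheus metrics sink would
   // render the collected metrics in the Prometheus text exposition format.
   public class PrometheusEndpointSketch extends HttpServlet {
     @Override
     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
         throws IOException {
       resp.setContentType("text/plain; version=0.0.4; charset=utf-8");
       resp.getWriter().write("# metrics output would go here\n");
     }
   }
   ```
   
   Registration would then sit next to the other default servlets, e.g. `httpServer.addServlet("prometheus", "/prom", PrometheusEndpointSketch.class);` (again, just a sketch of the shape, not the current Ozone wiring).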
   
   As for HADOOP-16718, I believe the scope is larger than WebHDFS. Any web UI that serves HTTPS through Jetty may be impacted in certain cases. In short, the connection can fail if:
   
   > ... the server's JKS file has a private/public key/cert pairing that is valid but it also has another trustedCertEntry certificate that has the hostname in the subjectAltName extension, the trusted cert gets picked.
   > This triggers an internal failure to determine a common cipher to use, and the server will return the following error to the client:
   > fatal error: 40: no cipher suites in common
   
   For details, please take a look at the link I sent over to you on Slack.
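   
   To make that failure mode concrete: below is a minimal standalone Jetty 9.4 sketch (not the HADOOP-16718 patch itself, and not our `HttpServer2` wiring) showing one way to sidestep the ambiguity, by pinning the alias of the server's private-key entry so the trustedCertEntry can never be selected. The keystore path, password and the "server" alias are placeholders:
   
   ```java
   import org.eclipse.jetty.server.HttpConfiguration;
   import org.eclipse.jetty.server.HttpConnectionFactory;
   import org.eclipse.jetty.server.SecureRequestCustomizer;
   import org.eclipse.jetty.server.Server;
   import org.eclipse.jetty.server.ServerConnector;
   import org.eclipse.jetty.util.ssl.SslContextFactory;
   
   public class PinnedAliasTlsSketch {
     public static void main(String[] args) throws Exception {
       Server server = new Server();
   
       // Assumes a recent Jetty 9.4.x where SslContextFactory.Server exists.
       SslContextFactory.Server ssl = new SslContextFactory.Server();
       ssl.setKeyStorePath("/path/to/server.jks");   // placeholder
       ssl.setKeyStorePassword("changeit");          // placeholder
       // The JKS may also hold trustedCertEntry entries whose subjectAltName
       // matches the hostname; pinning the alias keeps Jetty on the key pair.
       ssl.setCertAlias("server");                   // placeholder alias
   
       HttpConfiguration httpsConfig = new HttpConfiguration();
       httpsConfig.addCustomizer(new SecureRequestCustomizer());
   
       ServerConnector connector =
           new ServerConnector(server, ssl, new HttpConnectionFactory(httpsConfig));
       connector.setPort(9443);
       server.addConnector(connector);
   
       server.start();
       server.join();
     }
   }
   ```
   
   This only illustrates a possible workaround on the server-configuration side; the actual HADOOP-16718 change may take a different approach, so treat it as a sketch rather than the fix itself.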
