Posted to common-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2021/03/19 18:34:00 UTC

[jira] [Work logged] (HADOOP-17133) Implement HttpServer2 metrics

     [ https://issues.apache.org/jira/browse/HADOOP-17133?focusedWorklogId=569040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569040 ]

ASF GitHub Bot logged work on HADOOP-17133:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Mar/21 18:33
            Start Date: 19/Mar/21 18:33
    Worklog Time Spent: 10m 
      Work Description: Jing9 commented on a change in pull request #2145:
URL: https://github.com/apache/hadoop/pull/2145#discussion_r597886565



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
##########
@@ -669,6 +674,16 @@ private void initializeWebServer(String name, String hostName,
     addDefaultApps(contexts, appDir, conf);
     webServer.setHandler(handlers);
 
+    // Jetty StatisticsHandler should be the first handler.
+    // The handler returns 503 if there is no next handler and the response is
+    // not committed. In Apache Hadoop, there are some servlets that do not
+    // commit (i.e. close) the response. Therefore the handler would wrongly
+    // return 503 if it were the last handler.

Review comment:
       If I understand correctly, this paragraph explains why we need to put the StatisticsHandler as the first handler, right? Is it possible to add a unit test that reproduces the scenario where the response is not committed?
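
       A minimal, self-contained sketch of the kind of test being asked for (not part of the PR; the class and method names are illustrative, and it assumes Jetty 9.4-era APIs and JUnit 4). It wires up a handler that marks the request handled without ever writing or flushing, wraps it in a StatisticsHandler placed as the outermost handler, and checks that the client still gets 200 and the statistics are recorded:

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.server.handler.AbstractHandler;
    import org.eclipse.jetty.server.handler.StatisticsHandler;
    import org.junit.Test;

    public class TestStatisticsHandlerPlacement {

      @Test
      public void testUncommittedResponseWithOutermostStatsHandler()
          throws Exception {
        // A handler that marks the request handled but never writes or
        // flushes, so the response is still uncommitted when it returns.
        AbstractHandler uncommitting = new AbstractHandler() {
          @Override
          public void handle(String target, Request baseRequest,
              HttpServletRequest request, HttpServletResponse response)
              throws IOException {
            response.setStatus(HttpServletResponse.SC_OK);
            baseRequest.setHandled(true);
          }
        };

        // StatisticsHandler as the outermost (first) handler, wrapping the
        // real handler chain, as the comment in the diff recommends.
        StatisticsHandler stats = new StatisticsHandler();
        stats.setHandler(uncommitting);

        Server server = new Server(0); // ephemeral port
        server.setHandler(stats);
        server.start();
        try {
          int port =
              ((ServerConnector) server.getConnectors()[0]).getLocalPort();
          HttpURLConnection conn = (HttpURLConnection)
              new URL("http://localhost:" + port + "/").openConnection();
          // The client sees 200, not 503, and the request is counted even
          // though the handler never explicitly committed the response.
          assertEquals(200, conn.getResponseCode());
          assertEquals(1, stats.getRequests());
          assertEquals(1, stats.getResponses2xx());
        } finally {
          server.stop();
        }
      }
    }

       Reproducing the failing case described in the comment would presumably mean appending a StatisticsHandler with no nested handler after the contexts in a HandlerCollection and asserting the 503, while the sketch above exercises the safe placement that the production change relies on.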

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
##########
@@ -1227,6 +1242,16 @@ public void start() throws IOException {
               .register("prometheus", "Hadoop metrics prometheus exporter",
                   prometheusMetricsSink);
         }
+        if (statsHandler != null) {
+          // Create metrics source for each HttpServer2 instance.
+          // Use port number to make the metrics source name unique.
+          int port = -1;
+          for (ServerConnector connector : listeners) {
+            port = connector.getLocalPort();
+            break;
+          }
+          metrics = HttpServer2Metrics.create(statsHandler, port);

Review comment:
       So if we have both HTTP and HTTPS bound to the server, this metrics source will cover both connectors, but its name will only use one of the ports?
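
       One way to address this, sketched outside the PR (the helper class and method names are made up, and any HttpServer2Metrics.create overload that accepts such a suffix would be an assumption), is to derive the source-name suffix from every bound connector instead of only the first one:

    import java.util.List;
    import java.util.StringJoiner;

    import org.eclipse.jetty.server.ServerConnector;

    final class MetricsNameSketch {
      private MetricsNameSketch() {
      }

      /**
       * Builds a source-name suffix that covers every bound connector,
       * e.g. "9870-9871" for a server with both HTTP and HTTPS listeners,
       * instead of only the first connector's port.
       */
      static String portsSuffix(List<ServerConnector> listeners) {
        StringJoiner ports = new StringJoiner("-");
        for (ServerConnector connector : listeners) {
          ports.add(String.valueOf(connector.getLocalPort()));
        }
        return ports.toString();
      }
    }

       The suffix would still need to be threaded into whatever naming scheme HttpServer2Metrics ends up using; the create(statsHandler, port) call in the diff would need a corresponding change.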




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 569040)
    Remaining Estimate: 0h
            Time Spent: 10m

> Implement HttpServer2 metrics
> -----------------------------
>
>                 Key: HADOOP-17133
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17133
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: httpfs, kms
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>             Fix For: 3.4.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I'd like to collect metrics (number of connections, average response time, etc.) from HttpFS and KMS, but there are no metrics for HttpServer2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org