Posted to gitbox@hive.apache.org by GitBox <gi...@apache.org> on 2021/02/22 06:58:27 UTC

[GitHub] [hive] guptanikhil007 opened a new pull request #1999: [WIP] HIVE-24803: update metrics and allocation after kill trigger status becomes available

guptanikhil007 opened a new pull request #1999:
URL: https://github.com/apache/hive/pull/1999


   <!--
   Thanks for sending a pull request!  Here are some tips for you:
     1. If this is your first time, please read our contributor guidelines: https://cwiki.apache.org/confluence/display/Hive/HowToContribute
     2. Ensure that you have created an issue on the Hive project JIRA: https://issues.apache.org/jira/projects/HIVE/summary
     3. Ensure you have added or run the appropriate tests for your PR: 
     4. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP]HIVE-XXXXX:  Your PR title ...'.
     5. Be sure to keep the PR description updated to reflect all changes.
     6. Please write your PR title to summarize what this PR proposes.
     7. If possible, provide a concise example to reproduce the issue for a faster review.
   
   -->
   
   ### What changes were proposed in this pull request?
   <!--
   Please clarify what changes you are proposing. The purpose of this section is to outline the changes and how this PR fixes the issue. 
   If possible, please consider writing useful notes for better and faster reviews in your PR. See the examples below.
     1. If you refactor some codes with changing classes, showing the class hierarchy will help reviewers.
     2. If you fix some SQL features, you can provide some references of other DBMSes.
     3. If there is design documentation, please add the link.
     4. If there is a discussion in the mailing list, please add the link.
   -->
   Update the metrics and resource allocations in the pool after kill trigger processing is completed.
   
   ### Why are the changes needed?
   <!--
   Please clarify why the changes are needed. For instance,
     1. If you propose a new API, clarify the use case for a new API.
     2. If you fix a bug, you can clarify why it is a bug.
   -->
   The following two bugs were reported by Azure HDInsight customers:
   
   1. The Workload Manager does not process allocation changes after a kill trigger fires, which leaves existing queries unable to fully utilize the resources freed by the killed query.
   2. The Workload Manager thread also fails to update the runningQueries metric for the pool after a kill trigger is processed, making the metric unreliable afterwards.
   
   These changes fix the above two bugs.
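   The intended fix can be summarized as a minimal sketch (the class names, gauge, and update hook below are simplified stand-ins, not the real Hive implementation): once a kill completes, decrement the pool's running-query gauge and mark the pool so the Workload Manager main loop redistributes its freed capacity.

   ```java
   import java.util.HashMap;
   import java.util.HashSet;
   import java.util.Map;
   import java.util.Set;
   import java.util.concurrent.atomic.AtomicInteger;

   // Simplified sketch of the fix: after a kill trigger completes, the pool's
   // running-query gauge is decremented and the pool is queued for resource
   // redistribution. Names are illustrative, not the actual Hive classes.
   public class KillTriggerMetricsSketch {
     static class PoolState {
       final AtomicInteger numRunningQueries = new AtomicInteger();
     }

     static final Map<String, PoolState> pools = new HashMap<>();

     // Mirrors the shape of updatePoolMetricsAfterKillTrigger in the PR:
     // guard against a blank pool name, then update metrics and mark the
     // pool so allocations are recomputed.
     static void updatePoolMetricsAfterKill(Set<String> poolsToRedistribute, String poolName) {
       if (poolName != null && !poolName.trim().isEmpty()) {
         poolsToRedistribute.add(poolName);          // bug 1: redistribute freed capacity
         PoolState pool = pools.get(poolName);
         if (pool != null) {
           pool.numRunningQueries.decrementAndGet(); // bug 2: keep the gauge consistent
         }
       }
     }

     public static void main(String[] args) {
       PoolState llap = new PoolState();
       llap.numRunningQueries.set(1);
       pools.put("llap", llap);

       Set<String> toRedistribute = new HashSet<>();
       updatePoolMetricsAfterKill(toRedistribute, "llap");

       System.out.println(toRedistribute.contains("llap")); // true
       System.out.println(llap.numRunningQueries.get());    // 0
     }
   }
   ```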
   
   ### Does this PR introduce _any_ user-facing change?
   <!--
   Note that it means *any* user-facing change including all aspects such as the documentation fix.
   If yes, please clarify the previous behavior and the change this PR proposes - provide the console output, description, screenshot and/or a reproducable example to show the behavior difference if possible.
   If possible, please also clarify if this is a user-facing change compared to the released Hive versions or within the unreleased branches such as master.
   If no, write 'No'.
   -->
   No
   
   ### How was this patch tested?
   <!--
   If tests were added, say they were added here. Please make sure to add some test cases that check the changes thoroughly including negative and positive cases if possible.
   If it was tested in a way different from regular unit tests, please clarify how you tested step by step, ideally copy and paste-able, so that other reviewers can test and check, and descendants can verify in the future.
   If tests were not added, please describe why they were not added and/or why it was difficult to add.
   -->
   This patch was tested manually on an Azure HDInsight cluster.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org


[GitHub] [hive] guptanikhil007 commented on a change in pull request #1999: HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
guptanikhil007 commented on a change in pull request #1999:
URL: https://github.com/apache/hive/pull/1999#discussion_r590532250



##########
File path: itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestWMMetricsWithTrigger.java
##########
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.jdbc;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.metrics.MetricsTestUtils;
+import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
+import org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics;
+import org.apache.hadoop.hive.common.metrics.metrics2.MetricsReporting;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.ql.exec.UDF;
+import org.apache.hadoop.hive.ql.exec.tez.WorkloadManager;
+import org.apache.hadoop.hive.ql.wm.*;
+import org.apache.hadoop.metrics2.AbstractMetric;
+import org.apache.hive.jdbc.miniHS2.MiniHS2;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.net.URL;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.*;
+
+public class TestWMMetricsWithTrigger {
+
+  private final Logger LOG = LoggerFactory.getLogger(getClass().getName());
+  private static MiniHS2 miniHS2 = null;
+  private static List<Iterable<AbstractMetric>> metricValues = new ArrayList<>();
+  private static final String tableName = "testWmMetricsTriggerTbl";
+  private static final String testDbName = "testWmMetricsTrigger";
+  private static String wmPoolName = "llap";
+
+  public static class SleepMsUDF extends UDF {
+    private static final Logger LOG = LoggerFactory.getLogger(TestKillQueryWithAuthorizationDisabled.class);
+
+    public Integer evaluate(final Integer value, final Integer ms) {
+      try {
+        LOG.info("Sleeping for " + ms + " milliseconds");
+        Thread.sleep(ms);
+      } catch (InterruptedException e) {
+        LOG.warn("Interrupted Exception");
+        // No-op
+      }
+      return value;
+    }
+  }
+
+  static HiveConf defaultConf() throws Exception {
+    String confDir = "../../data/conf/llap/";
+    if (StringUtils.isNotBlank(confDir)) {
+      HiveConf.setHiveSiteLocation(new URL("file://" + new File(confDir).toURI().getPath() + "/hive-site.xml"));
+      System.out.println("Setting hive-site: " + HiveConf.getHiveSiteLocation());
+    }
+    HiveConf defaultConf = new HiveConf();
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_QUERY_RESULTS_CACHE_ENABLED, false);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_AUTHENTICATOR_MANAGER,
+        "org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator");
+    defaultConf.addResource(new URL("file://" + new File(confDir).toURI().getPath() + "/tez-site.xml"));
+    defaultConf.setTimeVar(HiveConf.ConfVars.HIVE_TRIGGER_VALIDATION_INTERVAL, 100, TimeUnit.MILLISECONDS);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_SERVER2_TEZ_INTERACTIVE_QUEUE, "default");
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_METRICS_ENABLED, true);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_METRICS_REPORTER, MetricsReporting.JSON_FILE.name());
+    // don't want cache hits from llap io for testing filesystem bytes read counters
+    defaultConf.setVar(HiveConf.ConfVars.LLAP_IO_MEMORY_MODE, "none");
+    return defaultConf;
+  }
+
+  @BeforeClass
+  public static void beforeTest() throws Exception {
+    HiveConf conf = defaultConf();
+
+    Class.forName(MiniHS2.getJdbcDriverName());
+    miniHS2 = new MiniHS2(conf, MiniHS2.MiniClusterType.LLAP);
+    Map<String, String> confOverlay = new HashMap<>();
+    miniHS2.start(confOverlay);
+    miniHS2.getDFS().getFileSystem().mkdirs(new Path("/apps_staging_dir/anonymous"));
+
+    Connection conDefault =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(), System.getProperty("user.name"), "bar");
+    Statement stmt = conDefault.createStatement();
+    String tblName = testDbName + "." + tableName;
+    String dataFileDir = conf.get("test.data.files").replace('\\', '/').replace("c:", "");
+    Path dataFilePath = new Path(dataFileDir, "kv1.txt");
+    String udfName = TestKillQueryWithAuthorizationDisabled.SleepMsUDF.class.getName();
+    stmt.execute("drop database if exists " + testDbName + " cascade");
+    stmt.execute("create database " + testDbName);
+    stmt.execute("dfs -put " + dataFilePath.toString() + " " + "kv1.txt");
+    stmt.execute("use " + testDbName);
+    stmt.execute("create table " + tblName + " (int_col int, value string) ");
+    stmt.execute("load data inpath 'kv1.txt' into table " + tblName);
+    stmt.execute("create function sleep as '" + udfName + "'");
+    stmt.close();
+    conDefault.close();
+    setupPlanAndTrigger();
+  }
+
+  private static void setupPlanAndTrigger() throws Exception {
+    WorkloadManager wm = WorkloadManager.getInstance();
+    WMPool wmPool = new WMPool("test_plan", wmPoolName);
+    wmPool.setAllocFraction(1.0f);
+    wmPool.setQueryParallelism(1);
+    WMFullResourcePlan resourcePlan = new WMFullResourcePlan(new WMResourcePlan("rp"), Lists.newArrayList(wmPool));
+    resourcePlan.getPlan().setDefaultPoolPath(wmPoolName);
+    Expression expression = ExpressionFactory.fromString("EXECUTION_TIME > 1000");
+    Trigger trigger = new ExecutionTrigger("kill_query", expression, new Action(Action.Type.KILL_QUERY));
+    WMTrigger wmTrigger = wmTriggerFromTrigger(trigger);
+    resourcePlan.addToTriggers(wmTrigger);
+    resourcePlan.addToPoolTriggers(new WMPoolTrigger("llap", trigger.getName()));
+    wm.updateResourcePlanAsync(resourcePlan).get(10, TimeUnit.SECONDS);
+  }
+
+  @AfterClass
+  public static void afterTest() {
+    if (miniHS2.isStarted()) {
+      miniHS2.stop();
+    }
+    metricValues.clear();
+    metricValues = null;
+  }
+
+  void runQueryWithTrigger(int queryTimeoutSecs) throws Exception {
+    LOG.info("Starting test");
+    String query = "select sleep(t1.int_col + t2.int_col, 500), t1.value from " + tableName + " t1 join " + tableName
+        + " t2 on t1.int_col>=t2.int_col";
+    long start = System.currentTimeMillis();
+    Connection conn =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(testDbName), System.getProperty("user.name"), "bar");
+    final Statement selStmt = conn.createStatement();
+    Throwable throwable = null;
+    try {
+      if (queryTimeoutSecs > 0) {
+        selStmt.setQueryTimeout(queryTimeoutSecs);
+      }
+      selStmt.execute(query);
+    } catch (SQLException e) {
+      throwable = e;
+    }
+    selStmt.close();
+    assertNotNull("Expected non-null throwable", throwable);
+    assertEquals(SQLException.class, throwable.getClass());
+    assertTrue("Query was killed due to " + throwable.getMessage() + " and not because of trigger violation",
+        throwable.getMessage().contains("violated"));
+    long end = System.currentTimeMillis();
+    LOG.info("time taken: {} ms", (end - start));
+  }
+
+  private static WMTrigger wmTriggerFromTrigger(Trigger trigger) {
+    WMTrigger result = new WMTrigger("rp", trigger.getName());
+    result.setTriggerExpression(trigger.getExpression().toString());
+    result.setActionExpression(trigger.getAction().toString());
+    return result;
+  }
+
+  @Test(timeout = 30000)
+  public void testWmPoolMetricsAfterKillTrigger() throws Exception {

Review comment:
       Added verification step

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -677,6 +677,8 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
           }
           wmEvent.endEvent(ctx.session);
         }
+        // Running query metrics needs to be updated for the pool

Review comment:
       done

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -744,6 +748,21 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
     }
   }
 
+  private void updatePoolMetricsAfterKillTrigger(HashSet<String> poolsToRedistribute, KillQueryContext ctx) {
+    String poolName = ctx.getPoolName();
+    if (StringUtils.isNotBlank(poolName)) {
+      poolsToRedistribute.add(poolName);
+      PoolState pool = pools.get(poolName);
+      if (pool != null) {
+        if (pool.metrics != null) {

Review comment:
       done






[GitHub] [hive] sankarh commented on pull request #1999: HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
sankarh commented on pull request #1999:
URL: https://github.com/apache/hive/pull/1999#issuecomment-795684458


   +1, LGTM




[GitHub] [hive] guptanikhil007 commented on pull request #1999: [WIP] HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
guptanikhil007 commented on pull request #1999:
URL: https://github.com/apache/hive/pull/1999#issuecomment-783139137


   Following PR requires review: @sankarh , @adesh-rao 
   




[GitHub] [hive] sankarh merged pull request #1999: HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
sankarh merged pull request #1999:
URL: https://github.com/apache/hive/pull/1999


   




[GitHub] [hive] ashish-kumar-sharma commented on a change in pull request #1999: [WIP] HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
ashish-kumar-sharma commented on a change in pull request #1999:
URL: https://github.com/apache/hive/pull/1999#discussion_r583359309



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -744,6 +748,19 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
     }
   }
 
+  private void updatePoolMetricsAfterKillTrigger(HashSet<String> poolsToRedistribute, KillQueryContext ctx) {
+    String poolName = ctx.getPoolName();
+    if (poolName != null) {

Review comment:
       use StringUtils.isEmpty(poolName)
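   The distinction behind this review exchange (the PR ultimately used `StringUtils.isNotBlank` rather than `isEmpty`) can be shown with simplified re-implementations of the commons-lang3 checks; these are illustrations of the semantics, not the library code itself:

   ```java
   // Simplified re-implementations of the commons-lang3 StringUtils checks
   // discussed in this review, showing why isNotBlank is stricter than a
   // plain null check or isEmpty: it also rejects whitespace-only names.
   public class BlankVsEmpty {
     static boolean isEmpty(CharSequence cs) {
       return cs == null || cs.length() == 0;
     }

     static boolean isBlank(CharSequence cs) {
       if (cs == null) return true;
       for (int i = 0; i < cs.length(); i++) {
         if (!Character.isWhitespace(cs.charAt(i))) return false;
       }
       return true;
     }

     static boolean isNotBlank(CharSequence cs) {
       return !isBlank(cs);
     }

     public static void main(String[] args) {
       System.out.println(isEmpty("  "));      // false: two spaces are "non-empty"
       System.out.println(isBlank("  "));      // true: whitespace-only counts as blank
       System.out.println(isNotBlank("llap")); // true: a real pool name passes
     }
   }
   ```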

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -2225,13 +2251,17 @@ private void resetAndQueueKill(Map<WmTezSession, KillQueryContext> toKillQuery,
     KillQueryContext killQueryContext, Map<WmTezSession, GetRequest> toReuse) {
 
     WmTezSession toKill = killQueryContext.session;
+    String poolName = toKill.getPoolName();
+    boolean validPoolName = poolName != null;

Review comment:
       break this line into 2. so that code will be more readable.






[GitHub] [hive] guptanikhil007 commented on a change in pull request #1999: [WIP] HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
guptanikhil007 commented on a change in pull request #1999:
URL: https://github.com/apache/hive/pull/1999#discussion_r583698471



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -744,6 +748,19 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
     }
   }
 
+  private void updatePoolMetricsAfterKillTrigger(HashSet<String> poolsToRedistribute, KillQueryContext ctx) {
+    String poolName = ctx.getPoolName();
+    if (poolName != null) {

Review comment:
       using StringUtils.isNotBlank

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -2225,13 +2251,17 @@ private void resetAndQueueKill(Map<WmTezSession, KillQueryContext> toKillQuery,
     KillQueryContext killQueryContext, Map<WmTezSession, GetRequest> toReuse) {
 
     WmTezSession toKill = killQueryContext.session;
+    String poolName = toKill.getPoolName();
+    boolean validPoolName = poolName != null;

Review comment:
       Broke the code into two separate blocks






[GitHub] [hive] guptanikhil007 edited a comment on pull request #1999: [WIP] HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
guptanikhil007 edited a comment on pull request #1999:
URL: https://github.com/apache/hive/pull/1999#issuecomment-783139137


   Following PR requires review: @sankarh , @adesh-rao @ashish-kumar-sharma  
   




[GitHub] [hive] guptanikhil007 commented on pull request #1999: HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
guptanikhil007 commented on pull request #1999:
URL: https://github.com/apache/hive/pull/1999#issuecomment-790460002


   @sankarh @adesh-rao  @ashish-kumar-sharma 
   Please Review the changes




[GitHub] [hive] sankarh commented on a change in pull request #1999: HIVE-24803: update metrics and allocation after kill trigger status becomes available

Posted by GitBox <gi...@apache.org>.
sankarh commented on a change in pull request #1999:
URL: https://github.com/apache/hive/pull/1999#discussion_r590419695



##########
File path: itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestWMMetricsWithTrigger.java
##########
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.jdbc;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.metrics.MetricsTestUtils;
+import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
+import org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics;
+import org.apache.hadoop.hive.common.metrics.metrics2.MetricsReporting;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.ql.exec.UDF;
+import org.apache.hadoop.hive.ql.exec.tez.WorkloadManager;
+import org.apache.hadoop.hive.ql.wm.*;
+import org.apache.hadoop.metrics2.AbstractMetric;
+import org.apache.hive.jdbc.miniHS2.MiniHS2;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.net.URL;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.*;
+
+public class TestWMMetricsWithTrigger {
+
+  private final Logger LOG = LoggerFactory.getLogger(getClass().getName());
+  private static MiniHS2 miniHS2 = null;
+  private static List<Iterable<AbstractMetric>> metricValues = new ArrayList<>();
+  private static final String tableName = "testWmMetricsTriggerTbl";
+  private static final String testDbName = "testWmMetricsTrigger";
+  private static String wmPoolName = "llap";
+
+  public static class SleepMsUDF extends UDF {
+    private static final Logger LOG = LoggerFactory.getLogger(TestKillQueryWithAuthorizationDisabled.class);
+
+    public Integer evaluate(final Integer value, final Integer ms) {
+      try {
+        LOG.info("Sleeping for " + ms + " milliseconds");
+        Thread.sleep(ms);
+      } catch (InterruptedException e) {
+        LOG.warn("Interrupted Exception");
+        // No-op
+      }
+      return value;
+    }
+  }
+
+  static HiveConf defaultConf() throws Exception {
+    String confDir = "../../data/conf/llap/";
+    if (StringUtils.isNotBlank(confDir)) {
+      HiveConf.setHiveSiteLocation(new URL("file://" + new File(confDir).toURI().getPath() + "/hive-site.xml"));
+      System.out.println("Setting hive-site: " + HiveConf.getHiveSiteLocation());
+    }
+    HiveConf defaultConf = new HiveConf();
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_QUERY_RESULTS_CACHE_ENABLED, false);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_AUTHENTICATOR_MANAGER,
+        "org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator");
+    defaultConf.addResource(new URL("file://" + new File(confDir).toURI().getPath() + "/tez-site.xml"));
+    defaultConf.setTimeVar(HiveConf.ConfVars.HIVE_TRIGGER_VALIDATION_INTERVAL, 100, TimeUnit.MILLISECONDS);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_SERVER2_TEZ_INTERACTIVE_QUEUE, "default");
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_METRICS_ENABLED, true);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_METRICS_REPORTER, MetricsReporting.JSON_FILE.name());
+    // don't want cache hits from llap io for testing filesystem bytes read counters
+    defaultConf.setVar(HiveConf.ConfVars.LLAP_IO_MEMORY_MODE, "none");
+    return defaultConf;
+  }
+
+  @BeforeClass
+  public static void beforeTest() throws Exception {
+    HiveConf conf = defaultConf();
+
+    Class.forName(MiniHS2.getJdbcDriverName());
+    miniHS2 = new MiniHS2(conf, MiniHS2.MiniClusterType.LLAP);
+    Map<String, String> confOverlay = new HashMap<>();
+    miniHS2.start(confOverlay);
+    miniHS2.getDFS().getFileSystem().mkdirs(new Path("/apps_staging_dir/anonymous"));
+
+    Connection conDefault =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(), System.getProperty("user.name"), "bar");
+    Statement stmt = conDefault.createStatement();
+    String tblName = testDbName + "." + tableName;
+    String dataFileDir = conf.get("test.data.files").replace('\\', '/').replace("c:", "");
+    Path dataFilePath = new Path(dataFileDir, "kv1.txt");
+    String udfName = TestKillQueryWithAuthorizationDisabled.SleepMsUDF.class.getName();
+    stmt.execute("drop database if exists " + testDbName + " cascade");
+    stmt.execute("create database " + testDbName);
+    stmt.execute("dfs -put " + dataFilePath.toString() + " " + "kv1.txt");
+    stmt.execute("use " + testDbName);
+    stmt.execute("create table " + tblName + " (int_col int, value string) ");
+    stmt.execute("load data inpath 'kv1.txt' into table " + tblName);
+    stmt.execute("create function sleep as '" + udfName + "'");
+    stmt.close();
+    conDefault.close();
+    setupPlanAndTrigger();
+  }
+
+  private static void setupPlanAndTrigger() throws Exception {
+    WorkloadManager wm = WorkloadManager.getInstance();
+    WMPool wmPool = new WMPool("test_plan", wmPoolName);
+    wmPool.setAllocFraction(1.0f);
+    wmPool.setQueryParallelism(1);
+    WMFullResourcePlan resourcePlan = new WMFullResourcePlan(new WMResourcePlan("rp"), Lists.newArrayList(wmPool));
+    resourcePlan.getPlan().setDefaultPoolPath(wmPoolName);
+    Expression expression = ExpressionFactory.fromString("EXECUTION_TIME > 1000");
+    Trigger trigger = new ExecutionTrigger("kill_query", expression, new Action(Action.Type.KILL_QUERY));
+    WMTrigger wmTrigger = wmTriggerFromTrigger(trigger);
+    resourcePlan.addToTriggers(wmTrigger);
+    resourcePlan.addToPoolTriggers(new WMPoolTrigger("llap", trigger.getName()));
+    wm.updateResourcePlanAsync(resourcePlan).get(10, TimeUnit.SECONDS);
+  }
+
+  @AfterClass
+  public static void afterTest() {
+    if (miniHS2.isStarted()) {
+      miniHS2.stop();
+    }
+    metricValues.clear();
+    metricValues = null;
+  }
+
+  void runQueryWithTrigger(int queryTimeoutSecs) throws Exception {
+    LOG.info("Starting test");
+    String query = "select sleep(t1.int_col + t2.int_col, 500), t1.value from " + tableName + " t1 join " + tableName
+        + " t2 on t1.int_col>=t2.int_col";
+    long start = System.currentTimeMillis();
+    Connection conn =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(testDbName), System.getProperty("user.name"), "bar");
+    final Statement selStmt = conn.createStatement();
+    Throwable throwable = null;
+    try {
+      if (queryTimeoutSecs > 0) {
+        selStmt.setQueryTimeout(queryTimeoutSecs);
+      }
+      selStmt.execute(query);
+    } catch (SQLException e) {
+      throwable = e;
+    }
+    selStmt.close();
+    assertNotNull("Expected non-null throwable", throwable);
+    assertEquals(SQLException.class, throwable.getClass());
+    assertTrue("Query was killed due to " + throwable.getMessage() + " and not because of trigger violation",
+        throwable.getMessage().contains("violated"));
+    long end = System.currentTimeMillis();
+    LOG.info("time taken: {} ms", (end - start));
+  }
+
+  private static WMTrigger wmTriggerFromTrigger(Trigger trigger) {
+    WMTrigger result = new WMTrigger("rp", trigger.getName());
+    result.setTriggerExpression(trigger.getExpression().toString());
+    result.setActionExpression(trigger.getAction().toString());
+    return result;
+  }
+
+  @Test(timeout = 30000)
+  public void testWmPoolMetricsAfterKillTrigger() throws Exception {
+    CodahaleMetrics metrics = (CodahaleMetrics) MetricsFactory.getInstance();
+    String json = metrics.dumpJson();
+    MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.GAUGE, "WM_llap_numExecutors", 0);
+    MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.GAUGE, "WM_llap_numExecutorsMax", 4);
+    MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.GAUGE, "WM_llap_numParallelQueries", 1);
+    MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.GAUGE, "WM_llap_numRunningQueries", 0);
+
+    // Run Query with Kill Trigger in place
+    runQueryWithTrigger(10);
+
+    //Wait for Workload Manager main thread to update the metrics after kill query succeeded.
+    Thread.sleep(10000);
+    //Metrics should reset to original value after query is killed
+    json = metrics.dumpJson();

Review comment:
       With this test, how do we really know whether the metrics were actually reset, as opposed to never having changed at all?
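The concern can be made concrete: unless the test also observes the gauge at a non-baseline value while the query is in flight, a final check against the baseline cannot distinguish "reset" from "never updated". A standalone sketch of the pattern (the gauge here is a plain AtomicInteger, not Hive's Codahale gauges; names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ResetVsUnchanged {
    // Stand-in for a WM pool gauge such as WM_llap_numRunningQueries.
    static final AtomicInteger runningQueries = new AtomicInteger(0);

    public static void main(String[] args) {
        int baseline = runningQueries.get();

        // Simulated query admission and kill: the gauge must visibly move
        // away from the baseline before we can claim it was "reset".
        runningQueries.incrementAndGet();              // query admitted
        int duringQuery = runningQueries.get();
        runningQueries.decrementAndGet();              // kill trigger fired

        // A test that only checks the final value would pass even if the
        // gauge had never been touched; checking the intermediate value
        // proves an update actually happened.
        if (duringQuery == baseline) {
            throw new AssertionError("gauge never changed; 'reset' is meaningless");
        }
        if (runningQueries.get() != baseline) {
            throw new AssertionError("gauge was not reset after kill");
        }
        System.out.println("gauge moved to " + duringQuery + " and reset to " + baseline);
    }
}
```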

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -677,6 +677,8 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
           }
           wmEvent.endEvent(ctx.session);
         }
+        // Running query metrics needs to be updated for the pool

Review comment:
       nit: Add a blank line before the comment line. Check other places too.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
##########
@@ -744,6 +748,21 @@ private void processCurrentEvents(EventState e, WmThreadSyncWork syncWork) throw
     }
   }
 
+  private void updatePoolMetricsAfterKillTrigger(HashSet<String> poolsToRedistribute, KillQueryContext ctx) {
+    String poolName = ctx.getPoolName();
+    if (StringUtils.isNotBlank(poolName)) {
+      poolsToRedistribute.add(poolName);
+      PoolState pool = pools.get(poolName);
+      if (pool != null) {
+        if (pool.metrics != null) {

Review comment:
       nit: Combine this with the outer if condition using "&&".
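A minimal standalone sketch of the suggested flattening (the class and field names below are illustrative stand-ins, not Hive's actual WorkloadManager/PoolState types):

```java
import java.util.HashMap;
import java.util.Map;

public class CombinedNullCheck {
    // Illustrative stand-ins for PoolState and its metrics field.
    static class Metrics { int runningQueries = 1; }
    static class PoolState { Metrics metrics; PoolState(Metrics m) { metrics = m; } }

    static final Map<String, PoolState> pools = new HashMap<>();

    // Nested form from the PR:
    //   if (pool != null) {
    //     if (pool.metrics != null) { ... }
    //   }
    // Flattened with && as the reviewer suggests; short-circuit evaluation
    // guarantees pool.metrics is only dereferenced when pool is non-null.
    static boolean updateMetrics(String poolName) {
        PoolState pool = pools.get(poolName);
        if (pool != null && pool.metrics != null) {
            pool.metrics.runningQueries--;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        pools.put("llap", new PoolState(new Metrics()));
        pools.put("noMetrics", new PoolState(null));
        System.out.println(updateMetrics("llap"));      // true
        System.out.println(updateMetrics("missing"));   // false
        System.out.println(updateMetrics("noMetrics")); // false
    }
}
```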

##########
File path: itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestWMMetricsWithTrigger.java
##########
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.jdbc;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.metrics.MetricsTestUtils;
+import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
+import org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics;
+import org.apache.hadoop.hive.common.metrics.metrics2.MetricsReporting;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.ql.exec.UDF;
+import org.apache.hadoop.hive.ql.exec.tez.WorkloadManager;
+import org.apache.hadoop.hive.ql.wm.*;
+import org.apache.hadoop.metrics2.AbstractMetric;
+import org.apache.hive.jdbc.miniHS2.MiniHS2;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.net.URL;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.*;
+
+public class TestWMMetricsWithTrigger {
+
+  private final Logger LOG = LoggerFactory.getLogger(getClass().getName());
+  private static MiniHS2 miniHS2 = null;
+  private static List<Iterable<AbstractMetric>> metricValues = new ArrayList<>();
+  private static final String tableName = "testWmMetricsTriggerTbl";
+  private static final String testDbName = "testWmMetricsTrigger";
+  private static String wmPoolName = "llap";
+
+  public static class SleepMsUDF extends UDF {
+    private static final Logger LOG = LoggerFactory.getLogger(TestKillQueryWithAuthorizationDisabled.class);
+
+    public Integer evaluate(final Integer value, final Integer ms) {
+      try {
+        LOG.info("Sleeping for " + ms + " milliseconds");
+        Thread.sleep(ms);
+      } catch (InterruptedException e) {
+        LOG.warn("Interrupted Exception");
+        // No-op
+      }
+      return value;
+    }
+  }
+
+  static HiveConf defaultConf() throws Exception {
+    String confDir = "../../data/conf/llap/";
+    if (StringUtils.isNotBlank(confDir)) {
+      HiveConf.setHiveSiteLocation(new URL("file://" + new File(confDir).toURI().getPath() + "/hive-site.xml"));
+      System.out.println("Setting hive-site: " + HiveConf.getHiveSiteLocation());
+    }
+    HiveConf defaultConf = new HiveConf();
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_ENABLE_DOAS, false);
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_QUERY_RESULTS_CACHE_ENABLED, false);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_AUTHENTICATOR_MANAGER,
+        "org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator");
+    defaultConf.addResource(new URL("file://" + new File(confDir).toURI().getPath() + "/tez-site.xml"));
+    defaultConf.setTimeVar(HiveConf.ConfVars.HIVE_TRIGGER_VALIDATION_INTERVAL, 100, TimeUnit.MILLISECONDS);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_SERVER2_TEZ_INTERACTIVE_QUEUE, "default");
+    defaultConf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_METRICS_ENABLED, true);
+    defaultConf.setVar(HiveConf.ConfVars.HIVE_METRICS_REPORTER, MetricsReporting.JSON_FILE.name());
+    // don't want cache hits from llap io for testing filesystem bytes read counters
+    defaultConf.setVar(HiveConf.ConfVars.LLAP_IO_MEMORY_MODE, "none");
+    return defaultConf;
+  }
+
+  @BeforeClass
+  public static void beforeTest() throws Exception {
+    HiveConf conf = defaultConf();
+
+    Class.forName(MiniHS2.getJdbcDriverName());
+    miniHS2 = new MiniHS2(conf, MiniHS2.MiniClusterType.LLAP);
+    Map<String, String> confOverlay = new HashMap<>();
+    miniHS2.start(confOverlay);
+    miniHS2.getDFS().getFileSystem().mkdirs(new Path("/apps_staging_dir/anonymous"));
+
+    Connection conDefault =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(), System.getProperty("user.name"), "bar");
+    Statement stmt = conDefault.createStatement();
+    String tblName = testDbName + "." + tableName;
+    String dataFileDir = conf.get("test.data.files").replace('\\', '/').replace("c:", "");
+    Path dataFilePath = new Path(dataFileDir, "kv1.txt");
+    String udfName = TestKillQueryWithAuthorizationDisabled.SleepMsUDF.class.getName();
+    stmt.execute("drop database if exists " + testDbName + " cascade");
+    stmt.execute("create database " + testDbName);
+    stmt.execute("dfs -put " + dataFilePath.toString() + " " + "kv1.txt");
+    stmt.execute("use " + testDbName);
+    stmt.execute("create table " + tblName + " (int_col int, value string) ");
+    stmt.execute("load data inpath 'kv1.txt' into table " + tblName);
+    stmt.execute("create function sleep as '" + udfName + "'");
+    stmt.close();
+    conDefault.close();
+    setupPlanAndTrigger();
+  }
+
+  private static void setupPlanAndTrigger() throws Exception {
+    WorkloadManager wm = WorkloadManager.getInstance();
+    WMPool wmPool = new WMPool("test_plan", wmPoolName);
+    wmPool.setAllocFraction(1.0f);
+    wmPool.setQueryParallelism(1);
+    WMFullResourcePlan resourcePlan = new WMFullResourcePlan(new WMResourcePlan("rp"), Lists.newArrayList(wmPool));
+    resourcePlan.getPlan().setDefaultPoolPath(wmPoolName);
+    Expression expression = ExpressionFactory.fromString("EXECUTION_TIME > 1000");
+    Trigger trigger = new ExecutionTrigger("kill_query", expression, new Action(Action.Type.KILL_QUERY));
+    WMTrigger wmTrigger = wmTriggerFromTrigger(trigger);
+    resourcePlan.addToTriggers(wmTrigger);
+    resourcePlan.addToPoolTriggers(new WMPoolTrigger("llap", trigger.getName()));
+    wm.updateResourcePlanAsync(resourcePlan).get(10, TimeUnit.SECONDS);
+  }
+
+  @AfterClass
+  public static void afterTest() {
+    if (miniHS2.isStarted()) {
+      miniHS2.stop();
+    }
+    metricValues.clear();
+    metricValues = null;
+  }
+
+  void runQueryWithTrigger(int queryTimeoutSecs) throws Exception {
+    LOG.info("Starting test");
+    String query = "select sleep(t1.int_col + t2.int_col, 500), t1.value from " + tableName + " t1 join " + tableName
+        + " t2 on t1.int_col>=t2.int_col";
+    long start = System.currentTimeMillis();
+    Connection conn =
+        BaseJdbcWithMiniLlap.getConnection(miniHS2.getJdbcURL(testDbName), System.getProperty("user.name"), "bar");
+    final Statement selStmt = conn.createStatement();
+    Throwable throwable = null;
+    try {
+      if (queryTimeoutSecs > 0) {
+        selStmt.setQueryTimeout(queryTimeoutSecs);
+      }
+      selStmt.execute(query);
+    } catch (SQLException e) {
+      throwable = e;
+    }
+    selStmt.close();
+    assertNotNull("Expected non-null throwable", throwable);
+    assertEquals(SQLException.class, throwable.getClass());
+    assertTrue("Query was killed due to " + throwable.getMessage() + " and not because of trigger violation",
+        throwable.getMessage().contains("violated"));
+    long end = System.currentTimeMillis();
+    LOG.info("time taken: {} ms", (end - start));
+  }
+
+  private static WMTrigger wmTriggerFromTrigger(Trigger trigger) {
+    WMTrigger result = new WMTrigger("rp", trigger.getName());
+    result.setTriggerExpression(trigger.getExpression().toString());
+    result.setActionExpression(trigger.getAction().toString());
+    return result;
+  }
+
+  @Test(timeout = 30000)
+  public void testWmPoolMetricsAfterKillTrigger() throws Exception {

Review comment:
       Can we add one more test to validate the metrics while the query is executing?
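A simplified, standalone analogue of such a mid-flight check (an ExecutorService stands in for HS2's asynchronous query execution and an AtomicInteger for the pool gauge; none of these are Hive's real types): poll the gauge until it reflects the in-flight query, then verify it returns to baseline on completion.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MidQueryMetricsCheck {
    static final AtomicInteger runningQueries = new AtomicInteger(0);

    // Stand-in for a long-running query tracked by the WM pool.
    static void slowQuery() throws InterruptedException {
        runningQueries.incrementAndGet();
        try {
            Thread.sleep(500);
        } finally {
            runningQueries.decrementAndGet();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<?> query = exec.submit(() -> {
            try { slowQuery(); } catch (InterruptedException ignored) { }
        });

        // Poll until the gauge reflects the in-flight query, with a deadline
        // so a broken metric fails the test instead of hanging it.
        long deadline = System.currentTimeMillis() + 5000;
        while (runningQueries.get() != 1) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("gauge never showed the running query");
            }
            Thread.sleep(20);
        }
        System.out.println("mid-query gauge: " + runningQueries.get());

        query.get(10, TimeUnit.SECONDS);
        exec.shutdown();
        System.out.println("final gauge: " + runningQueries.get());
    }
}
```

In the real test the polling loop would dump the metrics JSON repeatedly rather than sleeping a fixed 10 seconds, which would also make the existing testWmPoolMetricsAfterKillTrigger less timing-sensitive.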




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org